Posted by Sam Barber (Samuel.Barber@cognitomedia.com)

Media training of spokespeople has long been a service offered by many consultants, agencies, and specialist trainers. Everyone offers it; few do it well. 

We know there are multiple approaches, but they all share fundamentals that have barely changed over time. There’s typically a discussion of theory and techniques for working with the media, followed by filmed practice sessions.

These sessions moved online during the pandemic. Even with the return to the office, virtual interviews make it easier to incorporate new technologies. Platforms offer instant transcription and recording, meaning you don’t have to juggle conducting an interview, taking notes, and thinking about feedback all at once.

At Cognito, we’re now taking our media training sessions a step further by incorporating AI-based speech assessment and generative AI tools. 

We use Yoodli, an AI speech coach that can measure a host of mannerisms and traits. Yoodli conveniently connects to your virtual call, records the mock interview and instantly transcribes it.

The data it produces once it has analyzed the recording are incredibly useful for giving the spokesperson more detailed and immediate feedback (a rough sketch of how a few of these metrics can be derived from a transcript follows the list):

  • Filler word rate – e.g. 5% of the speech consisted of filler words, 2% above the recommended level

  • Use of “weak” words such as ‘just’ and ‘like’ – e.g. the speaker used 47 weak words, making up 3% of the total speech

  • Filler word count – e.g. the speaker said ‘um’ and ‘ah’ 81 times during a 10-minute interview

  • Listening rate – average length of talk time

  • Monologue length – did the spokesperson give any long-winded answers

  • Eye contact – e.g. 68% of the speech was delivered while looking at the camera

  • Centering – how often the spokesperson was centered on the screen rather than moving about

  • Smiling – how often the person smiled

  • Pacing – words per minute; e.g. the spokesperson spoke at 176 words per minute, which is considered slightly too fast

  • Pacing variation – how much the speaker’s pace varied through the speech
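To make a few of these metrics concrete, here is a minimal sketch in Python of how filler-word rate, weak-word rate, and pacing could be derived from a transcript and the interview length. It is an illustration only, not how Yoodli computes its scores, and the filler and weak-word lists are assumptions.

```python
# Illustrative sketch (not Yoodli's implementation): deriving a few speech
# metrics from a plain-text transcript and the interview duration.
import re

FILLER_WORDS = {"um", "uh", "ah", "er", "hmm"}           # assumed filler list
WEAK_WORDS = {"just", "like", "basically", "actually"}   # assumed weak-word list

def speech_metrics(transcript: str, duration_minutes: float) -> dict:
    """Count filler and weak words and estimate pace in words per minute."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = len(words)
    fillers = sum(1 for w in words if w in FILLER_WORDS)
    weak = sum(1 for w in words if w in WEAK_WORDS)
    return {
        "total_words": total,
        "filler_count": fillers,
        "filler_rate_pct": round(100 * fillers / total, 1) if total else 0.0,
        "weak_word_count": weak,
        "weak_word_rate_pct": round(100 * weak / total, 1) if total else 0.0,
        "words_per_minute": round(total / duration_minutes, 1) if duration_minutes else 0.0,
    }

# Example: a short snippet from a 10-minute mock interview
print(speech_metrics("Um, I think, like, our results were, ah, strong this quarter.", 10))
```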

The second part of using AI for media training involves generative AI. From the instant transcript, you can locate a specific answer from the mock interview, plug it into a generative AI tool and ask it to rewrite the answer to make it more impactful – for example by leading with the key point or a clear statement.

You can use AI to rewrite an answer so it doesn’t come across as defensive. 

Here’s an example. We asked ChatGPT for alternatives to the following response: “I don’t really want to comment about that – it’s proprietary.” Suggestions included “I’m afraid I cannot share details about that since it’s proprietary information” and “Unfortunately, I’m unable to provide specific comments on that topic due to its proprietary nature.” A definite improvement.
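For teams who want to build this step into their own workflow, here is a minimal sketch of asking a generative model to soften a defensive answer programmatically, using the OpenAI Python client. The model name and prompt wording are assumptions for illustration, not a prescribed setup.

```python
# Illustrative sketch: ask a generative model to suggest less defensive
# alternatives to an interview answer pulled from the transcript.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

answer = "I don't really want to comment about that - it's proprietary."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": "You are a media trainer. Rewrite interview answers so "
                       "they stay accurate but sound less defensive.",
        },
        {"role": "user", "content": f"Suggest two alternatives to: {answer}"},
    ],
)

print(response.choices[0].message.content)
```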

Privacy concerns around AI tools abound, particularly when recording and storing potentially sensitive information online. We make sure to ask permission before recording and using these tools. We also check and understand the tools’ privacy guidelines and how information is encrypted and stored. 

There’s lots of noise about the impact of AI on the comms profession, with much of the focus on generative AI like ChatGPT. Away from the debate over job displacement, there’s huge potential to use the new technology to augment what we do and provide a better service to our clients. Media training is just one area where leveraging new technology has really transformed our approach to media relations.

Sam Barber is a senior vice president in New York