16 Comments
Graham Lovelace:

Great reporting Stephanie! You were right all along. I wonder what they’ll do now? A Polish radio station tried AI presenters but dropped them after a public outcry. At least it was open about it!

Stephanie:

The right response would be to fully acknowledge the mistake, create internal policies that ensure AI usage is disclosed, and communicate those policies to the audience.

I would also commit to getting some up-and-coming diverse talent on their stations to address their diversity problem.

Alas, I think there's one maneuver in ARN's PR crisis playbook, which is to go quiet and hope it all blows over.

Stephanie:

Also - thank you!!

lucy:

here from a mention in culture club on apple podcasts! this is both fascinating and just downright despicable… thank you for covering this topic and putting it out to the public!

Stephanie:

OH HOW COOL.

Thanks for letting me know!!! I’m going to take a listen now.

Chris:

If AI sounds this bad and ARN got away with it for 6 months, clearly no one listens. Or their demographic thinks that's how Thy should sound. Both are sad.

Stephanie:

It was barely used, I think, because it sounded bad and was a bit fiddly. Who knows why they persevered with it - everything you need to know about AI voice clones can be heard in those very short grabs.

Rob:

Thinking commercial radio might have some ethics is a bit of a big ask.

Stephanie:

A person can hope........

Ben:

"ElevenLabs uses text-to-voice technology, meaning every sentence that Thy “spoke” had to first be written out manually by a person and then fed into the program. These sentences are then turned into MP3s, which need to be loaded into their radio playback system Zetta for use on air."

I don't think that's necessarily true - ElevenLabs offers an API, which means they could use software to control the generation of audio. They wouldn't have to type in every sentence; they could have had an LLM write a script based on a track list, used ElevenLabs to generate the audio, and then had it automatically added into Zetta.

It's still insane that they used an AI without telling anyone.
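
To sketch what that kind of pipeline could look like (purely illustrative, not ARN's actual setup): the snippet below assumes the public ElevenLabs text-to-speech endpoint, stubs out the LLM scripting step, and treats Zetta ingestion as nothing more than dropping an MP3 into a watched import folder. The API key, voice ID, model ID and paths are all placeholders.

```python
# Illustrative sketch only: an automated "AI presenter" pipeline of the kind
# described above. Assumes the public ElevenLabs text-to-speech HTTP endpoint;
# the API key, voice ID and import folder are placeholders, and the LLM
# scripting step is a stub rather than a real model call.
import requests

ELEVENLABS_API_KEY = "YOUR_API_KEY"          # placeholder
VOICE_ID = "YOUR_VOICE_ID"                   # placeholder ID of the cloned voice
ZETTA_IMPORT_DIR = "/path/to/zetta/import"   # assumption: a folder the playout system watches

def write_link_script(track_list):
    """Stub for the LLM step: turn a track list into a short presenter link."""
    # A real pipeline would call a text-generation model here.
    return f"That was {track_list[-1]}. Plenty more music coming up."

def synthesise_to_mp3(text, out_path):
    """Send the script to ElevenLabs text-to-speech and save the returned audio."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={
            "xi-api-key": ELEVENLABS_API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text, "model_id": "eleven_multilingual_v2"},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # response body is the generated audio

if __name__ == "__main__":
    script = write_link_script(["Song A", "Song B", "Song C"])
    synthesise_to_mp3(script, f"{ZETTA_IMPORT_DIR}/ai_link_0001.mp3")
```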

Stephanie:

In this case, that is exactly how it worked. I've spoken to a few people familiar with how ARN implemented the system.

Ben:

Wow, in that case it doesn’t seem like it would even save them much staff time!

Stephanie:

I would argue that it might have even taken longer…!

Zoe:

Hey! I’m so curious to hear what she sounds like… how do I access that Flashback tool so I can give Thy a listen?

Stephanie:

Heyo, if you're reading this in the browser, you should see an embedded audio player.

Khayyam, I am.:

I remember doing this a couple years ago =)

https://open.spotify.com/show/2psVSPHdQ9yzyNvkEXoMju
