i run a fairly popular podcast (won't name it here, around 50k downloads per episode). there are literally hundreds of hours of my voice publicly available on every podcast platform
recently a listener sent me a clip that sounded exactly like me promoting a crypto scam. it was obviously generated from my voice without my consent. the quality was good enough that even my wife wasn't sure at first
i had to put out a statement clarifying it wasn't me, which probably drew more attention to the fake clip than ignoring it would have. no idea how many people heard the scam version before that
my questions:
- is there any way to technically protect my voice from being cloned?
- are there voice authentication tools that could prove a clip is or isnt actually me?
- has anyone dealt with this legally? what are the options?
this feels like it should be a bigger conversation than it is
this is going to be a huge issue for every content creator with a public voice. youtubers, podcasters, streamers - anyone with enough audio out there
on the technical side: you can't really prevent cloning once the audio is already public. some companies are working on “voice watermarking” that embeds an inaudible signature in your audio so clones can be identified, but adoption is basically zero right now
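to make the watermarking idea concrete, here's a toy sketch that hides an ID in the least significant bits of 16-bit PCM samples. this is only an illustration of "inaudible signature embedded in the audio" — real voice watermarks use psychoacoustic, compression-robust techniques, and a naive LSB mark like this would not survive re-encoding to mp3. all function names and the sample data are made up for the sketch.

```python
# Toy watermark: overwrite the least significant bit of each audio
# sample with one bit of an identifying tag. Illustrative only.

def embed_watermark(samples: list[int], tag: bytes) -> list[int]:
    """Write the bits of `tag` into the LSBs of the first len(tag)*8 samples."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("audio too short for this tag")
    out = list(samples)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # clear LSB, then set it to the tag bit
    return out

def extract_watermark(samples: list[int], n_bytes: int) -> bytes:
    """Read the LSBs back out into bytes."""
    result = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        result.append(byte)
    return bytes(result)

audio = [1000, -2043, 512, 77] * 10          # fake 16-bit PCM samples
marked = embed_watermark(audio, b"POD1")     # embed a 4-byte show ID
assert extract_watermark(marked, 4) == b"POD1"
```

changing one bit per sample is inaudible, which is the property real schemes also rely on — they just spread the signature across the spectrum so it survives compression and re-recording.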
your best bet currently is probably legal. several states in the US have updated their right of publicity laws to cover ai voice cloning. Tennessee passed the ELVIS Act specifically for this
The voice authentication question is particularly interesting. There are forensic audio analysis techniques that can identify cloned speech by analyzing spectral characteristics and comparing them against known authentic samples. However, this requires expert analysis and isn't something you can do with a consumer tool.
Some emerging approaches use “voice fingerprinting” where your authentic voice characteristics are registered and can be compared against suspected clones. Think of it like a vocal version of content credentials. Still very early stage though.
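The comparison step in that kind of system usually boils down to measuring similarity between fixed-length "voice embeddings" produced by a speaker model (e.g. x-vectors or d-vectors). A minimal sketch of just that step, assuming the embeddings already exist — producing them requires an ML model, which is omitted here, and the vectors below are made-up placeholder numbers:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

registered = [0.12, -0.45, 0.88, 0.03]   # hypothetical enrolled voiceprint
suspect    = [0.10, -0.40, 0.90, 0.05]   # hypothetical embedding of the suspicious clip

score = cosine_similarity(registered, suspect)
# a real system compares `score` against a calibrated threshold,
# not a hardcoded one like this
print("likely same speaker" if score > 0.8 else "likely different speaker")
```

The hard part isn't this math — it's that a good clone is *designed* to land close to your enrolled voiceprint, so detection systems also look at generation artifacts, not just speaker similarity.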
from a brand perspective i'd recommend getting ahead of this. put a disclaimer on your podcast page and social profiles stating your voice is not authorized for use in any other content. it won't prevent cloning, but it gives you legal standing
also consider reaching out to the platforms where the fake clip appeared. most have policies against synthetic media impersonation now — X (formerly Twitter) and YouTube both added these recently
@SilentBean64 the ELVIS Act is interesting, will look into that. though enforcing anything against anonymous accounts sharing clips is a whole other challenge
@Marc_Delrieu voice fingerprinting sounds promising if it actually gets deployed. would definitely register for something like that. right now it feels like i'm just waiting for the next fake clip to appear