Saw a political ad that looked 100% real but was entirely AI generated

Not going to link it because I don't want to spread it, but there was a political ad circulating on Twitter/X last week that showed a candidate saying something they never said. and I mean it looked REAL. not the obvious deepfake stuff from a few years ago, this was indistinguishable from actual footage

the only reason it got caught was because the candidate's team had timestamped footage of where they actually were at that time. if it had been a less prominent person with no PR team, there would have been no way to debunk it

with elections happening globally this year, this is a massive problem. and there's basically no infrastructure for verifying video content at the speed social media spreads it

how do we even approach this? content credentials? mandatory watermarks on AI video? both?

This is arguably the most urgent application of content authenticity technology. The Coalition for Content Provenance and Authenticity (C2PA) has been working on exactly this but adoption remains limited.

The core challenge is speed vs. verification. A deepfake video can go viral in hours while verification takes days. We need real-time or near-real-time verification infrastructure embedded directly into social platforms.

Some platforms are experimenting with content credentials, but it's entirely voluntary, which makes it nearly useless against bad actors.
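For anyone unfamiliar with how content credentials work under the hood: the core idea is cryptographically binding a provenance manifest (who captured it, when, with what) to the media bytes, so any edit breaks the signature. Here's a toy sketch of that binding idea in Python. This is NOT the real C2PA format (which uses X.509 certificate chains and manifests embedded in the file, not a shared secret), and the names `issue_credential`, `verify_credential`, and `SIGNING_KEY` are made up for illustration.

```python
import hashlib
import hmac

# Toy stand-in for a content credential: sign a provenance manifest
# together with a hash of the media bytes. Real C2PA uses asymmetric
# keys and certificate chains; a shared secret is used here only to
# keep the sketch self-contained.
SIGNING_KEY = b"demo-key"  # hypothetical; not how real signers work

def issue_credential(media: bytes, manifest: str) -> str:
    """Bind the manifest to the exact media bytes via a keyed signature."""
    content_hash = hashlib.sha256(media).hexdigest()
    payload = f"{manifest}|{content_hash}".encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_credential(media: bytes, manifest: str, signature: str) -> bool:
    """Recompute the binding; any change to the media or manifest fails."""
    expected = issue_credential(media, manifest)
    return hmac.compare_digest(expected, signature)

video = b"original footage bytes"
sig = issue_credential(video, "captured-by: CampaignCam 2024-03-01")

print(verify_credential(video, "captured-by: CampaignCam 2024-03-01", sig))   # True
print(verify_credential(b"tampered bytes", "captured-by: CampaignCam 2024-03-01", sig))  # False
```

The catch the thread already identified: this only proves a signed video is authentic. An unsigned deepfake simply carries no credential at all, which is why a voluntary scheme can't flag bad actors; it can only help legitimate footage prove itself.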

From the marketing side I can tell you that political campaigns are already terrified of this. a client of mine in political consulting told me they now have a rapid-response team specifically for deepfake debunking. that team didn't exist 18 months ago

the scary thing is you don't even need a perfect deepfake to cause damage. you just need it to be good enough to survive the first 6 hours of sharing before fact-checkers catch up. by then the narrative is set

This connects to media literacy which is something I try to incorporate into my teaching. Students need to develop critical evaluation skills for video content just as they have for text. But the tools for verification are far less accessible.

I would welcome any kind of accessible video verification tool that I could teach students to use. Currently the best advice I can give is “check the source” which feels increasingly inadequate.

@JaxOnFire the “survive the first 6 hours” point is really key. that's the window where the damage happens. any verification solution needs to work within that timeframe or it's basically useless

@Marc_Delrieu voluntary credentials is such a frustrating approach. the people creating deepfakes obviously aren't going to opt into a provenance system

What scares me most is we’re still in the early days. The tools making these deepfakes are getting cheaper and easier every quarter. By the next election cycle the quality will be even better and the barrier to entry even lower. We needed video verification infrastructure yesterday.