We are past the point where convincing AI fakes are easy to create and hard to prove fake, yet this hasn't had the widespread disastrous effects many of us would have predicted. Is there a theory as to why? It clearly isn't that people don't fall for these sorts of things; they clearly do. (Video created on the first try with the Veo 2 prompt: "1960s film of Elvis shaking hands with an alien in the White House")
The reason AI fakes haven't caused chaos isn't comforting. It's terrifying.

Real data from social media studies:
• 78% can't spot obvious fakes
• 82% share without verification
• 91% trust based on engagement
• 95% follow emotional triggers

We haven't seen disaster because:
→ People already believe what they want
→ Truth was optional pre-AI
→ Fakes existed before deepfakes
→ Society runs on shared delusions

The market reality:
• Facebook: profits from misinformation
• Twitter: amplifies controversy
• TikTok: rewards fiction
• LinkedIn: celebrates fake success stories

We're not immune to fakes. We're numb to truth.
Quick reaction: I think there is a lag. Because people are the creators (by that I mean we take credit for the input at least, and probably for the architecture of the story and its intended impact), we first have to know we can, then figure out what to make and what we want from it. That's a lot of work, so it only happens for those who want the reward (clever design, selfish manipulation, or outright social scam). It also makes the content smell a certain way, a way some of us can detect, which means there's an art to it... beyond the machine, people. When I think back through the flood of political content, aka shit (Bannon's words, not mine), we can see the people who pay (not just money, but attention, focus, care) for the content to be made. We can see the connective systems and the messages they are built to amplify. They (the incentivized parties) are often the source of manufactured dis-content.
How do we know that we have not had disastrous effects?
They are being used in small-scale scams (e.g. see https://www.theguardian.com/australia-news/2024/mar/01/scams-promoted-in-fake-news-articles-and-deepfake-videos-cost-australians-more-than-8m-last-year) but not yet at the large scale many of us were most concerned about. It is often a question of cost: spam email took off when costs fell. The guardrails (such as they are) on the free video generators prevent the worst uses, and running your own hosted model without guardrails is still expensive.
I think it stems from the fact that, regardless of who is using it and what it is being used for, generative AI is just a tool. When used by bad actors, it generally aggravates existing problems (verifying court evidence, scams, social engineering attacks, phishing, fraud, misinformation, unauthorized replication of IP or likeness, etc.) by exploiting weaknesses in human perception and judgement, and in systems that replicate human perception. While there is data showing how generative AI is impacting each of these underlying problems, the good news is that effective products and processes architected to combat each core issue don't rely on human judgement and perception alone, and are more resilient to deepfakes as a result. For truly high-stakes problems aggravated by deepfakes, something is likely already in place that is being adapted to address these new tools and tactics. Where that isn't the case, new startups are launching to fill the gaps in the market. The volume of deepfakes will only increase, as will their effectiveness against human perception, but those with the most to lose on each underlying problem are adapting as well.
Reminds me of Orson Welles' famous "War of the Worlds" radio broadcast from the evening of October 30, 1938, the night before Halloween. It was a dramatic radio adaptation of H.G. Wells' science fiction novel "The War of the Worlds," performed as part of the Mercury Theatre on the Air series on CBS Radio. The broadcast was presented as a series of simulated news bulletins describing a Martian invasion of Earth, focusing on a landing in New Jersey. Directed and narrated by Orson Welles, the program was so realistic that it caused panic among some listeners who believed an actual alien invasion was underway. While the extent of the panic has been exaggerated over time, the broadcast did cause genuine confusion for listeners who tuned in mid-program and thought the events were real. The incident became a significant moment in media history, demonstrating radio's power as a storytelling medium and its potential to blur the line between fiction and reality. It is now considered a landmark in radio drama and a pivotal moment in understanding mass media's influence on public perception. 📉🤖📈
People are indeed swayed by deep fakes, as long as the implications are aligned with their existing beliefs. Deep fakes won't convince people in any significant volume to believe things they don't believe or doubt things they already believe, since beliefs aren't formed or informed by evidence, in general. This is true for everyone, including you and me.
Most people long ago stopped believing anything they read or see online that doesn't confirm their pre-existing beliefs, so while some people are deceived by AI-generated media, it hasn't worsened a situation where people were already living in filter bubbles. Ergo, no big impact on the real world. Also, and perhaps contradicting my point above, I think we're getting better at correcting disinformation and misinformation, as with Community Notes on X. Watermarking of AI media is another step in that direction.
In principle, the tools for "shallow fakes", i.e., forged pictures, documents, and currency, have been around since time immemorial. The two differences with today's deep fakes are that (1) AI makes them available for more complex manipulations, and (2) you can now generate them on an industrial scale. The real question is: why did the world not turn upside down when Photoshop was introduced? Possibly because we knew about it and just became more careful? In consequence, the real risk now isn't the proliferation of advanced faking tools - it's the fact that mankind is increasingly suffering from confirmation bias, founded on ever more ridiculous political ideas. Someone who, for instance, believes that liberals are literally drinking the blood of small children will have no difficulty believing the most outrageous AI-generated clips to be true. That part is new. And that part may prove to be our undoing.