- AI capabilities, including so-called ‘deepfakes,’ are rapidly expanding
- Tech companies have signed an accord to mitigate AI election misinformation
- Expert: ‘It’s so easy to fake the things we used to take for granted’
(NewsNation) — Big Tech companies say they’re taking action to make sure artificial intelligence can’t be used to disrupt democracy through misinformation and deepfakes.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok announced last week a new framework for responding to AI-generated deepfakes designed to trick voters, the Associated Press reported. Another 12 companies, including X, are expected to sign on to the accord.
Cory Johnson, chief marketing strategist at the technology research group Futurum, joined “NewsNation Now” to break down those efforts and explain AI’s role in misinformation.
AI’s rapid development is not only “breathtaking,” Johnson said, but it’s also an “infinitely larger” deal now in terms of elections than it was just a few months ago.
“They’re making such amazing advances in such a short period of time that things that weren’t possible four months ago are possible today,” Johnson said.
Robocalls last month used AI to mimic President Joe Biden, apparently discouraging voters from showing up to the polls ahead of the Jan. 23 primary in New Hampshire, NewsNation’s Texas affiliate station KXAN reported. The state attorney general has said the calls appeared to be an illegal attempt at voter suppression. An investigation is underway.
To combat deceptive uses of AI, tech companies agreed to develop new technologies to mitigate the risks of AI election content. They also vowed to assess current AI models to understand what risks they might pose, detect and address deceptive content and provide public transparency.
“We see things like Adobe making it impossible to create certain kinds of images like cigarettes with children in the same query,” Johnson said. “I do think this is kind of a game of whack-a-mole from the creators’ standpoint.”
Combating AI election misinformation requires tech companies to act responsibly, but it will also take “every member of our society to consider the information we’re getting and the source,” Johnson said.
“It’s so easy to fake the things we used to take for granted,” he said.
The Associated Press contributed to this report.