© 2024 All Rights reserved WUSF
'Our Changing State' Vote 24: How AI-generated misinformation affects voters

Chandler Balkcom
/
WUSF
Alex Mahadevan, the director of MediaWise, Poynter Institute’s digital media literacy project, speaks with Matthew Peddie, host of "Florida Matters" and the podcast "Our Changing State."

Alex Mahadevan, the director of MediaWise, discusses the impact of misinformation and how to spot it.

With a contentious election just weeks away and Floridians still dealing with the aftermath of back-to-back hurricanes, AI-generated misinformation online continues to spread.

Alex Mahadevan, the director of MediaWise, Poynter Institute’s digital media literacy project, speaks with Matthew Peddie about the impact of misinformation and how to spot it.

Misinformation preys on people’s vulnerability.

“The technology is getting a lot better, so it's a lot harder to spot these AI images,” said Mahadevan. “They don't care that they're AI-generated. That's the thing that has worried me the most. So we're seeing the zone flooded with AI-generated content, and people are sharing it because it supports what they think about the world.”

Flooded by emotions

Two viral images of a distressed child holding a puppy in floodwaters after Hurricane Helene are a textbook example of how emotionally charged AI-generated content spreads.

Mahadevan expressed concern that some flooded areas received little attention, and therefore fewer donations and less humanitarian support, while fake images continued to make waves on social networks by evoking passionate responses.

“It is something that just absolutely punches you in the gut,” said Mahadevan. “If you're not thinking about it, and you don't look closely to see that it looks a little too computerized, then you might share it.”

Such innocent shares are not without implications, however.

“By sharing fake images, all you're doing is making it harder for real victims to get their story out there,” said Mahadevan. “It's not really humorous that our reality is being completely supplanted by artificial intelligence.”

Fuel for political falsehood

An image showing Kamala Harris addressing a communist rally, for instance, was shared by Elon Musk, the owner of X, formerly Twitter.

“The point of these AI-generated images is not necessarily to sway someone's opinion,” said Mahadevan. “It is to confirm someone's existing opinion.”

An image like the fake Harris communist rally is most likely to resonate with viewers who have already seen campaign ads calling her a socialist; it simply confirms what they have been primed to believe.

“The biggest harm in all this AI-generated political content is it's keeping people locked in their polarized unreality, because it is not realistic to say that Kamala Harris is a communist or socialist,” said Mahadevan.

“However, there's a large swath of people who want to believe that, and they get to believe that if they see an image that looks like this. So they get to live in this false reality.”

Worse yet, such AI-generated political images are not flagged by any fact-checking systems on X, and bots firing off hundreds of AI-generated posts only compound the problem.

“It can turn anyone into a full-fledged conspiracy theorist,” said Mahadevan. “Because if they open up X and see 100 posts in a row that are all saying this claim is actually true, then they might believe it, even though it's false.”

Citizen fact checking holds power

As the internet continues to be inundated with misinformation, citizens must rely on their own judgment and their community to get bearings on the truth.

Mahadevan’s story of checking Facebook profiles in trucker groups spreading misinformation about love bugs is a clear example.

“They are sharing other conspiracy theories, but they are changing people's minds, who are usually vulnerable to this type of misinformation, because they have that lived experience,” said Mahadevan. “I think in-group, personal fact checking is incredibly powerful.

"We're always trying to encourage people to have these conversations with their friends and their family and their colleagues and those in their community, because you can make a difference.”

Relying solely on citizen fact-checking is not enough, but dedicated tools for identifying AI-generated images are not yet available, Mahadevan said.

What can we do in the meantime?

Source matters

“Before sharing something, make sure what you're sharing was posted by an expert or a legitimate journalist,” said Mahadevan.

Another useful technique is a reverse image search, which can be done with a tool like Google Lens to trace an image back to its original source.

“If you are using Chrome, it actually pops up in your URL bar to search for an image,” said Mahadevan. “If you see an image that you think is AI-generated, you can do a search on that. It might lead you to an artificial intelligence art community.”

Or not. But it is important to verify the source to find out.

Ultimately, there’s power to being critical of what you see online and not believing something just because it confirms how you feel about the world and your political views.

“Practice intellectual humility,” said Mahadevan. “When you see something that makes you feel like you are 100% right and you've always been right, that's a cue to stop and check it out.”

Quyen Tran is the WUSF Stephen Noble Digital/Social News intern for fall 2024.