© 2024 All Rights reserved WUSF
How AI is polluting our culture

Close-up of a person's hand using the Midjourney generative AI image generator in Lafayette, California on May 7, 2024 (Smith Collection/Gado/Getty Images)

AI-generated content online is almost impossible to avoid. There are AI-boosted Google search results, AI-generated imagery, AI-generated articles, AI-generated music, even AI-generated children’s TV shows.

Neuroscientist Erik Hoel says we’re drowning in “AI dream slop.”

Today, On Point: The cost to our humanity in a world of synthetic culture.

Guest

Erik Hoel, neuroscientist and writer. Author of the Substack newsletter The Intrinsic Perspective.

Transcript

Part I

DAVE ESPINO: Imagine making $1.2 million creating AI generated videos. We’re going to show you how in this video.

MEGHNA CHAKRABARTI: That’s Dave Espino, host of a YouTube channel called Making Money With AI. In that clip you just heard, Espino’s using a company called RV AppStudios as an example of how to make those bags of money.

RV AppStudios makes games and apps for adults, some of which you have to pay for. They also make free games, apps, and online videos for children. And they do that with AI, as Espino says. He describes how RV AppStudios makes the content on one of their kids’ learning channels called Lucas and Friends.

ESPINO: Yeah, this channel is mind boggling. It has 917,000 subscribers. This is the type of animation that you can create. Let me show you really quick. And you could create that type of animation using AI, and these are mostly learning videos. So the kid is getting some great value, they’re learning all kinds of stuff.

These are for toddlers, as you can see.

CHAKRABARTI: Dave’s co-host, James Renouf, emphasizes repeatedly that gone are the days when children’s television makers needed a huge staff of writers and animators. All you need now is some AI software and, in the quote-unquote “writer’s room,” ChatGPT.

JAMES RENOUF: And I don’t want people to say, gosh, it takes, these videos are 30 minutes long.

First of all, these are like not crazy dialogue here, okay? We’re not writing Shakespeare, okay? It’s like the letter A, the letter B. And you can use ChatGPT to make these little scripts. So say make me a little script where we teach kids ABCs, or we teach them numbers, etc. And then you use the power of AI to make these videos.

You don’t have to be a graphic artist. It’s amazing what can be done very simply. So little Johnny here, go learn your ABCs while mommy puts her feet up, okay? You create these kind of videos and you can have a channel blow up.

CHAKRABARTI: (LAUGHS) Sorry, as a mommy myself, yeah, sometimes you want to put your feet up, but that’s not necessarily why you would need your child to learn the ABCs.

But the point is that kids’ content on YouTube these days is a very different beast than traditional educational kids’ television. For example, Sesame Workshop, home to Big Bird of course, has more than 1,000 employees in the United States alone. Among them are an army of childhood learning experts who obsess over the latest research on child development, and they even commission studies of their own, such as a 2021 study Sesame Street commissioned to investigate how the pandemic was impacting young children in particular, and how those findings could be incorporated into their television programming.

Over at RV AppStudios, it’s not clear how much research or academic expertise informs how they create Lucas and Friends. Maybe there is some, but it’s not easy information to find. But it is clear that the company is having an impact, given the kind of metrics that matter most to digital organizations: metrics like 400 million free kids’ app downloads so far, pacing at 15 million more every month.

Their AI generated YouTube videos get millions of views within weeks of being posted. So here’s an example of one. This is from Lucas and Friends, their animated YouTube video. And you can’t see it, but it’s Lucas, a yellow animated animal figure, happily teaching your baby how to wave.

LUCAS: Hello, friends! Wave your hand and say, Hello! Hello! Hello! When you meet someone, you say, Hello! When you meet someone, you say, Hello!

CHAKRABARTI: Joining us today is Erik Hoel. He’s a neuroscientist and writer and author of “The World Behind the World: Consciousness, Free Will, and the Limits of Science.” He’s also author of a Substack newsletter called “The Intrinsic Perspective.” Erik, welcome to On Point.

ERIK HOEL: Thank you so much for having me.

CHAKRABARTI: You were literally grimacing when we played that sound from Lucas and Friends. Is it up there in sort of the Baby Shark level of adult irritation for you?

HOEL: No, I think Baby Shark is so much better. Only a human could write a tune as catchy as Baby Shark.

I first stumbled across this stuff while I was researching the proliferation of AI-generated content on the internet. And I wrote about it on my Substack, and then Wired ended up doing an investigation into some of the channels. And what I found was that if you look at the actual content of these videos, most of the time there are numerous errors baked deep into them.

So they’ll show a shape that’s a hexagon, and they’ll say it’s a pentagon. They are incredibly formulaic because they have to be. In the end, setting aside the moral, philosophical question of whether we should be entrusting non-human minds with the education of our children, there’s still just the practical question that these things get a lot of things wrong.

I was recently testing some of the more frontier AI models. I’ve been teaching my son to read. He turns three in a week. And so we’ve been going over simple sentences with the most common letter sounds. And so you can create these simple sentences. And it’s really the best way, I think, to teach a child to read.

So you say something like, Bob sat in mud. And I have to create a lot of these sentences to give him new ones to practice, and it’s this boring task. Perfect, I would think, to maybe outsource to an AI. So I tried honestly using the AI for some lesson planning, to give me back simple sentences.

And I asked the smartest AI, which I think is probably Claude Pro, though maybe the recently released GPT-4o is slightly better. But at the time I asked, Claude was basically the leading model, and it couldn’t do it. It couldn’t come up with sentences that use only the simplest sounds, because it’s something that’s not in its training set.

It’s a weird ask, and in the end, it would say something like, Bob was big. Was? That’s not the way an S normally sounds, is it? And even when I pointed out the mistakes to it, it would go back and make the same mistakes over again. And if you can’t teach a two-year-old basic letter sounds, how are you going to scale that up to the dreams of AI supplementary tutors teaching physics to kids?

CHAKRABARTI: I suppose you could argue that the more complex the learning, maybe the better the AI is at it, but we’ll come back to that in a second. It’s interesting that you made the example of ‘was.’

And the S sound. Of course, in American English the S sometimes makes that more Z-like sound. But the key thing is the progression in which a child learns the different kinds of S sounds there are, right? And it stuck the was in there, whereas for an early reader, sat or something like that would probably have been better, right?

So it’s understanding how the child learns, which seems, at least in that example, seems to be missing a little bit.

HOEL: And I don’t think that the people making these YouTube generated videos are using even the latest frontier models. They’re using whatever the cheap, free versions are.

CHAKRABARTI: To that point, we heard them say all you need is an animation AI program and ChatGPT. Okay, so Erik, I love doing real-time experiments. It drives the staff crazy because they have no idea what I’m going to do. But I have a computer here, obviously, in the studio with us, and I’ve got ChatGPT.

This is 3.5, open here. It’s the free one. Alright, so let’s write a script for a kids’ animated YouTube short.

HOEL: (LAUGHS)

CHAKRABARTI: So I’m going to say, write a script for a toddler’s YouTube video, should we say that?

HOEL: Sure.

CHAKRABARTI: About what? Let’s do learning how to read. Okay teach the child. Let’s make it a little more specific, to be fair.

HOEL: Let’s say, teach them the most common letter sounds.

CHAKRABARTI: Teach the child the most common letter sounds. Should we specify how long the script should be?

HOEL: Yeah, can you say, create simple sentences?

CHAKRABARTI: Okay.

HOEL: That use only the most common letter sounds.

CHAKRABARTI: Sentences. I’ll say create 15 sentences. Simple.

HOEL: Sentences that use only the most common letter sounds.

CHAKRABARTI: Letter sounds. Ready?

HOEL: Okay. Live experiment.

CHAKRABARTI: Okay. Oh, it’s thinking. Ha! Title card! Let’s learn letter sounds with Tim. Timmy, cheerful music plays as video begins. Oh my god. This is actually quite a long script, ChatGPT, a good thing I don’t have to pay it. Then Timmy says, Hi friends.

I am Timmy. And today we’re going to learn some super cool letter sounds together. Are you ready? Yay. Okay, he claps his hands, and he says let’s start with the letter ‘A.’ Can you say A? Oh, A, it says ah, so that’s the letter sound.

HOEL: So already we asked it to create 15 sentences that only used the most common letter sounds.

It didn’t create 15 sentences that used the most common letter sounds.

CHAKRABARTI: Oh, there’s more though, I scroll, I gotta scroll, oh no, it didn’t. It just says the letter sounds, C, cat, D, dog, I’m getting my child YouTube video voice on here, E, elephant, oh where are the, there’s no sentences.

HOEL: Yeah. Okay. Yeah, exactly.

‘Cause you asked it to do something slightly weird.

CHAKRABARTI: It doesn’t seem weird.

HOEL: No. It doesn’t seem weird to us. But if you think about what’s in their training set, there’s probably a huge number of scripts in the training set. But you asked for something very specific. And most of the time, these AIs are just not very good at out-of-distribution sampling.

So what you’ve been presented with is something that looks impressive. It’s a lot of text, right? But it’s not actually really grokking the fundamental thing of what you just asked it for.

CHAKRABARTI: Yeah.

HOEL: And now if you imagine the feedback between a confused child and the AI, right? You get this spiral of confusion.

And this is when it’s interactive. Most of these scripts for these YouTube videos are exactly like this. They’re just the most obvious, what would it be, ‘A’ for Apple, et cetera, et cetera, et cetera. Getting it to do anything beyond that, it’s actually surprisingly difficult.

CHAKRABARTI: Wow. Okay, so Timmy goes on to say, Timmy, I’m actually calling this character like it really exists. Whee! That was fun! Let’s do more! And then, you get down to P! Penguin! Claps! Q! Queen! Oh, see, Q is really interesting one. That’s really interesting.

HOEL: Queen.

CHAKRABARTI: Queen. Yeah. You’re not actually teaching what the ‘Q’ needs to do.

Do you know, when I was in kindergarten, I took a little test, and the test giver asked, Meghna, say a word that starts with ‘Q.’ And I said cute, right?

HOEL: (LAUGHS) Clever.

CHAKRABARTI: But again, it’s like, how does the brain work in terms of processing information? And here’s the thing. This is why we’ve invited you, Erik, and not to use ChatGPT to write new YouTube kids’ content. You’ve written extensively about how this kind of quickly generated AI content is everywhere, to the point where you say it’s hurting our culture. It’s hurting the way we think of ourselves as human beings.

Part II

CHAKRABARTI: Now, I should say, Erik, we got a lot of responses from listeners when we said we were going to talk about the ubiquity of AI-generated content, and how it’s growing increasingly challenging to find a space on the internet where there’s not clearly this sort of synthetic content.

So let me just play a little bit of what some of our listeners said. This is Rachel Chu from Charleston, South Carolina, and she told us that recently she started to notice a lot of this stuff on Facebook.

RACHEL CHU: Just today I saw an AI generated photo of a young girl, like a toddler on the shore of a beach with an oxygen mask lying in the water next to a birthday cake and the caption said something like, “My birthday is today hope I get birthday greetings.”

And I guess this is to create comments and likes, and whoever is behind these is trying to play on users’ emotions and make them think they can help somehow. I’m not sure if they’re making money, but it does feel wrong when there are real people and real causes that do need attention.

CHAKRABARTI: So that’s Rachel from Charleston, South Carolina, and here’s Eli Hornstein, a scientist who works with plants and reptiles.

He left us a message talking about two recent Google searches that he did. In one, he asked whether there were vegetarian snakes. Oh boy. And in the second, he asked if there were edible bromeliads other than pineapples.

ELI HORNSTEIN: And the only results on the entire internet for those questions were AI generated lies, which are perfectly composed, sound factual, but use the names of real organisms in completely made-up ways, saying that the rainbow boa, which is a real snake, is vegetarian, which it is not. Or that a long list of plants are edible bromeliads when they’re neither edible nor bromeliads.

I really don’t understand what goes on underneath the surface to produce this type of content there waiting for me, but I’m frankly quite alarmed by it.

CHAKRABARTI: And Eli, my apologies for mispronouncing bromeliads. Okay, so Erik, these are examples of how we already know that sometimes AI can be very factually challenged, let’s put it that way.

But you take your analysis or your criticism even further, and you say that our entire culture is becoming affected, as you say, by AI’s runoff. What do you mean by that?

HOEL: I think the reason I use analogies like runoff, and why I think they’re appropriate, is that if you look at the history of technological change, there have been these various problems that have cropped up, and some of the most significant ones are issues around climate change, global warming, and also local destruction of environments, and it required a change in thinking in the 20th century.

Where we went from thinking of the environment as this big immutable thing that could not really be injured by us because it’s so big, it’s so omnipresent, to something that’s actually fragile and that we needed to protect, and we needed to enact regulations to protect it. I think the same realization has to happen in the 21st century for human culture.

It’s been this big immutable thing. There is human culture, it’s produced by humans, it’s the water in which we swim, and it’s so big that we don’t think anything can really hurt it. At this point, it would not surprise me if 5% of the content online was being produced by AIs. You can go to any leading tweet, or I guess now post, and find that the top reply will be something very obviously AI-written. Once you hear that cheery Wikipedia voice of ChatGPT, you will find it everywhere.

Even on my own blog, I’ve had to ban people for posting AI comments just for engagement. And it’s because there’s this economic incentive. We are a content-hungry economy, and with the ability to create cheap content, even if it’s not good, even if the quality is much lower than a human’s, if it’s orders of magnitude cheaper, there are just pure economic reasons to pursue that.

CHAKRABARTI: So then, let’s look to understand your concern here. I want to have some shared definitions, just so that we’re all talking about the same things when you talk about culture. Because of its ubiquity, it’s an amorphous concept. And obviously there are millions of various cultures and many more microcultures, etc.

So what, how are you defining what culture in this case is?

HOEL: Everything you see online, everything you read, everything you watch. Let me give a brief example of this. Sports Illustrated, right? Culture, right? Okay. It’s being produced by humans as content for humans, but they were recently caught using fake AI writers to create their articles, because there’s a clear economic incentive for them to do that.

And there is a possibility here. When I was born, everything I saw, everything I read, everything I watched, even the lowly labels at a grocery store, was thought over and created by human minds. And it’s very possible that I will die in a world where the vast majority of the things I read, see, or watch are not created by human minds.

They’re created by unconscious artificial neural networks. And I think that is the real immediate risk of AI because it’s what we’re already seeing. Again, you can go out on the internet and find all these numerous examples of that. And there’s this creeping weirdness to changes.

And let me give like a little brief story about how this seeps even into the real world. As I said, my son Roman is turning three, so we’re going to host a birthday party, and it’s Curious George themed. So we got all these Curious George stickers that we ordered online, and we were about to put them all into the little packs for people to take to go, right?

And my wife is looking through them, and she comes to me, and she says, These are very strange, some of these. And some of them are fine, and some of them are like, Curious George holding an automatic rifle, Curious George without skin, Curious George OD’ing, bi Curious George holding a banana evocatively.

It’s just obviously not-safe-for-work content. And clearly no human was really in the process of making these stickers. I don’t know, I have no evidence of whether they used AI, or if it’s just a company somewhere in China that doesn’t know what Curious George is and doesn’t really care.

And it’s just pulling stuff. But the point is that when you create culture algorithmically, you begin to run into these scenarios where clearly there was no conscious thought behind this at all. And that’s only going to continue. There’s going to be this creeping alien-ness to our culture.

CHAKRABARTI: Yeah. Algorithms are the perfect rule followers, right?

So it’s following the sets of rules given to it. But there’s no discernment about whether what it’s producing is good, bad, appropriate, that kind of thing, as long as it fits within the framework of the rules given.

HOEL: Yeah. If you’ve noticed an increasing high strangeness to some things, it often is either AI or algorithmically produced content.

And I think at this point, there’s not a huge distinction, but soon most algorithmically produced content will be fully AI generated.

CHAKRABARTI: Okay. Okay, so I want to get some more examples here. Now that I have a better understanding of what you’re talking about when you say culture, we started by giving the example of the increasing amount of AI, either enhanced or generated kids content, right?

Say on YouTube. And the issues with that are that there’s oftentimes no narrative cohesion to what children see. Yes, you said it’s getting things completely wrong sometimes, and that is troubling, but it also has great reach. On the other end of the spectrum, you say there’s not only this AI-generated content, but also a feedback loop: more AI-generated content gets out there, and then AI learns from the content that it made.

And you say you can see that even in the scientific literature?

HOEL: There’s this phenomenon, which is little discussed but very well known, called model collapse. And the funny thing is that, in a way, these companies and we have common cause, in that these companies don’t want to train the latest version of their AI on their old AI’s output. They don’t like that. They don’t want that. You might ask, wait a minute, why wouldn’t you want that?

And it’s because of this issue of model collapse. And what happens is when you take a model and you start feeding it its own data, it eventually collapses. Researchers have compared it to getting mad cow disease. Because of course for mad cow disease, the cattle eat the brains of other cattle.

And in this case, they’re eating their own dog food. And they begin to collapse in on themselves, and you trigger either something that looks like schizophrenia or just incredibly simplistic outputs, and so on. And you have to think about how strange that is: the companies don’t want their AI-generated products even in the training of their next-generation model, but it’s fine for us to consume them, right?

There’s this strange hypocrisy baked into the whole thing. And that’s because these systems, they are trained on our data, and they are very impressive in some ways, as much as I sometimes point out their simplistic reasoning flaws, at other times I play around with them.

And I think this is like world changing. This is crazy. I can’t believe that I’m living in a world in which this sort of thing is possible. So it flips to me like a Necker cube, like an optical illusion where I can see it one way. And then sometimes I see it the other way. And I do think that, in the end, we’re going to have to start making some decisions about to what degree do we put limits on this?

They themselves are very protective of their own models. We ourselves are neural networks, biological neural networks. Should we be concerned if this stops being just 5% of culture, just funny Curious George stickers, and starts being most of the content that you read or see, or a huge amount of what’s posted online? That seems to me to be the immediate concern.

Other people are very concerned about AI, but they’re often concerned about things more like existential risk. These more sort of sci-fi scenarios.

CHAKRABARTI: Oh, the world coming to an end.

HOEL: The world coming to an end.

CHAKRABARTI: Terminator scenario.

HOEL: Yeah, exactly. The Skynet scenario. But there is an effect that’s happening right now, which is that the internet is getting filled up with junk because of the economics of it.

Back in 1968, Garrett Hardin wrote a very famous article in Science that was instrumental for the environmental movement, and in it he coined the term the tragedy of the commons. And it got people to think that way: that there was this commons that needed to be protected, that you couldn’t just say a chemical plant wants to make money and so it can just go pollute this river. No, you actually can’t do that.

You’re damaging the commons in a particular way. And I think human culture is a commons, like even Curious George stickers are a commons, right? Like I expect my Curious George stickers to be okay. And this AI creates this fundamental mistrust, and I don’t think we should necessarily throw out the entire technology or anything, but we need to start putting in similar sort of pressure and regulatory guidelines that we did for actual physical pollution.

CHAKRABARTI: We’re going to talk about that a little bit later in the show in detail, Erik, but I want to lean on your academic expertise as a neuroscientist, right? Because there’s our common experience of culture, which you’re already saying we should be thinking about, or concerned about AI’s impact on that.

But I’m also wondering about just how we as human beings, how our brains, absorb this information or take that cultural feedback. AI, at scale, has not been around long enough, I would say, for there to be any sort of real robust study on this question. So I want to put that out there.

But you do quote, in your Times piece, you quote Einstein actually, right? Let me see if I can find this here. Oh yeah: If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.

So what is, why’d you use that quote?

HOEL: That’s such a great question. What’s funny is that this connects to an issue I’ve been fascinated with ever since I was young. You mentioned in the introduction that I grew up in my mother’s independent bookstore, the Jabberwocky, which is here on the East Coast.

And so I was always surrounded by fictions. And I was also interested in science and neuroscience. And at some point, I began thinking what is the purpose of these things, right? One could imagine a race of aliens who are like literalists, who are like, why do you care about Harry Potter?

Everything about Harry Potter is a lie, right? Everything that happens in Harry Potter is a lie. Yet you people seem to care massively. And the common explanation, which is normally given by evolutionary psychologists, and is something that Steven Pinker would probably say, is that the fictions of our culture, the stories of our culture, are just super stimuli, and we just like them for the same reason we like cheesecake.

In fact, I think Steven Pinker once said that music was auditory cheesecake. And I always thought that can’t be right. And one way in which I think that’s not right is that if you think about humans as a continuously learning neural network, we need to sample things that are outside of our day-to-day distribution in order to generalize our learning.

And so this is now getting more theoretical. So I introduced this hypothesis called the overfitted brain hypothesis. And the idea is that during your day-to-day learning, you’re becoming very statistically fitted to what you’re doing, and you need something to shake you out of that, and probably that’s one of the reasons why dreams initially evolved, but also one of the reasons why we tell stories, and we tell fictions.

We talk about things that never happened and couldn’t happen. And these things probably are cognitively important to us. So it’s not just cheesecake. It’s not just some super stimulus that we’re attracted to because there’s heightened emotions or because there’s lots of action. We’re actually getting them.

Maybe something fundamental out of human culture, for our brains themselves, for our learning. And then, if you take that view of things, you begin asking: okay, so what are the effects going to be of filling up our culture with text that’s just the most obvious continuation, with all these properties of artificial neural networks? And the answer is, we might start damaging this really fundamental thing that I think humans have relied on, which is having an enriching culture that allows you to generalize your day-to-day learning.

CHAKRABARTI: Did I hear correctly? Did you say brain micronutrients? Or is my brain remembering that from your article?

HOEL: That’s from the article. And yeah, I think stories contain within them cognitive micronutrients, you could call them something like that. And I’ll caution listeners that if you think neuroscience is a set of well-established facts, I have unfortunate news for you. Neuroscience is a bunch of competing narratives and hypotheses, and this is one of them. But it does say that we are entering this unknown space where we don’t know exactly what the risks are of letting go of control of our own culture. And I think that’s a microcosm of the problem with AI generally, which is this problem of whether humans maintain agency.

Yeah, if we don’t maintain agency over the content that we create, right? What are our chances of maintaining agency in the long run?

This article was originally published on WBUR.org.

Copyright 2024 NPR
