Artificial intelligence powers tools in use every day – Siri, Amazon Alexa, unlocking iPhones with facial recognition. But these tools serve some people better than others.
Tina Tallon is an assistant professor of artificial intelligence in the arts in the University of Florida’s School of Music. She studies what’s called algorithmic justice – how language, racial and gender biases are baked into these technologies, and how to fix them.
WUFT’s Report for America Corps Member Katie Hyson sat down with Tallon to talk about what that means and why it matters.
This interview has been edited and condensed for clarity.
TALLON: So I’m very interested in all of the AI tools that are in use in everyday life – I mean, we come into contact with them every single time we open our phones – and the various types of biases that are ingrained in the tools people are using.
HYSON: Can you speak to what some of those biases are?
TALLON: The majority of the training data is in English. And so already you have a bias toward English speakers, right, where people who speak other languages are not represented in those datasets.
And then, of course, when you’re dealing with computer vision, there are incredible amounts of racial bias. Historically, film and various photographic sensors on cameras, unfortunately, did not image darker skin as well as lighter skin.
We also have gender biases with respect to audio technology – the microphones that we’re using right now, right?
I’m a singer. And so as I was working with a lot of microphones and other types of voice technology, I noticed that they didn’t work as well for me as they did for some of my colleagues.
There are biases kind of inherent in some of the circuitry, and those designs go back all the way to the late 19th and early 20th centuries.
HYSON: So for someone who’s not in AI, not in the science field, who may not even know that many of the tools they’re using during the day are artificial intelligence – what would be a day-to-day example of how someone might interact with one of these tools and find it not serving them as well as someone else?
TALLON: A great example of this is hiring. Many people aren’t aware that a lot of first-round sifting through CVs and resumes actually uses AI tools. The AI is trained on various types of words to look for and other datasets that might disproportionately favor someone of a specific background over someone else.
Another example – many immigration exams actually require some sort of language proficiency. There was a case in Australia where a native English speaker from either Ireland or Scotland took an [AI] English language proficiency test for her visa. It said that her language proficiency was not up to par, and she failed the test even though she’s a native English speaker.
I think we owe it to ourselves and everyone around us to question the underlying structures that lead to these experiences we have in everyday life.
Every time you unlock your phone or try to use Siri or Alexa, right, all of those things are powered by AI. And every single time we engage with them, some amount of data is going to those companies to kind of reinforce the learning in these datasets.
HYSON: Is there any significant work already being done to address these issues? And what are some possible solutions?
TALLON: Right now, algorithmic justice and accountability are very much hot topics of conversation, and a lot of people are paying attention.
However, we’ve seen big tech companies like Twitter and Google fire the teams responsible for holding the rest of their companies accountable, or for doing the research that supports this justice work. And so it’s tough, because I think we were making a lot of progress, but it’s all very fickle – it just depends on who’s in power.
At the end of the day, I think a lot of it comes down to just broader education and the public demanding accountability from these companies.
One of the things I’ve pushed for is an algorithmic FDA of sorts, right? With our own FDA, any medical intervention – either a therapeutic device or a drug – has to be vetted by the FDA before we bring it to market.
And I think the same thing needs to happen with algorithmic tools. We need to have somebody who goes through and says, “Alright, what is the impact of this tool going to be on society? Have you proven that you took the measures to adequately vet your algorithmic tool for various types of bias?”
HYSON: Can you put words to why it matters that these algorithms and these technologies work equally for everyone?
TALLON: Unfortunately, AI is reinforcing a lot of the biases that already exist. It’s reinforcing the systems of discrimination that we see negatively impacting communities around the world.
Data are a reflection of a society’s values. And I think, unfortunately, the technology that has collected the data is also a reflection of a society’s values. What we’ve seen time and time again is that the values being reflected right now are those of bias and discrimination.
And so we need to be very careful, because once a specific piece of technology or idea gets ingrained, you build so many things on top of it that it’s impossible to change it.
If we don’t act now to counteract those various types of bias [in AI], they will become ingrained. And that’s even more dangerous, because then the technologies that we have in the future will be built on that. And so we have to stop that cycle somewhere. And I think now’s a good time to do it.
HYSON: Is there anything else you want people to understand?
TALLON: There are a lot of great uses for AI. There are a lot of amazing ways in which AI can create tools for access. There are a lot of ways in which we can use AI to improve health outcomes. And there are a lot of ways in which we could use AI to mitigate the impacts of climate change.
And so it’s not all doom and gloom.
However, we need to be very critical of these technologies. Algorithmic literacy is really important. We need everybody to be involved.
And we need to make sure that everybody understands what the stakes are and how they can play a role in trying to use these tools to create a better future.