What’s Next for Ethical AI?


By 2030, will most of the AI systems in use employ ethical principles focused primarily on the public good?

68% of global experts believe they will not. That’s the headline result of a non-scientific, non-random canvassing conducted by Pew Research Center and Elon University’s Imagining the Internet Center. It’s the crux of their 12th annual Imagining the Internet report, a 127-page study that formed the basis of a lively panel discussion drawing wide interest from a global audience, including viewers from Denmark, the United Kingdom, Nigeria, and across the United States. In this blog post, we cover a few threads from what was a rich and wide-ranging discussion. Please watch the entire discussion here to see the whole, dynamic conversation.

The report drew its responses from 602 technology innovators, developers, business and policy leaders, researchers, and activists. But was their sentiment reflective of popular feelings about AI? And are there also positive possibilities for technology to solve society’s most pressing problems?

Watch the Webinar

Audience Views Diverge from Those of the Experts in the Report

Danil Mikhailov, Executive Director of data.org, kicked off the discussion by asking the audience what they predict the impact of AI will be in the next decade, on a scale of one to five, with one being very negative and five being very positive. The majority of the audience, 83%, chose three or better, with the rest choosing two or worse.

“A lot of people are in the middle, but I think overall there is a greater preponderance of positivity, which is interesting because, for the experts, two-thirds were on the more skeptical side,” Danil said. “So we have a bit of a split in the audience from the experts.”

“The big surprise is that a team of experts in this report has a cynical view of the future,” said Uyi Stewart, Senior Director of Data Science at Seagen Inc. and Chairman of Data Science Nigeria. “But when we poll the community out there, we see that they are still hopeful about this technology.” Stewart later noted that geographic and cultural context plays a significant role in the perception of what matters most about technology.

Challenges with Defining AI Ethics

The expert responses in the report projected a variety of global uses for AI and their impacts around the world, and three themes emerged from them, said Lee Rainie, Director of Internet Research at Pew Research Center, as he introduced the report alongside Janna Anderson, Executive Director of the Imagining the Internet Center.

First, a major challenge to implementing ethical design is that it’s difficult to define ethics in the first place, and cultures define ethics differently, Lee said. Another emerging theme is that the control of AI today is concentrated in the hands of powerful companies and governments driven by motives other than ethical concerns. And a third theme emerged about the prospects for ethical AI design given the reality that many AI systems are already deployed. With AI already in the field and abuses already occurring, “the systems causing the abuses are opaque at best, and impossible to dissect at worst,” Lee pointed out.

First and foremost, I think we really need to start at square one. We all come to the table, whether we are sociologists, computer scientists, anthropologists, or others, with our values, norms, and assumptions about the world, and those values, norms, and assumptions and those explicit and implicit biases factor into the design and development of computer models.

Nicol Turner Lee, Senior Fellow, Center for Technology Innovation, The Brookings Institution

An International Movement for Ethics

Countering the concerns, the panel focused on efforts to mitigate harm and improve outcomes, like the work of Algorithmic Bias Researcher Joy Buolamwini, who founded the Algorithmic Justice League out of her graduate research at the MIT Media Lab.

Speaking to the necessity of these efforts, Dr. Nicol Turner Lee, Senior Fellow at the Center for Technology Innovation at The Brookings Institution, argued, “First and foremost, I think we really need to start at square one. We all come to the table, whether we are sociologists, computer scientists, anthropologists, or others, with our values, norms, and assumptions about the world, and those values, norms, and assumptions and those explicit and implicit biases factor into the design and development of computer models.” Her upcoming book is called “Digitally Invisible: How the Internet is Creating a New Underclass.”

Fellow panelist Ethan Zuckerman, Associate Professor and Director of the Digital Public Infrastructure Initiative at UMass Amherst, echoed this sentiment, saying he is “rarely the most optimistic person in the room,” but added that he has been “absolutely astounded by how effective young activists have been in bringing up issues, particularly around algorithmic bias and racial bias.” He cited Joy Buolamwini’s efforts as spearheading a movement of young academics and researchers who want to understand how to build systems with fairness at their heart.

Technologies being developed in Silicon Valley have huge impacts on low- and middle-income countries (LMICs), Ethan said. But we are asking these questions much earlier as a result of this kind of activist work.

“So to be super clear, I am not optimistic about AI because I think that Facebook and Alphabet are good,” Ethan said. “I am not optimistic about AI because I think a lot of the people building predictive systems are actually thinking about these issues. I am optimistic about AI because there is an army of ferocious activists coming out of academia, and particularly coming from LMICs, who are trying to put these questions at the forefront of people’s minds rather than waiting 10 years, after which it will be too late to do this work.”

Laura Montoya, Founder and Managing Director of Accel Impact, cited the efforts of Timnit Gebru when she led Google’s ethical AI team. “A lot of the work that she did when she was at Google included things like data cards for datasets and model cards for models. The idea behind these data cards and model cards is to ensure that when a researcher is developing these particular tools, they’re informing not just the population, but also other researchers on how this technology is being developed, what is being used as far as the data is concerned, and what the outcomes were.”
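To make the idea concrete, a model card is structured documentation that travels with a model: what it is for, what data it was trained on, how it performs for different groups, and where it fails. Below is a minimal sketch in Python; the class, field names, and example values are illustrative assumptions loosely inspired by the “Model Cards for Model Reporting” proposal, not an official schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelCard:
    """Illustrative model card: documentation shipped alongside a model.

    Field names are a simplified sketch, not an official schema.
    """
    model_name: str
    intended_use: str                      # what the model is for, and for whom
    out_of_scope_uses: List[str]           # applications the developers advise against
    training_data: str                     # provenance and makeup of the training data
    evaluation_results: Dict[str, float]   # metrics, ideally disaggregated by group
    known_limitations: List[str]           # known failure modes and bias risks

# Hypothetical example: documenting a face-detection model before release.
card = ModelCard(
    model_name="face-detector-v2",
    intended_use="Detecting faces in consumer photo-organizing apps.",
    out_of_scope_uses=["law-enforcement identification", "surveillance"],
    training_data="Licensed photo corpus; skews toward lighter skin tones.",
    evaluation_results={
        "accuracy_overall": 0.94,
        "accuracy_darker_skin_tones": 0.81,  # disaggregated metric exposes the gap
    },
    known_limitations=["Accuracy drops for darker skin tones and in low light."],
)
```

The point of a card like this, hypothetical as the numbers are, is that disaggregated reporting makes a fairness gap visible to other researchers and to the public before a model is widely deployed.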

Inclusion to Correct Mistakes

Activists have already affected policy in this area, for example, with Washington State sharply curbing the use of facial recognition technologies because of racial bias. Speaking to this point, Nicol reiterated that concerns about AI remain real.

“It’s no secret that we have seen some big mistakes, like a health algorithm that kicks out Black patients,” Nicol said. “Or there are criminal justice algorithms which draw on data based on over-policing, and perpetuate historic divisions in society that provoked civil rights marches in the 1960s,” she added.

“What it comes down to is we don’t have the right people at the table designing the models,” Nicol said. “If you are on the wrong side of digital opportunity, you don’t have access. Yes, we’re using AI to actually solve issues around climate change. We’re using AI to help us with regards to even getting a vaccine. But if you are not one of the subjects that is part of the conversation, guess what, you are a passive observer in the ecosystem.”

Nicol said it is important to have a systematized rating system for AI models based on how inclusive their production was. And the new report prompts a conversation about areas of vulnerability in society.

“We have algorithms that are essentially preying upon the historical vulnerabilities of our society, created by developers who are not sensitive to the experiences of bias,” she said. “And we need to go back to the table, because technology that goes out for good can ultimately become the harm that we have been trying to fight for a very long time.”

Responding to Nicol’s points on developers’ biases, Danil agreed and stated, “The biggest problem, as I can see it, and you mentioned it, is the people who are building and designing. Are they representative of the communities that these processes are designed for? Do they look like those communities? Are they living among those communities? Without it, they will not ask the right questions.”

Much more is needed to bridge the gap between the promise of aid and the realities of global inequalities.

Uyi Stewart, Ph.D., Chief Data and Technology Officer, data.org

A Global Perspective

Uyi Stewart urged a global perspective on the question of ethics, focusing on the “life and death needs” of people in countries like Nigeria, where 70 percent of people work in agriculture, and there are more mobile phones than adults. “That’s a vast untapped opportunity,” Uyi said.

Uyi suggested that in countries like Nigeria, AI can help farmers understand when best to plant. It can help mothers understand how best to breastfeed. And in a country where the majority of people earn less than two dollars per day, AI can help people optimize their income for school fees and housing.

“Much more is needed to bridge the gap between the promise of aid and the realities of global inequalities,” he said. “The major concern is that they should focus on what is good, public good, and address physiological needs because it’s a matter of life and death. I understand data governance. I understand everything about ethics and fairness, but it’s about what is good.”

Laura also remarked, “I think what really blew me away the most about this particular report is how much it focused on a global mindset about the use of AI and its effects and impact on different societies.” She reinforced Uyi’s perspective about the global complexity touched upon by many of the respondents in the report.

Specifically, Laura spoke about her work with the LatinX in AI organization and its efforts to reformat research materials that are only available in English. “A lot of the information and resources that are available are primarily released in English…which obviously creates a barrier to entry for those members of our community. So, we are actually redeveloping many of the materials that are currently available today from companies and organizations like MIT, Harvard, Yale, and Princeton, and we are translating them from English and presenting them in a format that is more accessible for our members.”

A Key Moment of Opportunity

Ethan concluded the discussion by mentioning a visit to his lab by Sherrilyn Ifill, President and Director-Counsel of the NAACP Legal Defense Fund. She came to his lab for a day-long education session on algorithmic justice.

“And at a certain point, I had to turn to her and say, ‘Sherrilyn, you’re the most important civil rights advocate in this country. Why are you spending a day with me?’ And she ended up saying, ‘Look, if we could have fixed redlining in the 1950s, we would have closed the Black wealth gap in America.’” The study of and commitment to ethical AI design represents the potential to prevent ongoing harm by mitigating and even reversing systemic biases.

AI can clearly be a risk. But it’s possible that it can also be one of the best tools for evaluating and addressing long-running disparities in society, Ethan said. For example, by looking at questions like access to credit and employment, we can follow through to a policy framework that sets AI on a more ethical path.

A Dynamic Conversation

A dynamic backchannel chat accompanied the webinar, with viewers weighing in from all sides. Some of the commenters were among the experts canvassed for the report, and one acknowledged their privileged position in being part of the poll. Others highlighted how conversations about AI happening now will shape the way the technology is perceived in the decades to come, and offered parallels with other technologies such as social media, where 20 years ago people wondered aloud if we might be storing up problems for the future.

Commenters focused on different cultural perceptions of the value of AI technology by location, and got into detailed discussions about how the emerging debates around ethics could evolve in the years ahead. Regardless of their viewpoint, all commenters agreed that ethics in AI is an important area for examination, and welcomed an ongoing, thorough, and cross-disciplinary approach to the conversation.