May 5, 2021
Webinar Summary By JoAnne Wadsworth, Communications Consultant, G20 Interfaith Forum
On Wednesday, May 5th, the G20 Interfaith Forum’s working group on Research and Innovation for Science, Technology and Infrastructure held its first webinar, “Artificial Intelligence: Challenges and Opportunities.” Panelists included Dr. Peter Asaro, Associate Professor in the School of Media Studies at The New School in New York City; Dr. Kanta Dihal, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge; John Markoff, Journalist and Research Affiliate, Institute for Human-Centered Artificial Intelligence, Stanford University; Dr. Selma Šabanović, Associate Professor, School of Informatics and Computing, Indiana University; and Dr. Branka Marijan, Senior Researcher, Project Ploughshares.
The event addressed important questions raised by advancements in Artificial Intelligence and robotics: How can human-centered AI design be ensured? What are the main concerns that require ethical consideration and governance responses? What role is there for the developers of these technologies? How can we combine research and innovation in a globally competitive environment while constructing an international dialogue and framework that fosters human rights?
Remarks from panelists and input from viewers will both inform the working group’s preparation of policy recommendations for G20 Summit leaders in Italy this fall.
Dr. Branka Marijan, who moderated the discussion, began by asking each panelist to outline the AI development they were most excited about, along with their greatest concern.
Dr. Kanta Dihal
Dihal, whose research centers on the ways Artificial Intelligence has been imagined and portrayed throughout history (and how those narratives shape AI itself), noted that AI-centered societies are typically portrayed as extremes: either utopian or dystopian.
However, citing recent surveys of the British public, Dihal said that some traditionally “utopian” scenarios are beginning to be viewed with trepidation, signaling a misalignment between positive narratives and negative public response, particularly with regard to AI reaching or surpassing human-level social interaction skills.
Dr. Selma Šabanović
Šabanović expressed excitement over the potential AI offers in the public health sector, particularly in serving the elderly, but focused her comments on one main concern: the lack of diversity and inclusion in the design of these machines. With the majority of AI designers, ideators, and developers coming from W.E.I.R.D. (Western, Educated, Industrialized, Rich, and Democratic) backgrounds, she believes care must be taken to incorporate a more balanced set of values that benefits all of humanity:
“What I’d like to call for is efforts to integrate a more complex understanding of social issues and realities into the design of these machines, and to create a more participatory framework for how we invest in and develop these technologies.”
John Markoff
Markoff said the heart of AI development—and by extension, the concerns around AI—is a debate between two philosophies: Whether to replace the human mind or simply extend it.
With language models becoming ever more advanced and conversational AIs entering the market, Markoff said he found it a hopeful sign that so many ethics classes and discussions are springing up around the topic, because he sees the challenge posed by deception as a major issue.
Dr. Peter Asaro
Asaro made the point that AI, as a “technological mirror of how we think of ourselves,” is simultaneously a philosophical theory of emotion and interaction and a practical tool, which sometimes results in unexpected outcomes.
“The institutions and individuals who have the power to shape these systems will have their interests advanced, and those without that power will be left out—whether we’re talking about social manipulation, technological unemployment, mass deception, biometric technologies collecting mass amounts of private data, etc.”
He referred to how AI is already changing the world at large, and the questions it is raising regarding its “decision-making” capabilities: Should autonomous weapons systems be able to make life-and-death decisions? Should AI be able to decide whether you’ll get a loan or a job?
“Let’s not make the mistake of thinking that these machines understand things that they don’t understand. They aren’t moral or legal agents.”
Q&A—Regulation
The first issue put to the panelists concerned the EU’s new privacy regulation plans and government regulation of AI in general:
“The prohibition against deception is, in my opinion, one of the most important issues we need to deal with going forward. We have a tendency to anthropomorphize almost anything we interact with, and therefore we often trust too easily.” –John Markoff
“These processes aren’t in the hands of democratic governments—they’re in the hands of corporations—so giving people a voice is going to be hard to achieve.” –Dr. Kanta Dihal
“Big Tech has been able to keep private control over the data they’re collecting and the algorithms they’re using, and it’s transforming our society in ways we have yet to completely understand. We need a new sort of social contract to address these issues, particularly for democratic societies. The old versions of anti-trust laws aren’t sufficient to address the phenomenon we have with these companies that are shaping our attitudes, the redistribution of wealth, etc.” –Dr. Peter Asaro
Q&A—Religion
As the discussion turned to what religion could offer AI and vice versa, along with the intriguing claim that AI is a religion unto itself, Dr. Kanta Dihal spoke of three points that most world religions and AI have in common:
- The belief that humanity is different from all other animals, which is often connected to tool use (enabling humans to achieve the pinnacle of their potential)
- A “creationist”-type belief that making something so alive and capable of decisions is akin to becoming like the gods
- A power to shape people’s morals, thoughts, and actions
Dr. Selma Šabanović added that both AI and religion exist to improve our communities and the state of the world at large, and that neither works effectively by following simplistic sets of rules to the letter. Both work through social engagement rather than division and exclusion, applying principles and good judgement to highly dynamic situations.
In conclusion, Robert Geraci spoke on behalf of the Working Group, thanking the panelists, acknowledging the work still to be done, and committing to use what was learned at the webinar to provide focused policy advice that will benefit the global community.