Event examines the ethics, politics and future of AI

Artificial intelligence (AI) is already at work in our daily lives, often undetected and often beneficial. But AI also comes with potential risks. What roles do – and should – politics, policies and ethics play in harnessing the power of AI and shaping its future?

Three leading Cornell scholars discussed governmental, social and moral ramifications in “Politics, Policy & Ethics of the Coming AI Revolution” on April 15, an Arts Unplugged event sponsored by the College of Arts and Sciences (A&S) and moderated by Andrew Ross Sorkin ’99, of CNBC and The New York Times.

On the panel were Shaun Nichols, professor in the Sage School of Philosophy; Sarah Kreps, the John L. Wetherill Professor of Government and founder of Cornell’s Tech Policy Lab; and Baobao Zhang, Klarman Postdoctoral Fellow in the Department of Government; all in A&S.

“Sometimes described as the fourth industrial revolution, this fundamental shift in the way we live and work is already upon us – and it is shaping our experiences in both obvious and subtle ways,” said Ray Jayawardhana, the Harold Tanner Dean of A&S and a professor of astronomy, who hosted the event.

“As AI becomes more and more integrated into our lives, we must ensure that the technology is employed ethically, that our politics remain democratic and that our policies are informed and humane,” Jayawardhana said.

Cornell researchers are already at the forefront of researching – and influencing – the political, social and ethical implications of artificial intelligence.

“Cornell is so involved in this particular area and really contributing to the public discourse on this remarkably important topic,” Sorkin said.

Technological influences and disruptions are already in play in the national political discourse, said Kreps, whose research focuses on the intersection of international politics, technology and national security. The 2016 U.S. presidential election made this clear, she said, as did the 2017 episode in which the Federal Communications Commission was flooded with public comments – 90% of which were AI-generated digital spam.

“The incident demonstrated that legitimate democratic processes, such as receiving public comments, could be hijacked because of the openness of the deliberative process,” Kreps said. “In the intervening years, new technologies have emerged that have made this prospect even more dangerous.”

“Trust” has become a buzzword in AI research and regulation, said Zhang, who researches the governance of artificial intelligence.

“Much of the research done on trust and AI focuses on technical audits and fixes,” Zhang said. These are important, she said, as technical fixes can make sure AI systems don’t misdiagnose diseases, misjudge traffic patterns or discriminate based on gender or race.

“Nevertheless, trust is deeply social,” Zhang said. “As a researcher of human behavior, I recognize that many people trust the untrustworthy. And conversely, many distrust trustworthy people or institutions. Therefore, AI governance must take into account this social dimension of trust.”

Zhang said AI developers should “consider how a mistake could damage trust in AI systems in the future.”

Nichols, who works in the philosophy of cognitive science, agreed that distrust can lead people to eschew technologies that actually make life safer.

“We already outsource ethically important decisions to algorithms. We use them for kidney transplants, air traffic control and to determine who gets treated first in emergency rooms. These are obviously life and death matters,” he said. “Part of the reason we turn these decisions over to algorithms is this eliminates a significant source of human errors. So how do we make sure AI [systems] make decisions about ethical matters in a way that we want them to, so we can trust them?”

There is a difference between what AI systems should do morally and what people, in general, want them to do, Nichols said. For example, a study found that people agreed the most moral thing for an autonomous vehicle to do in an accident is to sacrifice the one driver in order to save five bystanders. However, the same study showed that people would not buy such a car.

Neither is there one single ethical system for all AI systems, Nichols said.

“Do we care about the moral character of AI?” he said. “Answering those questions will require interdisciplinary work, not only from engineers and software designers, but also from philosophers and cognitive scientists working in moral psychology.”

Jayawardhana said frontline research and education are closely intertwined at Cornell, pointing to a quiz developed by students in the Milstein Program in Technology & Humanity.

“See if you can distinguish AI-generated text from that written by a person,” he said. “I must tell you, it’s not easy.”

While the rise of AI is often met with fear and negativity, Jayawardhana said, there are also exciting opportunities.

Read the story in the Cornell Chronicle.
