Query Results

What questions should we be asking about ethics and artificial intelligence? Here are six that we asked Irina Raicu J.D. ’09.

Make a list of brilliant thinkers and doers when it comes to ethics and artificial intelligence—as consulting group Lighthouse3 did—and you’ll find some Santa Clara people in the mix: Irina Raicu J.D. ’09, who directs the internet ethics program for the Markkula Center for Applied Ethics; and Shannon Vallor, the Regis and Dianne McKenna Professor in the Department of Philosophy. For a series of conversations about AI, we sat down with Raicu to ask what we should be asking.

1

What’s fair? And what’s human?

Some of the questions that have become more obvious—and that a lot of people are dealing with now—revolve around fairness and AI. A few years ago, only a handful of academic papers focused on AI and fairness. Now that’s grown exponentially. One of the questions that will come up for the foreseeable future is: What is it that makes us human? What do we mean by “artificial intelligence”? What is human intelligence? How is it different?

There’s something about real humans that’s messy and complicated. And most situations are more complex and nuanced than we would like them to be. That’s part of what’s interesting about the difference between humans and AI. Humans understand context and translate among concepts; AI is a long way from doing that.

2

Should there be AI-free zones?

One of the questions about artificial intelligence that I’ve been writing and talking and thinking about recently is: Should there be some AI-free zones? Are there areas in which AI would actually not do a better job than human decision-making? If we can delineate those, we can help prevent some harm and also increase trust in AI in the areas where it does do a better job.

Right now, part of the problem is that people are presenting AI as an improvement on everything. There are areas in which social norms are changing, and in which implementing AI/machine learning as we know it would actually embed the current or prior norms rather than allowing things to develop.

For example, I’ve been rebutting a claim by one writer who argues that, eventually, algorithms will make “better” decisions about whom we should marry. That’s exactly the kind of area in which there’s so much complexity, so much variety, so much personal preference, and so many changing societal norms that no, I don’t see how an algorithm is going to do a better job of deciding that for us. That’s true with other relationships, too. We need to allow society to continue to grow—morally as well.

I spoke about this recently at a meeting for the Partnership on AI, a consortium of businesses, civil society groups, and academic centers working on a global effort to share best practices, advance public understanding, and make sure we’re developing AI for good. The bottom line is that we can’t allow algorithms to decide societal norms.

3

How do we keep humans accountable?

We want to make sure that it’s still humans who are held accountable for what the AI does. Some developers have talked about “bias laundering”: the notion that you have all this potentially bias-embedding data, and then you have the biases of the people who are designing the algorithms, but somehow we would run the data through the algorithm and the outcome would be objective. How do we make sure that people understand that it’s humans all the way down, and that the accountability then stays with the humans?

In a lot of areas, I don’t think we ever want to live in a society where we say AI has gone through so many iterations that it’s no longer really the humans who are in charge. (But AI also plays very differently in different contexts, so broad generalizations are not really helpful.) The accountability has to be placed somewhere, and we can’t let go of it by just allowing the neural networks to do their thing.

4

What happens with AI discoveries that are too dangerous to release?

The research institute OpenAI came up with an artificial intelligence text generator whose capabilities worried the researchers enough that they decided not to release it to the general public. As with “deepfake” video clips, this tool could generate plausible-sounding fake news. Some have argued that holding back the code is an empty gesture.

I don’t think it is. A lot of people would not have the resources to generate that kind of knowledge, so, by not sharing it, the researchers significantly slow down the deployment of such tools. Then maybe we have time to build some guardrails or countermeasures.

Their decision also sparked a loud conversation, which is, overall, a good thing. This is a really interesting case, because what concerned the researchers is that they found themselves able to generate misinformation at scale very quickly. It’s a problem that we’re struggling with already, with just the amplification of content via bots. One consequence of the rapid development and deployment of technology is that societal responses don’t keep up. Maybe technologists will have to be very explicit in assessing situations so that, whenever they see something with this kind of potential (say, to help upend democratic governments), they sit on the research and try to create the antibodies to the virus.

5

Whose norms define AI?

Recently, Foreign Affairs magazine asked a number of folks, including me, whether technological change today is strengthening authoritarianism relative to democracy. As I noted, the key part of the question is today. The trend for technology to strengthen either authoritarianism or democracy is not a given. Democratic entities need to move more quickly to understand new technology, consider its long-term consequences, and regulate its deployment.

The development of the internet was very U.S.-centric, to such an extent that, for a while, when the internet was spreading, we didn’t even realize that it was carrying with it American values. In contrast, AI is being developed very quickly in a number of places. It may be that we will have different flavors, different languages, different norms embedded in different AI tools. AI in China might reflect different norms than AI in the United States or AI in Europe.

Recently I spoke to a reporter about an app developed by the Saudi government, which allows men to track the women in their household. It has a wide variety of functions. One function is to track the women’s travel, because in Saudi Arabia you have to get the man’s permission for that. For example, the app will send the man a text if a passport is about to be used at a border. The app is hosted on the app stores run by both Google and Apple—and there’s been an outcry here against that, because it goes against values that we hold in the United States. But will the companies decide it goes against their values—or policies?

6

What are some of the things being done at Santa Clara to address these questions?

There are multiple efforts to address AI ethics, from the School of Law and the Leavey School of Business, and of course in philosophy and engineering. Along with that, we’re approaching AI at the Markkula Center for Applied Ethics on three levels: international, local, and on campus.

On an international level, the Center is part of the Partnership on AI. We were one of the first two ethics centers to join, and we’ve been working closely with the other members of the organization. Four of the Center’s staff are now serving on four different working groups in the partnership, which means that we’re part of the ethical conversations around the development and deployment of AI with all of these big companies and civil society groups.

To address the local concerns in Silicon Valley, we released a set of materials on ethics and technology practice that are intended for use in ethics training workshops and programs in tech companies. They’re free for anybody to use. We see them very much as a starting point—and it does look like a number of local companies are interested in customizing those materials and incorporating them into their processes.

And here on campus, we’re working to infuse ethics into all of the technology-related courses, whether it’s data analytics, computer science, software engineering, or engineering more broadly.

We want ethics to go along with the development of AI more closely than it has with the development of other technologies. We don’t want it to have to catch up.
