Make AI the Best of Us

What we get out of artificial intelligence depends on the humanity we put into it.

Let AI talk to your customers! Never share a bad photo again! Powering health! Streamline your business! The stretch of Highway 101 connecting Silicon Valley to San Francisco is nearly flooded with billboards touting the myriad ways companies want to wield artificial intelligence to improve our lives. A seemingly endless parade of promise.

For those with longish Valley memories, it may not seem so different from the go-go days of the 1990s dot-com boom, neon advertising for Yahoo and Pets.com looming above the steady stream of six-lane traffic. A revolution in information and lives animated by engineering and money. In November 2022, that promise and buzz burst back into the Valley in a big way when OpenAI released ChatGPT to the public.

Like any revolution, the ultimate end is unclear at its spark. The true costs of developing more intricate, more powerful AI are unknown.

Questions circle the excitement: How will artificial intelligence used by the general public change lives? Will it replace or enhance human creativity and labor? Is training computers on human-generated work stealing? Will AI meet our expectations?

“I am very concerned about the hype factor of AI,” says David DeCosse, director of religious and Catholic ethics at Santa Clara University’s Markkula Center for Applied Ethics. “The idea that AI could achieve consciousness is being used as a means to solicit funding for corporate projects. That is very concerning. I worry about it being abstracted away from the human life experience.”

These debates over AI will be answered, of course, by those with their hands on the levers. What is artificial intelligence if not technology, a tool? Any tool’s purpose is most often found with the user who directs it.

“Humans can use or misuse tools,” said Eric Haynie, manager of instructional technology at Santa Clara University and a lecturer in religious studies specializing in Buddhism, at a recent forum on human flourishing in the time of AI hosted by the Markkula Center. “We made metal implements a long time ago. They can be weapons, or they can be tools.”

A REVOLUTION?

First, it is important to understand what is new and what is not in this brave new world of artificial intelligence, to parse what’s actually being promised in all that billboard propaganda. The technology itself is not all that new, says Maya Ackerman, an associate professor of computer science and engineering and an expert on computational creativity whose AI program helps people make music.

Generative AI makes new things. But it has existed within academia for years, as have similar predictive technologies. For example, Professor of Finance Hersh Shefrin taught his graduate students in the early 1990s how to use an early version of AI—networks trained on information—to help clients. Another Leavey School of Business finance professor, Sanjiv Das, has researched how to help computers understand context by codifying text, and how artificial intelligence can help plan for retirement.

The similar technology most familiar to those of us who have spent a couch-bound evening in front of the TV is software that recommends a next Netflix binge-watch. It’s hard to imagine a chill night without it. Generative AI is related. Rather than recommending an existing book, a text-based generative AI can make a whole new murder mystery for readers.

“It is basically saying, ‘I am going to predict what the next word is going to be,’” Ackerman says. After enough such predictions, a novel story can be created. “That’s the foundation of it.”
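Ackerman’s point can be made concrete with a toy sketch: count which word follows which in a small body of text, then predict the most likely successor. This is a hypothetical, vastly simplified stand-in for the statistics underlying large language models, not the actual mechanics of a system like ChatGPT; the sample corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the vast text an AI trains on.
corpus = "the detective opened the door and the detective found the clue".split()

# Build a table: for each word, count the words that follow it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "detective" follows "the" most often here
```

Chain enough of these predictions together, word after word, and a new story emerges; real models do the same with billions of learned parameters rather than a lookup table.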

This is not so much a revolution as it is an evolution. The commercial AIs base their creations on more data than previous technologies and generate new content. In the case of ChatGPT, it is based on the informational equivalent of 10 percent of the internet.


“There’s really something magnificent happening here that our legal, financial, and social systems are not equipped to deal with,” Ackerman says.

AT WHAT COSTS?

For the average user, interacting with generative artificial intelligence can seem like magic.

“There is an uncanniness in something that can sound human,” says Haynie, the instructional technology manager. “There is a Frankenstein aspect to it.”

That uncanniness changes the game. It makes it harder for humans to tell truth from fiction. Using language-generating AI to ask questions can result in misinformation if the answers sought are not common knowledge.

“Any time you ask it something that isn’t the common denominator of human knowledge, that is not repeated over and over again on the internet, it makes up answers,” Ackerman says. “Because that is what generative AI does. It is wildly creative.”

One word followed by another that is statistically likely to appear next does not equate to the truth. Just months into a world with easily available generative AI, we were awash in stories of lawyers submitting AI-generated papers to court with completely fabricated citations that sound believable enough. Machine-created precedents aren’t law. They are rather wishful artificial “thinking.”

“Unfortunately, many of the people using such models don’t understand that limitation—especially since the outputs of those models are often accurate and often impressive. The authoritative tone of those outputs (which is a design choice) also leads many to overestimate their relationship to ‘truth,’” said Markkula Center director of internet ethics Irina Raicu in a recent panel. “What happens to online accuracy as our online information streams become polluted with AI-generated misinformation?”

And when these so-called AI hallucinations are repeated on the internet and fed back into the machine’s information base, misinformation begets misinformation. And it can be more than words that read like a human creation.

In the panel discussion, Raicu noted that a creative artificial intelligence could use her fellow panelists’ voices and images to make it seem as though they uttered words they never did. Deciphering reality in a vortex of so-called deep fakes that look real could create a head-spinning and less helpful internet, damage reputations, and feel like a violation for those subjected to them.

Relying on the courts to protect people from such misrepresentation or misuse of their own creations likely won’t get us far, says Santa Clara Law Professor Tyler T. Ochoa, noting the courts typically run “a decade or two behind the technology.”

Instead, the creators of generative AI systems could make them sound less human and create ways to blur or prevent exact re-creations of actual human images or likenesses. But if funding is being sought on the promise of artificial, human-like intelligence, a practice the Markkula Center’s DeCosse warns against, the incentive for doing so might be limited.

A BETTER AI?

In his October 2023 “Techno-Optimist Manifesto,” venture capitalist and Netscape co-founder Marc Andreessen cited American economist David Friedman saying that people only do things for love, money, or force. Andreessen riffed on the idea: “Love doesn’t scale, so the economy can only run on money or force. The force experiment has been run and found wanting. Let’s stick with money,” he wrote.

DeCosse disagrees. “Love does scale. It really does. It is a problem of imagination to think it doesn’t,” he says.

Indeed, what is the act of nurturing a child or a friendship, if not the multiplication of love? The difficulty of creating art and sharing it is also love growing, moving at larger scales as it ripples and inspires others. The struggle of owning a restaurant for the love of hosting or food brings others together to share in that joy and bond in community. That, too, is a love that scales. It is there that humans often find purpose.


Why can’t we use technology for that very thing? To grow love? Why can’t AI be built to help us, inspire us, and treat one another better? In the right hands, perhaps it can.

Take ChatGPT. Trained on a tenth of the internet, it has more data at its non-fingertips than one human could possibly consume. But it also doesn’t reflect the full wealth of human knowledge.

The training data fed to ChatGPT is likely generated by people with more in common than not—access to technology, money to develop the skills to share their experience, and potentially a shared language, says the Markkula Center’s Raicu. A different model could pull from different sources, encompassing a wider net of experience and wisdom.

Since AI is trained on human-produced information, it spits back all of our biases. The problem there, Professor Ackerman notes, isn’t the technology, but us. This could be flipped. Perhaps if we ask different questions, the resulting data could show us our hidden biases. And we could work to change. “We need to work on being better humans. That’s hard work,” Ackerman says. AI is not going to fix it for us, but it could be a tool we use on that journey.

Take, for example, the work of Leavey School of Business professors Michele Samorani, Haibing Lu, and Michael A. Santoro with the Black Women’s Health Imperative. Their research uncovered human bias that had seeped into algorithms setting doctors’ appointments, giving Black patients the least desirable times.

Scientists are using AI not just to spot biases, but also to predict earthquakes and make medical discoveries. It can also help learning. In some SCU classrooms, students are encouraged to create summaries of readings using AI as a way to help those who process information better in short form than long.

AI can also help us be more creative. Art Professor Kathy Aoki creates art to inspire viewers to think about pop culture differently. She has experimented with AI to see if it can help capture or generate what she has in mind. And, in many ways, because her concepts are innovative, it fails. Remember, AI is trained on what already exists.

If she could train an artificial intelligence to “think” in her style, she says, perhaps it would be different. But AI companies are being sued for stealing copyrighted images to train their AI without crediting artists. This makes Aoki nervous about putting her work on any site where it may be trawled. Some AI companies, like Midjourney, retain unlimited licenses to use user-generated prompts. The prompt is the creative process, she says.

But what if we made an artificial intelligence that didn’t need to own private inputs? Aoki’s art could be powered by the creativity of what Ackerman calls one giant brain, an artificial intelligence built on massive amounts of human knowledge. Concerns about companies gaining rights to private work, then, would be moot.

That same protection of the creator could be true for the content used to train AI, law Professor Ochoa says. Just as humans who read books or stories often buy them or check them out from a library and as museums purchase artworks to show, companies relying on such materials could pay for them. “We want the works to be used,” he says. “We want the books to be read. But you have to get them from somewhere.”

Even though the sources of the information are vast, it is possible to track where training data comes from, says Brian Patrick Green, director of technology ethics at the Markkula Center. It is costly, but compensation is possible. There is a potential for a love-powered AI.

SCU’s culture of collaboration is part of today’s artificial intelligence buzz. Students are experimenting with AI in engineering and philosophy classrooms alike. Faculty are engaging their classes in discussions about ways to make AI better so we can be better.

As the University plans its future, the Board of Trustees is set to consider a new strategic plan as we go to press, and we expect AI to be an area of focus. There is a future of massively scalable love that could be built, one where we use tools to improve the human experience.
