Beyond the academic sphere, generative AI presents ethical and legal concerns for society at large. According to Irina Raicu, internet ethics program director at the Markkula Center for Applied Ethics, what makes evaluating the ethics of generative AI even more difficult is the lack of transparency from the companies that make these programs.
“People play with these tools without realizing that every time they put a cute prompt into something like ChatGPT they’re actually using energy and having an environmental impact,” Raicu says. “It’s because those kinds of impacts aren’t explicitly shown, [caused] in part by the fact that the tools were made, at least initially, free.”
According to Raicu, some AI tools may also be more environmentally costly than others. Large language models in particular can have upwards of hundreds of billions of parameters, meaning they’re trained on ever-larger amounts of data in order to generate better responses. Building and running such tools requires rare metals for manufacturing, water to cool data centers, and energy to keep those data centers running.
Betty Li Hou ’22, who was a Hackworth Fellow with the Markkula Center from 2021 to 2022, agrees that a level of caution toward AI usage and its implementation is wise. As exciting as AI’s potential benefits can be, Hou says, there will always be bad actors who abuse it, and the resulting harms are often difficult to mitigate fairly.
Fairness in AI has been a longstanding problem. What does “fair” look like in this context? Who determines what’s fair? Is there even a clear definition of the word? Identifying unfairness amid the countless layers of an AI system is like finding a needle in a haystack. To make matters more complex, the system itself might not be the issue but rather how it is integrated into society, or how it is deployed and regulated. What does fair access to AI look like? Who gets to benefit, and who is sidelined, or harmed, by it?
“From a technical standpoint of how models are built, we can look at how to make systems more fair, accessible, transparent, truthful, and coherent,” says Hou, who is now a computer science doctoral student at New York University. “That’s what AI researchers do, and it’s definitely not straightforward at all. I think more than ever though, there’s this question of how technology is going to come together with humanity. How is it going to change how we live in society and what it means for us to be human? It’s these really big philosophical and ethical questions that we have to think about now more than ever.”
Whether or not educators choose to bring generative AI tools like ChatGPT into their classrooms, Hou believes that, at the end of the day, all educators must consider what their goals are.
“What is it that we really want students to walk away with, and how do we get them there?” Hou says. “Both in terms of knowledge but also skills, specifically life skills and critical thinking skills. Consider what will help them flourish, not just as students, but as people.”