Would you be willing to forgo—or forgo for your children and grandchildren—a cure for Alzheimer’s, or cleaner and vastly more efficient power systems, or reliable weather and global climate forecasts, or better responses to drought and famine? Then we cannot afford to reject artificial intelligence out of hand.
This creates an unprecedented ethical imperative for AI researchers, designers, users, and the companies and institutions that employ them. Artificial intelligence is immensely powerful, but it is not magic. It does not run without human intelligence—including, even chiefly, our moral intelligence. The future of an AI-driven world depends less upon new breakthroughs in machine learning algorithms and big data than it does upon the choices that humans make in how AI gets integrated into our daily lives and institutions, and how its risks and effects are managed.
This imperative falls within the realm of ethics because core human goods and values are at stake. An artificial agent that ruins the rest of your life by falsely labeling you a high-risk defendant, or that denies you a home or a job because of a random algorithmic quirk that no one can see, is implicated in an injustice, especially when it is relied upon by other humans in ways that deny you due process or meaningful remedies. We cannot sit by and allow compassion, justice, liberty, and respect for human dignity to be sacrificed at the altar of algorithmic efficiency. Every AI-enabled decision process is still a human responsibility, all the way down to its deepest, darkest, most inscrutable layers.
Things can be done to foster and earn the public’s trust in artificial intelligence. First, companies that develop and market AI-driven technologies need to cultivate a sincere public conscience and an internal corporate culture, supported by incentive structures, that reflect awareness of the unprecedented social power of these tools. Respect for human life and dignity is not incompatible with healthy commerce and reliance on markets; it is essential to them. We do not tolerate profit-driven recklessness and contempt for public health and safety from companies that build and operate nuclear reactors or airliners, and we cannot tolerate it from companies that build and operate AI, especially when their systems shape critical human institutions.
Second, the public needs to adopt a more critical, questioning relationship with technology and its social effects. We each need to become better educated about the promise and the limits of artificial intelligence, and to actively demand and participate in AI governance and oversight, in both formal regulatory structures and informal citizen-driven structures. From the person who is asked by their doctor or employer to surrender genetic data to an AI-driven cloud platform, to the HR manager who downloads an AI hiring assistant to sort résumés or evaluate interview responses, to the juror or judge presented with an AI-generated risk score, we all need to ask reasonable questions and demand reasonable answers about AI-driven systems, such as: “What are appropriate uses of this tool? What are common inappropriate uses/misuses of this tool?” “What human biases could have skewed the data this system was trained on, and what measures were taken to identify or mitigate biased results?” “What kinds of errors will this system most likely make, and when will it make them?” “What auditing processes are in place to identify individual errors or harmful/unjust patterns in the results?” “What steps can I or my organization take to ensure that independent human checks and other due-process measures are available when an algorithmic decision is contested by an affected party?”
Third, institutions that rely heavily upon AI-driven solutions, especially those that protect fundamental human goods such as education and health, need to develop institutional structures and incentives ensuring that the human values central to their missions are not lost or sacrificed to the rule of algorithmic “efficiency” and its opaque authority. Human judgment must remain in the loop in such a way that the vigor of human intellect, the virtues of moral wisdom, and an ethos of personal responsibility are preserved and given ample opportunities to be practiced and honed. Artificial intelligence can even be enlisted in this effort, supplying helpers and tutors that encourage and support the ongoing cultivation and refinement of human intelligence, rather than demoting or degrading it to a lesser status.
Artificial intelligence is already one of humanity’s sharpest tools. But like any very sharp tool we have crafted for ourselves, it must be treated with care and discernment. We must know where and when it is safe to use, and where and when it is not. We must know with whom to entrust its use, and with whom not to. We must know how to keep its power from injuring or enfeebling ourselves, or those we love. And we must know that the tool and its power are always the responsibility of the one who wields it.