Engaging with global catastrophic risk involves not only scholarly research and teaching, but also work with businesses, governments, and nongovernmental organizations. For example, while it is widely anticipated that the U.S. federal government will now be less involved in the mitigation of climate change—particularly since the White House announced it was withdrawing from the Paris Agreement in June—perhaps there is now more of a role for subsidiary levels of government, for corporations, and for civil society to play. Many individual movements already exist to work on specific risks: against nuclear weapons, for environmental protection, even for finding dangerous Earth-crossing asteroids. Only recently have organizations formed to work against many kinds of catastrophic risks at once. Together, these efforts form a gathering movement to help guide technology toward good futures and away from bad ones.
Even religions have an important role to play in this movement. In his recent encyclical, Laudato Si’, Pope Francis stated that “The work of the Church seeks not only to remind everyone of the duty to care for nature, but at the same time ‘she must above all protect mankind from self-destruction.’” This is no minor point; the pope declared, quoting Pope Benedict XVI, that the duty of the Catholic Church is to protect humanity from its self-destruction. In the context of Christian history, in which the Church has already seen great civilizations collapse (not only its own civilization in Rome, but also others, such as those of the Aztecs and Incas), this becomes an especially visceral call. It can happen; it has happened before. Can we remember the lessons learned and avoid having it happen again, on an even larger scale?
Additionally, the Catholic Church has a long history of advocating for certain kinds of technologies and opposing others. In 1139, at the Second Lateran Council, the Church tried to ban the use of the crossbow against fellow Christians. The ban obviously failed; however, the idea of banning certain types of weapons has continued to this day and has become encoded in international treaties that limit biological and chemical weapons, poison bullets, and blinding lasers, for example. A new movement has formed to ban lethal autonomous weapons systems, also known as “killer robots.” The sections of Laudato Si’ concerned with technology (which have been both lauded and reviled) should be read from this perspective of Catholic history, in which life-harming technologies are to be limited and life-giving technologies promoted.
Needless to say, crossbows are now the least of our worries. We live in a very different world from that of medieval Europe. Today, humans are vastly more powerful. In every one of my engineering ethics courses, I repeat the same phrase: “Previously, humankind was constrained by its weakness; now, we must learn to be constrained by our good judgment—our ethics.” Humanity is no longer what it once was; we have now gained powers greater than those of the Greek and Norse gods. As we bask in our self-satisfied glory, we might consider what it means to be newly powerful, mortal, and without good judgment.
THE ONLY THING
And yet we have the capability to solve these problems. Some technological problems can be solved with better technology, and many people are already working on these tasks—renewable energy, for example. With renewable energy, we can move our economy away from carbon-intensive fossil fuels such as coal and oil, and instead run the world on solar, wind, and geothermal power. Nuclear fusion—the power behind the most devastating bombs ever tested in the Marshall Islands—may eventually become controllable and provide a nearly unlimited source of energy.
The transition to renewable energy helps to mitigate the risk of climate change, but merely halting further change is not enough. Atmospheric greenhouse gas concentrations must be rolled back toward pre-industrial levels if we want to restore the climate to which we are historically accustomed. That will require technologies to remove carbon dioxide and other greenhouse gases from the air and sequester them elsewhere. One approach is simply to harness what nature has already given us in plants—organisms that naturally collect CO₂—and then remove that carbon from the carbon cycle, for example by burying it in a stable form such as charcoal. “Terra preta” soils in the Amazon reveal that humans long ago discovered the usefulness of incorporating charcoal into the soil, along with other fertilizers, to create a long-term gain in fertility while also storing carbon for millennia. Research in this field is ongoing.
In addition to mitigating the risks of climate change, we must also adapt to them. Some damages from climate change are inescapable—for example, the rise in sea level we are already experiencing. To respond, we must take simple actions, such as building flood control. Unfortunately, while flood control is a simple idea, it is also very expensive. In the San Francisco Bay Area, for example, studies have examined the feasibility of damming the Golden Gate in order to maintain the sea level of the Bay and Delta region. Such a project would cost tens of billions of dollars, but building hundreds of miles of levees would also be expensive, as would a “managed retreat” in which property is abandoned to the sea. In the face of oncoming destruction, which pain do we prefer?
Climate change is a slow disaster, but other human-made catastrophes could be much faster. Nuclear weapons captured the world’s imagination during the Cold War, and nuclear stockpiles have since been substantially cut. Yet enough weaponry remains to reduce most of human accomplishment to ruins. Further reducing nuclear stockpiles should remain a vital moral priority.
Emerging technologies carry dangers as well, but they may also provide solutions to our problems. Artificial intelligence has long been maligned in movies such as The Terminator, but AI may also give us the power to better evaluate our risks and determine how to address them efficiently. AI is sometimes viewed as a panacea, in which everything will be fixed by a “singularity” or “intelligence explosion” that makes AI god-like and salvific in its goodness; this is mere mythology. What we need even more, whether we believe in God or not, is a revolution in our own behavior, in our ethics, rather than just a revolution in our technology.
Technological development is surely not easy, but the more difficult task is confronting catastrophic risk on a much vaster scale, at the level of ethics and politics. We need ethical action and political cooperation to promote good technologies and limit bad ones—and, more than that, to change our hearts so that even when technology can be used in a bad way, we will choose not to do so. Ultimately, to paraphrase Shakespeare, the fault is not in our technologies but in ourselves. How can we create a future where technologies contribute to human flourishing and not to human destruction?