Risk

Let’s look at the biggest threats to our very existence. For a glimpse into the future, start with a tiny group of islands literally going underwater.

Frank, 13, standing on a seawall that protects his family home from the rising seas. His parents say that despite the wall, the water breaks into their yard during high tides. The Marshall Islands comprises two chains of coral atolls with more than 1,000 islets, lying on average just two meters above sea level. The country faces an existential threat from rising sea levels, with some predictions claiming that the islands will be swamped by the end of the 21st century.

Brian Patrick Green
15 Jun 2017

Between the summer of 2001 and the summer of 2003, I lived in the Marshall Islands, in the middle of the Pacific Ocean, working as a high school teacher with Jesuit Volunteers International. It was a life-transforming experience in many ways, but the way in which it affected me most profoundly involves the relationship between humanity and our most destructive technologies. This is because the Marshall Islands have experienced and are experiencing these destructive technologies firsthand, and as I lived there, I saw their effects on my friends.

After World War II, the United States used the Marshall Islands to test nuclear weapons—including the first experimental hydrogen bombs, the largest nuclear weapons the U.S. ever tested. Entire islands were vaporized and became mile-wide craters, and due to lingering radioactivity, large areas of the Marshall Islands remain uninhabited. Scientific and military recklessness exposed the Marshallese to nuclear fallout, and they experienced radiation sickness, birth defects, lethal cancer, and other horrible effects from these tests—consequences that continue to this day.

The U.S. military still maintains a base in Kwajalein Atoll for testing ballistic missiles and interceptors, and its rent is a major source of revenue for the local economy. I have seen dummy nuclear warheads reenter the atmosphere white-hot, glowing like meteors, and I have seen ballistic missile interceptors go up (the roar of their rocket engines interrupting my class). But these are not the only technologies that have affected the Marshall Islands.

While I was in the Islands, I taught life and physical sciences, algebra, history, and religion. In history, we learned about the legacy of nuclear testing. In religion, about right and wrong. In science, about technology and its effects on the world—including radioactivity, climate change, and sea level rise. (Global average sea level has already risen just under a foot in the last century or so, and it continues to rise.) During my time in the Islands, sea level had not yet reached critical levels; in the last 10 years, a threshold has been crossed. Now high-wave events regularly flood the islands. Because these islands are low-lying atolls—just sand and gravel bars on a limestone bed of fossilized coral, with an average elevation of less than six feet—in the next few decades, sea level rise will most likely make many of them uninhabitable.

In recorded history, nations rise and fall metaphorically, but never before have they had their land area literally submerged and erased from the face of the earth. The Marshall Islands now face this future, along with several other island groups, including the nations of Tuvalu, Kiribati, and the Maldives. If sea levels continue to rise, these island nations will be the first, but not the last, devastated by anthropogenic climate change. Our fossil fuel technologies are leading us to this destruction.

In the midst of this depressing vision of the future, the U.S. military base on Kwajalein simply builds its seawall higher. Weapons testing must go on.

SOMETHING REALLY BIG

The Marshall Islands are enduring the horrendous effects of two particular technologies, nuclear and fossil fuel, but something bigger is going on here. Over the past few decades, humanity has experienced an unprecedented technological revolution, propelling us from being of little existential threat to ourselves to being, perhaps, the single gravest threat to our own existence. Becoming collectively so dangerous was never any one person’s intent. Fossil fuels and nuclear energy are both technologies humans intended for good (even if they’ve sometimes powered weapons of war). So how have these technologies come instead to represent such risks to us, their creators? Through our choices, of course. And where choice is involved, so too is ethics. This is why I work on the ethics of technology, and particularly on the world’s worst risks.

Global catastrophic risks are those that threaten to devastate large areas of Earth’s surface, whereas existential risks are those that threaten the extinction of humanity. Today, there are at least 10 natural sources of global catastrophic risk (ranging from asteroid impacts and pandemics to ocean anoxia and supervolcanoes), and in the near future there will be at least 10 human-made sources as well, ranging from nuclear weapons and anthropogenic climate change to bioweapons and artificial intelligence.

Scholars who study these risks seek to understand them and prepare ways to mitigate or adapt to them. Much begins simply with education. In my work at the School of Engineering and the Markkula Center for Applied Ethics, I teach ethics to engineers. In my classes, we consider the dangers of viruses and malware to cyberphysical infrastructure and the benefits and dangers of artificial intelligence.

Students learn to use gas masks in the event of chemical or biological attacks, at a training center next to the War Memorial of Korea in Seoul, South Korea, April 21, 2017. For some residents, North Korea's recent provocations have stoked fears of renewed conflict. But many more have responded with nonchalance. (Lam Yik Fei/The New York Times)

We consider the democratization of biotechnology, the dangers of bioterrorism, and steps that can be taken for biodefense. And we consider the global situation and response to climate change. In my publications, I have sought to educate about these risks and provoke discussion on what steps might be taken to make our world more secure. There are actually many steps that we can take to make our world more secure against catastrophic risks, but all solutions begin with recognizing the risks (education) and organizing to respond to them (activism).

Engaging in the topic of global catastrophic risk involves not only scholarly research and teaching, but also engagement with businesses, governments, and nongovernmental organizations. For example, while it is widely anticipated that the U.S. federal government will now be less involved in the mitigation of climate change—particularly since the White House announced it was withdrawing from the Paris Agreement in June—perhaps there is now more of a role for subsidiary levels of government, for corporations, and for civil society to play. Many individual movements already exist to work on specific risks: against nuclear weapons, for environmental protection, even for finding dangerous Earth-crossing asteroids. Only recently have organizations formed to work against many kinds of catastrophic risks. Together, a movement is gathering to help guide technology toward good futures and away from bad ones.

Even religions have an important role to play in this movement. In his recent encyclical, Laudato Si’, Pope Francis stated that “The work of the Church seeks not only to remind everyone of the duty to care for nature, but at the same time ‘she must above all protect mankind from self-destruction.’” This is no minor point; the pope declared, quoting Pope Benedict XVI, that the duty of the Catholic Church is to protect humanity against its self-destruction. In the context of Christian history, in which the Church has already seen great civilizations collapse (not only its own in Rome, but also others, like the Aztecs and Incas), this becomes an especially visceral call. It can happen, and it has happened before; can we remember the lessons learned and avoid having it happen again, on an even larger scale?

Additionally, the Catholic Church has a long history of advocating for certain kinds of technologies and opposing others. In 1139, at the Second Lateran Council, the Church tried to ban the use of the crossbow against fellow Christians. The ban obviously failed; however, the idea of banning certain types of weapons has continued to this day and has been encoded in international treaties that limit biological and chemical weapons, poison bullets, and blinding lasers, for example. A new movement has formed to ban lethal autonomous weapons systems, aka “killer robots.” The sections of Laudato Si’ concerned with technology (which have been both lauded and reviled) should be read from the perspective of this Catholic history, in which life-harming technologies are to be limited and life-giving technologies promoted.

Needless to say, crossbows are now the least of our worries. We live in a very different world from that of medieval Europe. Today, humans are vastly more powerful. In every one of my engineering ethics courses, I repeat this same phrase: “Previously, humankind was constrained by its weakness; now, we must learn to be constrained by our good judgment—our ethics.” Humanity is no longer what it once was; we have now gained powers greater than those of the Greek and Norse gods. As we bask in our self-satisfied glory, we might consider what it means to be newly powerful, mortal, and without good judgment.

THE ONLY THING

And yet we have the capability to solve these problems. Some technological problems can be solved with better technology, and many people are working on these tasks already—for example, renewable energy. With renewable energy, we can move our economy away from such carbon-intensive fossil fuels as coal and oil, and instead run the world on sun, wind, and geothermal power. Nuclear fusion—the power behind the most devastating bombs tested in the Marshall Islands—may yet become controllable and provide a nearly unlimited source of energy.

The transition to renewable energy helps to mitigate the risk of climate change, but merely stopping further change is not enough. Atmospheric composition must be rolled back to pre-industrial levels if we want to restore the climate to which we are historically accustomed. That will require technologies to remove carbon dioxide and other greenhouse gases from the air and sequester them elsewhere. One approach is simply to harness what nature has already given us in plants—organisms that naturally collect CO₂—and then remove that carbon from the carbon cycle, for example by burying it in a stable form such as charcoal. The “terra preta” soils of the Amazon reveal that humans long ago discovered the usefulness of incorporating charcoal, along with other fertilizers, into their fields to create a long-term gain in fertility while also storing carbon for millennia. Research in this field is ongoing.

In addition to mitigating the risks of climate change, we must also adapt to them. Some damage from climate change is inescapable—for example, the rise in sea level we are already experiencing. To respond, we must take simple actions, such as building flood control. Unfortunately, while flood control is a simple idea, it is also very expensive. In the San Francisco Bay Area, for example, studies have been done to determine the feasibility of damming the Golden Gate in order to maintain the Bay and Delta region’s sea level. Such a project would cost tens of billions of dollars, but building hundreds of miles of levees would also be expensive, as would a “managed retreat” in which property is abandoned to the sea. In the face of oncoming destruction, which pain do we prefer?

Climate change is a slow disaster, but other human-made catastrophes could be much faster. Nuclear weapons captured the world’s imagination during the Cold War, and stockpiles have since been reduced. Yet there is still sufficient weaponry to reduce most of human accomplishment to ruins. Further reducing nuclear weapons stockpiles should remain a vital moral priority.

Emerging technologies have dangers as well, but they may also provide solutions to our problems. Artificial intelligence has long been maligned in movies such as The Terminator, but AI may also give us the power to better evaluate our risks and determine how to solve them efficiently. While AI is sometimes viewed as a panacea, in which all will be fixed in a “singularity” or “intelligence explosion” that will lead to AI becoming god-like and salvific in its goodness, this is mere mythology. What we need even more, whether we believe in God or not, is a revolution in our own behavior, in our ethics, rather than just a revolution in our technology.

While technological development is surely not easy, the more difficult task is choosing to confront catastrophic risk on a much vaster scale, at the level of ethics and politics. We need ethical action and political cooperation to promote good technologies and limit bad ones—and, more than that, to change our hearts so that even when technology can be used in a bad way, we will choose not to do so. Ultimately, to paraphrase Shakespeare, the fault is not in our technologies but in ourselves. How can we create a future in which technologies contribute to human flourishing and not to human destruction?


If there is one thing we can learn from history, it’s that a better future will not happen on its own. It will only happen by the hard work and dedication of many good people, organized and cooperating globally, for the good of all humankind. Our organizational scale must match our task, and the good we seek to preserve must be common to us all.

American writer and environmental activist Wendell Berry once said, “The only thing we can do for the future is to do the right thing now.” What is the right thing for me, as an individual, to do?

As an individual, what I can do in the world is marked by what I have done so far. I grew up in America, lived in the Marshall Islands, and now work in academia. I have not gone into business, or politics, or the military; those paths are now far from me. I can only do the right thing here and now. And so I teach and write, and hope that I might communicate something to someone, somewhere, that will help make the world a better place. I network with like-minded individuals in academia, business, government, and religion. We all have little things we can do.

Yet in the end, always, my thoughts are pulled back to the Marshall Islands, slowly going underwater. As I look back on my time there, it becomes so clear why I now do what I do. We have made these mistakes before. People have died, lives have been ruined, entire cultures changed. Nations remain, awaiting destruction … or awaiting renewal and future flourishing. Our story is not yet finished. What future we make together is up to us. How can we work together on this great task?

BRIAN PATRICK GREEN is assistant director of campus ethics programs at the Markkula Center for Applied Ethics and adjunct lecturer in the School of Engineering.
