Where There’s Smoke

… there might just be mirrors. On “fake news,” the Internet, and everyday ethics.

Illustrations by Lincoln Agnew

An armed man walks into a pizzeria. Terrified customers gather their children and rush out. But the man is there hoping to protect children.

That is a true story. However, the armed man was in the pizza place, in early December, because of a fake news story—or, to put it more accurately, because of misinformation about a supposed child slavery ring: a story that was spread widely on the Internet—easily, quickly, and at little or no cost.

Analyses of the proliferation of misinformation and its impact on the presidential election and democracy itself are now widespread on the Internet, as well. As such stories show, there are multiple ingredients in this noxious concoction—this stew of intentional deceit, profit-driven indifference, and individual biases or carelessness.

Follow the Money
One ingredient consists of individuals who generate and publish absolutely false stories, posting their creations on websites that try to mimic as closely as possible those of the traditional media that still strive for accuracy. Those creators of fake news fully intend to make their lies hard to distinguish from truthful stories. Some simply do it for money: Fake news draws views, which draw advertising dollars.


Another ingredient is the growing number of people who carelessly post falsehoods on their public social media accounts and then express surprise when their posts spread and are amplified through other media channels. For example, in a case study titled “How Fake News Goes Viral,” The New York Times quotes a man whose tweet about outside protesters supposedly being bused into Austin, Texas, became the basis of a widely spread fake news story: “I don’t have time to fact-check everything that I put out there, especially when I don’t think it’s going out there for wide consumption.” He only had a handful of followers. But the media report tweets now. For wide consumption.

There are also those media organizations whose business models rely on monetizing exaggerations and conspiracy mongering. Then there are social media platforms whose algorithms are designed to give people what they want, thus separating users into echo chambers in which they only hear their own views, constantly reinforced by a stream of sometimes inaccurate information.

Over the past several years, governments have also taken a hand in the creation and amplification of misinformation. Back in 2015, in a New York Times Magazine article called “The Agency,” Adrian Chen detailed his extensive investigation of “paid trolls” working in Russia: “By working every day to spread Kremlin propaganda,” he wrote, they “made it impossible for the normal Internet user to separate truth from fiction.” As early as 2014, workers at the St. Petersburg-based “Internet Research Agency” appeared to be connected to hoaxes staged in the United States as well. As Chen put it, “Russia’s information war might be thought of as the biggest trolling operation in history, and its target is nothing less than the utility of the Internet as a democratic space.” In the last press conference of his tenure, in December 2016, President Obama spoke of lessons learned “about how Internet propaganda from foreign countries can be released into the political bloodstream.”

And what lessons might we draw, all of us, the consumers of news, who sometimes share links to stories—often in anger or dismay—without doing even a simple check on what we distribute to our network of friends, amplifying and magnifying the reach of falsehoods?

Follow the Compass

To journalists, the story of fake news is one of professional ethics. To media organizations and social media platforms (primarily Facebook, on which, according to the Pew Research Center, 44 percent of Americans access news—far more than on any other social network), fake news is a business ethics issue. To some politicians, fake news is a question of campaign ethics. To most of us who share links online, it’s an “everyday ethics” issue. As explained by the former executive director of the Markkula Center for Applied Ethics, Thomas Shanks, S.J., “Despite our many differences, we share [commonplace moral] everyday questions; this is the common ‘stuff’ of human living and interacting.” These days, one such commonplace question is, “What can I do to combat the spread of fake information?”

The satire website The Onion recently ran an article titled “Facebook User Verifies Truth of Article By Carefully Checking It Against Own Preconceived Opinions.” Don’t be that user.

We need “extreme vetting” for news stories—and in recent weeks, plenty of experts have offered excellent suggestions on how to spot fake news. Realistically, though, especially in the lazy-river-ride feel of scrolling through the Facebook News Feed, most of us won’t take the time to look closely at bylines, the “About” sections of various sites, or URLs that hint at something other than legitimate news sources. But the truth is—at least for now—that even a modest level of effort will help reduce the proliferation of misinformation (for those who would rather see the checklist as code, a rough sketch follows the list):

  • Don’t share news stories based on the headline alone (without actually reading the linked article).
  • Don’t share in anger. The few seconds you take to vet a story will also serve as a cooling-off period.
  • Before sharing a link, especially if it comes from a source you don’t recognize, go to Snopes.com, FactCheck.org, or PolitiFact, and use those services to check on the accuracy of the linked story.
  • If those sites don’t address the story, Google the headline (or use another search engine you prefer). If the story is fake, it’s likely that articles debunking or questioning it will appear in the results, too.
  • If, after those steps (which shouldn’t take very long), you’re still not sure whether a story is true or not, don’t share it. Your family and friends aren’t likely to be permanently deprived of key information by your choice, but the information ecosystem may well be improved. This is especially true in light of the phenomenon of “availability cascade,” which, as Wikipedia notes, is a “self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse.” Even false stories start to gain an aura of credibility if repeated or shared often enough (“I think I heard that before, didn’t I? There must be something to it …”). But, on the Internet, the presence of smoke doesn’t always signal the presence of fire. Sometimes, it’s just smoke and mirrors.
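
For readers who think in code, here is the checklist again as a minimal Python sketch. Everything in it is hypothetical: the Story record and the should_share helper are illustrative names invented for this example, not part of any real platform’s API.

```python
from dataclasses import dataclass

@dataclass
class Story:
    """One candidate link, plus what you actually did to vet it.
    All fields are illustrative; no real platform exposes this."""
    headline: str
    read_full_article: bool     # Rule 1: did you read past the headline?
    sharing_in_anger: bool      # Rule 2: are you sharing in anger?
    fact_check_verdict: str     # Rule 3: "true", "false", or "unknown"
                                # (per Snopes, FactCheck.org, PolitiFact)
    debunked_in_search: bool    # Rule 4: did a search turn up debunkings?

def should_share(story: Story) -> bool:
    """Apply the checklist; every unresolved doubt means 'don't share.'"""
    if not story.read_full_article:
        return False            # never share on the headline alone
    if story.sharing_in_anger:
        return False            # the vetting pause doubles as a cooling-off
    if story.debunked_in_search or story.fact_check_verdict == "false":
        return False            # known or likely falsehood
    if story.fact_check_verdict == "unknown":
        return False            # Rule 5: still unsure? don't share
    return True

# Example: an unvetted, anger-fueled share never goes out.
angry_share = Story("SHOCKING!", read_full_article=False,
                    sharing_in_anger=True, fact_check_verdict="unknown",
                    debunked_in_search=False)
assert should_share(angry_share) is False
```

The design choice worth noticing is the default: the function returns True only when every check passes. When in doubt, don’t share.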

Will your actions have an effect on the misinformation maelstrom? Our individual decisions impact the common good. Think of your effort to combat the spread of misinformation as a kind of beach clean-up: part of a great communal effort to remove some of the trash from the online ecosystem. And, of course, to amplify its impact, please share this on your favorite social media.

Irina Raicu directs the Internet Ethics program for the Markkula Center for Applied Ethics—which just celebrated its 30th anniversary. Find out more, follow its blogs, and support its work: scu.edu/ethics.
