Self-driving cars are often hailed as the future of transportation, promising fewer human-caused accidents, less traffic, and more productive commutes. After a few years of slowed production, 2024 saw a renewed push to advance autonomous vehicle technology. Last summer in San Francisco, for example, market leader Waymo saw a 50% increase in autonomous car rides over two months.
As we begin to share our roads with more and more self-driving cars, questions abound: How safe are they, really? What happens when these cars make mistakes? And how much do we, as passengers, actually know about the decisions autonomous vehicles make on the road?
Melissa Cefkin, an anthropologist and lecturer at Santa Clara University’s School of Engineering, is researching the ethical challenges of self-driving cars through her work as a consultant in the development of autonomous vehicle technology. She focuses on how these cars will affect our daily lives, especially how we interact with them and with each other. In her classes, Cefkin talks with students about the responsibility of the companies building these cars and the bigger picture of what these technologies mean for society.
At first glance, self-driving cars promise safety and efficiency. However, as Cefkin points out, the reality is more complex. “People are interacting with a very complex system of AI-driven technologies,” she says. Many people envision these machines as infallible, but self-driving technology is still very much in the learning process, and it’s a road riddled with bumps.
Cefkin explains that autonomous vehicles already feature interfaces that show passengers what the car “sees,” such as pedestrians or nearby cars, helping to build trust. The interface displays only a subset of the data the car processes, focusing on what’s most relevant to the passenger. In robotaxis, passengers cannot override the car’s decisions; they can only report issues, which frees them to relax during the ride. In contrast, in semi-autonomous personal cars like Teslas, the driver must remain alert and ready to take control at any time.
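To make the idea of “showing a subset” concrete, here is a minimal, purely illustrative Python sketch of how a rider-facing display might filter a perception feed. Every name, object type, and threshold in it is an assumption made up for illustration, not any manufacturer’s actual code.

```python
# Hypothetical sketch: filtering a perception feed down to what a rider
# display might show. All names and thresholds are illustrative
# assumptions, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g., "pedestrian", "vehicle", "traffic_cone"
    distance_m: float  # distance from the car in meters

# Object types a rider-facing display might care about (assumed).
RIDER_RELEVANT = {"pedestrian", "cyclist", "vehicle"}
DISPLAY_RADIUS_M = 50.0  # assumed cutoff for "nearby"

def objects_to_display(detections: list[DetectedObject]) -> list[DetectedObject]:
    """Keep only the subset of detections worth showing a passenger."""
    return [
        obj for obj in detections
        if obj.kind in RIDER_RELEVANT and obj.distance_m <= DISPLAY_RADIUS_M
    ]

if __name__ == "__main__":
    feed = [
        DetectedObject("pedestrian", 12.0),
        DetectedObject("traffic_cone", 8.0),  # dropped: not rider-relevant
        DetectedObject("vehicle", 120.0),     # dropped: too far away
    ]
    for obj in objects_to_display(feed):
        print(f"Show on screen: {obj.kind} at {obj.distance_m} m")
```

The design point is the one Cefkin raises: someone has to decide what counts as “relevant” to the rider, and that editorial choice shapes how much trust the display earns.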
It’s like using a navigation app that tells you where to go: should you trust it completely, or should you still decide for yourself? This raises a question: should we control the flow of information, or just let technology guide us?
Then there’s the community aspect. As self-driving cars make their way into our neighborhoods, we can’t overlook the need for inclusive dialogue. Cefkin emphasizes that “it’s crucial to involve all relevant stakeholders—such as city planners, ethicists, engineers, community members, and policymakers—in conversations about where these vehicles should operate.”
What if a dog runs into the street? Should the car swerve to avoid it, even if that puts the passenger at risk? What if it’s a person instead of a dog? These are real dilemmas that engineers and ethicists must encode in the cars’ algorithms.
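To see why these dilemmas become so uncomfortable once they must be written down, consider a deliberately toy Python sketch of a swerve rule. The priority ordering and risk threshold here are invented assumptions for discussion, not a recommendation, and real systems are far more nuanced; the point is that encoding any such values is itself the ethical choice Cefkin describes.

```python
# Purely illustrative toy rule. Every value below is an assumption
# chosen for discussion, not how any real vehicle decides.

# Higher number = higher priority to protect (an assumed, debatable ordering).
PROTECTION_PRIORITY = {"person": 3, "passenger": 2, "dog": 1}

def should_swerve(obstacle: str, swerve_risk_to_passenger: float) -> bool:
    """Swerve only if the obstacle outranks the risk shifted to the passenger.

    swerve_risk_to_passenger: assumed 0.0-1.0 estimate of harm to the rider.
    """
    obstacle_rank = PROTECTION_PRIORITY.get(obstacle, 0)
    passenger_rank = PROTECTION_PRIORITY["passenger"]
    # Toy rule: always swerve for anything ranked above the passenger;
    # for lower-ranked obstacles, swerve only when the maneuver is
    # nearly risk-free for the rider.
    if obstacle_rank > passenger_rank:
        return True
    return swerve_risk_to_passenger < 0.1

print(should_swerve("dog", 0.4))     # False: risk to the passenger is too high
print(should_swerve("person", 0.4))  # True: a person outranks the passenger
```

Even this crude sketch forces explicit answers to questions most drivers never articulate, which is why communities, not just engineers, have a stake in the values baked in.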
Split-second decisions can save a life or cause harm, and they need to be thought through carefully. That’s why it’s crucial for communities, including pedestrians and cyclists, to have a say in how these vehicles operate on public roads. Ultimately, as Cefkin reminds us, “Autonomous vehicles are not the full answer.” The road ahead may be uncertain, but with thoughtful engagement, transparency, and a commitment to ethical practices, we can make this journey smoother for everyone. Buckle up: the future is waiting.