At first glance, self-driving cars promise safety and efficiency. However, as Cefkin points out, the reality is more complex. “People are interacting with a very complex system of AI-driven technologies,” she says. Many folks envision these machines as infallible, but self-driving technology is still very much in the learning process, and it’s a road riddled with bumps.
Cefkin explains that autonomous vehicles already feature interfaces that show passengers what the car “sees,” such as pedestrians or nearby cars, helping to build trust. However, the interface only displays a subset of the data the car processes, focusing on what’s most relevant to the passenger. In robotaxis, passengers cannot override the car’s decisions and can only report issues, allowing them to relax during the ride. In contrast, in semi-autonomous personal cars like Teslas, the driver must remain alert and ready to take control at any time.
It’s like using a navigation app that tells you where to go. Should you trust it completely, or should you still decide for yourself? This raises a broader question: should we control the flow of information, or simply let the technology guide us?
Then there’s the community aspect. As self-driving cars make their way into our neighborhoods, we can’t overlook the need for inclusive dialogue. Cefkin emphasizes that “it’s crucial to involve all relevant stakeholders—such as city planners, ethicists, engineers, community members, and policymakers—in conversations about where these vehicles should operate.”