In the heart of San Francisco, the future of transportation is unfolding. Over the past month, the city’s streets have become a testing ground for autonomous taxis. We now find ourselves at a pivotal intersection of technology and transportation. Software engineering visionaries know that embracing this technological shift means more than acknowledging IoT and AI. It’s about identifying the hurdles these vehicles might encounter and steering the direction for future innovations.
In this article, we’ll delve deep into the genesis of driverless cars, highlighting the technical triumphs and the bumps they’ve encountered along the way. From the ethical dilemmas posed by AI biases to the promise of cutting-edge solutions like Liquid Neural Networks, join us as we navigate the intricate landscape of autonomous taxis, offering critical and forward-thinking insights. Strap in for a ride through the autonomous taxi’s past, present, and future.
The Genesis of Driverless Cars
The allure of autonomous cars dates back to the 1920s, a time when rudimentary experiments and remote control technologies ignited imaginations. In 1925, New York witnessed the demonstration of the first radio-controlled car, the “American Wonder,” roughly 30 years after radio was first used to transmit Morse code. Nearly a century later, fleets of robotaxis are swarming cities, driven by advancements in AI coupled with over 100 billion dollars of investment in the industry. In 2016, Singapore became the first to launch an autonomous taxi service, followed by Uber in Pittsburgh in the US. Since then, various trials have emerged globally, with safety drivers in some instances and fully autonomous operations in others, spanning regions from China to Russia and beyond.
The Bumps on the Road: Post-Introduction Challenges
While exhilarating, the promise of a driverless future hasn’t been without its setbacks. Despite safety concerns, this August marked a turning point in driverless car history as San Francisco decided to permit 24/7 driverless taxi operations in the city. Within days, there was a series of public mishaps. Ten robotaxis, operated by Cruise, halted unexpectedly on bustling streets after losing contact with mission control. A Cruise autonomous car found itself entangled in freshly poured cement. Another Cruise robotaxi collided with a fire truck, leaving one person injured, and most recently, two Cruise taxis delayed an ambulance transporting a car-accident victim who later died at the hospital. These episodes underscore the complexities of real-world driving scenarios.
The Edge Case Enigma
As open source veterans and maintainers of multiple software projects, including PyTorch on Windows, we understand the difficulties of dealing with edge cases, especially in unpredictable environments. In the context of autonomous vehicles, this unpredictability manifests as unique situations that fall outside the parameters of training data. Whether it’s sudden weather changes, unexpected pedestrian behavior, or unforeseen road obstructions, these scenarios can present significant challenges for autonomous systems. In a recent blog post, Gary Marcus, CEO and cofounder of the Center for the Advancement of Trustworthy AI, said: “Scaling up to driving everywhere all the time without a serious, well-vetted solution to the edge case problem was insane; it was quite literally an accident (or series of accidents) waiting to happen.” He also touched on earlier reflections from some of the first autonomous driving tests in 2016, saying, “If you put them in clear weather in Palo Alto, they’re terrific … All the drivers are relaxed; you try it in New York, and you see a whole different style of driving. The system may not generalize well to a new style of driving.”
The Ethical Crossroads: Unmasking AI’s Biases
In addition to concerns with edge cases, a recent study unveiled unsettling biases in some autonomous systems. The researchers tested more than 8,000 images across eight AI-powered pedestrian-detection systems and found discrepancies in how accurately the detectors recognize children and people with darker skin. These findings underscore how technology can perpetuate real-world inequality if ethics and inclusivity are not deliberately centered in AI design.
Dr. Jie Zhang, one of the study’s authors, said in a press release: “Car manufacturers don’t release the details of the software they use for pedestrian detection, but as they are usually built upon the same open-source systems we used in our research, we can be quite sure that they are running into the same issues of bias.” However, a spokesperson for Waymo told Gizmodo that its vehicles rely on lidar and radar in addition to cameras, rather than cameras alone.
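The kind of disparity the study measures can be made concrete with a small sketch. The snippet below is purely illustrative, not the study’s actual methodology or data: it takes a set of hypothetical per-image detection outcomes tagged with a demographic attribute and computes the miss rate for each group, surfacing the gap between them.

```python
# Hypothetical detection outcomes per image; the group labels, outcomes,
# and sample size are invented for illustration only.
detections = [
    {"group": "lighter_skin", "detected": True},
    {"group": "lighter_skin", "detected": True},
    {"group": "lighter_skin", "detected": True},
    {"group": "lighter_skin", "detected": False},
    {"group": "darker_skin", "detected": True},
    {"group": "darker_skin", "detected": False},
    {"group": "darker_skin", "detected": False},
    {"group": "darker_skin", "detected": True},
]

def miss_rate_by_group(records):
    """Fraction of pedestrians the detector missed, per demographic group."""
    totals, misses = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        if not r["detected"]:
            misses[g] = misses.get(g, 0) + 1
    return {g: misses.get(g, 0) / totals[g] for g in totals}

rates = miss_rate_by_group(detections)
# The gap between the best- and worst-served groups is one simple
# disparity measure; a fairness audit would track this across systems.
disparity = max(rates.values()) - min(rates.values())
```

On this toy data the darker-skin group is missed twice as often, which is exactly the shape of result that makes subgroup-level evaluation, not just aggregate accuracy, essential for safety-critical perception systems.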
Liquid Neural Networks: A Beacon of Hope
In the face of these challenges, innovations like MIT’s Liquid Neural Networks (LNNs) offer a glimmer of hope. Their potential to handle novel scenarios better than traditional AI systems makes them a promising solution for autonomous vehicles’ challenges.
LNNs, inspired by biological neurons, are compact and highly adaptable. A striking testament to their efficiency: where a traditional network might require around 100,000 neurons for a task like lane-keeping, an LNN has achieved the same with just 19.
Daniela Rus, the director of MIT CSAIL, shared, “The inspiration for liquid neural networks was thinking about the existing approaches to machine learning and their fit with safety-critical systems like robots and edge devices.”
Moreover, LNNs have showcased their adaptability in real-world scenarios. Researchers trained LNNs for object detection on video frames taken in woods during summer. When tested in different seasons, only the LNNs maintained high accuracy, while other neural networks faltered. Rus observed, “Only the liquid networks were able to complete the task in the fall and winter because these networks focus on the task, not on the context.” This adaptability is crucial, especially when considering the biases highlighted in the study above. Technologies that can adapt to diverse scenarios without being swayed by context can potentially reduce biases and ensure safer autonomous operations.
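What makes these networks “liquid” is that each neuron’s effective time constant changes with its input, so the dynamics themselves adapt as conditions change. The following is a toy, single-neuron sketch of the liquid time-constant (LTC) idea using simple Euler integration; the parameter names and values are our own illustrative choices, not MIT’s implementation.

```python
import math

def ltc_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=1.0, b=0.0):
    """One Euler integration step of a single liquid time-constant neuron.

    The input passes through a bounded nonlinearity f, which appears both
    as a drive toward the reversal value A and inside the decay term, so
    the neuron's effective time constant 1 / (1/tau + f) depends on the
    input. That input-dependent timescale is the "liquid" property.
    """
    f = 1.0 / (1.0 + math.exp(-(w * inp + b)))  # bounded synaptic activation
    dx = -(1.0 / tau + f) * x + f * A           # LTC-style state dynamics
    return x + dt * dx

# Drive the neuron with a constant input and let the state settle toward
# its input-dependent equilibrium f*A / (1/tau + f).
x = 0.0
for _ in range(2000):
    x = ltc_step(x, inp=2.0)
```

With a stronger input, f grows, the effective time constant shrinks, and the neuron both reacts faster and settles at a different equilibrium, which is a small-scale view of how such cells can stay responsive to the task while changing context shifts their dynamics rather than breaking them.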
The journey toward autonomous vehicles is a blend of triumphs and trials. As we navigate this intricate landscape, the fusion of robust AI frameworks, ethical considerations, and continuous innovation becomes paramount. The proverbial road ahead is long and winding. Still, with collaboration, innovation, and a commitment to excellence, we believe a future where technology seamlessly aligns with safety and inclusivity is within reach.
Drawing from our hands-on experience with deep learning libraries like PyTorch, which are instrumental in powering autonomous vehicles, we recognize this domain’s vast potential and challenges. As we forge ahead, expanding our AI practice with exciting projects and groundbreaking initiatives (like our AI safety startup, BigFilter.ai), our commitment remains unwavering. We are dedicated to advancing the frontiers of AI, machine learning, and software engineering, striving to sculpt a safer and more efficient digital future.
Get in contact here to find out more about our work.