Tesla CEO Elon Musk has been promising self-driving cars for years, and he recently doubled down on that promise, saying “I would be shocked if we do not achieve full self-driving safer than a human this year.” Despite numerous delays, Musk has been adamant that the technology is coming soon, and he’s not alone in his optimism.
Many experts in the field believe that fully autonomous vehicles are an inevitability, and some are even predicting that they will be on the roads within the next few years.
But while the prospect of self-driving cars is exciting, it’s also fraught with potential risks and challenges. One of the biggest concerns is safety. While autonomous vehicles have the potential to reduce accidents caused by human error, they also introduce new risks, such as software glitches and hardware failures.
Additionally, the ethical implications of autonomous vehicles are still being debated, particularly when it comes to issues like liability and accountability in the event of accidents.
Tesla’s attempt at Full Self-Driving (FSD)
Tesla, the visionary company at the forefront of the autonomous vehicle industry, has been relentlessly pushing the boundaries of self-driving technology for years now. And in a recent move that defies industry norms, Tesla has expanded beta trials of its Full Self-Driving (FSD) software to over 60,000 Tesla owners, with a stringent safety test and a hefty price tag being the only barriers to entry.
This approach has boosted Tesla’s reputation with its cult-like fan club, which has enthusiastically taken up the opportunity to participate in the beta trials. However, it has also been fraught with risk, as numerous videos have surfaced online, documenting some reckless FSD behaviour.
One video shows a car veering dangerously into oncoming traffic, while another shows a car repeatedly trying to turn onto train tracks and into pedestrians. There is also a video that captures the driver struggling to regain control of the car after the system prompts him to take over. Furthermore, what appears to be the first crash involving FSD was reported to the US National Highway Traffic Safety Administration (NHTSA) in November last year; no one was injured, but the vehicle was “severely damaged.”
Despite these challenges and risks, there are potential benefits to using real-world testers. One significant advantage is that the technology is exposed to a more extensive range of scenarios and edge cases, which can help to identify and rectify issues that may not have been encountered in a controlled testing environment. Ultimately, this can lead to improved safety and reliability of the technology.
Tesla’s approach has also sparked a debate on the best way to test and develop autonomous vehicle technology. Other companies, such as Alphabet-owned Waymo, General Motors-backed Cruise, and AV startup Aurora, use trained safety operators to test the technology on predetermined routes. While this approach provides more control and minimises risk, it may not expose the technology to a wide range of scenarios and edge cases.
There are also concerns about the reliability of the data collected from these real-world tests. Since the testers are not trained safety operators, their feedback may not always be accurate or complete. And because the testers are Tesla customers who have paid a substantial amount of money for the FSD feature, they may be biased toward positive impressions and reluctant to report issues or failures.
Government Regulations on Full Self-Driving
Tesla’s approach to testing autonomous driving technology has also raised questions about the regulatory oversight of the industry. While companies like Waymo and Cruise have been testing their technology on public roads for years, they have done so under the supervision of safety operators and with the approval of regulatory bodies.
Tesla’s decision to allow its customers to test its FSD technology has drawn regulatory scrutiny. Unlike other AV manufacturers, Tesla has avoided more stringent requirements, such as reporting crashes and system failures and using trained safety professionals as testers. Why? Because it claims that its systems are more basic.
However, this approach is under review by California’s Department of Motor Vehicles, the state’s autonomous driving regulator, following investigations into Tesla and dangerous-looking videos of FSD in action.
The lack of global consensus on AV regulations has created a regulatory gap, raising concerns about whether the software will mature fast enough to be trusted by the regulators before something catastrophic happens, pulling the rug out from under the whole industry.
The future of FSD
To be frank, the future for FSD looks bleak. Regulations are becoming stricter by the day, and public support for FSD is waning. Furthermore, the financial viability of FSD for companies is becoming less and less clear. The reputational damage from a backlash over FSD accidents may not be worth the risk. And many experts have expressed doubts about Tesla achieving FSD by the end of the decade, let alone in the coming year.
As Elon Musk once said, “When Henry Ford made cheap, reliable cars, people said, ‘Nah, what’s wrong with a horse?’ Ford made a huge bet, and it worked.” This highlights the fact that even in the car industry, taking risks and making big bets can pay off. Maybe someday soon, FSD will become the norm, and we will wonder how we ever went without it.