“Admission of technical limitations?” Tesla redefines ‘Full Self-Driving,’ autonomous driving without supervision deemed impossible
Tesla alters sales terminology and adds new contractual clauses
Redefinition reflected even in Musk’s CEO compensation plan
Full Self-Driving: breakthrough innovation or perilous experiment?

Tesla has redefined its “Full Self-Driving Capability (FSD)” system. The company has effectively retracted its earlier commitment to deliver a truly autonomous driving technology free of human intervention, a move that the industry interprets as an implicit admission of the limitations of self-driving.
From ‘Full Self-Driving’ to ‘Driver Assistance System’
On the 8th (local time), U.S. EV-specialized outlet Electrek reported that Tesla recently renamed FSD as “FSD (Supervised),” explicitly noting the need for driver oversight. Tesla’s revised terms state: “The vehicle is not an autonomous car and does not guarantee such functionality.”
Furthermore, Elon Musk’s new compensation package also includes provisions clarifying that FSD does not equate to unsupervised driving. The clause defines FSD as an “advanced driving system capable of autonomous or near-autonomous functions under specific conditions.” In effect, FSD has been downgraded to a sophisticated driver-assist feature requiring constant human monitoring.
Since 2016, Tesla has publicly declared that all its vehicles would eventually support driverless autonomy. Since 2018, Musk has repeatedly pledged that this would be realized by year’s end, selling FSD software at $15,000 per customer and promising that autonomous functionality would be enabled via over-the-air updates. That commitment has yet to materialize: vehicles produced between 2016 and 2023 do not, in fact, support driverless autonomy at all.

Is AI safer than humans?
Tesla’s decision transcends mere corporate marketing strategy, carrying profound implications for the trajectory of the autonomous driving sector. Over the past decade, no subject has been more polarizing in the automotive industry than autonomy. Yet a central question remains unresolved: can full self-driving truly surpass human safety?
Tesla’s FSD technology diverges sharply from traditional approaches. Whereas most manufacturers rely on LiDAR and radar sensors, Tesla has bet exclusively on camera vision and neural network-based AI. The challenge lies in whether AI can reliably outperform human judgment. According to the U.S. National Highway Traffic Safety Administration (NHTSA), Tesla vehicles operating in self-driving modes were involved in 736 accidents between 2021 and 2023, several of which were fatal. While AI learns from data, its ability to respond to unpredictable events may lag behind human reflexes.
Even Tesla’s use of the term “Autopilot” has been contentious. Many engineers advocated for “Copilot,” a term signaling assistance rather than replacement, but Musk insisted otherwise, prompting mass resignations. The California Department of Motor Vehicles has also deemed Tesla’s autonomous driving advertisements misleading, authorizing class-action litigation. Tesla maintains that it has never used the term “full autonomy” in marketing, yet legal disputes remain unresolved.
Former Tesla AI chief: “Full autonomy remains distant”
Andrej Karpathy, Tesla’s former head of AI and once the guiding force behind its autonomous driving efforts, has himself cast doubt on overly optimistic timelines for FSD. Speaking at Y Combinator’s “AI Startup School” event in June, Karpathy warned, “Full self-driving remains an unsolved challenge. Belief in its imminent arrival should be tempered.”
He recounted: “In 2013, I experienced Google’s autonomous driving project (now Waymo) in Palo Alto, and the car drove flawlessly for about 30 minutes. At the time, I thought self-driving was imminent, but twelve years later, many issues remain unresolved.” He added, “Even when Waymo cars appear to be operating without human drivers, remote interventions are frequent, and human judgment is still necessary.” Karpathy emphasized that the evolution of autonomous vehicles and AI agents—software systems performing tasks on behalf of humans—cannot be achieved in short order. “Software is far more complex than anticipated. It’s not 2025, but the entire 2020s that will mark the beginning of the AI agent era,” he stated.
Karpathy’s remarks gained traction in June, coinciding with Tesla’s pilot launch of a robotaxi service in Austin, Texas. International media noted, “While Musk asserts that full self-driving has been solved, his top technologist expressed an entirely different view.” Electrek observed that during the pilot, “Tesla employees occupied the passenger seat while remote operators remained on standby to take control. This is not genuine full autonomy but merely a shift in the observer’s position.”
Electrek further criticized the rollout: “Despite a decade of broken promises, delayed launches, and incomplete systems, Tesla’s push for robotaxi commercialization is little more than a promotional tactic.” At present, Tesla’s FSD can operate for several hundred miles without intervention, but true Level 4 autonomy requires tens of thousands of miles of disengagement-free driving. Analysts conclude that the road ahead remains long.