In a strategic move that underscores its deepening commitment to artificial intelligence and autonomous technology, Tesla is reportedly poised to significantly expand its semiconductor supply agreement with Samsung Electronics. According to industry reports, executives from the electric vehicle giant are scheduled to meet with Samsung officials this week to negotiate a substantial increase in the production volume of the next-generation AI6 chip. This development marks a pivotal moment in Tesla’s hardware roadmap, potentially securing the computational power necessary for its most ambitious projects over the next decade.
The negotiations center on Tesla’s request to bolster the supply of wafers for its upcoming AI6 chip, which is set to be manufactured using Samsung’s cutting-edge 2-nanometer process node. If finalized, this expansion would see Tesla’s procurement rise from an initially agreed 16,000 wafers per month to approximately 40,000 wafers per month. Such a dramatic increase in volume highlights Tesla’s anticipation of massive demand for high-performance computing, driven by the simultaneous scaling of its Full Self-Driving (FSD) suite, the Optimus humanoid robot program, and internal data center operations.
This potential agreement builds upon a long-term foundry partnership between the two tech behemoths. Tesla had previously secured a deal covering AI6 production through the end of 2033, valued at approximately 22.8 trillion won ($16–17 billion). The current discussions aim to revise the terms of this existing contract to accommodate the higher volume, signaling Tesla’s intent to lock in critical supply chain resources well into the future. As the global race for AI supremacy accelerates, securing 2-nanometer production capacity—widely considered the next frontier in semiconductor efficiency and performance—is a strategic imperative.
Scaling Up: The Logistics of the Expansion
The core of the reported negotiations lies in the sheer scale of the production increase. Under the terms of the existing agreement, Tesla had secured a steady supply of 16,000 wafers per month. While significant, this volume appears to be insufficient for the company’s revised projections of its future hardware needs. The request for an additional 24,000 wafers per month represents a 150% increase over the original plan, bringing the total monthly capacity to around 40,000 wafers.
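The volume arithmetic above can be checked directly. A minimal sketch, using only the figures from the reported deal terms:

```python
# Reported wafer volumes from the supply agreement (wafers per month).
original_supply = 16_000
requested_total = 40_000

# Additional wafers Tesla is reportedly requesting.
additional = requested_total - original_supply

# Increase relative to the originally agreed volume, as a percentage.
increase_pct = additional / original_supply * 100

print(f"Additional wafers/month: {additional:,}")      # 24,000
print(f"Increase over original plan: {increase_pct:.0f}%")  # 150%
```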
Industry sources cited by The Elec indicate that Tesla purchasing executives are visiting Samsung specifically to hammer out the detailed supply terms regarding this volume hike. The financial implications of such an expansion are substantial. With the original deal already valued in the range of $16 billion to $17 billion, a volume increase of this magnitude would likely involve significant renegotiation of pricing structures, delivery schedules, and capital commitments. For Samsung, securing such a massive order from a high-profile client like Tesla serves as a validation of its foundry capabilities, particularly as it competes with Taiwan Semiconductor Manufacturing Company (TSMC) for dominance in the advanced logic market.
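As a rough illustration of that magnitude only: if contract value scaled linearly with wafer volume (an assumption for the sketch; renegotiated pricing, delivery schedules, and capital commitments would all change the real figure), the reported numbers imply the following ballpark:

```python
# Assumption: midpoint of the reported $16–17 billion original deal value.
original_value_usd = 16.5e9
original_wafers = 16_000
expanded_wafers = 40_000

# Naive linear scaling of contract value with volume — illustrative only,
# not a figure from the report.
implied_value = original_value_usd * expanded_wafers / original_wafers
print(f"Implied expanded deal value: ${implied_value / 1e9:.2f}B")
```

Even this crude estimate (roughly $41 billion) shows why the talks reportedly involve a full revision of the existing contract rather than a simple purchase order.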
The timeline of the agreement, extending through December 31, 2033, suggests that Tesla is not merely looking for a short-term fix but is architecting a decade-long hardware strategy. By securing wafer capacity for the AI6 chip now, Tesla is insulating itself from potential future supply chain disruptions while ensuring it has the raw silicon necessary to power millions of vehicles and robots.
The 2-Nanometer Advantage
Central to this expansion is the technology underlying the AI6 chip: Samsung’s 2-nanometer process. In semiconductor manufacturing, the process node (expressed in nanometers) nominally describes the scale of a chip’s transistors; in modern processes the figure is more a generational label than a literal measurement, but each new node still delivers higher transistor density, greater performance, and improved energy efficiency. Moving from the current 5-nanometer and 4-nanometer standards to 2-nanometer represents a generational leap in chip technology.
For Tesla, the benefits of the 2-nanometer process are multifaceted. The primary advantage is power efficiency. As Tesla vehicles become increasingly software-defined computers on wheels, the energy consumption of the onboard inference computer becomes a critical factor in overall vehicle range. A more efficient chip allows for more complex FSD calculations without draining the battery excessively. Furthermore, the increased transistor density allows for greater computational throughput, enabling the AI6 to process the immense streams of video and sensor data required for Level 4 and Level 5 autonomy with lower latency.
Samsung’s approach to 2-nanometer technology utilizes Gate-All-Around (GAA) transistor architecture, which offers superior control over current flow compared to the older FinFET architecture. This technical nuance is vital for Tesla’s applications, where reliability and performance consistency are non-negotiable safety requirements.
Powering the Ecosystem: FSD, Optimus, and Data Centers
The massive increase in wafer orders raises the question: What will Tesla do with all this silicon? The report identifies three primary pillars that the AI6 chip will support: the Full Self-Driving system, the Optimus humanoid robot, and Tesla’s internal AI data centers.
Full Self-Driving (FSD): The trajectory of Tesla’s FSD software has been moving steadily toward end-to-end neural networks, where the system learns driving behaviors directly from video data rather than relying on hard-coded rules. This approach is incredibly compute-intensive. As Tesla expands its fleet and pushes for higher levels of autonomy, the onboard computer must be capable of executing trillions of operations per second with flawless reliability. The AI6 is poised to be the engine behind the next generation of FSD hardware, likely succeeding the current Hardware 4 (HW4) and the upcoming AI5 (also referred to as HW5).
Optimus Humanoid Robot: Perhaps the most demanding application for the new chips will be the Optimus robot. Unlike a car, which has a large battery pack and active cooling systems, a humanoid robot has strict constraints regarding power consumption, heat dissipation, and weight. The AI6 chip’s 2-nanometer architecture will be crucial for Optimus, providing the necessary intelligence for navigation, object manipulation, and human interaction while maintaining a battery life that makes the robot commercially viable.
Internal AI Data Centers: A surprising but significant detail in the report is the potential use of AI6 chips in Tesla’s internal data centers. Traditionally, Tesla has utilized NVIDIA GPUs for training its neural networks. However, the company has been aggressively pursuing vertical integration to reduce dependency on third-party suppliers and optimize hardware for its specific workloads. Deploying AI6 chips in server clusters would allow Tesla to unify its hardware stack, running the same silicon in the cloud (for training) as it does at the edge (for inference in cars and robots).
Strategic Pivot: From Dojo to AI6 Clusters
One of the most intriguing aspects of the report is the suggestion that AI6 clusters could replace the role previously planned for Tesla’s Dojo AI supercomputer. Dojo was introduced with much fanfare as Tesla’s custom-designed solution for video training, utilizing the proprietary D1 chip. The goal was to create a supercomputer specifically optimized for the hyper-bandwidth requirements of processing video data.
However, the industry sources cited indicate a potential pivot. The report states:
“The report also indicated that AI6 clusters could replace the role previously planned for Tesla’s Dojo AI supercomputer. Instead of a single system, multiple AI6 chips would be combined into server-level clusters.”
If accurate, this represents a significant shift in Tesla’s infrastructure strategy. Moving away from a specialized supercomputer architecture like Dojo toward a cluster of versatile AI6 chips could offer greater flexibility and scalability. It suggests that the AI6 is being designed not just as an inference chip for edge devices, but as a high-performance processor capable of handling the heavy lifting of model training. This unification of hardware could streamline software development, as engineers would be optimizing for a single architecture across the entire stack.
A Historic Partnership: Tesla and Samsung
The collaboration between Tesla and Samsung is not new; it is a relationship forged over several years of technological cooperation. Samsung has been a key player in Tesla’s journey toward custom silicon, having participated in the design and manufacture of previous hardware generations.
- HW3 (AI3): Samsung manufactured Tesla’s Hardware 3 chip using a 14-nanometer process. This chip was a breakthrough, allowing Tesla to move away from standard NVIDIA hardware to a custom solution tailored for FSD.
- HW4: The current generation of hardware found in new Tesla vehicles is also produced by Samsung, utilizing a 5-nanometer node. This chip offered significant performance improvements over HW3, enabling higher resolution cameras and faster processing.
While Tesla had previously planned to split the production of its intermediate AI5 chip between Samsung and TSMC, the decision to choose Samsung as the primary partner for the AI6 chip underscores a deepening trust. TSMC remains the dominant player in the global foundry market, serving clients like Apple and NVIDIA, but Samsung’s aggressive investment in 2-nanometer technology and its willingness to accommodate Tesla’s specific volume and design requirements appear to have tipped the scales.
Global Implications and Supply Chain Security
Tesla’s move to secure massive 2-nanometer capacity has broader implications for the global semiconductor industry. As artificial intelligence becomes the driving force behind technological innovation, the demand for advanced logic chips is outstripping supply. By locking in a contract through 2033, Tesla is effectively reserving a significant portion of the world’s future advanced computing capacity.
This strategy also reflects a desire for supply chain resilience. The automotive industry was crippled by chip shortages during the post-pandemic recovery, teaching manufacturers a hard lesson about the fragility of the supply chain. Tesla, which weathered that storm better than most due to its agile software and hardware engineering, is now taking steps to ensure it never faces a bottleneck in compute availability. Relying on Samsung also provides geographic diversification, balancing the heavy concentration of chip manufacturing in Taiwan.
Conclusion
The reported negotiations between Tesla and Samsung regarding the AI6 chip represent more than just a supply deal; they are a declaration of intent. By targeting a production volume of 40,000 wafers per month on a 2-nanometer process, Tesla is laying the physical foundation for a future dominated by autonomous agents. Whether it is a Robotaxi navigating a busy intersection or an Optimus robot performing household tasks, the success of these products hinges on the silicon that powers them.
As Tesla executives meet with their counterparts at Samsung this week, the outcome of these talks will likely shape the trajectory of the company for the next decade. If the deal is finalized as reported, it will cement the Tesla-Samsung alliance as one of the most consequential partnerships in the tech world, bridging the gap between advanced semiconductor manufacturing and real-world AI robotics.