In a move that signals a seismic shift in the world of artificial intelligence, Tesla has officially pulled the curtain back on CORTEX, its latest supercomputer training cluster built on Nvidia’s cutting-edge H100 GPUs. And make no mistake — this is no mere lab project. CORTEX is already powering the next evolution of Tesla’s Full Self-Driving (FSD) and Optimus humanoid robot programs, taking Elon Musk’s vision of “real-world AI” from science fiction to scalable reality.
From Sci-Fi to Superiority: Real-World AI Is Here
Over half a century ago, the idea that machines could outthink humans seemed like a far-off fantasy. Now, with the rise of real-world AI — systems trained to interact, react, and evolve in real-time environments — Tesla is leading the charge. Whether it’s navigating traffic or teaching a humanoid robot to walk, CORTEX is the brain behind the brawn.
Gone are the days of hardcoded rules. Tesla has shifted to a vision-only approach powered by neural networks trained on terabytes of real-world driving data. That data, collected from millions of Teslas on the road, fuels the company’s AI engines — and CORTEX is the supercomputer that’s making this next-gen training pipeline faster and smarter than ever before.
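To make the shift concrete, here is a minimal sketch of what an end-to-end, vision-only driving policy could look like in PyTorch. The architecture, camera count, and control outputs are illustrative assumptions, since Tesla has not published its FSD network design.

```python
# Minimal sketch of a vision-only driving policy (hypothetical architecture;
# Tesla's actual FSD network is not public).
import torch
import torch.nn as nn

class VisionPolicy(nn.Module):
    def __init__(self, num_cameras=8, num_controls=3):  # e.g. steer, accel, brake
        super().__init__()
        # Shared convolutional backbone applied to every camera frame
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse per-camera features and regress control commands
        self.head = nn.Sequential(
            nn.Linear(64 * num_cameras, 256), nn.ReLU(),
            nn.Linear(256, num_controls),
        )

    def forward(self, frames):                 # frames: (batch, cameras, 3, H, W)
        b, c = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, -1)
        return self.head(feats)

# One supervised training step against recorded human driving (imitation signal)
model = VisionPolicy()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
frames = torch.randn(4, 8, 3, 128, 128)        # placeholder camera batch
targets = torch.randn(4, 3)                    # placeholder recorded controls
loss = nn.functional.mse_loss(model(frames), targets)
loss.backward()
optimizer.step()
```

In the real pipeline the supervision comes from fleet driving data rather than random tensors, and the network is vastly larger; the point is simply cameras in, controls out, with no hand-coded rules in between.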
CORTEX: The Supercomputer That’s Powering FSD v13 (and Beyond)
As revealed in Tesla’s Q4 2024 investor call, CORTEX was launched at Giga Texas to accelerate the rollout of FSD version 13. According to Musk, it played a “significant” role in boosting the performance of the system — delivering better safety, comfort, and response time. Tesla’s shareholder deck listed key specs:
- ~50,000 H100 GPUs
- 4.2x increase in training data throughput
- 2x reduction in latency between image input and car response
- High-resolution video processing and a redesigned controller
These aren’t just incremental upgrades. They’re leaps toward truly autonomous driving.
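For a rough sense of what ~50,000 H100s means in raw compute, here is a back-of-the-envelope tally. The per-GPU peak is Nvidia's published dense BF16 figure for the H100 SXM; the sustained-utilization number is purely an assumption.

```python
# Back-of-the-envelope compute estimate for the CORTEX cluster.
num_gpus = 50_000
bf16_tflops_per_gpu = 989        # Nvidia H100 SXM peak, dense BF16 (no sparsity)
assumed_utilization = 0.40       # assumption: sustained training efficiency

peak_eflops = num_gpus * bf16_tflops_per_gpu / 1e6      # TFLOPS -> EFLOPS
sustained_eflops = peak_eflops * assumed_utilization
print(f"Peak: ~{peak_eflops:.0f} EFLOPS BF16, sustained: ~{sustained_eflops:.0f} EFLOPS")
```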
Auto-Labeling at Scale: Tesla’s Secret Weapon
A major bottleneck in AI development is data labeling — the tedious process of tagging roads, lanes, signs, and obstacles in countless hours of video. Tesla used to employ over 1,500 human labelers. Now? It’s automating that too.
According to a Tesla patent, CORTEX is part of a three-step system:
- Collect image and navigation data from Tesla cars and Optimus robots
- Create fully labeled 3D models of environments using repeated trip data
- Auto-label future trips using the models, with minimal human input
CORTEX is designed to learn from the data it helped generate, making the labeling process self-improving over time. This is AI training AI — a closed loop that scales.
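Here is a rough sketch of how that closed loop could be wired together. Every function and attribute name below is a hypothetical placeholder; the patent describes the three steps, not an API.

```python
# Hypothetical sketch of the three-step auto-labeling loop (placeholder names;
# the patent describes the steps, not an actual Tesla API).

def collect_trip_data(fleet):
    """Step 1: gather camera frames and navigation traces from cars and robots."""
    return [vehicle.upload_latest_trip() for vehicle in fleet]

def build_scene_model(trips_on_route):
    """Step 2: fuse repeated trips over one route into a fully labeled 3D scene."""
    ...

def auto_label(trip, scene_model):
    """Step 3: project the 3D scene's labels onto a new trip's video frames."""
    ...

def closed_loop(fleet, trips_by_route, perception_net):
    """AI training AI: each pass yields labels that improve the next pass."""
    for trip in collect_trip_data(fleet):
        trips_by_route.setdefault(trip.route, []).append(trip)
        scene = build_scene_model(trips_by_route[trip.route])  # richer each visit
        labels = auto_label(trip, scene)
        perception_net.train_step(trip.frames, labels)         # learn from auto-labels
```

The design point is that the scene model for a route gets richer every time a vehicle drives it, so the labels, and the network trained on them, improve without a proportional increase in human effort.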
But What About DOJO?
Interestingly, CORTEX isn’t Tesla’s in-house “Dojo” supercomputer. Dojo was hyped as a revolutionary chip design project, but it’s still scaling up. CORTEX, on the other hand, is already online and running high-stakes workloads, built on tried-and-true Nvidia H100 architecture.
Musk clarified that both systems are running in parallel, and broke down Tesla's current training fleet:
- ~90,000 H100 GPUs (CORTEX)
- ~40,000 Tesla AI4 computers (formerly HW4)
- ~8,000 H100-equivalent Dojo 1 nodes expected by year's end
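Taking those disclosures at face value, the H100-equivalent tally is easy to run. The AI4 computers are left out of the sum below because Tesla has not published a training-equivalence figure for them.

```python
# Disclosed training capacity in H100-equivalents (AI4 excluded: no published
# training-equivalence figure for those inference computers).
cortex_h100 = 90_000
dojo1_h100_equivalent = 8_000    # "expected by year's end"

total = cortex_h100 + dojo1_h100_equivalent
print(f"~{total:,} H100-equivalents disclosed, plus ~40,000 AI4 computers")
```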
And this is just the beginning. Musk has teased expansion plans to scale CORTEX from 130 MW to over 500 MW of power and cooling within 18 months, hinting that Tesla’s own AI hardware will increasingly be integrated into the cluster.
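As a sanity check, those power figures roughly line up with the GPU counts above if you budget an all-in draw of about 1.4 kW per H100-class accelerator, covering the GPU plus host, networking, and cooling overhead. That per-GPU budget is an assumption, not a Tesla disclosure.

```python
# How many H100-class accelerators each power envelope could feed, assuming
# ~1.4 kW all-in per GPU (GPU + host + networking + cooling); assumption only.
KW_PER_GPU_ALL_IN = 1.4

for site_mw in (130, 500):
    gpus = site_mw * 1_000 / KW_PER_GPU_ALL_IN
    print(f"{site_mw} MW -> roughly {gpus:,.0f} GPUs")
# 130 MW works out to roughly 93,000 GPUs, consistent with the ~90,000 H100 figure.
```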
Play to Win — or Don’t Play at All
Tesla is not just building faster chips or smarter cars. It’s building the backbone of a general-purpose AI ecosystem — one that trains itself, improves itself, and ultimately outpaces anything the competition can offer.
Whether you’re bullish on Tesla or skeptical of the hype, one thing is clear: the CORTEX supercomputer is not just another press release. It’s a line in the sand.
This is no longer about catching up to Tesla. It’s about surviving the next wave of real-world AI — a wave Elon Musk is already surfing at full speed.