
Tesla Vision Puts It in the Pole Position for Full Self-Driving

In a recent presentation, Andrej Karpathy, Tesla's head of AI, discussed removing radar and relying on a pure vision system for Tesla's autonomous driving software. Let's look at the key highlights and why this puts Tesla in a firm lead.

Radar is Completely Gone and Why This Matters

Tesla has removed radar from its latest cars and will rely on vision alone to make its cars autonomous. To my knowledge, it is the only automaker doing this.

Removing radar saves money and production time on Tesla's vehicles, but it also reduces complexity. The main reason: if radar and vision give different answers about what to do, which one do you believe?

Value is Offered Today before Full Self-Driving

In his presentation, Karpathy shared several instances where Tesla's vision system is already offering value today. Here are scenarios where the vision system protected the driver of the Tesla as well as others who could have been harmed:

Automatic Emergency Braking

A driver was proceeding through an intersection when a pedestrian jumped out in front of the car, and the Tesla quickly hit the brakes to avoid hitting them. You can see the person quickly run forward, but they would not have been struck by the car.

Here is the video of the Automatic Emergency Braking.

Traffic Control Warning

There was a situation with a driver who wasn't paying attention to a red light. As they approached the intersection and the point of cross traffic, the car stopped itself at the red light and avoided running it. I can see Tesla expanding this to stop signs as well - which I have run by accident before, thankfully without causing an accident.

Here is the video of the Traffic Control Warning.

Pedal Misapplication Mitigation # 1

A driver was in a parking lot getting ready to leave, and as they were turning, they floored the accelerator - but there were pedestrians in front of them as they turned. The Tesla vision system slammed on the brakes to avoid hitting the people in front of the car.

Here is the video of the pedal misapplication mitigation scenario 1.

Pedal Misapplication Mitigation # 2

A driver of a Tesla was turning into a parking spot in a parking area. As they turned in, they floored the accelerator, probably by accident. The problem: beyond the parking spot was a river. The Tesla vision system recognized that there was nowhere to go ahead except into the river, promptly slammed on the brakes, and kept the driver from being plunged into the water.

Here is the video of the pedal misapplication mitigation scenario 2.


Tesla has a Massive Supercomputer that is Just the Beginning

Tesla has what it believes to be the fifth-largest supercomputer in the world, and that computer is focused solely on solving autonomous driving. Essentially, Tesla has created its own visual cortex that functions the way a human brain's would. The computer boasts some impressive specifications:

  • 720 nodes of 8x A100 80GB (5,760 GPUs total)
  • 1.8 EFLOPS (720 nodes × 8 GPUs/node × 312 FP16 TFLOPS per A100)
  • 10 PB of "hot tier" NVMe storage @ 1.6 TBps
  • 640 Tbps of total switching capacity
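The headline numbers above are internally consistent, which is easy to verify with a quick back-of-the-envelope calculation (a sketch using only the figures quoted in the list, with 312 TFLOPS as the FP16 rating per A100):

```python
# Sanity-check the quoted cluster specs against each other.
NODES = 720
GPUS_PER_NODE = 8
TFLOPS_FP16_PER_A100 = 312  # quoted dense FP16 figure per A100

total_gpus = NODES * GPUS_PER_NODE
total_eflops = total_gpus * TFLOPS_FP16_PER_A100 / 1_000_000  # TFLOPS -> EFLOPS

print(total_gpus)               # 5760, matching the "5,760 GPUs total" line
print(round(total_eflops, 2))   # 1.8, matching the quoted 1.8 EFLOPS
```

So the 1.8 EFLOPS figure is simply the GPU count multiplied by the per-GPU FP16 throughput, not a measured benchmark.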

Basically all of this means that Tesla has built their own custom computer that has an IMMENSE amount of video processing capability.

In the latest Tesla vehicles, there is also an FSD computer motherboard with the following specifications:

  • 12 CPUs @ 2.2 GHz
  • GPU: 600 GFLOPS
  • 2x NPU @ 36.86 TOPS per NPU
  • 36 W power draw

This is a hefty computer chip used for running the full self-driving software in a Tesla car.
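Working only from the numbers quoted above, a rough sketch of what the chip delivers in total (the per-watt figure is my own derived number, assuming the quoted 36 W applies to the whole chip):

```python
# Rough totals derived from the quoted FSD computer specs.
NPU_TOPS = 36.86   # quoted TOPS per NPU
NPU_COUNT = 2
POWER_W = 36       # quoted power draw

total_npu_tops = NPU_COUNT * NPU_TOPS
tops_per_watt = total_npu_tops / POWER_W

print(round(total_npu_tops, 2))  # 73.72 TOPS across both NPUs
print(round(tops_per_watt, 2))   # ~2.05 TOPS per watt
```

That roughly 74 TOPS of neural-network compute is what runs the full self-driving software in the car itself, as opposed to the training cluster described above.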

Here is the video of Tesla's supercomputer and the chip in its car.

Vision is Much More Effective at Handling All Cases

There was a controlled test where a Tesla was driving behind another car, and the car ahead was told to slam on its brakes as hard as it could.

When the data was retrieved to see how the radar and vision systems processed this, it showed the radar having a hard time keeping track of the velocity and position of the car ahead, whereas the vision system tracked both smoothly in real time without a problem. Radar was shown to have periodic problems tracking objects, and there is no way to know when this will happen.

Here is the video of the Tesla vision system performing better than radar.

Tesla Has the Most Data Being Gathered and Processed

Tesla has over a million cars on the road gathering data and a massive supercomputer processing and training on the resulting datasets. According to Karpathy, to get any neural network to signal (that is, to be successful), you need the following:

  • Millions of videos (a large dataset)
  • Clean data (labeled data: depth, velocity, acceleration)
  • Diverse data (lots of edge cases and not just normal situations)
  • A large enough neural network to train the data

Here is the video of Tesla's neural network outline.

Tesla vision seems to be the most viable, globally scalable solution to autonomous driving. Will anyone else catch up to Tesla? Will someone be able to use radar and lidar in order to solve autonomy at a global scale?

Leave your comments below, share the article with friends and tweet it out to your followers.

Jeremy Johnson is a Tesla investor and supporter. He first invested in Tesla in 2017 after years of following Elon Musk and admiring his work ethic and intelligence. Since then, he's become a Tesla bull, covering anything about Tesla he can find, while also dabbling in other electric vehicle companies. Jeremy covers Tesla developments at Torque News. You can follow him on Twitter, Facebook, LinkedIn and Instagram to stay in touch and follow his Tesla news coverage on Torque News.
