Tesla's original supercomputer configuration for AI comprised 720 nodes, each with eight 80GB Nvidia A100 GPUs, for a total of 5,760 GPUs. A graphics processing unit (GPU) is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in cars, but also in embedded systems, mobile phones, personal computers, workstations, and game consoles.
Modern GPUs are efficient at manipulating computer graphics and image processing. Their parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard.
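The kind of data-parallel work described above can be illustrated with a minimal NumPy sketch. This is a CPU-side analogy, not GPU code: the vectorized operation below is the sort of elementwise work a GPU would spread across thousands of cores at once, while the explicit loop mimics one-element-at-a-time sequential processing.

```python
import numpy as np

# A large block of data: one value per "pixel".
pixels = np.arange(1_000_000, dtype=np.float32)

# Elementwise (data-parallel) operation: every output element is
# independent of the others, so a GPU could compute them all concurrently.
brightened = pixels * 1.5 + 10.0

# A sequential loop computes the same result one element at a time,
# the way a single CPU thread would.
sequential = np.empty_like(pixels)
for i in range(len(pixels)):
    sequential[i] = pixels[i] * 1.5 + 10.0

assert np.allclose(brightened, sequential)
```

The results are identical; the difference on real hardware is that the independent elementwise form maps naturally onto the GPU's parallel structure.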
With the recent addition of another 200 nodes of the same configuration (1,600 GPUs), Tesla's AI cluster reaches a total of 920 nodes, or 7,360 GPUs. According to Tim Zaman, Tesla's engineering director, on Twitter, this update makes the cluster the seventh largest supercomputer in the world by GPU count, although its performance has not yet been publicly benchmarked.
Tesla is sponsoring the @MLSysConf, come visit our booth for opportunities on the AI team and see our hardware. We have recently upgraded our GPU supercomputer (photo) to 7360 A-100(80GB) GPUs, making it Top-7 by gpu-count. Reach out to build #1: https://t.co/mPiOOb8CJg pic.twitter.com/VN5orhbtLy — Tim Zaman (@tim_zaman) August 12, 2022
Because the system has not been publicly benchmarked, it cannot officially appear in the Top500 list of the world's most powerful supercomputers. If it did, it would sit alongside systems with similar computing power, such as Perlmutter (6,144 Nvidia A100 GPUs) or Selene (4,480 A100 GPUs). For now, Tesla has not opened the door to a submission, but ranking aside, what matters is the benefit the company can draw from this system, which will precede a far more ambitious one, called "Dojo," expected to be ready by the end of this year.
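The GPU counts reported above are easy to sanity-check, since every node carries the same eight A100s:

```python
# Tesla's cluster growth, using the figures reported in the article.
gpus_per_node = 8            # eight 80GB A100 GPUs per node

initial_nodes = 720
added_nodes = 200

initial_gpus = initial_nodes * gpus_per_node   # 5,760 GPUs originally
added_gpus = added_nodes * gpus_per_node       # 1,600 GPUs added
total_nodes = initial_nodes + added_nodes      # 920 nodes after the upgrade
total_gpus = total_nodes * gpus_per_node       # 7,360 GPUs in total

print(total_nodes, total_gpus)  # → 920 7360
```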
But what role, exactly, does the supercomputer play within Tesla? The evolution of autonomous driving systems, at least as Tesla approaches it, requires parallel progress in many fields, because the goal is a car that reacts like a human being, or even better, in any driving situation. To that end, the firm trains deep neural networks on problems ranging from perception to control, using the raw images captured by many of its cars currently on the streets.
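The training described above can be sketched, at toy scale, as a supervised learning loop. This is a heavily simplified stand-in, not Tesla's actual pipeline: the random feature vectors below play the role of camera frames, the binary labels the role of an annotation (say, obstacle vs. no obstacle), and a logistic model the role of a deep network. What the GPU cluster accelerates is this same gradient-descent loop, run over vastly larger models and datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for labeled camera frames: 64 samples, 16 features each,
# with a binary label derived from one feature (linearly separable).
X = rng.normal(size=(64, 16))
y = (X[:, 0] > 0).astype(np.float64)

w = np.zeros(16)   # model weights
lr = 0.5           # learning rate

# A few gradient-descent steps on the mean logistic loss: the core
# training loop that a GPU cluster runs at enormous scale.
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    grad = X.T @ (p - y) / len(y)      # gradient of mean logistic loss
    w -= lr * grad

accuracy = np.mean((p > 0.5) == y)
```

After a couple hundred steps the model separates the two classes on this toy data; real perception networks differ in scale and architecture, not in the basic shape of the loop.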
This Tesla supercomputer primarily performs semantic segmentation, object detection, and monocular depth estimation. In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
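Pixel-wise labeling of this kind can be sketched in a few lines. The example below is a toy intensity-threshold segmentation of a tiny grayscale image, chosen only to make "assigning a label to every pixel" concrete; Tesla's networks learn far richer semantic labels from data.

```python
import numpy as np

# Toy 4x4 grayscale image: background (0), a dim region, and a bright region.
image = np.array([
    [0,   0,   40,  40],
    [0,   0,   40,  40],
    [200, 200, 0,   0],
    [200, 200, 0,   0],
], dtype=np.uint8)

# Assign a label to every pixel based on intensity bands, so pixels
# with the same label share a characteristic (here, brightness).
labels = np.zeros(image.shape, dtype=np.int32)
labels[(image > 0) & (image <= 100)] = 1   # label 1: dim region
labels[image > 100] = 2                    # label 2: bright region
```

The result is a label map with the same shape as the image: label 0 marks the background, and labels 1 and 2 mark the two segments.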
In addition, the networks learn from real human driving situations on the spot. Broadly speaking, this is how Tesla can improve the driving assistance systems in its Model Y, Model 3, Model S and Model X through updates, and move toward the long-awaited fully autonomous driving system, Full Self-Driving (FSD), which Elon Musk has been talking about for a long time. And of course, there are many other technologies embedded in the cars, such as machine vision cameras, autonomy algorithms (which algorithmically generate accurate real-world terrain data), and FSD chips that run the autonomous driving software.
As we can see, manufacturing the EVs of the present and the future inevitably requires exploring new areas: supercomputing and artificial intelligence are two of Tesla's great pillars. The company is so invested in them that, in addition to advancing the technologies described above, it is one of the sponsors of the Machine Learning & Systems Conference (MLSys) in Santa Clara, California; and on September 30th, as in previous years, it will hold "Tesla AI Day", where it will reveal its latest developments in the field and how it is applying them to its current and future plans: we might even get to see the Optimus robot for the first time.
All images courtesy of Tesla Inc.
Nico Caballero is the VP of Finance of Cogency Power, specializing in solar energy. He also holds a Diploma in Electric Cars from Delft University of Technology in the Netherlands, and enjoys doing research about Tesla and EV batteries. He can be reached at @NicoTorqueNews on Twitter. Nico covers the latest Tesla and electric vehicle happenings at Torque News.