
Tesla Owner Shows FSD Improvements in Action


Just a few hours ago, a Tesla blogger shared the FSD improvements in action and said that during the entire drive he had zero disengagements.

"How do people not see what a massive software breakthrough this is? AI day was about how they made it happen, and how they plan to take it to its inevitable conclusion," wrote Whole Mars Catalog Twitter user, who regularly blogs Tesla news and shares full-self driving videos.

Tesla's FSD was one of the four main topics to which Elon Musk dedicated most of his time during the latest Tesla AI Day.

Many of the speakers at the AI Day event noted that Dojo will not just be a technology for Tesla's "Full Self-Driving" (FSD) system, which remains an impressive advanced driver assistance system rather than a fully self-driving or autonomous one. The powerful supercomputer is built from multiple components, such as the simulation architecture, that the company hopes to make universal and even open up to other automakers and tech companies.

“This is not intended to be just limited to Tesla cars,” said Musk. “Those of you who’ve seen the full self-driving beta can appreciate the rate at which the Tesla neural net is learning to drive. And this is a particular application of AI, but I think there’s more applications down the road that will make sense.”

Tesla is making a great effort to make Autopilot more practical, is building supercomputers, and had another small surprise.

At Tesla's AI Day, engineers presented new developments meant to improve the Autopilot function of Tesla vehicles. The improvements shown also offer insight into the shortcomings of the previous software. At the end of the presentation, Elon Musk, almost in the tradition of Steve Jobs, had one more little surprise to present.

As announced some time ago, the new Autopilot will be able to evaluate video data from multiple cameras simultaneously and over time. The aim is to no longer merely recognize objects in individual images from individual cameras, but to assign them physical properties such as their location and speed. The artificial intelligence behind it should be able to model a complete physical world in which objects have a permanent existence, instead of being captured only from moment to moment in a camera image (or not at all).
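To make the idea of "permanent existence" concrete, here is a minimal, hypothetical Python sketch of that kind of multi-camera temporal fusion: per-camera detections are merged into persistent tracks that carry an estimated position and velocity and keep being propagated even when no camera currently sees the object. This is an illustration only, not Tesla's actual architecture, and every class and function name here (Detection, Track, step) is an assumption made for the example.

from dataclasses import dataclass


@dataclass
class Detection:
    """A single-frame detection from one camera, already projected
    into a shared vehicle-centric coordinate frame (x, y in meters)."""
    x: float
    y: float
    camera: str


@dataclass
class Track:
    """A persistent object hypothesis with estimated position and velocity."""
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    missed_frames: int = 0

    def predict(self, dt: float) -> None:
        # Propagate the object forward in time even if no camera sees it,
        # which is what gives objects a "permanent existence".
        self.x += self.vx * dt
        self.y += self.vy * dt

    def update(self, det: Detection, dt: float, gain: float = 0.5) -> None:
        # Blend prediction and measurement; update velocity from the residual.
        if dt > 0:
            self.vx += gain * (det.x - self.x) / dt
            self.vy += gain * (det.y - self.y) / dt
        self.x += gain * (det.x - self.x)
        self.y += gain * (det.y - self.y)
        self.missed_frames = 0


def step(tracks: list, detections: list, dt: float,
         match_radius: float = 2.0, max_missed: int = 10) -> list:
    """One fusion step: predict all tracks, greedily match detections
    from any camera to the nearest track, and spawn new tracks for the rest."""
    for t in tracks:
        t.predict(dt)
        t.missed_frames += 1

    for det in detections:
        best, best_d2 = None, match_radius ** 2
        for t in tracks:
            d2 = (t.x - det.x) ** 2 + (t.y - det.y) ** 2
            if d2 < best_d2:
                best, best_d2 = t, d2
        if best is not None:
            best.update(det, dt)
        else:
            tracks.append(Track(x=det.x, y=det.y))

    # Drop tracks that have gone unseen for too long.
    return [t for t in tracks if t.missed_frames <= max_missed]

In this toy version, calling step() once per video frame with the detections from all cameras keeps a single list of tracked objects alive across frames and across camera views, which is the behavior the paragraph above describes; a production system would of course use far more sophisticated association and filtering.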

At least in the demonstration, the new Autopilot was able to reconstruct its surroundings much better, with a clearly recognizable course of the road instead of just individual, not always reliably detected lane markings. This is likely due both to better preprocessing of the images and to the new algorithms for detecting objects and surface features.

Armen Hareyan is the founder and the Editor in Chief of Torque News. He founded TorqueNews.com in 2010, which since then has been publishing expert news and analysis about the automotive industry. He can be reached at Torque News Twitter, Facebook, Linkedin and Youtube.