
Elon Musk Warns About AI for the Future

Tesla just had its AI Day event, and it was very technical and detailed. I'm excited about where its AI tech is going, but it also reminded me of Elon Musk saying in past videos that we must be very careful as we create AI.

Elon Musk Talks About AI in 2019

This is a video from almost two years ago in which Elon Musk talks about AI. He sums it up during the video by saying that the danger of AI is much greater than the danger of nuclear warheads.

He is referring to a digital superhuman intelligence. Narrow AI (like FSD) is not a species-level risk for humans; it will cause dislocation and lost jobs, it will improve weapons, and so on.

Elon Musk mentions that if humanity decides to create a digital superhuman intelligence, we should do so very carefully.

When Elon Musk Was Asked About AI Misalignment at AI Day

At 2:59:18 of the AI Day video, Elon Musk is asked what would happen if an AI became malicious.

Elon's response was that we should be worried about AI, but that Tesla is making "Narrow AI," that is, AI focused on one specific task: in Tesla's case, driving a car better than a human, and having the Tesla humanoid robot do basic tasks.

He added that when you get to superhuman intelligence, all bets are off, and that will probably happen. But at Tesla, they're trying to make useful AI that people love and that is unequivocally good.

My Take on AI

After having watched AI day, here's what I think:

I think that in 2021 we are at a stage where Narrow AI is being developed but not perfected. In a few years, I see Narrow AI starting to surpass humans at full self-driving. Narrow AI can already beat humans at chess and other games, but it is still just picking from possibilities it was trained on and doing only what it was trained to do. I think there is little chance that a Narrow AI causes a problem for humanity.
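To make that last point concrete, here is a minimal sketch of what "picking from possibilities it was trained on" looks like. This is my own illustration with made-up moves and scores, not anything from Tesla or a real chess engine: the program can only rate and choose among the options it was built to evaluate, and it has no concept of anything outside that list.

```python
# A minimal sketch of "narrow AI": the system only ever picks from a fixed set
# of moves it was built to evaluate. Moves and scores here are hypothetical.

# The only "world" this system knows: a handful of candidate moves.
CANDIDATE_MOVES = ["advance pawn", "develop knight", "castle", "trade queens"]

def score_move(move: str) -> float:
    """Stand-in for a trained evaluation function: it can rate known moves,
    but anything outside its training gets the worst possible score."""
    learned_scores = {
        "advance pawn": 0.2,
        "develop knight": 0.6,
        "castle": 0.8,
        "trade queens": 0.4,
    }
    return learned_scores.get(move, float("-inf"))

def choose_move() -> str:
    # Narrow AI in one line: pick the best option from the possibilities
    # it was trained on. It cannot invent a new kind of action.
    return max(CANDIDATE_MOVES, key=score_move)

if __name__ == "__main__":
    print("Chosen move:", choose_move())  # prints "castle"
```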

But what happens when hardware continues to improve? What happens when that hardware can do more and more calculations and run more processes? What happens when that hardware is powerful enough to mimic a human brain? What happens when that hardware can take on multiple domains like driving, athletics, speaking, martial arts, weapons, and so on?

A digital superintelligence would have a part of it that understands driving and could drive at superhuman levels. This intelligence could also take in information about cooking, cleaning, manufacturing, politics, sports, weapons, and warfare, anything that has been done by humanity, and do it at superhuman levels. Eventually, an AI could be trained on everything humanity has ever produced.

What would happen if you trained a system with thousands of times the processing power of a human brain on all the things in the world? What would that neural net look like, and how would it interact with the world? I think this is what Elon Musk is worried about, because at that stage, would the AI start to do things we don't expect? Would this AI see all of this data about humanity and start to make decisions that are harmful to humans but that it views as beneficial overall to itself and its view of the world?

What do you think about the future of artificial intelligence? Is this something we should be worried about? Will there be a digital super intelligence?

Leave your comments below, share the article with friends and tweet it out to your followers.

Jeremy Johnson is a Tesla investor and supporter. He first invested in Tesla in 2017 after years of following Elon Musk and admiring his work ethic and intelligence. Since then, he's become a Tesla bull, covering anything about Tesla he can find, while also dabbling in other electric vehicle companies. Jeremy covers Tesla developments at Torque News. You can follow him on Twitter, Facebook, LinkedIn and Instagram to stay in touch and follow his Tesla news coverage on Torque News.