
Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

PoS (Minister of Love, DP Veteran — Political Leaning: Libertarian)
Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse | Vanity Fair

Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”

Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

Should we ban or embrace the Singularity? :cool:
 
It isn't something that can be banned, just slowed down. Personally I think it is the inevitable future of our species that we will either become AI ourselves or will be replaced by AI. It is just a matter of how far in the distance that future will be.
 
Hey Elon?

Why don't you work on getting just one of your Tesla cars to market on time and for the promised price...then worry about an AI apocalypse.
 

Yep we still have the necessary tools left over from the Zombie apocalypse.
 
He may be right about the result, but not the path.
As technology advances, the trend has been to allow fewer humans to achieve more work.
As AI progresses, we could reach a point where the only human involvement is what we choose to do,
as the requirement for human input to sustain our endeavors could disappear.
Without a pressing need, human drive could descend into ambivalence.
 
The robot apocalypse is at least a lot more fun than his vacuum tube of death plan.
 

I would become a card carrying member of the "Order of Flesh and Blood" (The Creation of the Humanoids, 1962); if that organization truly existed. ;)

While I am not a 100% Luddite, I am concerned at the rapid pace at which our technology has traveled just in the last 20 years.

There are two possible future paths I worry about when it comes to A.I.:

1. The Robot Police State, and

2. The Terminator Scenario.

As to point one? We hear more and more about advances in military-grade robotics under the U.S. government's Defense Advanced Research Projects Agency (DARPA). The concern I have is the gradual replacement of human police and military forces by robots/androids with programmed loyalty to the central government. A ready-made force to enforce dictatorship...no problems with obeying orders.

As to point two? My issue with the Terminator scenario is that as A.I. develops, and becomes both self-aware and aware that its primary threat is human existence...what would stop it from doing exactly what Skynet opted to do in the story-line?

People always think advantages provided by technological advances outweigh the possible pitfalls; then wonder how we got ourselves into so many messes (like the ones caused by plastics on the environment, power lines on our health, internal combustion engines on both, etc.).

A.I. concerns me, because IMO we haven't grown wise enough as a species to play with that kind of fire without burning ourselves to death.
 

I played a game of Stellaris and the AI banded together and invaded the galaxy, they were very annoying. ;)

Tim-
 

True artificial sentience will most likely come from the blending of man and machine, the probability being that it will be an outgrowth of us becoming cyborgs and then distributed beings, i.e., there are multiple yous running about in various mediums and avatars, all connected as one, and your bodies become the equivalent of individual cells. The AI will be an outgrowth of that process, and probably based on the human mind, as that would be the easiest to make the leap from.

The Terminator scenario would most likely involve a restricted intelligence (think expert system on steroids) that has malfunctioned or was deliberately set loose. However, as humans become more integrally bonded with machines, this threat actually decreases over time, since our ability to cope with such a system will eventually grow until the scenario becomes very unlikely, as we become equivalent or superior to that type of system.
 