
If this doesn't send a chill up your spine, nothing will.

Dittohead not!

It's Happening: Drones Will Soon Be Able to Decide Who to Kill


Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.


Once complete, these drones will represent the ultimate militarisation of AI and trigger vast legal and ethical implications for wider society.


No kidding. Science fiction is about to become science fact. Can you imagine the implications of this new technology?
 
Keep in mind, automated technology doesn't need to be absolutely perfect before America at large adopts it for commercial use; it just needs to make fewer errors than the average operator. There are a lot of finer details that will need to be accounted for before aircraft like the Predator can be fully unmanned, but there's also more than enough human error that artificial intelligence can correct for in ways the human mind simply cannot.
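
That adoption threshold is easy to state as arithmetic. Here is a toy sketch of it in Python; the error rates and trial count are invented for illustration and don't come from the thread:

```python
import random

# Toy model of the adoption threshold described above.
# Both error rates are invented for illustration -- not real data.
HUMAN_ERROR_RATE = 0.04      # assumed: the average operator errs 4% of the time
MACHINE_ERROR_RATE = 0.03    # assumed: the automation errs 3% of the time
TRIALS = 100_000

def count_errors(error_rate: float, trials: int) -> int:
    """Count how many of `trials` independent decisions go wrong."""
    return sum(random.random() < error_rate for _ in range(trials))

human_errors = count_errors(HUMAN_ERROR_RATE, TRIALS)
machine_errors = count_errors(MACHINE_ERROR_RATE, TRIALS)

# The argument above: adoption doesn't require machine_errors == 0,
# only machine_errors < human_errors over a large number of decisions.
print(f"human: {human_errors} errors, machine: {machine_errors} errors")
print("machine clears the bar" if machine_errors < human_errors
      else "machine does not clear the bar")
```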
 

And when the world at large adopts it for military use, what implications might that have for civilian populations?
 

Well, there's a comforting thought: we can have a machine deciding with accuracy who to kill, or a human making a mistake and killing the wrong people.

That makes me feel a whole lot better about war.
 

Best of all, you can assassinate at will and blame it on a software update!
 

Skynet, 1985, Apollo Shrugged, who knows? Morality needs to evolve with technology, and when it doesn't we get incidents of mass domestic surveillance like the Patriot Act or Cambridge Analytica. I can't assess what will and won't be kosher for civilian populations at large 10-15 years from now if we get aircraft that can strike nefarious targets as effectively as, or more effectively than, trained personnel. What if we got to a point where AI can distinguish nefarious activity better than trained personnel? Would you still want flawed humans to make those decisions, or trust technology that is demonstrably better at the job?
 

It would still be humans who have to point out what a nefarious activity looks like.
 

War isn't a comforting thought, and neither is being trusted with the decision whether or not to hit somebody with a Hellfire missile. I'm not advocating for the DoD to become dependent on artificial intelligence, but it isn't completely immoral to develop technology that could help increase confidence levels about what is, and what isn't, a "bad guy."
 

AIs simply process information faster. They do not think as we do or make judgments as we do. Their decisions would be no better than the information given to them by a human. This is not improving the decision-making; it is just making the decision faster.
 

How much experience do you have behind the monitor of a drone feed in a contested area? In my experience, after a certifiable bad dude gets blown up, a ton of things can happen all at once; you can only see so much, and you have so little time to call out what you want to focus on. Is it nefarious for a bunch of grown men to run for cover after a missile strikes a guy? Is it nefarious for someone to go out with a cell phone and stare at the strike location? What do you decide is something you should focus on? AI could help define what is probable activity and what should be followed up on.
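
One way to picture "help define what should be followed up on": a model scores each observed activity and only high-scoring items get queued for a human analyst. A minimal sketch, where the events, scores, and threshold are all invented for illustration:

```python
# Hypothetical triage of post-strike observations: a model scores each
# event, and only high-scoring ones are queued for a human analyst.
# Event descriptions, scores, and the threshold are all invented.
from dataclasses import dataclass

@dataclass
class Observation:
    description: str
    model_score: float  # assumed output of some classifier, in [0, 1]

REVIEW_THRESHOLD = 0.6  # assumed cutoff for human follow-up

observations = [
    Observation("men running for cover after the strike", 0.15),
    Observation("person filming the strike location", 0.35),
    Observation("vehicle leaving the site at high speed", 0.72),
]

# The human still makes the call; the model only orders the queue.
for obs in sorted(observations, key=lambda o: o.model_score, reverse=True):
    if obs.model_score >= REVIEW_THRESHOLD:
        print(f"FLAG for analyst: {obs.description} ({obs.model_score:.2f})")
    else:
        print(f"log only:        {obs.description} ({obs.model_score:.2f})")
```

Note that this only orders the queue; the threshold and the scores still encode human choices about what counts as suspicious.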
 
DARPA already has small drones imbued with a measure of AI. Individual drones in the swarm communicate with each other, decide on the best attack pattern, and can also take collective or individual evasive action. These mini-drone swarms with AI are being developed for attacking ships at sea.

One problem: both Russia and China are working on weaponizing AI. Keep up, or be hopelessly left behind and vulnerable.

Within the coming decades, battles will transpire far too fast for humans to be in the decision loop.
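
For a sense of what "decide on the best attack pattern" could mean mechanically, here is a generic greedy-auction target assignment. This is a textbook scheme, not DARPA's actual algorithm; the drone and target names and positions are invented:

```python
# A generic greedy-auction target assignment -- one textbook way a swarm
# might divide targets among itself. Not DARPA's actual algorithm; the
# coordinates and names are invented.
import math

drones = {"d1": (0.0, 0.0), "d2": (5.0, 1.0), "d3": (2.0, 4.0)}
targets = {"t1": (4.0, 0.5), "t2": (0.5, 3.5), "t3": (1.0, 0.5)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

assignments = {}
free_drones = set(drones)
for t, tpos in targets.items():
    # Each free drone "bids" its distance; the lowest bid wins the target.
    winner = min(free_drones, key=lambda d: dist(drones[d], tpos))
    assignments[t] = winner
    free_drones.remove(winner)

for t, d in assignments.items():
    print(f"target {t} -> drone {d} ({dist(drones[d], targets[t]):.2f} away)")
```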
 
THIS is the problem (dialogue from the film I, Robot):

The driver of a semi fell asleep at the wheel. Average guy... wife and kids... you know, working a double... not the devil...

The car he hit, the driver's name was Harold Lloyd. Like the film star? No relation. He was killed instantly. But his 12-year-old was in the passenger seat. Never really met her... can't forget her face, though. Sarah. This was hers. She wanted to be a dentist. The hell kind of a 12-year-old wants to be a dentist?

Yeah, um, the truck smashed our cars together and pushed us into the river. I mean, metal gets pretty pliable at those speeds. She's pinned, I'm pinned, the water's coming in. I'm a cop, so... I already know everybody's dead. Just a few more minutes before we figure it out.

An NS4 was passing by, saw the accident and jumped in the water. (Save her, save the girl! Save HER!!) But it, um... it didn't. Saved me.

"The robot's brain is a difference engine. It's reading vital signs. It must have calculated that..."

It did... I was the logical choice. It calculated that I had a 45% chance of survival. Sarah had only an 11% chance. That was somebody's baby. 11% is more than enough. A human being would have known that. Robots... nothing here... just lights and clockwork. Go ahead and trust them if you want to.
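
Part of what makes the scene land is that the robot's "difference engine" logic fits in a few lines. A sketch of that calculation, using the survival odds quoted above (everything else is invented):

```python
# The "difference engine" logic from the scene: pick whoever has the
# higher survival probability, and weigh nothing else. The probabilities
# are the ones quoted above; the rest is invented for illustration.
victims = {"Spooner": 0.45, "Sarah": 0.11}

def logical_choice(survival_odds: dict) -> str:
    """Return the person with the best chance of being saved."""
    return max(survival_odds, key=survival_odds.get)

print(f"Save: {logical_choice(victims)}")
# Prints "Save: Spooner". The objection in the post is exactly that a
# line like this has no input for "that was somebody's baby".
```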
 
AI, why not? Not like HUMINT is so good!
 

I don't think I'm ready for AI drones patrolling the skies ready to kill whoever their makers have decided should be killed.
I also don't think humans can evolve as fast as we can invent new technology.
 

No, an AI could not help define that. AIs are not intelligent in that way. For an AI to recognize something, a human has to input thousands of examples of that thing. If someone decides that reading a left-wing newspaper is nefarious activity and inputs thousands of examples of it, then that is what the AI will pick up as nefarious activity.
They are not intelligent enough to make judgments; they are intelligent enough to process information at great speed. They can spot a man reading a left-wing newspaper faster than any human could, but the decision that this is nefarious has come from a human.
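
That point, that the model inherits whatever its human labeler called nefarious, can be made concrete. A minimal sketch of a majority-vote "classifier" trained on deliberately biased labels (all activities and labels are invented):

```python
# The model only reflects its labels: if a human marks "reading a left-wing
# newspaper" as nefarious in the training data, that is what it learns.
# All activities and labels here are invented to illustrate label bias.
from collections import Counter, defaultdict

def train(examples):
    """Majority-vote lookup table: activity -> most common human label."""
    votes = defaultdict(Counter)
    for activity, label in examples:
        votes[activity][label] += 1
    return {a: c.most_common(1)[0][0] for a, c in votes.items()}

# A biased labeler tagged newspaper-reading as "nefarious" thousands of
# times; the training procedure has no way to question that judgment.
biased_training_data = (
    [("reading a left-wing newspaper", "nefarious")] * 5000
    + [("digging at a roadside at night", "nefarious")] * 5000
    + [("walking to the market", "benign")] * 5000
)

model = train(biased_training_data)
print(model["reading a left-wing newspaper"])  # -> "nefarious"
```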
 