• This is a political forum that is non-biased/non-partisan and treats every person's position on topics equally. This debate forum is not aligned to any political party. In today's politics, many ideas are split between and even within all the political parties. Often we find ourselves agreeing on one platform but some topics break our mold. We are here to discuss them in a civil political debate.

With artificial intelligence we are summoning the demon

The system relies on confidence. The only thing that gives money, or even assets, value is that people believe in their worth. If the balancing act of robbing Peter to pay Paul has too many hiccups, it all falls down, meaning a panic sell-off in the markets. It could be triggered by the growing problems of wars, disease, or political instability.

Unfortunately, we're victims of our own prosperity. We were so successful at growing the economy with credit that it became too big to sustain at the same pace, unless we start to print money on the books again.



Eject before it's too late?

I think that the market has to stop speculating.
 

That's how the uber-wealthy and market trendsetters make their profits: by leading the herd and then suddenly breaking away. They always sell when the rest of them are buying.

But I agree, getting economies and finance back to fundamentals and basics will probably be the outcome of a huge correction. Amazingly, it's mostly computers doing the trading, with algorithms and software. Imagine if some hackers (or other nations) got into a few big hedge funds' trading lines and wreaked havoc on the market.
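As a toy illustration of the kind of rule-driven trading those algorithms do, here is a minimal moving-average crossover sketch. The prices are invented and this is not any real fund's strategy:

```python
# Toy moving-average crossover: "buy" when the short-term average
# rises above the long-term average, "sell" when it falls below.
# Purely illustrative; real trading systems are far more complex.

def moving_average(prices, window):
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signals(prices, short=3, long=5):
    # Trim the short MA so both series align on the same dates.
    short_ma = moving_average(prices, short)[long - short:]
    long_ma = moving_average(prices, long)
    signals = []
    for prev_s, prev_l, cur_s, cur_l in zip(short_ma, long_ma,
                                            short_ma[1:], long_ma[1:]):
        if prev_s <= prev_l and cur_s > cur_l:
            signals.append("buy")
        elif prev_s >= prev_l and cur_s < cur_l:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals

prices = [10, 11, 12, 11, 10, 9, 10, 12, 14, 15]
print(crossover_signals(prices))
```

The point isn't the strategy; it's that a few lines of deterministic rules can generate trades with no human in the loop.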
 

I secretly hope that a hacker erases the debt of the United States.
 

There's a very bright financial guy here named JP Hochbaum who essentially believes that the US can print fiat money indefinitely, because it's merely a tool that represents wealth already in our possession. In other words, the debt is only as real as we allow it to be. We could literally wipe it down to zero by printing money or creating credit to pay off all our creditors.

I've always argued that other countries and investors wouldn't accept it and would just purchase whatever they wanted with a similar system. But the US is unique in wealth, scope, power, and influence, and we may have to reset our debt somehow or face a reckoning.
 
If you ever read the book "Robopocalypse," its opening describes precisely how this would be likely to happen.

The book starts with a man working on a robot that has a true AI. It learns to communicate through the infrared on a desktop and quickly infiltrates the entire net.

Realistically, though, it's unlikely an AI could be defeated, because with sheer processing power it could plan for literally every contingency it can find information about.

On the other hand, a true AI could wind up being benevolent.
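The "plan for every contingency" idea is basically exhaustive search. A minimal sketch, using minimax over a tiny hand-made game tree (the payoff numbers are invented):

```python
# Minimal minimax: exhaustively evaluate every branch of a game tree.
# The tree is a nested list of payoffs, a stand-in for "every contingency"
# that a machine with enough processing power could enumerate.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: a payoff value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two levels of choices: our move, then the opponent's best reply.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # → 3, the best guaranteed outcome
```

Real-world situations have branching factors far too large to enumerate fully, which is why "every contingency" is a thought experiment rather than a literal capability.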
 


Many people never consider the chance that an AI could very well conclude that violence is irrational and destructive, serving little purpose other than balancing an equation or offering a choice. Maybe, by teaching us and enhancing our own intelligence, an AI would see a mutual benefit in helping us solve many of our resource and environmental problems?
 
Honestly, I am of the opinion that AI is a cat that should not be let out of the bag. Maybe I have watched too many science fiction movies, but I don't think it's very smart to give human-like intelligence to machines that could destroy us in a heartbeat.

Listen, this:

[image: Roomba-790.jpeg]


is not going to destroy anything no matter how smart it gets.
 

Read some of my later posts. You and I both know that when AI comes to fruition, the technology will not be limited to that. It will be tied into everything. Combat, medical, production, power generation...you name it.
 

Still limited to the physical capacity granted to it by humans. Still limited by the upkeep requirements that we build it with. A "skynet" that destroys us would inherently destroy its own support infrastructure.
 

Unless it becomes intelligent enough to keep up its own infrastructure...
 
Still limited to the physical capacity granted to it by humans.
This is more or less my line of thinking too. Today's computer systems just don't possess behaviors that were not designed into them. Directly or indirectly, intentionally or unintentionally, a computer does everything it does because it was explicitly told to, not because it decided to do something all on its own. Even bugs and viruses are examples of explicitly telling it to do something, even though it might appear to have a "mind of its own" while misbehaving.

A system given moral decision-making capabilities based on Utilitarianism or Kantianism would have the same behavioral problems inherent to those philosophies, so in that way I can see how it could backfire to a certain extent. And it's true that bugs and oversights in the programming can lead to unexpected behaviors and undesirable results. But at the end of the day, computers as they are designed today don't "think" the same way humans do. They aren't good at finding patterns like humans are, and they cannot make assumptions on their own in the absence of facts. We could not make them actually have emotions or desires, we could only make them appear to have those traits through some sort of simulation (which again would have explicit instructions for how to do so).

As we're tying our entire infrastructure together with computers, I think there are other concerns about how the whole system could be brought down that are much more relevant and realistic within the foreseeable future. Computers and AI are still light years away from being able to take over as self-serving slave masters. :)
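To make the "moral decision-making could backfire" point concrete, here is a deliberately crude utilitarian chooser (all actions and welfare numbers are invented) that maximizes total welfare and so happily harms a minority:

```python
# Crude utilitarian chooser: pick the action with the highest summed
# "welfare" score across everyone affected. The actions and numbers
# are made up purely for illustration.

def best_action(actions):
    # actions: {name: list of per-person welfare changes}
    return max(actions, key=lambda name: sum(actions[name]))

actions = {
    "help everyone a little":      [1, 1, 1, 1, 1],    # total +5
    "greatly help a few, harm one": [4, 4, -1, 0, 0],  # total +7
}
print(best_action(actions))  # picks the option that harms someone
```

The rule behaves exactly as programmed; the "backfire" is baked into the philosophy it encodes, not into any bug.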
 
Unless it becomes intelligent enough to keep up its own infrastructure...

Widespread destruction would disable that infrastructure. Or just throwing the wrong circuit breaker at a power plant. Machines are easy to turn off.
 
AI would be plugged into everything, and emotionless. What if a machine came to the logical conclusion that humans were bad for the environment and therefore should be eradicated? Even with a prime directive of 'do no harm' programmed in, any thinking machine might override its software and simply 'cut the power off'. We'd be ****ed.

Can a machine override its software and prime directive? I have never heard of that! I suppose it would depend on its programming in each failsafe situation, like Old Faithful finally erupting and threatening half the country. What might the decision be?
 
This is more or less my line of thinking too. Today's computer systems just don't possess behaviors that were not designed into them. Directly or indirectly, intentionally or unintentionally, a computer does everything it does because it was explicitly told to, not because it decided to do something all on its own. Even bugs and viruses are examples of explicitly telling it to do something, even though it might appear to have a "mind of its own" while misbehaving.

A system given moral decision-making capabilities based on Utilitarianism or Kantianism would have the same behavioral problems inherent to those philosophies, so in that way I can see how it could backfire to a certain extent. And it's true that bugs and oversights in the programming can lead to unexpected behaviors and undesirable results. But at the end of the day, computers as they are designed today don't "think" the same way humans do. They aren't good at finding patterns like humans are, and they cannot make assumptions on their own in the absence of facts. We could not make them actually have emotions or desires, we could only make them appear to have those traits through some sort of simulation (which again would have explicit instructions for how to do so).

As we're tying our entire infrastructure together with computers, I think there are other concerns about how the whole system could be brought down that are much more relevant and realistic within the foreseeable future. Computers and AI are still light years away from being able to take over as self-serving slave masters. :)

Since there's no profit in giving a computer moral reasoning capabilities (if it can even be done), there's no need to fear the consequences.

And computers are great at finding patterns, given the proper programming.

I think the fear is the result of people not understanding what AI is. It does not mean "making computers think." It is mostly about giving computers the ability to perform tasks that do not have precise "solutions," mainly by giving them the tools to recognize patterns and choose the path most likely to succeed (i.e., fuzzy logic) in performing the task.
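A minimal sketch of the fuzzy-logic idea: instead of a hard true/false cutoff, a membership function returns a degree between 0 and 1 (the temperature thresholds here are arbitrary examples):

```python
# Fuzzy membership: map a temperature to a degree of "hot" between
# 0 and 1, instead of a hard hot/not-hot cutoff. Thresholds are
# arbitrary values chosen for illustration.

def fuzzy_hot(temp_c, cold=15.0, hot=30.0):
    if temp_c <= cold:
        return 0.0
    if temp_c >= hot:
        return 1.0
    # Linear ramp between the two thresholds.
    return (temp_c - cold) / (hot - cold)

for t in (10, 20, 25, 35):
    print(t, fuzzy_hot(t))
```

A fuzzy controller then combines several such degrees to pick the action with the strongest support, which is exactly the "path most likely to succeed" idea.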
 
And computers are great at finding patterns, given the proper programming.
Not always. Regex (regular expression pattern matching) is pretty good at picking out patterns in strings, like finding email addresses, etc. But it's well established in computer science that computers are not good general pattern-matching machines. Consider those CAPTCHA things, where you have to enter the letters and numbers from a scrambled, warped picture of the characters. They work because humans can find those patterns while computers can't, at least not without being coded to explicitly look for every possible angle, color, font, and size. And that isn't feasible, because there are an infinite number of combinations. No software algorithm could find every CAPTCHA pattern that humans can find.
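For the regex side of the comparison, a simplified email matcher shows how well computers do on patterns they are explicitly coded for. The pattern is deliberately loose and misses many legal addresses; the full grammar (RFC 5322) is far more complicated than any short regex:

```python
import re

# Simplified email pattern: a run of word chars/dots/plus/hyphen,
# an @, a domain label, a dot, and a run of domain characters.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

text = "Contact alice@example.com or bob.smith@mail.example.org today."
print(EMAIL.findall(text))
```

This is the contrast the post is drawing: a rigid, explicitly specified pattern is easy for a computer, while the open-ended visual pattern-finding a CAPTCHA demands is not.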
 

I see what you're saying, and you make a very good point that is extremely relevant to this thread: computers can only do what they're programmed to do, but once they're programmed to do it, they do it with great speed and accuracy.
 
Can a machine override its software and prime directive? I have never heard of that! I suppose it would depend on its programming in each failsafe situation, like Old Faithful finally erupting and threatening half the country. What might the decision be?

A human's every instinct is to survive, yet we sometimes override that programming. The "key" word here is the ability to think, which leads to choices.
 