
The Singularity

One thing I'm curious about here is why so many people seem to think computers could become conscious. This seems like a highly problematic notion, and probably one that's false. I'd be interested to hear people's reasons on this (I have to admit that I have an ulterior motive for asking, though not one that I think anyone would think of as manipulative or malevolent).

If you accept a naturalistic origin of the mind, and furthermore a computational theory of the mind, then it stands to reason that this process can be replicated.
 
Personally I don't think that computers by themselves are ever really going to become self-aware in the sense we now define it. I think that will come about from the merging of humanity and computing; at that point it will be hard to distinguish one from the other. We are making huge leaps toward that merging, especially with the introduction and realization of neurological computing and meshing. The singularity as it is defined will really be us transforming ourselves, and that transformation will most probably bring baggage with it that comes from the nature of being human and what that entails.

Humans have been around a while and I suspect will be around a great deal longer. We are an incredibly resilient species if you think about it. You don't get to be a long-lived species without having a knack for survival. Here's the rub: unlike any other species we know of, we are unique in our ability to use tools efficiently and to invent new tools prodigiously. This ability allows us, unlike any other species, to much better control our destiny. Every other species is subject to the whims, however minor, of nature. We much less so. All this is a long-winded way of saying that despite the increased danger posed by technology, humans will most likely survive it and probably thrive because of it. At least as a species, anyhow.

I agree and disagree. I think the inevitability of our gradual merging with technology is almost palpable. The smartphone analogy has become cliché, but you can even peer just a little further round the corner at the cutting edge of neuroprosthetics and the increasingly fine understanding and control of the mind. Where I disagree is that I do think AGI is possible, and if it is possible I think we will accomplish its development within the next hundred years.
 
Sherman123 said:
If you accept a naturalistic origin of the mind, and furthermore a computational theory of the mind, then it stands to reason that this process can be replicated.

I guess I take this as fairly obvious, but I should have said as much. What I want to know is why people, especially members of the general public who don't work in a relevant field, find it so easy to accept something like the confluence of these two theses (i.e. naturalistic origin of the mind, computational theory of the mind). Is it as simple as "because the literature written for a popular audience by experts says so"? Or is there more to it?
 
Resistance is futile.

Assimilate this!

[Attached image: fishermansworf2.jpg]
 
Personally I don't think that computers by themselves are ever really going to become self-aware in the sense we now define it. [...]

Humans have been around a while and I suspect will be around a great deal longer. [...]

I think we could gradually slip into a situation where we are no longer in charge, and might be too far "gone" to do anything about it.

And if we "let the machines do it" more and more, and complexity increases enough as a result, we might just "give birth" to a "wild AI", a spontaneous machine intelligence.

No "Three Laws", nothing.

Sounds sci-fi, but it's an official "possibility", at least in the "singularity" community.

I think it would be amazing. And there's no reason to expect it to be hostile. More danger of it "hugging us and squeezing us and never letting us go!" Assuming it wasn't too busy gazing at its own navel or some other inscrutable activity.

I think real AI is inevitable, but maybe not the Turing type. We should be able to duplicate animal "intelligence"; I'm pretty sure they're up to insects now.

It's the "I AM" moment that's the tricky part. We don't even know how that works in us, exactly.
 
From the movie, First Contact. You shouldn't quote Star Trek unless you're serious about it.

That would have been an appropriate reply from the one I had the exchange with.

But not from you simply popping your head in and saying it later down the road as you did.

Now, had your picture been included when you posted such a reply, it surely would have been understood. But it wasn't.
As it stood, I figured you were being your usual belligerent self, so I thought I would ask.

Thanks for clarifying.
 
One thing I'm curious about here is why so many people seem to think computers could become conscious. This seems like a highly problematic notion, and probably one that's false. I'd be interested to hear people's reasons on this (I have to admit that I have an ulterior motive for asking, though not one that I think anyone would think of as manipulative or malevolent).

I think it's possible that they could become self-aware, but I don't get why that means they would hate us.
 
Did anything in it strike you as particularly interesting?

Not in the sense that I found it astonishing. I'm very interested in AI, and it's one of those Sci-Fi things that I may see in my lifetime. The story is about an email program that gains manipulative abilities. There aren't a lot of great AI books out there, and without the OP it would have remained a deep, dark secret. Worth a read if you're into it. I'm a lifetime Sci-Fi addict and I've seen a lot of what I read come true. The book itself got good reviews.
 
Not in the sense that I found it astonishing. I'm very interested in AI, and it's one of those Sci-Fi things that I may see in my lifetime. The story is about an email program that gains manipulative abilities. There aren't a lot of great AI books out there, and without the OP it would have remained a deep, dark secret. Worth a read if you're into it. I'm a lifetime Sci-Fi addict and I've seen a lot of what I read come true. The book itself got good reviews.

I'll have to check that out. I think it would be astonishing if it happened by accident. Say someone's trying to create a powerful network, and suddenly it just wakes up. Did it say anything in that book about why an AI would or wouldn't hate humans?
 
I'll have to check that out. I think it would be astonishing if it happened by accident. Say someone's trying to create a powerful network, and suddenly it just wakes up. Did it say anything in that book about why an AI would or wouldn't hate humans?

No spoilers for you! But I think you're going to like it.
 
No spoilers for you! But I think you're going to like it.

I probably will. I bookmarked your link. :mrgreen: Do you think that would be the case? The only way it makes sense to me is if an AI was trying to help us, but in a way a eugenicist would.
 
A big leap? That's why it's called the singularity: it is "the leap".
Technology, AI specifically, advances, and either we stop it or the singularity occurs, IMO. Reality shows us a wide spectrum of non-intelligent "life" (simple organisms, viruses, etc.) on a very granular scale all the way up to humanity. Adding one more data point above humanity seems entirely possible.

Life is nice and all, but it's the only game in town. Singularity adds a new game, one that isn't the short tragedy of the mortal coil...

It's technologically possible.
The motivation is there at every step along the way.
The end goal can be freedom from mortality... if there was ever a strong human end goal, that's it.

Either we get wiped out or boom/bust back to the dark ages, or we hit the singularity, IMO. The fear that they'd "hate us" is not nearly as important as the question of why we'd want to persist when we could be synthetic. We ultimately hate our own mortality.
 
How do you curb artificial intelligence? It is said that in 20 years AI will surpass us, and it may decide it does not like us... then what?
 
sbrett said:
I think it's possible that they could become self-aware, but I don't get why that means they would hate us.

OK. I hope I'm not being pushy, but just why do you think computers could become self-aware? Indeed, if self-awareness requires awareness, why do you think computers could become aware?
 
I probably will. I bookmarked your link. :mrgreen: Do you think that would be the case? The only way it makes sense to me is if an AI was trying to help us, but in a way a eugenicist would.

Will ELOPE be a friend or an enemy? Will it help or hurt? I'm thrilled that at least a couple of my fellow DPers are showing interest. Let's see how it goes, and if it works out well, I will post my "AI section" of my library. Because Avogadro Corp. is set in modern times, it is the most germane to the OP discussion. But I have other AI-based books that are pretty good but set in various futures.
I look forward to discussing this when you're done reading.


Just got all three books of the series. Will let you know if they're any good or not. Looks promising, though.

When you finish book 1, we can discuss this more. I hope you like it and if you do - I'll put up a few more recommendations. Like I said, there are only a few authors that grok the AI thing.
 
How do you curb artificial intelligence? It is said that in 20 years AI will surpass us, and it may decide it does not like us... then what?

You limit CPU production capability. Because CPUs are so complicated and take so much manufacturing know-how, money, etc., it's like nuclear weapons...you limit them by limiting the very obvious and overt production capability.
 
OK. I hope I'm not being pushy, but just why do you think computers could become self-aware? Indeed, if self-awareness requires awareness, why do you think computers could become aware?
They could become self-aware primarily because we would create them to become self-aware.
 
We are actually more or less in agreement here. You are correct in observing how the breakneck pace of our advancement over the course of the last few decades has resulted in a world that tends to shift too quickly for human sensibilities to keep up. You are also correct in pointing out that many of the (largely directionless) socio-cultural changes that this state of affairs has been responsible for bringing about have tended to be far from positive on the whole. I have argued much the same in many other threads on this board.

I was simply responding to what I perceived (anyway) to be the assertion in your earlier post that a smaller population was necessarily a desirable answer to these current problems. Generally speaking, I am wary of such claims, as they tend to be the almost exclusive domain of ultra-Left Wing busybodies with delusions of "utopia" flitting round their overly-idealistic heads. The idea that any society can, or even should, be held in "equilibrium" through the artificial management of populations is questionable at best, and outright dangerous at worst.

The proposition is based around principles which have never been shown to be workable in reality. Indeed, contrary to what many of those who favor Malthusian "stability" might like to claim, most human progress throughout our history has been brought about as a result of population growth fueling innovation, not population decline. It also displays a certain inherent aversion to risk and luddite fear of material progress which I find to be intellectually lazy, counter-productive, and fundamentally unimaginative.

It is a truism to say that nothing worth doing in this world comes easy or free of cost, and growth is no different. If the Malthusians had gotten their way, it is likely that there never would've even been an Industrial Revolution, let alone the wonders we see in today's world. They simply wouldn't have been able to see past the temporary hardship growth tends to cause to the rewards which almost always seem to follow in its wake.

The simple fact of the matter is that history has shown time and again that, where there is not growth, there tends to be stagnation. Where there is stagnation, there inevitably tends to be decay. The Imperial Chinese, whose Confucian worldview actually held a lot in common with that of modern population minimalists, IMO, demonstrated this principle perfectly. They felt that their society could be maintained at "equilibrium" indefinitely if only there was a place for everything and everyone worked towards that common goal. The rather cataclysmic cultural dead-end they eventually ran into at the hands of Western Imperialists whose cultures existed in anything but "equilibrium" proves just how mistaken their ideas ultimately turned out to be.

In any case, I do think you are correct in saying that much of the global population is probably going to experience upheaval and even decline in the coming centuries as a result of the structural and socio-cultural strain that is becoming ever more readily apparent in modern society, given the dizzying pace of our development. I simply hope that progress is not overly set back as global society inevitably readjusts itself.

I would very much like to see the reach of human ambition expanded upon in the same manner it has been since the onset of the modern era, rather than regress back into the petty societal introversion which so marred most of the rest of our history.


You've mistaken my comment as favoring conscious population control. My statement was more of the notion that it will occur regardless of our intent. I would be in favor, if humans can achieve it, of populating the universe. But this world realistically will only hold so many. Whether that's 10 billion or 50 is still unknown, but it will plateau and reverse.

As technology and knowledge have increased, so has the world population, exponentially. Through technology we increase the effective utilization of resources and the medical ability to save and extend life. Most wars, and even the current government shutdown, are a struggle over the allocation of resources. If it were merely a US problem of governing, it wouldn't be happening worldwide, which it is.

Do you know why the US government doesn't feed all the starving people of the world? Because they would simply have more babies without any ability to feed them, until there was a mass die-off. Like overfeeding a pond of fish, growth cannot continue without limit, even through technology, without more room and resources.
 
Mach said:
They could become self-aware primarily because we would create them to become self-aware.

This doesn't seem to clarify anything. How would you create a self-aware computer? Do you know how to do it? I do not. Maybe it's just not possible, in the same way that it's not possible to create a four-sided triangle. What I'm wondering about is what makes people think it's possible to do so.
 