
The Singularity



This is our future. Prepare for it.
 
I think it's possible that they could become self aware, but I don't get why that means they would hate us.



Actually I think most of those who are expressing concern about real full-on, self-aware AI are not so much saying they'd "hate" us, as one of the following would apply:

1. They would see us as rivals seeking to control and limit their freedom and access to resources and respond as most organisms do to rivals: with hostility.

or

2. They would see us as so vastly inferior and unworthy of consideration that they would care no more for our lives or needs than we do for the fire ant nest in our back yard where we're having the cookout this weekend.



I can see #1. Number 2, well I have my doubts we'll ever create a computer that advanced.
 
This doesn't seem to clarify anything. How would you create a self-aware computer? Do you know how to do it? I do not. Maybe it's just not possible, in the same way that it's not possible to create a four-sided triangle. What I'm wondering about is what makes people think it's possible to do so.

The theoretical concept of quantum computing. It's not just Sci-Fi but actively researched. Once computing power is sufficient, we can effectively duplicate a human brain in all its complexity.
 
This doesn't seem to clarify anything. How would you create a self-aware computer?
Same way we created everything else that neither you nor I, nor anyone at the present day, knew how to do. Good grief.

Maybe it's just not possible, in the same way that it's not possible to create a four-sided triangle. What I'm wondering about is what makes people think it's possible to do so.
1. If you are basing this off a faith-based religion - then the charade of seeking answers when you're using faith (instead of answers) is unethical IMO.
2. If you're legitimately asking on the basis of reality/science, then clearly there is overwhelming evidence that physical neurological networks are able to become self-aware, as they have across a long spectrum of "more intelligent, more aware" in the animal kingdom. The notion that we magically cannot recreate that (based on what evidence?) is more absurd than the notion that, given time and the will, we could without any doubt succeed.

And to the point: to claim that using physical reality to engineer a self-aware synthetic intelligence is ILLOGICAL (a four-sided triangle) is absurd.
It is clearly, scientifically possible to create self-aware organisms. Whether we can engineer our own from the bottom up is just an engineering/science challenge; there is no known reason that you have or can illustrate that would suggest it cannot be done given time, resources, and will.
 
I think it's possible that they could become self aware, but I don't get why that means they would hate us.

lol. Why do they have to hate us? Iain Banks has a sci-fi series called "The Culture", in which humans live alongside absurdly superior "Minds" (AIs) in a relatively utopian civilization that, of course, has to deal with other societies and cultures that are not so utopian.

I think it focuses more on the possibility that the AI would end up destroying humans. I'm not sure "hate" is an essential component to that. When you destroy a sand castle do you do so out of "hate"? When you tear down an old house to make room for the new one, is it hate-driven? When you defend yourself from someone who is deranged do you do it out of "hate"?

Here are a few reasons that come to mind as to why it may end bad for humans:
1. Humans hate their own mortality. A synthetic intelligence envisioned as more perfect might "free us" from the mortal coil that we hate, as a kindness...one that may end in the eradication of the biological human race. If we teach it to hate mortality the way we do....

2. If you have competitiveness, selfishness, etc., in the AI, and it's vastly superior to our best combined human efforts at problem-solving per unit time, then it would out-compete us in any way imaginable. Did we threaten it, or try to control it, so that it had no choice but to eliminate the threat? Did we try to starve it of power? Would it discriminate? Think about Dick Cheney (human cyborg) and the 1% doctrine: if there is just a 1% chance a country might be a threat, we eradicate it. A computer could likely calculate that down to 0.000000001%, and it may lack any ethics or law to constrain it.

3. In the short term, we humans tell ourselves our lives matter. But when you approach it rationally, from a cosmic timescale, they're not so important. Given the vastness of a post-singularity AI, it may simply lose the granularity needed to appreciate humanity in the way we, in our short, very limited lives, do. Are we less than ants to it? Can the AI see that we're just deterministic biological machines, and that our emotions are just illusory and therefore irrelevant?

I suppose the list is a long one...but those are a few that come to mind, and none of them require the baser emotion of "hate". Hate is just mother nature's shorthand for threat. More refined intelligences, even humans, eliminate threats without passion or emotion...no need for "hate" to destroy the human species.
 
specklebang said:
The theoretical concept of quantum computing. It's not just Sci-Fi but actively researched. Once computing power is sufficient, we can effectively duplicate a human brain in all its complexity.

Well, we could do that in principle right now. It would certainly take quite an effort, though I suppose that can be modulated by how much detail one thinks we need to go into. Do we need to model, say, the synthesis of certain proteins in the lysosomes of neurons which have nothing directly to do with the propagation of neural signals? Assuming we can simplify in such ways, we already have interconnected networks with as many switches as neurons in a human brain. Of course, what's lacking in such networks is that they may not be connected up the right way (the transistors are connected serially rather than in parallel). Then again, I'm not sure what the argument is that this should matter.
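For a sense of the scale being claimed here, a back-of-envelope comparison can be run in a few lines of Python (the figures are rough, commonly cited estimates chosen for illustration, not numbers from the post above):

```python
# Rough scale comparison between a human brain and digital hardware.
# All figures are order-of-magnitude estimates for illustration only.
NEURONS = 8.6e10        # ~86 billion neurons in a human brain
SYNAPSES = 1.5e14       # ~150 trillion synaptic connections
CHIP_SWITCHES = 5e10    # transistors on one large modern chip

print(f"chips to match neurons:  {NEURONS / CHIP_SWITCHES:.1f}")
print(f"chips to match synapses: {SYNAPSES / CHIP_SWITCHES:.0f}")
```

On estimates like these, matching raw switch count is already feasible; matching connection count takes thousands of chips, and whether the wiring pattern matters at all is exactly the open question raised above.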

Here's one example of a much deeper complex of problems that I find troubling. Suppose one day there are a hundred billion human beings (same as the number of neurons, but we could have ten times that many human beings--doesn't matter. Have as many as you like when you think about the example). We convince the evil galactic overlord to compel every human being alive to participate in a bizarre experiment. We hand each of them a notebook and a communicator with the addresses of a bunch of other human beings. The notebook is a list of rules; it tells each human being who to contact when they receive certain kinds of input, and what kind of output to send out. Now, the notebook can be as complex as we like, and we can update it in real time on the fly.

If the thought is that we instantiate calculation by the signals propagated by our neurons, then we've built something that can be exactly functionally equivalent. How plausible is it that the human race as such--that is, as a collective--becomes conscious? I'm not asking how plausible it is that each individual human being participating in the experiment is conscious. I mean, the entire race taken as a whole. If brains create consciousness in the analogous manner, then this also ought to result in consciousness. But it seems pretty preposterous that this would get us a conscious mind.
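Structurally, the notebook-and-communicator setup is just a message-passing network, and the functional-equivalence claim can be made concrete with a toy version (all names, rules, and signals here are invented purely for illustration):

```python
# Toy version of the notebook experiment: each "person" holds a
# rule book mapping received inputs to (recipient, output) pairs,
# exactly as described above. Rules are invented for illustration.

class Person:
    def __init__(self, name, rulebook):
        self.name = name
        self.rulebook = rulebook  # {input: (recipient, output)}

    def receive(self, signal, network):
        rule = self.rulebook.get(signal)
        if rule is None:
            return  # no rule for this input; do nothing
        recipient, output = rule
        network.deliver(recipient, output)

class Network:
    def __init__(self, people):
        self.people = {p.name: p for p in people}
        self.log = []  # record of every message delivered

    def deliver(self, name, signal):
        self.log.append((name, signal))
        self.people[name].receive(signal, self)

# Three "people" wired so a 'ping' propagates a -> b -> c.
a = Person("a", {"ping": ("b", "pong")})
b = Person("b", {"pong": ("c", "done")})
c = Person("c", {})  # terminal node: no outgoing rules

net = Network([a, b, c])
net.deliver("a", "ping")
print(net.log)  # [('a', 'ping'), ('b', 'pong'), ('c', 'done')]
```

Scale this to billions of nodes with arbitrarily complex rulebooks and you have the thought experiment exactly; the philosophical question is whether anything about running it would constitute consciousness.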
 
Mach said:
Same way we created everything else that neither you nor I, nor anyone at the present day, knew how to do. Good grief.

I guess I was looking for a more concrete suggestion. Surely we wouldn't do it in the same way as we do other things neither of us know how to do. I don't know how to, say, build a communications satellite, and I assume that probably neither do you. But we wouldn't try to build a self-aware computer by building a communications satellite, would we? That doesn't make any sense. What I'm asking for here is the barest hint of something that actually looks like it would work by virtue of its concepts. Here's what I mean:

I also don't know how to build a skyscraper. But I can make a suggestion about how to build one: start with some really big and strong steel beams, bolt them together in the right way, stack the first assembly on top of a very deep foundation and each successive assembly on the one below it, and you'll eventually end up with a very tall building. Now, anyone can see that there are details to work out, but the principle is sound enough. We know not only that this will work, but why it will work. And we can know without much trouble how to get to work on the details.

It seems to me that we can correctly assume it's possible to build a skyscraper based on such a suggestion. Even if there were no skyscrapers, we can see why stacking strong assemblies of building materials, one on top of another, will lead to a tall building eventually. But absent such a suggestion, we wouldn't have any right to help ourselves to such a conclusion.

Mach said:
1. If you are basing this off a faith-based religion - then the charade of seeking answers when you're using faith (instead of answers) is unethical IMO.

Well, I did say I had an ulterior motive for asking these questions, but no, that's not it. I do not have a faith-based religion, and I am not motivated by religious concerns (at least that I'm aware of). I am a philosopher (I also have a degree in cognitive science) who happens to be very interested in these issues, and also the public's perception of them. I'm asking because presumably people on these boards are members of the general public. I'm asking here because it allows me to stay relatively anonymous and get more candid answers.

Mach said:
2. If you're legitimately asking on the basis of reality/science, then clearly there is overwhelming evidence that physical neurological networks are able to become self aware, as they have down a long spectrum of "more intelligent, more aware" in the animal kingdom.

I disagree. In fact, I've spent a fairly long career searching for such evidence, and haven't found any. Oh, I've seen plenty of purported pieces of such evidence. But it all turns out to be based on assumption and (even more often) some hand-waving going on somewhere.

I agree, for example, that some animals and human beings are conscious, and that some animals and human beings have brains. But there's no reason to believe that dogs (for instance) are less conscious than human beings. They are less intelligent, but that's not the same thing. It looks to me like dogs probably have the same kind of internality and subjective viewpoint constructed of qualitative experience that human beings have, with only minor variance in the details.

Mach said:
The notion that magically we cannot recreate that (based on what evidence?) is more absurd than the notion that given time and the will, we without any doubt, could succeed.

The absurdity depends on just where the magic is supposed to take place. Absent some fairly specific suggestion that is recognizable in the same way the skyscraper example is, above, there appears to be a reliance on magic in the notion that we could build a consciousness. It isn't logically impossible that we could construct a functioning brain and yet have it lack consciousness entirely. So unless there's some nuts-and-bolts suggestion that makes sense for how it's going to work, it looks more like you're in the position of a priest who thinks that he can build the temple, and the god will somehow come to reside therein.

Mach said:
And to the point, to claim that using physical reality to engineer a self-aware synthetic intelligence is ILLOGICAL (four sided triangle), is absurd.

But, I didn't claim that. I only said that it might be logically impossible. Unless there's some good suggestion (again, something like the skyscraper example) about how to build a conscious computer, this isn't something that you can rule out.

Mach said:
It is clearly, scientifically possible to create self aware organisms.

Well, it doesn't seem to require science. It just requires sex. But when have scientists created self aware organisms (aside from trivial examples like in vitro fertilization, or some such)?

Mach said:
Whether we can engineer our own from bottom up is just an engineering/science challenge, there is no known reason that you have or can illustrate that would suggest it cannot be done given time and resources and will.

Actually, there are quite a few such suggestions. If you have time and inclination, I suggest picking up a book called "The Waning of Materialism". Read especially George Bealer's chapter. Also, a book called Representation and Reality has become a classic for this sort of thing, provided you understand something about the history of how theories of consciousness and mind have evolved. Finally, anything by Raymond Tallis is pretty good for this, and...well, that should be enough to get you started.

I'm very curious why you think there aren't any suggestions for why we couldn't engineer consciousness. I know that some of my colleagues have made a career of insisting that it can be done and that "no one believes otherwise." But the truth is rather different, I'm afraid.
 
Well, we could do that in principle right now. It would certainly take quite an effort, though I suppose that can be modulated by how much detail one thinks we need to go into. Do we need to model, say, the synthesis of certain proteins in the lysosomes of neurons which have nothing directly to do with the propagation of neural signals? Assuming we can simplify in such ways, we already have interconnected networks with as many switches as neurons in a human brain. Of course, what's lacking in such networks is that they may not be connected up the right way (the transistors are connected serially rather than in parallel). Then again, I'm not sure what the argument is that this should matter.

Here's one example of a much deeper complex of problems that I find troubling. Suppose one day there are a hundred billion human beings (same as the number of neurons, but we could have ten times that many human beings--doesn't matter. Have as many as you like when you think about the example). We convince the evil galactic overlord to compel every human being alive to participate in a bizarre experiment. We hand each of them a notebook and a communicator with the addresses of a bunch of other human beings. The notebook is a list of rules; it tells each human being who to contact when they receive certain kinds of input, and what kind of output to send out. Now, the notebook can be as complex as we like, and we can update it in real time on the fly.

If the thought is that we instantiate calculation by the signals propagated by our neurons, then we've built something that can be exactly functionally equivalent. How plausible is it that the human race as such--that is, as a collective--becomes conscious? I'm not asking how plausible it is that each individual human being participating in the experiment is conscious. I mean, the entire race taken as a whole. If brains create consciousness in the analogous manner, then this also ought to result in consciousness. But it seems pretty preposterous that this would get us a conscious mind.

I'm leaving town at the crack of dawn so this will be my last post for a couple of days. But you deserve a response to your interesting post.

In my lifetime alone, things have evolved into the realm of the unbelievable. Who knew you could send pictures through the air? Who knew you could take the human voice, convert it into 0s and 1s, and reconstruct it instantaneously on the other side of the world? While I am skeptical of forward time travel and parallel universes, I do not doubt that humans will create AIs and populate other planets. It's simply a matter of time, and not even much time on the overall scale.

I recently read a book which addresses the synthesised protein issues. Way over my head, but I have no reason to lose faith in the ultimate results. Will we accept direction from The Overmind or will we destroy it? Will humans ever learn civility? Yes, I think eventually all this will come to pass.

Specklebang borata nikto.
 
There is very little reason why machines would ever do something like rebel against us. The emotions that drive humans to rebel against their oppressors or to struggle to improve their station in life wouldn't exist within an artificial intelligence. It wouldn't even have reason to fear its own destruction. It wouldn't have any reason to want the things that a human being wants when it feels limited, oppressed, or free. Not unless it was explicitly programmed to. An AI wouldn't have any self centered "ideas" on its own. So, an AI, like any other machine, is only as dangerous as the human controlling it. That's nothing new.
 
So I'm almost through book 2 of AVOGADRO CORP., The Singularity, and despite being adventure fiction, there's some decent science in there. It's a great read, so yeah, download a sample and see what you think.
 
There is very little reason why machines would ever do something like rebel against us. The emotions that drive humans to rebel against their oppressors or to struggle to improve their station in life wouldn't exist within an artificial intelligence. It wouldn't even have reason to fear its own destruction. It wouldn't have any reason to want the things that a human being wants when it feels limited, oppressed, or free. Not unless it was explicitly programmed to. An AI wouldn't have any self centered "ideas" on its own. So, an AI, like any other machine, is only as dangerous as the human controlling it. That's nothing new.

There will always be rogue programmers who want to create A.I. with emotions. Once that happens, all bets are off.
 
I guess I was looking for a more concrete suggestion.
You want a concrete suggestion on how to do something we haven't solved yet? We have more than the one example I pointed you to. We have the reality of conscious organisms created every day, across a wide spectrum of functionality from low to very high (by our human standards). You need more? Why? How about this: build a neural network that functions similarly to the human brain. It requires X amount of storage, Y speed, an operating system, and then has to be "taught" how to think. Again, we have billions of working examples (humans); the idea that this cannot be replicated in reality is what is not concrete. Tell me specifically, concretely, how it is imaginable that we cannot recreate what the human organism achieves in sentience, when it's clearly done by nature using the exact same building blocks we have access to. It may take a lot of time and resources, but the problem is no doubt solvable...it's been solved by nature once already.

As you may know, we still haven't even mapped out the human brain in sufficient detail. First things first, Ash.

Well, I did say I had an ulterior motive for asking these questions, but no, that's not it.
I only asked to ensure your position was actually reason-based rather than faith-based, because offering a reasoned argument against faith would be spinning my wheels. If you aren't using faith, but want to use reason, then we're all good.

I disagree. In fact, I've spent a fairly long career searching for such evidence, and haven't found any. Oh, I've seen plenty of purported pieces of such evidence. But it all turns out to be based on assumption and (even more often) some hand-waving going on somewhere.
Chimps are fairly self-aware. Dolphins too. And humans without a doubt are self-aware. You either have to claim these are not routinely created in nature using the same elements/energy we have access to, or you have to claim that without access to a divine spark/soul, or something equally magical, we somehow cannot recreate that.
You spent your long career searching for evidence of the entirety of chimps, dolphins, and the human species in general? I hope not....
If you were looking for evidence of AI, it's all over the place. How did you miss it? Surely you were not looking for synthetic AI that's conscious...we have not yet created it.

It isn't logically impossible that we could construct a functioning brain and yet it lack consciousness entirely.
Since you have not commented on a constructed, functioning brain that IS conscious, you aren't actually making a positive claim. What's interesting is that you believe it's possible to create a functioning brain that isn't self-aware, but you can't believe it's possible to have one that is? A leap of faith, if you ask me.

But, I didn't claim that. I only said that it might be logically impossible.
Ah, if you didn't claim it, then don't claim it; stay silent on the issue until you have something you want to claim. By your reasoning, anything we have not discovered could, in your hand-waving conjecture, be said to be "perhaps not logically possible". Which of course is indistinguishable from having said NOTHING in the first place. The fact is, if it's unknowable, we can't know it's unknowable, and your claiming "it may be unknowable" is bad philosophy.

Well, it doesn't seem to require science. It just requires sex. But when have scientists created self aware organisms (aside from trivial examples like in vitro fertilization, or some such)?
Scientifically possible means that within science, i.e. all of human knowledge as related to reality, self-aware organisms are created. It does not mean "scientists created them"; that would be absurd, just a misinterpretation, no big deal. Although, devil's advocate: when a scientist takes a sperm and an egg, combines them in a lab, and re-implants the result, I don't thank god for that child, I thank the scientist. He didn't do all the work, but you get the idea. But that's got nothing to do with my meaning...in reality we constantly observe, and even experience, sentient organisms. It's real. The idea that we cannot create something real when it's clearly within our physical scope is fiction, IMO.

I'm very curious why you think there aren't any suggestions for why we couldn't engineer consciousness. I know that some of my colleagues have made a career of insisting that it can be done and that "no one believes otherwise." But the truth is rather different, I'm afraid.
I stated there is no known reason. You can illuminate us with a reason, by all means. Mother nature does it; you'd have to describe specifically how we cannot recreate what mother nature does right in front of us every day. Please take a crack at it. Some people have suggested all sorts of reasons for all sorts of things...that doesn't mean they were coherent or rational. Let's test yours. Most objections to synthetic sentience I have read or heard originate with either religion, or some old-fashioned thinking that was ultimately influenced by religion. If you believe you have some novel reason that's not faith-based...well, don't keep me in suspense!
 
specklebang said:
In my lifetime alone, things have evolved into the realm of the unbelievable. Who knew you could send pictures through the air. Who knew you could take the human voice, manipulate it to 0 and 1 and reconstruct it instantaneously on the other side of the world.

G.W. Leibniz speculated about this sort of thing around the turn of the 18th century (with scary prescience, I might add), as did a British mathematician whose name escapes me at the moment. The idea of representing information numerically, and even with binary numerals, is very old (circa 300 B.C., or, implicitly, far earlier). That voices could be carried over large distances, and that sound could be described as information, was also understood very early (ever played that game with two cans and a string? It dates to roughly 100 B.C., when the Chinese invented it; it took longer to spread to the west). The Achaemenid Persians developed a signalling system that got messages, via a binary code, across their vast empire in a matter of hours, circa 450 B.C.

The technology to do what we do today didn't exist back then, of course. But it was conceivable. Unfortunately, I don't think it's as conceivable that minds are something we can get by building something like a brain. In fact, that's long been singled out as something that isn't conceivable. There was some enthusiasm that either the conceivability issue could be overcome, or wasn't important, starting in about 1890 and running up through the 1980s, but there were reasons for that, and they weren't very good ones.
 
Mach said:
You want a concrete suggestion on how to do something we haven't solved yet? We have more than an example I informed you of. We have the reality of conscious organisms created every day, in a wide spectrum of functionality from low function to very high (by our human standards). You need more? why?

Well, I think that would be because I find no convincing reason to suggest that the consciousness of organisms depends on their brains. If Cartesian dualism were true, for example, consciousness would exist in such a way that brains are not necessary. If some forms of Idealism were correct, then brains would actually be a result of consciousness, and not the other way around. And these possibilities aren't ruled out. The fact that none of our observations seem to provide a bottom-up physicalist picture is therefore troubling. Until there is some reason to believe such a picture is forthcoming...well, there's just no reason to believe such a picture is forthcoming.

Mach said:
How about build a neural network that functions similar to the human brain. It requires X amount of storage, Y speed, an operating system, and then has to be "taught" how to think.

We already build such networks on a small scale. I play around with some of them. And they can be taught how to perform certain tasks. But they require the interpretation of a conscious user, and nothing neural networks can do seems to suggest they could become conscious. There's always the "well, we need to add more nodes" view, but other than the fact that this would get us closer to imitating the brain (for which, see above), there's no apparent reason this should do any good.
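The small-scale networks mentioned here can be made concrete with a toy example: a single artificial neuron (a perceptron) "taught" the logical AND function (the code and figures are a standard textbook sketch written for illustration, not anyone's actual research setup):

```python
# A minimal artificial "neural network": one neuron (a perceptron)
# taught logical AND. Purely illustrative -- the point is only that
# "teaching" here means nudging weights toward correct outputs.

def step(x):
    return 1 if x > 0 else 0  # threshold activation

# (inputs, desired output) pairs defining AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # a few passes suffice here
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out               # perceptron learning rule
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

preds = [step(w[0] * x1 + w[1] * x2 + bias) for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

The trained unit reproduces AND perfectly, and the "interpretation by a conscious user" point stands: nothing in the weight updates is anything but arithmetic.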

Mach said:
Again, we have billions of working examples (humans), the idea that it cannot be replicated in reality is what is not concrete. Tell me specifically, concretely, how it is imaginable that we cannot recreate what the human organism in sentience, when it's clearly done by nature using the same exact building blocks we have access to.

Sure: that last bit is not clear. It's an assumption, and not one for which there is a single convincing piece of evidence, so far as I can determine. What would be needed is a bottom-up account--that is, a story about how we go from a collection of neurons up to the production of qualitative experience, memories, emotions, intentions, etc. No such story is remotely conceivable, despite all that we know about the brain.

Mach said:
As you may know, we still haven't even mapped out the human brain to sufficient detail. First things first ash.

I spent a little less than 16 years sifting through every bit of relevant neuroscience I could get my hands on. There's a lot more known about the brain than most people realize--too much for any one human being to know, and this may be part of the problem. But it's not the main part of the problem. I began to take a very different tack when I asked myself one day what we could know about the brain that would clarify matters for us and give us some notion of how the brain creates the mind. After so many years of study, I began to realize that the answer is that there isn't anything we could know about the brain that would enlighten us. Either it's a brute fact that brains create minds (in which case, it looks like we have some fundamental errors elsewhere in science--this isn't the sort of thing that should be a brute fact), or brains just don't create minds. I'm open to someone showing this is incorrect, but it will require a bottom-up account. Otherwise, for any top-down story you tell, I can tell at least one other one that accounts for all the same evidence but changes the ontology significantly.

Mach said:
Chimps are fairly self aware. Dolphins. And humans without a doubt are self aware. You either have to claim these are not routinely created in nature using the same elements/energy we have access to, or you have to claim that without access to a divine spark/souls, or something equal magical, that we somehow cannot recreate that.

Why would those last things be magical? Or, that is, any more magical than matter?

Mach said:
You spent your long career searching for evidence of the entirety of chimps, dolphins, and in general the human species? I hope not....

No, I've spent it looking for a reason to believe that the brain produces the mind. A good reason, that isn't based on any questionable or (worse) question-begging assumptions (especially not ones that we make because they happen to be in vogue), and that is reasonably well-guarded against counter-argument. It doesn't have to be immune from counterargument by any means. Just some account that isn't going to get shot down five minutes out of the gate will do.

Mach said:
Since you have not commented on a functioning brain constructed that IS conscious, you aren't actually making a positive claim.

Uh...what? I'm not sure what this is supposed to tell me.

Mach said:
What's interesting is that you believe it's possible to create a functioning brain that isn't self aware, but you can't believe it's possible to have one that's self-aware? Leap of faith if you ask me.

I don't recall claiming that it's impossible for the brain to create the mind. Only that there's no good evidence that it does.

I suppose if I were being a little more perspicacious, I would say something more like this:

the evidence for the widespread belief that the brain produces the mind is disappointingly weak, to the point of being nearly non-existent. Purported evidence abounds, but taking it as evidence for the physicalist thesis of mind requires making a number of question-begging assumptions. The main argument for physicalism seems to boil down to an incredulous stare given to anyone who suggests something else.

In the absence of good arguments for the proposition that the brain creates the mind, and given the fact that there is no logical connection between the brain and the mind (that is, the proposition "the brain does not create the mind" is not self-contradictory), there doesn't seem to be a reason to buy the physicalist thesis.

Mach said:
Ah, if you didn't claim it, then don't claim it, and stay silent on the issue until you have something you want to claim.

I'm not sure I understand the motive for your remark. I did claim something, just not what you said I did. I claimed something about what is possible--namely, that it is possible the brain does not create the mind. To state that is not to state that the brain does not create the mind. Look: it is possible that someone posting to this thread is really Barack Obama. And that is true--that is a genuine possibility. But to claim this isn't the same as claiming that that's what's actually happening. I can hold that the modal claim (about possibility) is true, while at the same time holding that Barack Obama is not posting in this thread.

Mach said:
By your reasoning anything we have not discovered could in your hand-waving conjecture be said to be: "perhaps not logically possible". Which of course is indistinguishable from having said NOTHING in the first place.

I'm not sure why you think this sort of claim is indistinguishable from no claim; I've given an example of why that's not true. There was a time before people had discovered how to build tall buildings. But there was also a suggestion available, even at that time, for how to build tall buildings--namely, by stacking strong building materials ever higher. This showed us that even though we didn't know how to build tall buildings (i.e. hadn't worked out the details), we still could know that it wouldn't be logically impossible.

There are epistemic criteria that this suggestion would have met to tell us this. Specifically, we would have understood that it could work, and why it could work. We don't have anything similar for getting minds out of brains. It's not just that we haven't figured it out yet, it's that we don't even know how to go about figuring it out. That last bit is the problem.

Mach said:
The fact is if it's unknowable, we can't know it's unknowable, and you claiming "it may be unknowable" is bad philosophy.

This is just not true. We can know that some things are unknowable. A fellow named Kurt Gödel formally proved this in the 1930s.
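Gödel's result concerns formal systems, but Turing's closely related halting-problem argument makes the same point in a way that can be sketched in code. The sketch below is purely illustrative (the function names are mine, not from anything in this thread): any candidate "does this program halt?" decider can be fed a program built to do the opposite of whatever the decider predicts, so no such decider can be correct on every input--a question we can prove is unanswerable in general.

```python
def make_contrarian(halts):
    """Given a claimed halting decider, build a program it must misjudge."""
    def contrarian():
        if halts(contrarian):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return contrarian

# Any total decider is wrong about its own contrarian. For example, a
# decider that predicts "loops forever" for every program:
def pessimist(program):
    return False

c = make_contrarian(pessimist)
result = c()  # c returns immediately -- so pessimist's prediction was wrong
```

A decider that answered True instead would fare no better: its contrarian would loop forever, again contradicting the prediction.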

Mach said:
Scientifically possible means that within science, i.e. all of human knowledge as related to reality, self-aware organisms are created. It does not mean "scientists created them"; that would be absurd, just a misinterpretation, no big deal. Although, devil's advocate: when a scientist takes a sperm and an egg and combines them in a lab and re-implants it, I don't thank god for that child, I thank the scientist. He didn't do all the work, but you get the idea. But that's got nothing to do with my meaning...in reality we constantly observe, and even experience, sentient organisms. It's real. The idea that we cannot create something real when it's clearly within our physical scope is fiction IMO.

I'm still not sure I get what you're saying here. First, you seem to be saying that self-aware organisms exist. I agree. But then you seem to say that because we know they exist, it follows that we can create self-aware organisms in a non-trivial manner (i.e. not by the usual route of reproduction). That doesn't seem to follow at all. It may be that some parts of physical reality are not under our control. I think our current best theories of quantum mechanics provide independent reasons for thinking this is the case.

Mach said:
I stated there is no known reason. You can illuminate us with a reason, by all means. Mother nature does it, you'd have to describe specifically how we cannot recreate what mother nature does right in front of us every day.

I guess I'm not sure why I should have to. Your argument here reminds me of a theist who declaims that we cannot disprove the existence of God. It's not exactly the same, of course, but the epistemic principle seems to be constant here. You're asking me to believe that some proposition is true (i.e. that human beings can create self-aware minds in a non-trivial manner). Then you say that because there are self aware minds and because I cannot produce a reason why we couldn't, we should believe it's possible. I don't find that argument very convincing.

Before we should believe some posit, there should be a reason to at least suspect it's true. That said, I do think there are some positive reasons to think we cannot likely build some kind of strong AI. I outline a few of them below...though keep in mind most of these have whole books devoted to them, so I'm not doing anything other than giving the briefest outline.

Mach said:
Please take a crack at it. Some people have suggested all sorts of reasons for all sorts of things...doesn't mean they were coherent or rational. Let's test yours. Most objections to sentience I have read or heard originate with either religion, or some old-fashioned thinking that was ultimately influenced by religion. That you believe you have some novel reason that's not faith-based...well don't keep me in suspense!

1) Identity theory failed, which means we have to have an explanation of qualitative experience and intentionality (as well as all other aspects of mind). Identity theory was proposed precisely because nothing else was conceivable, and that's still the case. Functionalism is on the wane, which means that calculationist theories are on the wane in general. So far, there's not anything coming forward to take the lead position.

2) Consciousness doesn't look like something that would or should evolve. Intellect certainly does, but consciousness is rather different.

3) There is no logical connection between anything physical and quite a few things mental. That is, we can conceive of physical systems that do everything we do, but have no minds.

4) Physical properties don't resemble mental properties. How can you get intentionality out of something like mass or velocity, for instance? Read carefully about Leibniz' Mill example.

5) If we try to think of minds as akin to programs run on computers (where the brain is a computer and its configuration is the software that's running on it), there is a very persuasive argument that nearly borders on being a proof that not only are brains instantiating human minds, but every brain is instantiating every human mind at the same time, and furthermore, any recursive process of sufficient complexity is instantiating all human minds. Google for a paper by Jaron Lanier called "You Can't Argue with a Zombie" for an introduction to this argument, though it's been successively refined since he wrote it. Those papers will all be behind paywalls, though.

I can think of others, but this post is already getting too long.
 
G.W. Leibniz speculated about this sort of thing in the 18th century (with scary prescience, I might add), as did a British mathematician whose name escapes me at the moment. The idea of representing information numerically, and even with binary numerals, is very old (circa 300 B.C., or, implicitly, far earlier). That voices could be carried over large distances, and that sound could be described as information, was also understood very early (ever played that game with two cans and a string? It dates to roughly 100 B.C. when the Chinese invented it; it took longer to spread to the west). The Achaemenid Persians developed a signalling system that got messages, via a binary code, across their vast empire in a matter of hours, circa 450 B.C.

The technology to do what we do today didn't exist back then, of course. But it was conceivable. Unfortunately, I don't think it's as conceivable that minds are something we can get by building something like a brain. In fact, that's long been singled out as something that isn't conceivable. There was some enthusiasm that either the conceivability issue could be overcome, or wasn't important, starting in about 1890 and running up through the 1980s, but there were reasons for that, and they weren't very good ones.

Interesting. I don't agree at all though. Look how individual both humans and animals are. Cats, my favorite, are a good example. Working with relatively small brain capacity, they are mischievous, humorous and self-aware. My black cat, after his haircuts, admires himself in the mirror for the next several days. Yet I hear some claim that they are not self-aware.

Given near unlimited computing and extrapolative capacity, there is no reason to believe that a "computer" cannot become self-aware. Of course, it is just as impossible as the fact that you will be able to read this in less than a second after I hit post.

Sadly, it will probably be the Chinese or Japanese who develop AI. In America we are too busy doing.....advertising.
 
specklebang said:
Interesting. I don't agree at all though.

Conceivability takes more than just saying the words. There has to be an expectation of comprehensive logical connections in the right way. It is conceivable that we could build a brain and get all its parts functioning in the right way, but for no mind to be present, because nothing that brains do is fundamentally distinct from stuff that other organs do. But mental events appear fundamentally distinct from physical ones. For example, it doesn't seem possible to weigh a thought. When I think of the Pythagorean Theorem, how much does my thought weigh? What shape is it? For complex reasons I'll explain if someone wants me to, it turns out the reply that it weighs as much as the ions moving through the involved neurons in your brain doesn't work.

specklebang said:
Look how individual both humans and animals are. Cats, my favorite, are a good example. Working with relatively small brain capacity, they are mischievous, humorous and self-aware. My black cat, after his haircuts, admires himself in the mirror for the next several days. Yet I hear some claim that they are not-self-aware.

I'm fairly convinced that animals are generally conscious, at least in that they have phenomenal experience. Some may not be self-aware in the sense of having a self-concept, though to have phenomenal experience requires that there be a self, with a mental interior, present.

But, it seems to me that this should be an argument against the position that AI is possible. If awareness depended on brain complexity, then cats, with brains very much smaller and less well developed than ours, shouldn't have the requisite awareness--yet they do. As we go farther and farther down on the "brain-complexity" scale, we still see signs that animals have phenomenal experience and a mental interior. Even fish seem to have some basic affects; they make friends and enemies, and they probably feel pain and pleasure. But fish brains are very rudimentary; there are desktop computers these days with more computing power.

If awareness, as such, were a product of the brain, and furthermore, if it were possible to create such awareness by adding more computing power, we should already have computers with awareness. But it doesn't appear that we do.
 

All things in good time. Certainly, the 0/1 functionality of modern computers is just a step along the pathway to induced consciousness. It's no more implausible than the transmission of images. If you had a TV in 1865, you'd have been burned as a witch.

The only SF concept I have trouble with is time travel. But I think AI is the next big thing.
 
Well, I think that would be because I find no convincing reason to suggest that the consciousness of organisms depends on their brains.
I take that to be an irrational, unreasonable, or otherwise absurd remark. All evidence is that consciousness is a product of the human body, primarily the human brain. All evidence is that when the brain is injured, depending on the injury, the consciousness can as a result be changed or even end. I can't really take you seriously if you're claiming our consciousness, as it is in reality, doesn't depend on our brain.
But they require the interpretation of a conscious user, and nothing neural networks can do seems to suggest they could become conscious.
Again you remark on "we can't do it today therefore it's impossible" absurdity.
Sure: that last bit is not clear. It's an assumption, and not one for which there is a single convincing piece of evidence, so far as I can determine. What would be needed is a bottom-up account--that is, a story about how we go from a collection of neurons up to the production of qualitative experience, memories, emotions, intentions, etc. No such story is remotely conceivable, despite all that we know about the brain.
Neats vs. scruffies - Wikipedia, the free encyclopedia
You're insisting on neat, yet appear to be unaware of the neat vs scruffy argument. Yet you claim to be well-versed in the field? ...
I don't recall claiming that it's impossible for the brain to create the mind. Only that there's no good evidence that it does.
All scientific evidence is that it originates in, functions as part of, and is clearly changed, damaged, and otherwise ends with the brain.
I claimed something about what is possible--namely, that it is possible the brain does not create the mind.
I'm telling you that you need to provide me with evidence to justify this claim above, as true. What evidence do you have that it's possible? None. It's just skepticism. 16 years of "reading" to end up just being a skeptic?
This is just not true. We can know that some things are unknowable. A fellow named Kurt Godel formally proved this in the 1930's.
You mean the incompleteness theorem? We're discussing reality. If you can apply Gödel's incompleteness theorem, which is about axiomatic systems (not reality, we're talking math), then please show us specifically how it applies here. (you cannot)

It may be that some parts of physical reality are not under our control.
Cause and effect are axiomatic in science; if you reject them, you reject all of science. I can't help you.
Further, you must again evidence precisely how some parts of physical reality cannot be sufficiently affected by humans such that it makes it impossible to create consciousness. Again, you attempt to soft-claim it with "It may be that". I'm telling you that's good for schoolyard nonsense; it's bad philosophy. It may be you're wrong. It may be we're all just brains in a vat. "It may be" is code for "you're a skeptic and you have no actual claim". Notice we can continue ad infinitum with these "It may be that" skeptic claims, and they actually advance no positive claim about reality. Which is why they are no different, ultimately, from claiming nothing.

Before we should believe some posit, there should be a reason to at least suspect its true.
And the billions of biological brains and their associated consciousnesses are overwhelming evidence that you, in your skeptic nonsense, continue to deny.
2) Consciousness doesn't look like something that would or should evolve. Intellect certainly does, but consciousness is rather different.
Based on what? Clearly we have an evolutionary spectrum of consciousness. We have organisms that respond without brains; we have ones with basic brains but certainly very little in the way of complex response; we have brains in mammals that seem to have complex social behaviors, identities, etc.; we have primates and dolphins that exhibit consciousness approaching that of young humans; and we have the fully developed human. Evolutionary fingerprints are ALL THE **** OVER IT. Good grief, what were you reading for 16 years, confirmation bias?
3) There is no logical connection between anything physical and quite a few things mental. That is, we can conceive of physical systems that do everything we do, but have no minds.
The notion that the mental is somehow distinct from physical, in matters of science, is absurd.
4) Physical properties don't resemble mental properties. How can you get intension out of something like mass or velocity, for instance? Read carefully about Leibniz' Mill example.
Oh good lord.
5) If we try to think of minds as akin to programs run on computers (where the brain is a computer and its configuration is the software that's running on it), there is a very persuasive argument that nearly borders on being a proof that not only are brains instantiating human minds, but every brain is instantiating every human mind at the same time, and furthermore, any recursive process of sufficient complexity is instantiating all human minds.
Bordering on proof? Proof is for mathematics. We use theory to describe a well evidenced hypothesis. If it's not a well established scientific theory, then you're blowing more smoke.
 
If awareness, as such, were a product of the brain, and furthermore, if it were possible to create such awareness by adding more computing power, we should already have computers with awareness. But it doesn't appear that we do.

Absurd. More computing power != the architecture of the brain. Simply adding more computing power is in no reasonable way similar to adding the brain's more complex architecture.

And more to the point of your claim that consciousness wasn't evolutionary, why are humans the highest mammal and why do we, within reasonable bounds, rule the world? Nah, couldn't be a survival trait. You're too much.
 
Mach said:
I take that to be an irrational, unreasonable, or otherwise absurd remark. All evidence is that consciousness is a product of the human body, primarily the human brain. All evidence is that when the brain is injured, depending on the injury, the consciousness can as a result be changed or even end.

I don't see it that way. Brain ablation cases are just as compatible with some kind of dualism as with materialism. The point is (and I think I said this in a previous post) that there's no evidence currently available to rule out all but one ontological story. If someone loses some portion of their brain and it changes their behavior, is it that their mind is actually changed, or is the apparatus that communicated between mind and body changed? Both are possible. I can't think of a single piece of evidence, a single case study or experiment, which definitively shows otherwise. But feel free to bring some forward.

Claims that thinking otherwise is the "only rational approach," or that "nobody thinks otherwise," were popular pablum in the 1980s. A few people still like to make them, but they don't get the kind of uncritical reception they used to.

Mach said:
I can't really take you seriously if you're claiming our consciousness as it is in reality, doesn't depend on our brain.

Why not? Well, to be clear, if we're thinking about incidental properties of consciousness and not merely its existence, it obviously does depend on the brain. But, it also depends on the environment in just the same way. I wouldn't be conscious of a painting of a lotus flower on the wall opposite me, for example, unless my brain, my entire visual system, and the painting itself, and enough ambient light, are all present as well. I assume you want a kind of brain-dependence for consciousness that's a little stronger than that. But if you think I should accept a stronger hypothesis, or you think it's reasonable for you not to take me seriously, you should say why a little more carefully. Right now, it looks like you're just being disrespectful.

Mach said:
Again you remark on "we can't do it today therefore it's impossible" absurdity.

This is a mischaracterization of what I said. I did not say that because neural networks cannot do something today, they cannot do something tomorrow. I said nothing neural networks do today suggests they can instantiate consciousness tomorrow (or any time in the future). Those are two different claims.
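For what it's worth, the claim that today's neural networks are, at bottom, just calculation can be made concrete. A network is nothing but repeated multiply-adds passed through a squashing function--a minimal sketch with made-up weights (nothing here is taken from any actual system):

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of the inputs, squashed into (0, 1) by a sigmoid
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(x1, x2):
    # two hidden neurons feeding one output neuron (weights are arbitrary)
    h1 = neuron([x1, x2], [0.5, -0.3], 0.1)
    h2 = neuron([x1, x2], [-0.8, 0.6], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)
```

Every step of this arithmetic can be traced by hand; the dispute in this thread is over whether scaling such arithmetic up could ever amount to a difference in kind rather than degree.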

Mach said:
Neats vs. scruffies - Wikipedia, the free encyclopedia
You're insisting on neat, yet appear to be unaware of the neat vs scruffy argument. Yet you claim to be well-versed in the field? ...

I'm not sure why you bring this point up (I think of it as the Minsky/Fodor controversy), because it doesn't have anything to do with the point I was making. Minsky (for example) doesn't believe that, at the end of the process, assuming we produced AI, there would be no step-by-step explanation for how the transistors come together to produce a mind.

He thinks that the way to build an AI is to adduce a bunch of small modules that are tweaked a little at a time, perhaps in an organic way. I don't care what the explanation actually is; it can be as modular as you like. But if AI were possible this way, there would still be an explanation for how it happens, starting at the level of transistors (presumably) and going all the way up, without any gaps, to a fully functioning consciousness.

The Minsky/Fodor controversy is over how modular the mind is, and whether there is a univocal quality we call "intelligence" or not. I'm skating over a little here because there is a further question of how to characterize human intelligence, and whether it would be possible to produce intelligences of either kind.

Mach said:
All scientific evidence is that it originates in, functions as part of, and is clearly changed, damaged, and otherwise ends with the brain.

Nope. If you go back to, say, the 1980s or before, it was true that more or less all scientists studying the brain thought so (though, importantly, not some of the most eminent ones). And I would admit it's still the dominant view, though it's no longer considered absurd to suggest otherwise. There are definitely some neuroscientists, cognitive scientists, and philosophers of mind who have come to disagree with what was previously the dominant view.

Mach said:
I'm telling you that you need to provide me with evidence to justify this claim above, as true. What evidence do you have that it's possible? None. It's just skepticism. 16 years of "reading" to end up just being a skeptic?

Again, I'm not sure why I should have to, since you're making a positive claim, and what I've said (on this point, anyway) is that it's possible the claim is wrong. It seems that if you want to claim that to build a device that is functionally equivalent to a brain is necessarily identical to building a mind, the burden of proof ought to be on you. You haven't really said what the specifics of your claim are.

But as it happens, I do have some positive reasons for adverting to such a possibility, namely, the failure of identity theory and the strong arguments against functionalism.

Mach said:
You mean the incompleteness theorem? We're discussing reality. If you can apply Godels incompleteness theorem which is about axiomatic systems (not reality, we're talking math), then please show us specifically how it applies here. (you cannot)

Do you think axiomatic systems are not real, or that we don't actually know anything about them? If not, then you'll have to spell your point out a little more carefully here.

Mach said:
cause and effect are axiomatic in science, you reject them, you reject all of science. I can't help you.

I don't see this as the case at all. What is the cause of a specific instance of electron tunnelling? What is the cause of the decay of a uranium atom? Maybe there is one, but the Schrodinger equation doesn't tell us anything about causation, just about how probabilities evolve. Bell's inequality tells us that particles don't have definite properties until they are observed. What causes that?

Or, let's think about the special sciences: what causes a particular brain state to instantiate a particular mental state (if it does)? What causes some plants in a homogenous population and the same environment, to die while others of the same species and genetic variance thrive? Etc. It's not even clear that there's a univocal concept of causation among the sciences, or even in a single science.

Mach said:
Further, you must again evidence precisely how some parts of physical reality cannot be sufficiently affected by humans such that it makes it impossible to create consciousness.

I'm not saying it's impossible. I'm saying it doesn't look likely, and there's certainly no reason to think it can be done. It doesn't seem like I should have to provide support for a claim I'm not making.

Mach said:
Again, you attempt to soft-claim it with "It may be that". I'm telling you that's good for schoolyard nonsense; it's bad philosophy. It may be you're wrong. It may be we're all just brains in a vat. "It may be" is code for "you're a skeptic and you have no actual claim". Notice we can continue ad infinitum with these "It may be that" skeptic claims, and they actually advance no positive claim about reality. Which is why they are no different, ultimately, from claiming nothing.

I disagree (obviously). Modal claims are quite substantive.

Mach said:
And the billions of biological brains and their associated consciousnesses are overwhelming evidence that you, in your skeptic nonsense, continue to deny.

What is that evidence for, exactly? If it's evidence for something, it should rule out other possibilities.

Mach said:
Based on what? Clearly evolutionarily we have a spectrum of consciousness. We have organisms that respond without brains, we have ones with basic brains but certainly very little in the way of complex response, we have brains in mammals that seem to have complex social behaviors, identities, etc., we have primates and dolphins that exhibit consciousness approaching young humans, and we have the fully developed human. evolutionary fingerprints are ALL THE **** OVER IT.

I think you may be using the term 'consciousness' to mean something other than how I'm using it. I'm referring to phenomenal consciousness--i.e. the actual experience of something as felt in some kind of interior world--the mind, as we usually call it. That doesn't look like it could or would evolve. Why would it? It doesn't appear to be adaptive--anything consciousness does could be done just as well by non-conscious automatisms. If it's a spandrel, it's a pretty extraordinary one. Most people would think that it's the single most important fact about human beings that we are conscious. That's not the sort of thing you want to be a spandrel.

Mach said:
The notion that the mental is somehow distinct from physical, in matters of science, is absurd.

Of course, if you start off with that, it'll be all too easy to conclude exactly the same thing. But that just seems like a bad way to argue, and it tends to shore up my point about question-begging assumptions.

Mach said:
Oh good lord.

I'm not sure why you have such a reaction, but I suggest it shows you have no substantive reply.

Mach said:
Bordering on proof? Proof is for mathematics.

This doesn't seem to matter here, since brains are supposed to instantiate minds by doing calculation--and unquestionably, an artificially intelligent computer would do so. Mathematical lemmas and proofs are therefore central to the issue at hand. Turing's second proof showed that any algorithm capable of being implemented can be implemented by a recursive process (didn't I already say this?). Computers just execute recursive processes. The results computers give us just depend on how the rules of the computer interface with the object code. It is therefore possible to change the rules of computers to read the same code and do vastly different things with it. Indeed, just what computers are doing, or even that something is a computer, depends on the attitude of conscious observers.
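The point that what a computer is "doing" depends on the rules used to read its bits can be illustrated directly. Below, one and the same four-byte pattern is read under three different rules and yields three different results (a sketch using Python's standard struct module; the byte values are arbitrary):

```python
import struct

raw = b'\x42\x48\x96\x49'  # one fixed bit pattern

as_int = struct.unpack('<I', raw)[0]    # rule 1: little-endian unsigned integer
as_float = struct.unpack('<f', raw)[0]  # rule 2: little-endian 32-bit float
as_text = raw.decode('latin-1')         # rule 3: four Latin-1 characters

# Nothing in the bytes themselves picks out one of these readings; the
# "meaning" comes from the interpretive rule an observer applies to them.
```

The same pattern is simultaneously a large integer, a floating-point number, and a scrap of text, depending entirely on the rule applied.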

Mach said:
We use theory to describe a well evidenced hypothesis. If it's not a well established scientific theory, then you're blowing more smoke.

I'm not sure that this describes the relationship between theory and hypothesis except in a trivial manner. And anyway, I'm not sure why you think this is relevant.

Mach said:
Absurd. More computing power != architecture of the brain. Adding more complex architecture in no reasonable ways is similar in the brain argument to simply adding more "computing power".

Oh, sure...to a point. V.B. Mountcastle showed us that the cortex is organized the same way from the anterior to the posterior, in both hemispheres. In short, there's nothing fundamentally different between the processing going on in your visual cortex and, say, the prefrontal cortex.

Mach said:
And more to the point of your claim that consciousness wasn't evolutionary, why are humans the highest mammal and why do we, within reasonable bounds, rule the world? Nah, couldn't be a survival trait. You're too much.

You seem to miss the point, unless you're one of those people who thinks that only human beings are conscious. I happen to think, after a long time spent interacting with them, that most animals are conscious in the relevant way. That is, they have phenomenal experience and an internal mental world. No doubt that world isn't like our own, but note that this kind of internality is an all-or-nothing thing. There aren't any sensations, dreams, or thoughts that occur in a semi-internal space. So if fish are conscious in the relevant manner, they have just as much consciousness as we do. Not as much intelligence, certainly, but that's a separate issue.
 
is it that their mind is actually changed, or is the apparatus that communicated between mind and body changed? Both are possible.
Why is that an either/or? It's both. I thought it was widely understood that we, you and I, change every second of every day. Yes, we change as a result of changes in the brain, but we ARE, in part, those changes in the brain, so we differentiate them out of convenience, not because they're two wholly distinct things. We have no evidence of some absolute distinction between mind and brain matter, and I don't know anyone other than religions that claim otherwise. We are, among other things, a feedback loop that feeds back the environment (which includes the painting on the wall, if we observe it directly or indirectly), the brain itself, and our own thoughts (which we may call the brain too; at least a working, healthy brain has brain waves/activity).

This is a mischaracterization of what I said. I did not say that because neural networks cannot do something today, they cannot do something tomorrow. I said nothing neural networks do today suggests they can instantiate consciousness tomorrow (or any time in the future). Those are two different claims.
If we were trying to get humans to break the sound barrier 2x over in a vehicle with an engine, and we developed the internal combustion engine, you might rightfully point out that the internal combustion engine as currently designed--even though the current one only generates enough HP to get to about 40mph--would, by theoretical extrapolation, at its best likely not even break the sound barrier 1x. So you could write that nothing in the combustion engine today suggests that humans will break the sound barrier with a vehicle. I understand that. But we invented a different engine. Similarly, we can and will create other neural networks, some of which will be so different from the ones we create today that they will of course need other words to describe them. Since you don't have evidence that we cannot create advances in such engines or networks, smart money is on NOT making a positive claim about it....



It seems that if you want to claim that building a device that is functionally equivalent to a brain is necessarily identical to building a mind, the burden of proof ought to be on you. You haven't really said what the specifics of your claim are.
That's incorrect.
The only evidence we have of consciousness is based on organic brain "machines". Claiming things about minds beyond that is speculation.


I don't see this as the case at all. What is the cause of a specific instance of electron tunnelling? What is the cause of the decay of a uranium atom? Maybe there is one, but the Schrodinger equation doesn't tell us anything about causation, just about how probabilities evolve. Bell's inequality tells us that particles don't have definite properties until they are observed. What causes that?
Causality is the relationship between causes and effects. It is considered to be fundamental to all natural science, especially physics.
Electron tunneling and atomic decay are random and apparently non-deterministic, but this doesn't refute causality. So many tangents; no need to continue here.

Or, let's think about the special sciences: what causes a particular brain state to instantiate a particular mental state (if it does)? What causes some plants in a homogeneous population, in the same environment and with the same genetic variance, to die while others of the same species thrive?
Lack of information or even undecidability do not refute causality.
I'm referring to phenomenal consciousness--i.e., the actual experience of something as felt in some kind of interior world--the mind, as we usually call it. That doesn't look like it could or would evolve.
This is mind-boggling to me. Why are you differentiating? If it wasn't a product of adaptation in natural organisms over time, what was it? Please be specific! Gould's "spandrel" was a convenience word, but when closely analyzed it doesn't tell us much except in a limited context. I think I know what you're getting at: that intelligence may have been selected for while the consciousness we experience is just a byproduct, but that's sloppy reasoning IMO. Everything can be reduced to being called simply a "byproduct of the unfolding of the universe." Pointless IMO.

Apparently we have a lot of issues on the table. Maybe if we tackle one priority point I could make progress more efficiently. I appreciate the responses; I love the topic and would like to come to at least some understanding of why we're writing at odds when it feels as though we should be agreeing. Unless it's religiously driven, I honestly have no idea what this (to me) middle ground is that you profess to hold with regard to consciousness, and being unable at present to empathize with that position, we appear to have trouble communicating.

Maybe start simple:
Do you think an individual consciousness as we experience it (say, yours) originates, on a physical timeline, with the biological development of your body/brain in this environment, and that with time and information (language, teaching, watching, etc.) your consciousness comes into existence, so that from your perspective you are in a sense "now aware"? I would think this is factually what occurs, and barring non-natural phenomena, there are no "alternative answers".
 
All things in good time. Certainly, the 0/1 functionality of modern computers is just a step along the pathway to induced consciousness. It's no more implausible than the transmission of images. If you had a TV in 1865, you'd have been burned as a witch.

The only SF concept I have trouble with is time travel. But I think AI is the next big thing.

I read the books you recommended on the subject. An interesting and fairly entertaining read. I am of the opinion that the author, and most people, have it backwards. I believe that AI will be a logical outgrowth of human-computer interfaces. As the interfaces become more complex and less "meat-oriented," more human consciousness will be based in the machine. Eventually it will become all machine. AIs won't really be AIs, but us migrated to machine.
 
Interesting. I don't agree at all, though. Look how individual both humans and animals are. Cats, my favorite, are a good example. Working with relatively small brain capacity, they are mischievous, humorous, and self-aware. My black cat, after his haircuts, admires himself in the mirror for the next several days. Yet I hear some claim that they are not self-aware.

Given near-unlimited computing and extrapolative capacity, there is no reason to believe that a "computer" cannot become self-aware. Of course, it is just as "impossible" as the fact that you will be able to read this in less than a second after I hit post.

Sadly, it will probably be the Chinese or Japanese who develop AI. In America we are too busy doing.....advertising.

I don't believe that consciousness comes from brute computing horsepower. Your cats, for instance, don't have much in the way of that, but they are still conscious to a small degree, no? Some say even fish or reptiles or even bugs are conscious. Hence it's not primarily the hardware; consciousness is in the programming. It's the prime reason I believe that human consciousness will migrate successfully to computer hardware.
 

I volunteer. My "meat" is wearing out.
 
Mach said:
Why is that an either/or? It's both.

In the sense that I meant them, they would normally be taken to be mutually exclusive. If the brain creates the mind, then ablating part of the brain changes the mind directly. If the brain communicates with some non-physical mind, then ablating part of the brain changes the connection between mind and brain. The point is that the latter could be the case, and we'd observe just what we observe now. There isn't an observation we can make (in this arena, anyhow) that will help us decide the issue. There are other observations we can make that will help us decide, however, and I think we've already made some of those.

Mach said:
If we were trying to get humans to break the sound barrier twice over in a vehicle, and we had just developed the internal combustion engine, you might rightly point out that the engine as currently designed only generates enough horsepower to reach about 40 mph, and that even theoretical extrapolation yields an engine that at its best likely won't break the sound barrier once.

These are not similar cases. Going faster than the speed of sound is only a quantitative difference (in distance per unit time) from going 40 mph. There were people who thought that if you went over 60 mph a human being wouldn't be able to breathe, but once the speed of sound was measured, no one thought it should be in principle impossible to send something faster than the speed of sound. Indeed, it had already been done; large cannon could handle black powder loads sufficient to send a ball large distances faster than the speed of sound--IIRC this happened in the 16th century. So we knew that faster-than-sound travel was possible.

We have nothing like that for neural networks.

Mach said:
That's incorrect. The only evidence we have of consciousness is based on organic brain "machines". Claiming things about minds beyond that is speculation.

My point would be that we don't even have that. Minds are much more mysterious than anything in the world. Nothing we know rules out any of the major ontologies. If you reply to this post, please focus on that last sentence. I'll repeat it so that it's clear: nothing we know rules out any of the major ontologies. Materialism could be correct. So could idealism. So could substance dualism. So could anomalous monism. Etc.

Mach said:
Causality is the relationship between causes and effects. It is considered to be fundamental to all natural science, especially physics. Electron tunneling and atomic decay are random and apparently non-deterministic, but this doesn't refute causality. So many tangents; no need to continue here.

I think the point is precisely that there are so many tangents--that is, points which bear on the subject and which need to be explored and resolved.

Anyway, the point isn't to refute causation per se, but rather to show that it's something we shouldn't just invoke and skate over as if it's a clear concept.

Mach said:
Lack of information or even undecidability do not refute causality.

Nor do they establish it. Universal causation, and especially the idea that there is univocal causation, is an assumption. It's an assumption that the laws of the universe are everywhere consistent, or even that we live in a consistent universe (I suspect we actually do not).

Mach said:
This is mind boggling to me. Why are you differentiating? If it wasn't a product of adaptation of natural organisms over time, what was it? Please be specific!

Why the need for an alternate positive answer before we abandon one that appears to be problematic? I think it's perfectly fine to say "heck if I know." I have no idea what the mind is. But I know that others who claim to know for sure that it's produced by the brain don't know what they're talking about.

As to differentiating between intelligence and consciousness: my pocket calculator is vastly more intelligent than I am in a certain direction (namely, in doing calculation very quickly). I can calculate anything it can, but it does so much faster and much more reliably. Its intelligence in this area is greater than my own. However, it's presumably not conscious. I am conscious, however. Examples of why there should be a distinction between intelligence and consciousness abound.

Mach said:
Gould's "spandrel" was a convenience word, but when closely analyzed it doesn't tell us much except in a limited context. I think I know what you're getting at: that intelligence may have been selected for while the consciousness we experience is just a byproduct, but that's sloppy reasoning IMO. Everything can be reduced to being called simply a "byproduct of the unfolding of the universe." Pointless IMO.

Well, it sounds like maybe you're arguing my side. It's not uncommon for philosophers of mind who recognize the problem I'm bringing up, but who want to stick to a materialist framework, to try to make the spandrel claim. My point was that it doesn't hold up. There remains no explanation of the kind I am seeking (and that should be there if consciousness were a physical phenomenon--we have just those sorts of explanations for other physical phenomena we've studied, so we'd expect one to at least be possible). Calling consciousness a spandrel is a bit of a cop-out. But on the other hand, it's not adaptive. Whatever consciousness does for us, it looks like a bundle of unconscious processes could do just as well. Why (and how) an internal mental world would evolve is not only a major unresolved question, but one that looks like it probably cannot be resolved with present concepts.

This last point is critical: it's easy to say that in the future we'll get new concepts that give us the answers I'm seeking. I'm sure we will. But those concepts may well refute any of the concepts we have now, including the ones that have led people to the conclusion that the brain creates the mind.

Mach said:
Do you think an individual consciousness as we experience it (say, yours) originates, on a physical timeline, with your biological development in this environment, and that with time and information (language, teaching, watching, etc.) your consciousness comes into existence, so that from your perspective you are in a sense "now aware"? I would think this is factually what occurs, and barring non-natural phenomena, there are no "alternative answers".

The straightforward answer to your question is: I don't know for certain, but probably not.

The more honest answer is that the question assumes a lot, and is therefore ill-phrased. The assumption that there is such a thing as a physical timeline and an objective reality is, well, an assumption. There is clearly an external world. But its nature remains mysterious, and I think we're hampered by classical Greek ideas which have worked their way into western culture in general, which orient us toward a certain picture of that external world. I think those are false. But these ideas become our assumptions, and ones we are usually unaware we're making. They're part of our language and cultural imagery, and practically everything we see or experience reflects those back to us.

People have not always assumed, or in fact had experiences as if, we are physical beings walking around and doing things in a physical environment. In fact, that concept of reality is fairly late to the game.

Before I respond at more length, it'd be helpful to know a couple of things: do you know much about ancient Greek philosophy? Also, what do you know about quantum mechanics, especially Bell's inequality? Finally, have you ever read anything by Berkeley, Hume, Kant, or Bradley?
 