But how is that possible?!?

Well, I don't know that AI is impossible in the sense that we might one day build a machine that becomes associated with consciousness--that is, that a conscious mind comes to attune itself with the bit of matter that constitutes the machine. However, if that ever does happen, it'll be because that machine makes some connection to a non-physical domain, not because it makes a mind out of material stuff.
Where is the connection?
How do we make the silicon connect to something that has no location, no mass, no energy, nothing at all?
Do you really not see why that's a major issue?
The tax program, like the Quake engine, also needs input it can understand. If I try to tell TurboTax that my income for 2017 is <<<IMPRESSIVE>>> because I fragged 3 enemies, the application will reject the data. Similarly, if I tell Quake that I paid $15,000 in mortgage interest in 2017, the game will reject the data.

Of course it'll work. A tax program (and by parity of reasoning, any program) receives input and produces output according to an interpretation.
But you have no way to tell the Quake game engine what your income was! It has no way for you to input that data, and certainly not in any way that it can manipulate that data to calculate your tax liabilities.

My point is that the interpretation need not remain static. I could understand that when, say, a door opens in the game, that's my cue to input my gross income, which I could do by taking certain actions within the game, all mapped to a different set of actions for different possible values.
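The re-interpretation idea can be sketched as a toy encoding. Everything here is hypothetical and purely illustrative--the action names ("jump", "crouch", "open_door") and the binary convention are made up, not anything from Quake itself--but it shows how a fixed mapping lets arbitrary game inputs carry a dollar figure:

```python
# Toy sketch of a non-static interpretation: tax data "smuggled" through
# game inputs. Action names are hypothetical, chosen for illustration.
#
# Convention: the income is entered in binary; "jump" encodes a 1 bit,
# "crouch" encodes a 0 bit, and "open_door" brackets the number.
ACTION_FOR_BIT = {0: "crouch", 1: "jump"}
BIT_FOR_ACTION = {v: k for k, v in ACTION_FOR_BIT.items()}

def encode_income(dollars: int) -> list[str]:
    """Map a dollar amount to a sequence of in-game actions."""
    bits = format(dollars, "b")  # e.g. 15000 -> "11101010011000"
    return ["open_door"] + [ACTION_FOR_BIT[int(b)] for b in bits] + ["open_door"]

def decode_income(actions: list[str]) -> int:
    """Recover the dollar amount from the action sequence."""
    start = actions.index("open_door") + 1
    end = actions.index("open_door", start)
    bits = "".join(str(BIT_FOR_ACTION[a]) for a in actions[start:end])
    return int(bits, 2)

actions = encode_income(15000)
assert decode_income(actions) == 15000
```

Nothing about the game engine changes; only the interpretation laid over its inputs does, which is the point being argued.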
I think you'd be better off learning a bit about Kahneman's dual system theory before making further comments. It's quite fascinating, by the way.

None of that is evidence for different persons or consciousnesses. I'm aware of what I'm doing (even if I'm not fully aware of why I'm doing it) under the influence of either system.
Let me know when it's replicated.

Funny you should mention callosotomy patients....
Even if it turns out to be wrong, there are countless other ways that our consciousness is not what it seems.
And yet, clearly that's what seems to be happening. Our experiences of time, of pain, of attention, of consciousness itself don't actually tell us how any of that operates.

1) I'm not sure how any of that is evidence of the brain fooling itself.
By the way, the "agent" theory is from Minsky.
So your takeaway from change blindness is to... ditch the entire concept of a material world? Seriously?

2) I think at least some of that evidence supports idealism better than it does materialism. Take change blindness, for instance. We know that the physical processes, so far as we can describe them, work to register the change. The light impacting the retina changes, obviously, as must the action potentials travelling down the optic nerve, and so on. And yet, the conscious mind does not register the change. Why should that be the case? A materialist can come up with reasons, of course, but they're ad hoc reasons--nothing I can think of in materialist theories predicts this sort of thing.
You do realize it really isn't that difficult to explain in physical terms, right?
And you do understand that your own proposal actually predicts nothing at all? How does dualism or idealism predict that damage to the visual cortex will prevent the individual from having visual experiences? How does it predict that there are multiple regions of the brain which receive visual input, resulting in an individual with a damaged visual cortex still being able to process some visual information, albeit unconsciously? Does dualism predict that giving Prozac to a patient with bipolar disorder will help relieve their symptoms? Will it tell you which antidepressants will work best?