
What is free will? Is it meaningful to say there is such a thing?

So, in other words, it would be difficult, if not impossible, to tell whether an entity "has" free will.

Insofar as it's difficult to tell whether an entity is conscious.

Consider a simple example: grasping an object. You have to choose a trajectory from among millions available. You can select "by reflex", or you can bring that into your consciousness and alter the reflexive solution "at will". However, the resulting behavior is the same: it's the choice of one from among many possible trajectories.

The details of the trajectory to grasp the object aren't chosen "of one's free will" (though you could, if you concentrated on it, wanted it, wanted your elbow to bend just so as you're reaching). That's mostly stuff that happens unconsciously in the cerebellum. When I reach for an object I'm not "choosing" to bend my elbow 0.342 ms after I start pivoting at the shoulder, etc. That's all unconscious. I am only conscious of the object, of wanting to grasp it, intending to grasp it, and so on. This is why accidents happen; sometimes you reach for something and elbow someone's drink, and so you say "Oops, sorry, I didn't mean to do that." Because it's true; you did not mean to do that. Your spilling of their drink did not happen in accordance with an intent/want/desire to spill their drink.
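To make the "one of many possible trajectories" point concrete, here's a toy sketch in Python (purely illustrative; the candidate generator and the cost function are made up, and nothing here is meant as a model of the cerebellum): generate a pile of candidate reach paths and keep the cheapest one. No deliberation anywhere, yet exactly one trajectory gets "chosen".

```python
import math
import random

# Toy illustration (not a brain model): selecting one reach trajectory
# from among many candidates by minimizing a simple cost. The candidate
# generator and the cost function are invented for this example.

def random_trajectory(start, goal, steps=10):
    """Generate one candidate path from start to goal with random jitter."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (goal[0] - start[0]) + random.uniform(-0.05, 0.05)
        y = start[1] + t * (goal[1] - start[1]) + random.uniform(-0.05, 0.05)
        path.append((x, y))
    return path

def cost(path):
    """Penalize total path length (a crude stand-in for effort)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

candidates = [random_trajectory((0, 0), (1, 1)) for _ in range(1000)]
best = min(candidates, key=cost)  # one trajectory "chosen" out of many, with no deliberation involved
print(f"kept 1 of {len(candidates)} candidates, cost = {cost(best):.3f}")
```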
 
So what would an empirical test look like?

Unfortunately I don't believe there is one, because there isn't an empirical test for the presence of conscious experience. There are correlates that we can test for (to some degree), like firings in certain areas of the brain, but these do not necessarily mean there is or isn't conscious experience - they are only correlates.

But this doesn't stop us from believing that other people are conscious, even though there isn't an undeniable test.
 
Think of it this way: you're rather hungry, and someone offers you your favorite snack. As it happens, the snack is not unhealthy, and there's absolutely no reason you shouldn't eat it. Do you eat it? Of course you do.

It's only when you introduce a reason not to eat it that there's really any consideration of choice. The cat is a reason for the mouse not to eat the cheese, while the hunger of the mouse is a reason for it to eat the cheese. The mouse must choose whether to risk it or not.

Why are you obfuscating? It's a simple question. If a mouse is hungry and it smells food (and now the caveat), if there are no other negative externalities, can the mouse choose not to eat? Can it, say, remember that it has food at home, or decide it isn't in the mood for cheese?

I can decide not to eat. I can make a choice based on lots of reasons, some rational and others not so much. But my question has always been: if a person is acting without free will, what exactly would that look like? How would he differ from those with free will?

I'm not even sure computers have goals, so much as that we don't have better language to describe the behavior of a computer or robot. Computers have goals in the same way that bullets have goals; the bullet is going to hit something, but there appears to be no mental component to its doing so. The bullet never stops in mid-air and says to itself "hey, wait a second...I don't want to hit that nice fellow over there. I'm just going to hit those bricks instead." Neither do computers.

Poorly worded on my part. "Goal" makes it sound as if it were a choice. Not what I meant. If a sophisticated program has been given a task, it will complete that task based entirely on the program it's given.

They do what they're programmed to do.

Even if they are programmed to improve their own programming?

Still don't see any evidence. To be clear, what I'm saying is that I've never seen any evidence of a computer doing something that couldn't be explained by reference to its program plus prevailing physical conditions at the relevant times.
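For what it's worth, here's a toy sketch of what "programmed to improve its own programming" can look like (the functions and numbers are invented for illustration, not taken from any real system): the program rewrites its own rule every round, yet each rewrite is still fully fixed by the original code plus the inputs it has seen - exactly the "program plus prevailing physical conditions" explanation.

```python
# Toy sketch (invented, not any real system): a program "improving" its own
# programming. It replaces its own rule every round, yet each replacement is
# fully determined by the original code plus the inputs it has seen so far.

def initial_rule(x):
    return x + 1  # the starting "program"

def improve(rule, observed_inputs):
    """Deterministically derive a new rule from the old rule and the input history."""
    shift = sum(observed_inputs) % 5
    return lambda x: rule(x) + shift

rule = initial_rule
history = []
for x in [3, 7, 2, 9]:
    history.append(x)
    rule = improve(rule, history)   # the program rewrites itself...
    print(x, "->", rule(x))         # ...but the output is still fixed by code + inputs
```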

I suspect the same could be said for the human brain and the actions taken by the body if we had, literally, all the information there was to know about that brain and its environment.
 
Unfortunately I don't believe there is one, because there isn't an empirical test for the presence of conscious experience. There are correlates that we can test for (to some degree), like firings in certain areas of the brain, but these do not necessarily mean there is or isn't conscious experience - they are only correlates.

But this doesn't stop us from believing that other people are conscious, even though there isn't an undeniable test.

There is no test for a soul in the Christian sense; are you prepared to believe in that as well?

Seems to me that denying consciousness is the same as denying that my senses are giving me information that reflects reality; both are equally obvious and difficult to prove. But I'm not sure that we can apply this rationale to free will and the idea of the human soul. At some point we have to ask for evidence.
 
There is no test for a soul in the Christian sense; are you prepared to believe in that as well?

No

Seems to me that denying consciousness is the same as denying that my senses are giving me information that reflects reality; both are equally obvious and difficult to prove. But I'm not sure that we can apply this rationale to free will and the idea of the human soul.

But here you're taking free will to be something more than what I defined it as. I defined it as acting in accordance with wants/intentions/desires etc. If you accept the presence of conscious experience - the existence of wants/intentions/desires - how can you deny the existence of "acting in accordance with wants/intentions/desires"?

Most of the debate over free will is semantics, disagreement over what the notion of "free will" is (and why that particular notion is or is not relevant). I am a compatibilist and offer a compatibilist notion of "free will". You seem to be arguing against a libertarian notion of free will - the kind of ethereal, ghost-in-the-machine type of free will.
 
No... I'm asking if AI can get to the point where it appears that it is making choices, such that it becomes impossible to distinguish between what we are calling free will in humans and an AI that simulates it. Voight-Kampff test, anyone?

I'm sure the new Blade Runner 2 movie will be great!!!

"She's a replicant!"

"How long did it take you to discover?"

"How long? What difference does it make?"

"She does not know, I suspect."

"Not know? How can a thing not know what it is?"

"We noticed a certain insecurity about them, so we gave them memories."

"Implants! You're talking about implants!"

"Yes. We found it helps them to control their emotions."
 
By begging the question you have just affirmed the consequent regarding your belief in quanta and in chaos.

Apart from your faulty equation of both fallacies being the same, Andalublue has committed neither.

Yet another ad hominem.
OK then, explain (and give an example of) the begging-the-question fallacy and the affirming-the-consequent fallacy. And show where Andalublue fell prey to committing either one.
 
No



But here you're taking free will to be something more than what I defined it as. I defined it as acting in accordance with wants/intentions/desires etc. If you accept the presence of conscious experience - the existence of wants/intentions/desires - how can you deny the existence of "acting in accordance with wants/intentions/desires"?

Most of the debate over free will is semantics, disagreement over what the notion of "free will" is (and why that particular notion is or is not relevant). I am a compatibilist and offer a compatibilist notion of "free will". You seem to be arguing against a libertarian notion of free will - the kind of ethereal, ghost-in-the-machine type of free will.

Yeah, so sometimes I start these conversations just to learn more about them. I've recently learned the types that you list, compatibilist and libertarian notions of free will. I've struggled to identify what exactly free will is; then it sort of hit me: try to imagine what a lack of free will might look like. Even if one did not act in accordance with wants, intentions, and desires, how could we be sure that's not what the person wanted?
 
To say that there is something called "free will" is to suggest that there is a state in which a person can lack free will. If someone did not have free will, can you tell me how exactly that person would differ from a person who did? How would you empirically measure free will? That is, what measurement or test could you do to determine if a person didn't have free will? How would they differ from everyone else?

If your answer is that it is immaterial, that's cool, but then you have to concede that the idea is essentially meaningless.

everything is meaningless other than, at best, to a single person in a single moment for a single moment
 
Yeah, so sometimes I start these conversations just to learn more about them. I've recently learned the types that you list, compatibilist and libertarian notions of free will. I've struggled to identify what exactly free will is; then it sort of hit me: try to imagine what a lack of free will might look like. Even if one did not act in accordance with wants, intentions, and desires, how could we be sure that's not what the person wanted?

Descartes' Evil Genius is all about the enslavement of free will.
 
I am actually closer to muting/blocking you, dear friend.
It doesn't get you around the fact that you made a false claim. Unless you care to either show how you did not, or show the decency to correct it.
 
csbrown28 said:
Why are you obfuscating? It's a simple question. If a mouse is hungry and it smells food (and now the caveat), if there are no other negative externalities, can the mouse choose not to eat? Can it, say, remember that it has food at home, or decide it isn't in the mood for cheese?

I'm not obfuscating. By my lights, I'm getting to the heart of the question. Here--let me see if I can break it down a little further.

The answer to the question is: who knows? There are reasons to think that maybe the answer is yes. After all, if the mouse can see the cat and decide not to eat the cheese, the mouse may also respond to other things, such as that it's not in the mood for cheese, or some other such.

On the other hand, perhaps the answer is that mice cannot do this. But the reasons a mouse might not be able to do this seem to have little to do with free will, as such. Mice might not have moods, for example--and so, a fortiori, no anti-dairy moods might interfere with the gathering of calories. Similarly, mice might lack sufficient memory facility to recall that there is food at home. Neither of these has anything to do, at least on its face, with free will.

Generally speaking, I think I have free will in the sense you seem to mean, in that, given all the information present to my mind at some time, I'm able to make choices. Those choices are only caused by that information in a teleological sense--the information in question just more or less is the end of my decision-making process. But the decision-making itself is separate from the information present. We infer this because the information present to me at any one time differs sharply in content, quantity, and quality from some other time, and yet the faculty of choice never leaves me. The fact that the biological endowment of a mouse presents its mind with much less information, generally speaking, than is present to my consciousness doesn't mean the mouse lacks the power of choice; as I've already observed, that power remains regardless of the content, quantity, or quality of information present to me--presumably, the same is true of other conscious creatures.

csbrown28 said:
I can decide not to eat. I can make a choice based on lots of reasons, some rational and others not so much.

The point I was making is that even human beings make choices based on reasons. Will is not random--or at least, it usually isn't. If you design a thought experiment which has it that the mouse has no reason not to eat the cheese, of course the outcome is that it will eat the cheese. In order to determine whether mice can make a choice, you need to introduce an element of competing motivations.

csbrown28 said:
But my question has always been: if a person is acting without free will, what exactly would that look like? How would he differ from those with free will?

I thought you were asking the converse: i.e. how to distinguish free will from unfree will--and I believe I've already answered that question. (The answer, to review, was that the mental states, and presumably behavior, of a being with free will could not be predicted from physical information, no matter how complete that information. But also, such a being would not behave randomly; logical connections between its mental states at differing times, and again, presumably its behavior over differing times, could be found). We could predict the behavior of a being without free will based on sufficient physical information, and at least possibly, there would be no logical connection between its various mental states.

csbrown28 said:
Poorly worded on my part. "Goal" makes it sound as if it were a choice. Not what I meant. If a sophisticated program has been given a task, it will complete that task based entirely on the program it's given.

Sure. I agree.

csbrown28 said:
Even if they are programmed to improve their own programming?

Yes, of course.

csbrown28 said:
I suspect the same could be said for the human brain and the actions taken by the body if we had, literally, all the information there was to know about that brain and its environment.

I disagree. There are properties of minds which could never be predicted from physical facts alone.
 
All interesting answers, but none of them has yet told me how to determine the difference between free will and the lack thereof.

In the not-too-distant future we will have AI indistinguishable from ourselves. Is AI capable of free will? I suspect I'll be told no, so I'll ask now: why is it free will when the brain is made of cells and neurons, and not when the brain is made of silicon and transistors?

Unless someone can tell me how we could test an AI for free will, I am tempted to deem it a meaningless concept. If I said that a jar of peanut butter has free will, how would you prove me wrong?

If you have the capacity to make a choice, you are exercising your will. Is it free will? Free from what? The control of environment and biology? Those conditions influence, but they do not mandate, decisions (choices).
 
Insofar as it's difficult to tell whether an entity is conscious.

Right. That's what I was saying too. "Will" is an internal subjective "experience"; it's on the "other side of the veil", just like "consciousness". We can't really determine whether an organism is "conscious" or not; all we can see is the sophisticated behavior that might be related to it - and we can also ask, if that's possible... if it's a person we can just ask "how are you feeling", but we couldn't ask a cat such a thing, even though a cat may in fact have "feelings".

So, that's why I was trying to take the opposite approach - I was trying to define "will" scientifically, in terms of a universe of possibilities. Because, if you think about it, that's exactly what "will" is: the restriction of that universe and its instantiation in terms of behavior. And all that is, brain-wise, is just "motor activity that has no environmental correlates"; math-wise it's the exact same thing the cerebellum does, only it operates on the universe of possibilities instead of on trajectories.
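If it helps, here's a toy two-level sketch of that idea (all names and numbers are made up for illustration): an outer stage restricts a "universe" of possible goals down to one, and an inner, cerebellum-like stage then turns the chosen goal into a concrete action. It's the same kind of selection, just applied at different levels.

```python
# Toy two-level sketch (all names and numbers invented): an outer "will" stage
# restricts a universe of possible goals to one, then an inner, cerebellum-like
# stage turns the chosen goal into a concrete motor command.

universe_of_goals = ["grasp cup", "scratch nose", "wave", "do nothing"]

def preference(goal):
    """Stand-in for wants/intentions: score each possible goal."""
    scores = {"grasp cup": 0.9, "scratch nose": 0.2, "wave": 0.1, "do nothing": 0.05}
    return scores[goal]

def solve_action(goal):
    """Inner stage: solve for a concrete action given the already-chosen goal."""
    return f"motor plan for: {goal}"

chosen_goal = max(universe_of_goals, key=preference)  # the universe of options collapses to one
print(solve_action(chosen_goal))
```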
 
Not sure why we are belaboring this point, but my point still stands. I'd have limited free will.



I agree, but do you believe that this will always be a limitation? Do you think it impossible that computers will become sophisticated enough that they will be better able to program themselves in ways beyond human comprehension? If it becomes the case that a computer can program itself or another computer better than a human can, think about where that leads. You can claim that it is still bound by its programming, but if it's capable of changing its programming to meet whatever goal it wishes to actualize, I ask you, what is the difference? How could you tell the difference between free will and programming?

I recommend you watch a few movies:

Automata - Not the greatest movie, but if you can sit through it, it makes an excellent point that, unfortunately, is probably missed by most people who watch it.

Ex Machina - An excellent movie, a little more far-fetched, especially for this discussion, but still a really good movie.





I didn't mean to suggest that it was. Clearly the program and its answers were bound by externalities outside its control. My point was that it was a decent approximation of a "human simulator". If you are old enough to remember things like flight simulators that ran on computers in the late '80s and compare them to flight simulators of today, perhaps you can appreciate what has been accomplished in just a few decades.


It still doesn't take away from the premise. Free will is about choice.
The only way you don't have free will is if someone takes that choice away from you.

Bodily functions are not considered a choice.
Computer AIs are pre-programmed to run in a certain way, and the list of choices is defined by the user.

I guess the best example of that would be Person of Interest, but even then the computer calculates what it is programmed to.
 
Yeah, so sometimes I start these conversations just to learn more about them. I've recently learned the types that you list, compatibilist and libertarian notions of free will. I've struggled to identify what exactly free will is; then it sort of hit me: try to imagine what a lack of free will might look like. Even if one did not act in accordance with wants, intentions, and desires, how could we be sure that's not what the person wanted?

I haven't read the entire thread, so excuse me if I say something that's already been discussed. Free will vs. determinism has raged within philosophy for years. It's a very complex and inviting topic. Strawson has done a fair amount on the topic:

Peter F. Strawson
 
I'm not obfuscating. By my lights, I'm getting to the heart of the question. Here--let me see if I can break it down a little further.

The answer to the question is: who knows? There are reasons to think that maybe the answer is yes. After all, if the mouse can see the cat and decide not to eat the cheese, the mouse may also respond to other things, such as that it's not in the mood for cheese, or some other such.

On the other hand, perhaps the answer is that mice cannot do this. But the reasons a mouse might not be able to do this seem to have little to do with free will, as such. Mice might not have moods, for example--and so, a fortiori, no anti-dairy moods might interfere with the gathering of calories. Similarly, mice might lack sufficient memory facility to recall that there is food at home. Neither of these has anything to do, at least on its face, with free will.

Generally speaking, I think I have free will in the sense you seem to mean, in that, given all the information present to my mind at some time, I'm able to make choices. Those choices are only caused by that information in a teleological sense--the information in question just more or less is the end of my decision-making process. But the decision-making itself is separate from the information present. We infer this because the information present to me at any one time differs sharply in content, quantity, and quality from some other time, and yet the faculty of choice never leaves me. The fact that the biological endowment of a mouse presents its mind with much less information, generally speaking, than is present to my consciousness doesn't mean the mouse lacks the power of choice; as I've already observed, that power remains regardless of the content, quantity, or quality of information present to me--presumably, the same is true of other conscious creatures.



The point I was making is that even human beings make choices based on reasons. Will is not random--or at least, it usually isn't. If you design a thought experiment which has it that the mouse has no reason not to eat the cheese, of course the outcome is that it will eat the cheese. In order to determine whether mice can make a choice, you need to introduce an element of competing motivations.



I thought you were asking the converse: i.e. how to distinguish free will from unfree will--and I believe I've already answered that question. (The answer, to review, was that the mental states, and presumably behavior, of a being with free will could not be predicted from physical information, no matter how complete that information. But also, such a being would not behave randomly; logical connections between its mental states at differing times, and again, presumably its behavior over differing times, could be found). We could predict the behavior of a being without free will based on sufficient physical information, and at least possibly, there would be no logical connection between its various mental states.



Sure. I agree.



Yes, of course.



I disagree. There are properties of minds which could never be predicted from physical facts alone.

Mice Don’t Like Cheese | Now I Know
 
You're shifting the burden. Clue -- fallacy.
You made the initial claim; I pointed out that your equation of two separate fallacies was wrong. The burden lies with the originator, namely you.
 