
What is free will? Is it meaningful to say there is such a thing?

All interesting answers, but none of them has yet told me how to determine the difference between free will and the lack thereof.

In the not-too-distant future we will have AI indistinguishable from ourselves. Is AI capable of free will? I suspect I'll be told no, so I'll ask now: why is it free will when the brain is made of cells and neurons, but not when the brain is made of silicon and transistors?

Unless someone can tell me how we could test an AI for free will, I am tempted to deem it a meaningless concept. If I said that a jar of peanut butter has free will, how would you prove me wrong?

Something like this: https://en.wikipedia.org/wiki/Determinism

Determinism is the philosophical position that for every event, including human action, there exist conditions that could cause no other event. "There are many determinisms, depending on what pre-conditions are considered to be determinative of an event or action."[1] Deterministic theories throughout the history of philosophy have sprung from diverse and sometimes overlapping motives and considerations. Some forms of determinism can be empirically tested with ideas from physics and the philosophy of physics. The opposite of determinism is some kind of indeterminism (otherwise called nondeterminism). Determinism is often contrasted with free will.[2]

Determinism often is taken to mean causal determinism, which in physics is known as cause-and-effect. It is the concept that events within a given paradigm are bound by causality in such a way that any state (of an object or event) is completely determined by prior states. This meaning can be distinguished from other varieties of determinism mentioned below.

Other debates often concern the scope of determined systems, with some maintaining that the entire universe is a single determinate system and others identifying other more limited determinate systems (or multiverse). Numerous historical debates involve many philosophical positions and varieties of determinism. They include debates concerning determinism and free will, technically denoted as compatibilistic (allowing the two to coexist) and incompatibilistic (denying their coexistence is a possibility).

Determinism should not be confused with self-determination of human actions by reasons, motives, and desires. Determinism rarely requires that perfect prediction be practically possible.
 
If I don't allow you to have something, then you really don't have free will, do you? Yes, you can choose to struggle; that is a choice you can make.
You can't choose to have ice cream. You can request it, but then it is my choice whether to let you have it or not.

Not sure why we are belaboring this point, but my point still stands. I'd have limited free will.

No, you said computers; you mentioned nothing about AI.
Even now, AI only does what is programmed into it. It has a limited set of choices, and those choices are based on what it calculates to be the best one.
That's not so much free will as being programmed and told what to do.

I agree, but do you believe that this will always be a limitation? Do you think it impossible that computers will become sophisticated enough to program themselves in ways beyond human comprehension? If it becomes the case that a computer can program itself or another computer better than a human can, think about where that leads. You can claim that it is still bound by its programming, but if it's capable of changing its programming to meet whatever goal it wishes to actualize, I ask you, what is the difference? How could you tell the difference between free will and programming?
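To make the "self-reprogramming" idea concrete, here is a toy Python sketch (entirely hypothetical, not any real AI system): the programmer writes only a meta-rule ("revise your decision rule after a mistake"), and the decision rule the program ends up with is one it wrote for itself through experience.

```python
import random

def self_amending_agent(trials=2000, seed=7):
    """Toy sketch of a program that rewrites its own rule.

    The programmer supplies only a meta-rule: after a mistake, nudge the
    decision threshold. The final threshold is written by the program
    itself in response to feedback from the environment.
    """
    rng = random.Random(seed)
    threshold = 0.0   # the agent's current decision rule (self-modified below)
    step = 0.05
    for _ in range(trials):
        x = rng.random()
        acted = x > threshold
        correct = acted == (x > 0.7)   # the environment's hidden criterion
        if not correct:
            # meta-rule: raise the threshold after acting wrongly,
            # lower it after wrongly holding back
            threshold += step if acted else -step
    return threshold
```

The agent's final rule settles near the environment's hidden criterion even though no line of code ever assigns it; whether that difference from "just following the program" amounts to anything like free will is exactly the question at issue.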

I recommend you watch a few movies:

Automata - Not the greatest movie, but if you can sit through it, it makes an excellent point that, unfortunately, is probably missed by most people who watch it.

Ex Machina - An excellent movie, a little more far-fetched, especially for this discussion, but still a really good movie.



that isn't free will.

I didn't mean to suggest that it was. Clearly the program and its answers were bound by externalities outside its control. My point was that it was a decent approximation of a "human simulator". If you are old enough to remember things like flight simulators that ran on computers in the late '80s and compare them to flight simulators of today, perhaps you can appreciate what has been accomplished in the intervening decades.

[screenshots: Chuck Yeager's Advanced Flight Simulator vs. a modern flight simulator]
 
To say that there is something called "free will" is to suggest that there is a state in which a person can lack free will. If someone did not have free will, can you tell me how exactly that person would differ from a person who did? How would you empirically measure free will; that is, what measurement or test could you do to determine whether a person didn't have free will? How would they differ from everyone else?

If your answer is that it is immaterial, that's cool, but then you have to concede that the idea is essentially meaningless.

Anciently, this depended on whether you were a slave or not.

In ancient times, slavery was practiced extensively.

Clearly then, slaves have no free will. They may have had some legal rights granted to them by Hammurabi's Law Code or other subsequent common law, but they were not free.

The next question you must then ask is how free are you within the confines of your government?

Does a king or queen rule over you? If so then you are a subject, and not really free.

According to J.M. Roberts in his book "History Of The World" (Penguin Books, 2002) on Page 58, "Originally Sumerian society seems to have had some representative even democratic basis, but a growth of scale led to the emergence of kings ... ." He was speaking of the 3rd millennium BCE.

Cleisthenes (KLEISTHENES in Greek) invented democracy again in 508/507 BCE at Athens, at which time the philosophers invented a new word, ELEFTHERIA, which means freedom. So these ancient Athenians had free will for the first time in thousands of years.

Something like this happened again in 1783 when King George III of England freed the American Colonies from British rule. He had concluded that fighting the American Revolution was costing him more treasure than he could ever hope to recoup. So with Cornwallis' defeat, the King gave up the fight.

Americans have chosen their leadership ever since. Subsequently, France and other European and world nations gained their own independence from Kings as well.

Most of the Earth is now democratic -- the entire process having taken five thousand years.

So free will starts with location, location, location. It all depends on your nation and your countrymen.

You will thus always be bound to keep the laws of the land in which you live, and as long as you do so, you will be free and will have free will.

But if you become a felon the representatives of the Body Politic will seize you and throw you into confinement and take away your property, and your loved ones will be on their own without you present.

So free will is the freedom (ELEFTHERIA) to do as you please as long as you keep the law of the land.

You must vote and elect goodly candidates for public office to ensure your freedom.

You must work to feed, clothe, and shelter yourself, so that you may obtain the necessary conveniences of life which make living pleasant and possible.

You must get educated sufficiently to provide for yourself.

You must care for your family -- those who brought you and whom you brought into the world and including your lawfully wedded spouse, your faithful dog and/or your loving cat.

Your free will shall continue until you become old and infirm. At which time you will become again dependent on others as you were back when you were still a growing child.

Ultimately you will die and your dead body will dissolve or be turned into ashes.

What happens after that with regards to your free will we do not know.
 
I believe people are conditioned and the choices they make are due to how they've been conditioned.

True indeed we have all been brainwashed.

My own brainwashing includes Jewish commandments, Christian commandments, U.S. Federal, State, County, and City laws, philosophical ethics and logic, empathy for others, sympathy for unfortunates, hate for the cruel and merciless, love of my cat and my neighbors.

I have been conditioned to worry about what a Divine Being will say about my thoughts, words, and deeds after my life is over.

I have been conditioned not to break the laws of my Nation as long as they follow the Constitution. I once took a solemn oath to defend that Constitution.

I have been conditioned not to kill or hurt anyone else except unavoidably in self defense and proportionately.

I have been conditioned to not covet anything which belongs to my neighbors.

I have been conditioned to work hard 6 days of the week, and to rest on the seventh.

I have been conditioned to save money and live well within my means in case of a rainy day, so I keep a rainy day fund.
 
I think it's pretty much impossible to wholeheartedly champion either an entirely deterministic view of the world, or a wholly indeterministic view. We are all constrained and influenced by forces beyond us in everything we do, hence indeterminism cannot be wholly correct. Similarly, there is no evidence to lead us to believe that our every move and action is entirely pre-determined, especially in the light of discoveries in the fields of quantum physics and chaos theory.

The question is very much material to discussions of ethics, morality, freedom etc, it's just not an easy tick-box question requiring a Yes/No answer.

By begging the question you have just affirmed the consequent regarding your belief in quanta and in chaos.
 
All interesting answers, but none of them has yet told me how to determine the difference between free will and the lack thereof.

In the not-too-distant future we will have AI indistinguishable from ourselves. Is AI capable of free will? I suspect I'll be told no, so I'll ask now: why is it free will when the brain is made of cells and neurons, but not when the brain is made of silicon and transistors?

Unless someone can tell me how we could test an AI for free will, I am tempted to deem it a meaningless concept. If I said that a jar of peanut butter has free will, how would you prove me wrong?

Then you must also conclude that we are all slaves, born slaves, and will die slaves.

This raises the question, however: who is the slave Master? And why did He/She/They do it?

I suppose that you could also presume we are just the slaves of gravity. But who created gravity then?
 
To me, "free will" entails some capacity to control one's actions in accordance with rational consideration, as opposed to some emotional craving or desire.

Can we agree that in order to have free will, at least two conditions must exist...

1) There is something I desire to actualize

2) I can make the choice to try to obtain my desire

But what happens when 1 and 2 conflict?

If I am a kleptomaniac:

1) I desire to take something that isn't mine.

2) I don't want to be a kleptomaniac.

or

1) I find myself craving a cigarette.

2) I have the rational desire to avoid lung cancer and know I shouldn't smoke it.

As for an animal like a mouse, the question is: does it have a choice? If it is hungry and it smells something its mind identifies as food, can it override the dopamine response to eat the cheese? I don't know; I'm asking. How would you determine whether it could actually make the choice?



Certainly computers are capable of being programmed to learn. It is also possible to program a computer to program. When I say that a computer has exceeded its programming, I mean an action taken, probably through some combination of learning and self-programming, that cannot be explained by its original program.



See above for answers to both.

But you can still be a slave and have all these too.
 
Try this thought experiment...

If there were ten parallel universes, each identical down to the individual atoms and sub-atomic particles, so that there is absolutely nothing different about any of them on any level, then whatever happened in one of the universes must also be happening in all of the others.
Meaning that if one person in one universe makes a decision, then their parallel selves must each necessarily make an identical decision.
Logically, there can be no other conclusion, and if you follow this logic to its inevitable end, our decisions feel consciously made of our own free will, but subconsciously, and mechanically, we are simply following the fixed laws of the universe, and therefore we are essentially robots without free will.
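The premise of identical universes evolving identically is easy to demonstrate in code, at least for a deterministic process: fix the initial conditions (here a random seed, a hypothetical stand-in for "every atom and sub-atomic particle") and every "decision" comes out the same on every run.

```python
import random

def run_universe(initial_state, steps=100):
    """A toy deterministic universe: every 'decision' follows
    mechanically from the initial state alone."""
    rng = random.Random(initial_state)
    return ["left" if rng.random() < 0.5 else "right" for _ in range(steps)]

# Two 'parallel universes' with identical initial conditions
# make identical decisions at every step:
assert run_universe(42) == run_universe(42)
```

Whether our universe works like this simulation is, of course, the whole debate; the sketch only shows that determinism entails the matching-decisions conclusion.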

Thoughts?

Since parallel universes are a science-fiction concept, they must logically remain in the realm of entertainment, not reality.
 
By begging the question you have just affirmed the consequent regarding your belief in quanta and in chaos.
Apart from your faulty equation of the two fallacies, Andalublue has committed neither.
 
csbrown28 said:
To me, "free will" entails some capacity to control one's actions in accordance with rational consideration, as opposed to some emotional craving or desire.

I'm not so sure that rationality is actually opposed, or opposite, to emotion. That was a kind of trope of the Enlightenment, but by my lights, it looks increasingly like it's just false. Many of the things we take to be rational are founded on the classically irrational--intuitions, desires, or emotions.

What you seem to be talking about below are not rational vs. irrational motives, but rather, higher-order desires--that is, desires about desires.

csbrown28 said:
Can we agree that in order to have free will, at least two conditions must exist...

1) There is something I desire to actualize

2) I can make the choice to try to obtain my desire

Provisionally, sure. That is, I'm not sure I agree, but for the sake of discussion, I'll go along for the time being.

csbrown28 said:
But what happens when 1 and 2 conflict?

Cognitive dissonance or other unpleasant psychological state.

csbrown28 said:
As for an animal like a mouse, the question is: does it have a choice? If it is hungry and it smells something its mind identifies as food, can it override the dopamine response to eat the cheese? I don't know; I'm asking. How would you determine whether it could actually make the choice?

Of course it can. Set a sleeping cat right next to the cheese and see if the mouse goes for it. What I expect would happen is that almost no mouse would dare the cheese, but a few would. Moreover, the hungrier the mice, the more likely they'd go for the cheese.

Now, in answer to the question about whether computers could ever do something like this, the answer is: no. Computers cannot have desires, let alone second-order desires.

csbrown28 said:
Certainly computers are capable of being programmed to learn. It is also possible to program a computer to program. When I say that a computer has exceeded its programming, I mean an action taken, probably through some combination of learning and self-programming, that cannot be explained by its original program.

I've never seen an example of this, nor seen any evidence that it's possible.

csbrown28 said:
If there were ten parallel universes, each identical down to the individual atoms and sub-atomic particles, so that there is absolutely nothing different about any of them on any level, then whatever happened in one of the universes must also be happening in all of the others.
Meaning that if one person in one universe makes a decision, then their parallel selves must each necessarily make an identical decision.

This doesn't seem correct to me. There doesn't seem to be a good reason to accept the proposition that fixing all the physical facts is equivalent to fixing all the facts. If there is such a thing as free will, it's not physically determined.
 
OK, bad example. I can choose to struggle or not to struggle. If you ask me to tell you something, I can choose to tell you or not, but if I want ice cream, you can deny me that; thus my choices are limited, but they still exist. Unless you make me unconscious or figure out how to control my desires, then by your definition I will always have limited free will, right?

I'm not talking about "computers"; I'm talking about AI, which presently exists, though it isn't yet sophisticated enough to make "choices" free of the underlying programming (choices made by the programmer). But unless you think there is a compelling reason to believe that AI intelligence is limited, or that the processing power that makes it possible is somehow limited, the speculation about AI and free will isn't a distraction but a very real possibility, one we ought to start thinking about before it becomes a reality.

Apropos (true story): I was called the other day by a voice that sounded almost human (it was a woman's voice). I had a hard time trying to figure out if it was human. I was of course immediately insulted to think a company would try to pass a computer off as a person, as if I were an idiot. I asked, "Is this a real person?" The voice replied, "Of course I am, my name is Julie". She went on for a minute; I totally ignored her as I thought of a question that would prove she wasn't human. I asked if I could ask a question, and proceeded to ask who was the first man on the moon. The answer came back, "I'm sorry Mr. Brown, I'm not sure about that." I asked a math question, three times two... Again, the same response. It was uncanny how close this voice was to a real person, both in tone and responses. AI is coming, my friend, and I suspect in many cases it has arrived and you just don't know it.

So are you asking if AI has free will?

AI is all pre-programmed.

In the movie "Blade Runner," memory implants were used in the AI Replicants.

There is going to be a Blade Runner 2 coming out soon, also with Harrison Ford again -- can't wait !!!

Are you then analogizing that we as humans are also pre-programmed?

Is predestination also pre-programming?

These notions and ideas are on the far fringes of Christianity with St. Paul's comments about predestination.
 
I'm not so sure that rationality is actually opposed, or opposite, to emotion. That was a kind of trope of the Enlightenment, but by my lights, it looks increasingly like it's just false. Many of the things we take to be rational are founded on the classically irrational--intuitions, desires, or emotions.

For instance?

Cognitive dissonance or other unpleasant psychological state.

Only if, for instance, we know something to be one thing but want it to be another. That's not at all what I said. If you "crave" something, it isn't necessarily rational or good (though it can be). The decision to act on that desire can be overridden by other desires to actualize longer-term outcomes for which there may be no apparent benefit.

I can want to jump off a building; not doing so doesn't actualize anything tangible except the belief that I avoided potential harm.

Of course it can. Set a sleeping cat right next to the cheese and see if the mouse goes for it. What I expect would happen is that almost no mouse would dare the cheese, but a few would. Moreover, the hungrier the mice, the more likely they'd go for the cheese.

Why change the question? There was no cat in my example. Now answer the question without the cat.

Now, in answer to the question about whether computers could ever do something like this, the answer is: no. Computers cannot have desires, let alone second-order desires.

Though I don't remember asking that question, I will admit that it seems unlikely that a computer could "want" anything. It seems more likely that a computer simply has a goal and will try to achieve that goal in the best way it knows how.

I've never seen an example of this, nor seen any evidence that it's possible.

Evidence?
 
Then you must also conclude that we are all slaves, born slaves, and will die slaves.

I guess you'd have to define all that in order for me to agree or disagree, because the way you're using those words I would say no, but I'm not sure I'm actually interpreting what you said the way you mean it.
 
Anciently, this depended on whether you were a slave or not.

In ancient times, slavery was practiced extensively.

Clearly then, slaves have no free will.

Again I have to stop here. Of course slaves had free will; it's just that exercising it may have led to harm or punishment. But choosing not to, say, try to escape is a choice. Even the choice to do nothing is a choice.
 
Since parallel universes are a science-fiction concept, they must logically remain in the realm of entertainment, not reality.
Whether or not parallel universes exist doesn't negate the question.
 
So are you asking if AI has free will?

AI is all pre-programmed.

In the movie "Blade Runner," memory implants were used in the AI Replicants.

There is going to be a Blade Runner 2 coming out soon, also with Harrison Ford again -- can't wait !!!

Are you then analogizing that we as humans are also pre-programmed?

Is predestination also pre-programming?

These notions and ideas are on the far fringes of Christianity with St. Paul's comments about predestination.

No... I'm asking whether AI can get to the point where it appears to be making choices such that it becomes impossible to distinguish between what we are calling free will in humans and an AI that simulates it. Voight-Kampff test, anyone?
 
To say that there is something called "free will" is to suggest that there is a state in which a person can lack free will. If someone did not have free will, can you tell me how exactly that person would differ from a person who did? How would you empirically measure free will; that is, what measurement or test could you do to determine whether a person didn't have free will? How would they differ from everyone else?

If your answer is that it is immaterial, that's cool, but then you have to concede that the idea is essentially meaningless.



Come down to the county jail.

I will put you in a restraint chair with chains or straps on your head, arms, waist, and ankles. Your ability to act freely will be near zero.

Then, after waiting until you're about ready to scream and gibber like a loon (20 min for most), I'll undo all the restraints and let you leave the jail and walk free into the night air.


THEN you will know the difference.
 
Try this thought experiment...

If there were ten parallel universes, each identical down to the individual atoms and sub-atomic particles, so that there is absolutely nothing different about any of them on any level, then whatever happened in one of the universes must also be happening in all of the others.
Meaning that if one person in one universe makes a decision, then their parallel selves must each necessarily make an identical decision.
Logically, there can be no other conclusion, and if you follow this logic to its inevitable end, our decisions feel consciously made of our own free will, but subconsciously, and mechanically, we are simply following the fixed laws of the universe, and therefore we are essentially robots without free will.

Thoughts?
It's like you can have randomness on a small, local scale, but on a larger scale there is a well-defined, predictable structure.
 
To say that there is something called "free will" is to suggest that there is a state in which a person can lack free will. If someone did not have free will, can you tell me how exactly that person would differ from a person who did? How would you empirically measure free will; that is, what measurement or test could you do to determine whether a person didn't have free will? How would they differ from everyone else?

Acting in accordance with desires/intentions/wants etc.
 
csbrown28 said:
For instance?

From your previous post: Not wanting to get lung cancer is founded upon the desire to live and not suffer. You don't arrive at that desire by cold rational processing of information, free of all intuitive or emotional weight. You simply have that desire; it's not a result of applying logic. Rather, you take it as a desire logic should serve. You don't smoke because you're aware there's a causal relationship between smoking and lung cancer.

csbrown28 said:
Only if, for instance we know something to be one thing but we want it to be another. That's not at all what I said. If you "crave" something, it isn't necessarily rational or good (though it can be).

Maybe you're working with a different definition of rational than I, but it doesn't seem to me like cravings could ever be rational.

csbrown28 said:
The decision to act on that desire can be overridden by other desires to actualize longer term outcomes for which there may be no apparent benefit.

OK, sure.

csbrown28 said:
I can want to jump off a building; not doing so doesn't actualize anything tangible except the belief that I avoided potential harm.

Again, sure. I'm afraid I don't see why this has much to do with the discussion. In fact, it seems to me like the important points are rapidly getting submerged...

csbrown28 said:
Why change the question?

Think of it this way: you're rather hungry, and someone offers you your favorite snack. As it happens, the snack is not unhealthy, and there's absolutely no reason you shouldn't eat it. Do you eat it? Of course you do.

It's only when you introduce a reason not to eat it that there's really any consideration of choice. The cat is a reason for the mouse not to eat the cheese, while the hunger of the mouse is a reason for it to eat the cheese. The mouse must choose whether to risk it or not.

csbrown28 said:
Though I don't remember asking that question, I will admit that it seems unlikely that a computer could "want" anything. It seems more likely that a computer simply has a goal and will try to achieve that goal in the best way it knows how.

I'm not even sure computers have goals, so much as that we don't have better language to describe the behavior of a computer or robot. Computers have goals in the same way that bullets have goals; the bullet is going to hit something, but there appears to be no mental component to its doing so. The bullet never stops in mid-air and says to itself "hey, wait a second...I don't want to hit that nice fellow over there. I'm just going to hit those bricks instead." Neither do computers. They do what they're programmed to do.

csbrown28 said:
Evidence?

Still don't see any evidence. To be clear, what I'm saying is that I've never seen any evidence of a computer doing something that couldn't be explained by reference to its program plus prevailing physical conditions at the relevant times.
 
Acting in accordance with desires/intentions/wants etc.

So in other words, it would be difficult to impossible to tell whether an entity "has" free will.

Consider a simple example: grasping an object. You have to choose a trajectory from among millions available. You can select "by reflex", or you can bring that into your consciousness and alter the reflexive solution "at will". However, the resulting behavior is the same - it's the choice of one from among many possible trajectories.
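That trajectory-selection step can be sketched as a cost minimization. This toy Python function (hypothetical, not a model of actual motor control) simply picks the shortest candidate path to the target; the selection looks the same whether the candidates were generated "by reflex" or deliberately.

```python
import math

def choose_trajectory(target, candidates):
    """Select one reach trajectory among many candidates by minimizing
    total path length to the target: one choice among many options."""
    def cost(path):
        # total length of the piecewise-linear path extended to the target
        pts = list(path) + [target]
        return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return min(candidates, key=cost)

# Three candidate paths from the origin toward the point (1, 1);
# the most direct one, [(0, 0), (0.5, 0.5)], has the lowest cost.
paths = [[(0, 0), (1, 2)], [(0, 0), (0.5, 0.5)], [(0, 0), (3, 0)]]
best = choose_trajectory((1, 1), paths)
```

The sketch deliberately says nothing about whether the cost function itself was chosen freely, which is where the free-will question reappears.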
 
Come down to the county jail.

I will put you in a restraint chair with chains or straps on your head, arms, waist, and ankles. Your ability to act freely will be near zero.

Then, after waiting until you're about ready to scream and gibber like a loon (20 min for most), I'll undo all the restraints and let you leave the jail and walk free into the night air.


THEN you will know the difference.

"Near Zero" ≠ Zero
 