
Epistemic Responsibility

This feels closer to what I'm getting at, but I need more clarification. What do you mean by an abstracted principle from belief that lends itself to moral analysis?
We define belief in such a way that we get the set of all beliefs {B1, B2, B3, B4, ... Bn}.

Then we intuitively derive the subset of morally bad or wrong beliefs {B'1, B'2, B'3, B'4, ... B'n}.

Then we abstract our criteria for a morally bad or wrong belief from that subset.

Then we derive from those criteria a principle by which belief may be said to be wrong or bad.
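
To make those four steps concrete, here is a minimal sketch in Python. Every belief, feature name, and criterion in it is invented purely for illustration; it is only a toy rendering of the process, not a claim about how the abstraction would actually be carried out.

```python
# A toy, runnable sketch of the four steps above: define the set of beliefs,
# intuitively flag a "bad" subset, abstract the criteria those bad beliefs share,
# then package the criteria as a principle. All beliefs and features are invented.

# Step 1: the set of beliefs B (each tagged with crude, made-up features).
beliefs = {
    "the earth is round":        {"evidence": "strong", "harms_others": False},
    "my neighbors are subhuman": {"evidence": "none",   "harms_others": True},
    "lying to juries is fine":   {"evidence": "none",   "harms_others": True},
}

# Step 2: the intuited subset B' of morally bad beliefs (picked by hand, not derived).
intuited_bad = {"my neighbors are subhuman", "lying to juries is fine"}

# Step 3: abstract the criteria -- the features shared by every member of B'.
shared = None
for name in intuited_bad:
    features = set(beliefs[name].items())
    shared = features if shared is None else shared & features
criteria = dict(shared)  # here: {"evidence": "none", "harms_others": True}

# Step 4: the derived principle -- a predicate any belief can be tested against.
def is_bad(features, criteria=criteria):
    return all(features.get(k) == v for k, v in criteria.items())

print(is_bad(beliefs["the earth is round"]))         # False
print(is_bad(beliefs["my neighbors are subhuman"]))  # True
```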
 

Eh, I don't know if this is sufficient for what I'm trying to get at. And the 'intuitive' derivation of morally bad or wrong beliefs seems rife with problems in itself, because the moral complexity of any given belief makes me question whether just categorizing beliefs as "good" or "bad" is enough. Being certain that all humans have an eternal soul that is going to heaven can bring me enormous peace, decrease my stress, and improve my quality of life here on earth (very good!). It can also cause me to feel a lack of empathy for the suffering of the dying or their loved ones because "they're going to a better place" (bad!), or even to be cavalier towards the mortal consequences of, say, war (very bad!).

Beliefs also come in degrees. I can be absolutely certain that I am going to die. I can tentatively accept that I have an immortal 'substance' that will persist beyond my death, but not be certain about it. Thus, I may get to experience at least some of the good benefit of belief in my immortal self without being at risk of callousness towards the deaths of others, because while I feel immortality could be true, I can't be sure enough of its truth (or at least its details) to be dismissive of the fear and pain of someone watching a loved one's demise.

Extracting a guiding principle for moral belief from the criteria of a subset of beliefs also doesn't stop people from making exceptions to those categorizations based on their own epistemology. To put it another way, even if we can accurately categorize all beliefs into "good" or "bad", if I have an epistemology that permits exceptions to that derived moral principle, and those exceptions can be held with certainty, then the guiding principle doesn't stop me from being epistemically irresponsible.

An example: say we follow your process of abstracting principles from belief for moral analysis. And say we agree on whatever principle we end up with, and we are then able to categorize any belief as "good" or "bad". If at the same time I have an epistemology that dictates I can also know that things are true because God tells me they are true, I can then subscribe to some kind of divine command theory that will allow me to override any categorization we come up with.
 
Fair enough. But I think the problems you point out are bound to crop up with any morality of belief, and this as much for the nature of morality as for the nature of belief, as neither attains to certainty. Were we able to come up with a principle of epistemic responsibility as prescriptively universal as the Golden Rule, the problems of degree, exceptions, and double-edgedness would remain. "Believe and let believe" might be the only responsible epistemic position to practice, and even it, I dare say, is not immune to philosophical cavil.
 
I can certainly appreciate how someone without the "religious bug", as you put it, finds this all unnecessary and excessively difficult. You might be right. But like it or not, humans have a strong disposition to believe in the 'supernatural' and to assign some sort of agency to cosmic forces. Whether the need to believe comes from a divine calling on our hearts or is just a glitchy byproduct of an unprecedented level of brain complexity, the need is real for most people.

There are a lot of things we THINK we need. But sometimes, all it takes is a little paradigm shift in our thinking, adopting a different perspective, to realize it falls away easily and we don't need it all that much after all.

This idea of a "need to believe" and "ultimate truth" as an inevitable part of human nature may just be a weird byproduct of certain traditional cultural paradigms and ways of thinking we have grown up with.

But humanity has reached a stage where that is no longer compatible with more useful ways of thinking we have learned just in the past 2-3 centuries. So it may be time to revisit the earlier ways of looking at things and see if they were even the right way to look at things in the first place. It seems what you are describing here is just the discomfort, the cognitive dissonance, that comes from adopting more modern and useful ways of looking at the world, and finding that they are hard to reconcile with many of the old paradigms and worldviews. You are thinking there MUST be some way to reconcile these two. The inability to do so is making you uncomfortable. You want to salvage the old model in some way or other. You are using the perspectives and vocabulary of the new paradigms to look at the old ones, and they no longer make sense. But you feel like you need them and wouldn't be able to live without them. You really don't. It's like the smoker who thinks he can't live without his cigarettes. Psychologists often use cognitive behavioral therapy (CBT) to show him that, by thinking about things slightly differently, he doesn't have to find clever new ways to fill that supposedly inevitable "need"; he may find he doesn't need it at all.

"My principal motive is the belief that we can still make admirable sense of our lives even if we cease to have … "an ambition of transcendence." "
-Richard Rorty
 
There are a lot of things we THINK we need. But sometimes, all it takes is a little paradigm shift in our thinking, adopting a different perspective, to realize it falls away easily and we don't need it all that much after all.

Yowza. That sounds like a tidy clean way of wrapping up the problem, but that is not the lived experience of a LOT of people.

Don't mistake me, if someone pulls off a faith transition that smooth and at the end of it they feel no real loss, then cool for them. I'd probably want to guide that person into humanism to avoid derailing into some kind of moral nihilism, but yeah whatever floats their boat.

The very real, tangible problem is that most faith transitions (either into another faith or into atheism) are rarely that simple, painless, or easy. And frequently people end up in atheism or agnosticism in a sort of spiritual identity fugue that doesn't ever subside. I say this both from personal experience and from observing the experiences of others. My need for spirituality and a responsible epistemology to guide it aren't stemming from a nice, clean break into agnosticism from Christianity. After seven years of trying to recohere from a faith deconstruction, I've had enough of the malaise. I'm not alone.

Also, faith transitions don't happen in a vacuum. Frequently, people in faith have a lot to lose by abandoning their religious practice: career, community, friends, family. Faith can certainly cause problems, but your prescription for its complete abandonment is totally inadequate for addressing all of the symptoms of the condition. A nice, easy transition from theism to atheism is just not in the cards for many religious people, no matter how much you (or they!) want it to be. If you are a humanist concerned with the elevation of society, you are morally obligated to take the well-being of these people into account.

This idea of a "need to believe" and "ultimate truth" as an inevitable part of human nature may just be a weird byproduct of certain traditional cultural paradigms and ways of thinking we have grown up with.

Science says there's a lot more driving the need for certainty, spirituality and ultimate truth than just cultural traditions. I agree with Richard Dawkins on that much.

But humanity has reached a stage where that is no longer compatible with more useful ways of thinking we have learned just in the past 2-3 centuries. So it may be time to revisit the earlier ways of looking at things and see if they were even the right way to look at things in the first place. It seems what you are describing here is just the discomfort, the cognitive dissonance, that comes from adopting more modern and useful ways of looking at the world, and finding that they are hard to reconcile with many of the old paradigms and worldviews. You are thinking there MUST be some way to reconcile these two. The inability to do so is making you uncomfortable. You want to salvage the old model in some way or other. You are using the perspectives and vocabulary of the new paradigms to look at the old ones, and they no longer make sense. But you feel like you need them and wouldn't be able to live without them. You really don't. It's like the smoker who thinks he can't live without his cigarettes. Psychologists often use cognitive behavioral therapy (CBT) to show him that, by thinking about things slightly differently, he doesn't have to find clever new ways to fill that supposedly inevitable "need"; he may find he doesn't need it at all.

I've been thinking a lot about how to respond to this. I am beginning to wonder if this evangelical streak in the New Atheists isn't really just the same sort of fundamentalism that plagues religion.

My first rebuttal is to ask you these:

Do humans need art?
Do they need music?
Do they need love?
And who decides how real or imagined those needs are?

My second rebuttal is to point out your (and other atheists') severe error in making religion analogous to an addictive substance. The benefits of a cigarette are so utterly outweighed by its costs that guiding people away from them is a pretty healthy thing to do. Science has a lot to say about the neurological benefits of religion, though, and they're hard to beat any other way. So you're not asking someone to let go of their need for a smoke: it's more like you're asking someone to let go of their need to be in a loving relationship.

"But relationships are so dysfunctional," you argue. "And yours is abusive. You don't actually need a relationship, you just want a relationship. Plus they're so prone to failure, why make the effort? It's an imagined desire. You'll be happier alone, you'll see."

Maybe that individual's 'relationship' is abusive, but is the fairest and kindest course of action really to sever the hand to heal the broken finger?
 
Fair enough. But I think the problems you point out are bound to crop up with any morality of belief, and this as much for the nature of morality as for the nature of belief, as neither attains to certainty. Were we able to come up with a principle of epistemic responsibility as prescriptively universal as the Golden Rule, the problems of degree, exceptions, and double-edgedness would remain. "Believe and let believe" might be the only responsible epistemic position to practice, and even it, I dare say, is not immune to philosophical cavil.

Perhaps then we're not searching for a perfect application of epistemic responsibility, but the 'best possible option'. Do you think it is even possible for an epistemology with fail-safes against immoral belief acquisition to exist? Let's say it does. How would we find it?
 
I'm not clear on what constitutes "immoral belief," but I think we might be able to tease out of thought something comparable to an immoral act in the realm of belief and call its avoidance "epistemic responsibility." Offhand, however, I don't see how "epistemic responsibility" can be related to the truth value of the belief. Both a true belief and a false belief may prove harmful or benign, it seems to me.

Am I on the scent here or have I lost it?
 

You're getting closer. I've only made it more difficult with some of my language though, so let me try to clean up.

I realize now that "Immoral belief acquisition" was a terrible statement. It also contradicts some of my earlier points, so just dump that one right out of your brain.
I think I actually want to just remove morality from the equation for a moment, because I think that too is causing confusion.

Here is what I'm looking for: an epistemology that minimizes the acquisition of personally and socially deleterious beliefs, particularly those of the spiritual variety.
You are right in saying that epistemic responsibility speaks nothing to the truth value of the belief, and I'm fine with that. Belief veracity is not my concern here: effect of belief is my concern.

If you believe the Great Pumpkin steals your soul every night and then puts it back in when you wake up, and that has absolutely no harmful bearing on how you interact with the world, then I really don't care if you believe it. If you believe that gay people are filthy sinners and it convicts you to go wave signs and scream abuse at anyone you suspect is LGBTQ, then I care very much that you believe it.

In both examples, whether or not the belief is "true" isn't actually relevant, because neither are provable. But if your epistemology permits you to accept beliefs on minimal evidence, and if your epistemology allows you to carry those beliefs with enough certainty to weigh on how you behave in the world, and if that behavior is deleterious, then the problem (in my estimation) isn't so much with the belief itself, the problem is with the way you've acquired it.

Am I making more sense?

Epistemology can be loosely described as "how we know what we know", but I think for our purposes it can also be accurately described as "what we believe about what we can be certain of." Epistemologies don't just explain belief formation, they also describe the extent of certainty. My epistemology allows me to be certain that I am a human in a chair at a computer. It does not permit me to be certain that Jesus resurrected.
 
Yes, I think this post clarifies and focuses our discussion. The classic false belief in western civilization was the geocentric view of the universe with mankind just below the angels in the Great Chain of Being, but it is arguable both that this belief had beneficial effects in the career of mankind and that it had deleterious effects.

You offer three criteria in your post for determining "epistemic responsibility":
1. degree of evidence for a belief
2. behavior based on a belief
3. nature of behavior based on belief

Where the degree of evidence is minimal and the belief is acted on and deleterious in effect, there our question of "epistemic responsibility" seems to arise. Yes?
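
Just to make the conjunction of those three criteria explicit, here is a toy sketch; the function name and the boolean inputs are invented purely for illustration, not a formal proposal.

```python
# A toy restatement of the three criteria above as one test: the question of
# "epistemic responsibility" arises when the evidence is minimal, the belief is
# acted on, and the resulting behavior is deleterious. Names are invented.

def responsibility_question_arises(evidence_is_minimal: bool,
                                    acted_on: bool,
                                    effect_is_deleterious: bool) -> bool:
    return evidence_is_minimal and acted_on and effect_is_deleterious

print(responsibility_question_arises(True, True, True))   # the problem case
print(responsibility_question_arises(False, True, True))  # well-evidenced: the question doesn't arise here
```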
 

Yep, you've articulated what I am getting at. And the obvious answer in the case of a belief held on minimal evidence and deleterious in effect is: "don't believe that anymore."

But that's a hard thing to get people to do. I'm trying to explore the possibility of an epistemology that stops people from picking up beliefs with deleterious effects in the first place, without developing some wishy-washy standard of "sufficient evidence" à la Clifford that also just happens to rule out all religious belief.
 
Minimally justified belief that has deleterious consequences -- that's what we wish to discourage or prevent in your paradigm of "epistemic responsibility."
Are we not interested also in maximally justified belief that has deleterious consequences?
That is to say, are we interested in the consequences of belief only when minimally justified?
 

I have questions about this whole business of beliefs, deleterious effects, and stopping people. Beliefs by themselves do not have deleterious effects if not acted on. What makes an effect deleterious? What standards are you using? And is it really possible to stop people from having beliefs at all?
 
Minimally justified belief that has deleterious consequences -- that's what we wish to discourage or prevent in your paradigm of "epistemic responsibility."
Are we not interested also in maximally justified belief that has deleterious consequences?
That is to say, are we interested in the consequences of belief only when minimally justified?

I've been thinking about this a lot over the last couple of days. I think the answer has to be "yes", although it also seems that 'maximally' justified beliefs that have deleterious consequences also trend towards greater net positive (or neutral) consequences and are therefore less under the spotlight of responsibility.

For example: I believe I am sitting at my desk. Every available subjective sense tells me this is so, and if I were to subject that belief to empirical analysis it would still be justifiable. I could be wrong, maybe I'm a brain in a tank, or maybe my consciousness was booted up a few seconds ago. There is no test that would permit me to know the difference. Yet given all available resources at my disposal, the belief "I am sitting at my desk" is about as maximally justified as I can imagine. Every belief with that same level of justification seems to have no real deleterious consequences that I can discern.

Beliefs tend to build on other beliefs, and the further along the gradient of justification we go, the more possible negative consequences appear. I'm not going to try and map out a line between "fission produces energy" and "we should deploy our nuclear weapons", but I think we can agree a lot of beliefs fall in between those two with increasing degrees of potential consequences corresponding to a decrease in justification.
 
Beliefs by themselves do not have deleterious effects if not acted on.

That's Clifford's point about no private beliefs, though; every belief you have gets acted upon in some way. If you believe gay people are filthy with sin, that belief will affect how you behave around LGBTQ people even if you are not being 'open' about your belief.

It can go the other way too: if you believe humanity is mostly good and/or that the world is a safe place, that belief has a profound positive physiological impact on you. That belief also improves empathy and shapes how you treat other people.

The less evidence available to justify a belief, the more important it is to be aware of the potential negative consequences of that belief and attempt to minimize them while also enjoying the benefits of a belief. Sometimes that means just choosing not to believe something you don't have great evidence for if the potential positive effects of that belief are outweighed by the potential negative effects.
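
Roughly, the weighing I have in mind looks something like this toy sketch; the numbers and the "sufficient evidence" threshold are made up for illustration and are not meant as a real metric.

```python
# A rough, invented illustration of the weighing described above: under weak
# evidence, hold a belief only if its expected upside outweighs its expected
# downside. All values and the threshold are hypothetical.

def worth_holding(expected_benefit: float, expected_harm: float,
                  evidence: float, evidence_threshold: float = 0.7) -> bool:
    if evidence >= evidence_threshold:
        return True  # well-evidenced beliefs aren't being filtered on effect here
    return expected_benefit > expected_harm

print(worth_holding(expected_benefit=0.6, expected_harm=0.1, evidence=0.2))  # comforting and benign -> True
print(worth_holding(expected_benefit=0.3, expected_harm=0.8, evidence=0.2))  # weakly evidenced and harmful -> False
```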

What makes an effect deleterious? What standards are you using?

An effect is deleterious if it harms self or others. This is obviously not a hard-and-fast measuring stick, and things can be harmful without inflicting suffering. For many beliefs the potential deleterious effects are pretty self-evident. When they are not, extra care and self-awareness are needed.

If you're wanting some scientific scale for what registers as deleterious, then that is simply not possible. There is a great deal of subjectivity at work here.

And is it really possible to stop people from having beliefs at all?

Well no, obviously not. When I talk about stopping people from having beliefs, I mean teaching people to have a critically-minded approach to belief and a level of honesty about what level of evidence we actually have for things we believe. Obviously you can't force someone not to believe something.

Everyone believes they have "good enough" reasons for their beliefs, that they've built an accurate model of the world through fair analysis. But beliefs are largely survival mechanisms, coping tools, adaptations to give us certainty. Evolution has not equipped us to see reality as it fundamentally is; we just see enough of it to get consistent results when it comes to food and clothing and mate selection and keeping our kids alive, but our intuited sense of how the world works quickly falls apart outside of the very basics of our senses. I can't perceive the cells in my body or the weight of the moon. We know about those things through a tedious, fraught, methodical process called 'science', and it doesn't come naturally.

We crave certainty because certainty means survival. Stable beliefs provide security. This phenomenon is partly what makes it so profoundly difficult to get people to modify their beliefs about the world when confronted with new evidence: it's deeply psychologically uncomfortable.

Beliefs need to be held with an open palm, not a tight fist: especially when we're holding beliefs with minimal evidence. We need some kind of system to guide that process. I'm calling it epistemic responsibility.
 

A belief does not necessarily impact behavior. There is no guidance for that. Some people handle uncertainty just fine. Most don't really think deeply enough to have it affect them. Human beings, in general, are not spending much time in deep philosophical thought. Have you ever observed them in their natural habitat?
 
I've been thinking about this a lot over the last couple of days. I think the answer has to be "yes", although it also seems that 'maximally' justified beliefs that have deleterious consequences also trend towards greater net positive (or neutral) consequences and are therefore less under the spotlight of responsibility.

For example: I believe I am sitting at my desk. Every available subjective sense tells me this is so, and if I were to subject that belief to empirical analysis it would still be justifiable. I could be wrong, maybe I'm a brain in a tank, or maybe my consciousness was booted up a few seconds ago. There is no test that would permit me to know the difference. Yet given all available resources at my disposal, the belief "I am sitting at my desk" is about as maximally justified as I can imagine. Every belief with that same level of justification seems to have no real deleterious consequences that I can discern.

Beliefs tend to build on other beliefs, and the further along the gradient of justification we go, the more possible negative consequences appear. I'm not going to try and map out a line between "fission produces energy" and "we should deploy our nuclear weapons", but I think we can agree a lot of beliefs fall in between those two with increasing degrees of potential consequences corresponding to a decrease in justification.
Thoughts turning over in my mind as we proceed:

What we're after is the relation between beliefs and harm, in order to discover what makes for harmful beliefs.

We observe harmful beliefs and look for the relation between beliefs and harm.

We propose that the relation between beliefs and harm corresponds to the relation between justification and belief.

And how is the justification of belief related to the harm of beliefs?
 

Because it seems to me that there is an inverse relationship (even if not a perfectly uniform one) between the degree of justification and the potential harm of the belief.
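
Just to gesture at the shape of that relation, here is a made-up scoring rule (not data, not a real measure; both inputs are hypothetical values between 0 and 1).

```python
# A made-up scoring rule, only to gesture at the inverse relation claimed above:
# the less justified a belief and the more potential harm it carries, the more
# scrutiny it calls for. Both inputs are hypothetical values in [0, 1].

def scrutiny_score(potential_harm: float, justification: float) -> float:
    return potential_harm * (1.0 - justification)

print(scrutiny_score(potential_harm=0.1, justification=0.95))  # well justified, low stakes   -> ~0.005
print(scrutiny_score(potential_harm=0.9, justification=0.10))  # barely justified, high stakes -> ~0.81
```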
 
A belief does not necessarily impact behavior.

I've already answered this. They always impact behavior. The impact can be nominal or maximal, but beliefs shape our model of the world, which in turn shapes how we interact with the world. Like I stated, there are no private farts in a public elevator, and when it comes to our beliefs, we are all chronically flatulent.

Some people handle uncertainty just fine.

True, but a lot of people-- I would argue most people-- do not handle it just fine, at least when it comes to significant questions that impact their life. Does my spouse love me? Is there a God? What will happen tomorrow? If the answer is truly "I don't know" for those questions, then the person asking them is probably going to start searching for answers pretty fast. Uncertainty indicates an inaccurate or incomplete model of the world, and our brains are evolved to develop a model that is accurate enough to keep us safe and prosperous. We have a natural drive to experience certainty because certainty = survival.

If you just want to insist I'm wrong you are welcome to, but you are going to have to provide some pretty compelling data. The certainty bias is a very well-documented scientific phenomenon in both neurology and psychology. Just google "certainty and the brain" and pick something to read.

Most don't really think deeply enough to have it affect them.

Do you have some data to support that? You are way outside the consensus of the scientific community with that claim. The feeling of certainty is a largely unconscious mechanism.

Human beings, in general, are not spending much time in deep philosophical thought.

The certainty bias is a largely unconscious mechanism. It can become conscious if the discomfort of uncertainty becomes strong enough, but how and why people settle into certainty about things isn't because everyone is thinking really hard. In fact, often quite the opposite.

Have you ever observed them in their natural habitat?

Have you? Watch a documentary on Netflix called "Behind The Curve"-- it's about flat earthers-- and then tell me certainty requires lots of careful philosophy.
 

You are wrong.

Watching a documentary is not observing human beings in their natural environment. Look at human behavior all around you.
 
You are wrong.

Great. Show me some data.

Watching a documentary is not observing human beings in their natural environment.

Um, ok... not sure why that has any bearing on my argument, but if the documentary is unsatisfactory to you, then go have a look at flat earthers on Twitter. Or talk to one. My point is, people arrive at certainty without having to think very hard.

Look at human behavior all around you.

I'm glad I used the flat earther example because it segues nicely into a critique of your argument: I've presented you with arguments based on science and you've essentially just declared "Look at the horizon all around you. It's obviously flat."
 
Because it seems to me that there is an inverse relationship (even if not a perfectly uniform one) between the degree of justification and the potential harm of the belief.
Let's assume we've made out a case for our inverse law, developed a strong argument for the inverse relation between justification and harm.
Let's also assume we've defined "harm" in a rationally adequate manner. (Here, by the way, is where we might reintroduce morality.)

The sticking point, it seems to me, is with this central notion of justification. Not surprising, of course: epistemology came a cropper in the 20th century over this issue.
If "epistemic responsibility" is essentially a question of justification, then this is the concept we need to unpack.
 

I agree. And I realize that epistemic justification is a big and messy world with various competing theories, and I fear being sucked into its black hole, but evidentialism is really what I'm talking about in the 'inverse law' aspect of the argument.

To be clear, I'm not proposing that evidentialism is the best or truest model of all the other 'isms' of epistemic justification. I am simply proposing that there is a relationship between harm and justification, according to how justification is defined by that particular model.

So when I talk about a belief being justified or unjustified, I'm really talking about the relationship between the lines of evidence and the attitude towards the belief held on the basis of that evidence. If your degree of certainty and your degree of evidence are not in proportion, your belief is "unjustified" (evidentially speaking).

If I insist that I am a potato because I am round and beige, you don't have to work very hard to point out how my belief in my potato-ness is unjustified by the evidence I'm using to support the belief.
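
In toy form, that evidentialist framing might be glossed like this; the function and the numbers are invented for illustration and this is just my sketch, not a standard formalism.

```python
# A toy gloss on the evidentialist framing above: a belief counts as "justified"
# only when the confidence placed in it does not outrun the support the evidence
# actually provides. Both values are hypothetical degrees in [0, 1].

def evidentially_justified(certainty: float, evidential_support: float) -> bool:
    return certainty <= evidential_support

print(evidentially_justified(certainty=0.99, evidential_support=0.2))  # the "I am a potato" case -> False
print(evidentially_justified(certainty=0.6, evidential_support=0.7))   # tentative and well-supported -> True
```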
 
Great. Show me some data.

Um, ok... not sure why that has any bearing on my argument, but if the documentary is unsatisfactory to you, then go have a look at flat earthers on Twitter. Or talk to one. My point is, people arrive at certainty without having to think very hard.

I'm glad I used the flat earther example because it segues nicely into a critique of your argument: I've presented you with arguments based on science and you've essentially just declared "Look at the horizon all around you. It's obviously flat."

You presented me with a documentary, which is about as scientific as an episode of "Survivor." There is no scientific evidence that human beings need to be certain. It may or may not be a built-in mechanism for everyday behavior, but that does not mean it is needed as a conscious, big-picture approach to life in order to survive. When we know where our next meal is coming from, have some social satisfaction, and have some things that interest and entertain us, we make it day to day. But one crisis is enough to spark doubt. This doubt may throw us off, but it doesn't necessarily create a fatal crisis. We seem to adapt pretty well naturally, just like other animals do.

If people arrive at certainty unconsciously there is nothing they can do to change it consciously. We are more like what we view as lower animals than we think. Most of life is unconscious.

This heavy philosophising you are doing here doesn't really describe human behavior at all.
 