• This is a political forum that is non-biased/non-partisan and treats every person's position on topics equally. This debate forum is not aligned to any political party. In today's politics, many ideas are split between and even within all the political parties. Often we find ourselves agreeing on one platform but some topics break our mold. We are here to discuss them in a civil political debate. If this is your first visit to our political forums, be sure to check out the RULES. Registering for debate politics is necessary before posting. Register today to participate - it's free!

Just Plain Wrong

Have you taken a statistics class before? Depending on what you're testing, p < 1%, p < 5%, or p < 10% could all qualify as statistically significant. Anything over 10% is statistically insignificant, because the chance of the result occurring by chance alone is too great to call it different from the control.
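For concreteness, here is a minimal sketch of how a single result fares against those three thresholds (Python standard library only; the z-statistic of 1.8 is a made-up example value, not from any study in this thread):

```python
from statistics import NormalDist

def two_sided_p(z):
    """Two-sided p-value for a z-statistic against a standard normal null."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_sided_p(1.8)  # hypothetical test statistic
for alpha in (0.01, 0.05, 0.10):
    verdict = "significant" if p < alpha else "not significant"
    print(f"alpha = {alpha:.2f}: p = {p:.3f} -> {verdict}")
```

With z = 1.8 the p-value is about 0.072, so the same result is significant at the 10% level but not at 5% or 1%, which is exactly why the chosen threshold matters.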

Wait, so you also believe that 5% of a population can be called "statistically significant"? If so, then you don't have a clue what statistically significant means. Instead of accusing me of not taking a stats course, you should know what you are talking about.
 
You probably weren't going to donate the 100 bucks anyways.

BTW, I'll certainly donate the $100 when someone actually provides what was asked for instead of posting something totally different and pretending that it was what was asked for.
 
Wait, so you also believe that 5% of a population can be called "statistically significant"? If so, then you don't have a clue what statistically significant means. Instead of accusing me of not taking a stats course, you should know what you are talking about.

Once again, have you taken a statistics class before?

Clearly you do not understand how a test is performed. You don't test populations, you test sample sizes (of size n) because measuring entire populations is virtually impossible or impractical. Allow me to explain the most rudimentary concept of statistics you can possibly learn in a statistics class:

1. You need to test something.
2. You determine what population it applies to.
3. You acquire a sample from the population. The sample must not be too small, or you increase the chance that the data and results are inaccurate.
4. You have a control group and a "treatment" group (the "treatment" group can be named anything; it is simply the group you are testing. The control group is not affected by your test; it exists so you have something to compare the "treatment" group against after the test is performed.)
5. You randomly (ideally, by simple random sampling) assign members of the sample to the two groups.
6. Perform the test.
7. Get the p-value.
8. If the p-value is less than the predetermined alpha (as I posted earlier, 10% is the highest commonly used alpha), you can conclude the result is unlikely to be due to chance; therefore it is statistically significant.


The most rudimentary concept in statistics. If you still do not understand it after taking a statistics course, well, good luck to you.
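As a rough sketch of steps 1 through 8, here is a toy permutation test in Python (standard library only; the data, the 1.5-unit effect, and the 5% alpha are all made-up assumptions for illustration):

```python
import random

random.seed(0)  # reproducibility only

# Steps 1-3: something to test, a population it applies to, and a sample from it.
# Here the "sample" is 40 made-up measurements from a hypothetical population.
sample = [random.gauss(10, 2) for _ in range(40)]

# Steps 4-5: randomly assign sample members to control and treatment groups.
random.shuffle(sample)
control, treatment = sample[:20], sample[20:]

# Step 6: perform the test. We pretend the treatment shifts each value up by 1.5.
treatment = [x + 1.5 for x in treatment]

# Step 7: get a p-value via a permutation test: how often does randomly
# relabeling the pooled data produce a group difference at least as extreme
# as the one observed?
def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)
pooled = control + treatment
trials = 5000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[20:]) - mean(pooled[:20])) >= abs(observed):
        extreme += 1
p_value = extreme / trials

# Step 8: compare against a pre-chosen alpha (0.05 here, an arbitrary choice).
alpha = 0.05
print(f"p = {p_value:.4f}: " + ("statistically significant" if p_value < alpha else "not significant"))
```

The permutation test stands in for whatever formal test (a t-test, say) a real analysis would use; the point is only the shape of the procedure.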
 
Once again, have you taken a statistics class before?

Clearly you do not understand how a test is performed. You don't test populations, you test sample sizes (of size n) because measuring entire populations is virtually impossible or impractical. Allow me to explain the most rudimentary concept of statistics you can possibly learn in a statistics class:

1. You need to test something.
2. You determine what population it applies to.
3. You acquire a sample from the population. The sample must not be too small, or you increase the chance that the data and results are inaccurate.
4. You have a control group and a "treatment" group (the "treatment" group can be named anything; it is simply the group you are testing. The control group is not affected by your test; it exists so you have something to compare the "treatment" group against after the test is performed.)
5. You randomly (ideally, by simple random sampling) assign members of the sample to the two groups.
6. Perform the test.
7. Get the p-value.
8. If the p-value is less than the predetermined alpha (as I posted earlier, 10% is the highest commonly used alpha), you can conclude the result is unlikely to be due to chance; therefore it is statistically significant.


The most rudimentary concept in statistics. If you still do not understand it after taking a statistics course, well, good luck to you.

:rofl

Excellent. Now that you've done that, go back and read the discussion between ecofarm and me. I'll give you a very small hint: you just supported my side of that discussion.
 
Oh and this is from Wikipedia. I could have just as easily pulled it from a stat textbook if I still had one on me. I suppose I should have let him look it up himself...but whatever. You probably weren't going to donate the 100 bucks anyways.

That's hilarious.

It is. The part that he bolded is the part that I quoted and linked to the wiki article (post #740, this thread), before he posted. I posted it above him and he acts like I never saw it. Bizarre.


Anyway, I know 5% is just the "conventional" or common level of significance. But doesn't that make it more appropriate in context?
 
It is. The part that he bolded is the part that I quoted and linked to the wiki article, before he posted. I posted it above him and he acts like I never saw it. Bizarre.


Anyway, I know 5% is just the "conventional" or common level of significance. But doesn't that make it more appropriate in context?

And despite your apparent inability to read your own link, it says nothing at all about 5% of a population being statistically significant, which is what your claim was when you said:

... color blindness is probably statistically significant (5%) and thus normal.

Statistical significance is not a measure of a total population. It means that a research finding is not likely to have occurred by chance. It does not mean "5%", as you claimed.

What the 5% (or whatever alpha is chosen) actually means is that there is a 95% certainty that the results were not due to random chance. 5% of the time, though, the results are actually just due to random chance.
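That reading of alpha can be sanity-checked by simulation: when nothing real is going on, roughly 5% of tests still cross the p < 0.05 line by chance (a sketch using only the Python standard library):

```python
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()  # standard normal

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under a true null hypothesis."""
    return 2 * (1 - nd.cdf(abs(z)))

# Simulate 20,000 experiments where the null is true: the test statistic is
# pure standard-normal noise, so every "significant" result is a false positive.
trials = 20000
false_positives = sum(1 for _ in range(trials) if two_sided_p(random.gauss(0, 1)) < 0.05)
rate = false_positives / trials
print(f"false positive rate at alpha = 0.05: {rate:.3f}")  # hovers near 0.05
```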

When you called "color-blindness" statistically significant, you were using the term incorrectly.


Edited to correct terminology error.
 
He never said "5% of a population." I'm not sure if he was referring to a study with the color-blindness, but whether he was or not I'm pretty sure it was assumed that any data or hypothetical reasoning was from a sample, not a population.

+/- one standard deviation from the mean on a normal curve includes 68% of the population. Once you get past one SD from the mean you are firmly in the realm of abnormal, since you are either in the top or bottom 16% with said characteristic.
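Those 68% / 16% figures can be verified directly from the standard normal CDF (Python standard library):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, SD 1

within_1sd = nd.cdf(1) - nd.cdf(-1)  # fraction within +/- 1 SD, about 0.683
upper_tail = 1 - nd.cdf(1)           # fraction beyond +1 SD, about 0.159

print(f"within 1 SD: {within_1sd:.1%}, top tail: {upper_tail:.1%}")
```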

This is what I first saw, which (at least in statistics) doesn't make much sense. Especially because 10% on each side of a normal curve is almost always the maximum any statistician would use in determining if something occurred by chance (normally) or due to something else.
 
I should say that you were right about "5% of a population" being inaccurate and that I made the mistake of thinking you did not understand what you were saying. My bad! :doh
 
But CC said it is defined in stat. Was he trying to hide a value judgement in math?



Total color blindness is not 1/20. Partial (especially minor) color blindness is probably statistically significant (5%) and thus normal.

To be fair, ecofarm never said 5% of the population. That was Tucker Case. I don't believe that ecofarm meant 5% of the population, instead it meant you could prove the hypothesis that "partial (especially minor) color blindness isn't abnormal" if you used 5% as your level of significance.

Either way, you are not arguing the same thing. When something falls within two standard deviations (or 95%) it is considered normal. That is different from saying 95% of the population. Tucker Case argued that, using two standard deviations as a model, color blindness would be considered normal, which is inaccurate unless the statistical analysis was done on the entire population. I would wager that if you did a statistical analysis on a group (you choose the size), color blindness would not fall within the two standard deviations.
 
To be fair, ecofarm never said 5% of the population. That was Tucker Case. I don't believe that ecofarm meant 5% of the population, instead it meant you could prove the hypothesis that "partial (especially minor) color blindness isn't abnormal" if you used 5% as your level of significance.

Either way, you are not arguing the same thing. When something falls within two standard deviations (or 95%) it is considered normal. That is different from saying 95% of the population. Tucker Case argued that, using two standard deviations as a model, color blindness would be considered normal, which is inaccurate unless the statistical analysis was done on the entire population. I would wager that if you did a statistical analysis on a group (you choose the size), color blindness would not fall within the two standard deviations.

That's correct. Thank you.
 
He never said "5% of a population." I'm not sure if he was referring to a study with the color-blindness, but whether he was or not I'm pretty sure it was assumed that any data or hypothetical reasoning was from a sample, not a population.

First, he called a population "statistically significant" because they totaled 5%.

Second: a population is the group from which a sample is drawn. Since the normal distribution being discussed was not that of a sample, but that of the total population, we are clearly talking about populations. Your assumption is based on your ignorance of what was being discussed, as you have admitted (I put your admission of ignorance in bold for you so that you are aware of that admission).

Third: He was obviously not referring to a study, which is abundantly clear by virtue of his answer to the very question "What study are you referring to?". See posts 739 and 740 for evidence of this. It's important to not be ignorant of what's being discussed before injecting yourself into a discussion.



This is what I first saw, which (at least in statistics) doesn't make much sense. Especially because 10% on each side of a normal curve is almost always the maximum any statistician would use in determining if something occurred by chance (normally) or due to something else.

Now who's never taken a stats class?

What does my statement that you quoted have to do with determining chance?
 
First, he called a population "statistically significant" because they totaled 5%.

Second: a population is the group from which a sample is drawn. Since the normal distribution being discussed was not that of a sample, but that of the total population, we are clearly talking about populations. Your assumption is based on your ignorance of what was being discussed, as you have admitted (I put your admission of ignorance in bold for you so that you are aware of that admission).

Third: He was obviously not referring to a study, which is abundantly clear by virtue of his answer to the very question "What study are you referring to?". See posts 739 and 740 for evidence of this. It's important to not be ignorant of what's being discussed before injecting yourself into a discussion.





Now who's never taken a stats class?

What does my statement that you quoted have to do with determining chance?

Yes he worded it wrong but his implications were obvious. Read what Chaddelamancha just recently posted. And then Ecofarm's response.

As for my quote, that's all normal curves are in statistics. They show a distribution of all the possible outcomes. And when tests are performed you are measuring whether or not something was due to chance.

What I was getting at with my post is that when you suddenly claim that everything beyond 1 SD on a normal curve is "abnormal," it makes no sense. Even if cutoffs are arbitrary, which you said after that quote, it still makes no sense because it doesn't apply like that. But this is getting old so I'm gonna call it a day with this thread.
 
I don't believe that ecofarm meant 5% of the population, instead it meant you could prove the hypothesis that "partial (especially minor) color blindness isn't abnormal" if you used 5% as your level of significance.

This also shows an ignorance of what statistical significance is. Statistical significance would have no bearing on normal or abnormal under any circumstances.

Statistical significance ALWAYS, 100% of the time, refers to the likelihood that the results of a study occurred by chance.

There's a reason I asked him what study he was referring to when he used that term.

He may not have used the words "5% of a population" but no matter how you cut it, that's what his comment was doing.


Either way, you are not arguing the same thing. When something falls within two standard deviations (or 95%) it is considered normal. That is different than saying 95% of the population.

It isn't different at all. The 95% refers to the share of the total population that falls within two standard deviations of the mean (two above and two below, a band four standard deviations wide).

[Image: standard deviation diagram of the normal distribution]


There would only be about 2.3% of the population further than two SDs above the mean, and only about 2.3% of the population more than two SDs below the mean.

If you use the two SD rule to define normal, then you are saying that 95% of the population is normal.
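The two-SD numbers check out the same way (Python standard library; the exact per-tail figure is about 2.3%):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal

within_2sd = nd.cdf(2) - nd.cdf(-2)  # about 95.4% within +/- 2 SD
each_tail = 1 - nd.cdf(2)            # about 2.3% beyond +2 SD (same below -2 SD)

print(f"within 2 SD: {within_2sd:.2%}, each tail: {each_tail:.2%}")
```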

Tucker Case argued that using the two standard deviations as a model, then color blindness would be considered normal, which is inaccurate, unless the statistical analysis was done on the entire population.

Colorblindness doesn't fall on a bell curve, so I was actually using the 95% rule of thumb that coincides with the normal distribution. Once we make "normal" a product of percentages, where 95% are normal, and 5% are not, we have an easy way to calculate "normal" for distributions that don't follow the bell curve based on the percentage of population.

Colorblindness is believed to be as much as 10% of the male population. By using the rule of thumb related to percentages, it would be normal.
 
And when tests are performed you are measuring whether or not something was due to chance.

Keyword in bold. Nothing you quoted from me discussed tests in any way. The first time tests were even introduced was when eco misused the term "statistically significant".

No test exists which defines normality.

Even if cutoffs are arbitrary, which you said after that quote, it still makes no sense because it doesn't apply like that. But this is getting old so I'm gonna call it a day with this thread.

If you agree that the cut offs are arbitrary, why doesn't it make sense to you?

Normal is simply another word for common. That which is encompassed by 68% of the population is certainly common, no?
 
BTW, there is a reason I chose the arbitrary cut off of one standard deviation when one wants to use a statistical definition of "normal" that relates to the topic at hand and it wasn't so that I could have homosexuality in the "abnormal" range (it might fall into that range if 2 standard deviations from the mean was used as well).

It was done that way so that a great many other things would fall into the "abnormal" range that people do not want to be considered abnormal.

Ultimately, it was to illustrate the asinine and fallacious nature of the "normal" debate, regardless of the definition of "normal" one chooses to use. Normal =/= good, abnormal =/= bad.

Basically, I'm saying that I understand the resistance to my choice to limit "normality" to one standard deviation from the mean. But it's important to remember that such resistance was the goal of my decision to limit it in this way.
 
BTW, there is a reason I chose the arbitrary cut off of one standard deviation when one wants to use a statistical definition of "normal" that relates to the topic at hand and it wasn't so that I could have homosexuality in the "abnormal" range (it might fall into that range if 2 standard deviations from the mean was used as well).

It was done that way so that a great many other things would fall into the "abnormal" range that people do not want to be considered abnormal.

Ultimately, it was to illustrate the asinine and fallacious nature of the "normal" debate, regardless of the definition of "normal" one chooses to use. Normal =/= good, abnormal =/= bad.

Basically, I'm saying that I understand the resistance to my choice to limit "normality" to one standard deviation from the mean. But it's important to remember that such resistance was the goal of my decision to limit it in this way.

Yeah the only problem I had with that was that significance level is determined somewhat arbitrarily, but that's only in tests. Since it wasn't a test, and therefore no significance level, it was just a matter of opinion, wasn't it? An educated opinion because you've taken statistics... But an opinion nonetheless?
 
Yeah the only problem I had with that was that significance level is determined somewhat arbitrarily, but that's only in tests. Since it wasn't a test, and therefore no significance level, it was just a matter of opinion, wasn't it? An educated opinion because you've taken statistics... But an opinion nonetheless?

Absolutely.


Although I should add that since I was doing it to essentially make that point, it would probably be more appropriate to say it how I decided to portray my opinion, rather than it being my actual opinion of "normal".

Basically, the point is that what one considers "normal" is always an opinion statement, even when one tries to use a statistical basis for their opinion. The arbitrary nature of all possible statistical cut-offs means that the entire debate is a waste of time.

I apologize to everyone involved for the way that the illustration of this point played out, because it was not my intention to have it go that way. For some reason, I allowed myself to get sucked into the statistical debate and actually lost sight of my intended purpose. I had one of those major brain-fart moments. Shortly before I posted my last post explaining myself, it dawned on me that I was being an idiot by arguing so strongly for a position I didn't even hold (I basically reject the idea that an objective determination of "normal" really exists).

Again, my apologies for this lapse.
 
This also shows an ignorance of what statistical significance is. Statistical significance would have no bearing on normal or abnormal under any circumstances.

Semantics. Would it have appeased you if I had stated the hypothesis as "the chance of someone being partial (especially minor) color blind"? It's silly to ridicule people because of semantics.

As for rest about defining normal as +/- one deviation? Sounds like an opinion I could get behind. Honestly I have given up defining normal/abnormal for others a long time ago. My definition is pretty far out there in regards to abnormal compared to most.
 
CC, your typical debate style of no substance and bullying and bogarting does NOT work with me... your twisting and fabricating of others' intent and meaning will not work either... you will NOT win this debate by asking a perpetual question ad nauseam. I've answered your question several times... your twisting of my response to fit your need to avoid answering mine is typical of your avoidance tactics... so again:
Define how homosexuality is Normal.

Actually, lpast, you cannot win this. There is no one at DP more stubborn than I. You have not answered the question in any satisfactory way. In this thread alone, you have given a definition, then contradicted the very definition you gave, demonstrating your hypocrisy on the issue. Once you answer the question, you will get my answer... not before.

Define normal.
 
Probably a dumb question but what constitutes normal, statistically? Within two standard deviations?

It depends on the hypothesis and on the level of deviation the person performing the test is using to define "within normal limits." Most times this would be, as you said, two standard deviations, though sometimes it is one.
 
Statistics courses are for suckas. :lol:

They will give you a formula for figuring out the likelihood of a coin coming up heads after coming up tails 99 times in a row. That formula doesn't tell you the truth. The truth is it is still 50/50 heads/tails. The outcome is not dependent upon previous coin flips.

Actually, statistics WILL give you the outcome of it being 50/50 regardless of how many times it has come up heads before. The difference is whether you are measuring independent events (the case you just pointed out) or dependent events (like drawing cards from a deck without replacement).
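The independent-versus-dependent distinction is easy to see by simulation (a sketch, Python standard library only): coin flips don't care about history, while card draws without replacement do.

```python
import random

random.seed(2)

# Independent events: each fair-coin flip is 50/50 regardless of what came before.
n = 100_000
heads_rate = sum(1 for _ in range(n) if random.random() < 0.5) / n
print(f"heads rate: {heads_rate:.3f}")  # stays near 0.500

# Dependent events: once a heart leaves the deck, the odds shift.
# P(second card is a heart | first card was a heart) = 12/51, not 13/52.
deck = ["heart"] * 13 + ["other"] * 39
hits = trials = 0
for _ in range(40_000):
    first, second = random.sample(deck, 2)  # two draws without replacement
    if first == "heart":
        trials += 1
        hits += second == "heart"
cond = hits / trials
print(f"P(heart | heart already drawn): {cond:.3f}")  # near 12/51, about 0.235
```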
 
Semantics. Would it have appeased you if I had stated the hypothesis as "the chance of someone being partial (especially minor) color blind"? It's silly to ridicule people because of semantics.

I'm not trying to ridicule anyone.

I'm just a math nerd who had a dumb moment from acting like a math nerd in this debate.


As for rest about defining normal as +/- one deviation? Sounds like an opinion I could get behind. Honestly I have given up defining normal/abnormal for others a long time ago. My definition is pretty far out there in regards to abnormal compared to most.

Which is basically the point I was getting at.
 
Depending on the hypothesis and the level of deviation the individual performing the hypothesis is looking for to define "within normal limits". Most times, this would be, as you said, two standard deviations, though sometimes it is one.

Assuming a normal distribution, that is. If the distribution is a true bimodal distribution, for example, you could end up with a situation where far more of the population than the usual 5% falls outside of two standard deviations from the mean of that distribution (though never more than 25%, the cap set by Chebyshev's inequality).
 