
A Planned Parenthood Clinic In Wisconsin Was BOMBED Last Night

Who bombed the Wisconsin Planned Parenthood?


  • Total voters: 54
Obviously you do not understand the nuance required to make sense of statistics. Sorry.

Then why not actually explain what we are missing, instead of repeating the same claims that have seemingly been debunked?

If there were 100 volcanic eruptions globally out of two million volcanoes, the eruption rate would be low. But if all 100 eruptions occurred on a single small island in the Pacific that had only one volcano, then the island's eruption rate would be extremely high.
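The base-rate point in the analogy comes down to the choice of denominator; a quick sketch using the analogy's own hypothetical numbers:

```python
# Hypothetical numbers from the volcano analogy: the same 100 eruptions
# give wildly different rates depending on which population you divide by.
eruptions = 100
global_volcanoes = 2_000_000  # spread over every volcano on Earth
island_volcanoes = 1          # concentrated on one island's lone volcano

global_rate = eruptions / global_volcanoes  # 0.00005 — negligible
island_rate = eruptions / island_volcanoes  # 100.0   — extreme
print(global_rate, island_rate)
```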

Your claim included possibly tens of millions of people in two major countries (Gallup measures a fifth of the US population as being against abortion in all circumstances). That isn't a small island.
 
Since the vandal and trespasser aim to intimidate providers and patients, then yes, it is terrorism.

I'm sorry, you're not going to convince me graffiti should be categorized as 'terrorism.'

Why don't you just go through life telling anyone who asks that abortion providers face no risks and that the amount of violence against them is not that high? I'm sick of dealing with you loons.

You're attacking straw men again: no one claimed abortion providers "did not face risk." What was challenged was your claim of inevitable violence.
 
So we are including vandalism and trespassing in the category of "terrorist attack"?


LOL

Yes.

Both vandalism and trespassing disrupt the activities of the clinic, which is the goal of those terrorists: using violence to interfere with their normal operations.
 
Why would that inform the argument when you cited the inevitable behavior of people who adopt an anti-abortion position? That argument indicates that a very large majority of the millions of people who adopt such a position should be out committing these acts of violence. But YOUR figures indicate the exact opposite.

Not true.

What that argument indicates is that someone will commit an act of violence.
 
Yes.

Both vandalism and trespassing disrupt the activities of the clinic, which is the goal of those terrorists: using violence to interfere with their normal operations.

Vandalism and trespassing are not classified as violent crimes.
 
Vandalism and trespassing often involve more than graffiti. Your argument is dishonest.

While there certainly are other forms of vandalism, graffiti would also be classified as vandalism.
 
Not true.

What that argument indicates is that someone will commit an act of violence.



No, he wrote that the logical result of 'religious brainwashing' and of listening to people describe the fetus as a baby is a person going out and doing things like bombing an abortion clinic. And it's the same position he has been defending since he made it.
 
It shows here that 25% of all clinics have experienced severe violence. I think that's significantly higher than in most other fields.
National Clinic Access Project - Clinic Surveys - Feminist Majority Foundation

I looked into the 2010 report from your Feminist Majority Foundation and found reason for concern.

First of all, the basic math does not add up. They list 357 respondents to their violence and harassment survey. They claim that 23.5% of the respondents reported "severe violence," which would be 84 out of 357 respondents. They defined "severe violence" as one of 11 possible acts, and they listed the percent of respondents who claimed to be victims of each type. The basic incongruity is this: there is no possible way the numbers they list can add up to 23.5% of the 2010 respondents claiming to be victims of a severe violent act. If you do the math and even generously assume that all categories are mutually exclusive (each responding facility was listed in only one category; the report explicitly states this is not the case), the maximum number of facilities that could have reported an act of severe violence is only roughly 14.5% of respondents (or 72 clinics). It is worse, however, as the report mentions that the violence was concentrated in a small number of clinics (so one clinic could have acts listed in two or more categories). This means that several clinics are counted more than once in the aggregate sum of percentages (14.5%, or 72 clinics). So the actual number of clinics that were the target of "severe violence" as defined in the study is likely a good deal less than even the 14.5% aggregate. There is literally no possible way, given the statistics they provide, that 84 clinics (23.5% of responding clinics) could have reported being the target of an act of "severe violence."

On an ethical level, the representation of the data on the page you present is trash. The graph's title says it represents the "Percentage of Clinics Experiencing Severe Violence." That is extremely misleading: it is actually the percentage of responding clinics that reported experiencing severe violence. Roughly 40% (238) of the clinics did not respond to the 2010 survey. While one might think it safe to generalize the responders' percentage to the non-responding clinics, it could easily be the case that many clinics did not respond precisely because they were not the target of any violent attacks (we call this a "selection effect"). This alternative explanation is supported by evidence in the most recent report that the number of clinics reporting no violence at all is increasing rapidly. If you run the numbers including the non-respondents, only about 12% of all clinics could have reported being the target of an act of "severe violence." Taking into consideration the summation issue addressed in the previous paragraph, the actual percentage is lower still. The graph's title would thus lead people to believe that the percent of clinics reporting at least one act of severe violence is over twice what it probably is.

Combining the shady number crunching with the misrepresentation of those numbers leads me to believe that the people conducting and reporting this study are scientifically incompetent at best, or ethically bankrupt and agenda-driven at worst. Considering that these are probably professionals putting the study together, I think the more likely explanation is the latter.
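The bounds argued above can be sanity-checked with a short sketch. The counts (357 respondents, 238 non-respondents) are quoted from the report; the 20.1% summed category rate is an assumption taken from later in the thread, where the 14.5% figure is flagged as an arithmetic slip:

```python
# Figures as quoted from the 2010 survey discussion above; the 20.1%
# summed category rate is an assumption (the 14.5% in the post is a slip).
respondents = 357
nonrespondents = 238
total_clinics = respondents + nonrespondents      # 595 clinics surveyed

summed_rate = 0.201                                # sum of per-category rates
max_reporting = round(summed_rate * respondents)   # 72 — upper bound even if
                                                   # the categories never overlap
claimed = round(0.235 * respondents)               # 84 — the 23.5% headline

print(claimed > max_reporting)                     # True: the headline exceeds
                                                   # what the categories allow

# Best-case rate across ALL surveyed clinics (non-respondents unknown):
floor_pct = 100 * max_reporting / total_clinics
print(round(floor_pct, 1))                         # ~12.1%
```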
 
No, he wrote that the logical result of 'religious brainwashing' and of listening to people describe the fetus as a baby is a person going out and doing things like bombing an abortion clinic. And it's the same position he has been defending since he made it.

Yes, his argument is that "someone" will commit an act of violence, and it is not what you dishonestly claimed in your earlier post:

That argument indicates that a very large majority of the millions of people who adopt such a position should be out committing these acts of violence.
 
I looked into the 2010 report from your Feminist Majority Foundation and found reason for concern.

tl;dr
 
I looked into the 2010 report from your Feminist Majority Foundation and found reason for concern.

Thank you for this thoughtful analysis. I'm one of those who tend to gloss over numbers and accept them at face value, so I appreciate your effort. I shouldn't be so credulous.
 
I stopped reading when he claimed that 14.5% of 357 = 72

Ha, that is hilarious! Must have been a math error. The statistics chart clearly shows a 20.1% aggregate for severe violent crime (which does roughly equal the 72 number he mentioned). And his argument, sans that 14.5% number, appears sound. Still, the difference between the 20.1% shown in the graph and the 23.5% claim is interesting. It would indicate that severe violent crime has held relatively steady among reporting clinics over the last 10 years, which gets rid of the uptick at the end. That would also increase the difference discussed in his second argument.
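The slip is easy to verify in one line; 14.5% of 357 respondents is nowhere near 72 clinics, while 20.1% matches:

```python
# Check the two candidate rates against the 72-clinic figure in question
respondents = 357
print(0.145 * respondents)  # ≈ 51.8 — cannot yield 72 clinics
print(0.201 * respondents)  # ≈ 71.8 — rounds to 72
```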
 
Thank you for this thoughtful analysis. I'm one of those who tend to gloss over numbers and accept them at face value, so I appreciate your effort. I shouldn't be so credulous.

What you should do is check Cyclones' "math"
 
Ha, that is hilarious! Must have been a math error. The statistics chart clearly shows a 20.1% aggregate for severe violent crime (which does roughly equal the 72 number he mentioned). And his argument, sans that 14.5% number, appears sound. Still, the difference between the 20.1% shown in the graph and the 23.5% claim is interesting. It would indicate that severe violent crime has held relatively steady among reporting clinics over the last 10 years, which gets rid of the uptick at the end. That would also increase the difference discussed in his second argument.

That's not the only mistake you made.

The 23.5% claim? Take a look at the graph and ask yourself why you used only #'s from 2010.

IOW, you screwed up all over the place.
 
23.5% is straight from the graph and from the report; it isn't my fault that the number is unsupported by the other data in the report. I only used the #'s from 2010 for two reasons:

1) It is the most recent published report and the most relevant to the situation we see today. The most current report should represent the most polished methodology of all the reports, as ideally they would learn from and adjust for any mistakes or errors in previous reports. It would be silly for someone like me to criticize them for mistakes in older reports that they have already corrected in more recent ones.

2) Going through the reports takes time and I only have so much. Reviewing several reports would take considerably more time and unfortunately my real job has priority. I have made no remarks concerning the validity of previous reports and the assumption of validity is represented in several of my recent comments regarding trend data. If I am wrong in that assumption, please let me know.

Regarding your last point, that I "screwed up all over the place": please be specific. Other than a minor math error that had no significant effect on the conclusion of my analysis, what errors have I made? I believe I have provided enough information for an informed, critical dialogue beyond a simple "YOU ARE WRONG!" I took the time to critique the report; I was specific, detailed, and open about my analysis. If you are going to critique my analysis, I simply ask that you do the same. I would love to be 100% accurate in every little thing, but I only have so much time to double-check everything and I am only human. If there are as many mistakes as you indicate, I welcome the knowledge, as I absolutely despise disseminating misinformation. Please use more than one sentence in your critique, as I would like to understand how you believe each potential mistake specifically impacts my overall analysis. Basically, please give me more than: you made mistakes, the whole thing is crap!
 
The graph clearly presents the 23.5% number as the result of attacks from 1993-2010. Therefore, if you're going to try to honestly debunk their claim, then you will have to use all of the reports going back to 1993.

If you don't have time to do that, then you shouldn't claim that they are wrong. It is dishonest.

And as far as detailing your mistakes, I have already done so. Do I really need to explain it as:

1) You left out the #'s for 2009

2) You left out the #'s for 2008

3) You left out the #'s for 2007

and so on and so forth back to 1993?

And as far as your "taking the time" goes, you just admitted that you did not take the time to do the job correctly.
 
I am afraid you are gravely mistaken. To help me explain, I shall provide the graph in question:

ViolenceStat.jpg

The design of the study is a longitudinal repeated-measures survey. You see that last data point, 23.5%? That isn't an aggregate of the last 15 years; that point represents the data from the latest survey period (2010) alone. If you read the report thoroughly, it clearly explains the graph. Additionally, from an experimental-design standpoint, rolling aggregate data of the kind you seem to think this is would be virtually useless in this type of longitudinal study, as it would hinder people from accurately and quickly perceiving trends in the data (it would harshly skew the data). The data represented by that 23.5% point does not include any survey results from before 2010, so I do not need to look at the specific data from other years to analyze the scientific validity and reliability of that specific point. In this case, the data point is neither reliable nor valid, as it is impossible for anyone to accurately compute the same number from the specific data provided in the report. Honestly, I was hoping your critique would be a little more informed (a little less ridiculous?) considering the level of vitriol you have displayed.

I find it ironic, considering your harsh yet utterly baseless critique, that the following sentence is in your signature:
"One can only be so intelligent, but stupidity knows no limits"
 
I am afraid you are gravely mistaken. To help me explain, I shall provide the graph in question:

View attachment 67154197

The design of the study is a longitudinal repeated-measures survey. You see that last data point, 23.5%? That isn't an aggregate of the last 15 years; that point represents the data from the latest survey period (2010) alone. If you read the report thoroughly, it clearly explains the graph.

Umm, no

"Longitudinal study" means it aggregates the data over the entire course of the time period.

PS - your link doesn't work
 
Where, exactly, did you get your degree from? Longitudinal studies look for changes in data over a period of time; they compare data between points in time. Aggregating the data before comparing what was gathered at different points in time would needlessly muddle the analysis.

Data collected at different points in time can only be validly aggregated statistically when there are no significant differences in the data caused by the difference in collection time. Such a procedure is typically antithetical to longitudinal research unless you are looking at a difference caused by a distinct event at a distinct point in time. Even then, you would only aggregate the data into before-and-after groups prior to comparing those two groups. You would never aggregate the entire data set.
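The per-period versus pooled distinction above can be sketched with made-up survey waves (all numbers hypothetical):

```python
# Hypothetical repeated-measures data: fraction of clinics reporting
# violence in each survey wave (each point measures ONLY its own period).
waves = {2008: 0.24, 2009: 0.22, 2010: 0.20}

# Longitudinal analysis compares points across time:
trend = waves[2010] - waves[2008]          # negative → incidents declining

# Pooling every wave into one aggregate number erases that trend:
pooled = sum(waves.values()) / len(waves)  # a single flat average
print(round(trend, 2), round(pooled, 2))
```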
 