First of all, the basic math does not add up. They list 357 respondents to their violence and harassment survey, and they claim that 23.5% of those respondents reported "severe violence," which would be 84 of the 357. They define "severe violence" as any of 11 possible acts, and they list the percentage of respondents who reported being the victim of each type. The basic incongruity is this: the category figures they list cannot possibly add up to the 23.5% of 2010 respondents they say were victims of a severe violent act. Even if you generously assume the categories are mutually exclusive (each responding facility appears in only one category, which the report explicitly states is not the case), the maximum number of facilities that could have reported an act of severe violence is only roughly 14.5% of respondents, or 72 clinics. It gets worse: the report notes that the violence was concentrated in a small number of clinics, so one clinic could have acts listed in two or more categories. That means several clinics are counted more than once in that aggregate (the 14.5%, or 72 clinics), and the true number of clinics reporting "severe violence" as the study defines it is likely a good deal below even the 14.5% aggregate. Given the statistics they themselves provide, there is literally no way that 84 clinics (23.5% of responding clinics) could have reported being the target of an act of "severe violence."
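To make the contradiction concrete, here is a minimal Python sketch of the check, using only the figures quoted above. The report's 11 individual category percentages are not reproduced here, so the post's ~72-clinic aggregate stands in for their sum:

```python
# Sanity check using only the numbers quoted in this post; the report's
# 11 per-category percentages are not reproduced, so the ~72-clinic
# aggregate stands in for their (mutually exclusive) sum.

respondents = 357                 # clinics that answered the 2010 survey
claimed_rate = 0.235              # headline: 23.5% reported severe violence
claimed_clinics = round(respondents * claimed_rate)  # ~84 clinics

# Most generous case: every category hits a different clinic, so the count
# of distinct victim clinics can be at most the sum of the category counts.
max_distinct_clinics = 72

print(f"Headline implies about {claimed_clinics} clinics")      # 84
print(f"Mutually-exclusive ceiling is {max_distinct_clinics}")  # 72
if claimed_clinics > max_distinct_clinics:
    print("Impossible: the headline exceeds its own ceiling.")
```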
On an ethical level, the presentation of the data on the page you initially linked is trash. The graph's title says it shows the "Percentage of Clinics Experiencing Severe Violence." That is extremely misleading: it is actually the percentage of responding clinics that reported experiencing severe violence. Roughly 40% of the clinics surveyed (238) did not respond in 2010. One might think it safe to generalize from the responders to the nonresponding clinics, but it could easily be that many clinics did not respond precisely because they had not been the target of any violent attacks (we call this a "selection effect"). That alternative explanation is supported by evidence in the most recent report that the number of clinics reporting no violence at all is increasing rapidly. If you run the numbers against all clinics, including non-respondents, at most about 12% could have reported an act of "severe violence," and once you account for the double-counting issue from the previous paragraph, the true percentage is lower still. In other words, the graph's title would lead people to believe the percentage of clinics reporting at least one act of severe violence is more than twice what it probably is.
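The same arithmetic, extended to the full clinic population. The assumption that none of the 238 nonresponders experienced severe violence is mine, following the selection-effect argument above; it marks the worst case for the headline figure:

```python
# Worst case for the headline: assume, per the selection-effect argument,
# that none of the 238 nonresponding clinics experienced severe violence.

respondents = 357
nonresponders = 238
total_clinics = respondents + nonresponders          # 595 clinics surveyed

max_distinct_clinics = 72                            # ceiling from above
share_of_all = max_distinct_clinics / total_clinics  # ~0.121

print(f"At most {share_of_all:.1%} of all clinics")  # ~12.1%
print("The graph's 23.5% headline is roughly double that ceiling.")
```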
Combining the shady number crunching with the misrepresentation of those numbers leads me to believe that the people conducting and reporting this study are scientifically incompetent at best, or ethically bankrupt and agenda-driven at worst. Considering that these are presumably professionals putting the study together, I think the more likely explanation is the latter.