
Trusting News Project Report 2017

Xelor

In 2017, Michael W. Kearney conducted a study for the Trusting News Project to discover what news and information sources were most trusted. The goal of his study was "to better understand elements of trust and distrust in the relationship between journalists and nonjournalists" so that he could report that information to journalism organizations that would, in turn, use it to hone the nature of their communications to enhance the news-consuming public's confidence in the reliability of the messages the organizations distribute. Kearney's goal-germane findings are published on the project's website; however, they are all action-plan-type findings that are useful to news publishers/producers, but largely not so much to the general public.

One by-product -- more precisely, an intermediate work product -- of Kearney's work was ascertaining what news sources are most trusted. That by-product struck me as being something that might interest readers/members here. Kearney's findings to that extent are in one of the attached images. The newsrooms that made the survey available are in another image.

Some observations:
  • I was surprised to see The Economist noted as most trusted, not because I don't trust it -- I trust it plenty; it's a publication I routinely read -- but because I didn't imagine that enough folks read it for it to have made the list at all.
  • Another publication I routinely read, the Financial Times, didn't make the list. That also surprised me. I guess I just figured that quite a lot of folks would care enough about money and finance that the FT would be a paper (website) they read/subscribe to.

Endnote:

  • I suspect someone will look at the political orientation chart Kearney published with his results and feel obligated to claim the study is biased. Before doing so, please read the following and Kearney's exposition of weighting in his study's methodology:
 

Attachments

  • Most Trusted News Sources.webp
  • Newsroom where the survey happened.webp
In 2017, Michael W. Kearney conducted a study for the Trusting News Project to discover what news and information sources were most trusted. ...

The problem is the large number of people for whom "most trusted" means actually "Most trusted to comfort my ego by saying what I want/expect to hear".

Far too few people are looking for truth to make this sort of poll you have here very valuable.
 
The problem is the large number of people for whom "most trusted" means actually "Most trusted to comfort my ego by saying what I want/expect to hear".

Far too few people are looking for truth to make this sort of poll you have here very valuable.

You should probably read the methodology and results exposition for the study.
 
You should probably read the methodology and results exposition for the study.

NAH I am kinda doing a drive-by here, I know who I trust and I trust me first and foremost, I don't much care who others trust specifically, understanding that people tend to be stupid now and that we have to do better and that whatever we are doing sure is not working.
 
NAH I am kinda doing a drive-by here, I know who I trust and I trust me first and foremost, I don't much care who others trust specifically, understanding that people tend to be stupid now and that we have to do better and that whatever we are doing sure is not working.
Trusting oneself, unless one is particularly given to rigorous circumspection and logical (i.e., non-fallacious) analysis, inference, and conclusion-drawing about one's own ideas, is almost certain to result in one's succumbing to confirmation bias.

For example:
I suspect Trump truly trusts in his belief that nobody's been tougher on Russia than he; however, were he to rigorously challenge that belief, he'd discover that simply isn't the case. To wit, in no way are accolades and approbation of Putin concomitant with "being tough on Russia." (Please don't try to make this a Trump discussion. It's not. I just cited an example. One need not agree or disagree with the substance of the example to understand it as an example.)
 
Rasmussen chimed in with this poll on political reporting
Voter distrust in the political news they see every day is continuing to grow.

A new Rasmussen Reports national telephone and online survey finds that 54% of Likely U.S. Voters now say they do not trust the political news they are getting, up from last June’s previous high of 46%. Only 36% did not trust political news in January of 2017, but that number was in the 40s from 2014 through 2016.
So from 36% in January 2017 to 54% today. Pretty bad erosion of trust in political reportage.
 
The problem is the large number of people for whom "most trusted" means actually "Most trusted to comfort my ego by saying what I want/expect to hear".

Far too few people are looking for truth to make this sort of poll you have here very valuable.
I think you have a good point. We trust the sources that confirm our beliefs and biases.
 
I think you have a good point.
In fact, he does not at all have a good point. Keep reading to see why.


I think you have a good point. We trust the sources that confirm our beliefs and biases.

We don't all do that, though some people do. The rest of us "trust, but verify." Those of us who do are the ones who have good points.

To wit, consider Hawkeye10's remarks in post #1, to which I responded that s/he should read the methodology and results sections of the study. The response given to my suggestion was, "NAH I am kinda doing a drive-by here."

I suggested s/he read the methodology and results sections of the study because I have read them and I know that by doing so, one will find that Hawkeye10's assertion that "'most trusted' means actually 'Most trusted to comfort my ego by saying what I want/expect to hear' " is factually inaccurate.

What "most trusted" did mean to survey respondents was directly addressed in the methodology and result -- no surprise given the subject of the study. As stated in the report's summary, the goal of the project wasn't to identify what sources of information are most trusted; identifying that was but an intermediate data-gathering step -- a work product of the study, rather than the study's deliverable -- the researchers had to perform to achieve their actual research goal: "The goal of the Trusting News project is to better understand elements of trust and distrust in the relationship between journalists and non journalists." This thread uses one of the project's work products as its rubric; however, the approach the researchers used to obtain that work product is exposed fully in the methodology section of the report.

Pairing a basic understanding about the principles of statistics/probability with Kearney's exposition of his research goal and methodology, what may one infer about the "least and most trusted news sources" chart he shared? Well, minimally this:
  • Kearney's sampling approach is what's called "purposive sampling." That's fitting given his research goal. Kearney sought to discover for newsrooms (web, print and broadcast) what factors, in the minds of direct consumers of news content (non-journalists), militate for trusting the content newsrooms/journalists publish.
  • Population: It's important to note that while everyone obtains news from somewhere, not everyone obtains it directly from the publishers of it. E.g., some folks get news from friends, from Facebook and other social media sites, and so on. Given Kearney's research goal, it makes sense that he'd implicitly narrow the population from the ~320M people in the U.S. to whatever the population of direct consumers of news is.
  • 8700+ Respondents: Basic statistics informs us that even assuming a population of 320M, worst-case response variability, and a 99% confidence level, a sample size of 8,728 produces a margin of error (the half-width of the confidence interval) of about ±1.38 percentage points. A bit more quick math tells us that for a ±2-point margin the sample size would need to be ~4,100, whereas ~16.6K is the sample size needed for a ±1-point margin. (That quick math is sketched just below this list.)
  • Weighting: Those few seconds of computation, combined with an understanding of what a 99% confidence level and a ±1.38-point margin mean, make it obvious why Kearney didn't weight his sample results; however, as any good non-probability sampler must, Kearney undertook the exercise of determining whether he nonetheless needed to weight his results to align them with known political affiliation proportions in the American populace. (A rough illustration of that sort of check appears a bit further below.)
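
For anyone who wants to check that arithmetic, here is a minimal sketch of it in Python. To be clear, this is my own illustration, not anything from Kearney's report; the 99% z-value, the worst-case p = 0.5, and the ~320M population are simply the assumptions stated in the "8700+ Respondents" bullet.

    # Sketch of the sample-size/margin-of-error arithmetic referenced above.
    # Illustrative only; these numbers are not taken from Kearney's report.
    from math import sqrt

    Z_99 = 2.576              # z-score for a 99% confidence level
    P = 0.5                   # worst-case response variability
    POPULATION = 320_000_000  # rough U.S. population

    def margin_of_error(n, N=POPULATION, z=Z_99, p=P):
        """Margin of error, in percentage points, with finite-population correction."""
        se = sqrt(p * (1 - p) / n)       # standard error of a proportion
        fpc = sqrt((N - n) / (N - 1))    # ~1.0 here, since N >> n
        return 100 * z * se * fpc

    def sample_size_for(moe_points, N=POPULATION, z=Z_99, p=P):
        """Sample size needed to hit a given margin of error (in percentage points)."""
        n0 = (z ** 2) * p * (1 - p) / (moe_points / 100) ** 2
        return n0 / (1 + (n0 - 1) / N)   # finite-population adjustment

    print(round(margin_of_error(8_728), 2))  # ~1.38 points for n = 8,728
    print(round(sample_size_for(2)))         # ~4,100 respondents for a 2-point margin
    print(round(sample_size_for(1)))         # ~16,600 respondents for a 1-point margin
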
Having read Kearney's report, namely the methodology and results discussions, I knew the above is what one can soundly infer about his results, which is why I suggested Hawkeye10 read the methodology and results.
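
As for the weighting exercise in the "Weighting" bullet above, here is a rough, purely illustrative sketch of what such a post-stratification check can look like. The affiliation shares below are placeholders I made up; they are not Kearney's figures or anyone's census data.

    # Illustrative post-stratification check (made-up proportions, not study data):
    # compare the sample's political-affiliation mix to a known population mix and
    # compute the weight each group would receive if weighting were applied.
    sample_share = {"liberal": 0.40, "moderate": 0.35, "conservative": 0.25}
    population_share = {"liberal": 0.33, "moderate": 0.34, "conservative": 0.33}

    weights = {g: population_share[g] / sample_share[g] for g in sample_share}

    for group, w in weights.items():
        # Weights close to 1.0 mean the sample already mirrors the population for
        # that group, i.e., weighting would barely change the results.
        print(f"{group}: weight = {w:.2f}")
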

Now, one doesn't need to read the methodology and results all that closely to find out whether Hawkeye10's assertion about what "most trusted" meant to the 8700+ participants in the study is accurate. What "most trusted" meant to survey respondents happens to be something Kearney explicitly discussed, and in that discussion one sees that what it meant to them and what it meant to Hawkeye10 are not at all the same things. Because that is so, Hawkeye10's so-called point is the very opposite of a good point.
 
Unfortunately "trust" in a news source no longer depends on the accuracy of reporting, the quality of information or the dependability of reporters. Many now base their trust on political propaganda and personal bias, disregarding the traditional standards of truth. The "FOX NEWS" phenomenon is the prime example of this and simply by pointing out this extremely obvious reality its devoted will become angry I commented and consider me a liberal, or something even worse.
 
Unfortunately "trust" in a news source no longer depends on the accuracy of reporting, the quality of information or the dependability of reporters. Many now base their trust on political propaganda and personal bias, disregarding the traditional standards of truth. The "FOX NEWS" phenomenon is the prime example of this and simply by pointing out this extremely obvious reality its devoted will become angry I commented and consider me a liberal, or something even worse.

In the end, trust is very much a subjective issue. We like to equate it with "belief."

The MSM cannot be trusted IMO, given its pathetic record during the last several decades. It is a mouthpiece for government and corporate propaganda, nothing more.
 
In fact, he does not at all have a good point. Keep reading to see why. ...
I read the methodology and I still stand by the premise that the subjects chose the sources they trusted based on the degree to which they affirmed or confirmed their individual biases.
 
In 2017, Michael W. Kearney conducted a study for the Trusting News Project to discover what news and information sources were most trusted. ...

I don't even care about the actual results of this; I'm just curious as to what you think in the methodology eliminates or at least mitigates the idea that people responded trusting the sources that said things they agree with. Looking at the respondents, how they break down politically, and the results, they come out pretty much as you'd expect if they were simply trusting the sources which tell them things they agree with.

So, in your own words, how is that kind of bias factored out?
 
I read the methodology and I still stand by the premise that the subjects chose the sources they trusted based on the degree to which they affirmed or confirmed their individual biases.

Well, insofar as that is not the basis any one of them identified, on what basis do you "stand by" that assertion?
 
I don't even care about the actual results of this; I'm just curious as to what you think in the methodology eliminates or at least mitigates the idea that people responded trusting the sources that said things they agree with. Looking at the respondents, how they break down politically, and the results, they come out pretty much as you'd expect if they were simply trusting the sources which tell them things they agree with.

So, in your own words, how is that kind of bias factored out?
That the respondents explicitly indicated what militated for their construing a source as more trustworthy, that what you asserted is not among the reasons they gave, and that Kearney listed the reasons respondents gave for their assessments of trustworthiness. So, now what are you going to do? Tell me the respondents lied?
 
That the respondents explicitly indicated what militated for their construing a source as more trustworthy, that what you asserted is not among the reasons they gave, and that Kearney listed the reasons respondents gave for their assessments of trustworthiness. So, now what are you going to do? Tell me the respondents lied?

Ease up, mate. Why so defensive?

But as you asked, do you think it's impossible that people might have wanted to justify their responses with something more high-minded than "they agree with me"?
 
Ease up, mate. Why so defensive?

But as you asked, do you think it's impossible that people might have wanted to justify their responses with something more high-minded than "they agree with me"?
Excuse me? You asked me "what you think in the methodology eliminates or at least mitigates the idea that people responded trusting the sources that said things they agree with." I responded with a direct answer that's based on the content of the report Kearney published.
That the respondents explicitly indicated what militated for their construing a source as more trustworthy, that what you asserted is not among the reasons they gave, and that Kearney listed the reasons respondents gave for their assessments of trustworthiness.
If there's anything about my attitude, it's that I find it annoying/frustrating to have to answer a question the answer to which is discussed in "black and white" in the report -- twice in it, actually -- and you asserted that you read the thing.
 
Excuse me? You asked me "what you think in the methodology eliminates or at least mitigates the idea that people responded trusting the sources that said things they agree with." I responded with a direct answer that's based on the content of the report Kearney published.
That the respondents explicitly indicated what militated for their construing a source as more trustworthy, that what you asserted is not among the reasons they gave, and that Kearney listed the reasons respondents gave for their assessments of trustworthiness.

I didn't "assert" any such thing.


If there's anything about my attitude, it's that I find it annoying/frustrating to have to answer a question the answer to which is discussed in "black and white" in the report -- twice in it, actually -- and you asserted that you read the thing.

Dude. If you want to last here, you're going to need to calm down. I didn't say a thing to you that was unreasonable.

Basically, you're saying that this result can't be a reflection of people's biases because those people themselves say it isn't, even though the results line up pretty much according to expected bias. That's fine, I guess, if you want to take it at face value, but there's reason for skepticism; it certainly isn't some kind of scientifically reliable methodological filter.
 
I didn't "assert" any such thing.
True. It was implied, not asserted.

Basically, you're saying that this result can't be a reflection of people's biases because those people themselves say it isn't, even though the results line up pretty much according to expected bias. That's fine, I guess, if you want to take it at face value, but there's reason for skepticism; it certainly isn't some kind of scientifically reliable methodological filter.
What "this result" have you in mind? The report identifies several.

You asked:
...I'm just curious as to what you think in the methodology eliminates or at least mitigates the idea that people responded trusting the sources that said things they agree with.
The answer is that the respondents indicated what, for them, made a news source trustworthy. That a news source publishes notions/things with which the respondents agree is not among the characteristics that, per the respondents, militated for a news source being trustworthy. Now, one either accepts that the respondents honestly stated their reasons for thinking a news source is trustworthy, or one does not.
respondents were provided an open text box and asked to describe what made a news source credible to them.
-- Michael Kearney, author of the study
Obviously, what respondents didn't state didn't rate high enough for them to mention it. Among the things they didn't state -- in some way, shape or form -- as making a news source credible to them is that the news source says things with which they agree.

Looking at the respondents, how they break down politically, and the results, they come out pretty much as you'd expect if they were simply trusting the sources which tell them things they agree with.
What?

Are you aware of what this thread is actually discussing? The "purple, tan and green circles" chart indicates what sources are most and least trusted, and that's across the entirety of the respondents. Do you not see that Huff Post is among the least trusted sources and NPR is among the most trusted, yet both are generally thought of as liberal outlets? The Economist is a conservative outlet, as is Infowars, yet they are at opposite ends of the trustworthiness spectrum. Given that observation, for what you're suggesting to be true:
  • A liberal respondent, for instance, would have to "consume" a news piece that they agree with on Huff Post and "consume" one via (same story or different one, doesn't matter) NPR and trust NPR, but not Huff Post, even though they agree with the story.
  • A conservative respondent, for instance, would have to "consume" a news piece that they agree with on Infowars and "consume" one via (same story or different one, doesn't matter) The Economist and trust The Economist, but not Infowars, even though they agree with the story.
The "red, blue and green circles" chart doesn't speak to what sources are most or least trusted, which is what this thread is about. The chat below merely indicates what sources the most and least conservative/liberal responders cited most frequently as the sources they trust. It does not indicate what sources are most or least trusted.
 
True. It was implied, not asserted.

Wasn't implied, either. This is a jump to a conclusion, same as:

That the respondents explicitly indicated what militated for their construing a source as more trustworthy, that what you asserted is not among the reasons they gave, and that Kearney listed the reasons respondents gave for their assessments of trustworthiness. So, now what are you going to do? Tell me the respondents lied?

Try to deal with what I actually say.


What "this result" have you in mind? The report identifies several.

The one that you specified as being the focus of this thread, in your OP:

One by-product -- more precisely, an intermediate work product -- of Kearney's work was ascertaining what news sources are most trusted. That by-product struck me as being something that might interest readers/members here. Kearney's findings to that extent are in one of the attached images. The newsrooms that made the survey available are in another image.


You asked:
The answer is that the respondents indicated what, for them, made a news source trustworthy. That a news source publishes notions/things with which the respondents agree is not among the characteristics that, per the respondents, militated for a news source being trustworthy. Now, one either accepts that the respondents honestly stated their reasons for thinking a news source is trustworthy, or one does not.

No, it's not an either/or; there are many different degrees of skepticism you can reasonably apply, not least knowing that people simply might not WANT to go on record saying they trust a source because it's in line with their biases, ESPECIALLY when the survey is being conducted "in newsrooms." To say nothing of all the subconscious mechanisms which might be at play.


What?

Are you aware of what this thread is actually discussing? The "purple, tan and green circles" chart indicates what sources are most and least trusted, and that's across the entirety of the respondents. ...

You are arguing by cherry-picked examples among many. The totality of the result is along the lines of what you'd expect if respondents were equating "trust" according to their biases.
 
Wasn't implied, either. This is a jump to a conclusion, same as:

Try to deal with what I actually say.

The one that you specified as being the focus of this thread, in your OP:

No, it's not an either/or; there are many different degrees of skepticism you can reasonably apply, not least knowing that people simply might not WANT to go on record saying they trust a source because it's in line with their biases, ESPECIALLY when the survey is being conducted "in newsrooms." To say nothing of all the subconscious mechanisms which might be at play.

You are arguing by cherry-picked examples among many. The totality of the result is along the lines of what you'd expect if respondents were equating "trust" according to their biases.
You're grasping at straws with that remark. I mean, really. You asserted that you read the methodology. The very first sentence of it says:
Data were collected in the February and March 2017 using an online survey made available to users (N = 8,728) of the digital media platforms of twenty-eight different newsrooms across the United States.​
Yet you are of a mind that, even with the anonymity of the Internet, users were reticent to truthfully state what, in their minds, makes a news source trustworthy, when what actually makes a source trustworthy to them is that it confirms what they were already inclined to believe (confirmation bias). Seriously.... I'm sorry, dude, but for all the Internet does, making people "shy and retiring" isn't one of them.

people simply might not WANT to go on record saying they trust a source because it's in line with their biases, ESPECIALLY when the survey is being conducted "in newsrooms."

there are many different degrees of skepticism you can reasonably apply, not least knowing that people simply might not WANT to go on record saying they trust a source because it's in line with their biases, ESPECIALLY when the survey is being conducted "in newsrooms." To say nothing of all the subconscious mechanisms which might be at play....The totality of the result is along the lines of what you'd expect if respondents were equating "trust" according to their biases.

 
You're grasping at straws with that remark. I mean, really. You asserted that you read the methodology. ...

OK. As you did nothing here but repeat yourself, I'll point you to my previous posts for response. It will be the same.
 