Red:
TY
Blue:
Yes, that diction -- "will continue to" -- obviates the need for a probability definition because it's absolute. Using the future-tense indicative construction "will continue," as opposed to a subjunctive-mood construction of any tense (be it by conjugation or by syntax: "may," "might," "will probably," or some other explicit diction that indicates a measure of uncertainty from the author's POV), asserts that the only outcome that will be observed in the future is "XYZ" continuing to "increase in frequency." "Will continue to increase in frequency" leaves zero room for error, whereas "very likely to continue to increase in frequency" leaves some, but not much, room for error.
The two categories -- "confidence" and "likelihood" -- correspond to what statistics/quantitative analysis (SQA) calls "confidence levels" and "confidence intervals."
- The "likelihood" term corresponds to to the confidence interval, or what many folks think a margin of error represents. It's how sure one is about predicted outcome coming to fruition. A key difference being that MoEs represent half of a total confidence interval, hence the "+/- some digit" form of margins of error expressions.
- The "confidence level" term is merely a qualitative/narrative depiction of confidence levels, hence why NOAA has used that term.
Both terms -- "confidence" and "likelihood" -- reflect NOAA's attempting to present a SQA concept using non-SQA parance. In SQA terms, one would write something like "We are 95% certain (confidence level), with a +/- 5% margin of error (10% confidence interval), that extreme precipitation events will continue to increase in frequency...." NOAA has, for good reason, endeavored to say that in "plain language" that most folks will understand. To wit, NOAA's well aware that most Americans are clueless about SQA, so to reach and be comprehended by NOAA's target audience, the report needs to be written in as "layman-friendly" language as possible. Readers who want to see the "fancy" SQA diction can read the individual studies referenced; it's there.
NOTE: It's important to be observant of the temporal and dynamic qualities of assertions found in SQA reports. Everything NOAA's assertions pertain to is dynamic in nature; because what's being measured and predicted is fluid behavior, the math (concepts) used is stats plus multivariable calculus.
Pink:
Such tables are included in the report (see below).
I think you may have confused my having included only Chapter 1 from the complete document in the OP with Chapter 1 being intended as a stand-alone document. It isn't, but it is available as a PDF file distinct from the entire report. I only included Chapter 1 because it's the part of the report I'd read.
Tan:
I assure you that I've not conflated the two.
OK so you hacked out Chapter 1. NOAA didn't do that. Did not know that.
Still and all, you really cannot mix Likelihood with Confidence in the manner provided in my example without suggesting major ambiguity.
My real problem is the limited number of instances where the report uses a Likelihood value attached to a Confidence Score for an Assessment. Other than the two instances where a Likelihood value was used very early in the report, there appear to be hundreds of Confidence Scores in the report and nary an additional Likelihood value attached to them. I have not read through the full report as yet, but I have already seen many more Assessments. IMO they should have just skipped the Likelihood values altogether, in spite of the "correctness" or lack thereof of providing a range of Likelihood for Confidence-Scored Assessments. Either use them or don't. Using them so selectively as to provide two Assessments with them and hundreds without still, IMO, muddles the results for the two and makes me question why they used them at all without using them consistently.
Further to the point, while I am only up to about 300 pages, and while the report often includes a section that describes "Major Uncertainties" for the Assessments included in that section, it does not quantify them in any way. A section will either say there are "No Major Uncertainties" or will describe the "Major Uncertainties" without quantifying them. Either quantify the Assessments via a Likelihood value or don't. But don't use them as selectively as they appear to have been used, or I will "likely" question why.
Anyway, I will try to provide a better answer when I have taken the time to get deeper into the full report. At some point I will have seen enough Assessments to have a better understanding of how often they have in fact quantified them with a Likelihood value.