We are a commune of inquiring, skeptical, politically centrist, capitalist, anglophile, traditionalist New England Yankee humans, humanoids, and animals with many interests beyond and above politics. Each of us has had a high-school education (or GED), but all had ADD so didn't pay attention very well, especially the dogs. Each one of us does "try my best to be just like I am," and none of us enjoys working for others, including for Maggie, from whom we receive neither a nickel nor a dime. Freedom from nags, cranks, government, do-gooders, control-freaks and idiots is all that we ask for.
Which I think is the same as P-Value.
"The P-value approach involves determining 'likely' or 'unlikely' by determining the probability — assuming the null hypothesis were true — of observing a more extreme test statistic in the direction of the alternative hypothesis than the one observed."
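That textbook definition is easier to see with numbers. Here's a minimal sketch using a made-up coin-flip example (the coin, the 100 flips, and the 60 heads are all hypothetical, just to illustrate the calculation):

```python
from math import comb

# Hypothetical example: flip a coin 100 times and observe 60 heads.
# Null hypothesis: the coin is fair (p = 0.5).
# One-sided p-value: the probability of a result at least as extreme as
# the one observed, in the direction of the alternative ("favors heads"),
# assuming the null hypothesis were true -- i.e. P(60 or more heads).
n, observed = 100, 60
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
print(f"p-value = {p_value:.4f}")  # roughly 0.028 -- "unlikely" at the usual 0.05 cutoff
```

A small p-value doesn't prove the coin is crooked; it just says a fair coin would rarely produce a result that lopsided.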
Statistical significance is just a probability that one set of data may actually be telling you something rather than being noise. Its significance is that it may be significant.
Which isn't too far from what a P-Value is trying to determine. How far off is the result from likely being random?
Heh. Now explain Numbers Needed to Treat in layman's terms. The best I've seen yet is an old post on The Last Psychiatrist blog. Once you understand it you really do see its utility. But it's a hump to get there.
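For anyone who doesn't want to dig up that post: NNT is just the reciprocal of the absolute risk reduction. A quick sketch with made-up illustrative rates (not from any real trial):

```python
# Numbers Needed to Treat, with hypothetical rates for illustration only:
# suppose 10% of untreated patients have a heart attack, vs. 8% of treated ones.
control_event_rate = 0.10
treated_event_rate = 0.08

absolute_risk_reduction = control_event_rate - treated_event_rate  # 0.02
nnt = 1 / absolute_risk_reduction  # 50

# Layman's reading: you have to treat 50 people to prevent ONE heart attack;
# the other 49 take the drug and get no benefit from it.
print(f"ARR = {absolute_risk_reduction:.0%}, NNT = {nnt:.0f}")
```

That's the hump: a drug can honestly claim a "20% relative risk reduction" while helping only one patient in fifty.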
I think whether the claim "statistics actually doesn't work" holds depends on what you're using them for.
If I do a study which shows me that there is a 65% chance of getting cancer if I engage in a particular activity, then I'd probably refrain from that activity. I might be part of the 35% and never get the cancer, but the odds simply aren't good enough for my tastes.
On the other hand, if that activity has a 25% chance of leading to cancer, I'll take my chances.
Statistically, the odds aren't that much more heavily in my favor, but enough to give me the confidence to feel I can beat them.
The problem with statistics, and why they tend to 'fail,' is that they get misused. Like my cousin, who thinks that eating GMO food will lead to cancer and mutations and all kinds of things. So he is running an organic farm (literally 100% organic: no tractors, wood-heated home, etc.). The guy is nuts, but I respect his devotion to the old-time religion. Point is, he has misused the statistics to support his belief system.
There is a fellow who runs a website, NumberWatch, where he discusses the misuse of statistics to support outlandish ideas (like climate change). He refers to it as "May, Might, Could" science.
When someone sees that not eating broccoli may be associated with a 15% higher rate of colon cancer, "Broccoli PREVENTS Colon Cancer" will be the headline. Well, no, it doesn't. It may help reduce the risk of colon cancer, but it certainly does not prevent it. More importantly, I hate broccoli, and if all I get is a 15% improvement in my odds, my life isn't that much better off for eating something I think tastes awful (though why I like cream of broccoli soup is beyond me....).
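The headline trick is usually relative versus absolute risk. A toy calculation, with a hypothetical baseline risk picked purely for illustration:

```python
# Hypothetical numbers: suppose the baseline lifetime risk of colon cancer
# is about 4%, and broccoli-avoiders show a 15% HIGHER relative rate.
baseline_risk = 0.04
relative_increase = 0.15

avoider_risk = baseline_risk * (1 + relative_increase)  # 4.6%
absolute_difference = avoider_risk - baseline_risk      # 0.6 percentage points

# The headline shouts "15%"; the actual change in your personal odds is
# six-tenths of one percentage point -- nowhere near "prevention."
print(f"eater: {baseline_risk:.1%}, avoider: {avoider_risk:.1%}, "
      f"difference: {absolute_difference:.1%}")
```

Same data, two honest numbers, and only one of them sells newspapers.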
When I do the statistical modeling for our business plan, I have to lay out three outcomes: Best, Most Likely, and Difficult. For each one, I have to spell out the assumptions that would lead to that particular outcome.
It's amazing how close to reality one of them always seems to come... but then, statistically, that's exactly what should happen.
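The three-outcome layout is easy to sketch. Every figure below is a hypothetical placeholder, not anyone's real plan; the point is that each scenario is just a different set of assumptions run through the same arithmetic:

```python
# A minimal sketch of a three-outcome business projection.
# All numbers are invented placeholders, purely for illustration.
scenarios = {
    # name: (units sold, price per unit, fixed costs) -- the "assumptions"
    "Best":        (1200, 50.0, 30000.0),
    "Most Likely": (900,  45.0, 30000.0),
    "Difficult":   (600,  40.0, 30000.0),
}

for name, (units, price, fixed_costs) in scenarios.items():
    revenue = units * price
    profit = revenue - fixed_costs
    print(f"{name:12s} revenue={revenue:>9,.0f}  profit={profit:>9,.0f}")
```

With three scenarios bracketing the plausible range, reality landing near one of them is less prophecy than geometry.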
P values have to do with confidence intervals and repeated testing. If you could perform an experiment an infinite number of times and calculate a confidence interval each time at a given significance level, say .05, then 95 percent of those confidence intervals would contain the parameter of interest. If you perform an experiment one time and calculate the confidence interval, that coverage guarantee says nothing about your particular interval. Even Neyman, who developed the confidence interval, recognized this.
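The repeated-testing interpretation is easy to check by simulation. A sketch under simplifying assumptions (normally distributed data with a known sigma, so the interval is just mean ± 1.96·σ/√n; all parameters are made up):

```python
import random
import statistics

# Simulate the repeated-experiment interpretation of a 95% confidence interval.
# Assumption for simplicity: normal data with KNOWN sigma, so each interval
# is sample_mean +/- 1.96 * sigma / sqrt(n).
random.seed(42)
true_mean, sigma, n, trials = 10.0, 2.0, 25, 2000
half_width = 1.96 * sigma / n**0.5

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    if m - half_width <= true_mean <= m + half_width:
        covered += 1

# Across many repetitions, roughly 95% of the intervals contain the true
# mean -- but any SINGLE interval either contains it or it doesn't.
print(f"coverage over {trials} experiments: {covered / trials:.1%}")
```

The 95% is a property of the procedure over repetitions, not of the one interval you happened to compute, which is the point about Neyman above.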