The scientific world may be in for a great debate, as much of what seemed statistically solid until now could turn out to be a pile of false positives. At least, that is the claim of a group of scientists and statisticians in a paper recently uploaded to the PsyArXiv preprint server. In a proposal that could redefine statistical significance, they argue that the conventional p-value threshold be lowered from 0.05 to 0.005.
What is a p-value?
The recently submitted paper suggests that the commonly used threshold for assigning significance to results, the p-value cutoff, be changed. But what is this p-value? In statistical hypothesis testing, the p-value, or probability value, is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis of the study question is true.
Simply put, when you perform a hypothesis test in statistics, the p-value helps you determine the significance of your results.
Its use has long been established in many fields of research, including science, economics, finance, psychology, and criminology. However, its apparent misuse has also been reported and has raised considerable controversy in the past.
In popular statistics books, the p-value is a number between 0 and 1 that has conventionally been interpreted as follows:
- A small p-value (less than 0.05) indicates strong evidence against the null hypothesis, so it is rejected.
- A large p-value (more than 0.05) indicates weak evidence against the null hypothesis, so you fail to reject it.
- p-values very close to the cutoff are considered marginal and can go either way.
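The interpretation rules above can be sketched in a few lines of Python. This is a minimal, illustrative example using only the standard library: a two-sided one-sample z-test with made-up numbers (a sample mean of 103 against a population mean of 100), chosen so the result lands between the two proposed cutoffs.

```python
import math

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    """Two-sided p-value for a one-sample z-test."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    # erfc(|z| / sqrt(2)) is the two-tailed probability P(|Z| >= |z|)
    # under the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

p = z_test_p_value(sample_mean=103.0, pop_mean=100.0, pop_sd=15.0, n=100)
print(round(p, 4))          # 0.0455: "significant" under the usual 0.05 bar...
print(p < 0.05, p < 0.005)  # ...but not under the proposed 0.005 bar
```

A result like this one (p ≈ 0.0455) is exactly the kind of finding the proposal targets: publishable under today's convention, marginal under the stricter one.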
Will the new claims defy older practices?
The researchers suggest in their study that the p-value threshold of 0.05 be changed to 0.005, a simple step that they say could immediately improve the reproducibility of scientific research in many fields. Fewer false positives, they argue, would mean more efficient research.
John Ioannidis, a Stanford professor of health research, was reportedly quoted as saying that the way p-values have been used is a major problem, causing a flood of misleading claims in the literature. He also suggests, though, that this change would be a quick fix rather than a permanent solution.
The scientific community long ago settled on the p-value as a measure of an experiment's success, and today the most commonly used threshold for statistical significance is 0.05. But the researchers now suggest that this bar has been set too low and is leading to the problem of irreproducible findings. The main problem, they claim, is that non-statisticians do not really understand what the p-value signifies and often end up using it wrongly.
They note, for instance, that a p-value cannot be used to declare that a new drug has a 95 percent chance of working when taken as prescribed, nor can it be interpreted as a measure of how true something is. Lowering the threshold to 0.005, they suggest, could reduce the rate of false positives from the current 33 percent down to 5 percent.
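Why would so many "significant" results be false? A simulation can make the intuition concrete. The sketch below is hypothetical and is not the paper's own model (which rests on a Bayesian argument about prior odds and power): it assumes that only 10% of tested hypotheses reflect a real effect (`prior_true=0.1`), with an assumed effect size of 0.5 and sample size of 30, then asks what share of results crossing each significance bar are actually false.

```python
import math
import random

def p_value(sample, pop_mean=0.0, pop_sd=1.0):
    """Two-sided z-test p-value for the mean of `sample`."""
    n = len(sample)
    z = (sum(sample) / n - pop_mean) / (pop_sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

def false_positive_share(alpha, trials=20000, prior_true=0.1,
                         effect=0.5, n=30, seed=1):
    """Of all results declared 'significant' at `alpha`, return the share
    that come from experiments with no real effect. `prior_true` is the
    assumed fraction of tested hypotheses with a true effect."""
    rng = random.Random(seed)
    false_hits = hits = 0
    for _ in range(trials):
        real = rng.random() < prior_true
        mu = effect if real else 0.0
        sample = [rng.gauss(mu, 1.0) for _ in range(n)]
        if p_value(sample) < alpha:
            hits += 1
            false_hits += not real
    return false_hits / hits

print(false_positive_share(0.05))   # roughly a third of 'discoveries' are false
print(false_positive_share(0.005))  # a far smaller share under the stricter bar
```

Under these assumed parameters, the simulation lands in the same ballpark as the paper's headline figures: around a third of "discoveries" are false at 0.05, and well under a tenth at 0.005. Change the assumptions (prior odds, effect size, sample size) and the numbers move, which is part of why critics say a single universal cutoff misses the point.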
While there have been contrarian views in the scientific community, with some suggesting that changing the definition of statistical significance doesn’t address the real problem, there are others who are backing the idea.
We think that, rather than defining one universal cutoff for the p-value, we should look at different use cases through varied lenses. There are times when it is completely acceptable to set a lower bar for the p-value, while in other cases there is too much at stake not to be as accurate as possible. What is needed right now is educating practitioners on the proper use of statistical significance.