Thanks CP. I believe this problem of not sharing the data is mostly confined to the medical field. In most fields, providing the data is mandatory for review; the paper will not be published otherwise. I'm not sure why the medical field has been allowed this, but it is an issue for validity. A related big issue in psychology and other sciences relying on p-value statistics: only the first positive results get published, and verifying earlier results gets you nothing. When people actually did redo experiments to verify them, a large share, as high as 70% or so, did not reproduce the original results.
A big issue: if you run an experiment 200 different times, possibly modifying it after each failure to improve it, you could land on a spurious connection just by probability.
For example, if I flip a coin 1000 times, I might arrive at 10 heads in a row at some point. Taken in isolation that looks like a very unlikely event (a p value of about .001), but I have merely happened upon it by luck; i.e., if I repeat an experiment over and over, I can find that rare run of 10 heads in a row by coincidence, not because there is any correlation. Hence the focus on repeatability over the last few years.
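To make that concrete, here is a quick Monte Carlo sketch (the function names and trial counts are my own choices, not from any specific study): it estimates how often a "p = .001" run of 10 heads appears somewhere in a session of 1000 fair flips. The point is that the chance of seeing the streak at least once is far higher than .001 would suggest.

```python
import random

def has_streak(flips, length=10):
    """Return True if the flip sequence contains `length` consecutive heads (True)."""
    run = 0
    for heads in flips:
        run = run + 1 if heads else 0
        if run >= length:
            return True
    return False

def estimate_streak_probability(trials=2000, n_flips=1000, streak=10, seed=42):
    """Monte Carlo estimate of P(at least one run of `streak` heads in `n_flips` flips)."""
    rng = random.Random(seed)
    hits = sum(
        has_streak([rng.random() < 0.5 for _ in range(n_flips)], streak)
        for _ in range(trials)
    )
    return hits / trials

print(estimate_streak_probability())
```

Running this gives an estimate in the neighborhood of a third, so a single 1000-flip session will quite often contain the "one in a thousand" streak somewhere, which is exactly the trap of testing many times and reporting only the hit.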