Pitfalls and misconceptions with statistical analysis
About The Course
Ideally, any experienced investigator with the right tools should be able to reproduce a finding that is published in a peer-reviewed biomedical science journal. The reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this. One of the reasons may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes:
1. P-Hacking - Re-analysing a data set in many different ways, or adding replicates and re-testing, until the desired result appears.
2. Overemphasis on P values rather than the actual size of the observed effect.
3. Overuse of statistical hypothesis testing and being seduced by the word “significant”.
4. Overreliance on standard errors, which are often misunderstood.
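The first pitfall can be made concrete with a small simulation. The sketch below is an illustration, not part of the course materials: both groups are drawn from the same distribution (so there is no real effect), replicates are added in batches, and a two-sample z-test (assuming known unit variance, for simplicity) is run after each batch. Stopping at the first p < 0.05 is exactly the "add more replicates and re-test" behaviour described above, and it inflates the false-positive rate well beyond the nominal 5%. The batch size, number of looks, and test are illustrative choices.

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for a two-sample z-test, assuming known unit variance."""
    z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(1 / len(a) + 1 / len(b))
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def p_hacked_experiment(rng, batch=10, max_batches=5, alpha=0.05):
    """Add replicates in batches and test after each batch,
    stopping as soon as p < alpha -- the pitfall in question."""
    a, b = [], []
    for _ in range(max_batches):
        a += [rng.gauss(0, 1) for _ in range(batch)]
        b += [rng.gauss(0, 1) for _ in range(batch)]  # same distribution: no true effect
        if z_test_p(a, b) < alpha:
            return True  # declared "significant" and stopped
    return False

rng = random.Random(0)
trials = 2000
fp = sum(p_hacked_experiment(rng) for _ in range(trials)) / trials
print(f"false-positive rate with repeated testing: {fp:.3f}")  # well above the nominal 0.05
```

A single test at the final sample size would hold the false-positive rate near 5%; testing after every batch lets chance fluctuations cross the threshold at some point, which is why pre-specifying the sample size (or using a sequential design with adjusted thresholds) matters.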
Why can so few findings be reproduced? In many cases, I suspect that investigators fooled themselves due to a poor understanding of statistical concepts. Here, I identify these common misconceptions about statistics and data analysis and explain how to avoid them.
My recommendations are written for pharmacologists and other biologists publishing experimental research using commonly used statistical methods. They would need to be expanded for analyses of clinical or observational studies.
Dr. John Davis has more than 15 years of experience in the educational and medical fields. He has helped develop training programs for the pharmaceutical industry, the dermatology industry and general medicine, with the aim of producing the best possible course materials for his students and of challenging institutions to change the way they incorporate new knowledge.