The problem of too many hypothesis tests
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Gissane, Conor |
| Copyright Year | 2016 |
| Abstract | Research articles frequently report on several significance tests. When multiple hypothesis tests report on a single issue, the P values may not be an accurate guide to the significance of a given result [1]. Whenever an investigator conducts a statistical significance test, they could make either a Type I or a Type II error (see box). The risk of making such errors is part of the hypothesis testing process, but it is generally agreed that making a Type I error is more serious than making a Type II error [2]. Normal practice dictates that the chance of making a Type I error is set before beginning the research, typically at α = 0.05, the P value threshold at which the null hypothesis is either accepted or rejected. However, that 0.05 chance of a Type I error applies to a single test; as the number of tests increases, so does the chance of making at least one Type I error. |
| Starting Page | 67 |
| Ending Page | 68 |
| Page Count | 2 |
| File Format | PDF, HTM/HTML |
| DOI | 10.3233/PPR-160088 |
| Volume Number | 38 |
| Alternate Webpage(s) | https://content.iospress.com/download/physiotherapy-practice-and-research/ppr088?id=physiotherapy-practice-and-research/ppr088 |
| Alternate Webpage(s) | https://doi.org/10.3233/PPR-160088 |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
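
The abstract's point that the chance of a Type I error grows with the number of tests can be made concrete with the familywise error rate for independent tests, 1 − (1 − α)^k. The sketch below is illustrative only and is not taken from the article; the function name and the assumption of independent tests at α = 0.05 are mine.

```python
# Familywise error rate: probability of at least one Type I error
# across k independent significance tests, each run at level alpha.
# Illustrative sketch only; assumes the tests are independent, which
# real analyses may not satisfy, and is not the article's own method.

def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """Return P(at least one Type I error) for k independent tests at level alpha."""
    return 1.0 - (1.0 - alpha) ** k

if __name__ == "__main__":
    for k in (1, 5, 10, 20):
        fwer = familywise_error_rate(k)
        print(f"{k:>2} tests at alpha = 0.05 -> familywise error rate = {fwer:.3f}")
```

Under these assumptions, 10 tests already give a familywise error rate of about 0.40 and 20 tests about 0.64, which is the inflation the abstract warns about; standard corrections such as the Bonferroni adjustment shrink the per-test α to keep the familywise rate near 0.05.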