Reviewer's report. Title: Instruments to measure patient experience of health care quality in hospitals: A systematic review
| Content Provider | Semantic Scholar |
|---|---|
| Abstract | In the manuscript "Instruments to measure patient experience of health care quality in hospitals: A systematic review" a review has been described, in which the measurement properties of 11 instruments have been critiqued, as well as their 'cost efficiency', 'acceptability', and 'educational impact'. I am aware of and appreciate the amount of work that was put into this review. The authors have tried to be clear about what they did. However, I have several questions on their methods and suggestions for improvement. 1. The authors describe that 'quality of care focusing on measuring experience' is a different construct from 'quality of care focusing on satisfaction'. It would be nice to read more about their understanding of 'quality of care focusing on measuring experience'. What kinds of experiences are included, and what is their definition or framework of the construct they are interested in? Is it considered a unidimensional construct? Also, in the results section (p. 14) the authors describe that the instruments cover similar domains, as well as different domains. It would be nice to read what those similar domains are (in addition to the distinct domains that are described) to get a better understanding of the construct. 2. Internal consistency is about the interrelatedness among the items. To be able to interpret a Cronbach's alpha, the instrument should be based on a reflective model, and the (sub)scale should be unidimensional. I think these two issues should be addressed in the review. Internal consistency is not about the structure of the instrument (see also p. 16, section 'Reliability'). Furthermore, the measurement properties reliability and agreement (or measurement error) should not be used as synonyms. For example, a kappa of 0.9 should not be interpreted as 90% agreement (an illustrative sketch of this distinction follows after the table below). In Table 5 it would be nice to add a column on measurement error. 3. The measurement property 'criterion validity' was included in the ratings. For a construct like 'patients' experience of quality of care' there is likely no reasonable gold standard available, only the longer version of a shortened questionnaire. For the QPP this is also indicated, and a correlation of 0.90 was found; however, the quality of the methods was rated 'poor'. For the QPPS correlations were also reported (called criterion validity), and here the quality of the methods was rated 'excellent'. This does not seem appropriate (and is not in line with COSMIN), since in most cases a significance level … |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | https://static-content.springer.com/openpeerreview/art:10.1186%2Fs13643-015-0089-0/13643_2015_89_ReviewerReport_V2_R2.pdf |
| Alternate Webpage(s) | https://static-content.springer.com/openpeerreview/art:10.1186%2Fs13643-015-0089-0/13643_2015_89_ReviewerReport_V3_R2.pdf |
| Alternate Webpage(s) | https://static-content.springer.com/openpeerreview/art:10.1186%2Fs13643-015-0089-0/13643_2015_89_ReviewerReport_V2_R1.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Report |
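
The reviewer's point that reliability coefficients and agreement statistics are not interchangeable can be made concrete with a small worked example. The sketch below is illustrative and not part of the reviewer's report: it computes observed percent agreement and Cohen's kappa on a hypothetical 2x2 table of two raters classifying the same 100 cases, showing that the same data can give 90% observed agreement but a kappa of 0.80, so a kappa value should not be read as a percentage of agreement.

```python
# Illustrative sketch (hypothetical data): percent agreement vs. Cohen's kappa.

def percent_agreement(table):
    """Observed agreement: proportion of cases on the diagonal."""
    total = sum(sum(row) for row in table)
    agree = sum(table[i][i] for i in range(len(table)))
    return agree / total

def cohens_kappa(table):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    total = sum(sum(row) for row in table)
    p_o = percent_agreement(table)
    # Expected chance agreement from the marginal totals of both raters.
    p_e = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: rows = rater A, columns = rater B ("yes"/"no").
table = [[45, 5],
         [5, 45]]

print(f"percent agreement: {percent_agreement(table):.2f}")  # 0.90
print(f"Cohen's kappa:     {cohens_kappa(table):.2f}")       # 0.80
```

Because kappa discounts the agreement expected by chance (0.50 for these marginals), it is systematically lower than raw agreement, which is why the two quantities should be reported and interpreted separately.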