Validity versus feasibility for quality of care indicators: expert panel results from the MI-Plus study

dc.contributor.author: Pena, Adolfo
dc.contributor.author: Virk, Sandeep S.
dc.contributor.author: Shewchuk, Richard M.
dc.contributor.author: Allison, Jeroan J.
dc.contributor.author: Williams, O. Dale
dc.contributor.author: Kiefe, Catarina I.
dc.contributor.department: Department of Quantitative Health Sciences
dc.date: 2022-08-11T08:10:44.000
dc.date.accessioned: 2022-08-23T17:18:01Z
dc.date.available: 2022-08-23T17:18:01Z
dc.date.issued: 2010-04-13
dc.date.submitted: 2011-01-07
dc.description.abstract: BACKGROUND: In the choice and definition of quality of care indicators, there may be an inherent tension between feasibility, generally enhanced by simplicity, and validity, generally enhanced by accounting for clinical complexity. OBJECTIVE: To study the process of developing quality indicators using an expert panel and to analyze the tension between feasibility and validity. DESIGN AND PARTICIPANTS: A multidisciplinary panel of 12 expert physicians was engaged in two rounds of a modified Delphi process to refine and choose a smaller subset from 36 indicators; these were developed by a research team studying the quality of care in ambulatory post-myocardial infarction patients with co-morbidities. We studied the correlation between the validity and feasibility ranks provided by the expert panel. The correlations between the indicators' ranks on the validity and feasibility scales and the variance of the experts' responses were also studied individually. RESULTS: Ten of 36 indicators were ranked in both the highest validity and highest feasibility groups. The strength of association between validity and feasibility of indicators, measured by Kendall's tau-b, was 0.65. For validity, a strong negative correlation was observed between the ranks of indicators and the variability in expert panel responses (Spearman's rho, r = -0.85). A weak correlation was found between the ranks of feasibility and the variability of expert panel responses (Spearman's rho, r = 0.23). CONCLUSION: There was an unexpectedly strong association between the validity and feasibility of quality indicators, with a high level of consensus among experts regarding both feasibility and validity for indicators rated highly on each of these attributes.
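The abstract reports two rank-correlation statistics: Kendall's tau-b between the panel's validity and feasibility ranks, and Spearman's rho between indicator ranks and response variability. A minimal sketch of how these statistics are computed follows; the rank lists below are hypothetical illustrations, not the MI-Plus panel data.

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b: (concordant - discordant) pairs, with tie correction."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i, j in combinations(range(n), 2):
        dx, dy = x[i] - x[j], y[i] - y[j]
        if dx == 0:
            ties_x += 1       # pair tied on x
        if dy == 0:
            ties_y += 1       # pair tied on y
        if dx != 0 and dy != 0:
            if dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    n0 = n * (n - 1) // 2     # total number of pairs
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))

def spearman_rho(x, y):
    """Spearman's rho for two rankings without ties: 1 - 6*sum(d^2)/(n(n^2-1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ranks for eight indicators on two attributes.
validity = [1, 2, 3, 4, 5, 6, 7, 8]
feasibility = [2, 1, 4, 3, 6, 5, 8, 7]
print(kendall_tau_b(validity, feasibility))
print(spearman_rho(validity, feasibility))
```

Both coefficients range from -1 (perfectly reversed rankings) to 1 (identical rankings); tau-b counts pairwise agreements while rho works from squared rank differences, so they can differ on the same data, as in this example.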
dc.identifier.citation: Int J Qual Health Care. 2010 Jun;22(3):201-9. Epub 2010 Apr 9. <a href="http://dx.doi.org/10.1093/intqhc/mzq018">Link to article on publisher's site</a>
dc.identifier.contextkey: 1721352
dc.identifier.doi: 10.1093/intqhc/mzq018
dc.identifier.issn: 1353-4505 (Linking)
dc.identifier.legacycoverpage: https://escholarship.umassmed.edu/qhs_pp/881
dc.identifier.pmid: 20382663
dc.identifier.submissionpath: qhs_pp/881
dc.identifier.uri: https://hdl.handle.net/20.500.14038/47762
dc.language.iso: en_US
dc.relation: <a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=20382663&dopt=Abstract">Link to Article in PubMed</a>
dc.relation.url: http://dx.doi.org/10.1093/intqhc/mzq018
dc.source.issue: 3
dc.source.journaltitle: International journal for quality in health care : journal of the International Society for Quality in Health Care / ISQua
dc.source.pages: 201-9
dc.source.volume: 22
dc.subject: Delphi Technique
dc.subject: Guideline Adherence
dc.subject: Humans
dc.subject: Myocardial Infarction
dc.subject: Practice Guidelines as Topic
dc.subject: Quality Indicators, Health Care
dc.subject: Quality of Health Care
dc.subject: Randomized Controlled Trials as Topic
dc.subject: Reproducibility of Results
dc.subject: Bioinformatics
dc.subject: Biostatistics
dc.subject: Epidemiology
dc.subject: Health Services Research
dc.title: Validity versus feasibility for quality of care indicators: expert panel results from the MI-Plus study
dc.type: Journal Article
dspace.entity.type: Publication