
dc.contributor.author  Bjorner, Jakob B.
dc.contributor.author  Kosinski, Mark
dc.contributor.author  Ware, John E. Jr.
dc.date  2022-08-11T08:10:41.000
dc.date.accessioned  2022-08-23T17:16:40Z
dc.date.available  2022-08-23T17:16:40Z
dc.date.issued  2003-12-04
dc.date.submitted  2010-06-18
dc.identifier.citation  Qual Life Res. 2003 Dec;12(8):887-902. <a href="http://dx.doi.org/10.1023/A:1026175112538">Link to article on publisher's site</a>
dc.identifier.issn  0962-9343 (Linking)
dc.identifier.doi  10.1023/A:1026175112538
dc.identifier.pmid  14651410
dc.identifier.uri  http://hdl.handle.net/20.500.14038/47449
dc.description.abstract  BACKGROUND: Item response theory (IRT) is a powerful framework for analyzing multi-item scales and is central to the implementation of computerized adaptive testing. OBJECTIVES: To explain the use of IRT to examine measurement properties and to apply IRT to a questionnaire for measuring migraine impact--the Migraine Specific Questionnaire (MSQ). METHODS: Data from three clinical studies that employed the MSQ-version 1 were analyzed by confirmatory factor analysis for categorical data and by IRT modeling. RESULTS: Confirmatory factor analyses showed very high correlations between the factors hypothesized by the original test construction. Further, high item loadings on one common factor suggest that migraine impact may be adequately assessed by only one score. IRT analyses of the MSQ were feasible and provided several suggestions for improving the items and, in particular, the response choices. Of 15 items, 13 showed adequate fit to the IRT model. In general, IRT scores were strongly associated with the scores proposed by the original test developers and with the total item sum score. Analysis of response consistency showed that more than 90% of the patients answered consistently according to a unidimensional IRT model. For the remaining patients, scores on the dimension of emotional function were less strongly related to the overall IRT scores, which mainly reflected role limitations. Such response patterns can be detected easily using response consistency indices. Analysis of test precision across score levels revealed that the MSQ was most precise at one standard deviation worse than the mean impact level for migraine patients who are not in treatment. Thus, gains in test precision can be achieved by developing items aimed at less severe levels of migraine impact. CONCLUSIONS: IRT proved useful for analyzing the MSQ. The approach warrants further testing in a more comprehensive item pool for headache impact that would enable computerized adaptive testing.
dc.language.iso  en_US
dc.relation  <a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=14651410&dopt=Abstract">Link to Article in PubMed</a>
dc.relation.url  http://dx.doi.org/10.1023/A:1026175112538
dc.subject  Adolescent
dc.subject  Adult
dc.subject  Feasibility Studies
dc.subject  Female
dc.subject  Humans
dc.subject  Male
dc.subject  Middle Aged
dc.subject  Migraine Disorders
dc.subject  Quality of Life
dc.subject  Questionnaires
dc.subject  *Sickness Impact Profile
dc.subject  United States
dc.subject  Biostatistics
dc.subject  Epidemiology
dc.subject  Health Services Research
dc.title  The feasibility of applying item response theory to measures of migraine impact: a re-analysis of three clinical studies
dc.type  Journal Article
dc.source.journaltitle  Quality of life research : an international journal of quality of life aspects of treatment, care and rehabilitation
dc.source.volume  12
dc.source.issue  8
dc.identifier.legacycoverpage  https://escholarship.umassmed.edu/qhs_pp/588
dc.identifier.contextkey  1363423
dc.identifier.submissionpath  qhs_pp/588
dc.contributor.department  Department of Quantitative Health Sciences
dc.source.pages  887-902

