Authors: Mazor, Kathleen M.; Canavan, Colleen; Farrell, Margaret; Margolis, Melissa J.; Clauser, Brian E.

Title: Collecting validity evidence for an assessment of professionalism: findings from think-aloud interviews
Type: Journal Article
Citation: Acad Med. 2008 Oct;83(10 Suppl):S9-12. Link to article on publisher's site: http://dx.doi.org/10.1097/ACM.0b013e318183e329
Dates: 2022-08-23 (accessioned); 2022-08-23 (available); 2008-10-10 (issued); 2011-12-30
ISSN: 1040-2446 (Linking)
DOI: 10.1097/ACM.0b013e318183e329
PMID: 18820511
Handle: https://hdl.handle.net/20.500.14038/37058
Repository URL: https://escholarship.umassmed.edu/meyers_pp/442
Language: en-US

Abstract:
BACKGROUND: This study investigated whether participants' subjective reports of how they assigned ratings on a multisource feedback instrument provide evidence to support interpreting the resulting scores as objective, accurate measures of professional behavior.
METHOD: Twenty-six participants completed think-aloud interviews while rating students, residents, or faculty members with whom they had worked previously. The items rated included 15 behavioral items and one global item.
RESULTS: Participants referred to generalized behaviors and global impressions six times as often as specific behaviors, rated observees in the absence of the information necessary to do so, relied on indirect evidence about performance, and varied in how they interpreted items.
CONCLUSIONS: Behavioral change becomes difficult to address if it is unclear which behaviors raters considered when providing feedback. These findings highlight the importance of explicitly stating and empirically investigating the assumptions that underlie the use of an observational assessment tool.

Subjects: Feedback, Psychological; Humans; *Internship and Residency; *Interviews as Topic; Knowledge of Results (Psychology); Observer Variation; Pediatrics; *Professional Competence; Qualitative Research; Reproducibility of Results; *Social Behavior; Health Services Research; Primary Care