PURPOSE: The authors report data from a multiple mini-interview (MMI) selection process at the University of Dundee Medical School in which staff, students, and simulated patients served as examiners. They investigated how effectively the process separated candidates for entry into medical school according to the attributes measured, whether the different groups of examiners exhibited systematic differences in their rating patterns, and what effect such differences might have on candidates' scores.

METHOD: The 452 candidates assessed in 2009 rotated through the same 10-station MMI, which measured six noncognitive attributes. Each candidate was rated by one examiner at each station. Scores were analyzed using Facets software, with candidates, examiners, and stations as facets. The program calculated fair average scores that adjusted for examiner severity/leniency and station difficulty.

RESULTS: The MMI reliably (0.89) separated the candidates into four statistically distinct levels of noncognitive ability. The Rasch measures accounted for 31.69% of the total variance in the ratings (candidates 16.01%, examiners 11.32%, and stations 4.36%). Students rated more severely than staff and also produced more unexpected ratings. Adjusting scores for examiner severity/leniency and station difficulty would have changed the selection outcome for 9.6% of the candidates.

CONCLUSIONS: The analyses highlight that quality control monitoring is essential to ensure fairness when ranking candidates according to MMI scores. The results can be used to identify examiners who need further training, or who should not be invited back, as well as stations needing review. "Fair average" scores should be used to rank the candidates.
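The "fair average" idea can be illustrated with a minimal sketch of a many-facet Rasch (rating-scale) model. This is not the Facets implementation; the threshold values and the example ability, severity, and difficulty figures below are hypothetical, chosen only to show how setting examiner severity and station difficulty to their neutral (zero-logit) values yields an adjusted expected rating.

```python
import math

def expected_rating(theta, severity=0.0, difficulty=0.0,
                    thresholds=(-1.0, 0.0, 1.0)):
    """Expected rating (0..m) under a rating-scale Rasch model.

    theta      -- candidate ability in logits
    severity   -- examiner severity in logits (positive = harsher)
    difficulty -- station difficulty in logits
    thresholds -- hypothetical Rasch-Andrich category thresholds
    """
    eta = theta - severity - difficulty
    # Cumulative log-numerators for categories 0..m
    logits = [0.0]
    cum = 0.0
    for tau in thresholds:
        cum += eta - tau
        logits.append(cum)
    mx = max(logits)                      # for numerical stability
    probs = [math.exp(l - mx) for l in logits]
    z = sum(probs)
    probs = [p / z for p in probs]
    # Expected rating = probability-weighted mean of categories
    return sum(k * p for k, p in enumerate(probs))

# Observed expectation when a severe examiner meets a hard station
raw = expected_rating(theta=1.0, severity=0.8, difficulty=0.3)
# "Fair average": same candidate, severity and difficulty neutralized
fair = expected_rating(theta=1.0)
```

Because a positive severity term lowers the expected rating, `fair` exceeds `raw` here; ranking on fair averages rather than raw scores is what changed the outcome for 9.6% of candidates in the study.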