Abstract

Aim: The primary aim of this study is to investigate the validity and reliability of new software for scoring the American Board of Orthodontics Objective Grading System (ABO OGS) on digital models, as a replacement for the conventional (gold-standard) method of using plaster models and a hand-held ABO gauge. The secondary aims are to assess the level of agreement between the ABO OGS scores of orthodontists and non-orthodontists using the conventional method on plaster models and the new software on digital models, and to compare the time taken to score the ABO OGS with each method.
Design: In-vitro study
Materials and methods: Thirty-one high-quality post-treatment plaster models that met the agreed inclusion criteria were used in this study. All models were scanned with a laser scanner (3Shape ScanIt Orthodontics) to produce digital models. Four examiners with different levels of orthodontic knowledge (an orthodontic postgraduate student, an orthodontist, a prosthodontic postgraduate student and an undergraduate dental student) participated in the study. Each examiner scored the ABO OGS for the 31 models using both the conventional method on plaster models and the new software on digital models. All examiners received appropriate training in both ABO OGS scoring systems before scoring the models.
To determine intra-examiner reliability, Examiner 1 repeated the ABO OGS scoring of the seven components twice with a two-week interval. To determine inter-examiner reliability, all four examiners measured the 31 models using both scoring methods. The scores of all four examiners for the 31 models were used to assess the correlation and agreement between the new software and the conventional (gold-standard) method for the seven ABO OGS components (tooth alignment, vertical positioning of marginal ridges, buccolingual inclination of posterior teeth, occlusal relationship, occlusal contacts, overjet and interproximal contacts). The time taken for scoring with the new software and with the conventional (gold-standard) method was compared for all four examiners.
Pearson correlation coefficients and Bland-Altman plots were used to assess the correlation and agreement between the two scoring methods, as well as intra- and inter-examiner reliability. ANOVA and t-tests were used to assess differences in mean scoring time across the 31 models.
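To illustrate the statistical approach described above, the sketch below computes a Pearson correlation coefficient and Bland-Altman limits of agreement (mean difference ± 1.96 SD) for two sets of scores. The score lists here are invented solely for illustration and are not data from this study:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bland_altman_limits(x, y):
    """Bias (mean difference) and 95% limits of agreement for x vs. y."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical total ABO OGS scores for five models (illustration only)
plaster = [24, 31, 18, 27, 22]   # conventional (gold-standard) scores
digital = [29, 35, 25, 30, 28]   # scores from the digital method

r = pearson_r(plaster, digital)
bias, lo, hi = bland_altman_limits(digital, plaster)
```

A high Pearson r alone does not establish agreement: two methods can correlate strongly while one systematically over-scores the other, which is why Bland-Altman analysis of the score differences is also reported.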
Primary outcomes: The agreement between the new software and the conventional method (gold standard) for scoring ABO OGS. The intra-examiner and inter-examiner reliability of both scoring methods.
Secondary outcomes: Time taken for scoring the ABO OGS between the two methods and between examiners with different orthodontic knowledge levels.
Results: There was no agreement between the new software and the conventional scoring method in any of the ABO OGS components, except for the buccolingual inclination component.
The new software had acceptable intra-examiner reliability for the total score (R = 0.583, P = 0.001); however, inter-examiner reliability was low (R = -0.039, P = 0.834). The conventional scoring method had high intra- and inter-examiner reliability for the total score, with correlations of R = 0.915 (P = 0.000) for intra-examiner reliability and R = 0.970 (P = 0.000) for inter-examiner reliability. Differences in the examiners' levels of orthodontic knowledge had a significant influence on the reliability of the new software, but not on that of the conventional scoring method.
The new software took significantly more time for ABO OGS scoring than the conventional plaster model method (mean difference 20.09 minutes, P = 0.00).
Conclusion: The new software is not a valid method for ABO OGS scoring and cannot replace the conventional plaster model method; it also requires more time for scoring.
Date of Award: 2015
Supervisors: David Bearn and Peter Mossey