An Alternative Method Used in Evaluating Agreement among Repeat Measurements by Two Raters in Education

Semra Erdoğan
Gülhan Orekici Temel
Irem Ersöz Kaya
Hüseyin Selvi

Abstract

Taking more than one measurement of the same variable introduces the possibility of contamination from multiple error sources, acting both individually and in combination through their interactions. Therefore, although the internal consistency of scores obtained from a measurement tool can be examined on its own, inter-rater or intra-rater agreement must also be established to ensure reliability. The central difficulty in analyzing agreement among measurement results is deciding which statistical method to use. It has been suggested that the inconsistency between measurements obtained by different methods on the same individual is comparable to the inconsistency between repeated measurements obtained by the same method on the same individual. On this basis, a new approach is proposed for defining and estimating an agreement coefficient between raters or methods. Accordingly, the study addresses the following question: when the dependent (predicted) variable has two categories (e.g., successful/unsuccessful, sick/healthy, positive/negative, present/absent) and two raters each take repeated measurements, how does the method perform in terms of its disagreement functions and individual agreement coefficient across different numbers of repeated measurements and different sample sizes?
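
The abstract refers to disagreement functions and an individual agreement coefficient without stating the paper's formulas. As an illustration only, the sketch below computes a coefficient of individual agreement in the Barnhart–Haber style, where the disagreement function is the mean squared difference between readings; for binary data this reduces to the probability of disagreement, and the coefficient compares inter-rater disagreement against each rater's own replicate-to-replicate disagreement. The function names and simulated data are hypothetical, and the paper's actual estimator may differ.

```python
import numpy as np

def intra_disagreement(Y):
    """Mean squared difference between distinct replicates of one rater.

    Y: (n_subjects, k_replicates) array of 0/1 readings. For binary
    data this equals the probability that two replicates taken by the
    same rater on the same subject disagree.
    """
    n, k = Y.shape
    pairs = (Y[:, :, None] != Y[:, None, :]).astype(float)  # (n, k, k)
    # The diagonal is zero, so summing over all k*k entries counts
    # only the k*(k-1) ordered pairs of distinct replicates.
    return pairs.sum(axis=(1, 2)).mean() / (k * (k - 1))

def inter_disagreement(Y1, Y2):
    """Mean squared difference between replicates of the two raters:
    for binary data, the probability that a rater-1 reading and a
    rater-2 reading on the same subject disagree."""
    pairs = (Y1[:, :, None] != Y2[:, None, :]).astype(float)
    return pairs.mean()

def individual_agreement(Y1, Y2):
    """Coefficient of individual agreement for two raters (no reference
    rater): psi = (G(Y1,Y1') + G(Y2,Y2')) / (2 * G(Y1,Y2)). Values near
    1 indicate that switching raters adds little disagreement beyond
    each rater's own replication error."""
    g11 = intra_disagreement(Y1)
    g22 = intra_disagreement(Y2)
    g12 = inter_disagreement(Y1, Y2)
    return (g11 + g22) / (2.0 * g12)

# Toy simulation: n = 6 subjects, k = 3 replicates per rater, each
# reading a latent binary status with a 10% chance of misclassification.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=(6, 1))
Y1 = np.abs(truth - (rng.random((6, 3)) < 0.1).astype(int))
Y2 = np.abs(truth - (rng.random((6, 3)) < 0.1).astype(int))
print(f"psi = {individual_agreement(Y1, Y2):.3f}")
```

Varying the number of subjects, the number of replicates, and the misclassification rate in this toy setup mirrors the conditions the study examines: different sample sizes and different numbers of repeated measurements.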
