Average Percent Agreement

Here, the quantity and allocation of the disagreements are informative, while Kappa obscures this information. In addition, Kappa introduces some challenges in calculation and interpretation, because Kappa is a ratio. It is possible for the Kappa ratio to return an undefined value because of a zero in the denominator. Moreover, reporting the ratio reveals neither its numerator nor its denominator. It is more informative for researchers to report disagreement in two components, quantity and allocation. These two components describe the relationship between the categories more clearly than a single summary statistic. When predictive accuracy is the goal, researchers can more easily think about how to improve a prediction by using the two components of quantity and allocation, rather than a single Kappa ratio. [2] Some researchers have expressed concern about the tendency of κ to take the observed categories' frequencies as givens, which can make it unreliable for measuring agreement in situations such as the diagnosis of rare diseases. In these situations, κ tends to underestimate the agreement on the rare category. [17] For this reason, κ is considered an overly conservative measure of agreement. [18] Others[19][citation needed] dispute the assertion that kappa "takes chance agreement into account." To do this effectively, there would need to be an explicit model of how chance affects raters' decisions.
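
To make the ratio issue and the two-component report concrete, here is a minimal sketch in Python (not part of the original text). It computes Cohen's kappa alongside quantity and allocation disagreement, using the common decomposition in which quantity disagreement is half the summed absolute difference between the two raters' marginal proportions and allocation disagreement is the remainder of the total disagreement; the degenerate table at the end is a made-up illustration of the zero-denominator case.

```python
# A minimal sketch, not from the original text: Cohen's kappa alongside the
# quantity and allocation components of disagreement for a square contingency
# table of counts.

def agreement_summary(table):
    """table[i][j] = count of items rated i by rater A and j by rater B."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p = [[table[i][j] / n for j in range(k)] for i in range(k)]

    observed = sum(p[g][g] for g in range(k))                  # percent agreement
    row_marg = [sum(p[i][j] for j in range(k)) for i in range(k)]
    col_marg = [sum(p[i][j] for i in range(k)) for j in range(k)]

    disagreement = 1.0 - observed
    # Quantity disagreement: mismatch between the two raters' marginal totals.
    quantity = 0.5 * sum(abs(row_marg[g] - col_marg[g]) for g in range(k))
    # Allocation disagreement: whatever disagreement remains beyond quantity.
    allocation = disagreement - quantity

    expected = sum(row_marg[g] * col_marg[g] for g in range(k))
    # Kappa is a ratio; its denominator (1 - expected) is zero whenever the
    # expected agreement is 1, e.g. when both raters use only a single category.
    kappa = None if expected == 1.0 else (observed - expected) / (1.0 - expected)
    return {"disagreement": disagreement, "quantity": quantity,
            "allocation": allocation, "kappa": kappa}

# Both raters always choose the first of two categories: kappa is undefined,
# while the quantity and allocation components are simply both 0.
print(agreement_summary([[16, 0], [0, 0]]))
```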

The so-called chance adjustment of the kappa statistic assumes that, when not completely certain, raters simply guess, which is a very unrealistic scenario. Simple percent agreement, on the other hand, has a serious flaw as a measure of inter-rater reliability: it does not take chance agreement into account and therefore overestimates the level of agreement. This is the main reason why percent agreement should not be used for academic work (e.g., doctoral theses or scientific publications). Kappa is an index that compares the observed agreement with a baseline agreement. However, researchers should carefully consider whether Kappa's baseline agreement is relevant to the particular research question. Kappa's baseline is often described as agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected from random allocation, given the quantities specified by the marginal totals of the square contingency table. Therefore, Kappa = 0 when the observed allocation is apparently random, regardless of the quantity disagreement as constrained by the marginal totals. Consider an example in which the disagreement proportion is 14/16, or 0.875: the disagreement is entirely due to quantity, because the allocation is optimal, yet Kappa is 0.01.
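
The sketch below reproduces those figures from a hypothetical 2×2 table. The counts themselves are an assumed reconstruction chosen to be consistent with the stated disagreement proportion and Kappa value; they are not counts given in the text.

```python
# Hypothetical 2x2 counts (an assumption for illustration), chosen so that the
# disagreement proportion is 14/16 = 0.875, allocation is optimal (the diagonal
# agreement is as high as the marginal totals allow), and kappa is about 0.01.
table = [[1, 14],
         [0,  1]]

n = 16
observed = (table[0][0] + table[1][1]) / n                     # 2/16 agreement
row_marg = [(table[0][0] + table[0][1]) / n, (table[1][0] + table[1][1]) / n]
col_marg = [(table[0][0] + table[1][0]) / n, (table[0][1] + table[1][1]) / n]

# Kappa's baseline: the agreement expected from random allocation given the
# marginal totals, not "chance" in any richer sense.
expected = row_marg[0] * col_marg[0] + row_marg[1] * col_marg[1]
kappa = (observed - expected) / (1 - expected)

print(f"disagreement = {1 - observed:.3f}")    # 0.875
print(f"baseline agreement = {expected:.3f}")  # ~0.117
print(f"kappa = {kappa:.2f}")                  # ~0.01
```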

However, in many applications, investigators should be more interested in the quantity disagreement in the marginal totals than in the allocation disagreement described by the additional information on the diagonal of the square contingency table. Therefore, for many applications, Kappa's baseline is more distracting than enlightening, as the example above illustrates. Another factor is the number of codes. As the number of codes increases, kappas become higher. Based on a simulation study, Bakeman and colleagues concluded that for fallible observers, values of kappa were lower when there were fewer codes. And, consistent with Sim & Wright's statement about prevalence, kappas were higher when the codes were roughly equiprobable. According to Bakeman et al., "no value of kappa can be regarded as universally acceptable."[12]:357 They also provide a computer program that lets users compute values of kappa for a specified number of codes, their probabilities, and observer accuracy.
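
To illustrate the kind of simulation involved, here is a rough sketch of my own (it is not Bakeman and colleagues' actual program): two fallible observers code the same items with a fixed accuracy, the number of equiprobable codes is varied, and Cohen's kappa is computed from the resulting joint counts.

```python
import random

def simulate_kappa(n_codes, accuracy, n_items=10_000, probs=None, seed=0):
    """Simulate two fallible observers coding the same items; return kappa.

    n_codes: number of codes; accuracy: probability that an observer records
    the true code (errors are spread uniformly over the other codes); probs:
    optional true-code probabilities, defaulting to equiprobable codes.
    """
    rng = random.Random(seed)
    probs = probs or [1 / n_codes] * n_codes

    def observe(true_code):
        if rng.random() < accuracy:
            return true_code
        return rng.choice([c for c in range(n_codes) if c != true_code])

    counts = [[0] * n_codes for _ in range(n_codes)]
    for _ in range(n_items):
        true_code = rng.choices(range(n_codes), weights=probs)[0]
        a, b = observe(true_code), observe(true_code)
        counts[a][b] += 1

    # Cohen's kappa from the simulated joint counts.
    p = [[counts[i][j] / n_items for j in range(n_codes)] for i in range(n_codes)]
    observed = sum(p[g][g] for g in range(n_codes))
    row = [sum(p[i][j] for j in range(n_codes)) for i in range(n_codes)]
    col = [sum(p[i][j] for i in range(n_codes)) for j in range(n_codes)]
    expected = sum(row[g] * col[g] for g in range(n_codes))
    return (observed - expected) / (1 - expected)

# With observer accuracy held fixed, kappa tends to increase as the number of
# equiprobable codes grows, in line with the simulation result described above.
for k in (2, 3, 5, 10):
    print(k, round(simulate_kappa(n_codes=k, accuracy=0.8), 2))
```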
