A Data Science Central Community
My data consists of test results that are either positive or negative, obtained on a number of specimens. Each specimen is tested by 5 different lab techs, and I now need to analyze the agreement among their results.
Cohen's kappa only handles two raters, so it can't help here, and Fleiss' kappa has no detailed, universally accepted method for interpreting its results.
What is the best tool for analyzing rater agreement?
Perhaps I can also hear more about latent class models as an alternative.
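For context, here is how far I've gotten: computing Fleiss' kappa itself is straightforward (statsmodels also provides `fleiss_kappa` in `statsmodels.stats.inter_rater`); the interpretation is the part I'm unsure about. A minimal sketch on made-up binary data, with 5 raters per specimen:

```python
import numpy as np

# Hypothetical data: rows = specimens, columns = how many of the 5 techs
# called the specimen negative (col 0) vs. positive (col 1).
counts = np.array([
    [0, 5],
    [5, 0],
    [1, 4],
    [4, 1],
    [1, 4],
    [4, 1],
])

n = counts.sum(axis=1)[0]                  # raters per specimen (5)
# Per-specimen observed agreement among the n raters
P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
P_bar = P_i.mean()                         # mean observed agreement
p_j = counts.sum(axis=0) / counts.sum()    # overall category proportions
P_e = np.sum(p_j**2)                       # expected chance agreement
kappa = (P_bar - P_e) / (1 - P_e)
print(round(kappa, 3))                     # 0.467 for this toy data
```

A value like 0.467 is where my question arises: the common Landis–Koch bands ("moderate" here) were proposed for Cohen's kappa, and I can't find an equally standard scale for the Fleiss version.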