The question here is how to quantify the agreement between the results of two (or more) tests; in other words, both tests should give similar results when applied to the same subject. We consider the setting in which a sample of subjects has been measured with both tests. A natural starting point for assessing agreement between quantitative results is to examine the within-subject differences between the test results. Although the paired t-test can then be used to test whether the mean difference is significantly different from zero, this test cannot provide evidence of agreement. In other words, rejecting the null hypothesis of no difference between the two test results would only allow us to conclude that the tests disagree; failing to reject it is not evidence that the tests agree (Chen CC, Barnhart HX. Comparison of ICC and CCC for assessing agreement for data without and with replications. Comput Stat Data Anal 2008;53:554-64). Note that when one of the tests is a reference or "gold standard," the bias is the difference between the result of the new test and the "true value" of the measured quantity, and is therefore a measure of accuracy.10 In that case, the concordance correlation coefficient (CCC) captures both accuracy and precision. When neither test is a gold standard, however, it is not appropriate to interpret the CCC as a measure of accuracy. It is often asked whether measurements made by two different observers (sometimes more than two) or by two different techniques yield similar results. This is referred to as agreement, concordance, or reproducibility between measurements.
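To make the distinction concrete, the following sketch (an illustration, not code from the source) computes Lin's concordance correlation coefficient for two series of paired measurements. The function name and sample data are our own; the formula is the standard one, combining the covariance with the squared difference of the means, so a constant offset between the two tests lowers the CCC even when the Pearson correlation is perfect.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements x, y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    # CCC penalizes both scatter (precision) and location/scale shifts (accuracy)
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

x = [1.0, 2.0, 3.0, 4.0]
print(concordance_ccc(x, x))                    # identical results: CCC = 1
print(concordance_ccc(x, [v + 1 for v in x]))   # constant offset: CCC < 1
```

The second call illustrates the point made above: a fixed bias of 1 unit leaves the paired differences constant (a paired t-test would flag only the mean shift), while the CCC drops below 1 because the pairs no longer fall on the 45-degree line.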
Such an analysis examines pairs of measurements, either both categorical or both numerical, with each pair made on one subject (or one pathology slide, or one radiograph). Asymptotic and exact approaches have often been used to test agreement between two raters with binary outcomes. The exact conditional approach guarantees the nominal size of the test, unlike the traditionally used asymptotic approach based on the standardized Cohen's kappa coefficient. An alternative to the conditional approach is an unconditional strategy that relaxes the restriction of fixed marginal totals imposed by the conditional approach. This paper examines three methods of testing such hypotheses: an approach based on maximization, an approach based on the conditional p-value and maximization, and an approach based on estimation and maximization. We compared these tests, built on the commonly used Cohen's kappa, in terms of size and power. Based on their power advantages, we recommend two of these approaches for practical use: the conditional p-value and maximization approach, and the estimation and maximization approach. Cohen's kappa, κ, is a chance-corrected measure of agreement.
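As a concrete illustration of the chance correction (a sketch with invented data, not code from the paper), the following computes Cohen's kappa as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance from the raters' marginal proportions.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same subjects."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    # observed proportion of agreement
    po = np.mean(r1 == r2)
    # expected agreement by chance, from each rater's marginal proportions
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (po - pe) / (1.0 - pe)

# Two raters each give binary ratings to four subjects.
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # perfect agreement: kappa = 1
print(cohens_kappa([1, 0, 1, 0], [1, 1, 0, 0]))  # agreement at chance level: kappa = 0
```

The second call shows why raw agreement is misleading: the raters agree on half of the subjects, but with balanced marginals half is exactly what chance predicts, so κ = 0.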