Reliability of the assessment of appropriateness of diagnostic test request behavior.

2001 
Abstract

We describe the reliability of the assessment of the appropriateness of requested diagnostic tests. We used a retrospective random selection of 253 request forms with in total 1217 requested tests. Three experts made an independent assessment of each requested test. Interrater kappa values ranged from 0.33 to 0.44. The kappa values for intrarater agreement ranged from 0.65 to 0.68. The reliability coefficient for all three reviewers together was 0.66. This reliability is not sufficient for case-by-case decisions, for example giving individual feedback on the appropriateness of requested tests. Sixteen reviewers are necessary to obtain a reference with a reliability of 0.95.

Keywords: Observer Variation, Primary Health Care, Practice Guidelines, Diagnostic Test, Clinical Decision Support System

Introduction

Various methods such as peer review and audit [1-7] are used to assess the appropriateness of healthcare activities such as diagnostic test ordering and the prescribing of medication. Peer review is described as a method for implementing quality improvement of patient care through continuous, systematic and critical reflection by a number of colleagues on their own and others' performance [8]. Individual feedback on gaps in a physician's performance and audit are examples of peer review. The method is often based on practice guidelines that are used and interpreted by human experts.

The validity and reliability of peer review of medical practice are doubtful [9-11]. Reliability is the degree to which a measurement is consistent or reproducible; validity is the degree to which the measurement reflects the true value of the variable one wants to measure. Correctness and accuracy are associated terms [12]. Smith et al. [10] stated that the reasons for the lack of reliability and validity in the peer review process are systematic bias of individual reviewers and systematic bias related to the professional training of the reviewer. In the peer review process, physician reviewers often use implicit criteria to evaluate the appropriateness of care [11]. In addition, there are limits to human capabilities as an information processor, which lead to random errors. Reliability can be improved by increasing the number of raters or by improving the agreement among raters, for example by training them [13]. The method of peer review can also be improved by providing guidelines and standards or by using computerized decision support [14]. Discussion between physicians as reviewers of medical information improves the agreement between the reviewers, but does not improve the overall reliability of the judgement of physicians who take part in different discussions [15].

The use of guidelines can be seen as an important instrument for achieving a higher quality of care [16]. Nevertheless, their implementation and use in daily practice remain a problem [17]. Earlier research on the management of simple clinical events indicated that the use of computers could improve physicians' compliance with predefined care protocols [18].

The Transmural & Diagnostic Centre (T&DC) has given personal feedback (a kind of peer review) via written reports to about 90 Family Physicians (FPs) in the Maastricht region since 1985. Twice a year, each FP receives a feedback report with critical comments on his/her test requests in an earlier month.
The individual written feedback, provided twice a year [19], is based on a comparison of the information on the request forms (including medical patient data) with agreed-upon practice guidelines. These regional guidelines have already been used for several years in the participating general practices. Previous studies showed that this feedback was highly effective and was appreciated by the FPs [2, 19, 20]. An important disadvantage of this method from a management point of view is that it is very labor-intensive: the expert has to review about 80 request forms per FP per year. The feedback is
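
To make the figures in the Abstract more concrete, the sketch below shows how such agreement and reliability statistics are commonly computed: Cohen's kappa for the chance-corrected agreement between two raters, and the Spearman-Brown prophecy formula for the reliability of a pooled judgement and for the number of raters needed to reach a target reliability. The excerpt does not state which exact procedures the authors used, so this is only an illustration of the general method; the Python function names, the made-up ratings and the single-rater reliability used as input are all hypothetical and do not reproduce the paper's estimates.

```python
import math
from collections import Counter


def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters who
    classify the same items (e.g. 'appropriate' vs. 'not appropriate')."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)

    # Observed proportion of items on which the two raters agree.
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_obs - p_exp) / (1 - p_exp)


def pooled_reliability(single_rater_reliability, n_raters):
    """Spearman-Brown prophecy formula: reliability of the pooled judgement
    of n_raters raters, given the reliability of a single rater."""
    r = single_rater_reliability
    return n_raters * r / (1 + (n_raters - 1) * r)


def raters_needed(single_rater_reliability, target):
    """Smallest number of raters whose pooled judgement reaches `target`,
    obtained by solving the Spearman-Brown formula for the number of raters."""
    r = single_rater_reliability
    return math.ceil(target * (1 - r) / (r * (1 - target)))


if __name__ == "__main__":
    # Made-up assessments of ten requested tests (1 = appropriate, 0 = not).
    expert_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    expert_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
    print(cohens_kappa(expert_1, expert_2))   # interrater agreement, ~0.35

    # Purely illustrative reliability values, not the paper's computation.
    print(pooled_reliability(0.40, 3))        # three raters pooled, ~0.67
    print(raters_needed(0.40, 0.95))          # raters needed to reach 0.95
```

Solving the Spearman-Brown formula for the number of raters is a standard way to answer questions of the form "how many reviewers are needed for a reliability of 0.95?", which is the kind of estimate reported in the Abstract; the inputs above are chosen only to show the mechanics.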