Title | How good are clinical MEDLINE searches? A comparative study of clinical end-user and librarian searches |
Author(s) | K. Ann McKibbon; R. Brian Haynes; Cynthia J. Walker-Dilks; Michael F. Ramsden; Nancy C. Ryan; Lynda Baker; Tom Flemming; Dorothy Fitzgerald |
Source | Computers and Biomedical Research, Vol. 23, pp. 583-593 |
Publication Date | 1990 |
Abstract | The objective of this study was to determine the quality of MEDLINE searches done by physicians, physician trainees, and expert searchers (experienced clinicians and librarians). The design was an analytic survey with independent replication, set in a 300-bed teaching hospital offering self-service online searching from medical wards, an intensive care unit, a coronary care unit, an emergency room, and an ambulatory clinic. Participants were all M.D. clinical clerks, housestaff, and attending staff responsible for patients in these settings. Before receiving free access to MEDLINE, all participants attended a 2-h small-group class and a 1-h practice session on MEDLINE searching with Grateful Med. The search questions from 104 randomly selected novice searches were then given to 1 of 13 clinicians with prior search experience and to 1 of 3 librarians, who ran independent searches on the same questions (triplicated searches). The unique citations retrieved by the triplicated searches were sent to expert clinicians, who rated their relevance on a 7-point scale. Recall (the number of relevant citations retrieved by an individual search divided by the total number of relevant citations retrieved by all searches on the same topic) and precision (the proportion of retrieved citations in each search that were relevant) were then calculated. Librarians performed significantly better than novices on both measures; they had recall equivalent to, and precision better than, experienced end-users. Unexpectedly, only 20% of relevant citations were retrieved by more than one search in a set of three. The authors conclude that novice searchers using MEDLINE via Grateful Med after brief training achieve relatively low recall and precision; recall improves with experience, but precision remains suboptimal. Further research is needed to determine the "learning curve," evaluate training interventions, and explore the non-overlapping retrieval of relevant citations by different searchers. |
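For readers unfamiliar with the two outcome measures, the following is a minimal sketch of the recall and precision calculations as defined in the abstract, applied to one triplicated search. The citation IDs, retrieval sets, and relevance judgments are invented for illustration and are not data from the study.

```python
def recall(search: set, pooled_relevant: set) -> float:
    # Recall: relevant citations retrieved by this search, divided by the
    # pooled relevant citations from all searches on the same topic.
    return len(search & pooled_relevant) / len(pooled_relevant) if pooled_relevant else 0.0

def precision(search: set, relevant: set) -> float:
    # Precision: proportion of this search's retrieved citations rated relevant.
    return len(search & relevant) / len(search) if search else 0.0

# Hypothetical triplicated searches on one clinical question (IDs are made up).
novice    = {"c1", "c2", "c5", "c9"}
clinician = {"c1", "c3", "c4", "c7"}
librarian = {"c3", "c4", "c6", "c8"}
relevant  = {"c1", "c3", "c4", "c6"}  # citations rated relevant by expert raters

# The denominator for recall pools the relevant citations found by any
# of the three searches, so recall is relative, not absolute.
pooled_relevant = (novice | clinician | librarian) & relevant

for name, search in [("novice", novice), ("clinician", clinician), ("librarian", librarian)]:
    print(f"{name}: recall={recall(search, pooled_relevant):.2f}, "
          f"precision={precision(search, relevant):.2f}")
```

Note that because recall is computed against the pooled yield of all three searches rather than against everything relevant in MEDLINE, the study's low overlap finding (only 20% of relevant citations retrieved by more than one search) implies the true recall of any single search is likely even lower than the reported figures.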