This article is the second in a series on general practice research in Australia. The series explores strategies to strengthen general practice research and further develop the evidence base for primary care.
Case
Anne, 39 years of age, presents with signs and symptoms of moderately severe depression. Her depression has been building on a background of difficulties at work, a poor relationship with her daughter, and the deteriorating health of her parents. She has not been treated for depression in the past, and is not keen on seeing a counsellor. Anne’s friend suggested she ask for an antidepressant. To ensure you are up to date with the latest guidance, you turn to the evidence-based sources at hand.
Guidelines are an inseparable part of the general practitioner (GP) toolkit, and in Australia we have several to choose from. For example, the Royal Australian College of General Practitioners’ (RACGP’s) Guidelines for preventive activities in general practice (Red Book) provides guidance on preventive care and how to screen for conditions, including depression.1 Therapeutic Guidelines advises on the best evidence-based treatments.2 These sources of guidance are based on the best available evidence, translated into practical recommendations.1–3 The number of Australian clinical practice guidelines has skyrocketed over the past three decades: Buchan et al4 identified nine times more guidelines in 2010 than in a 1993 review.5 Many clinical resources are also available electronically for subscribers (eg UpToDate, DynaMed, BestBETs), but how can you tell which guidance is reliable and relevant?
Guideline development
The first step in assessing the quality of a guideline is to understand how it is developed. Increasingly, guidelines include information on their development process. In Australia, the National Health and Medical Research Council (NHMRC) has published guidance for developing, implementing and evaluating the effectiveness of clinical practice guidelines.6 The international ‘Appraisal of Guidelines for Research and Evaluation’ (AGREE) instrument is used to ‘score’ how well individual guidelines adhere to a transparent process (www.agreetrust.org). These organisations emphasise that recommendations should be based on systematic searches of the literature and rigorous assessment of the quality of evidence. Levels of evidence and an indicator of the strength of recommendations tell clinicians how robust the evidence is. The levels of evidence refer to a ‘hierarchy’ (Figure 1), in which reasoning based on mechanism of action and case reports are the least reliable forms of evidence, and systematic reviews of rigorous studies are the most reliable.7
Figure 1. Hierarchy of levels of evidence
Adapted from Murad MH, Asi N, Alsawas M, Alahdab F. New evidence pyramid, BMJ Blogs, with permission from BMJ Publishing Group Ltd
Turning evidence from clinical studies into recommendations for clinical practice relies on the judgement of the guideline developers. Developers may assign a strength to each recommendation on the basis of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach (www.gradeworkinggroup.org). GRADE provides an overall assessment of the quality of the evidence and how it compares with current best practice (Table 1).
Table 1. The GRADE domains of quality assessment
Quality level | Definition
High | We are very confident that the true effect lies close to the estimate of the effect
Moderate | We are moderately confident in the effect estimate; it is possible that the true effect is substantially different
Low | Our confidence in the effect estimate is limited; the true effect may be substantially different
Very low | We have very little confidence in the effect estimate; the true effect is likely to be substantially different
Reproduced with permission from Schünemann H, Brozek J, Guyatt G, Oxman A. GRADE handbook 2013.
Conflict of interest
Transparency of the development process is not the only factor that determines how confident we can be about guidelines. One of the most important threats to the reliability of evidence is conflict of interest. Much of the evidence we rely on, especially in the therapeutic domain, is generated by pharmaceutical companies that develop and test their own products. Pharmaceutical companies recover their investment in discovery and development by selling successful products. Therefore, demonstrating success and maintaining strong relationships with their customers (ie prescribers) are essential.
There is ample evidence that the relationship between clinicians and the pharmaceutical industry changes doctors’ prescribing,8,9 and may not be in the best interest of public health and safety.10,11 Transparency about the provenance of research findings and authors’ industry links has improved over the past decade. Most journals and conference organisers now require authors to disclose any potential or perceived conflicts of interest, including links with the pharmaceutical industry. Similarly, the independence of guideline developers has come under scrutiny.12 However, reporting of guideline authors’ conflicts of interest remains poor.13,14 In the current climate of diminishing public funding for independent research, the entanglement of conflicts of interest is likely to become even more prominent.
Bias
‘Publication bias’, the preference to publish studies with a ‘positive’ result in favour of a new product over ‘negative’ trials that show no difference, is another threat to the reliability of evidence. Publication bias is common and its potential harms are well documented. Returning to Anne, for example, publication bias has been a particular issue for antidepressants. One analysis of 74 registered antidepressant studies found that 31% of them were never published. Moreover, 91% of the published studies showed positive results for antidepressant efficacy, compared with 51% of the unpublished studies.15 Perhaps the published evidence is not the best guide to antidepressant efficacy (www.alltrials.net). A recent Cochrane review found that studies funded by pharmaceutical companies were more likely to report a result favouring the company’s product than research not funded by pharmaceutical companies.16
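As a rough, back-of-envelope illustration of what those percentages imply (a sketch only, using the figures quoted above rather than data from the underlying study), the following snippet compares the rate of ‘positive’ findings a reader of the published literature would see with the rate across all registered trials:

```python
# Rough illustration of publication bias, using only the percentages quoted
# above for the 74 registered antidepressant trials; counts are rounded and
# therefore approximate.
total_registered = 74
unpublished = round(0.31 * total_registered)      # ~23 trials never published
published = total_registered - unpublished        # ~51 trials published

positive_published = round(0.91 * published)      # ~46 published trials with positive results
positive_unpublished = round(0.51 * unpublished)  # ~12 unpublished trials with positive results

apparent_rate = positive_published / published
overall_rate = (positive_published + positive_unpublished) / total_registered

print(f"Positive results among published trials:      {apparent_rate:.0%}")
print(f"Positive results among all registered trials: {overall_rate:.0%}")
```

On these assumptions, roughly 90% of published trials appear positive, compared with about 78% of all registered trials; reading only the published literature overstates how consistently the drugs appeared effective.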
Publication bias is also a limitation of the systematic reviews that underpin recommendations in guidelines. One would hope that authors of reviews present and interpret their findings as objectively as possible.17 However, it is not uncommon for different reviews investigating the same question to reach opposing conclusions. Authors may have preconceived ideas about the outcome, which may influence their decisions.18 Likewise, guideline developers use their expertise to weigh evidence and interpret findings; they decide which of the available evidence to include and how to present it. Guidelines developed by different teams may therefore produce different recommendations.19
Systematic reviews
The ‘evidence hierarchy’ illustrates the importance of systematic reviews in guideline development. Systematic reviews collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific question.20 The key elements are a systematic approach, and transparent and reproducible methods.
The Cochrane Library includes more than 7000 systematic reviews (www.cochrane.org/about-us). Most focus on the effectiveness of interventions, although diagnostic reviews and evaluations of other aspects of healthcare are being added. Cochrane was founded in response to Archie Cochrane’s call to make better use of evidence that went unused because studies were not visible or were too small to reach statistical significance. Pooling smaller studies in a meta-analysis (the numerical outcome of a systematic review) overcomes the issue of statistical power. One of the first examples of how systematic reviews can influence clinical practice is embodied in the Cochrane logo, which depicts the benefit to neonates of giving corticosteroids to women at risk of delivering prematurely. Before the results were combined in a meta-analysis, individual studies were inconsistent, casting doubt on the intervention. Unfortunately, many years passed before this new knowledge became routine practice.
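To make the pooling idea concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis. It is not drawn from the Cochrane corticosteroid data; all trial numbers below are hypothetical. Each simulated small trial is inconclusive on its own (its confidence interval crosses an odds ratio of 1), but the pooled estimate is precise enough to show a benefit:

```python
import math

# Hypothetical small trials: (events_treatment, n_treatment, events_control, n_control)
trials = [
    (4, 60, 9, 60),
    (3, 50, 7, 50),
    (5, 70, 10, 70),
    (2, 40, 6, 40),
]

weight_sum = 0.0
weighted_effect_sum = 0.0
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                 # non-events in each arm
    log_or = math.log((a * d) / (b * c))  # log odds ratio for this trial
    var = 1/a + 1/b + 1/c + 1/d           # approximate variance of the log odds ratio
    w = 1 / var                           # inverse-variance weight
    weight_sum += w
    weighted_effect_sum += w * log_or
    half_ci = 1.96 * math.sqrt(var)
    print(f"trial OR={math.exp(log_or):.2f}, "
          f"95% CI {math.exp(log_or - half_ci):.2f} to {math.exp(log_or + half_ci):.2f}")

# Fixed-effect pooled estimate: weighted average of the trial effects
pooled = weighted_effect_sum / weight_sum
pooled_half_ci = 1.96 * math.sqrt(1 / weight_sum)
print(f"pooled OR={math.exp(pooled):.2f}, "
      f"95% CI {math.exp(pooled - pooled_half_ci):.2f} to {math.exp(pooled + pooled_half_ci):.2f}")
```

This is the statistical mechanism behind the Cochrane logo example: combining several underpowered trials yields a single, adequately powered estimate.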
Since the 1990s, systematic reviews have established themselves as the highest level of evidence and have taken on a central role in clinical practice guidelines. The most recent meta-analysis of antidepressant medication found no significant difference in efficacy between antidepressants.21 If you decide to offer Anne an antidepressant, therapeutic guidelines recommend choosing one on the basis of the adverse effect profile.
Relevance to primary care
The ‘ecology’ of general practice is unique, with a low prevalence of serious illness, high prevalence of multimorbidity, and multiple interactions with social and environmental factors. The evidence supporting best practice, therefore, should reflect this complexity. Is that the case?
A review of 45 UK guidelines found that many of their recommendations were based on studies with little or no relevance to primary care.22 Reliance on research from secondary care may also ignore interventions and approaches used successfully in primary care. For instance, an analysis of depression guidelines found that associated social risk factors were often not mentioned and that psychological treatments received limited attention.23 Problems of guideline relevance for GPs are exacerbated by the shortage of research conducted in general practice. Although Australian general practice research has grown significantly over the past two decades, the recent defunding of programs supporting primary care-based research is feared to reduce research productivity.24
Guideline implementation
Australian GPs have access to guidelines that build recommendations on the best available evidence, while acknowledging their shortcomings. Guidelines have improved patient care,25 but there is room for improvement. Addressing the issues of transparency, conflict entanglement and relevance of research to general practice is imperative.
For guideline users, this means that a critical attitude and careful scrutiny of both the evidence and our own preconceptions remain crucial. Creating a ‘repository’ of reliable and relevant resources, such as the National Guideline Clearinghouse in the US (www.guideline.gov), will make it easier to access guidelines. Transparency about the provenance of guidelines can help protect us and our patients from harm driven by commercial interests (Box 1). Improving the relevance of evidence for GPs requires a concerted effort to increase the number of studies conducted in primary care.24
We need evidence about all aspects of healthcare, including prevention, diagnosis, treatment, health services delivery and policies. All stakeholders, including patients, clinicians, administrators and policymakers, must be part of this process. In addition, more GPs need to be involved in summarising and synthesising evidence to ensure better relevance of guidelines and better outcomes for patients.
Box 1. Tips for guideline users: How reliable and relevant is this guideline?
Reliability
- Who published the guideline?
- Is the development process transparent?
- What do we know about the publisher’s and authors’ conflicts of interest?
- Has an evidence grading system been used?
Relevance for general practice
- Have general practitioners been involved in development of the guideline?
- Has evidence from general practice been incorporated?
- Is applicability to general practice discussed?
Authors
Mieke L van Driel, MD, DipTropMed, MSc, PhD, FRACGP, Professor of General Practice and Head Primary Care Clinical Unit and Discipline of General Practice, Primary Care Clinical Unit, Faculty of Medicine, University of Queensland, Qld. m.vandriel@uq.edu.au
Geoffrey Spurling, MBBS, MPH, FRACGP, Senior Lecturer, Primary Care Clinical Unit, Faculty of Medicine, University of Queensland, Qld.
Competing interests: None
Provenance and peer review: Commissioned, externally peer reviewed.