Evidence-based practice (EBP) is an approach to care that integrates the best available research evidence with clinical expertise and patient values. EBP encourages clinicians to combine information from high-quality research (quantitative and qualitative) with their clinical expertise and the client's background, preferences and values when making decisions.
It encourages us to reflect on our practice by asking questions such as:
- Why am I doing this in this way?
- Is there evidence that can guide me to do this in a more effective way?
Different practitioners at different levels of responsibility will require different skills for EBP and different types of evidence, but with the right support, practice and experience we can all learn how to identify and use appropriate evidence-based practices competently. Effectively incorporating evidence-based practices into clinical care requires a systematic approach, using the following five-step process:
The research question should be clear and focused - not too vague, too specific or too broad.
QH staff can use the TRIP PICO template and the Embase PICO template.
An ‘answerable’ question in research terms is one which seeks specific knowledge, is framed to facilitate literature searching, and therefore follows a semi-standardised structure.
PICO is a framework that makes the process of asking an answerable question easier. The general format of a ‘PICO’ question is: “In [Population], what is the effect of [Intervention] on [Outcome], compared with [Comparison Intervention]?” For example: “In adults with type 2 diabetes, what is the effect of metformin on HbA1c levels, compared with placebo?”
P | I | C | O
---|---|---|---
Patient, Population | Intervention (or Exposure) | Comparison (or Control), if appropriate | Outcome
Most important characteristics of the patient (e.g. age, disease/condition, gender) | Main intervention (e.g. drug treatment, diagnostic/screening test) | Main alternative (e.g. placebo, standard therapy, no treatment, gold standard) | What you are trying to accomplish, measure, improve or affect (e.g. reduced mortality or morbidity, improved memory)
A variant of PICO is PICOS, where the S stands for study design: it establishes which study designs are appropriate for answering the question, e.g. a randomised controlled trial (RCT).
Other frameworks can be used to structure different types of questions. SPIDER:

S | PI | D | E | R
---|---|---|---|---
Sample | Phenomenon of Interest | Design | Evaluation | Research type
SPICE:

S | P | I | C | E
---|---|---|---|---
Setting (where?) | Perspective (for whom?) | Intervention (what?) | Comparison (compared with what?) | Evaluation (with what result?)
ECLIPSE:

E | C | L | I | P | Se
---|---|---|---|---|---
Expectation (improvement, information or innovation) | Client group (at whom is the service aimed?) | Location (where is the service located?) | Impact (outcomes) | Professionals (who is involved in providing/improving the service?) | Service (for which service are you looking for information?)
Evidence can be broadly grouped into two main categories:
1. Filtered (secondary) information provides analysis, synthesis, interpretation, commentary and/or evaluation of original research studies (unfiltered information), and often makes recommendations for practice. It can be found in systematic reviews or meta-analyses of all relevant RCTs, and in evidence-based clinical practice guidelines based on systematic reviews of RCTs or on three or more good-quality RCTs with similar results.
2. Unfiltered (primary) information contains original data and analysis from research studies, with no external appraisal or interpretation provided. At the bottom of the evidence hierarchy sits anecdotal information, which is considered the least reliable source of evidence.
For assistance searching for evidence contact COH-Library@health.qld.gov.au
Understanding the levels of evidence and how to critically appraise the strength and quality of evidence in a timely way is crucial. Not all research is of sufficient quality to inform clinical decision making and some evidence-based practices have only been shown by research to be effective with very specific populations.
Critical appraisal is the process of carefully and systematically assessing the outcome of scientific research (evidence) to judge its trustworthiness, value and relevance in a particular context.
It involves looking at the way a study is conducted and examines factors such as internal validity, generalisability and relevance. The process will help you decide whether the evidence is appropriate for your setting and of sufficient quality to be used for effective decision making.
Trying to make sense of health research? This tool will guide you through a series of questions to help you to review and interpret a published health research paper. http://www.understandinghealthresearch.org/
Research-based evidence should be integrated with your own clinical experience and expertise and the patient's preferences. Implementing evidence in practice can be challenging.
Descriptions of different implementation strategies can be found in these books:
Greenhalgh, T. (2019). Applying evidence with patients. In Greenhalgh, T. How to read a paper (6th ed., pp. 219-231). Hoboken, NJ: John Wiley & Sons.
McCluskey, A., & O'Connor, D. (2017). Implementing evidence: closing research-practice gaps. In T. Hoffmann, S. Bennett & C. Del Mar (Eds.), Evidence-based practice across the health professions (3rd ed., pp. 384-408). Chatswood, NSW: Elsevier Australia.
The most important, but often most challenging, step in the evidence-based practice (EBP) process is confirming that the change we intended actually occurred.
After a practice change has been implemented, it is important to ask whether the expected outcome was achieved. This involves evaluating not only how well you performed the EBP process, but also what the outcomes were for those involved in and/or in receipt of the practice change.