Critical appraisal: Systematic Reviews and Clinical Practice Guidelines

Critical appraisal: Systematic Reviews and Clinical Practice Guidelines for Drug Therapy. Nancy J. Lee, Pharm.D., BCPS, Research Fellow, Drug Effectiveness Review Project, Oregon Evidence-based Practice Center, Oregon Health and Science University. To receive 1.25 AMA PRA Category 1 Credits™ you must review this section and answer the CME questions at the end. Release date: January 2009. Expiration date: January 2012.

Attachments • The attachments tab in the upper right-hand corner contains documents that supplement the presentation • Handouts of slides and a glossary of terms can be found under this tab and are available to print out for your use • URLs to online resources are also available

Program funding This work was made possible by a grant from the state Attorney General Consumer and Prescriber Education Program which is funded by the multistate settlement of consumer fraud claims regarding the marketing of the prescription drug Neurontin®.

Continuing education sponsors: The following activity is jointly sponsored by: The University of Texas Southwestern Medical Center and the Federation of State Medical Boards Research and Education Foundation.

CME information
Program Speaker/Author: Nancy J. Lee, Pharm.D., BCPS, Research Fellow, Oregon Health and Science University, Oregon Evidence-based Practice Center, Drug Effectiveness Review Project
Course Director: Barbara S. Schneidman, MD, MPH, Federation of State Medical Boards Research and Education Foundation, Secretary; Federation of State Medical Boards, Interim President and Chief Executive Officer
Program Directors: David Pass, MD, Director, Health Resources Commission, Oregon Office for Health Policy and Research; Dean Haxby, Pharm.D., Associate Professor of Pharmacy Practice, Oregon State University College of Pharmacy; Daniel Hartung, Pharm.D., MPH, Assistant Professor of Pharmacy Practice, Oregon State University College of Pharmacy
Target Audience: This educational activity is intended for members of committees involved with medication use policies and for health care professionals who are involved with medication prescribing.
Educational Objectives: Upon completion of this activity, participants should be able to: recognize benefits and limitations of systematic reviews and clinical practice guidelines; assess quality of systematic reviews and clinical practice guidelines; identify differences between systematic reviews, narrative reviews, and meta-analyses; recognize components of forest plots used in meta-analyses in systematic reviews; review the grading of the strength of evidence used in clinical practice guideline development.

CME policies
Accreditation: This activity has been planned and implemented in accordance with the Essential Areas & Policies of the Accreditation Council for Continuing Medical Education through the joint sponsorship of The University of Texas Southwestern Medical Center and the Federation of State Medical Boards Research and Education Foundation. The University of Texas Southwestern Medical Center is accredited by the ACCME to provide continuing medical education for physicians.
Credit Designation: The University of Texas Southwestern Medical Center designates this educational activity for a maximum of 1.25 AMA PRA Category 1 Credits™. Physicians should only claim credit commensurate with the extent of their participation in the activity.
Conflict of Interest: It is the policy of UT Southwestern Medical Center that participants in CME activities should be made aware of any affiliation or financial interest that may affect the author’s presentation. Each author has completed and signed a conflict of interest statement. The faculty members’ relationships will be disclosed in the course material.
Discussion of Off-Label Use: Because this course is meant to educate physicians about what is currently in use and what may be available in the future, “off-label” use may be discussed. Authors have been requested to inform the audience when off-label use is discussed.

DISCLOSURE TO PARTICIPANTS
It is the policy of the CME Office at The University of Texas Southwestern Medical Center to ensure balance, independence, objectivity, and scientific rigor in all directly or jointly sponsored educational activities. Program directors and speakers have completed and signed a conflict of interest statement disclosing a financial or other relationship with a commercial interest related directly or indirectly to the program. Information and opinions offered by the speakers represent their viewpoints. Conclusions drawn by the audience should be derived from careful consideration of all available scientific information. Products may be discussed for uses outside current approved labeling.
FINANCIAL RELATIONSHIP DISCLOSURE
Faculty: David Pass, MD; Dean Haxby, Pharm.D.; Daniel Hartung, Pharm.D., MPH; Nancy Lee, Pharm.D., BCPS; Barbara S. Schneidman, MD, MPH
Type of Relationship/Name of Commercial Interest(s): None; Employment/CareOregon; None

Learning objectives I. Systematic reviews – Recognize benefits and limitations – Assess quality of systematic reviews – Identify the differences between systematic reviews, narrative reviews, and meta-analyses – Recognize components of forest and funnel plots used in systematic reviews with meta-analyses II. Guidelines – Identify strengths and weaknesses – Assess and recognize quality components – Review grading of the strength of evidence used in guidelines

I. Systematic Reviews: Outline • Why, When, What? – Benefits and limitations • Steps in conducting Systematic Reviews – Scientific process • Quality assessment of Systematic Reviews – Tools and checklists

Why are systematic reviews needed?
• Too much information • Not enough time
– More than 2 million articles published yearly from more than 200 biomedical journals
– Results can often be contradicted by subsequent trials
• Taken together, a clearer picture can emerge
– Minimize biases
– Increase statistical power
– Improve generalizability
– Improve allocation of resources for other needed trials (= minimize funding of unnecessary trials)

Did trialists review all the literature before conducting their own study? Fergusson D, et al. Clin Trials 2005;2:218-32.

69 trials: After RCT #12, the cumulative effect estimate (OR) stabilizes in the range of 0.25–0.35. Throughout the cumulative meta-analysis, the upper limit of the confidence interval never crossed 0.65. The largest trial, published in 1992, was referenced in 7 of 44 (16%) trials published more than 1 year later. Overall, ~20% of trials cited previous trials in their study.

When are systematic reviews needed? • When an important question needs to be addressed – Gaps in the literature or conflicting results • When there is uncertainty regarding an intervention – Uncertainty may lie in: • Population, Intervention, Outcomes • When several primary studies exist – Lack of strong evidence

Limitations of systematic reviews • Only as good as what is available and what is included – Issue of publication bias • Restricted to published results – Quality of individual trials • “Garbage In, Garbage Out” • Good quality systematic reviews typically do not address all the issues relevant for decision making – Evidence outside the scope of the review may be relevant and needed for decision making – Cost and implementation implications may not always be addressed

Limitations of systematic reviews • Unrealistic expectations – What if results conflict with a good-quality large landmark trial? – About 10-23% of large trials disagreed with meta-analyses* • May not always include the most up-to-date studies – When was the last literature search conducted? – Estimate: 3-5 years** • Does not make decisions for the user – These are not guidelines – The reader uses their own judgment. *Ioannidis, et al. JAMA 1998;279:1089-93. **Shojania, et al. Ann Intern Med 2007;147:224-33.

What it is and isn’t
Feature | Narrative review (traditional) | Systematic review
Questions | Often broad in scope | Focused clinical question(s)
Sources and search strategy | Not usually specified; potentially biased | Comprehensive and explicit search strategy
Study eligibility | Not usually specified; potentially biased | Prespecified; criterion-based; uniformly applied
Appraisal | Variable; assessment of the quality of evidence typically not reported | Rigorous critical appraisal; typically includes quality assessment of evidence and provides insight into potential study biases
Synthesis | Often qualitative | Qualitative with or without meta-analyses
Inferences | Sometimes evidence based | Usually evidence based
Adapted from Cook DJ, et al. Ann Intern Med 1997;126:376-80.

The advantage of using carefully done, systematic reviews becomes clear when we observe how often mistakes are made when research is reviewed non-systematically, whether by experts or others. The costs of mistaken conclusions based on nonsystematic reviews can be high. -Oxman, AD

I. Systematic Reviews: Outline • Why, When, What? – Benefits and limitations • Steps in conducting Systematic Reviews – Scientific process • Quality assessment of Systematic Reviews – Tools and checklists

Systematic Reviews: A scientific process. Figure 1. Copyright © 1997 BMJ Publishing Group Ltd. From Greenhalgh T. BMJ 1997;315:672-5.

What’s the purpose and question? • Developed a priori – Most important – Relevant and sensible to practitioners and patients? – Typically not changed during the review process • What are we asking? – Efficacy – Effectiveness • Well-defined? – PICOS • Any exclusions? – Language restrictions or type of study design

What was the study eligibility? • Determines what studies get included in a systematic review – Formed a priori – Applied uniformly by at least 2 reviewers (dual review) • Study inclusion and exclusion criteria should relate to the areas defined by PICO(S) – Population – Intervention – Comparator – Outcome – Setting/study design

Study eligibility
• What are the consequences of being too inclusive or exclusive?
– Too inclusive
• Scope is too large
• Lose focus of the question
• Main point may be lost
• May be difficult to interpret
– Too exclusive
• Scope is too narrow
• Potential to exclude important trials
• May end up not having enough evidence
• If unaware, could lead to biased conclusions

Example: Study eligibility
Population: adults and children with type 2 diabetes mellitus
Intervention, comparator: sitagliptin; placebo; other oral antihyperglycemic agents
Outcomes: all-cause mortality, micro- and macrovascular disease, quality of life (intermediate outcomes: A1c)
Study design: for efficacy/effectiveness, RCTs and good-quality systematic reviews; for harms, RCTs, good systematic reviews, large comparative cohort observational studies
Study duration: ≥ 12 weeks in duration
Exclusions: poor-quality trials/studies were excluded from analyses

Finding all relevant studies: Search strategy
• Medical librarian important
• Key search terms should at the very least be reported
Was the search strategy comprehensive?
• Were any significant studies missing? – If yes, why?

Example: Search strategy

Finding all relevant studies: Sources
• Electronic databases
– MEDLINE (Ovid/PubMed)
– Cochrane Library
– EMBASE
– PsycINFO
– CINAHL
• Hand searching
– Reference lists of trials and/or reviews
– Journals
• Sources for unpublished information
– FDA website
– ClinicalTrials.gov
– Registries
• Industry dossiers

Selection of studies
Was the selection of studies unbiased?
• Review titles and abstracts from initial search
• Review of full-text articles
• Uniform application of study eligibility criteria
• Dual review for each step – Disagreements resolved by consensus

Issue of publication bias
• “Positive” studies are more likely to be published… − rapidly, in English, more than once
• Failure to publish or submit “negative” studies by investigators, peer reviewers, editors, and Pharma − may knowingly or unknowingly influence the results toward the positive
Was it addressed?
Adapted from Cochrane Open Learning. Module 15. Publication bias 2002.

“researchers and statisticians have long suspected that the studies published in the behavioral sciences are a biased sample of the studies that are actually carried out…. The extreme view of this problem, the “file drawer problem,” is that the journals are filled with the 5% of the studies that show Type I errors, while the file drawers back at the lab are filled with the 95% of the studies that show nonsignificant (e.g., p > .05) results.” (Rosenthal, 1979, p. 638) Scargle. J of Scientific Explor 2000;14(1):91-106.

Investigating for presence of publication bias • Visually check for asymmetry in funnel plots – NOT a tool to “diagnose” bias • Potential sources of asymmetry – True heterogeneity – Data irregularities – Chance • Other statistical methods – Ask a biostatistician. Egger, et al. BMJ 1997;315:629-34. Figure 1 from Peters, et al. JAMA 2006;295:676-80.
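
To make the funnel-plot idea concrete, here is a minimal sketch (not from the presentation) that plots simulated study results with matplotlib; the effect sizes, standard errors, and seed are illustrative assumptions, and a real review would plot the actual extracted data.

```python
# Hypothetical funnel plot: effect size vs. precision for simulated studies.
# Asymmetry around the pooled estimate can suggest -- but does not prove -- publication bias.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
se = rng.uniform(0.05, 0.5, 30)                 # standard errors (small = large, precise study)
log_or = rng.normal(loc=-0.4, scale=se)         # simulated log odds ratios around a true effect

pooled = np.average(log_or, weights=1 / se**2)  # inverse-variance pooled estimate

plt.scatter(log_or, se)
plt.axvline(pooled, linestyle="--", label="Pooled estimate")
plt.gca().invert_yaxis()                        # most precise studies appear at the top
plt.xlabel("Log odds ratio")
plt.ylabel("Standard error")
plt.title("Funnel plot (simulated data)")
plt.legend()
plt.show()
```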

Ways to minimize publication bias in the review process • Identify duplicate publications • Contact study authors or manufacturer – Often difficult to obtain information – Time intensive • Check sources for grey literature – FDA review documents – Clinical trial registries – Databases • Check for any language restrictions. Rising, et al. PLoS Med 5(11):e217.

Quality assessment of included studies Was quality assessment of individual studies conducted and reported in the systematic review? • >25 different tools – Jadad scale, Risk of Bias tool, DERP method (for trials) – Other scales or checklists (for observational studies) • How were poor- or low-quality trials handled in the review? – Were these excluded? – Sensitivity analyses?

Example: Bjelakovic, et al. Lancet 2004;364:1219-28.

Data abstraction
• Dual abstraction and review
• Types of data abstracted:
– Study design
– Setting
– Population characteristics (age, sex, ethnicity)
– Inclusion/exclusion criteria
– Interventions
– Comparisons
– Number screened, eligible, enrolled
– Number withdrawn
– Method of outcome ascertainment
– Results
– Adverse events

Data synthesis • Two methods: qualitative and quantitative • Qualitative – Discussion of results (synthesis) • in relation to each other • in relation to study quality • Not a reporting of results from each study. Adapted from Cochrane Collaboration open learning materials for reviewers 2002-2003.

Data synthesis • Quantitative or meta-analysis – Statistical method for combining results from >1 study • Advantage: provides an estimate of treatment effect • Disadvantage: misleading estimate if used inappropriately – Misuse of terminology • Systematic review and meta-analysis = NOT the same. Adapted from Cochrane Collaboration open learning materials for reviewers 2002-2003.

Meta-analysis: Is combining results of individual studies appropriate? • The review should provide enough information about the included studies for you to judge whether combining results was appropriate. • Two types of heterogeneity – Clinical heterogeneity • Does it make clinical sense to combine these studies? – Statistical heterogeneity • Are there inconsistencies in the results? • Calculation of Q or I-squared (I²) statistic • Common sources of heterogeneity – Clinical diversity between studies, conflicts of interest, and differences in study quality. Adapted from Cochrane Collaboration open learning materials for reviewers 2002-2003.
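
As a rough, hypothetical illustration of the statistical-heterogeneity measures named above, the sketch below computes Cochran's Q and I² for five made-up trial results; real reviews rely on dedicated meta-analysis software, and all numbers here are assumptions for demonstration only.

```python
# Hypothetical worked example: Cochran's Q and I-squared for statistical heterogeneity.
import numpy as np

# Illustrative log odds ratios and standard errors from five made-up trials.
effects = np.array([-0.50, -0.20, -0.65, 0.10, -0.40])
se = np.array([0.20, 0.25, 0.30, 0.15, 0.22])

weights = 1 / se**2                              # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

q = np.sum(weights * (effects - pooled) ** 2)    # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100         # % of variability beyond chance

print(f"Pooled log OR: {pooled:.3f}")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```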

Example: Clinical heterogeneity?

How to read a Forest plot
• Line of no effect
• Each square box = point estimate; size of the square is proportional to the weight/size (precision) of the study
• Horizontal line = confidence interval
• Diamond = pooled estimate of the trials
Forest plot adapted from Bjelakovic, et al. Lancet 2004;364:1219-28.
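
For readers who want to see those components built up from scratch, here is a bare-bones sketch that draws a forest plot from hypothetical trial data (squares sized by weight, horizontal confidence intervals, a diamond for the pooled estimate, and the line of no effect); it is illustrative only and is not the plot from Bjelakovic et al.

```python
# Bare-bones forest plot for four hypothetical trials (log odds ratios, 95% CIs).
import numpy as np
import matplotlib.pyplot as plt

trials = ["Trial A", "Trial B", "Trial C", "Trial D"]
log_or = np.array([-0.50, -0.20, -0.65, 0.10])
se = np.array([0.20, 0.25, 0.30, 0.15])
ci_low, ci_high = log_or - 1.96 * se, log_or + 1.96 * se

weights = 1 / se**2                                      # inverse-variance weights
pooled = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

fig, ax = plt.subplots()
y = np.arange(len(trials), 0, -1)
ax.hlines(y, ci_low, ci_high)                            # horizontal lines = confidence intervals
ax.scatter(log_or, y, s=weights * 15, marker="s")        # squares sized by study weight
ax.hlines(0, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
ax.scatter([pooled], [0], marker="D", s=120)             # diamond = pooled estimate
ax.axvline(0, linestyle="--", color="grey")              # line of no effect (log OR = 0)
ax.set_yticks(list(y) + [0])
ax.set_yticklabels(trials + ["Pooled"])
ax.set_xlabel("Log odds ratio")
ax.set_title("Forest plot (hypothetical data)")
plt.show()
```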

What statistical method was used for the meta-analysis? • Two common methods – Fixed effects model • Assumes homogeneity – Random effects model • Assumes heterogeneity – Use both methods and select 1 to present • Should briefly discuss why a certain method was selected
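
For illustration only (not part of the presentation), the sketch below pools the same kind of hypothetical trial data under a fixed-effects model and under a DerSimonian-Laird random-effects model; the numbers are assumptions, and real analyses would typically use established packages such as R's meta or metafor.

```python
# Illustrative comparison of fixed-effects and DerSimonian-Laird random-effects pooling
# on made-up log odds ratios.
import numpy as np

effects = np.array([-0.50, -0.20, -0.65, 0.10, -0.40])
se = np.array([0.20, 0.25, 0.30, 0.15, 0.22])

# Fixed effects: inverse-variance weights, assumes one true underlying effect.
w_fixed = 1 / se**2
pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

# Random effects: adds between-study variance tau^2 (DerSimonian-Laird estimate).
q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
w_random = 1 / (se**2 + tau2)
pooled_random = np.sum(w_random * effects) / np.sum(w_random)

print(f"Fixed-effects pooled log OR:  {pooled_fixed:.3f}")
print(f"Random-effects pooled log OR: {pooled_random:.3f} (tau^2 = {tau2:.3f})")
```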

Invalid methods of synthesis • Picking and choosing – Pick what you like, ignore what you don’t like • Searching for proof – Data dredging or data mining • Vote counting – Counting the number of studies with positive and negative results without considering study quality

Bridging the results to the conclusion • Do conclusions reflect the uncertainty in the evidence? • Are gaps identified and recommendations for future research provided?

I. Systematic Reviews: Outline • Why, When, What? – Benefits and limitations • Steps in conducting Systematic Reviews – Scientific process • Quality assessment of Systematic Reviews – Tools and checklists

Key questions to ask when assessing quality of systematic reviews • Is there a clear, focused, clinically relevant question? • Were study eligibility criteria reported and rationale provided (if needed)? • Was the search for relevant studies detailed and exhaustive? • Were included trials assessed for quality, and were the assessments reproducible? • How were data synthesized, and was this appropriate? • Are the conclusion statements clear, and do they reflect the results from the evidence that was reviewed?

Tools and lists for assessing systematic review quality • >10 different scales and checklists – Oxman and Guyatt – Sacks, et al – DERP method

Oxman and Guyatt
1. Were the search methods used to find evidence on the key questions reported?
2. Was the search for evidence reasonably comprehensive?
3. Were the criteria used for deciding which studies to include reported?
4. Was bias in the selection of studies avoided?
5. Were the criteria used for assessing the validity of the included studies reported?
6. Was the validity of all the studies referred to in the text assessed using appropriate criteria?
7. Were the methods used to combine the findings of the relevant studies reported?
8. Were the findings of the relevant studies combined appropriately?
9. Were the conclusions made by the author(s) supported by the data reported?
10. How would you rate the scientific quality of this overview?
Shea B, et al. Eval Health Prof 2002;25(1):116-29.

Using Oxman and Guyatt method. From DERP report: http://www.ohsu.edu/drugeffectiveness

Using Oxman and Guyatt method (continued)

Sacks, et al
1. Prospective design: a. Protocol b. Literature search c. Lists of trials analyzed d. Log of rejected trials e. Treatment assignment f. Ranges of patients g. Ranges of treatment h. Ranges of diagnosis
2. Combinability: a. Criteria b. Measurement
3. Control of bias: a. Selection bias b. Data-extraction bias c. Interobserver agreement d. Source of support
4. Statistical analysis: a. Statistical methods b. Statistical errors c. Confidence intervals d. Subgroup analysis
5. Sensitivity analysis: a. Quality assessment b. Varying methods c. Publication bias
6. Application of results: a. Caveats b. Economic impact
7. Language

DERP method. From DERP report: http://www.ohsu.edu/drugeffectiveness

Database of Abstracts of Reviews of Effects

Author X

Summary: Systematic Reviews • Advantages and disadvantages – Can minimize biases that exist in individual studies – May not answer all questions of interest • Systematic reviews and meta-analyses are not synonymous – Meta-analysis is a statistical method of combining studies • Each step of the process should be questioned – Comprehensive search of evidence – Quality assessment of individual trials – Appropriate method of synthesis

Appraisal of guidelines

II. Guidelines: Outline • What is the purpose and what are the potential benefits and limitations? • Why do we need to critically assess guidelines? • Quality assessment of guidelines • Tools to help evaluate guidelines

Guidelines: steps beyond a review
• Incorporates the judgments and values involved in making recommendations
• Addresses a larger spectrum of issues relevant for clinical decision making
• Purpose:
– Provide clinical practice recommendations
– Improve quality of care and outcomes
– Seek to influence change in clinical practice
– Reduce inappropriate variation in practice
– Shed light on gaps in the evidence
[Figure: SRs, RCTs]

Guidelines are not intended to… • Provide a black-and-white answer for complex situations • Substitute for clinical insight and judgment • Be a legal resource in malpractice cases • Prompt providers to withdraw availability or coverage of therapies • Hinder or discourage scientific progress. Woolf S, et al. BMJ 1999;318:527-30.

Why is it necessary to critically assess clinical practice guidelines? • There are >2,500 published guidelines – Multiple guidelines with differing recommendations – Not all guidelines are of good/high quality – Consensus-based – “Evidence-based” (systematic methods, transparent) • Many “stakeholders” who are invested in the influence of their guidelines – Government organizations and healthcare systems – Professional societies – Pharmaceutical industry

A glance at guidelines from 1988-1998 • 3 items assessed: 1) Description of professionals involved 2) Search undertaken 3) Explicit grading of evidence for recommendation. Grilli, et al. Lancet 2000;355:103-6. N = 431 guidelines assessed.

Results from Grilli, et al.: All 3 criteria were met in only 5% of the identified guidelines, and 54% did not meet ANY of the items.

Assessing quality • Who was involved in the decision-making process? – Were all relevant perspectives considered? – To what extent were the funders of the guideline involved in the process? – Were conflicts of interest declared for each participant? • Were all important practice options and clinically relevant outcomes considered? – What was excluded, and was rationale provided? • How were the relative values of the outcomes weighed in terms of importance?

11 pain specialists involved, over 2 days in New Orleans. Target audience: primary care physicians, internal medicine physicians, geriatric physicians, and psychiatrists treating chronic pain. Conflict of interest reported for 10/11 members; 8 of 10 members received some sort of funding from Eli Lilly, which provided an educational grant for this guideline. Consensus guidelines: Assessment, diagnosis, and treatment of diabetic peripheral neuropathic pain. Mayo Clinic Proceedings 2006;81(4):S1-36.

Assessing quality • How was evidence retrieved? – Was it comprehensive? • Was there an explicit description of how “evidence” was used? – Systematic reviews? – Was there an approach to the hierarchy of evidence? • Was quality of the evidence assessed and reported? • How was the body of evidence graded?

Chou, et al. Ann Intern Med 2007;147:505-14.

One method of assessing the body of evidence: GRADE • GRADE-ing the strength of the evidence and recommendations reported in guidelines – To provide a systematic and explicit approach to making the judgments involved in a guideline process that can be used by all guideline developers

The approach considers: • Strength of the body of evidence – Study design – Risk of bias or limitations – Consistency of results – Precision – Directness of evidence • Strength of recommendation: – Strong vs. Weak

Example: GRADE table

The AGREE instrument

Summary: Guidelines • Incorporates values and judgments • Can improve care by reducing variation in practice – Not meant to provide black and white answers for complex problems • Not all guidelines are the same – Consensus-based approach – Evidence-informed approach • Important to question each step of the process – Who was involved? – How was evidence retrieved, synthesized, and graded? – How were recommendation decisions made?

Acknowledgements • Attorney General Consumer and Prescriber Education Program • Members of the technical advisory committee of this grant • Office for Oregon Health Policy and Research • The University of Texas Southwestern Medical Center • The Federation of State Medical Boards Research and Education Foundation

CME instructions • Please complete the survey, CME questions, and program evaluation after this slide • Don’t forget to click the finish button at the end of the CME questions • You should be directly linked to a CME form which you will need to fill out and fax, email, or mail in order to receive credit hours