Blog

3 Ways Technology Improves Continuing Medical Education (CME) Outcomes

In an effort to continuously improve the quality of continuing medical education (CME) activities, education providers are getting serious about outcomes. Outcomes data, after all, can show you whether a CME activity is achieving the desired result: performance improvements that positively impact patient health.

Tracking outcomes is inherently tricky. Learners might have excelled in an activity, but how do you know they’ve actually acquired new knowledge? And even if they have, how do you gauge whether that knowledge is helping them become more effective practitioners? Getting definitive answers to these questions has always been – and continues to be – very difficult.

Thankfully, new technologies are making it easier to obtain reliable data on CME outcomes. Not only can CME providers now gauge whether learners have discovered something new and useful; they can also verify whether learners are implementing that new knowledge in everyday practice.

Let’s take a look at three ways technology can help you assess outcomes and continue to improve CME quality.

1. Pre-activity and post-activity tests provide proof of learning.

You’re probably familiar with Moore’s outcomes taxonomy, which the ACCME uses to measure the real-world impact of CME activities. Often presented as a pyramid, the taxonomy breaks down like this:

1. Participation
2. Satisfaction
3. Learning
   3A. Declarative Knowledge
   3B. Procedural Knowledge
4. Competence
5. Performance
6. Patient Health
7. Community Health
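
To keep these levels straight as we go, here’s a minimal sketch pairing each level with the data source that can address it. The pairings reflect this article’s framing, not an official ACCME specification:

```python
# Illustrative mapping of Moore's levels to the data sources discussed
# in this article. This is the article's framing, not an official
# ACCME specification.
MOORE_LEVELS = {
    "1. Participation": "attendance data",
    "2. Satisfaction": "post-activity evaluation data",
    "3A. Declarative Knowledge": "pre-test vs. post-test comparison",
    "3B. Procedural Knowledge": "pre-test vs. post-test comparison",
    "4. Competence": "outcomes survey measured against a commitment to change",
    "5. Performance": "outcomes survey questions about everyday practice",
    "6. Patient Health": "beyond the tools covered here",
    "7. Community Health": "beyond the tools covered here",
}

for level, source in MOORE_LEVELS.items():
    print(f"{level}: {source}")
```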

In the absence of the right technologies, assessing outcomes beyond Level 1 can be extremely time-consuming. However, providers using a dedicated LMS for their CME activities can often address Levels 1 through 3B with automated pre-activity and post-activity tests. When used in tandem for a specific CME activity, these tests go beyond capturing participation (attendance data) and satisfaction (post-activity evaluation data) to actually show whether learners are learning – and how well they understand specific concepts and procedures.

Consider an activity on lupus diagnosis, the goal of which is to show learners how to distinguish lupus symptoms from those of other inflammatory diseases. A post-activity test alone cannot reveal whether the activity was effective, since a learner with a perfect score might have already known the answers before completing the activity. Pairing pre-activity and post-activity assessments, however, reveals whether learners acquired new knowledge, satisfying Level 3B of Moore’s taxonomy.

With the right LMS, CME providers can automate the delivery of pre-activity and post-activity tests. If desired, providers can even require completion of a pre-test before granting access to activity content. They can then compare post-test results against pre-test baselines to measure learning.
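
As a concrete illustration, here’s a minimal sketch of that comparison, assuming score data exported from an LMS. The learner IDs and scores are hypothetical:

```python
# A minimal sketch of pre-/post-test comparison. The learner IDs and
# scores are hypothetical; a real LMS export would supply this data.

def knowledge_gain(pre: float, post: float) -> float:
    """Percentage-point improvement from pre-test to post-test."""
    return post - pre

# Hypothetical results for the lupus-diagnosis activity (scores out of 100).
results = [
    {"learner": "A", "pre": 55, "post": 90},
    {"learner": "B", "pre": 85, "post": 88},  # high baseline, small gain
    {"learner": "C", "pre": 40, "post": 75},
]

for r in results:
    gain = knowledge_gain(r["pre"], r["post"])
    print(f"Learner {r['learner']}: {r['pre']} -> {r['post']} ({gain:+} pts)")

# Cohort-level mean gain indicates whether the activity as a whole
# produced new knowledge (Moore's Level 3).
mean_gain = sum(knowledge_gain(r["pre"], r["post"]) for r in results) / len(results)
print(f"Mean gain: {mean_gain:+.1f} pts")
```

Note learner B: a near-perfect post-test score alone proves little, but the small pre-to-post gain shows most of that knowledge predated the activity.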

[Image: Rievent comparison reporting]

2. Post-activity outcomes surveys measure learner competence.

At the Boston University School of Medicine, learners make a “commitment to change” following the completion of live or enduring CME activities. Based on the activity content, learners declare, in writing, that they will modify their approach in some specific way.

Why compel learners to make a commitment? Because it positions them to reach Level 4 of Moore’s outcomes taxonomy.

CME providers can assess competence by administering a post-activity outcomes survey at a specified interval after a learner completes an activity. The commitment to change provides a benchmark against which to measure that competence, revealing whether the activity led to its desired outcome.

Let’s say a learner completes an activity on low-dose computed tomography (LDCT), the goal of which is to clarify the types of patients for whom regular LDCT screenings are essential. A post-activity test prompts learners to “commit to change” by checking off a list of indications that a patient requires LDCT screening. One to two months later, the learner automatically receives a post-activity outcomes survey that asks the same question. With LMS reporting tools, providers can compare a learner’s responses on the post-activity test and the outcomes survey to reveal whether he or she retained the knowledge acquired in the CME activity.
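
Here’s a minimal sketch of what that comparison might look like, assuming both response sets have been exported from the LMS. The indication labels and responses are hypothetical:

```python
# A minimal sketch of retention scoring for the LDCT example. The
# indication labels and responses are hypothetical; a real workflow
# would pull both response sets from the LMS's survey exports.

# Indications the learner checked in the post-activity "commitment to
# change" question, immediately after the activity.
commitment = {
    "age 50-80",
    "20 pack-year smoking history",
    "current smoker or quit within 15 years",
}

# The same question, answered one to two months later on the outcomes survey.
follow_up = {
    "age 50-80",
    "20 pack-year smoking history",
}

retained = commitment & follow_up  # still recalled at follow-up
lost = commitment - follow_up      # committed to, but not recalled

retention_rate = len(retained) / len(commitment)
print(f"Retention rate: {retention_rate:.0%}")  # -> Retention rate: 67%
print(f"Not retained: {sorted(lost)}")
```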

3. Outcomes surveys measure performance, too.

Ultimately, we want to link CME activities to positive outcomes in patient health. However, before we can do that, we need to determine whether the learner is translating retained competence into performance improvements.

Continuing with the LDCT example, a provider might include an additional question that speaks directly to a learner’s application of the new knowledge. The question, “Did you recommend LDCT for more (or fewer) patients as a direct result of your participation in this activity?” would help the provider gauge whether an activity is having a direct impact on the learner’s everyday performance.
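
Tallying those survey responses is straightforward once they’re exported. A minimal sketch, with hypothetical response options and counts:

```python
# A minimal sketch of tallying that performance question. The response
# options and counts are hypothetical.
from collections import Counter

responses = [
    "more", "more", "no change", "more", "fewer", "no change", "more",
]

tally = Counter(responses)
total = len(responses)
for answer, count in tally.most_common():
    print(f"{answer:>9}: {count} ({count / total:.0%})")
```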

And who wouldn’t like to know whether their CME activities are making an impact?

Thanks to technology, this entire process – from administering pre-tests, post-tests, and post-activity surveys to collecting learners’ responses – can unfold automatically, enabling CME administrators to address the first five levels of Moore’s taxonomy without manually collating assessments.

The result? No more guessing about whether CME activities lead to positive outcomes. Now you can know whether an activity improves how learners practice medicine. And when it doesn’t, you’ll have a better idea of what changes are necessary to make the activity more valuable.