After Your Next CME Meeting, Assess Performance Change
What’s CME all about, anyway? If it were about awarding credit so that clinicians could check a box and move on, designing activities and organizing meetings would be simple. You’d just publish any topically relevant content or book any speaker. Then you’d call it a day.
But that’s not what CME is about.
If you agree with Graham McMahon, President and CEO of the ACCME, you know that CME is about learning. In particular, it’s about learning new skills in an effort to grow and improve in the real world of clinical practice. Per McMahon:
Our system is not about delivering credit. It’s never been about that. Our system is about delivering high-quality learning to drive performance improvement and skill development for every single clinician that we touch, regardless of who they are or what team they work in.
That being the case, assessing learner performance isn’t exactly optional. It’s key to ensuring your CME activities have a positive impact in the workplace and improve the quality of patient care.
In other words, CME is about driving positive outcomes. As a CME provider, you need a way to tell whether your CME activities achieve the desired effect – but how do you do it, and with what tools?
Collect learner data after each CME meeting
According to Moore’s Outcomes Taxonomy, improved “performance” is a direct result of increased “competence”: when a clinician’s competence increases, performance in practice improves, and the result is better “patient health.”
Here’s how it all stacks up within the taxonomy itself:
1. Participation
2. Satisfaction
3. Learning
4. Competence
5. Performance
6. Patient Health
7. Community Health
CME providers want to assess levels 4 and 5 (competence and performance) so that they know levels 6 and 7 (patient and community health) will follow. But you can’t assess workplace performance when learners are still in the meeting hall following a live activity. You also can’t peer over their shoulders for several weeks or months, analyzing their every move to determine whether your CME activity made a difference.
What you can do is ask them how things are going. You just need the right data collection strategy.
Imagine a learner just attended a CME meeting about identifying symptoms of early-onset Parkinson’s Disease. The goal of the activity was to impart new, research-driven insights that can help learners distinguish between indicators of Parkinson’s Disease and those of other nervous system disorders, such as Progressive Supranuclear Palsy. You want to know whether the learner has, after a period of time, increased her competence in diagnosing nervous system disorders. You also want to know whether her exposure to the activity has improved her performance in the workplace.
The most practical way to do that is to send the learner a post-activity outcomes survey that includes questions like these:
- Prior to attending the CME meeting, how confident were you in your ability to effectively diagnose the nervous system disorders under discussion?
- Are you now more confident in your ability to diagnose these disorders? If so, to what degree?
- Have you applied the knowledge acquired during the activity in patient diagnoses? If so, has applying that knowledge changed treatment schedules or follow-up regimens for your patients?
- If you can tell whether these changes in your workplace behavior are benefiting patients, to what extent has patient health improved?
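Questions like the first two above lend themselves to simple quantitative analysis. As a minimal sketch, assuming responses are collected on a 5-point Likert scale (1 = not at all confident, 5 = very confident), you could summarize the average shift in self-reported confidence across learners like this (the function and sample data are illustrative, not from any particular survey tool):

```python
# Hypothetical sketch: summarize pre/post confidence responses from a
# post-activity outcomes survey. Assumes a 5-point Likert scale
# (1 = not at all confident, 5 = very confident).

from statistics import mean

def confidence_change(responses):
    """Average shift in self-reported confidence across learners.

    `responses` is a list of (pre, post) Likert ratings, one tuple
    per learner.
    """
    pre = mean(r[0] for r in responses)
    post = mean(r[1] for r in responses)
    return post - pre

# Four illustrative learner responses: (confidence before, confidence after)
survey_data = [(2, 4), (3, 4), (2, 5), (4, 4)]
print(confidence_change(survey_data))  # 1.5
```

A positive value suggests the activity moved the needle on competence; a value near zero is a signal to revisit the activity’s content or format.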
To be sure, the way you communicate with learners – and the types of questions you ask – might differ substantially from what appears above. The point is that you’re following up with learners after the activity to assess competence and, perhaps more importantly, performance.
That’s the “how” of performance assessment. Follow up with learners and gather the data. What we’re still missing is the “what.” What tools can you use to distribute post-activity outcomes surveys and analyze the responses?
Start automating post-activity outcomes evaluations
Your learners didn’t stumble into your CME meeting at random. One way or another, they registered for the meeting, submitted payment, and received directions and (possibly) a calendar notification. The best way to get your post-activity outcomes survey into learners’ hands is to capture the contact details they provide during registration and automatically email them the survey in the weeks following attendance.
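The core of that automation is simple: pair each registrant’s contact details with the meeting date and compute when the follow-up survey should go out. Here’s a minimal sketch; the `Registrant` class, the six-week delay, and the sample data are all illustrative assumptions, not part of any specific LMS:

```python
# Hypothetical sketch: queue a post-activity outcomes survey for each
# registrant, to be emailed a set time after the meeting. The names
# (Registrant, SURVEY_DELAY) and the six-week window are assumptions
# for illustration only.

from dataclasses import dataclass
from datetime import date, timedelta

SURVEY_DELAY = timedelta(weeks=6)  # assumed follow-up window

@dataclass
class Registrant:
    name: str
    email: str          # captured during meeting registration
    meeting_date: date

def survey_send_date(registrant):
    """Date the post-activity outcomes survey should be emailed."""
    return registrant.meeting_date + SURVEY_DELAY

r = Registrant("Dr. Lee", "lee@example.org", date(2024, 3, 1))
print(survey_send_date(r))  # 2024-04-12
```

In a real system, the send date would feed a scheduled job that emails the survey link only to registrants who also requested credit, confirming actual attendance.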
The more tightly you can integrate things like meeting registration, requests for credit, pre-tests, post-tests, and outcomes surveys, the better.
Basically, the “what” of performance assessment is a fully integrated learning management system for CME. When learners register for an activity, you’ve got their email addresses. Then, when they request (and receive) credit, you have confirmation that those learners actually attended the meeting in question. You can set up the outcomes survey before the meeting ever occurs and know that attending learners will receive it at the appropriate time.
In other words, the entire process is automatic.
You just log into a single piece of software to review learner responses to the outcomes survey, assess performance, and determine what modifications are needed for upcoming CME live events. Hospitals can do the same for RSS, whether for an individual session or at the conclusion of a series.
You’re creating a foolproof system for performance assessment
And nothing about it has to be hard. In fact, you want to make it as easy (and as automatic) as possible.
The result is a reliable way to gauge learner performance and assess the strengths of a particular CME meeting. As a provider, that’s exactly what you need to fulfill your mission of providing effective learning opportunities and enabling positive health outcomes.