Hacking course surveys

Course surveys: Is this cheating?

If you had a simple, honest way to sway course survey results by as much as 20% in your favor, would you use it? And what would that mean for the reliability of the surveys learners fill out after ILT sessions, webinars and elearning deliveries?

The setting

Imagine your work environment allowed you to deliver the same learning solutions both in person and as distance offerings. Imagine part of the evidence used to measure results is the “exit survey”, the one learners complete at the end of the offering. Imagine it contains several Kirkpatrick Level 1 questions on a 5-point Likert scale. Sound familiar?

I have observed that opinions about exit surveys are somewhat polarized. Some businesses believe they are useless and focus on results instead. At the other extreme, in an opinion that perhaps gravitates closer to line managers, the exit survey is everything.

I’ve had the opportunity to experiment and see what evidence I could bring to the business to reaffirm or discredit exit surveys, so I designed several experiments. This is one of them.

The experiment

Taking advantage of a newly scheduled batch of offerings, I decided to focus on one specific question within the survey: “Will you be able to apply what you learned back in your job?”. Yes, this question carries an assumption, and as such it was probably not the most scientific choice for an experiment. I chose it because if I were able to influence answers, it would be more impactful than others such as “Did you enjoy the course?”.

Next, I planted the experimental piece in half of the scheduled offerings. I’m not talking about a design change, just a small addition: at certain points during the course, mostly after discussions, exercises or section recaps, I would insert a comment such as “This is something you will be applying during your day-to-day work” or “…and this is why it’s so relevant to our jobs”. Only one sentence at a time, always after a section summary or brief recap, and always tied to the wording of the survey question.

Then I delivered the two versions of the course several times, in both their face-to-face and over-the-wire flavors. The number of deliveries was again too limited to make this a proper scientific experiment. The results, however, were surprising.

The results

While the survey results were largely consistent across all versions, the answers to my experiment question improved a full notch relative to the control group: a 20% swing on a 5-point scale. What’s striking is that I didn’t change the course content or design, and that the delivery method (online or face to face) was immaterial. All I did was make a small change to the experience of the course, and that change mattered. I believe we deliver experiences rather than content, which is why I like running small experiments like this one, where the content remains largely unchanged.

Do you think these experiments are “cheating”? Have you tried something similar? Are we really hacking course surveys? If you obtained similar results, would that change the way you think about measuring learning effectiveness?
