Feedback in elearning scenarios: Let them think!

Do we really need a know-it-all Omniscient One to explain everything to our learners? Or can we trust them to draw conclusions from the results of their choices?

By Cathy Moore

You’re at the county fair. Your kids are off watching the pig race, and you’re starving. There are only two food carts nearby. One sells deep-fried pork skins from a pot of bubbling grease, and the other sells sushi from a styrofoam cooler. You decide to buy the sushi.

As you hand over your money, a disembodied voice suddenly booms from the clouds above. “Incorrect!” it intones. “Unrefrigerated sushi can harbor zygodread, which can cause severe vomiting. You should never assume that a cooler at a county fair contains ice. It’s always safer to buy hot food that’s cooked in your presence, such as the pork skins. Try again.”

You’ve just met The Omniscient One. It’s the personality-free know-it-all that drones through most elearning. When it intrudes into decision-making scenarios, it sucks the life out of our stories and the brains out of our learners.

“I know everything, and you have no brain”

The Omniscient One (the OO to its friends) is a big fan of telling feedback, because it knows everything. It not only tells us whether it approves of our choice, it also explains exactly how we have sinned and what we must do to atone. Like the folks in your legal department, it believes that no adult can be trusted to draw even the simplest conclusion on his or her own.

An alternative: show the result

In the real world, we’d remember the sushi lesson best if we ate the sushi and then spent three very unpleasant days. In elearning, you could call this showing feedback because, well, the elearning shows (or at least describes) the results. The feedback isn’t a pronouncement from on high but is instead something like this:

Six hours after you eat the sushi, you begin vomiting. Three days later, you finally stop. Your doctor explains that the sushi was probably poorly refrigerated and contained zygodread. “You should have bought something hot that was cooked right in front of you,” he says as he hands you the prescription for an expensive antibiotic. “Was there any pork skin? I always buy that if it comes right out of the fryer.”

We’ve described the result in a memorable way. We’ve even snuck in some telling feedback, but it comes from a person whose role in the story gives them the right to lecture us, not from a preachy disembodied voice.

With this small change, we’re letting people learn from somewhat realistic experience, and the more realistic and vivid we can make the experience, the more likely they are to remember it.

What about correct choices?

Let’s say that instead of choosing the sushi, you go directly to the pork skins. How will you learn that you made the right choice if the Omniscient One doesn’t tell you? Let’s compare approaches.

Telling feedback: Correct. The pork skins are less likely to give you food poisoning because they’re hot and are cooked in front of you. The sushi might not be properly refrigerated and could harbor zygodread.

Showing feedback: While you enjoy your hot, crispy pork skins, you hear a young woman tell her friend, “There’s no way I’m buying that sushi again! Last year there wasn’t ice in the cooler, and the sushi was full of zygodread! I’ve never been so sick in my life!”

Here’s another example. The learner has just tried to stop a (fictional) speeding forklift by pressing the red button on its steering wheel.

Telling feedback: Incorrect. The red button on the steering wheel sounds the horn.

Showing feedback: The forklift sounds a cheery “toot!” as it continues to speed toward the plate-glass window.

Which are you more likely to remember?
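If you build scenarios in a tool that lets you script, one way to keep the Omniscient One out is to store a consequence with each choice instead of a verdict. Here’s a minimal sketch in TypeScript; the data shape and the second forklift choice are invented for illustration, not taken from any particular authoring tool:

```typescript
// Hypothetical data shape: each choice carries a story consequence
// (showing feedback) rather than a "Correct!/Incorrect!" verdict.
interface Choice {
  label: string;        // what the learner clicks
  consequence: string;  // what happens next in the story
  isBest: boolean;      // tracked for branching and scoring, never announced
}

// The forklift example. The second choice is invented for illustration.
const forkliftChoices: Choice[] = [
  {
    label: "Press the red button on the steering wheel",
    consequence:
      'The forklift sounds a cheery "toot!" as it continues to speed toward the plate-glass window.',
    isBest: false,
  },
  {
    label: "Release the accelerator and steer away from the window",
    consequence: "The forklift slows and coasts to a harmless stop.",
    isBest: true,
  },
];

// The engine's only job is to show the result; the story does the teaching.
function respondTo(choice: Choice): string {
  return choice.consequence;
}
```

Because nothing in this structure ever says “Correct” or “Incorrect,” the writing of each consequence has to carry the teaching, which is exactly the discipline we want.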

For more on this, see this post about eager-beaver feedback and How to rewrite a quiz question as scenario-based training.

But what if a stakeholder doesn’t trust our learners?

If you try to use “showing” feedback, a stakeholder is likely to worry that people won’t be able to draw conclusions. They’ll insist on telling learners exactly what is right, what is wrong, and why. You could try proving to this person that learners can think for themselves: give some prototyped activities to actual learners, and then have those learners explain to the stakeholder what they concluded from the activities.

Or, you could give up and include the telling feedback, but only for the correct answers. It could work like this:

1. The learner makes a sub-optimal choice and sees the unfortunate result, with no explanation from the Omniscient One. They’re required to try again until they make the correct choice.

2. On their second try, the learner makes the best choice. They see the happy result, and then the Omniscient One chimes in to explain why it was the best choice and what was wrong with all the other choices.

If you use this approach, everyone will first see realistic results and draw their own conclusions, and then they’ll have those conclusions confirmed (or maybe corrected) by the telling feedback. I resist this approach because it’s still control-freaky, but it can be useful because it reassures stakeholders and the legal department.
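In code, this compromise is a small loop: show the consequence on every try, and hold back the telling feedback until the learner lands on the best choice. A rough TypeScript sketch, with all names hypothetical rather than any authoring tool’s real API:

```typescript
// Same hypothetical Choice shape as the earlier sketch.
interface Choice {
  label: string;
  consequence: string; // showing feedback: the result in the story
  isBest: boolean;
}

// Delayed telling feedback: explain every option, but only after success.
function explainAllChoices(choices: Choice[]): string {
  return choices
    .map(c => `${c.label}: ${c.isBest ? "best choice, because..." : "risky, because..."}`)
    .join("\n");
}

// pick() stands in for however the learner selects an option on screen.
function runScenario(choices: Choice[], pick: (options: Choice[]) => Choice): void {
  for (;;) {
    const choice = pick(choices);
    console.log(choice.consequence);           // every try: show the result
    if (choice.isBest) {
      console.log(explainAllChoices(choices)); // only now does the OO chime in
      return;
    }
    console.log("Try again.");                 // no lecture on the wrong tries
  }
}
```

The point of the design is that explainAllChoices runs exactly once, after the learner has already seen the realistic results and had a chance to draw their own conclusions.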

In Leaving Addie for SAM: An Agile Model for Developing the Best Learning Experiences, Michael Allen and Richard Sites refer to the two types of feedback as consequences and judgments. They suggest that if you use judgments, you should delay them. “Judgments offered too quickly cheat learners of the opportunity to determine for themselves if they are making good choices,” they write. I’m also concerned that instant judgments can be seen by learners as an insult to their intelligence and could turn them into resentful “just-click-it-to-get-it-done” foes of elearning.

Let them think for themselves, if only for a few seconds!

19 comments on “Feedback in elearning scenarios: Let them think!”

  1. This was the perfect reminder blog post for me. Let’s change the scenario example up a bit:

    What if you’re trying to teach someone how to use software? I’m using a rich scenario element to put the software learning experience in context, and it has a good representation of most of the other elements of instructional interactivity (challenge, activity). But how do you apply “show me” feedback when you’re trying to teach someone to get through a series of explicitly defined steps in a software application? If they’re allowed to click on anything and see the result, they might never figure out how to actually complete the step. Also, the effort necessary to build something that allows clicking on anything would be immense.

    In this case, would you suggest limiting the “show me” feedback to specific key decisions that need to be made in the software? For instance, I might have to click on 5 things in the interface to get to the final location where I will be asked to enter a value or make a key selection. If the wrong selection is made at this “final location,” the learner could be allowed to proceed and, soon after making the wrong selection, see the train wreck that occurs.

    The vexing part is how to show them the impact of their decision while also helping them see the specific decision that caused this undesirable result. In other words, they could have made so many decisions by the time they reach this “final location” that they will likely not remember what caused the train wreck.

    Any ideas?

    1. Dan, thanks for your interesting comment and question. As you point out, in a realistic software simulation or scenario the learner could click on any of a bajillion options and waste a lot of time before figuring out that they did something wrong somewhere. They also might never recognize a bad result if they have no idea how the program is supposed to work.

      I’ve seen some ways to limit choices but still keep the experience halfway realistic. For example, you might look at the medical software simulation that’s located at the bottom of the portfolio page for Allen Interactions. They include lots of optional help to make the huge array of choices less overwhelming, and they remind you of the next step in the process if you can’t think of it. However, if I remember right, the feedback for an incorrect choice doesn’t show the consequences.

      Scaffolding (doing a decreasing amount of the work for the learner) can also help reduce the amount of aimless clicking. For example, if our goal is to help learners see how to do a specific type of calculation in Excel, we can have a fictional character do everything that leads up to the steps where errors are most likely to occur.

      In the first activity, the fictional character can do everything right, including selecting the correct cells, and then the learner just has to choose or enter the correct calculation. The “showing” feedback could then describe the business disaster that resulted from the incorrect calculation, and the learner could try again. We could offer optional help to reduce their frustration.

      In the next activity, we could present a similar but slightly different problem, and this time the fictional character does everything up to the point of selecting cells. The learner then has to both select the correct cells and enter the correct calculation. Again, showing feedback will let the learner conclude whether they’ve screwed up, but in this case we’d probably also need to tell them where they went astray (“You didn’t include column B, which…”).

      Subject matter experts who are familiar with how people are currently doing the job, as well as people who recently learned the task, can tell us where people are most likely to screw up, so we can focus our simulations on just those steps.

      I hope this helps!

      1. Thanks Cathy. Yes, I am very familiar with that Allen Interactions example. It is a well-done example; I just wish the feedback did a better job of showing scenario-themed consequences.

        Thank you for the spreadsheet example. That validates what I was thinking, which is very useful to me based on where I am with the project right now.

        I think there is an important opportunity to use “show me” feedback in the software simulation that I’m presently working on. I have been made aware of the places where the crucial decision points occur. Now all I need is the will to scrap some work and build this kind of feedback experience in its place.

      2. Thanks so much for the information regarding software training. It is always a challenge to breathe life into this type of course. Love the sushi scenario BTW – hilarious!

    2. Interesting point, Dan. I struggle with the development of software training, and I’ve come to think that the best option is to create some simple, short, yet polished video tutorials with voice (I like a natural, amicable tone) showing your audience what you’re doing, letting them pause, rewind, or move forward as they need, and then letting them practice, make errors, and discover things in the real program. That’s how I’ve learned many interesting features in PowerPoint and other programs. Here are many examples of this kind of tutorial: http://learningppt.com/category/beyond-powerpoint/.
      In the case of administrative and database software, I’ve noticed that some organizations have a “testing environment,” which is a good option for users who want to experiment before entering data in the real environment. Regards.

  2. Great post!
    I like some of the intentions I perceive behind this:
    – elevate the learner (instead of the opposite!)
    – suck more juice out of the ‘fear of failure monster’
    – convert what could be just a piece of mechanics into a learning interaction (a good mission to be on throughout all engagement with the learner, before, during and after the learning event)

    Thanks Cathy

  3. Hi Cathy,

    Your examples of multiple-choice feedback are great, and they were really enjoyable to read! How would you provide feedback on short-answer questions that may or may not be marked? What is your opinion on short-answer questions and when they should be used? I think they’re useful because they get the learner to generate the ideas and, if there’s a word limit, to be precise too; however, it’s hard to give immediate feedback (I’m thinking of university-type assessments). Do you think a model answer is OK, given that learners should already have thought hard about the reasoning? I think one problem with multiple choice is that sometimes (especially after pages and pages of it) the learner will stop reading the feedback and just click the other option to be able to continue. I agree with your preferred form of feedback (giving it with every wrong answer); after all, how many students would read the feedback if they already got it right?

    Many thanks,
    Jun

  4. Well, this is all about feedback, but when stakeholders put their heavy thumbprints on our courses, it does not start with feedback. It starts earlier, with INSTRUCTIONS.

    And I can easily predict that the demands for feedback will be redundantly tedious (or tediously redundant) if, starting from the very first page, I hear this:

    Can we make the Next button larger
    … and brighter
    … and flashing colors
    … and have a Next caption
    … and a rollover
    … and have the CLICK NEXT TO CONTINUE instructions…
    … no, move it closer to the button
    … and add a pointing arrow
    … and make it flash

    What it really comes down to is this: in the course of creating a training module or a simulation, stakeholders often forget the original purpose of educating the user. Instead, they want to baby the users and hand-hold them so that there won’t be any complaints about how hard these trainings are to take or how confusing all these buttons are.

  5. Cathy,

    Your post reminded me of an ASTD presentation I attended, and a book I read, years ago as a new instructional designer: Telling Ain’t Training. The OO is a teller. I have spent the last seven years supporting the DoD and have not developed e-learning in quite some time, but the idea of showing instead of telling is just as pertinent in instructor-led training as in e-learning. When I sit down with SMEs to analyze training, I often talk about letting students make mistakes, or about using scenarios where mistakes were made as discussion prompts. I have found that making mistakes is often a better teacher than always succeeding, because the learner not only learns what not to do but also learns how to turn a perceived failure into a success.

    Thank you for your post and the opportunity to join in the discussion.

    Tim

  6. Hi, Cathy. I am in the process of creating my first eLearning activity since I graduated over a year ago. Lack of confidence being the biggest obstacle, I’m just starting to work on the mini-scenarios. It’s a software training, but I’m creating it to work more like a mega-job aid based on another post you wrote. This process is helping me achieve my goal of modeling tech integration into my PD activities, because I have to think like a teacher, trainer, and student at different points. As a classroom teacher for 13 years turned trainer, I didn’t really connect my teaching experience to ID, but I’m finding that it really helps to have the benefit of all of these viewpoints as I’m creating activities. Exciting stuff! Thanks for this blog.