How to design software training, part 1: Do everything except “train”

“How can I design training for new software?” Maybe training isn’t even necessary. Let’s look at some alternatives.

By Cathy Moore

As an Amazon Associate, I earn from qualifying purchases.

"How can I design good training for new software? What's the right balance between help screens, job aids, and training?"

That's a common question, so let's dig into it.

Here's part 1, in which I say, "Try everything but training." We'll cover needs analysis, job aids, help screens, and the radical idea of making the software easier to use.

In part 2, we'll look at how to design practice activities for when the everything-but-training approach isn't enough.

1. Justify your work with a measurable goal

Action mapping begins with identifying how the organization will benefit from the project. The goal justifies the existence of the "training" (and of your job).

Here's a template:
[Image: action mapping goal template]

To identify the measure for new software, you might ask, "Why did we buy this software? What problem will it solve? How will we know it worked?"

For example, if the organization is installing a new customer relationship management (CRM) system, why? Have potential new customers been slipping through the cracks? If so, maybe your goal is this:

Sales to new customers will increase 5% by [date] as all sales reps correctly use NewCRM to identify new prospects and build relationships with them.

If your stakeholders refuse to commit to a business-performance goal, you might try a weaker but still useful type of goal. You could instead measure the strain that new software puts on productivity: if the software is already in place and causing confusion, aim to reduce the number of calls to the help desk.

If you doubt the usefulness of this kind of goal, imagine the alternative. A too-common "goal" is "All sales reps will be trained on NewCRM by [date]." This says, "We don't care if the training actually works. We'll just take attendance. Now give us money."

2. Ask, "What do they use the software to DO?"

List the most common tasks that people will use the software to complete. Also consider what might make each task difficult.

For example, is it obvious how to complete the task in the software? Are people working under time constraints?

This flowchart helps you consider all the angles.

For more on this, see the discussion in "Technical training: What do they need to DO?"

3. View training with suspicion

Many people assume that every new system must be introduced with formal training. But is that always necessary?

"New" doesn't mean "requires training"

Just because the software is new doesn't mean that people need to be trained on it. The likelihood that training will be necessary depends on a lot of things, such as:

  • How different is the software from what they're using now?
  • How tech savvy are the users?
  • How complex are the tasks that the software is used to complete?
  • How horrible is the outcome if someone screws up when using the software?
  • How clumsy is the software interface?
  • How much help is built into the software?

"Hard to use" doesn't mean "requires training"

If the software is clumsy and its help system is unhelpful, that doesn't mean that you have to develop training. It means the software should be made easier to use.

L&D staff are often surprised to discover that they can request changes to software -- but they have to ask. Don't assume that it's too late to change anything.

If the software is from a third party, making it easier to use would help the vendor's sales. If it was developed internally, there's no excuse for refusing to improve it. Clumsy software hurts performance.

Make a list of changes that will reduce the need for training. Take screenshots and scribble on them. Write the help blurbs that are missing. Point out where there are too many steps.

"They won't change it" doesn't mean "requires training"

If the software is hard to use and the developers have rejected your requested changes, that still doesn't mean that formal training is your only hope. How about some job aids?

4. Try some job aids

A job aid is a reference that gives you just enough information. It can be a piece of paper, a page on the intranet, a short how-to video, a help screen, or anything else.

We can use Moore's Machete of Awkward Oversimplification to divide software job aids into two groups.

Type 1: Task-based job aids

These are handy guides that quickly tell you how to complete job tasks using the software. Some examples:

  • A short article in a knowledgebase shows you how to record a partial refund in the accounting software.
  • A quick video shows you how to create a template for your marketing emails.
  • The built-in help system highlights the commands you need to use to escalate a customer complaint in the CRM.
  • An extensive, structured document helps you install WordPress, as deconstructed by Dave Ferguson in his handy site about job aids.

These are all aids that help you complete tasks. They aren't the painfully ubiquitous tour of the menus or alphabetical list of all commands.

Type 2: Code and command references

If users need to type in codes or non-intuitive commands, group the most common ones in a quick reference. As above, organize the commands by what they achieve rather than listing them alphabetically.

Let people choose how much information they need

Don't force-feed everyone with information in your job aids. Let experts skip ahead while novices read deeply. Here's an example from Dave Ferguson of a job aid designed to satisfy both groups.

Help screen or job aid?

A common practice is to use a help screen to give a quick overview of how to complete a small task. For longer tasks or more detail, the help screen could contain a link to a video, knowledgebase article, or other reference.

People should be able to use the software and view the reference at the same time. For example, a reference that opens to the side is way more useful than one that opens in the same window as the software and blocks it.

For a lot more about job aids from real job aid experts, see Job Aids and Performance Support by Allison Rossett and Lisa Schafer.

5. Test your job aids before designing training

Test your job aids on a sample of future users and tweak them as necessary.

If it looks like the job aids alone will get people up to speed, release them into the wild. Tell everyone where they are, make them super-easy to find for the people who missed the memo, and provide a quick feedback mechanism so users can tell you how to improve them.

Let the job aids do their thing for a while, and then check the measurement in your goal. Has it improved? Also, have managers reported better use of the software? Has the help desk seen a decrease in the number of calls? If so, you might be done. You can make sure you're done by using Robert Brinkerhoff's Success Case Method.
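If you track the goal's metric over time, such as weekly help-desk call counts, the "has it improved?" check can be as simple as a percent-change calculation. Here's a minimal sketch in Python; the function name and all the call counts are made up for illustration, so substitute your own data.

```python
# Minimal sketch: comparing a goal metric before and after releasing job aids.
# All numbers below are hypothetical; substitute your own help-desk data.

def percent_change(before: float, after: float) -> float:
    """Percent change from the baseline period to the follow-up period."""
    return (after - before) / before * 100

# Hypothetical weekly help-desk calls about the new software
calls_before = [120, 115, 130, 125]  # four weeks before job aids released
calls_after = [95, 88, 90, 82]       # four weeks after

avg_before = sum(calls_before) / len(calls_before)
avg_after = sum(calls_after) / len(calls_after)

change = percent_change(avg_before, avg_after)
print(f"Average weekly calls: {avg_before:.0f} -> {avg_after:.0f} ({change:+.1f}%)")
```

A drop like this is encouraging, but as the comments below note, it's a correlation, not proof of causation, which is one reason to follow up with something like the Success Case Method.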

In part 2, we'll look at what you might consider if you decide formal training is necessary.


12 comments on “How to design software training, part 1: Do everything except ‘train’”


  1. Thanks for your kind words about my job aid examples, Cathy.

    People sometimes think of job aids as nice-to-haves, like cloth napkins or good choices in a snack machine. Software vendors will assure you, “We’ve got lots of user guides.” Then you discover that the “how to sign on” guide is fifteen pages long and written as if the reader has never seen a computer.

    When you focus on what people need to do, you can then ask whether they need to memorize X in order to get [whatever] done. “Memorize” is a good word here–much drearier and more obvious than “learn.” Nobody needs to memorize the steps for installing WordPress.

    1. Wow, that’s a good point! I’m going to experiment with changing “learn” to “memorize” when trying to make a case for focusing on what people need to do.

    2. “Do they need to memorize X to get [whatever] done?”

      Thank you for this. This is something I’ll use.

  2. We’ve been following your approach for quite a few years, and we’ve designed our cloud solution to match what you’re preaching:

Software training is tough. Many people we’ve consulted have shown us slide decks with 150 slides that go over the history of Salesforce, why Salesforce was purchased, and so on. And many job aids we review are super long, with lots of background and caveats. Getting it right is a fine balance.

  3. Thank you Cathy! You always have creative and fun ways to look at everything!
    Helpful tips and inspiration as always.

  4. I’m sorry but I have to disagree with this as a valid goal: Sales to new customers will increase 5% by [date] as all sales reps correctly use NewCRM to identify new prospects and build relationships with them.

    Essentially, you’re saying ‘Sales to new customers will increase 5% by [date]’ – that’s what you’re measuring.

    So the idea is: We had this intervention/training | Sales to new customers increased by 5% | Therefore the increase was due to the intervention/training. Unfortunately, correlation doesn’t mean causality.

    Let’s say someone comes along and decides to start using trade tariffs, or there’s a change in currency rates, prices go up, or any number of other reasons. You could end up with:

    We had this intervention/training | Sales to new customers DECREASED by 5% | Therefore the decrease was due to the training.

    In order to show causality, there needs to be a direct link between the result and the cause.

In the scenario, new leads were slipping through; the way that was measured would be a more accurate place to start, IMHO.

    So perhaps that’s one reason stakeholders might refuse to commit to this type of business-performance goal.

    I appreciate it’s an imaginary situation, and that you state ‘a measure you already use’, but if it isn’t a real measure/indicator, then any results are likely to be misleading.

    1. Andy, the point of the training is to get the sales people to use the new CRM to do X. The result of them doing X is, HOPEFULLY, an increase in sales. No one assumes that these changes to business performance happen in a perfect vacuum.

      As I explain in my book, it’s common for people to say that the goal will “contribute to” an increase in whatever. I also explain that it can help to have additional “closer” measures such as the number of new leads slipping through.

      However, for L&D to be able to justify their existence, they do need to at least attempt to connect what they do to the success of the business.

      1. Cathy, if ‘the point of the training is to get the sales people to use the new CRM to do X’, that is what you are trying to achieve, and that is your objective – what you can measure. An outcome from that might, or might not, be an increase in sales, but it’s not measurable. As I mentioned above – correlation does not mean causality.

        The additional measures you mention are what I would call objectives – which are measurable. The objectives feed into the goal.

  5. TRAINING: Part of Change Management

If the software is designed for business processes, such as sales force automation, support, HR processes, project management, etc., it should be configured to enforce the process. Training should then address both the use of the software and the new process, helping users understand how the software supports the processes that will help them and their organization with their work.

    There can be a lot of resistance to new software. And for business process software, often, if everyone is not on the same page about how and WHY to use it, you won’t get user adoption, and it won’t matter how great the new application is, the business reason for the new software won’t be realized.

    Done correctly, roll-out software training is part of a Change Management strategy that inherently deals with the emotional component of having to learn something new. And, the facilitator should always have the “Parking Lot” for questions that can’t be answered in class and suggestions for improvement to the software or configuration.

This is a great post, Cathy. I’ve had many variations of this conversation in the last few years, and this article will help me make these conversations more meaningful.

WHEN should we train? Before the software is implemented? How long before? After they’ve used it for a bit?

    What are your thoughts!?