
Do they just know it, or can they USE it?

By Cathy Moore

It’s easy and tempting to write activities that test whether learners know something. How can we make learners use their knowledge as well?

You might be familiar with Bloom’s Taxonomy. Its current form identifies six categories of intellectual performance, from remembering to creating.

To make the taxonomy easier to apply, I grabbed my Unsubtle Machete of Oversimplification and in a few whacks reduced the categories to just two:

  • Know activities ask learners to retrieve and maybe categorize or explain information.
  • Use activities ask learners to apply information to realistic situations.

Often, a “use” activity includes a test of whether the learner “knows” something — you get two activities in one!

Example

Your learners create widgets. To speak with their coworkers, they need to know some technical terms. One term is “transmogrification,” which means modifying a widget so it will work at high altitudes. What can we do to help learners master this term and the related concept?

Know activity: Drag the term to its definition — drag “transmogrify” to “modify a widget so it will function at high altitudes”

Use activity:

Your client wants to use their widget at 2800 meters above sea level. What modification do you need to make to the widget?

a. Transmogrify it

b. Redorbinate it

c. Neoplyordinize it

d. No modification needed

The “use” activity tests whether the learner can apply their knowledge of transmogrification in a realistic situation, not in an abstract definition activity. At the same time, it answers three “know” questions for us. It tells us whether the learner knows that:

  • 2800 meters is officially “high altitude”
  • You need to modify widgets for high altitudes
  • The necessary modification is called “transmogrification”

Of course, if these bits of information are crucial or frequently misunderstood, we’ll want to have more questions or activities to reinforce them. Also, our feedback goes beyond “correct” or “incorrect” to show the consequences of the learner’s choice and reinforce stuff some more.
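
To make this concrete, here's a minimal sketch of how a question like this might be represented in a quiz engine, with consequence-style feedback for each option. The structure, field names, and feedback wording are all invented for illustration, not taken from any particular authoring tool:

    # A minimal, hypothetical sketch of a "use" question with
    # consequence-based feedback. Field names and feedback wording
    # are invented, not from any particular authoring tool.
    question = {
        "scenario": "Your client wants to use their widget at 2800 meters above sea level.",
        "prompt": "What modification do you need to make to the widget?",
        "options": [
            # Feedback shows a consequence, not just "correct"/"incorrect".
            {"text": "Transmogrify it", "correct": True,
             "feedback": "The widget works flawlessly at altitude."},
            {"text": "Redorbinate it", "correct": False,
             "feedback": "The widget sputters and fails at high altitude."},
            {"text": "Neoplyordinize it", "correct": False,
             "feedback": "The widget sputters and fails at high altitude."},
            {"text": "No modification needed", "correct": False,
             "feedback": "The client reports the widget won't run at 2800 meters."},
        ],
    }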

If you write strong “use” activities, you don’t need to write “know” fact checks at all.

Have I been too brutal to Bloom? Our focus here is on corporate training, where the goal is usually immediate application in complex situations. I’m not convinced that we need to minutely examine whether our activity requires “analyzing” or “evaluating.” By simulating complex, real-world situations, we can’t help but cover several Bloom categories.

I’ve also seen Bloom interpreted as “Write learning objectives using terms like ‘define,’ ‘identify,’ and ‘compare,’” which tends to inspire abstract thought-juggling activities rather than real-world applications. For a rant about objectives, see Why you want to focus on actions, not learning objectives.

Photo by SOGKnives



29 comments on “Do they just know it, or can they USE it?”


  1. Yet another awesome post. And now I am wondering why, for most things, you need the *know* part at all. Why not whack that one off as well, since, as you said, the *use* activities draw on the *know* as well? Sometimes we ask our authors to come up with a *use* activity for every piece of *know* content; whether or not they actually put that activity in, it is a powerful way to refine their TOC… if they cannot figure out a *use* activity, it suddenly becomes tough to justify why the topic is in there at all. Sadly, with technical books (and in my earlier corporate ID and training life), that often means as much as 50% of the content.

    We tried to implement 100% use-case-driven development to help avoid the pain of cutting later… “if you do not have a good use-case, the topic either does not matter OR you simply do not understand the practical use.” Of all the things that have shocked/surprised me as both a technical author and a course developer (at a very big company), it was discovering how much “content” was being “covered” for no apparent practical or useful reason. With technical books in particular, or anything on a topic that has been around for a while, there is a staggering amount of inertia for topics… they are included because everyone else included them. Or because they are part of the technical specification.

    I do understand the need to sometimes include things that go beyond what can be put into a useful activity, but going through the exercise of forcing a topic to defend its right to exist is a powerful lesson for everyone.

  2. Kathy, thanks for your comment. I agree that we don’t need separate “know” activities for most stuff. And I love your approach of requiring authors to come up with a “use” activity for every bit of content to avoid major surgery later. We can extend that to the entire course or project–it has to justify its own existence, or we’re not going to waste time creating it.

  3. What I find most interesting is that learners naturally HATE those “know” types of activities. Providing specific dates when events happened and identifying the correct definition of a term just come off as busywork to the majority of students. Some of the most enjoyable classes I’ve ever taken can attribute much of their success to the “use” mentality they focused on.

    The “Use” activities provide context to outline why we need to know certain details.

    1. Bryce, thanks for your comment. I agree that the best classes were the ones that applied what we were learning to realistic projects and put our new knowledge in context.

  4. I’ve been tempted to defend including drill-like “know” activities in language learning in addition to more complex “use” activities. However, as I look at my own language learning, the main appeal of a vocabulary-memorizing drill is that I can get through it quickly and feel a sense of accomplishment: “Ooo, I must be good at Spanish because I translated 9 out of 10 words correctly in my flashcard app.” That’s motivating, but I get the best value (and most realistic measure of my accomplishment!) from the more complex activities.

    For example, it’s easy to respond to one word, like “armpit,” with its Spanish translation. It’s much harder to translate, “I was surprised when the nurse put the thermometer in my armpit.” And it’s the second activity that’s more likely to be useful, because it reinforces the grammatical pattern of “I was surprised when X did Y” in addition to testing my armpit vocabulary.

    Building complete, realistic sentences like that uses a bajillion percent more brain. It also helps fight the biggest problem that I and all the language learners around me have: We can understand almost everything we hear, but we have a heck of a time talking.

  5. I built this simple diagram a while ago to begin to transform the view of Bloom’s Taxonomy and encourage its use for something other than a verb-dart target…
    http://www.xpconcept.com/progression2.jpg

    The diagram liberally reorganizes the taxonomy into two “zones,” similar to your great illustration above. The higher-level cognitive calls to action are also organized in a different complexity hierarchy than Bloom’s, as I’ve assumed that in most cases, to analyze you need to comprehend. To synthesize, you need to have some analysis skills. And having evaluation skills without the ability to synthesize would be hollow. Each of the higher-order categories contributes to some sort of action / result – the thing that matters and creates meaning.

    This is similar in some ways to the recommendations offered by Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses (L. Dee Fink). This is a good read for ISD types even if the results aren’t college courses.

    On the subject of know vs. use and why we might focus on acquisition of facts or concepts independently of a concrete activity… I think it depends on the context. In many cases, rote fact acquisition and abstract concept representation can be really useful to form a skill foundation provided that it’s hammered home in context with a relevant activity. When you think about it, are skills and knowledge really separable in the end?

    We’ve used this diagram to represent learning foundations for simplified skill chaining. This isn’t accurate in all cases, but it does provide a pretty good build hierarchy and works as a reminder.
    http://www.xpconcept.com/learning_foundations.jpg

    1. Steve, thanks for linking to your diagrams. I like how you make clear where enabling and terminal objectives apply and how you show concepts as the basis for skills, tasks, and performance. By making distinctions like these, we can more easily recognize when we’re spending too much time in the “concept” layer and assuming that knowing the concepts will naturally lead to performing the task.

  6. I attended an instructional design course in the ’80s by Ruth Colvin Clark in which she used precisely this Know/Use distinction for setting learning objectives, going across a matrix of Fact, Concept, Procedure, Process, and Principle. Both levels apply to them all except Fact, which you can only ‘Know.’ She credited David Merrill with it. I think it’s well worth promoting again, as you’re doing here.

    As for the question ‘Do they need “know”?’ – in some situations they do, when what they require is declarative knowledge. For example, if their job involves explaining things to customers, they need to be able to state correct definitions clearly. Where a lot of elearning design goes wrong is in thinking that a single multiple-choice question a few screens after a bunch of facts are presented proves that the ‘Know’ level has been achieved. As anyone who’s tried to memorise information knows, you need lots of varied repetition over time. That’s something a 40-minute elearning course doesn’t do well.

    1. Norman, thanks for your comment. I’ve seen the same fact, concept, procedure, process, and principle breakdown used in other places as well. I’ve also seen the “know” and “use” (or “understand” and “apply”) dichotomy in several variations, though I wasn’t aware that Ruth Clark also uses it in her training materials.

      According to Don Clark’s site, Merrill’s Component Display Theory uses three categories: remembering, using, and finding, and applies them to facts, concepts, procedures, and principles.

      I agree that if learners do need to be able to recite facts on the job, they need more than one multiple-choice question to help them practice that. I’m not sure why Ruth Clark (or David Merrill?) would say that we can only “know” facts and that there can’t be a “use” component to factual knowledge, even if it’s as trivial a use as repeating the fact to a coworker, but I might not be understanding their definition of “use.”

      1. Out of concern that I could have based the table too closely on Ruth Clark’s materials when I wrote it in 2008, I’ve now looked at every page of the two Ruth Clark books that I own, Elearning and the Science of Instruction and Efficiency in Learning. I found definitions of the five types of information (fact, principle, etc.). At one point in Elearning and the Science of Instruction, the authors say that there are two goals for elearning, “inform” and “perform,” but I couldn’t find a table bringing everything together, showing inform vs. perform across the five categories or comparing the two types of activities. I haven’t been through Ruth’s training, so I don’t have the materials she uses.

        Google searches brought up a much bigger table on Don Clark’s site that matches all six of the Bloom categories to the five fact-to-principle categories, which is more likely the inspiration for the table I’ve published. His table lists generic activity types, not specific examples, and has 30 cells, since he’s covering all of Bloom.

  7. She shoots – she scores.
    Thanks, Cathy, another great summary. I have just spent a couple of days debating pretty much exactly this with a client.
    A similar “litmus test” I once used was: “If you had to present these objectives to the Financial Director, the Operations Director, and the CEO, would they nod and murmur approvingly, or fire you for wasting their training budget?” It’s a quick and easy but very “real” test, as their roles are almost always linked to business change brought about by “doing” rather than knowing.
    Thanks again.
    Bruce

  8. Gr8 post, Cathy, as everyone has observed. I come to ID work from a different background than many: a degree in experimental psychology. My preference is to make instructional objectives that are BEHAVIORAL, i.e., your “acts,” not “knows.” In a recent CBT project for the US Army, I created (with help from my team) a final assessment that contained both “knows” and “actions.” For the actions, we devised a scenario for a day’s workflow in which the user, a clerk, performed 10 tasks in the system. I frankly could not conceive of a valid assessment in any type of organizational or corporate training that did not contain performance-based components.

    The higher, i.e., analytical/synthesis levels of Bloom’s Taxonomy seem more useful in Education broadly defined than in Training, which is primarily use-case or task oriented.

    Good discussion also. Thanks to everyone.
    Ed

  9. Real-world scenarios are so often ignored, even though they are the most important goal of many courses. Action-oriented goals and assignments also make the content a lot more interesting, I’ve found, for both the instructor and the students.

    Great post… you hit the nail on the head.

  10. Hi Ed,

    I think it’s really important not to get too hung up exclusively on overt behaviors when exploring measurement and practice opportunities for self-paced eLearning solutions. I’ve seen plenty of folks get wrapped around the axle trying to interpret a compound physical behavior manifestation into something that’s measurable in a virtual environment. It’s a rough and rocky leap from observable behavior to digitally measured practice when the aggregate performance includes things that can’t be reasonably measured or practiced with any precision in the solution.

    It’s true that many actions *can* be measured through the computer. But really, what we’re measuring is the outcome of a network of covert sub-tasks. For this reason, I think it’s often better to consider breaking overt behaviors down into sub-tasks that include covert behaviors for these types of products.

    Take this example:
    Safely energize the XYZ system. (Application: Operate)

    While I think this task can be emulated with some precision in a practical exercise, there’s more at play here: more that can be measured (thinking and decision sub-task “molecules”) and more benefit we might provide to the learning experience by focusing on those covert molecules of performance rather than the compound level (to use a chemistry analogy):

    – Evaluate the current system state. (Evaluation)

    – Recognize hazards to safety and equipment associated with the light-off procedure for the system. (Comprehension)

    – Recall each step in the process of energizing the system. (Knowledge)

    – Interpret normal and abnormal system state feedback / warning after each step of the energizing task and take appropriate action. (Application)

    What folks may find when a task is broken down into sub-tasks is a variety of criticality weights. Some covert tasks may contribute more weight to the speed or precision of the task than others. This is one of the many reasons I believe that uncovering these covert tasks is critical to successful eLearning design. Otherwise, it just seems like we’re abstracting an observable behavior (“hey dad, watch me…”) using leaps and guesses. I think the digital environment is really well suited for safe practice of these “thinking tasks”. I don’t consider rote memorization and recall a significant learning event, but making a decision or chain of decisions and receiving direct or indirect feedback is.
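
    As a rough sketch, criticality weighting might look something like this in a scoring model (sub-task names and weights invented for illustration):

        # Rough sketch: scoring a compound task from weighted covert
        # sub-tasks. Sub-task names and weights are invented.
        subtasks = [
            ("evaluate current system state",    0.30, True),
            ("recognize light-off hazards",      0.40, True),
            ("recall energizing steps",          0.10, False),
            ("interpret state feedback and act", 0.20, True),
        ]
        # Sum the weights of the sub-tasks performed correctly.
        score = sum(weight for _, weight, passed in subtasks if passed)
        print(f"Weighted score: {score:.2f}")  # 0.90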

  11. Post = Awesome. Resulting Discussion = Awesome.

    There definitely is a difference between designing for on-the-job tasks vs. college calculus, but one thing I have noticed in the past is this preoccupation with Bloom’s Taxonomy in LO design. I have worked in environments where, instead of truly assessing the needs/goals of the course, we were wordsmithing learning goals (using very impressive action-verb tables!) to look and sound interesting and impressive.

    I think the taxonomy is a good reminder of where you want to go, but sometimes its hierarchical setup makes one feel like their training is inferior if it remains on the “lower rungs”… I like the concept of it being less of an achievement pyramid and more of a “zone of mastery” setup. My point is that it is easy to focus on what the training says about “us” instead of what it actually helps people accomplish.

    The discussion here is top notch… I have learned a lot from it.

    Take care,
    Anna

    1. I call that activity “Bloom’s Verb Darts” and I too have seen this preoccupation with the verb menu far more often as a first gate than a true needs analysis or pre-design analysis.

      Bloom’s is good for job support if you haven’t internalized a structured mental model of tasks and actions. But it’s just a tool. :)

  12. Cathy,
    Bloom’s taxonomy does seem to be showing some cracks now that people are taking a closer look at it – I just spotted this article linked from Clark Quinn’s blog: http://www.performancexpress.org/0212/mainframe0212.html#title3
    The closest thing to a complete model for learning I’ve seen to date is Phil Race’s ‘Ripples’ model of learning (http://phil-race.co.uk/most-popular-downloads/), and it’s what I tend to judge everything else by.
    Thanks for continuing to inspire me!
    James

  13. Too brutal? Maybe not. Another classic post from you Cathy.

    I don’t believe Bloom is showing cracks. We must remember the work was done in 1956 and culture has dramatically changed over the years.

    I simplified it differently as an ID model and deliver it in my own Instructional Design Course.

    I took the first three levels:

    1. Knowledge. All learning has to start with knowledge in its basic form.
    2. Comprehension. We have to understand the knowledge if we are to use it.
    3. Application. How do we use what we now understand?

    Add a motivation for use and, voilà, change has to happen.

    Regards …

    PS. Sorry for any typos. Doing this on the BlackBerry on a bouncy train.

    1. One must also remember that Bloom never intended his hierarchy to be applied and used in the way it is today. It was meant as a mechanism for standardizing assessment in order to save costs at the time, not to be applied without thought to learning events forevermore.

  14. This is great, Cathy. I am dying over here at the terms transmogrify, redorbinate, etc. It reminds me that so often we are over the heads of our learners and many times try to impress them with OUR knowledge instead of allowing them to actually LEARN :-).

  15. Hi Cathy,

    I found your post quite interesting in the way it explores learning as it pertains to Bloom’s and gives a new perspective on it. I believe learning and retention get better when activity, or “doing” experiential learning, is there. Psychologist Bruner’s active-learning concept is apt for e-learning, and students learn better with practical experiences. So basically it is implementing what you “know.” One of the few good pieces of writing I have come across lately.

  16. Hi Cathy,

    I truly enjoyed reading your post. It being a Sunday, part of me wants to stand up and shout “preach on!”, but I will refrain from doing so.

    As a middle school science teacher, I feel that I am always looking for ways for my students to engage in a “use activity.” Many science topics, like finding the best way for plants to grow, making oobleck, or building series and parallel circuits, are hands-on “use activities” that require students to show what they know through experiments. By doing these activities, I can see that my students can meet certain objectives, are learning, and, most importantly, can apply what they are learning. As Ormrod, Schunk, and Gredler (2009) mention, “Information that is meaningful, elaborated, and organized is more readily integrated into Long Term Memory networks” (p. 85). The follow-up challenge is to make sure they can bring this concept into a test-taking environment when they have to answer the “know activity” as well. I find the way you have implemented the use activity in a question format is an excellent way for the student to use the information. I will have to attempt this on the next quiz or test I create.

  17. Cathy,
    I really appreciate your “Unsubtle Machete of Oversimplification” and no-nonsense explanations and examples. By assessing whether or not a learner can use the knowledge, a learning experience can help create long-term memory (LTM).
    The most effective learning experiences will create long-term memory. Simply matching a word like “transmogrify” to its definition is not likely to create LTM. Making information meaningful, elaborating on or building from prior knowledge, and organizing information promote integration into long-term memory (Ormrod, Schunk, & Gredler, 2009).
    In the “use” activity, transmogrify becomes relevant through a likely real-world scenario that connects it to prior knowledge. When asking authentic questions, questions whose answers are created from a combination of perception and action, “an immense amount of information instantly comes together in your brain” (Caine, R. & Caine, G., 2006, p. 51).
    Learning experiences created using complex, real-world situations do Bloom justice while increasing the chances that the learner will retain the knowledge long-term.
    Have you found that creating “use” activities instead of “know” is more challenging? Or is it just changing the way we think about how to instruct and assess the knowledge?
    Thank you again, Cathy, for creating such an accessible post. The Elearning Blueprint examples were equally helpful. I cannot wait to explore more of your blog and the Elearning Blueprint.
    References
    Caine, R. & Caine, G. (2006). The way we learn. Educational Leadership, 64(1), 50-54.
    Ormrod, J., Schunk, D., & Gredler, M. (2009). Learning theories and instruction (Laureate custom edition). New York: Pearson.

    1. Serena, thanks for your question. Yes, I think that creating “use” activities is more challenging than “know” activities, because we have to find ways to include a realistic context and make the options more challenging than simple identification of a fact. At the same time, I think we can cover more content with one “use” question, as shown by the sample question in my post. This means we might end up writing fewer questions overall.

  18. Cathy,

    Being a novice to studying various learning theories and research, I appreciate your simplification of Bloom’s Taxonomy into two areas: know and use. This is a helpful guide for instructional designers and teachers so they can be more effective in meeting the needs and goals of their clients and students, respectively. This simplification helps students develop and improve their metacognitive skills, which ultimately makes them more successful in learning. Learners in the “know activities” process are accessing knowledge (e.g., facts, steps) in their LTM (long-term memory) and organizing it in such a way that they can explain it in their own words for easier comprehension. During the “use activities” process, learners are actually applying their knowledge to real-world situations. One example that comes to mind is a math problem using percentages. Once someone is taught the steps for finding a percentage of something (know activity), they can apply it to deciding whether to buy a shirt that is 25% off or another shirt that is 50% off (use activity). In the know activity, we access our understanding of the proper steps of multiplying using fractions, while in the use activity we apply this information so that we can get the better deal and save more money.
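
    For instance, here is a hypothetical sketch with invented prices, just to make the comparison concrete:

        # Invented prices, just to make the comparison concrete.
        shirt_a = 40 * (1 - 0.25)  # $40 shirt at 25% off -> $30.00
        shirt_b = 50 * (1 - 0.50)  # $50 shirt at 50% off -> $25.00
        print(shirt_a, shirt_b)    # here the 50%-off shirt is the better deal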

    I think this simplification is particularly helpful to students in primary school, where they are first introduced to the learning process. It equips teachers with the necessary tools for assisting students in building strong foundations, and ultimately it could sustain the desire and eagerness young children have for learning as they progress to higher grades.

    Thanks for posting this topic!

    Regards,
    Hatshepsitu Tull

  19. Hi Everyone,
    Thanks for such rich content and the robust discussion! I am new to this whole thing. We are launching a nonprofit elearning site that is very bootstrapped, so I am having to elearn this ID field myself to ensure I’m helping the organisation and the SMEs deliver the most practical and relevant course content possible, particularly given our target market of volunteers who are already time-pressured. I am really pleased at the focus on ‘use’ vs. ‘know’ (what a shame this wasn’t around when I was at school!). Coming from a practical-application background, I actually defined this for the organisation last year when I told the team we need our SMEs to deliver more ‘do’ than ‘tell’. I found Bloom’s helpful in articulating our approach (using the six-layer process), where knowledge = tell; understand = show; and apply, analyse, evaluate, and create = do. Despite this, I was taking a more considered, traditional method in the course outline: deliver information on the topic, show them how to do it, and then walk them through the process. Now, however, I’m thinking we need to show them how to do it, walk them through the process, and along the way allow them to discover more information on the point they are up to. What are your thoughts/suggestions on this, esteemed ones?

    1. Natalie, thanks for your comment. You might get some encouragement from the research presented in the post Throw them in the deep end. Research on “productive failure” suggests that if we *don’t* tell them exactly what to do and when to do it, they learn it more deeply and can apply it to a wider range of situations. In elearning design, that could mean sending learners directly into a series of increasingly challenging activities that include optional support, rather than first showing them everything they need to do. It’s important to structure the activities so the learner has enough support (“scaffolding”).
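
      As a loose sketch (structure and wording invented for illustration, not a prescription), a series like that might be modeled as activities with optional support:

          # Loose sketch: increasingly challenging activities, each with
          # optional support the learner can reveal. All wording invented.
          activities = [
              {"challenge": "Handle a routine volunteer request.",
               "hint": "Review the standard workflow if you get stuck."},
              {"challenge": "Handle an ambiguous request with missing details.",
               "hint": "Think about what you'd ask the requester first."},
              {"challenge": "Handle a high-stakes request under time pressure.",
               "hint": None},  # support fades as the challenge grows
          ]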