So yesterday, I went off on some of the subtleties in elearning that are being missed. This is tied to last week’s posts about how we’re not treating elearning seriously enough. And part of it lies in the knowledge and skills of the designers, but it’s also in the process. Or, to put it another way, we should be using steps and tools that align with the type of learning we need. And I don’t mean ADDIE; it’s not the problem, at least not inherently.
So what do I mean? For one, I’m a fan of Michael Allen’s Successive Approximation Model (SAM), which iterates several times (tho’ heuristically, and it could be better tied to a criterion). Given that people are far less predictable than, say, concrete, fields like interface design have long known that testing and refinement need to be included. ADDIE isn’t inherently linear, certainly as it has evolved, but in many ways it makes it easy to treat design as a one-pass process.
Another issue, to me, is to structure the format for your intermediate representations so that they make it hard to do aught but come up with useful information. So, for instance, in recent work I’ve emphasized that a preliminary output is a competency doc that includes (among other things) the objectives (and measures), models, and common misconceptions. This has evolved from a similar document I use in (learning) game design.
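To make that concrete, here’s a minimal sketch of what such a competency doc might look like as a structure. This is purely illustrative: the field names, classes, and the sample content are my assumptions, not a standard format or anyone’s actual template.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    statement: str   # what the learner should be able to do
    measure: str     # how achievement of the objective will be assessed

@dataclass
class CompetencyDoc:
    objectives: list       # Objective entries, each paired with its measure
    models: list           # the underlying models the learner needs
    misconceptions: list   # common wrong beliefs worth designing against

# Hypothetical example content, just to show the shape of the doc.
doc = CompetencyDoc(
    objectives=[Objective(
        statement="Diagnose why a device fails to power on",
        measure="Identifies the failing component in 4 of 5 scenarios")],
    models=["Power flows from outlet to supply to board"],
    misconceptions=["'No power' always means the device itself is broken"],
)
```

The point of the structure is the same as the doc’s: each slot demands useful information (an objective without a measure, or a misconception without a model it violates, is visibly incomplete).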
You then need to capture your initial learning flow. This is what Dick & Carey call your instructional strategy, but to me it’s the overall experience of the learner, including addressing the anxieties learners may feel, raising their interest and motivation, and systematically building their confidence. The anxieties or emotional barriers to learning may well be worth capturing at the same time as the competencies, it occurs to me (learning out loud ;).
It also helps if your tools don’t interfere with your goals. It should be easy to create animations that help illustrate models (for the concept) and tell stories (for examples). These can be any media tools, of course. The most important tools are the ones you use to create meaningful practice. These should allow you to create mini-, linear-, and branching-scenarios (at least). They should have alternative feedback for every wrong answer. And they should support contextualizing the practice activity. Note that this does not mean tarted-up drill-and-kill with gratuitous ‘themes’ (race cars, game shows). It means having learners make meaningful decisions and act on them the way they would in the real world (clicking buttons for tech, choosing dialog alternatives for interpersonal interactions, dragging tools to a workbench or adjusting controls for lab work, etc.).
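The tooling requirements above — branching, plus distinct feedback for every wrong answer — can be sketched as a small data structure. This is a hypothetical illustration of the idea, not any authoring tool’s actual format; the node names and scenario content are invented.

```python
# Each node is a decision point; each choice carries its own destination
# and its own feedback, so no wrong answer gets a generic 'try again'.
scenario = {
    "start": {
        "prompt": "The customer says the device won't power on. What do you do?",
        "choices": {
            "Ask what they've already tried": {"next": "triage", "feedback": None},
            "Tell them to buy a new one": {
                "next": "start",
                # feedback targets the misconception behind this choice
                "feedback": "That skips diagnosis; most power issues are fixable.",
            },
        },
    },
    "triage": {"prompt": "They tried a different outlet. Now what?", "choices": {}},
}

def take(node_id, choice):
    """Return (next_node_id, feedback) for a choice made at a node."""
    c = scenario[node_id]["choices"][choice]
    return c["next"], c["feedback"]
```

Because each choice owns its feedback, the structure itself nudges the designer toward writing a response for every alternative rather than one blanket message.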
Putting in place processes that only use formal learning when it makes sense, and then doing it right when it does make sense, is key to putting L&D on a path to relevancy. Cranking out courses on demand, focusing on measures like cost/butt/seat, adding rote knowledge quizzes to SME knowledge dumps, etc., are instead continuing down the garden path to oblivion. Are you ready to get scientific and strategic about your learning design?