There’s been a lot of talk about microlearning of late – definitions, calls for clarity, value propositions, etc. – and I have to say that I’m afraid some of it is a wee bit facile. Or, at least, conceptually unclear. And I think that’s a problem. This came up again in a recent conversation, and I had a further thought (which of course I have to blog about ;). It’s about how to do microdesign, that is, how to design microlearning. And it’s not trivial.
So one of the common views of microlearning is that it’s just in time. That is, if you need to know how to do something, you look it up. And that’s just fine (as I’ve recently ranted). But it’s not learning. (In short: it’ll help you in the moment, but unless you design it to support learning, it’s performance support instead.) You can call it just-in-time support, or microsupport, but properly, it’s not microlearning.
The other notion is learning that’s distributed over time. And that’s good. But this takes a bit more thought. Think about it: if we want to systematically develop somebody over time, it’s not just a steady stream of ‘stuff’. Ideally, it’s designed to get there optimally, minimizing the time the learner has to spend while still yielding reliable improvements. And this is complex.
In principle, it should be a steady development, one that reactivates and extends learners’ capabilities in systematic ways. So you still need your design steps, but you have to think about granularity, forgetting, reactivation, and development in a more fine-grained way. What’s the minimum launch? Can you do aught but make sure there’s an initial intro, concept, example, and a first practice? Then, how much do we need to reactivate versus how much do we have to expand the capability in each iteration? How much is enough? As Will Thalheimer notes in his spaced learning report, the amount and duration of spacing depend on the complexity of the task and the frequency with which it’s performed.
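To make that concrete, here’s a toy sketch (purely illustrative – the expanding-gap heuristic and the numbers are my own assumptions, not Thalheimer’s findings) of how a designer might rough out a reactivation schedule before tuning it against real learner performance:

```python
# Illustrative heuristic only, not a validated spacing model:
# expanding gaps between reactivations, where you'd pick a longer
# base gap and faster growth for simple, frequently performed tasks,
# and shorter/slower values for complex, rarely performed ones.

def spacing_schedule(sessions, base_gap_days, growth):
    """Return cumulative day offsets for each reactivation session.

    sessions: number of reactivation touchpoints after the launch.
    base_gap_days: gap (in days) before the first reactivation.
    growth: multiplier applied to each successive gap (expanding spacing).
    """
    gaps = [base_gap_days * growth ** i for i in range(sessions)]
    days, total = [], 0.0
    for gap in gaps:
        total += gap
        days.append(round(total))
    return days

# e.g. five reactivations, first after 2 days, gaps growing 1.5x:
print(spacing_schedule(5, 2, 1.5))  # → [2, 5, 10, 16, 26]
```

The point isn’t the specific numbers – it’s that even this crude version forces you to make the spacing decisions explicitly, rather than letting a content stream’s publishing calendar make them for you.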
When do you provide more practice, versus another example, versus a different model? What’s the appropriate gap in complexity? We’ll likely have to make our best guesses and tune, but we have to think consciously about it. Just chunking up an existing course into smaller bits doesn’t account for the decay of memory over time or the gradual expansion of capability. We have to design an experience!
Microlearning is the right thing to do, given our cognitive architecture. Only so much ‘strengthening’ of the links can happen in any one day, so developing a full new capability will take time. And that means small bits over time make sense. But choosing the right bits, the right frequency, the right duration, and the right ramp-up in complexity is non-trivial. So let’s laud the movement, but let’s not delude ourselves that either performance support or a stream of content is learning. Learning – that is, systematically changing the reliable behavior of the most complex thing in the known universe – is inherently complex. We should take it seriously, and we can.