Last Monday I issued Post D, where D was the Roman numeral for 500. But D has another significance for me right now, because post ID (that’s 499) acted as a catalyst for a feast of debunking.
In post ID, Why Managers Matter, I shared the results of an evaluation study carried out by KnowledgePool, in which 10,000 employees and their managers were asked to assess the degree to which they had been able to apply what they had learned on a course completed three months earlier, and the impact this had had on their performance. The results appeared to show that “the majority of learners (69 per cent) used what they learnt and experienced significant performance improvement,” and that “line manager support to help learners use what they had learnt was a major factor in tackling the lack of performance improvement.”
This did not impress Garry Platt:
Oh dear, is this piece of nonsense still masquerading as ‘research’? Back in 2007 Kevin Lovell brought this work to the attention of the development community on Training Zone, under the title of The Holy Grail of Evaluation?. As evidence of a successful outcome, I would question whether this work provides one iota of worthy data. My exact reasons for stating this are detailed here in my article entitled Holy Grail or Empty Vessel?, which was my somewhat energised response to the title and content of Kevin Lovell’s missive.
As far as I’m concerned, the only crime that KnowledgePool have committed is in overstating the uniqueness of their contribution to the process of training evaluation. The findings may have been based on self-reporting, with no objective evidence, but that doesn’t invalidate the results. After all, the 10,000 employees and their managers could have said that they’d been able to apply nothing from the courses they’d attended, but they didn’t. That has to be worth something, and certainly a lot more than happy-sheet data collected at the end of courses.
However, the major debunk came from Donald Clark (not the usual Donald Clark who does the debunking, but his American namesake – if there are any more debunking Donalds out there, then please reveal yourselves). In my post I cited the claim made by Mary Broad and John Newstrom in 1992 that “…merely 10% of the training dollars spent result in actual and lasting behavioural change.”
Donald responded as follows:
The reason the numbers differ so much is that Broad and Newstrom cite the Baldwin and Ford (1988) research paper, which in turn cites an article written by Georgenson (1982). In his article, Georgenson is asking a rhetorical question: “How many times have you heard training directors say: ‘I need to find a way to assure that what I teach in the classroom is effectively used on the job’?”
“I would estimate that only 10 percent of content which is presented in the classroom is reflected in behavioral change on the job. With increased demand from my management to demonstrate the effectiveness of training, I have got to find a way to deal with the issue of transfer.”
So there is no research at all. My guess is that Baldwin and Ford read an abstract and mistook it for real research. If I thought that number were real, I would follow Beck’s advice and go shoot myself for being such a loser. The references are listed here.
Thanks Donald. This is so-called research that I certainly won’t be quoting again.
So, can anyone point me to any reliable and valid research into the effectiveness of training? Does this exist?