What is the secret to great metrics? What is the secret to efficient learning? And a great ROI story? The answer is much closer than you think. It is a simple matter of applying science: holding L&D to the same standard of evidence as any other discipline that calls itself scientific.
But let me start by saying what this post is not about. It is not about neuroscience, or, more accurately, “cognitive neuroscience”. Cognitive neuroscience tries to explain how mental activities are executed in the brain.
Because studying a functioning brain is so difficult, and the conditions under which it can be done are so limited, cognitive neuroscience hasn’t yet been able to contribute much to the corporate learning field. What can be observed so far in narrow, controlled contexts can hardly be generalized to conditions as complex as learning in the enterprise.
Back to basics
Rather than neuroscience, let’s talk about plain science. And by that I mean the scientific method: an ongoing process of observing measurable evidence, formulating hypotheses, and testing them against further evidence. A hypothesis is simply a statement that needs to be tested.
If all this seems too detached from L&D, it’s because we lack an example, so let’s go back to the workplace. Your business contact tells you that the software developers on his team need a course in Advanced SQL Indexing because the product is slow, and the culprit is sluggish SQL database code.
The hypothesis here is that training in Advanced SQL Indexing will improve the ability of developers to write better SQL database code.
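To make the hypothesis concrete, here is a minimal sketch of the kind of problem the training targets. The product’s real stack is unknown, so this uses an in-memory SQLite table purely for illustration: the same lookup runs as a full table scan without an index, and as an index search with one.

```python
# Hypothetical illustration only: an in-memory SQLite table showing
# the difference between a full table scan and an indexed lookup.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(200_000)],
)
conn.commit()

def timed_lookup():
    start = time.perf_counter()
    cur.execute("SELECT COUNT(*) FROM orders WHERE customer_id = 42").fetchone()
    return time.perf_counter() - start

before = timed_lookup()  # no index yet: full table scan
cur.execute("CREATE INDEX idx_customer ON orders(customer_id)")
after = timed_lookup()   # now an index search

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM orders WHERE customer_id = 42"
).fetchall()
print(f"scan: {before:.4f}s, indexed: {after:.4f}s")
```

Whether the developers can spot this pattern in their own code is precisely what the proposed training claims to improve.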
And here’s the science
The L&D professional will at this point get to work with providers and SMEs to source the best possible learning solution.
But then, shockingly, it stops there. 87% of Irish organizations do not measure return on investment, according to a survey conducted by the University of Maynooth on behalf of IITD. And that’s where L&D lacks science.
Every hypothesis must be tested. First, L&D must challenge the hypothesis given by the business, because it may be flawed. If both the business and L&D agree with the hypothesis, then the only way to validate it is to measure the evidence. The evidence is not successful delivery of training: the evidence is how that training is helping the team deliver a more efficient product.
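As a sketch of what “measuring the evidence” could look like, suppose (hypothetically) that we track the product’s weekly p95 query latency before and after the training. The figures below are invented; in practice they would come from production telemetry. A Welch t-statistic is one simple way to ask whether the observed improvement is more than noise:

```python
# Hedged sketch: the latency figures are invented for illustration.
# In practice they would come from production telemetry, sampled
# before and after the training intervention.
from math import sqrt
from statistics import mean, stdev

latency_before = [412, 398, 441, 420, 405, 433, 417, 426]  # ms, weekly p95, pre-training
latency_after  = [371, 350, 362, 344, 358, 367, 349, 355]  # ms, weekly p95, post-training

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

t = welch_t(latency_before, latency_after)
print(f"mean before: {mean(latency_before):.1f} ms, "
      f"after: {mean(latency_after):.1f} ms, t = {t:.2f}")
```

A large t-value is evidence for the hypothesis; a small one means the training may have changed nothing that the business can see, however well the course was rated.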
Time to change
The recent hype about neuroscience (soon it will be something else) doesn’t help. It keeps L&D content with new buzzwords and generalizations that we can somehow fold into our practice (and claim to be applying), while letting us avoid a more scientific approach to what we deal with on a daily basis: hypotheses.
It’s time to move outside the cozy confines of the LMS, where surveys are easily conducted, and into the more challenging world of metrics: counters, telemetry (and the long conversations with IT needed to implement it), statistics, distribution curves, longitudinal studies, assessment, A/B testing, agile iterative approaches and, in short, running L&D like the businesses it supports.
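One of those techniques, the A/B test, fits in a few lines. The cohorts and the “defects per release” metric below are entirely hypothetical; a permutation test asks how often a gap as large as the observed one would arise if the training made no difference:

```python
# Hypothetical A/B sketch: defects per release for a trained cohort (A)
# versus an untrained cohort (B). All numbers are invented.
import random
from statistics import mean

random.seed(7)  # fixed seed so the sketch is reproducible
cohort_a = [3, 2, 4, 1, 2, 3, 2, 1]   # trained
cohort_b = [5, 4, 6, 3, 5, 4, 6, 5]   # untrained

observed = mean(cohort_b) - mean(cohort_a)
pooled = cohort_a + cohort_b

# Permutation test: reshuffle the pooled data into two arbitrary groups
# and count how often the gap is at least as large as the observed one.
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[8:]) - mean(pooled[:8]) >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f} defects/release, p ~ {p_value:.4f}")
```

A small p-value says the difference between cohorts is unlikely to be chance, which is a far stronger ROI story than a smile-sheet average.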
Let’s do that, and then, maybe, we can start looking at the advances of cognitive neuroscience and how they can help our practice. But let’s be scientific first.