Finding the needle in the learning content haystack
20 May 2021
Image credits: Haystacks by John Pavelka

The promise of the world-wide web was simple: connection. Connection with each other and connection with information. Putting the world online meant the sum of human information would be at our fingertips. The web has lived up to its promise, all too well. We have moved from Francis Bacon’s world where knowledge was power to one where almost all information is available, immediately.

The problem now is not lack of access to what you want. It’s finding it.

This is particularly true in Learning and Development (L&D). According to this year’s L&D Global Sentiment Survey, over 3,000 practitioners globally thought reskilling and upskilling the most important activity for L&D in 2021. A great deal of this reskilling and upskilling will require materials, and most of this content will be delivered online. Some will be delivered formally, as part of a course. More will be used informally, by people seeking it out to help them learn. However the material is used, though, L&D teams face the issue of connecting people with what they most need amid a sea of content.

When information lived largely on paper, in the days of encyclopaedias and physical libraries, an index solved this problem. Information was so scarce that there would only be a few books that met your needs, and a single entry in the encyclopaedia.

A sea of content
We are, however, well beyond this now. Many, possibly most, large organisations subscribe to one or more libraries of digital learning content in various formats. Skillsoft reckons it has about 45,000 courses, LinkedIn Learning has some 30,000. The Harvard Business Review Library has many tens of thousands of articles, and getAbstract provides access to over 20,000 abstracts of key business books. And there are hundreds of other content providers.

In addition, there are the aggregators, selecting and collecting a range of digital courses. Whether Go1 (100,000+ courses), OpenSesame (20,000+), OfCourseMe (500,000+) or another aggregator, these leviathans add to the huge breadth of content available.

Added to all this content is what organisations themselves produce. Not just content produced explicitly for learning, but anything that can be learned from – which includes almost everything from company reports to presentations to product descriptions and beyond.

And finally, there is the web. Search for ‘authentic leadership’ and Google will provide about 111,000,000 results. In 0.54 seconds. Organisations aiming to match employees with the best possible content will be considering not thousands of assets, but millions. We’ve extended a long way beyond the library.

No manually produced index can handle this volume of material. Even the smartest search engine on the planet produces a vast stack of results that may not suit you and your company’s purpose. (Can you really be sure that what you want is top of that list of 111 million results?)

So, is it possible to find the needle in the haystack, the content that will suit a particular person’s needs at a particular time?

Beyond the index, beyond recommendations
How do platforms with huge amounts of content deal with this? The likes of Twitter, Facebook, Amazon and Netflix use a combination of user recommendation and algorithms based on billions of data points of user behaviour.

There are two reasons this won’t work for L&D.

First, those billions of data points are only possible because of the vast size of the networks. Corporations, in contrast, do not share their content and employee details beyond their firewalls. There is far less data for an algorithm to work on.

The second problem is more subtle – the precise needs of the human at the end of this algorithm. While we may happily follow someone’s recommendation for entertainment, we are far fussier about learning.

Netflix does a reasonable job of recommending movies, but our standards for choosing things we want to learn from are different and more focused. If we want to be entertained, we often don’t care too much how it happens. In contrast, when learning, we typically have very clear criteria for what we want or need. And we don’t want to waste time with anything failing to meet those criteria.

It’s this very clear set of criteria for success, against the huge range of content available, that makes selecting the right content for learning so difficult.

When you’re choosing entertainment, you might tell Netflix you want to watch a Sandra Bullock movie. Netflix will serve up the same recommendations regardless of whether you’re alone, with friends, on the family sofa on Friday evening, or just arrived in your hotel room.

In looking for learning content, we are usually much more particular in our needs. Rather than looking for something as unspecific as a ‘Sandra Bullock movie’, we look for the equivalent of a ‘Sandra Bullock action movie, not too long, and from earlier in her career’. We’ll be satisfied by Speed (1994) and disappointed by the 2000 comedy Miss Congeniality.

The challenge is context, the particular needs and circumstances of a person. And it is a huge challenge. We are not sorting through Sandra Bullock’s 46 movies, but millions of potential learning assets.

Beyond content for a wider context
For a better search, we need fewer, better-fitting results, from which the employee can choose and be happy with their choice. This is not just a matter of saving time, although that is important. The number of choices presented to a person matters in its own right. In their 2000 study, When Choice is Demotivating, Sheena Iyengar and Mark Lepper showed that having too large a range of items to choose from reduces the likelihood of a person making any choice at all. Subsequent studies revealed that it also reduces the chooser’s satisfaction with their selection.

To reach that short, but useful list of items that a person can choose from, we need to move away from the analogy of indexing information by just looking at the content itself. We also need to explore the context of a person’s working environment.

Here, two forces determine what is important. The first is top-down: what does the organisation believe is needed? In the past, this question has driven the creation of competency frameworks linked to learning content. Often regarded as impositions from a central HR function, these frameworks can be very useful, provided they strike the correct balance between complexity and usefulness. They also – and this is crucial – need to be maintained over time.

Too often competency framework taxonomies either don’t strike this balance or are not maintained and become little more than cumbersome administrative chores. In reaction, ‘folksonomies’ have sprung up, crowd-sourced lists of skills (also referred to as skills clouds) defined by employees with no centralized coordination.

Both methods can work, but the best approach is to combine both: the top-down aims of the organisation with the bottom-up demands of employees. This can be done, using a combination of human and artificial intelligence.

In conversation with the team at Filtered (where I am a non-executive director), I’ve learned that understanding what is important to both the organisation and the individual takes a comprehensive discovery process: parsing documentation algorithmically, exploring existing frameworks, and conducting a wide range of interviews with executives, key workers and subject matter experts.

At its simplest level, this process allows the organisation to differentiate between definitions. ‘Personal productivity’ to the organisation may include things like time management, but for individuals it might be much more about how to use the company’s different software tools. Beyond this, it’s possible to add greater granularity of definition of skills and align this with particular content – to be quite detailed about how to manage time, or use those tools.

This taxonomy can be used to categorize content more accurately, but how do we know what is relevant for a particular person?

By reviewing job descriptions, job ads, emails and other communications it is possible to infer the context of the person looking for content and hone the search results. So, rather than the search simply being for ‘personal productivity’, for example, it could be for ‘personal productivity’ for someone new in a management role, new to the company, in a particular division. In this case, they would certainly need to know how to use the company-wide tools, and those relevant to a particular division. They might also, being new to management, benefit from some more generic information, like tips on time management.
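The kind of context-aware honing described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not Filtered’s actual method: the asset tags, the scoring rules and the context fields are all invented for the example. The idea is simply that a person’s context (division, newness to management) re-ranks assets that all match the same skill query.

```python
# Toy sketch of context-aware content ranking (all names hypothetical).
# Each asset carries skill tags; a person's context boosts assets whose
# tags match that context, so the shortlist fits the individual.

def rank_content(assets, query_skill, context):
    """Return assets tagged with query_skill, best contextual fit first."""
    matches = [a for a in assets if query_skill in a["skills"]]

    def score(asset):
        s = 0
        if context.get("division") in asset.get("divisions", []):
            s += 2  # division-specific tools come first
        if context.get("new_manager") and "management" in asset["skills"]:
            s += 1  # generic management tips help a new manager
        return s

    return sorted(matches, key=score, reverse=True)

assets = [
    {"title": "Company-wide tools intro",
     "skills": {"personal productivity"}, "divisions": ["sales", "ops"]},
    {"title": "Time management tips",
     "skills": {"personal productivity", "management"}},
    {"title": "Ops division toolkit",
     "skills": {"personal productivity"}, "divisions": ["ops"]},
]
context = {"division": "ops", "new_manager": True}
shortlist = rank_content(assets, "personal productivity", context)
```

A bare query for ‘personal productivity’ matches all three assets; the context pushes the division-specific material to the top and keeps the generic time-management tips on the list because this person is new to management.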

By understanding not just the content itself, but also the context of the organisation and the person making the search, it’s possible to prioritize the vast amounts of content in an organisation – to reduce the huge pool of content to a shorter list from which we can make a better choice that suits an individual’s – as well as the organisation’s – learning needs.

Beyond the Netflix of learning
We really don’t need another Netflix of learning. We’re not in the entertainment business; we’re in the business of improving performance through learning. That means we need ways of connecting people to content which go beyond the content itself, and which take into account the context of work.

Each person, each role, each workplace is different, and that’s why this can never result in a single piece of learning content being selected magically, like a rabbit from a magician’s hat, from the vast amounts available in most organisations. At the end, it is always a human being who will need to select their preferred option from a manageably short list.

This complexity of context is also why the process of producing that list is something that cannot be done entirely by machine. Netflix can suggest a list of movies for you, because it’s a straightforward task informed by a huge, but relatively simple dataset. Learning, and work, are far more complex. Creating a well-honed list demands human brain power and understanding in setting up the algorithms for each work context – talking to those subject matter experts, researching what matters to the organisation, and to individual employees.

And the results of that human intelligence, combined with the right algorithms and processes, can be extraordinary. I have seen Filtered use their Content Intelligence approach to search across the vast reaches of corporate content and produce targeted lists that match a particular context extremely well, and they tell me they have satisfied clients already adopting this technology.

The amount of content in the world is only ever going to increase, and existing search methods are straining to cope. Fortunately, it looks like a combination of artificial and human intelligence is providing an answer.
