Workplace Learning: A Follow-up

I was surprised my previous post on workplace learning got some attention from folks in the field. Apparently, I struck a nerve with problems that are still difficult some twenty years after I worked in that subfield. And yet, there are ways to tackle these problems. New tools combined with new approaches can address reusability, quality, and rapid-update challenges in novel and more effective ways.

In this post, I will respond to a couple of comments on LinkedIn. I’d like to see if we can build a dialog.

Training vs. Learning

Brandon Carson, the Vice President of Learning and Leadership at Walmart (among other things), commented,

What Michael Feldstein is saying gets to the heart of corporate learning and what I’m calling the “dark ages of training.” Its genesis begins with the simple fact that too many orgs won’t just say “training” anymore. Like somehow this became a bad word at some point. And for too long, people who aren’t training experts are involved in making decisions about what training interventions should be. The business has little to no clue about what, when, and where training should be. So, let’s get back to some fundamentals. We are chartered with helping to increase performance. We need to be selective in how we do that. And we need to say no when we know it won’t work. And we train employees. And hopefully, they learn how to perform better in their work. There’s nothing wrong with the word “training.” Now, don’t get me started on the whole skills movement.

Brandon Carson’s post

Given how long I’ve been out of L&D, I can only speculate regarding what’s on Brandon’s mind for parts of this comment. For example, I wish somebody would “get him started” on “the whole skills movement.” I suspect we have a complementary problem with the competency movement in higher education, particularly where it claims to intersect with job skills. But I can only guess.

The distinction he draws between “training” talk and “learning” talk seems more legible to me. Higher ed underwent a roughly similar transformation when we stopped talking about “teaching” and switched our language to focus on “learning.” There was a lot behind this switch, some of which was good. For example, “teaching” focuses on the performative act—the intervention—while “learning” focuses on the outcome. “Learning” also encourages enlisting the learner as an active stakeholder with motivations, goals, and a need to participate for learning to occur.

But the switch led to a lot of fuzzy-headedness too. On the vendor side, textbook publishers began to realize that students who hate their products can be very creative in finding ways to avoid buying and using them. So the publishers focused more on features that students might care about (including, reluctantly, price). But in doing so, they missed the core complaint: students were buying products their instructors weren’t using in meaningful ways. In other words, the textbook contributed to breaking the social contract between teachers and students about the value of the work that teachers assign but don’t meaningfully incorporate into their teaching.

The real problem is that the products fail to give educators tools they can easily make an essential part of what they believe to be their core teaching work, rather than an ancillary add-on and a necessary evil. Educators therefore don’t use the products much or well in their teaching. Students, in turn, seeing them as a waste of time and money, don’t use them either and increasingly don’t buy them. Educators, seeing that students are not using the products they are assigning, rely on them even less, for example by replicating the reading in class lectures. And that perpetuates the vicious cycle.

On the teaching side, the mess gets worse. Educators find themselves on a slippery slope from engaging with the students as active participants to catering to their needs. This is how I find myself on a stage at a major conference debating whether it is more critical for composition professors to teach students how to write well or to listen to their desires and teach them whatever they want to learn.

That’s a real thing.

I suspect the dynamics in corporate L&D have some differences. First, providing education as an HR benefit is back in fashion now after a few decades of corporate neglect. This is an important function but entirely different from training employees on essential and immediate skills they need to perform their jobs today. Second, mediocre, braindead approaches to Design Thinking seem significantly more prevalent in the corporate world than in higher ed. One lousy design thinking workshop seems capable of inflicting massive brain damage on virtually everyone exposed to it. Lost in a haze of corporate New Age-ism, the cultists seem to forget that the two key words in Design Thinking are “design” and “thinking.”

There are also nuances by industry and even department. L&D for manufacturing line workers is very different from L&D for pharmaceutical chemists. While most folks whom a computer or robot can’t replace are knowledge workers these days, the degrees of freedom and the need for consistency still vary pretty dramatically.

And finally, there is a legitimate and challenging trade-off between capturing the ever-changing workplace know-how and business processes that frontline workers see most clearly and maintaining the effectiveness and quality control around knowledge and training interventions that professionals handle best. This is not entirely unlike the tension between the textbook publishers, who can only manage to update their editions every five years, and, say, the biology professors whose knowledge of disease and the immune system has been evolving at an astonishing rate since the start of COVID. In the current higher ed system, either we accept the chaos of everybody teaching what they think is the latest understanding, or we let the pace of updating education be set by the ability of the centralized publisher, with its heavy processes, to keep up.

The barrier is not technology in either the corporate world or higher education. We have the raw technological building blocks we need to create a system in which innovation is captured at the edge, pulled into the core for collaborative review and refinement by professionals when that is called for, and then pushed back out to the edge for use and adaptation as needed. It’s possible to build easy authoring tools, scaffolded by UX and AI/ML, that lower the barrier for subject-matter experts to develop first-iteration learning experiences with some educational validity.

Imagine an authoring system that does something like the following:

  • A subject-matter expert (SME) creates some content.
  • The system says, “Hey, it looks like you’re trying to teach about X. Based on what you’ve written, here are some possible learning objectives. What do you think? Do you want to add them? Do you want to edit them?”
  • Then the system says, “OK, now that we know what you’re trying to teach, it would be good to assess the learners. Here are some suggested assessment questions. What do you think? Do you want to add them? Do you want to edit them?”
  • Finally, the system says, “OK, you have content, learning objectives, and assessments. Let’s look at them side-by-side. Do they look right together? Do you want to change anything? By the way, I have some additional content (or learning objectives, or assessments) that seem to fit with what you’re trying to do. Would you like to see some suggestions?”

These sorts of algorithms are in use at scale today in the textbook industry. They work pretty well. But they’re not being used in this way to help SMEs in the field.
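To make that concrete, here is a minimal sketch in Python of what the suggest-confirm loop might look like under the hood. Everything in it is hypothetical: the `suggest_*` functions are stand-ins for whatever NLP/ML service would actually generate candidates, and the data model is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CourseDraft:
    """An SME's work-in-progress: raw content plus scaffolded additions."""
    content: str
    objectives: list[str] = field(default_factory=list)
    assessments: list[str] = field(default_factory=list)

def suggest_objectives(content: str) -> list[str]:
    """Hypothetical service call: propose learning objectives from prose.
    A real system might back this with an LLM or a trained classifier."""
    return [f"Learners can explain the key idea in: '{content[:40]}...'"]

def suggest_assessments(objectives: list[str]) -> list[str]:
    """Hypothetical service call: draft one question per accepted objective."""
    return [f"Q: In your own words, show that you meet: {obj}" for obj in objectives]

def scaffolded_authoring(draft: CourseDraft, accept=input) -> CourseDraft:
    """Walk the SME through the suggest-confirm loop described above.
    The SME stays in charge; the system only proposes."""
    for obj in suggest_objectives(draft.content):
        if accept(f"Add objective '{obj}'? [y/n] ").strip() == "y":
            draft.objectives.append(obj)
    for q in suggest_assessments(draft.objectives):
        if accept(f"Add assessment '{q}'? [y/n] ").strip() == "y":
            draft.assessments.append(q)
    return draft
```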

Now imagine that the course draft could be shared with the L&D department. They could ignore it. Maybe it’s just fine as-is and doesn’t merit the attention of a professional learning designer. Maybe it can be improved. Maybe it’s OK to run for a while, but the learning designers (or learning engineers) want to look at data on how well learners perform on the assessments over time to see if it needs fine-tuning. What if they could engage with the SMEs directly in the content, like comments on a Google Doc? Track changes? Make versions? Compare different versions? Do A/B testing for the effectiveness of the training? Only a subset of courses would need any of this, and a smaller subset would need most or all of it.
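On the A/B testing point in particular, the statistics are not exotic. Here is a minimal sketch, assuming each course version simply reports how many learners passed its assessments; it runs a standard two-proportion z-test using nothing but the Python standard library:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_pass_rates(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """Two-proportion z-test: do course versions A and B produce
    different assessment pass rates? Returns a two-sided p-value."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: 130/200 learners passed with version A,
# 155/200 with version B. A small p-value suggests a real difference.
print(ab_test_pass_rates(130, 200, 155, 200))  # ~0.006
```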

Let’s go a step further. Suppose the libraries of premade content that L&D departments license were designed to work with such a system. Suppose you could edit a course you licensed to make more sense in your context. Suppose you could fork it. Test it for continuous improvement. I’m not talking about just moving around blocks of locked-down content. And I’m also not talking about writing your course with the help of a licensed library of little tiny bits. I’m talking about taking the 80% of pre-created content and creating or editing the 20% you need to be different, whether at the word, sentence, lesson, or module level. For example, maybe a course is good, but its terminology differs from your organization-internal vocabulary. Why can’t you change just that? It could make a big difference.
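Here is the smallest version of that idea I can sketch: fork a licensed course and apply your organization-internal vocabulary as an overlay, so the licensed base stays intact and your 20% lives in a separate layer that can be re-applied whenever the upstream content updates. The data model is made up for illustration.

```python
from dataclasses import dataclass
import re

@dataclass(frozen=True)
class Lesson:
    lesson_id: str
    text: str

def fork_with_vocabulary(lessons: list[Lesson], vocab: dict[str, str]) -> list[Lesson]:
    """Return a forked copy of licensed lessons with org-internal terminology
    swapped in (assumes a non-empty vocab). The originals are never mutated,
    so the fork can be regenerated when the publisher ships a new edition."""
    pattern = re.compile("|".join(re.escape(term) for term in vocab))
    return [
        Lesson(lesson.lesson_id, pattern.sub(lambda m: vocab[m.group(0)], lesson.text))
        for lesson in lessons
    ]

licensed = [Lesson("intro-1", "Escalate unresolved issues to the help desk.")]
forked = fork_with_vocabulary(licensed, {"help desk": "Service Hub"})
print(forked[0].text)  # Escalate unresolved issues to the Service Hub.
```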

The existing tools seem to be designed to solve the wrong problems. A cursory scan of the leading corporate L&D authoring platforms and content libraries suggests to me that they are still designed for traditional training development workflows. Sure, they have easy authoring templates to lower the barrier to creating basic training. But what I’ve seen so far is skin-deep. I don’t see sophisticated workflows for creating real collaboration between the field and the learning professionals. I don’t see any serious effectiveness analytics or continuous improvement tools. The content might as well be SCORM packages or Authorware applications. As far as I can see (again, with the significant caveat that I haven’t seen much yet), neither the cloud nor AI/ML has changed the fundamental paradigm.

But they could.

Context

Mirjam Neelen, Head of Global Learning Design and Learning Sciences at Novartis, wrote,

One sentence that stood out for me: “the key missing ingredient is context.”

Mirjam Neelen’s post

Before the term “reusable learning objects” came into vogue, I remember reading an article in Performance Improvement Quarterly called “Instructional Design Paradigms: Is Object-Oriented Design Next?” The idea was to take the principles of (then still relatively new and hot) object-oriented programming (OOP) and apply them to content. Instead, we got a whole bunch of tiny bits of content with “metadata.” Tags. Labels. You know the tools you could (at least theoretically) use to organize your email inbox? Yeah, that’s roughly what we’re talking about.

If you were writing a series of explanations (like a training manual, for example), you wouldn’t write one paragraph at a time, isolated from the others, and then decorate it with tags. That’s not how we craft explanations, never mind learning experiences.

Rather than composing, we should be thinking about decomposing. Suppose you started with the training manual, course, or whatever, and you broke it down into chunks that made sense, chunks you might reuse. Suppose you applied both human judgment and AI/ML to continuously update your tags based on new usage contexts. And suppose you expressed these relationships not just in tags but in a language designed expressly to describe relationships (i.e., context), such as xAPI?
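xAPI can carry exactly this kind of relationship data. As a rough illustration (the activity URLs below are placeholders, not real identifiers), a single statement can record not just that a learner used a content chunk but which module and course that chunk was embedded in at the time:

```python
import json

# One illustrative xAPI statement: a learner experienced a reusable content
# chunk, and the context records where the chunk sat. The contextActivities
# block is the relationship data that isolated tags lose.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/chunks/lockout-tagout-procedure",
    },
    "context": {
        "contextActivities": {
            "parent": [{"id": "https://example.com/modules/machine-maintenance"}],
            "grouping": [{"id": "https://example.com/courses/floor-safety"}],
        }
    },
}
print(json.dumps(statement, indent=2))
```

Aggregate enough of these statements and you can ask where a chunk has been reused, in which contexts it worked, and in which contexts it never fit.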

On the surface, we would have an authoring environment that feels…like an authoring environment, only one that makes good recommendations about changes (like the ones I describe in the bulleted example in the previous section). Under the hood, it would have to be quite different. By tagging little bits in isolation, we’re losing 95% of the context that can help us understand where and how content is useful (or whether it is useful at all). Everything about the system would need to be designed to track context, relationships, and knowledge gained through use and reuse in different learning contexts. I have yet to see much of that in the corporate market (and I’ve seen precious little of it in the higher education market, which I know much better). Where it does show up, it’s generally in highly specialized adaptive learning platforms that are more optimized for creating that magic “Netflix of education” than for reuse and continuous improvement. It’s hard to build these systems so they’re flexible and easy to author and edit in, even when that’s a primary goal (which it usually isn’t).

How you know when you’ve got it right

I don’t mean to slam the platforms on the market now. First, I don’t pretend to be current on today’s offerings. Second, when I do look at platforms, whether they’re mainstream authoring platforms like Articulate Rise, course delivery platforms like LinkedIn Learning, or more niche platforms, I see good work, and I see bits and pieces of what I’ve described above. Authoring and delivery have gotten easier in the past two decades. People are thinking about collaborative workflows and bringing SMEs into the creation process.

What I don’t see is a change in the gestalt. I see tools designed to optimize the same fundamental workflows and approaches that L&D departments had 20 years ago. You might be able to build much more polished training courses much more easily with Articulate Rise than you could have with Authorware. But I don’t see how that would fundamentally change what kinds of courses you would build, how you would build them, or how you would figure out what to improve or what to build next. And I certainly don’t see how it helps with anything more than incrementally improving the scaling problem by making authoring faster and easier. I don’t see how it fundamentally changes the dynamic.

That’s…odd. The old “If Hewlett Packard knew what Hewlett Packard knows” quote is more accurate than ever. We have the technology that can enable us to break out of this box now. What we seem to have is a failure of imagination. The same is true in higher education, of course. Different idiosyncrasies drive it, but the higher education sector is stuck in the same locked-down paradigm.

But it’s a bit of a chicken-and-egg problem. We get the tools we ask for, and then the tools nudge us toward specific ways of working—which were probably the same ways we were working when we asked for the better tool. Henry Ford famously (and probably apocryphally) said, “If I asked people what they wanted, they would have said faster horses.”

We’ll know we have it right when we see people work differently and ask for different kinds of improvements to their tools. The comments on the LinkedIn threads provide us with some clues in this regard. Let’s start with Ray Jimenez, Chief Architect and Founder of Vignettes Learning:

Thanks Mirjam Neelen we know all along, something is broken with L&D practices. When we focus on the workers’ context, almost always our L&D solutions, content, tactics become obsolete. Context will guide us, our north.

Ray Jimenez’s comment

“Something is broken with L&D practices.” Yup. That’s one reason why I left. Twenty years ago. What is that something? It’s the ability to be responsive to context and to the real-world, ever-evolving needs of workers and teams. Bartlomiej Polakowski, Senior Learning Architect at Amazon, responded to Mirjam Neelen,

Totally agree. Most companies concentrate on tools and content instead of context.

I observe this trend with purchasing more and more ..and more of[f] the shelf training every 5 years. First there were content houses, then micro learning platforms, LXPs, recently I read about nano learning services (these are micro “microlearnings”). At the end employees go and ask a colleague for help or check Google.

Bartlomiej Polakowski’s comment

When is taking a course more helpful than looking something up on Google or YouTube or asking a colleague, particularly when you’re looking for knowledge and not just a credential? That’s the bar.

So what options do L&D professionals have to respond to this need? Natalia Alvarez, a leadership and communication consultant, writes,

This is something that I consistently see. Solutions are not relevant because they don’t talk to the changeable scenarios that people are facing.

These days I tend to spend plenty of time trying to talk to the people who attend my workshops to truly understand their mindset, narratives and context. I’m becoming a student of the impact created by the learning experience that I’ve designed.

I used to think that I could learn, reflect, teach and get feedback and now I need to expand the circle and study feedback and impact using narrative, context and mindset approaches. Listening has never been more relevant than today, or at least to me. Thanks for all the great content that you share here Mirjam!

Natalia Alvarez’s comment

First, notice that she’s talking about face-to-face training sessions. In human-facilitated L&D, you can respond to learner needs flexibly. But even there, you’re often walking in semi-blind. You do your best to conduct a needs assessment and prepare for your audience, but you are still creating training somewhat in a vacuum. Getting it right is always a crap shoot. That’s not Natalia Alvarez’s fault; it is simply the best one can do with the available tools.

The fundamental problem is that L&D intervention development is treated as a separate, after-the-fact process divorced from actual business processes. With AI/ML tools like Microsoft Viva and SharePoint Syntex being incorporated into standard productivity suites, it is becoming increasingly practical to capture the business pain points and important topics that identify continuous improvement opportunities and provide context for L&D needs assessment. Tools can then make it simpler for SMEs to capture and flesh out field knowledge into training modules without demanding more time than they can give or more expertise than they can have. These first iterations can be worked on, curated, and improved by L&D professionals, either in simultaneous collaboration or after the fact. This could be a continuous cycle.
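None of this needs exotic machinery to prototype. Here is a deliberately crude sketch of the first step, surfacing candidate training topics from help-channel traffic. A real system would lean on the NLP inside tools like Viva or Syntex rather than keyword matching, but the shape of the loop is the same; all names and data here are invented:

```python
from collections import Counter

def trending_pain_points(messages: list[str], watchlist: set[str], top_n: int = 5):
    """Count how often watched topics appear in help-channel messages.
    Spikes become candidate L&D interventions, handed to an SME to draft
    a first-pass module (per the authoring sketch earlier in the post)."""
    counts: Counter[str] = Counter()
    for msg in messages:
        for topic in watchlist:
            if topic in msg.lower():
                counts[topic] += 1
    return counts.most_common(top_n)

tickets = [
    "How do I file an expense report in the new system?",
    "My expense report keeps getting rejected. Help?",
    "Where is the onboarding checklist for new hires?",
]
print(trending_pain_points(tickets, {"expense report", "onboarding"}))
# [('expense report', 2), ('onboarding', 1)]
```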

In other words, we’ll know we’re succeeding when L&D becomes integrated with and inseparable from the business processes and knowledge work it supports in near-real time.

This long-dreamed-of vision among knowledge management and performance improvement geeks (of which I used to be one) is now finally practical, given the tools that exist today. It will not be easy. The design challenges of building the software and the change management challenges within adopting organizations are both formidable. But they are, finally, surmountable.

Isn’t it about time?
