Learning Technologies - 25 years and 5 concepts
What could you do at 25?
I remember the thrill of finally having affordable car rentals due to the decreased risk factor, and going into ‘over 25’ venues and realising they weren’t that interesting. Based on recent posts to mark this milestone, 25 is a reflective benchmark for the Learning Technologies conference. As a Learning Techie with 12 years under my belt, it’s a firmly established fixture in my annual calendar. It’s also an invaluable opportunity to meet clients and former colleagues, see what’s new, and engage in conversations I seldom have elsewhere.
So what were the highlights, and what were the dimly lit rooms of this year’s Learning Technologies exhibition?
Read on to find out…
Skills Frameworks
You may be saying to yourself right now “But I was there. Why aren’t you starting with AI?” - I’ll come onto that in a moment.
As Charlie noted in his last post, skills are big right now, and it’s clear that the trend is still on the rise. Many platforms were touting “Skills Graphs” and various forecasting mechanisms at the individual, team, division, and business levels, coupled with content delivery highly tailored to specific sectors. I found myself thinking that, while we all face similar problems, much of the sales talk had drifted away from the foundational problem to be solved.
What are ‘skills’ solving? From a tech perspective, they are a way of applying a quantitative measure to something qualitative: what a human knows. However, few of these solutions get to the next step: what a human can actually apply, or better yet, the decisions they can make under pressure in their specific work context. Ultimately, unless skills can be tied to measurable behavioural change, they may be obscuring the problem rather than solving it.
Where do skills come from?
Another thing on my mind is where all these predefined taxonomies of ‘skills’ come from. One company answered this question and said its taxonomy was based on the World Economic Forum’s list. I found this refreshing, although it also got me wondering. Based on my previous experiences working in education, where I supported trainee social workers and NHS staff, skills were essentially umbrella concepts that allowed people to map meaningful working experiences to them. Yes, there was also a lot of mandatory training to map as well, but the real-world evidence was vital because only those doing the job, their peers, their mentors, and those in a supervisory capacity knew what those skills were. Only they knew how to look at this self-reported evidence alongside real-world evidence and service user feedback to show ability, progress, and the impact of change. In short, skills led from the top down must be gathered and understood from the bottom up; stock definitions will only get you so far.
AI - augmentation at best
This is where AI comes in, suggesting, populating, or recommending the skills needed to reach the next level or role. However, this assumes that the foundational model is correct, which is the Achilles’ heel of all AI. You cannot buy that model. You have to build it with the humans doing the work in the real world. I’m just going to leave this word here for later… context.
So, what of AI? AI is a tool, not a solution. Its positioning as a solution at Learning Technologies, in many cases, added to the noise: it was presented as the remedy to a reduced workforce and tight budgets. However, I felt a tangible “trough of disillusionment” deepening all around me. When one vendor touted their enhanced AI content-building capabilities, it was the slide on privacy practices and data sharing that prompted the most smartphone cameras into action.
From an accessibility and inclusion perspective, I was astounded by one technology, which looked like a ring-fenced implementation of GPT 4.0. It suggested that a more approachable writing style would include emojis in the middle of sentences. This would result in a poor experience for those using screen-reading technologies and for those who might struggle to interpret the meaning of emojis for various reasons. The assumption in this case was that the learning professional would quality-control the build process at every step. That is a big assumption concerning editorial awareness and knowing what good looks like.
It wasn’t all bad on the AI front though. It was being implemented well on the skills and data side of things, where there were user-defined models and clear contexts of use - intelligently selecting content to deliver rather than generating it. Here’s that word again, context.
L&D: Learning & Data
The data nerd in me is very excited about the trend for user-driven data reporting, and I wrote in my notes, “Is it Learning & Development or Learning & Data?”. From enhanced data analysis in off-the-shelf products to more bespoke tools and advanced data wrangling, it’s a brave new world for us data heads. There are still a few inflexible dashboards making the rounds, though.
Data Warehousing
This is not a new term, but it was great to hear from companies applying data warehousing with a user-centred approach, making it possible to compare data ingested from several systems through xAPI and other methods. The key questions for me are always how you can get the data out of a system for further analysis (or to move it elsewhere), and how qualitative data, like feedback from real people on the job, can be included. The “data-out” question was often answered, which was heartening; the “real people” question, not so much. Gone are the days of closed boxes; people want their data in a harmonious format in the same place, and we are getting closer to that dream.
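To make the cross-system comparison concrete, here is a minimal sketch (not any vendor’s implementation) of flattening xAPI statements from different source systems into one comparable record shape. The actor/verb/object field paths follow the xAPI specification; the `source_system` tag and the example identifiers are assumptions for illustration.

```python
def flatten_statement(statement, source_system):
    """Reduce a raw xAPI statement dict to the few fields needed
    for cross-system comparison in a shared warehouse."""
    actor = statement.get("actor", {})
    verb = statement.get("verb", {})
    obj = statement.get("object", {})
    return {
        "source_system": source_system,  # hypothetical tag added at ingestion
        # xAPI actors may be identified by mbox or by an account object
        "actor": actor.get("mbox") or actor.get("account", {}).get("name"),
        "verb": verb.get("id"),
        "activity": obj.get("id"),
        "timestamp": statement.get("timestamp"),
    }

# Example: a completion statement as an LMS might emit it (illustrative IDs).
lms_statement = {
    "actor": {"mbox": "mailto:ada@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "https://lms.example.org/course/101"},
    "timestamp": "2025-01-30T09:00:00Z",
}

records = [flatten_statement(lms_statement, "lms")]
```

Once statements from each system land in this shared shape, “getting the data out” for further analysis becomes a plain export; the harder part, as noted above, is deciding where qualitative feedback fits alongside these records.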
Again, though, what problem is this solving? It boils down to impact and the need to show a clear connection between the work of L&D and strategic and business goals in tough economic times. There is a clear move from retention and culture to measurable return on investment. However, it would be a mistake to believe that numerical or quantitative data will prove that alone. It’s only one small part of the impact picture. The impact picture is intrinsically human.
Back in the human realm, the importance of data storytelling is being highlighted, and this is great to see. It’s not enough to have the data; you must also know how to socialise and humanise it, connecting it to people and places.
Hyper-personalisation
This is probably my least favourite buzzword of the day. Tech is no longer content to continue the personalisation trend; it now needs to be hyper-personal, and what makes it hyper-personal? Why, it’s the AI secret sauce, of course! I am all for personalisation, but this needs to be built on a real-world model. Your model. You cannot buy someone else's. This is personalisation to the extent that most shapes will fit into a square.
Personalisation worked well when it was designed around available time and the context of work, with AI-mediated research determining what a person needed and building plans that work for them. The more successful approaches used human oversight to improve these efforts further.
As an enthusiastic career technologist, I always come away from these events valuing the human more than ever. After all, human problems fuel technological innovations. We must be clear on our underlying ‘task’, ‘job’, or problem to make the best tactical use of technologies as tools to meet our needs.