AI Upskilling Programs Are Multiplying and Most of Them Are Completely Useless
March 23, 2026
I want to say something clearly before anything else: I think the AI upskilling industry, as it currently exists, is mostly a waste of money. Not a partial waste. Not a "room for improvement" situation. A genuine, recurring, institutionalized waste of money that companies are doing to themselves while congratulating each other about it.
I feel this way because the data is pretty direct about it, and because I've watched this pattern before - with digital transformation training, with "agile" certification waves, with cybersecurity awareness programs that taught people how to spot phishing emails and then watched them click phishing emails. The names change. The structure stays identical. Someone sells access, someone buys access, a certificate gets issued, nothing changes.
Here is the situation as it actually stands. Coursera's 2025 Job Skills Report shows an 866% increase in demand for generative AI content over the previous year, making it the fastest-growing skill people are trying to acquire. That number sounds extraordinary until you look at the other side of it. 54% of workers believe AI-related skills are very or extremely important for their career competitiveness, yet only 4% are currently pursuing AI-related education or training. And even the ones who are pursuing it aren't getting much from it. A 2024 BCG study found that while 89% of respondents said their workforce needs improved AI skills, only 6% said they had begun upskilling in "a meaningful way."
So we have an 866% spike in demand and a 6% rate of meaningful engagement. That gap - that vast, expensive, badge-covered gap - is the AI upskilling industry.
The Numbers That Should Embarrass Every HR Department in America
The disconnect between what employers believe they're doing and what employees are actually experiencing is not a minor rounding error. It's structural. 44% of employers claim they offer formal AI or upskilling programs, yet only 33% of employees confirm having access to one. That's an eleven-percentage-point gap: programs that exist, apparently, in a dimension workers cannot enter.
And it gets worse. According to Pew, while half of workers received some form of training in 2024, only 12% learned about AI. TalentLMS research reveals that 63% of employees believe their company's training programs need improvement, and nearly half feel that AI is advancing faster than their organization's ability to train them.
Gerald retired from facilities management four years ago, and even he asked me over dinner last month whether his old company was doing anything about AI. I said probably yes and probably not well. He nodded like he already knew. He'd been through the ISO certification years. He understood what institutional checkbox-ticking looks like.
The really damning figure comes from that same TalentLMS research: while employees prefer dynamic, flexible learning formats like multimedia and self-paced courses, the majority of workers multitask during training, and more than a third forget what they've learned almost immediately. That last part. More than a third forget almost immediately. Organizations are paying for the training equivalent of writing instructions on steam.
And if you want the darkest number of all: 52% of American employees are using AI to complete mandatory work training, including answering questions and even taking entire assessments. They are using the technology they're supposed to be learning about to pass the test about learning about it. I don't know whether to find that funny or just sort of exhausting. Probably both.
The Market Is Growing Because the Problem Is Growing, Not Because It's Being Solved
Here's what the headlines about the AI training market miss completely. The global AI in Learning and Development market is expected to reach around $97 billion by 2034, up from $9.3 billion in 2024. Every newsletter and analyst deck treats this as a sign of health. I think it's a sign of a problem that isn't being fixed.
A market that keeps growing because the underlying skill gap keeps widening is not a success story. It's a subscription. "AI-driven upskilling and reskilling is now an urgent necessity - yet, despite the clear demand, the collective response remains fragmented, reactive, and in many cases, ineffective." That sentence is from the Aspen Institute, not exactly a fringe publication. The AI in education and workforce training industries are expanding not because they're graduating capable people at scale, but because the demand keeps regenerating. The market grows. The skill gap doesn't close.
Tory came into the office last week with a new AI productivity certificate he'd printed out himself - one of the $29 online ones that took about four hours. He was very excited. I didn't say anything. He's already got enough going on. But the certificate will go in a drawer somewhere alongside the positive thinking workbook he bought during his separation, and neither of them will change what he does at nine o'clock on a Monday morning.
That's the fundamental problem with this industry. Most companies spend disproportionately on literacy because it is visible and easy to measure. Fewer lean into adoption, which is messier and requires leadership courage. A certificate is visible and easy to measure. An employee who has actually changed how they work is messier and requires somebody to pay attention.
The Behavior Change Problem That Nobody Wants to Fund
McKinsey, to their credit, said this plainly in late 2025: evidence suggests that training alone rarely drives sustained behavior change. They described a company that ran an AI literacy course and found that everyone left knowing what generative AI can do, but a month later adoption was minimal, because workflows, incentives, and frontline leadership behaviors remained unchanged.
That is almost every AI upskilling program that exists right now. People know what generative AI can do. They can define a large language model. They passed the module. They got the badge. They went back to doing everything the same way because their manager does everything the same way, their performance review doesn't include any mention of AI usage, and the tools they'd need to use aren't integrated into anything they touch during a normal workday.
The organizations that are actually seeing results are doing something structurally different: they introduce AI assistants directly into the flow of work, train supervisors to model adoption, redesign performance metrics to reward experimentation, and create peer-led support communities. That is not a Coursera module. That's a management initiative. It costs more than a license fee, and it requires executives to actually change how they operate, which is apparently a very large ask.
67% of HR professionals said their organizations were not proactive in upskilling employees to work with AI; 51% said enhanced training is the top need for their organization. Those two facts in the same sentence should be alarming. The people responsible for training are describing training as their biggest unmet need while also acknowledging they're not doing it proactively. That's not a training problem. That's a leadership problem wearing a training problem's coat.
What Good Actually Looks Like (It's Boring and That's the Point)
The programs that produce real results have a few things in common, and none of them are exciting enough to put on a conference slide.
First, they're specific. Not "AI literacy" as a general concept, but here is the tool you will use, here is the workflow it replaces, here is what you do when it gives you a wrong answer. Hands-on learning where workers come to understand the technology's flexibility and usefulness - and see that they can use it to think about doing their jobs in different ways - is what inspires real engagement. That requires someone to design the learning around an actual job, which is harder and slower than deploying an off-the-shelf module to ten thousand people and calling it done.
Second, they measure the right things. Relying on course completion rates alone to measure progress is insufficient. Tracking key performance indicators like productivity gains and employee engagement is what shows whether an L&D strategy is working. Course completion rates measure whether people clicked "next." They do not measure whether anything changed. These are different things that companies have chosen to treat as identical because one is easy to put in a slide deck and one requires actual work.
Third - and this is the one that companies consistently underinvest in - they have leaders who visibly use the technology themselves. Trailblazing CEOs are now spending more than eight hours per week on their own AI upskilling and investing twice as much as their counterparts in upskilling and capability-building across their organizations. That's not a coincidence. The organizations seeing real returns have executives who are learning alongside their teams, not deploying training programs from a distance like a humanitarian aid drop.
Derek told me last week that his whole department got assigned a four-hour AI fundamentals course and had until Friday to finish it. He did it in one sitting, at 2x speed, while eating lunch and looking up something about Andor on his phone. I believe him completely. The course was probably fine. Derek is fine. Nothing changed.
The Certification Problem Is Getting Worse, Not Better
One specific corner of this market deserves particular attention: the AI certification explosion. There are now more AI certifications available than any reasonable person can evaluate, and the quality range is vast. "With so many AI certifications available, it can be hard to know which ones are worth your time and money." That sentence, from a recruiter's guide, is doing a lot of polite work. What it means is: most of them aren't.
The ones that hold up - Google's cloud certifications, the AWS AI Practitioner credential, IBM's professional certificate series, Andrew Ng's work through Coursera - share the characteristic of requiring people to actually do things, not just read about them and answer multiple choice questions. The real differentiator of quality AI certifications is their focus on practical application. The best programs don't just teach theory - they help you build working AI solutions and understand model deployment.
But those programs are the minority. Most of what's being sold right now is closer to what Brookings described when analyzing retraining programs broadly: there has been comparatively little discussion about what these programs actually look like and their feasibility, and the evidence that does exist provides reasons to be skeptical of retraining as a reliable means of supporting labor adjustment.
I've been doing this work long enough to remember when everyone needed a Lean Six Sigma green belt. Then it was a PMP. Then a digital marketing certification. The industry produces credentials faster than employers figure out what those credentials actually mean. AI certifications are following the same pattern, just faster, because the underlying technology moves faster and nobody wants to be caught without a badge when the audits happen.
What Business Owners Should Actually Do With This Information
I'm not going to tell you to cancel all your training budgets. Some programs genuinely work, and the underlying need is real. The World Economic Forum's 2025 Future of Jobs Report estimates that AI will displace 92 million jobs but create 170 million more by 2030 - and whatever you think of that projection, the directional reality is that the skills required to stay useful in most industries are shifting, and they're shifting faster than most companies are moving.
But the decision you should be making is not "which AI training platform do we license" - it's "what would it actually look like for this specific person in this specific role to do their job differently because of AI, and what would we need to change about their workday to make that happen?" That question is harder to answer. It requires you to know what your people actually do. It requires your managers to model the behavior. It requires patience for a process that doesn't produce a dashboard of completion rates by Thursday.
Stephanie asked me once how much we should budget for AI training. I said it depends entirely on what changes we're willing to make to how people are managed and measured afterward, because if the answer is nothing, then the training budget is whatever you're comfortable lighting on fire. She looked at me like I'd said something strange. She is not accustomed to thinking about training as something that requires follow-through from the organization, not just enrollment from the employees.
Most employees can learn the basics of prompting or the terminology of generative models in a few hours. The hard part is changing how leaders and teams think, decide, and collaborate in an AI-enabled environment. That's the part nobody wants to buy a course about, because it's not really a course. It's a sustained management commitment. And sustained management commitments don't come with a digital badge you can add to LinkedIn.
The AI upskilling market will keep growing. The skill gap will probably keep growing with it, at least for a while. The companies that pull away from the pack won't be the ones that deployed the most training modules. They'll be the ones that figured out the difference between teaching people about AI and actually changing how the work gets done. There aren't that many of them yet. But they exist, and the gap between them and everyone else is already widening.