BUBOT Learning

The Cost of Training that Nobody Measures

Jason Callina · corporate-training learning-development roi

The number that stuck with me came from a Gallup report. Voluntary employee turnover costs US businesses about $1 trillion a year. Gallup also found that 42% of that turnover is preventable. Employees say their manager or organization could have done something. Nobody did.

That’s roughly $420 billion a year walking out the door because of things we already know how to fix.

I talked about this in my last two posts. Click-through training rewards speed instead of understanding. Paper assessments measure recall instead of capability. The pushback I kept getting was: sure, but is anyone in the trenches actually saying that? So we went and asked.

What 12 industry leaders told us

Over the past few weeks at Bubot, we ran structured interviews with training leaders, executives, and operations professionals across construction, enterprise tech, gaming, financial services, life sciences, nonprofit, legal, and a few more. One question framed every conversation. How do you know whether training actually worked?

We didn’t lead with our product. We asked about their experience. The same things came up in every conversation.

Almost nobody believes in the metric they’re using. The word that came up most often was useless. Not “imperfect.” Not “we’re working on improving it.” Useless.

“Completion rate is almost useless.” — Senior Creative Director, Gaming

“We say all the time, we just want to get through them.” — EVP & COO, Banking

“95% completion — but how confident are you that they can apply it? I don’t know how you’d feel confident without them solving a problem in a hands-on practical way.” — VP of L&D, Enterprise Technology

Even the people who know completion is worthless keep reporting it upward. One interviewee gave the cleanest explanation for why:

“Confidence sells competence. Whether you are or not is irrelevant. Perception is reality.” — Head of Production, Gaming

People who complete training present confidently. Confidence reads as readiness. The completion metric survives because nobody has anything to replace it with.

The gap between knowing and doing is the real problem, and it’s invisible. Every person we spoke with described the same failure. Someone completes training, reports understanding, then goes back to doing exactly what they were doing before.

“The knowing-doing gap is massive. It’s very difficult to measure.” — Senior Creative Director, Gaming

“They don’t get it until post-go-live.” — Learning Leader, Enterprise Technology

“Some people just are so stuck in how they’ve done things. Even when learning something new, they’re still adapting the old behaviors.” — Training Executive, Nonprofit Sector

What people actually want to measure is whether training changed behavior. Whether someone applies new judgment under pressure. Whether they internalized intent, not just answers. The tools that exist measure none of that.

Half the failure happens after the classroom. This one surprised me. The most experienced operator we spoke with put the failure rate at 50/50 between the training itself and what employees walked back into.

“The person has to go in with the right mindset to get anything out of it. And then the supervisor has the other responsibility to support and reinforce it. I think we fell short on that a lot. Probably 50-50 on the responsibility of failure.” — EVP & COO, Banking

“Training happens in the nuance. It doesn’t happen in the classroom.” — EVP & COO, Banking

When people practice in real conditions, with managers who reinforce, with colleagues who hold each other accountable, training sticks. Without that, completion is a check in a box.

What actually works is the thing organizations can’t scale. Everyone who described effective training described the same thing. Live practice. Real scenarios. Situations where people had to make decisions and own them.

“I have never met anyone who can’t learn by doing.” — Training Manager, Construction

“I can train almost anyone. The real question isn’t can I — it’s should I, given the resources it takes.” — Head of Production, Gaming

The model that works is the one organizations can’t afford to run. So they buy more content instead. More modules. More videos. More slide decks. They keep treating it like a content problem. It isn’t. It’s a measurement problem dressed up as a content problem.

Why $420 billion walks out the door

Now back to Gallup. A trillion dollars a year in voluntary turnover. Forty-two percent of it preventable.

A lot of that money walks because of exactly what those interviews described. People get hired, sit through onboarding that doesn’t land, get assigned to managers who were never developed themselves, and quietly check out. The training department reports 95% completion. The exit interview tells a different story.

Most organizations already know their training isn’t working. The completion dashboard goes up. The capability gap stays the same. Nobody asks the next question. Can people actually do the thing we trained them to do?

That question is expensive to answer. Running a real assessment at scale is its own bureaucracy. So companies keep buying more content.

What we built

The reason I’m writing this isn’t just to wave the problem around. We built something for it.

Instead of capturing whether someone completed a module, our platform captures how someone thinks through a problem. The decisions they make. The reasoning they show. The questions they ask. About 500 words of decision-making evidence per session. Something that can't be gamed, can't be faked, and doesn't need a facilitator to grade by hand.

Across 471 active sessions at Hult International Business School, Clark University, and enterprise deployments, our learners hit 90.7% engagement, 48.6% higher-order thinking on Bloom's taxonomy, and a 93.2% mastery rate at the 80% threshold. For comparison, corporate e-learning averages 10 to 20% engagement, and traditional assessments hit higher-order thinking less than 15% of the time.

One executive we interviewed described what we’re building before he ever saw it.

“An algorithmic workout buddy — it knows when to push and when to back off, and it never lets you just coast.” — Head of Production, Major Gaming Studio

That’s the model. Push when push is the right move. Back off when it isn’t. Never let people coast through to a completion certificate that doesn’t mean anything.

If you can’t measure it, you can’t fix it. Right now, hundreds of billions of dollars a year are being spent on things nobody is measuring honestly.

If you’re asking these questions inside your organization, take a look at what we’re building at https://www.bubotlearning.com or reach out at info@bubotlearning.com. I’d like to hear what the numbers look like on your end.