The Content Production Trap: What 60 Years of eLearning Tools Actually Show Us

Tim Slade and Mel Milloway recently reframed the AI debate in a way that's worth sitting with: it's not really about whether AI kills authoring tools — it's about the deeper challenges facing instructional design. Specifically, the field's persistent focus on content production over learning outcomes.
That framing landed for me because it describes something that didn't start with AI. Every major shift in eLearning tool development over the past 60 years has followed the same pattern: a new technology emerges, the industry asks "how do we produce content with it," and the question of whether that content actually changes behavior gets deferred to the next era.
AI isn't disrupting that pattern. It's accelerating it.
The two charts below show the core tension. Training spend has grown roughly 85% since 2008. Formal learning hours per employee have declined to their lowest point on record. The measurement gap is just as stark: 90% of organizations evaluate training through satisfaction surveys, but only 35% consistently measure whether it produced any business outcomes. The framework to measure what actually matters — Kirkpatrick's four levels — has existed since 1959.
The interactive section that follows maps the full arc: six eras of tool development, each examined through four lenses — the problem the era was trying to solve, who built the content, what the output looked like, and what each era consistently failed to address. The catalog appendix at the bottom covers every tool, standard, language, and industry shift referenced in the piece.
What the track record shows
Six eras. Each solved a different problem. Each left the same one untouched.
The reach problem (1960–1989). Training doesn't scale when the instructor is the bottleneck. PLATO, Authorware, ToolBook, CD-ROMs. For the first time, one piece of content could reach thousands without an instructor in the room. The constraint meant programmers and instructional designers had to collaborate. Quality was inconsistent, but the barrier forced some rigor. "Did they complete it?" became the metric by necessity.
The standards problem (1990–2001). Content works on my machine but not yours. AICC, then SCORM 1.2. Solved a real infrastructure problem — content built in one tool could run on any compliant LMS. In exchange, the field adopted completion, score, and time spent as its primary measurement model. SCORM was the right solution to the wrong problem. Nobody noticed, because the wrong question was the only one being asked.
The production problem (2002–2009). Building training requires programmers, and organizations can't keep pace. Articulate Studio, Captivate, iSpring. Genuine democratization — anyone with PowerPoint could publish. Every productivity gain was spent producing more content, not better content. The page-turner epidemic. The industry became so associated with low-quality click-through training that practitioners started hiding their job titles at parties.
The device problem (2010–2015). Apple made it official in 2010 that Flash would never run on iOS, and suddenly every Flash-built course in every LMS was broken on mobile. The next five years were spent migrating to HTML5. Storyline 1 (2012) brought real interaction design capability to a tool designers could actually use. The mobile learning conversation was dominated by "does it run on an iPad?" rather than "should it look like this on an iPad?" Responsive delivery didn't mean rethinking content design.
The format problem (2016–2021). Fixed-slide content is too slow to build and the wrong shape for how people actually learn. Rise 360 launched in November 2016 — a genuine architectural rethink, not a duct-taped responsive player. Block-based, browser-native, scroll-not-click. Output format diversity increased. Measurement sophistication didn't. The tools got better at making content look like learning.
The generation problem (2022–present). Human production is still the bottleneck. ChatGPT, Articulate AI Assistant, AI-native tools. Production bottleneck genuinely compressed. Organizations can now produce sophisticated-looking, ineffective training at an unprecedented scale. Every era solved a production or delivery problem. None of them solved the learning problem. The underlying measurement model — completion as a proxy for behavior change — has not been disrupted. That's the uncomfortable context for the AI debate.
The numbers that confirm it
If history is circumstantial evidence, the data is the conviction.
The industry is spending more to deliver less.
We measure what the LMS can track. The LMS tracks what the standard supports. The standard was written in 2001.
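To make that constraint concrete, here is roughly what a SCORM 1.2 course can say to the LMS that hosts it. A minimal sketch in TypeScript: the runtime calls and cmi.core.* data-model elements are the real SCORM 1.2 vocabulary, while the interface wrapper around them is just illustrative scaffolding.

```typescript
// A minimal sketch of a SCORM 1.2 course reporting to its LMS.
// The call names and cmi.core.* elements are the actual SCORM 1.2
// runtime vocabulary; the interface below is illustrative.
interface Scorm12Api {
  LMSInitialize(param: ""): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(param: ""): string;
  LMSFinish(param: ""): string;
}

// The host LMS injects this object into the course window.
declare const API: Scorm12Api;

API.LMSInitialize("");

// This is close to the entire measurement vocabulary the standard offers:
API.LMSSetValue("cmi.core.lesson_status", "completed"); // did they finish?
API.LMSSetValue("cmi.core.score.raw", "80");            // what did they score?
API.LMSSetValue("cmi.core.session_time", "0000:42:10"); // how long did it take?

API.LMSCommit("");
API.LMSFinish("");
// Nothing in this data model can express "applied the skill on the job."
```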
xAPI has offered a better path since 2013, tracking behavior across contexts, outside the LMS entirely. cmi5 removed most of the implementation friction. Adoption remains marginal. The obstacle was never the standard — it was that most organizations were never asking the question that the better standard was designed to answer.
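For contrast, here is the kind of statement xAPI was built to record. The actor/verb/object shape is the real xAPI statement structure; the specific verb IRI, activity ID, and names below are hypothetical, invented for illustration.

```typescript
// An xAPI statement as a TypeScript object literal. actor/verb/object
// is the real statement structure; the IRIs and names are hypothetical.
const statement = {
  actor: {
    name: "Jordan Rivera",                      // hypothetical learner
    mbox: "mailto:jordan.rivera@example.com",
  },
  verb: {
    id: "https://example.com/verbs/coached",    // hypothetical verb IRI
    display: { "en-US": "coached" },
  },
  object: {
    id: "https://example.com/activities/q3-sales-call", // hypothetical
    definition: {
      name: { "en-US": "Live Q3 sales call" },
    },
  },
  result: { success: true },
  timestamp: new Date().toISOString(),
};

// POSTed to a Learning Record Store, not an LMS. The event can happen
// anywhere (on the job, in a simulator, in a conversation), which is
// exactly the territory completion data can never reach.
```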
And here's the number that ties it together: smile sheets — the Level 1 satisfaction surveys that 90% of organizations rely on — correlate with actual learning outcomes at r = 0.09. Square that and satisfaction explains less than 1% of the variance in learning: effectively zero. Dr. Will Thalheimer's 2017 meta-analysis found that delivery medium is essentially irrelevant to learning outcomes. What determines effectiveness is the instructional design: spaced repetition, retrieval practice, realistic scenarios, and meaningful feedback. Those factors transfer across every format — from a mainframe terminal in 1965 to an AI-generated Rise course in 2025. The tool has never been the variable that mattered.
Two possible futures for AI
The authoring tool is a production instrument. It has always been a production instrument. The question is whether the paradigm it serves — batch-produce, package, deploy, track completion — still makes sense when content can be generated and adapted in real time.
Future A: AI is another delivery shift, like mobile was. Tools absorb AI, get faster, get better. SCORM packages remain the output. Organizations still measure completion. The learning problem persists at greater speed and on a larger scale. Authoring tools survive. The field's structural problem survives with them. History says this is the more likely short-term outcome.
Future B: The batch-produce–deploy model only existed because real-time adaptive content wasn't possible. AI makes it possible. If content can be generated on demand and adapted midstream, the SCORM package as the unit of learning becomes obsolete. Not tools dying — but finally having to answer the learning question that 60 years of tool advancement deferred.
Which future plays out isn't a technology question. It's a practitioner question: are you going to use AI to produce more of the same, faster — or is this the moment the field finally asks whether what we're producing is actually changing anything?
The framework to answer that has existed since Kirkpatrick published in 1959. The tools have never been the obstacle.

