The Old Dog's New Trick: AI as a Force Multiplier in Building Technical Training

For a long time, I was a skeptic, someone who resisted the use of GenAI in my day-to-day work. I thought that it wouldn't be able to deliver the goods. That changed when we pivoted our development process, and the tools became an essential part of the workflow.

Let me be frank upfront: I was not a believer in all this newfangled AI.

I've spent the better part of three decades doing product management across some pretty demanding technical domains. Semiconductor capital equipment. Networking. Industrial measurement instrumentation. Nanotechnology. If you've spent time at the atomic scale of anything, you develop a healthy skepticism toward hype. So when the generative AI wave started crashing into every workflow conversation, my initial reaction was somewhere between eye-roll and mild irritation. I've watched a lot of "transformative" tools come and go, and most of them made good PowerPoint slides and mediocre improvements to actual work.

I was wrong. And I'm old enough and stubborn enough that admitting that doesn't come easy.

Let me set the table: For the past several years I've been a product manager in the technical training organization at a tech major, focused on networking products, solutions, and certification programs. Historically, our content development process was a classic Big Up Front Design effort (think waterfall, but slower). Before a single lesson was written, we'd spend weeks defining learning paths, aligning with SMEs, arguing over taxonomies, and then producing massive design documents that would sit in review purgatory. By the time a developer touched actual content, half the technical landscape had shifted.

We've been working to change that. The move is toward modular development, where each module is scoped tightly against a single terminal learning objective. Nimble. Iterative. Makes sense on paper. The problem was that the front-end work — module definitions, skill mappings, rough outlines, content type recommendations, suggested assessments — still required the same heavy lift, just repeated at a smaller granularity. Multiplied across hundreds of modules, you've just moved the bottleneck, not eliminated it.

Enter generative AI

What changed my mind wasn't a demo or a vendor pitch. It was the output.

I spent time developing prompts — carefully, iteratively, refining based on what came back. Not "give me a course outline for OSPF." Real structured prompting: setting the context, defining the audience profile, narrowing the terminal learning objective(s), the assumed prerequisite knowledge, the delivery modality constraints, the tone for adult technical learners. Layered context. And then I started tuning recursively, feeding the output back in, sharpening the edges.
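To make the "layered context" idea concrete, here is a minimal sketch of how that kind of structured prompt might be assembled. The section names, wording, and helper function are illustrative assumptions, not the actual prompts I use.

```python
# Hypothetical sketch of a layered-context prompt builder. Every section
# name and string below is illustrative, not the real production prompt.

def build_module_prompt(audience, objective, prerequisites, modality, tone):
    """Assemble a structured prompt from labeled context layers."""
    sections = {
        "Context": "You are designing one module of a technical training course.",
        "Audience profile": audience,
        "Terminal learning objective": objective,
        "Assumed prerequisite knowledge": prerequisites,
        "Delivery modality constraints": modality,
        "Tone": tone,
        "Task": ("Produce a module specification: skills mapped to the "
                 "objective, a content outline with recommended content "
                 "types, and suggested assessments."),
    }
    # Each layer becomes its own labeled block so the model sees the
    # boundaries between context, constraints, and the actual ask.
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())

prompt = build_module_prompt(
    audience="Network engineers with 3-5 years of operational experience",
    objective="Configure and verify OSPF in a multi-area topology",
    prerequisites="IP addressing, basic routing concepts",
    modality="Self-paced e-learning with a hands-on lab component",
    tone="Direct and practical, written for adult technical learners",
)
```

The recursive tuning step is then just feeding the model's output back in with a critique layer appended, and tightening the sections that produced weak results.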

What came out the other side was not a rough draft. It was a complete module specification: skills mapped to the objective, a content outline with recommended content types (when to use text and graphics versus video versus hands-on lab), and a set of suggested assessments with enough specificity that an instructional designer could act on them immediately. No ambiguity. No "we'll figure that out in development."

I ran these against what our SMEs typically produce in the same timeframe. It wasn't close.

That's not a dig at our subject matter experts - they're deep, and their knowledge is irreplaceable. But their time is expensive and their availability is constrained, a major factor in time-to-develop and a hard limit on our ultimate ability to deliver. Getting our SMEs to commit a design to paper took a minimum of months, months during which we weren't creating content. The AI produced comparable designs in minutes, and in many cases exceeded what the SME would have submitted in terms of instructional structure, assessment design, and content sequencing logic.

I want to be precise here, because precision matters: I still question the accuracy of the output. The technical content requires validation. But from a product management standpoint, the scaffolding - the architecture, the instructional logic, the mapping of content to objectives - has been consistently strong. And that scaffolding is exactly what eats PM time.

The big picture

Here's what I think we old-timers miss when we resist these tools: your experience is not a liability, it is the secret sauce.

Generative AI is a powerful instrument. Like most powerful instruments, it produces garbage in the hands of someone who doesn't know what they're doing. If you don't understand how adults learn technical material, your prompts will produce something that looks like a training module but teaches nothing. If you don't understand the difference between a learning objective and a performance outcome, the AI will cheerfully generate beautifully written nonsense.

But if you've spent your career wrestling with markets, customers, SMEs, sales teams, and product launches? You know what good looks like. You know what questions to ask. You know the difference between a spec that's ready for development and one that's going to crater in review. That judgment doesn't go away - it becomes the quality filter on what the AI generates.

I spent years in nanotechnology watching people try to interpret AFM images without understanding the physics of what they were seeing. The instrument was extraordinary. Yet the output was only as good as the operator's ability to distinguish signal from artifact. Same principle applies here.

The real deal

The productivity story is real. What used to take our team weeks at the learning path level - before a single lesson was written - we're now doing in days. Module-level specifications that would have been three rounds of SME review are landing production-ready on the first pass. Our instructional designers and development partners are getting cleaner inputs and spending less time in revision hell.

More importantly: we're not just going faster. We're building more consistent product. The modules have better structural integrity across the portfolio because the AI applies the same instructional logic uniformly. Human variability, the kind that makes some SME deliverables brilliant and others a mess, gets smoothed out at the design stage.

Seeing the light

I started this journey as a skeptic. I've become an avid practitioner. Don't get me wrong, the tools aren't perfect, and they're not a replacement for domain expertise or experienced judgment. But for a product manager who has always had too much to do and too few hours to do it, they are the most significant force multiplier I've seen in a very long time.

And I've been around long enough to know what that means.

Do you have a story about using GenAI in your own world? Share it in the comments below!