7 min · AI Strategy · May 9, 2026

AI Rewards People Who Learn by Doing

AI does not replace learning. It tightens the feedback loop for people who believe they can figure things out and stay engaged with the work.


Ryan Macomber

Founder, VibeSec Advisory

The short version

AI rewards people who are willing to get lost.

That is the part I could not explain cleanly until I started reading the research.

I learned more in the past year by stumbling into new technologies during real projects than I did by watching clean tutorials. The mess was the point.

AI did not make the learning easy. It made the loop tighter.

I could try something, hit a wall, ask why it broke, get a new path, test it, then compare the answer against what actually happened. That loop is learning by doing with a tireless tutor sitting next to you.

Why learning by doing works

A 2014 PNAS meta-analysis looked at 225 STEM studies and found that active learning improved exam performance by about half a standard deviation. It also reduced average failure rates from 33.8 percent under traditional lecturing to 21.8 percent under active learning.

That matters because most AI training still looks like passive learning. Watch the demo. Copy the prompt. Save the framework. Move on.

That is not how the skill sticks.

The skill sticks when you use the tool on work you actually care about. Your workflow. Your edge cases. Your broken setup. Your confusing error message at 11:17 p.m.

That is where AI gets interesting.

It gives you more attempts per hour.

Where self-efficacy changes the outcome

Self-efficacy, a term from Albert Bandura's research, names something practical. It is the belief that you can figure things out if you apply effort, strategy, and persistence.

That does not mean you think everything is easy. It means you do not treat friction as proof that you are done.

Self-regulated learning research describes a loop: plan the work, perform the task, monitor progress, reflect on what happened, then adjust the next attempt. Motivation and self-belief help keep that loop moving.

This is why AI can be such a strong learning accelerator for people with high self-efficacy.

If you already believe you can figure it out, AI gives you more surfaces to push against. You ask the model. You test the answer. You notice what is wrong. You ask a sharper question. You compare approaches. You keep moving.

A person with lower self-efficacy may hit the same bad answer and stop.

A person with higher self-efficacy treats the bad answer as another clue.

AI is not the teacher. The loop is the teacher.


One recent study on GenAI-supported programming education found something that lines up with my experience.

GenAI showed potential to improve learning outcomes and self-efficacy. But there was a catch. Excessive reliance and cognitive outsourcing hurt knowledge acquisition and long-term transfer.

The best learners were not the ones who blindly accepted the output. They actively critiqued AI-generated content and used it to construct their own understanding.

That is the whole game.

AI helps when it keeps you engaged with the problem. It hurts when it lets you avoid the problem.

A 2025 review on AI and learning makes a similar point. If AI only substitutes for an old method without changing the depth of thinking, the learning benefit is limited. If the learner uses AI to generate complete work instead of revising, questioning, and testing, the tool can undermine critical thinking.

So the question is not whether AI improves learning.

The question is whether AI increases your reps or replaces them.

What changed for me

The biggest change in my own learning has been the speed of recovery.

Before AI, getting stuck meant searching, reading docs, trying five answers that almost matched my situation, and hoping one worked.

Now I can stay inside the problem longer.

I can paste the error, explain what I tried, ask for the likely cause, ask what assumptions the answer depends on, then test the next step locally.

That does not remove the need to think. It actually makes my thinking more visible.

I have to describe the problem clearly. I have to judge the answer. I have to verify the fix. I have to decide whether the lesson generalizes or only worked once.

That is why the learning sticks.

How to use AI for learning by doing

Use AI like a sparring partner, not an answer machine.

Start with a real task. Not a toy example.

Make the first attempt yourself, even if it is ugly.

Ask the model to explain why your attempt failed.

Ask for three approaches and what each one trades off.

Pick one. Try it.

Ask the model to critique the result.

Then write down the lesson in your own words.

That last step matters. If you cannot explain what changed, you probably borrowed the answer instead of learning the pattern.
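The steps above can be sketched as a loop. Everything here is illustrative, not a real API: `sparring_loop`, `verify`, and `ask` are hypothetical names, and a real model call goes wherever `ask` appears.

```python
def sparring_loop(task, first_attempt, verify, ask, max_rounds=3):
    """Learn-by-doing loop: attempt first, then use the model as a
    sparring partner. `ask` stands in for any model call; `verify` is
    your own check that the result actually works."""
    attempt = first_attempt
    notes = []
    for _ in range(max_rounds):
        if verify(attempt):
            # Last step: the lesson in your own words, not the model's.
            notes.append(f"lesson: what made {task} work was {attempt}")
            return attempt, notes
        # Ask why the attempt failed before asking for a fix.
        notes.append(ask(f"Task: {task}. Attempt: {attempt}. Why did this fail?"))
        # Ask for a concrete alternative, then try it yourself.
        attempt = ask(f"Suggest one concrete next approach for {task}.")
        # Critique the result instead of accepting it blindly.
        notes.append(ask(f"Critique this attempt: {attempt}"))
    return attempt, notes
```

The point of the shape: the model never closes the loop on its own. Your `verify` check and your written lesson are the two steps it cannot do for you.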

The FORGE angle

This is why I push teams toward real workflows instead of generic AI training.

A training session can show the tool. A workflow creates the reps.

Pick one process the team already owns. Give them guardrails. Let them use AI on the messy middle. Capture what works as a reusable skill. Turn that skill into a repeatable workflow.

That is how AI adoption compounds.

Not by watching another demo.

By building enough confidence that the team starts saying, "I can probably figure this out," and then giving them a safe loop to prove it.

Sources

  • Freeman et al., "Active learning increases student performance in science, engineering, and mathematics," PNAS, 2014: https://pmc.ncbi.nlm.nih.gov/articles/PMC4060654/
  • Panadero, "A Review of Self-regulated Learning: Six Models and Four Directions for Research," Frontiers in Psychology, 2017: https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00422/full
  • Chiu et al., "Effects of higher education institutes' artificial intelligence capability on students' self-efficacy, creativity and learning performance," Education and Information Technologies, 2023: https://doi.org/10.1007/s10639-022-11338-4
  • Li, Liu, and Dong, "Generative artificial intelligence-supported programming education: Effects on learning performance, self-efficacy and processes," Australasian Journal of Educational Technology, 2025: https://ajet.org.au/index.php/AJET/article/view/9932
  • Kestin et al., "AI tutoring outperforms in-class active learning," Scientific Reports, 2025: https://www.nature.com/articles/s41598-025-97652-6
  • Sailer et al., "Looking Beyond the Hype: Understanding the Effects of AI on Learning," Educational Psychology Review, 2025: https://link.springer.com/article/10.1007/s10648-025-10020-8
