Coaching vs Courses
There's now twenty years of randomized-evaluation evidence on whether business training helps micro and small enterprises grow. The authoritative review is McKenzie and Woodruff's VoxDevLit on Training Entrepreneurs. Their bottom line is that classroom-based training has modest but real effects on business practices and survival, while more intensive individualized consulting (Bloom et al. in India, Bruhn et al. in Mexico, Iacovone et al. in Colombia) produces substantially larger gains in productivity and profits. The catch is cost: a single consultant working with a small firm runs anywhere from $4,000 (Nigeria) to $30,000+ (Colombia) to $75,000 (Bloom's India experiment) per firm. Consulting works. Consulting at scale, on a microenterprise budget, is the open problem.
Innovations for Poverty Action just published an evidence review that adds 22 recent studies focused on digital tools (apps, video, SMS, WhatsApp, online platforms) for women's economic empowerment. The first section, on digital delivery of business training, sits squarely on this same question: can digital tools close the gap between cheap-but-modest classroom training and effective-but-expensive consulting?
What the studies found
Of the six studies in that first section, only one produced sustained gains in business outcomes:
- Estefan et al., Guatemala (2023): App-based course with 28 short videos, plus three personalized 30-minute video consultations with a business advisor. Result: +6% sales, +23% profits, +5% knowledge, with a 2.3:1 benefit-cost ratio. The researchers explicitly identified the personalized consulting as what drove engagement with the app.
The other five all underperformed on business outcomes:
- Davies et al., Mexico/Guatemala (2024): Live Zoom course, nine two-hour sessions, 61% completion. Real instructors, real engagement, real short-term gains in planning practices (+28%) and sales (+24%). Six months later: gone. Women adopted practices during training and abandoned them afterward.
- Cassidy et al., Ethiopia (2024): Smartphone app with reminders, cash-prize lotteries, and chat rooms designed to replicate networking. 22% completion. No impact on practices, sales, or profits.
- Mehmood, Kenya (2024): 150 SMS messages in narrative format with an interactive chatbot. Engagement collapsed within months. No long-term effect on knowledge, sales, profits, or business survival.
- de Oliveira, Brazil (2024): 15-hour WhatsApp course in video and audio, with reminders and small monetary incentives ($4–8) for adopting specific practices. No impact on practices, survival, profits, or revenue.
- Cole et al., India and Philippines (2022): Weekly recorded phone calls with financial heuristics in a soap-opera narrative. Modest improvements in practice adoption (0.06–0.12 SD); the study was underpowered to detect profit effects.
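As a side note, "underpowered" can be made concrete with a back-of-the-envelope calculation: the smallest standardized effect a two-arm trial can reliably detect shrinks with the square root of the per-arm sample size. The sample sizes in the sketch below are illustrative assumptions, not figures from Cole et al.

```python
# Illustrative minimum detectable effect (MDE) for a two-arm trial with equal arms.
# Sample sizes are assumptions for illustration, not figures from Cole et al.
from scipy.stats import norm

def mde_standardized(n_per_arm, alpha=0.05, power=0.80):
    """MDE in standard-deviation units for a two-sided difference-in-means test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96
    z_power = norm.ppf(power)          # ~0.84
    return (z_alpha + z_power) * (2 / n_per_arm) ** 0.5

for n in (200, 500, 1500):
    print(f"n = {n:>4} per arm -> MDE ~ {mde_standardized(n):.2f} SD")
# n =  200 per arm -> MDE ~ 0.28 SD
# n =  500 per arm -> MDE ~ 0.18 SD
# n = 1500 per arm -> MDE ~ 0.10 SD
```

Because profits vary enormously across microenterprises, even a meaningful absolute gain translates into a small standardized effect, so a sample that is adequate for practice-adoption measures can still be underpowered for profits.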
A quick reading might conclude that "digital business training mostly doesn't work." But there are clear patterns here, and they are consistent with the bigger picture from McKenzie and Woodruff. Estefan worked. Davies worked, briefly. Cole moved practices for the lowest-skilled participants. Why? Look at what Estefan had that the others didn't: a real human responding to a specific entrepreneur's specific situation, three times over the course of the program. The course content itself was identical for every participant. What varied, and what the researchers identified as the active ingredient, was the personalized layer. Not the videos, not the WhatsApp reminders, not the digital vouchers for chicken inventory. The advisor.
Now look at Davies, which is the most instructive failure case. Live human instructors, two-hour sessions, 61% completion. By digital training standards, that's terrific engagement. Women genuinely learned. They adopted practices during the training. And then they stopped. The researchers themselves note that women "abandoned them in the long-term" and flagged that the marketing content "was ineffective and could be improved by providing more specific actionable steps." Generic content delivered to a group, on a fixed nine-session schedule, didn't carry over once the schedule ended. Live humans weren't enough. The delivery was not personalized (same content, group setting) and not sustained (course ends, you're on your own).
The other failures fit the same shape. Cassidy's app tried to substitute engagement features (reminders, lotteries, chat rooms) for a relationship and got 22% completion. Mehmood's SMS course tried the same with narrative quality and chatbot interactivity, and engagement decayed. De Oliveira tried cash incentives, paying entrepreneurs to adopt specific practices, and it didn't work because the prescribed practice wasn't necessarily the right next step for each business.
Generic reminders, lotteries, narrative formats, peer chat rooms, and cash-for-behavior all share something: they're attempts to engineer engagement with content that wasn't built for the individual receiving it.
Coaching is Personalized and Sustained
My read of these results is that the active ingredient in successful business training, digital or otherwise, isn't really the digital part, the curriculum part, or even the human part. It's:
- Personalization: the support adjusts to this entrepreneur's specific situation, priorities, and bottlenecks.
- Sustained engagement: something pulls the entrepreneur back into action over time, rather than ending when the curriculum does.
This is exactly what makes consulting more effective than classroom training in the McKenzie-Woodruff synthesis: consultants tailor advice to the specific firm and stay involved over months. Estefan's Guatemala study delivered a light-touch digital version of that mechanism (three personalized advisor sessions woven through an otherwise generic video course) and produced consulting-style results at training-style costs. The other five digital interventions in the IPA review delivered neither personalization nor persistence; they tried shallow substitutes instead: reminders, lotteries, peer chat rooms, narrative formats, cash incentives. None of those substitutes closed the gap.
The reframe matters because it separates the mechanism (personalization plus persistence) from the delivery channel (human consultant, AI system, peer group, hybrid model). The McKenzie-Woodruff evidence tells us the mechanism works; the IPA review tells us the substitutes for it don't. Neither tells us whether the mechanism can be delivered cost-effectively at scale through software.
What this means for MAIA
We've built our service around exactly this mechanism: personalized, sustained engagement delivered through WhatsApp, with an AI coach that holds context across weeks and months, adjusts to each individual's actual situation, and pulls people back into action through a sprint cadence rather than ending when a course does. None of the studies in the IPA review tested an AI-delivered version of what Estefan's human advisors did, and the McKenzie-Woodruff literature doesn't yet have AI-mediated coaching RCTs to consider. Whether AI can deliver the consulting mechanism at a fraction of the cost is genuinely open. But the research is clear about what the high-potential mechanism is, and the design choices that have failed are the ones we've been careful to avoid.
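For concreteness, here is a minimal sketch of how those two properties could be represented in software. Everything in it (the class names, fields, and the weekly cadence) is an illustrative assumption, not a description of MAIA's actual implementation.

```python
# Hypothetical sketch of a personalized, sustained coaching loop.
# All names and fields are illustrative assumptions, not MAIA's actual code.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class EntrepreneurContext:
    name: str
    business: str
    current_bottleneck: str    # personalization: what *this* business is stuck on right now
    sprint_goal: str = ""      # the one concrete action agreed for the current sprint
    history: list[str] = field(default_factory=list)  # prior exchanges, carried across weeks

@dataclass
class Sprint:
    goal: str
    check_in: date             # sustained engagement: there is always a next touchpoint

def next_sprint(ctx: EntrepreneurContext, today: date) -> Sprint:
    """Derive the next action from the entrepreneur's own bottleneck, not a fixed syllabus."""
    ctx.sprint_goal = f"One concrete step on: {ctx.current_bottleneck}"
    return Sprint(goal=ctx.sprint_goal, check_in=today + timedelta(days=7))

def check_in_message(ctx: EntrepreneurContext) -> str:
    """Compose a check-in that references stored context instead of generic content."""
    recent = ctx.history[-1] if ctx.history else "our last conversation"
    return (f"Hi {ctx.name}! Last time we discussed {recent}. "
            f"How did '{ctx.sprint_goal}' go this week?")
```

The contrast with the failed interventions above is that nothing in this loop is broadcast content: the sprint goal comes from the stored bottleneck, and the check-in references what actually happened last week.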
