We asked a simple question: who should be responsible for preparing early-career professionals to use AI effectively at work?
In conversation, the answers sounded reasonable. Thoughtful, even.
Everyone talked about shared responsibility. Collaboration between universities, individuals, and employers.
The idea that no single stakeholder could solve the AI readiness challenge alone.
Then we asked the same question in writing and forced people to rank who should be most responsible: universities, employers, or individuals themselves.
The tone flipped.
Both employers and interns ranked employers as the least responsible for AI preparation.
Where they diverged was at the top. Employers pointed to individuals as bearing the greatest responsibility. Interns were split, ranking universities slightly ahead of self-upskilling.
When responsibility for AI readiness is shared in theory but shifted in practice, no one fully owns the problem.
And that’s where the internship training gap begins.
This mismatch helps explain why AI training keeps falling through the cracks. Our data shows exactly where the breakdown appears.
On formal training, there’s consistency:
When AI guidance is structured, documented, and intentional, it lands.
The real issue lies in informal internship training.
Nearly 1 in 5 attempts at informal AI guidance fail to register with interns.
The result:
Employers think they’re providing AI workplace training.
Interns aren’t experiencing it that way.
And the AI skills gap widens.
This isn’t isolated to internship programs. It reflects a broader AI workforce readiness problem.
According to KPMG and the University of Melbourne's 2025 global AI study, and the Digital Education Council's AI in the Workplace 2025 report:
AI adoption is accelerating across industries.
AI readiness training is not.
The AI training standoff persists because each pillar faces structural constraints.
Curriculum reform takes time. Approval processes are slow. By the time a new AI module is introduced, the tools have evolved.
Only 6% of employers believe higher education is moving fast enough to stay relevant, while 80% say progress is too slow or largely absent.
Universities are built for academic depth, not rapid AI tool iteration.
Employers often hire for productivity, not foundational skill-building.
When AI training exists, it rarely amounts to structured AI capability development.
As our internship data shows, informal guidance frequently fails to register.
Self-directed AI upskilling is increasingly expected.
But without structured guidance, it can produce confidence without competence.
Learning how to use AI tools is not the same as learning how to use them responsibly in professional environments.
When no one owns AI readiness in internships, the consequences compound.
Interns enter AI-enabled workplaces without guardrails.
Employers expect fluency but don’t invest in structured onboarding.
Universities teach theory but can’t replicate real-time workplace application.
The KPMG and University of Melbourne research found reports of inappropriate, complacent, and non-transparent AI use in professional settings, with governance lagging behind adoption.
This is not simply an AI skills gap.
It’s a coordination gap across the talent pipeline.
Universities point to limited resources and slow reform cycles.
Employers point to individuals.
Individuals look back to institutions.
The finger-pointing continues while the readiness gap widens.
If the traditional pillars of education and employment cannot independently solve AI workplace readiness, the question becomes:
What fills the void?
The Training Standoff is the central finding from the final paper in our Future-Ready Talent Series.
Download the full report, Bridging the Gap: How Structured Internships Can Solve the AI Workplace Training Crisis, to explore the complete findings.