#9 When Innovation Outpaces Governance: A Harder Conversation About AI’s Future
Why the people building AI are more concerned than the people using it—and what that means for your organization
Key Takeaways:
- Anthropic CEO warns AI risks are being underestimated
- Mass unemployability (not just unemployment) may arrive within 5-10 years
- The frameworks to manage this transition don't exist yet
- Leaders must question workforce assumptions now
Technology has almost always moved faster than our ability to govern it.
We’ve grown comfortable with that lag.
Historically, new tools disrupt labor markets, unsettle institutions, and provoke anxiety—before new norms, rules, and roles eventually emerge.
We tell ourselves that this time will be no different.
But what if that assumption is wrong?
Two Essays Worth Your Time
If you work with AI—or simply live in a world increasingly shaped by it—two recent essays by Dario Amodei deserve serious attention: Machines of Loving Grace and The Adolescence of Technology.
The first is hopeful. It outlines how “powerful AI” could dramatically accelerate progress in health, science, and overall quality of life.
The second is more unsettling. It warns that agentic AI—especially when deployed without adequate guardrails or with malicious intent—poses risks we are systematically underestimating.
What makes these essays stand out is not that they are extreme, but that they are written by someone building the future he’s cautioning us about.
Amodei is not an outside critic. As cofounder and CEO of Anthropic, he is one of the architects of advanced AI systems. His perspective is valuable precisely because it resists both complacency and hysteria.
You don’t have to agree with every conclusion he draws. The point isn’t deference. The point is intellectual seriousness.
The Question We're Not Asking
One question raised implicitly across both essays deserves far more attention than it’s currently receiving:
What happens when agentic AI becomes not just a productivity tool, but a general substitute for human labor?
For twenty years, I’ve posed a related question to leaders in Executive MBA and corporate leadership programs:
What would you do if a new technology enabled you to reduce your workforce by 40% or more?
The answers have varied by industry and geography, but one assumption has been nearly universal: those displaced would find other work—quickly enough for the system to absorb the shock.
That assumption may no longer hold.
From Economic Problem to Civilizational Crisis
What if, within the next five to ten years, large segments of the workforce are not just unemployed, but unemployable—because AI systems are cheaper, faster, and more reliable across a broad range of cognitive and professional tasks?
At that point, this is no longer merely an economic or organizational problem. It becomes a civilizational one.
An economy does more than allocate resources efficiently. It provides livelihoods.
Since the 13th century, the word livelihood has meant “means of keeping alive.” Work has been how most people secure not only income, but dignity, structure, and social belonging.
History is unsparing about what happens when large populations lose their means of keeping alive.
The outcomes are rarely stable. They are rarely humane. And they are never peaceful.
The Frameworks That Don't Exist
These are no longer abstract hypotheticals. The technology is emerging now.
Yet the frameworks we would need to navigate such a transition responsibly are conspicuously absent, and at the current pace there is little sign they will emerge in time.
No frameworks for workforce transition at scale.
No frameworks for organizational redesign.
No frameworks for leadership accountability.
No frameworks for performance management.
No frameworks for how economic value is distributed and dignity is maintained when human labor is no longer central.
What Leaders Must Do Now
We don’t need perfect answers today. But we do need better questions—and far more courage in asking them.
If you are a leader, start close to home:
Ask what your workforce strategy assumes about the future of human labor.
Ask what breaks if those assumptions fail.
The most dangerous response right now is reassurance by analogy—telling ourselves this will unfold like past disruptions simply because that’s what we’re most comfortable believing.
The people building these systems are telling us the stakes are different this time.
It would be wise to listen.
P.S. If you enjoyed this read, consider sharing it with someone who’d benefit.
P.P.S. What is your organization doing to scenario-plan for both the opportunities and challenges of advanced agentic AI? Let me know via email; I read and respond to every message.