The AI Governance Playbook (Part 1) | The (Not So) Quiet Explosion

We’ve all heard the AI hype -
“If your organization isn’t using AI, you’re already behind.”
“If you’re still doing XYZ the old way, you’ve already lost.”
But those headlines actually understate the intensity with which AI is tearing through the enterprise. I’ve been in IT or IT-adjacent roles for nearly 30 years. I’ve lived through the rise of Ethernet (yes, my first job was fixing Token Ring networks), cloud, mobile, even whatever NFTs were supposed to be. And I’ve never seen CEOs send company-wide mandates with this level of clarity and force -
“Dear Organization,
Start using AI—now.
Sincerely,
The CEO”
Take Marc Benioff’s “hard pivot” to make Salesforce “agent-first.” That bold shift paid off with an 11% stock surge overnight (Fortune, Dec 2024). And the downstream effect? Companies across industries and sizes are racing, sometimes blindly, to implement generative AI with a speed that’s both impressive and, frankly, a little shocking.
We’re seeing it every day, from ChatGPT Enterprise rollouts to LLM-powered agentic workflows to casual Slack debates over deterministic vs. probabilistic systems. One major insurance company we work with gave thousands of employees unrestricted access to build their own custom GPTs, and they did. Thousands of them. Within months. When the company’s own Risk teams asked, “Do you know what your employees are doing in this tool?” the answer was a resounding "well..." That’s jaw-dropping if you’ve worked in or around enterprise software in the last two decades.
And so, as the tools flood in, companies are now reaching for a tried-and-true old warhorse - governance.
More and more of them are asking for help. But here’s the catch - governance means something different to almost everyone, depending on where they sit in the org. Risk wants control and accountability. IT wants control and visibility. Legal wants control and compliance. The business wants control and speed.
From where we sit, working daily with teams across every department, we see the whole picture. And what that picture tells us is clear.
AI makes governance more imperative than ever because we’ve entered an era where mistakes don’t get lost; they get productized...by citizen vibe-coders. Tomorrow’s PR nightmare won’t be a stolen device. It’ll be a user-created GPT guiding users through a process that's remarkably efficient, feels utterly human, and breaches compliance in six guided steps.
This might sound dramatic, but it isn’t. These risks are real, and they come with any AI tool that doesn’t already have guardrails built in. But that doesn’t mean they’re unmanageable. Like any technology shift, AI introduces new failure modes. And to manage any risk, you must first understand it. So let’s take a clear-eyed look at what’s actually at stake.
What’s Actually at Risk
When people hear “AI risk,” they tend to jump straight to hallucinations, deepfakes, or Skynet. That stuff makes headlines (and movies), and sure, hallucinations and deepfakes are real problems. But the risks we’re seeing on the ground are quieter and more systemic. Think unintended consequences, not mad scientist.
These risks don’t come from the future; they come from right now, and from the very things companies are doing in their rush to adopt AI:
• Unmonitored ChatGPT usage - What are your employees actually typing into the tool? Are they sharing sensitive data, pasting in customer records, or running internal workflows through a public interface? If you don’t know, the answer is: yes, they are.
• Custom GPT proliferation - Employees are building their own assistants, but few orgs have visibility into how those GPTs are structured or whether they’re out of date, misleading, or outright risky.
• Lack of oversight - Risk and compliance teams are often unaware of where and how LLMs are being used. The result - a growing shadow stack of tools with no central control.
• Fragmented approaches - With every department racing ahead, orgs are ending up with a sprawling ecosystem of uncoordinated GPTs, agents, workflows, and policy interpretations. There's no unified strategy, just a lot of disconnected momentum.
• Overconfidence - LLMs are persuasive, articulate, and confident even when they’re wrong. Users often trust their outputs blindly. That misplaced trust can lead to critical errors.
• Process bypassing - AI empowers individuals to build high-leverage tools, but in doing so, they can unintentionally strip away safeguards that took decades to establish. Data visibility limits, version control, process governance - hard-won controls rendered inert in the blink of a well-intentioned prompt.
Again, no Skynet. No ill intent. Real risk, though, with real consequences.
But fret not! Now that we've diagnosed the problem, we can prescribe a solution. In subsequent posts, I'll cover what that solution looks like, both in terms of organization and policy, and in terms of infrastructure and operations.
Read the rest of the series.