AI Governance Playbook (Part 2) | What Good AI Governance Looks Like - Organization and Policy

Mark Baker
September 11, 2025
5 Minutes
Generative AI Services

(If you’re just joining us, Part 1 explored how AI is rushing into the enterprise and the risks it’s already creating. Now we’re going to explore how companies of all sizes can build the systems to manage those risks.)

Let’s be clear - AI governance isn’t just for big tech firms or Fortune 100s. Everything I’m about to lay out applies to organizations of all sizes. But not every organization will need to implement every piece, or at least not all at once.

In a startup, one person might wear five hats. In a mid-sized company, maybe one governance body covers everything. In a large enterprise, these roles and systems might span dozens of teams. It all depends on your scale, your tools, your risk surface, and your velocity.

What follows is a comprehensive governance architecture: a kind of menu. Use it to figure out what’s relevant to your org, your current maturity, and your exposure.

Some parts might be a one-time setup. Others will become part of your ongoing operational rhythm. But if you’re serious about governing AI, this is the landscape you’re working within.

Organizational Governance

Let’s start with people. Every successful governance program starts with clearly defined bodies, roles, and responsibilities. They don’t have to be big, but they do have to be real.

AI Governance Council - Sets strategic direction and defines what "responsible AI" means for the business. The top-level body that sets agendas and priorities for all the others and resolves disputes when needed.

Data Governance Council - Oversees readiness, sharing, and classification of the data AI depends on.

Use Case Review Committee - Evaluates and prioritizes high-cost or high-risk efforts; aligns AI development with enterprise priorities across departments so the same thing isn’t built twice in slightly different ways.

Policy Council - Cross-functional leadership that defines, disseminates, and updates acceptable use policies.

Regulatory & Compliance Council - Interprets external regulations, owns internal compliance strategy.

Local Governance Leads - Embedded reps inside business units; bridge between central policy and actual use.

In smaller orgs, maybe one team wears all of these hats. In larger orgs, these may be fully separate and distributed. What matters is that these roles are assigned, known, and empowered to act.

Governance without people is shelf-ware. Governance without accountability is theater.

Policy and Risk Principles

If the governance bodies are the "who," then policies are the "how," and risk principles are the "why." These are the foundational rules that help every GPT, agent, or workflow stay aligned with your values, your regulatory obligations, and your appetite for risk.

Again - how formal this gets depends on your size and needs. For some orgs, this is a living doc and a quarterly review. For others, it’s a checklist taped to the wall. What matters is clarity, consistency, and ownership.

For example, here are some risk principles we feel apply to every organization, large and small:

• Risk increases with scale, complexity, and unknowns - A GPT used by one person is not the same as a GPT used by a thousand. One simple tool is not the same as a multi-agent workflow. And if you don’t know what it does, what it touches, or who’s using it, you should assume the worst.

• AI is probabilistic and accountability still matters - AI systems don’t give deterministic answers. That makes them feel human. But when a human gives a bad answer, you can assign responsibility, even liability. Try suing an LLM.

• Drift is real - Context drifts. Prompt wording drifts. User behavior and use cases drift. And because AI systems feel natural and lifelike, it’s easy to forget they still need maintenance like any other codebase.

• Signals are early warnings - Usage spikes, sudden tool enablement, risky outputs, user complaints: these are real-time signals of something changing. Not watching for them is like driving by looking in the rear-view mirror.

• Human oversight is non-optional, especially in high-risk domains - When it comes to legal, medical, financial, or customer-facing systems, human review isn’t optional. When AI gets it wrong, it won’t be OpenAI or Anthropic on the hook; it’ll be you.

• Registration is non-negotiable - The tradeoff for democratized power is accountability. If an AI tool isn’t properly registered with details like ownership, business purpose, and risk assessment, accountability can’t be established. (A sketch of what a registration record might capture follows this list.)
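
To make that last principle concrete, here is a minimal sketch of what a registration record might capture, written in Python. The AIToolRegistration structure and its field names are illustrative assumptions, not a prescribed schema; the point is that ownership, purpose, risk, and oversight live somewhere queryable rather than in someone’s head.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: field names and risk tiers are assumptions, not a standard schema.
@dataclass
class AIToolRegistration:
    tool_name: str                  # e.g., "Contract Summarizer GPT"
    owner: str                      # an accountable person, not a team alias
    business_purpose: str           # why this exists, in plain language
    risk_tier: str                  # "low" | "medium" | "high"; rises with scale, complexity, unknowns
    data_touched: list[str] = field(default_factory=list)   # systems and data classes it can reach
    tools_enabled: list[str] = field(default_factory=list)  # e.g., ["file_uploads", "custom_actions"]
    hitl_checkpoint: str = ""       # where a human reviews or overrides, or why no HITL step is needed
    registered_on: date = field(default_factory=date.today)

# A registry can be as simple as a list of these records.
registry = [
    AIToolRegistration(
        tool_name="HR Policy Q&A Assistant",
        owner="jane.doe@example.com",
        business_purpose="Answers employee questions about internal HR policies",
        risk_tier="medium",
        data_touched=["HR policy documents"],
        tools_enabled=["file_uploads"],
        hitl_checkpoint="HR reviews any answer that gets written back to an employee record",
    )
]
```

Whether this lives in a spreadsheet, a database, or a governance platform matters less than the fact that every deployed system has an entry and an owner.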

Once defined, risk principles like these can be codified into policy that defines clear expectations for how AI is built, deployed, and used across every team and every system. A Responsible Use Policy grounded in these principles should define expectations concretely, like the following:

Augment, Don’t Replace - AI systems may not operate autonomously in critical workflows. Each deployment must define explicit human review or override checkpoints, documented in the system’s registration metadata, or document why human-in-the-loop (HITL) oversight isn’t needed.

Trust and Transparency - All AI systems must be registered with an accountable owner, include usage logging, and maintain documented system prompts and tool configurations. Black-box systems are not permitted in production.

Fairness & Ethics - AI systems used in regulated or high-impact domains (e.g., hiring, lending, healthcare) must undergo and document bias review (e.g. source material validation and prompt evaluation) prior to production release.

Security & Misuse Prevention - Use of high-risk tools (e.g., Code Interpreter, File Uploads, Custom Actions) must be explicitly authorized and logged. Violations surfaced through telemetry must trigger predefined remediation actions.

Compliance & Alignment - Before deployment, all AI systems must be reviewed by the appropriate governance body for compliance with regulatory requirements (e.g., HIPAA, SOX, GDPR) and sector-specific policy. Reviews must be documented and auditable.

As you can see, AI policy isn’t just a legal asset. It’s an experience layer. If your people don’t understand it, it’s not working.

So now we’ve covered the structural side of governance: the people, the roles, and the policies that create alignment and accountability. But that structure needs to show up in the real world to have real impact. In the final post, we’ll explore how this happens through the tooling and operational scaffolding that turns governance from theory into reality.

Vision to Value - let's make it happen!