Your AI steering committee needs power, not just opinions

Most AI steering committees fail because they are designed to discuss, not decide. They become debate clubs that slow down implementation rather than governance bodies that accelerate it. The difference between effective and ineffective committees is not expertise - it is authority and the power to make binding decisions.

Key takeaways

  • Advisory committees slow you down - Without budget control and project veto power, your steering committee becomes an expensive debate club that delays decisions
  • Keep it tiny - Committees of 5-9 members make better decisions than larger groups, with some studies pushing the optimal size down to 3-5 for decision speed
  • Weekly decisions beat monthly strategy - Effective committees meet for 30 minutes weekly to make specific choices, not quarterly to discuss vague possibilities
  • Clear authority boundaries prevent chaos - Define exactly what the committee controls versus what escalates to the full leadership team before you start

You built an AI steering committee. Six months later, nothing shipped.

This plays out the same way every time, and it stopped surprising me a while ago. Smart people. Monthly meetings. Thoughtful discussion. Zero decisions. The committee becomes the place where AI initiatives die in pleasant, well-intentioned conversation.

The problem isn’t who’s in the room. It’s what they’re actually allowed to do.

What steering actually means

A governance survey from Censinet projects that over 60% of enterprises will implement formal AI governance frameworks. That sounds promising until you look at the current state: only 35% have one today, and most of those advisory committees lack actual decision-making power over specific domains.

Most AI steering committees get built as advisory bodies. They discuss things, recommend approaches, provide input to whoever actually decides. Then someone else makes the call, usually someone who wasn’t in the meeting and doesn’t have the context that shaped the recommendation.

Riskonnect’s research found that just 8% of business leaders feel prepared for AI and AI-governance risks. Meanwhile, 63% of breached organizations either lack an AI governance policy entirely or are still developing one. Fragmented authority creates the exact problem you’re trying to solve.

A steering committee without budget control, hiring authority, and project veto power is just a very expensive focus group. Steering means controlling direction. Not suggesting it. Controlling it.

ISO/IEC 42001, the world’s first AI management system standard, defines effective AI governance as requiring clear mandates, roles, responsibilities, and actual decision-making authority over the AI lifecycle. The standard includes 38 distinct controls and follows a plan-do-check-act approach.

For a mid-size company, that breaks down to four specific powers:

Budget allocation. The committee controls the AI budget directly. Not recommends. Controls. If they approve spending on a RAG implementation, finance cuts the check. No secondary approval needed.

Project decisions. The committee can kill projects. Not just suggest killing them. Kill them. They can also greenlight pilots under a specific threshold without asking permission from anyone else.

Vendor and tool selection. When the committee picks a platform or vendor, that’s the decision. Final. Done.

Resource assignment. If the committee says pull three engineers from Feature Team A to work on the AI initiative, those engineers move. Tomorrow.

Without these four powers, you have a book club for AI enthusiasts.

The size trap

The data on optimal committee size is unambiguous: committees of 5-9 members make better decisions than larger groups. Some studies push that down to 3-5 for decision speed.

Two reasons this matters. Communication complexity explodes with size. A 5-person committee has 10 communication paths. A 9-person committee has 36. Small teams decide faster and at lower cost. Meanwhile, only 6% of organizations have a mature AI security strategy. Your committee needs to move faster than the industry average, not slower.
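The communication-path numbers above come from the pairwise "handshake" formula, n(n-1)/2. A quick sketch makes the explosion obvious:

```python
def communication_paths(members: int) -> int:
    """Pairwise communication paths in a group of n people: n*(n-1)/2."""
    return members * (members - 1) // 2

for size in (3, 5, 9, 12):
    print(size, communication_paths(size))
# 3 -> 3, 5 -> 10, 9 -> 36, 12 -> 66
```

Going from five members to nine doesn't double the coordination load; it more than triples it. A 12-person "representative" committee carries 66 paths, over six times the load of a five-person one.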

But mid-size companies panic about representation. Engineering wants a seat. Product wants a seat. Operations, finance, security, compliance all want in. You end up with 12 people who can’t agree on where to order lunch, much less whether to restructure the company around AI. I’ve sat in those rooms, and the frustration of watching consensus-seeking kill every good idea is something I genuinely can’t shake.

For a 50-500 employee company, five people is the right number:

Chair: CEO or COO. Non-negotiable. Authority flows from the top. If your CEO or COO won’t chair this, you’re already signaling that AI isn’t actually a priority.

Operations leader. Someone who understands current workflows and can spot where AI creates real value versus theoretical value. This person’s job is to kill ideas that sound clever but don’t connect to actual operational problems.

Finance with budget authority. Not a finance analyst who has to check with the CFO. Someone who can approve spending up to your committee threshold on the spot.

Technical person who evaluates feasibility. CTO if you have one. Otherwise your most senior technical lead who understands what’s possible versus what’s vendor fantasy. This person saves you from committing to six-month projects that aren’t physically achievable.

Subject matter expert, rotating. For each major initiative, bring in the person who owns that domain. Replacing customer service workflows? The head of customer service sits in. This seat changes based on what you’re building.

Five people. No exceptions. If you think you need more, you’re confusing representation with decision-making.

Operating rhythm that doesn’t waste time

Monthly strategy sessions are where ambition becomes PowerPoint. A 2025 governance survey found that while over half of companies report having formal AI policy frameworks, fewer than 20% have implemented model cards, dedicated incident reporting tools, or regular red teaming exercises.

Strategy without operations is just decoration.

Here is the rhythm that works, drawn from what I’ve seen succeed at a mid-size company that got this right.

The most effective structure I’ve encountered uses three distinct cadences rather than trying to cram everything into one meeting type. A steering committee (CEO, CIO, IT director, CFO, plus one or two rotating department heads) meets quarterly for strategy and budget decisions. A working group of operational leads meets monthly to coordinate across departments and flag blockers. And department AI champions operate on two-week sprint cycles, testing use cases in real workflows and bringing results back to the working group.

The steering committee’s primary job in this model is removing obstacles that individual champions cannot remove on their own. It is not approving every use case. That distinction matters enormously. When the committee tries to approve everything, it becomes the bottleneck. When it focuses on clearing paths and allocating resources, the actual work moves faster.

One thing that surprised me was the value of a separate ethics sub-committee. This was a small group (legal, HR, one technical person) that handled questions the steering committee wasn’t equipped to debate: AI use in hiring decisions, customer-facing applications where bias risk was real, and regulatory gray areas. Keeping those conversations out of the main committee meetings kept the main meetings focused on execution.

Here’s how the weekly and monthly rhythms break down:

Weekly 30-minute decision meetings. Tuesdays at 9 AM. Same time every week. No slides. Someone brings three decisions that need making. Committee makes them. Meeting ends.

Fast-track approval for small pilots. Anything under a defined threshold, say equivalent to one engineer-month of work, the technical member can approve alone between meetings. They report it the following week. This prevents the committee from becoming a bottleneck on smaller things.

Quarterly strategy reviews. Four times a year, 90 minutes. Review what shipped, what failed, what you learned. Adjust the roadmap. These are the only meetings where slides are allowed.

Monthly metrics check. Ten minutes of the weekly meeting. Someone shows the numbers. Time-to-deployment for approved projects. Pilot success rate. Adoption metrics for what shipped. No discussion unless something’s broken.

NIST AI RMF adoption research shows timelines ranging from 3-6 months for foundational adoption to 12-24 months for organization-wide integration. You can’t afford to spend that runway in meetings.

Authority boundaries and escalation

This is probably the most skipped part of committee design, which is strange given how much it matters. Before your first meeting, write down exactly what the committee controls versus what goes to full leadership.

Recent regulatory pressure is real: the EU AI Act is now fully applicable, new CCPA automated decision-making rules have kicked in, and 20+ U.S. state privacy laws are in effect. Do you really want to discover what your committee can and can’t decide during a security incident? Write it down before you need it.

Committee decides without escalation:

  • Pilot projects under your budget threshold
  • Tool and vendor selection for approved initiatives
  • Resource allocation within the AI budget
  • Project cancellation for initiatives that aren’t working
  • Timeline adjustments for active projects

Committee recommends, leadership decides:

  • AI strategy and multi-year roadmap
  • Budget allocation above the committee threshold
  • Changes to company-wide AI policies
  • Decisions that affect more than one major department
  • Anything requiring board approval

Automatic escalation triggers:

  • Security issues that affect customer data
  • Regulatory compliance questions
  • Projects that would affect revenue by more than a defined percentage
  • Anything that requires changing employment terms

Write these down. Share them with the whole company. When someone tries to route around the committee or escalate something that’s in the committee’s domain, you point to the document and say no.

This one step prevents the passive-aggressive escalation game where people go above the committee whenever they don’t like a decision.
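One way to make the written boundaries unambiguous is to encode them as a simple routing function. This is a hypothetical sketch, not something from the article; the trigger names and the idea of a single budget threshold are illustrative assumptions you would replace with your own governance document:

```python
from enum import Enum

class Route(Enum):
    COMMITTEE = "committee decides without escalation"
    LEADERSHIP = "committee recommends, leadership decides"
    ESCALATE = "automatic escalation"

def route_decision(cost: float, budget_threshold: float,
                   touches_customer_data: bool = False,
                   regulatory_question: bool = False,
                   cross_department: bool = False) -> Route:
    """Illustrative decision routing; tune triggers to your own escalation doc."""
    # Automatic escalation triggers always win, regardless of cost.
    if touches_customer_data or regulatory_question:
        return Route.ESCALATE
    # Above-threshold spend or multi-department impact goes to leadership.
    if cost > budget_threshold or cross_department:
        return Route.LEADERSHIP
    # Everything else stays inside the committee's authority.
    return Route.COMMITTEE
```

For example, a $20k pilot against a $50k threshold routes to the committee, while the same pilot with a regulatory question escalates automatically. The point isn't the code; it's that if you can't express your boundaries this mechanically, they aren't really written down.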

Success metrics and what comes next

IBM’s 2025 breach research found that 13% of organizations reported breaches of AI models or applications, and 97% of those lacked proper AI access controls. Governance failures show up in numbers, so track your committee in numbers too. Three metrics matter most for mid-size companies:

Decision speed. Track time from “committee receives question” to “decision made.” Target: same meeting for straightforward choices, one week maximum for complex ones. If you’re averaging more than two weeks, the committee is too big or lacks authority.

Implementation rate. What percentage of approved pilots actually ship? Fewer than 70% suggests your technical feasibility check is broken. More than 95% suggests you’re being too conservative with approvals. Track this monthly.

Project ROI. For completed initiatives, measure actual impact against projected impact. Don’t just track the successes. Track everything. Failed pilots teach you what doesn’t work, and that knowledge has real value. If your hit rate falls below 40%, something’s wrong with how you evaluate opportunities.
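All three metrics fall out of one small record per pilot. A minimal sketch, with hypothetical field names and made-up sample data purely for illustration:

```python
from datetime import date

# Hypothetical pilot log; the field names and values are illustrative.
pilots = [
    {"asked": date(2025, 3, 4),  "decided": date(2025, 3, 4),  "shipped": True,  "projected_roi": 1.0, "actual_roi": 1.3},
    {"asked": date(2025, 3, 11), "decided": date(2025, 3, 18), "shipped": True,  "projected_roi": 2.0, "actual_roi": 0.4},
    {"asked": date(2025, 4, 1),  "decided": date(2025, 4, 8),  "shipped": False, "projected_roi": 1.5, "actual_roi": 0.0},
]

# Decision speed: average days from question received to decision made.
avg_decision_days = sum((p["decided"] - p["asked"]).days for p in pilots) / len(pilots)

# Implementation rate: share of approved pilots that actually shipped.
implementation_rate = sum(p["shipped"] for p in pilots) / len(pilots)

# Hit rate: share of initiatives whose actual impact met or beat projection.
hit_rate = sum(p["actual_roi"] >= p["projected_roi"] for p in pilots) / len(pilots)
```

In this toy data, decisions average under five days (healthy), two of three pilots shipped (borderline against the 70% floor), and one of three beat projection (just under the 40% alarm line). Ten minutes a month with a table like this beats an hour of opinions.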

One meta-metric matters more than these three combined: is the committee accelerating AI adoption or slowing it down? Ask people outside the committee. If teams are routing around it or delaying proposals because they dread the process, you’ve built the wrong thing.

Your first committee won’t be your last. Early stage, you’re approving lots of small pilots and learning fast. Speed and learning matter more than perfection. Growing stage, patterns have emerged and the committee sets standards rather than approving every project. Teams self-approve anything that fits established patterns. The committee only reviews novel approaches. Mature stage, AI is integrated into normal operations. The committee shrinks or disbands. The powers that used to be centralized distribute to functional leaders who own their domains.

ISACA’s analysis of 2025 AI incidents found that the biggest AI failures were organizational, not technical. Weak controls. Unclear ownership. Misplaced trust. The evolution from stage one to stage three typically takes 18-36 months for most mid-size organizations. Plan for it. Don’t build permanent bureaucracy.

The goal isn’t a steering committee forever. The goal is to accelerate through the phase where you need one.

Build it with real power or don’t build it at all.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.