How to standardize on one AI vendor without your team going around you

Harmonic Security analysed 22.4 million AI prompts across enterprises and found 665 distinct tools in use. ChatGPT alone caused 71.2% of data exposures. Standardizing on one vendor is not about picking favourites. It is about making the approved option so good that nobody bothers looking elsewhere.

Key takeaways

  • 665 AI tools in the wild - Harmonic Security found hundreds of distinct GenAI tools running across enterprise environments, most without IT knowledge
  • Blocking alone fails - Zscaler data shows AI usage surged 36x year-over-year even as 60% was actively blocked. Supply beats restriction
  • SSO is the real control plane - When every AI interaction flows through your identity provider, offboarding kills access instantly and audit trails write themselves
  • Start with visibility, not enforcement - You can't standardize what you can't see. Discovery tools like CrowdStrike AIDR find 1,800+ AI apps on endpoints before you block anything

Every mid-size company I advise has the same conversation at some point. The CTO wants to standardize on Claude. The VP of Sales already bought ChatGPT Team seats. Marketing is using Jasper. Someone in finance found a PDF tool powered by an AI model nobody has heard of. Legal is panicking about all of it.

The question is always the same: “How do we get everyone on one platform?”

The answer isn’t just picking a vendor. It’s building a system where the approved option is so good, so accessible, and so deeply embedded in the workflow that going around it feels like more effort than using it. And then making sure the technical controls catch anyone who tries anyway.

The 665-tool problem nobody sees coming

Harmonic Security analysed 22.4 million prompts flowing through enterprise environments, and the findings are genuinely alarming. They found 665 distinct GenAI tools in active use across their client base. Not 6. Not 60. Six hundred and sixty-five.

Most IT teams I talk to guess their exposure at maybe 5 to 10 tools. The reality is up to two orders of magnitude worse.

The data breakdown is painful. ChatGPT accounted for 43.9% of all prompts but caused 71.2% of all sensitive data exposures. The disproportionate risk comes from ChatGPT being the default. It’s what people know. It’s what they reach for. And 16.9% of those sensitive exposures, about 98,000 instances in the dataset, happened on personal free-tier accounts that are completely invisible to corporate IT.

What was leaking? Source code at 30%. Legal documents at 22.3%. M&A data at 12.6%. The kind of information that makes your general counsel lose sleep.

IBM’s Cost of a Data Breach report found shadow AI breaches took an average of 247 days to detect. That’s six days longer than traditional breaches. By the time you discover the exposure, the damage has been compounding for eight months.

The thing is, most of this isn’t malicious. Nobody is intentionally exfiltrating source code through ChatGPT. They’re trying to get their work done faster. They’re pasting a code snippet to get a bug fix. They’re uploading a contract to get a summary. The intent is productivity. The result is data exposure. And that distinction matters for how you solve it.

Why vendor standardization beats vendor governance

Some companies try the governance route. Approve five tools. Write policies for each. Train everyone on which tool to use for what. Monitor compliance across all five.

I’ve watched this approach fail at every company that tried it. The governance overhead alone is a nightmare. Five vendor relationships. Five data processing agreements. Five security reviews. Five sets of usage policies. Five training programmes. And still, people use tool number six because their friend recommended it.

Standardizing on one vendor is a different philosophy. It’s not about restricting choice. It’s about making one choice so obviously superior that alternatives feel unnecessary.

The practical difference: governance says “you can use these five tools within these rules.” Standardization says “here’s one excellent tool that does everything you need, and it’s the only door in.” One approach creates a shadow AI prevention challenge with five attack surfaces. The other creates a single, defensible perimeter.

The pattern I’ve seen work is straightforward. Pick one vendor. Make it available to everyone on day one. Connect it to every tool your team already uses. Invest in training. And then enforce compliance through technical controls, not just policy.

Does standardization on one vendor mean you’ll never use another model? No. It means your official, governed, SSO-protected, audit-trailed platform is one vendor. Power users who need specific capabilities from other models can access them through approved API channels with proper logging. But the default, the thing 90% of employees use daily, is one platform.

The technical enforcement stack

Here’s where it gets specific. Once you’ve picked the vendor and made it available, you need technical enforcement. The companies that get this right layer four controls.

Layer 1: DNS and firewall rules. Block the API endpoints and web interfaces for unauthorised AI tools at the network perimeter. The big ones: api.openai.com, api.anthropic.com, claude.ai, chat.openai.com, generativelanguage.googleapis.com, and the long tail of smaller tools. Palo Alto Networks created an “Artificial Intelligence” URL category in their next-gen firewalls with granular sub-categories as of December 2024. This catches web traffic but misses desktop apps.
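
As a concrete starting point, here’s a minimal sketch that generates a BIND response-policy-zone (RPZ) file blocking the endpoints above. The zone boilerplate and domain list are illustrative, not a drop-in config; most resolvers, Pi-hole setups, and NGFWs accept an equivalent blocklist format.

```python
# Minimal sketch: emit a BIND RPZ zone that NXDOMAINs unapproved AI
# endpoints. Adapt the domain list to whichever platforms you did NOT pick.

BLOCKED_AI_DOMAINS = [
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
]

HEADER = """$TTL 300
@ IN SOA localhost. admin.localhost. (1 3600 600 86400 300)
  IN NS  localhost.
"""

def rpz_zone(domains: list[str]) -> str:
    """Build an RPZ zone body: 'CNAME .' means respond NXDOMAIN."""
    lines = [HEADER]
    for d in domains:
        lines.append(f"{d} IN CNAME .")    # block the exact hostname
        lines.append(f"*.{d} IN CNAME .")  # and every subdomain under it
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(rpz_zone(BLOCKED_AI_DOMAINS))
```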

Layer 2: Cloud access security. Zscaler’s data shows AI/ML usage surged 36x year-over-year while 60% was actively blocked. The surge tells you blocking alone isn’t enough. But CASB tools give you three modes worth knowing: Block (financial services usually), Caution (coaching popup that logs but allows), and Isolate (browser isolation that prevents copy-paste of sensitive data). Caution mode is underrated. It captures the intent without creating the resentment that drives people to personal devices.
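
To make the three dispositions concrete, here’s a toy sketch of how a CASB-style policy decision might be expressed. The enum names, policy table, and log format are all illustrative — this is not any vendor’s API, just the shape of the decision.

```python
# Toy sketch of the three CASB dispositions described above. Real CASBs
# enforce this in-line at the proxy; this just models the decision + log.

from enum import Enum
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class Mode(Enum):
    BLOCK = "block"      # hard deny (the financial-services default)
    CAUTION = "caution"  # coaching popup: allow, but log the event
    ISOLATE = "isolate"  # browser isolation: no copy-paste of sensitive data

# Hypothetical per-destination policy for AI-categorised hosts
POLICY = {"chat.openai.com": Mode.CAUTION, "claude.ai": Mode.ISOLATE}

def disposition(user: str, host: str) -> Mode:
    """Return the action for this user/destination, default-deny unknowns."""
    mode = POLICY.get(host, Mode.BLOCK)
    logging.info("user=%s host=%s mode=%s", user, host, mode.value)
    return mode
```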

Layer 3: Endpoint detection. This is the newest layer and the one most companies miss. CrowdStrike’s AI Data Risk Detection extends beyond network traffic to desktop applications. It discovers 1,800+ AI apps on endpoints including ChatGPT desktop, Claude Desktop, Cursor, IDE extensions, and even MCP servers. The kicker: it captures full prompt content from the endpoint itself, bypassing HTTPS encryption that blinds network-level tools.

Layer 4: DLP tuned for AI patterns. Traditional data loss prevention looks for file transfers and structured data. AI prompts are unstructured text. Harmonic Security built 21 purpose-built small language models specifically for real-time prompt inspection, achieving 96% fewer false alerts than standard DLP and about 75% cost savings over traditional approaches. The key insight: you need DLP that understands prompts, not just payloads.
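
To make “DLP that understands prompts” concrete, here’s a deliberately crude sketch of prompt-level inspection. A regex pass is nowhere near Harmonic’s purpose-built language models, and the patterns below are illustrative only — but it shows the shape: inspect the prompt text itself before it leaves, not just file transfers.

```python
# Crude sketch of prompt-level (not file-level) DLP. The categories mirror
# the leak data above: source code, legal documents, credentials.

import re

PATTERNS = {
    "source_code": re.compile(r"\bdef |\bclass |\bimport |#include|function\s*\("),
    "secrets":     re.compile(r"\b(api[_-]?key|BEGIN (RSA|EC) PRIVATE KEY)\b", re.I),
    "legal":       re.compile(r"\b(non-disclosure|indemnif\w+|governing law)\b", re.I),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the sensitive-content categories a prompt appears to contain."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

# Example: flag before the prompt is forwarded to the model endpoint
hits = scan_prompt("def rotate_keys():  # api_key = ...")
if hits:
    print(f"Prompt held for review: {hits}")
```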

No single layer catches everything. DNS blocks catch the obvious web traffic. CASB catches cloud-routed access. Endpoint detection catches desktop apps. AI-tuned DLP catches the content that shouldn’t leave regardless of the channel. The companies doing this well run all four. The ones doing it badly run only the first one and think they’re covered.

Making SSO the only door in

This section matters more than the technical enforcement above. I know that sounds backwards, but hear me out.

When every AI tool sits behind your identity provider, three things happen simultaneously. Access is automatic for current employees. Access dies the instant someone is offboarded. And every interaction gets an audit trail tied to a real identity, not an anonymous email signup.

Without SSO, here’s what happens when your VP of Product joins a competitor: their personal Claude account goes with them. Every strategic product conversation, every competitive analysis, every roadmap discussion they had with the AI is on their personal device, in their personal account, completely outside your control. GenAI tools are now the leading channel for corporate-to-personal data exfiltration at 32% of all unauthorised data movement.

The SSO gap in Claude’s pricing tiers is worth knowing about. The Teams plan includes SSO through SAML 2.0 or OIDC. But it doesn’t include SCIM for automated provisioning and deprovisioning. Without SCIM, adding and removing users is a manual process. That means a terminated employee could retain Claude access until someone remembers to remove them manually. The Enterprise plan adds SCIM, but it requires a minimum of 70 users on a 12-month contract.

Even with SCIM, there’s a timing gap. Microsoft Entra pushes SCIM changes on a 40-minute cycle. A terminated employee could technically access Claude for up to 40 minutes after offboarding. For most companies, that’s acceptable. For regulated industries dealing with material non-public information, it might not be.
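
For context, SCIM deprovisioning is one standard call (RFC 7644) that your offboarding workflow can fire directly, rather than waiting out the IdP’s sync cycle. A hedged sketch — the base URL and token below are placeholders, and your vendor’s actual SCIM endpoint will differ.

```python
# Sketch: soft-disable a user via a SCIM 2.0 PATCH (RFC 7644, sec. 3.5.2).
# Fire this from your offboarding workflow to close the sync-cycle gap.

import requests

SCIM_BASE = "https://example.com/scim/v2"  # placeholder, not a real endpoint
TOKEN = "..."                              # pull from your secrets manager

def deactivate_user(user_id: str) -> None:
    """Set active=false for the given SCIM user resource."""
    resp = requests.patch(
        f"{SCIM_BASE}/Users/{user_id}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/scim+json",
        },
        json={
            "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
            "Operations": [{"op": "replace", "path": "active", "value": False}],
        },
        timeout=10,
    )
    resp.raise_for_status()
```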

The point isn’t that SSO is perfect. It’s that SSO is the control plane that makes everything else work. Data privacy design starts with identity. Block the network all you want, but if your employees can sign up for AI tools with their personal email on their personal phone, your network controls are a half-measure that catches maybe half the actual usage.

What to do this week

Not next quarter. This week. Five actions that take less than a day each.

Monday: Discover what’s actually running. Before you can standardize, you need visibility. Run an endpoint audit. Check DNS logs for AI-related domains. If you have CrowdStrike or a similar EDR, turn on AI app discovery. The number will be higher than you expect. Don’t panic. Just know the scope.
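
If you have DNS query logs but no EDR, even a quick script gets you a first inventory. A minimal sketch — the log format (timestamp, client, domain) is an assumption, so adapt the parsing to whatever your resolver actually exports.

```python
# Sketch for Monday's audit: count AI-related domains in a DNS query log.
# Assumes whitespace-separated lines of: timestamp client domain [...]

from collections import Counter

AI_HINTS = ("openai", "anthropic", "claude", "gemini", "perplexity",
            "jasper", "midjourney", "huggingface")

def audit_dns_log(path: str) -> Counter:
    """Tally queried domains that look AI-related."""
    hits: Counter = Counter()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            _, client, domain = parts[:3]
            if any(h in domain.lower() for h in AI_HINTS):
                hits[domain] += 1
    return hits

for domain, count in audit_dns_log("queries.log").most_common(20):
    print(f"{count:6d}  {domain}")
```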

Tuesday: Pick your vendor. If you haven’t already, make the decision. Claude, ChatGPT Enterprise, or Gemini. The choice matters less than making it and committing. The best platform is the one your team will actually use through approved channels. Whichever you pick, ensure it supports SSO, has admin controls for data retention, and offers usage reporting.

Wednesday: Enable SSO. Connect your AI platform to your identity provider. SAML 2.0 or OIDC, whichever your IdP supports. This is the single highest-impact action. It takes a few hours to configure and immediately gives you identity-tied access control and audit logging.

Thursday: Block the alternatives. Add DNS blocks or CASB rules for the AI platforms you didn’t choose. If you picked Claude, block chat.openai.com, api.openai.com, and the Gemini endpoints. If you picked ChatGPT, block claude.ai and api.anthropic.com. Don’t try to block everything on day one. Start with the platforms that pose the most data exposure risk based on your discovery audit.

Friday: Communicate. This is the step everyone skips and the one that determines whether your standardization effort succeeds or gets routed around. Tell your team what you chose, why, and where to get started. Make the onboarding path dead simple. Record a 5-minute walkthrough. Pin it in Slack. The cost of your AI platform is nothing compared to the cost of people ignoring it and using personal accounts instead.

If you skip Friday, you’ll be back at 665 tools within six months. Turns out the technical controls are the easy part. Getting people to actually use the thing you bought? That’s the real work.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.

Want to discuss this for your company?

Contact me