Your team is producing 500 documents a week with Claude and none of them look like yours
Claude Projects have effectively no character limit on instructions. You can paste an entire 80-page brand guide. But most companies have not done this, so every AI-generated document goes out with default formatting and generic voice. The five-layer brand enforcement stack fixes this systematically without slowing anyone down.

If you remember nothing else:
- Claude Projects with brand voice instructions sit at the highest priority in the instruction hierarchy and have effectively no character limit
- MCP connectors for Canva, Figma, and Brandfetch deliver brand assets directly into Claude conversations without manual uploads
- The governance layer is where most companies fail because nobody audits AI-generated content for brand compliance
Your marketing team used Claude to write a proposal last Tuesday. It was good. Clear, persuasive, well-structured. It also looked like it came from a completely different company. No brand colours. Generic formatting. A tone of voice that read like a Wikipedia article with better punctuation.
Multiply that by everyone in your company using Claude for reports, emails, presentations, and client deliverables. Hundreds of documents a week. None of them recognisably yours.
This is what I’d call prompt entropy. Every ad-hoc AI interaction produces output that’s technically competent but brand-neutral. One person’s Claude writes formal and reserved. Another’s writes chatty and informal. The sales deck looks nothing like the support documentation, which looks nothing like the executive brief. Your brand guide exists. Nobody told Claude about it.
The fix isn’t complicated. But it does require thinking about brand enforcement as a system, not a one-off configuration.
The brand dilution nobody planned for
Here’s what happened at most companies. Someone in IT provisioned Claude Team or Enterprise seats. Everyone got access. People started using it immediately because it’s genuinely useful. And nobody, at any point in this process, thought about what the AI’s output should look and sound like.
That’s not a criticism. It’s how every new tool gets adopted. Fast, organic, ungoverned.
The problem is that AI generates far more content than humans do. A marketing team of five people producing five documents a week manually is now producing fifty. The volume multiplier means brand inconsistency scales at the same rate as productivity gains. You got 10x the output and 10x the brand dilution.
TELUS, the Canadian telecom with 57,000 employees, built their Fuel iX platform partly to solve this. Over 13,000 custom AI solutions across the company, with brand guidelines baked into the platform itself. They reported 500,000+ hours saved. But the brand enforcement piece was what prevented those hours from producing a disconnected mess.
Most mid-size companies don’t need a bespoke platform. They need a systematic approach using the tools Claude already provides.
The five-layer enforcement stack
Look at how several companies handle this and a clear architecture emerges. Five layers, each solving a different part of the problem, each building on the layer below it.
Layer 1: Foundation. Claude Projects with brand voice instructions and your complete brand guide as knowledge files. This is where 80% of the enforcement happens.
Layer 2: Assets. MCP connectors that deliver brand assets (logos, colours, fonts, design tokens) directly into Claude conversations without manual uploads.
Layer 3: Automation. Skills and plugins that automatically apply brand rules to specific content types: presentations, reports, social posts.
Layer 4: Templates. Branded Artifacts and document templates that produce ready-to-use deliverables in your company’s visual identity.
Layer 5: Governance. Approval workflows, brand scoring tools, and audit trails that catch what slips through the first four layers.
You don’t need all five on day one. Start with Layer 1. It takes an afternoon and covers most use cases. Add layers as your AI usage matures.
Setting up the foundation layer
Claude Projects are the single most underused enterprise feature. And for brand enforcement, they’re the linchpin.
Here’s what most people don’t know: Claude Projects have no practical character limit on their instructions. ChatGPT’s custom instructions cap at 1,500 characters. Claude’s? You can paste your entire 80-page brand guide. Tone of voice document. Writing style rules. Vocabulary preferences. Formatting requirements. All of it, in the Project instructions.
This matters because of how Claude prioritises instructions. The hierarchy is: Project instructions sit at the top, then uploaded knowledge files, then conversation context, then the current message. Brand voice rules in Project instructions take precedence over anything a user types in a conversation. It’s structural enforcement, not just a suggestion.
The setup is straightforward. Create a Project called “Company Brand Voice” or whatever suits you. In the Project instructions, include:
- Your brand voice description: tone dimensions, personality traits, what you sound like and what you don’t
- Writing rules: sentence length preferences, vocabulary restrictions, formality levels by document type
- Three to five examples of on-brand writing and three to five examples of off-brand writing
- Formatting standards: heading styles, bullet formats, how you handle numbers and dates
- Channel variations: how the voice shifts for email versus proposal versus social media
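The sections above can be maintained as a single reusable template rather than hand-edited per team. A minimal Python sketch that assembles the instruction block to paste into a Project; every section body here is a placeholder for your own brand material, not a recommendation:

```python
# Assemble Claude Project instructions from the sections listed above.
# All section contents are illustrative placeholders.
SECTIONS = {
    "Brand voice": "Confident, plain-spoken, never jargon-heavy.",
    "Writing rules": "Short sentences. Avoid 'leverage' and 'utilise'.",
    "On-brand / off-brand examples": "1) ... 2) ... 3) ...",
    "Formatting standards": "Sentence-case headings; dates as DD Month YYYY.",
    "Channel variations": "Email: brisk. Proposals: formal. Social: warm.",
}

def build_instructions(sections: dict[str, str]) -> str:
    """Join titled sections into one instruction block."""
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections.items())

print(build_instructions(SECTIONS))
```

Keeping the template in version control means one edit propagates to every Project that uses it, instead of five teams drifting apart.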
Upload your complete brand guide, style guide, and any tone of voice documents as knowledge files. Claude will reference these automatically.
Then share the Project org-wide with “Can use” permissions. Everyone gets the same voice. Everyone gets the same rules. Claude Projects as a knowledge management system is already a pattern that works well; adding brand enforcement is the same principle applied to output quality.
The limitation worth knowing: there’s no admin ability to force all users into a specific Project. People can still start conversations outside it. That’s what Layers 2 through 5 address.
Connecting your brand assets
Layer 2 is where it gets genuinely clever.
MCP, the Model Context Protocol, lets Claude connect to external tools and data sources. Several MCP servers now exist specifically for brand asset delivery.
Brandfetch has an MCP integration that retrieves logos, brand colours, and visual identity elements by company domain name. Type “get our brand assets” in a Claude conversation and the model pulls your colour palette, logo variations, and typography specs directly. No manual uploads. No hunting through shared drives.
Canva’s MCP Server creates on-brand designs inside Claude conversations. Here’s the bit that matters: it applies your Canva Brand Kit automatically. Colours, fonts, voice rules, all locked to your brand specifications at generation time. The output isn’t generic. It’s yours. And it’s an editable Canva file, not a static image.
Figma’s MCP integration extracts design tokens through the get_variable_defs endpoint: colours, spacing, typography. It can generate Tailwind CSS directly from your design system tokens. If your engineering and design teams use Figma as their source of truth, this connector ensures Claude’s code output matches your design system from the start.
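The token-to-Tailwind step is mechanical once the tokens are extracted. A sketch of the transformation; the input shape below is an assumed, simplified token structure, not Figma's actual variable schema:

```python
# Sketch: turn extracted design-system tokens into a Tailwind
# `theme.extend` fragment. The token dict shape is an assumption.
import json

def tokens_to_tailwind(tokens: dict) -> str:
    """Map colour and spacing tokens to a tailwind.config.js snippet."""
    theme = {
        "colors": dict(tokens.get("colour", {})),
        "spacing": dict(tokens.get("spacing", {})),
    }
    return ("module.exports = { theme: { extend: "
            + json.dumps(theme, indent=2)
            + " } };")

tokens = {
    "colour": {"brand-primary": "#0B3D91"},   # placeholder hex
    "spacing": {"gutter": "24px"},
}
print(tokens_to_tailwind(tokens))
```

The point of the exercise: Claude's generated CSS references named tokens from your design system rather than inventing its own palette.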
For companies wanting full control, building a custom MCP server that serves brand guidelines as tools Claude can call is a weekend project using the TypeScript or Python SDK. The server exposes your brand colours as hex codes, your approved font stack, your logo URLs, your template library. Claude calls these tools automatically when generating branded content.
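The core of such a server is small. A sketch of the tool logic, in plain Python with stdlib only; with the official Python MCP SDK, each function below would be registered as a tool (for example via a decorator) so Claude can call it. All brand values, URLs, and function names here are illustrative placeholders:

```python
# Illustrative brand data a custom MCP server could expose as tools.
# Every value below is a placeholder -- substitute real brand tokens.
BRAND = {
    "colours": {"primary": "#0B3D91", "accent": "#F5A623"},
    "fonts": {"heading": "Inter", "body": "Source Serif Pro"},
    "logo_urls": {
        "light": "https://example.com/logo-light.svg",
        "dark": "https://example.com/logo-dark.svg",
    },
}

def get_brand_colours() -> dict:
    """Tool: return the approved hex codes."""
    return BRAND["colours"]

def get_font_stack() -> list[str]:
    """Tool: return the approved font stack, heading font first."""
    return [BRAND["fonts"]["heading"], BRAND["fonts"]["body"]]

def get_logo_url(variant: str = "light") -> str:
    """Tool: return a logo URL; reject unapproved variants."""
    if variant not in BRAND["logo_urls"]:
        raise ValueError(f"No approved logo variant: {variant}")
    return BRAND["logo_urls"][variant]
```

Because the server is the single source of truth, updating a hex code in one place updates it for every Claude conversation in the company.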
The combination of Brandfetch for identity, Canva for design, and Figma for technical specifications covers the visual brand. Project instructions cover the verbal brand. Together, they handle both sides.
Setting up SharePoint and OneDrive for Claude access properly means your brand assets are always reachable, not buried in a folder structure nobody remembers.
The governance question nobody asks
Layers 1 through 4 are about making branded output the default. Layer 5 is about catching what slips through. And it’s the layer that basically nobody implements.
Here’s what I mean. Your team creates 500 AI-generated documents a week. How many get reviewed for brand compliance? If the answer is “all of them,” you’ve created a bottleneck that kills the productivity gains. If the answer is “none of them,” you’ve accepted that brand consistency is optional.
The answer should be tiered.
High-risk content gets full review. Client proposals. External presentations. Press releases. Anything representing your company to someone who might spend money with you or write about you. These go through human review with brand compliance as an explicit checklist item.
Medium-risk content gets streamlined review. Internal reports. Team communications. Project documentation. A quick scan for obvious brand violations, maybe by a designated person in each department, but not a full approval workflow.
Low-risk content gets light-touch or no review. Internal notes. Quick summaries. Research synthesis that stays within the team. The brand foundation layer handles these automatically.
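The three tiers above can be encoded as a simple router that a workflow tool calls before publishing. The tier logic mirrors the examples in the text; the content-type names themselves are illustrative:

```python
# Route a document to a review tier based on its content type.
# Content types are illustrative; extend the sets for your org.
FULL_REVIEW = {"client_proposal", "external_presentation", "press_release"}
QUICK_SCAN = {"internal_report", "team_comms", "project_docs"}

def review_tier(content_type: str) -> str:
    """Return 'full', 'scan', or 'none' for a given content type."""
    if content_type in FULL_REVIEW:
        return "full"   # human review with a brand checklist
    if content_type in QUICK_SCAN:
        return "scan"   # quick departmental scan
    return "none"       # the foundation layer handles it
```

Anything unrecognised falls through to the lightest tier, which keeps the default fast; you could just as reasonably invert that and default to a scan.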
Brand scoring tools exist for the review process. Typeface’s Brand Agent checks content against your voice profiles in real time. Frontify provides a DAM with AI governance features including logo misuse detection. Writer AI applies voice profiles org-wide with terminology flagging. These tools aren’t cheap, but for companies where brand consistency directly affects revenue, they pay for themselves.
The audit trail piece matters for regulated industries. Track which content was AI-generated, which was human-modified, which was approved, and by whom. EU AI Act requirements and California’s AB 2013 are making this not just good practice but legal necessity for certain content types.
Artifacts in Claude can generate .pptx, .docx, and .pdf files directly with full HTML, CSS, and JavaScript support. Each artifact supports up to 20MB. Templates created as Artifacts with your company’s CSS, fonts, and colour tokens produce consistently branded output every time. Share these templates org-wide and they become the default starting point for every document type.
The bottom line is a bit uncomfortable. Most companies invested significant effort in building their brand. The colours, the voice, the visual identity. All of it carefully crafted and documented. And then they handed their entire team an AI tool that ignores all of it by default.
The fix is not a weekend project, but it’s not a six-month initiative either. Start with a branded Project this afternoon. Add an MCP connector next week. Build the governance framework next month. Each layer compounds on the previous one. Within a quarter, every document your team produces with Claude looks and sounds like it came from your company. Because it did.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.
Want to discuss this for your company?
Contact me