How to pick and run a lighthouse site for your AI rollout

Most companies try to deploy AI everywhere at once and wonder why nothing sticks. A lighthouse site lets you prove value with one team first, build a playbook, then expand with confidence instead of chaos.

Key takeaways

  • One team goes first - A lighthouse site is a single location or team that proves AI value before you spend budget rolling out company-wide
  • Pick willing, not brilliant - Select a team with enthusiastic leadership and representative workflows, not the most technically sophisticated group you have
  • Four to six weeks is enough - A focused lighthouse sprint produces real data on adoption, time savings, and quality changes without dragging into pilot purgatory
  • Package what you learn - The lighthouse only matters if you document what worked, what failed, and what surprised you in a format the next wave of teams can actually use

Here is something that drives me slightly crazy. A company decides they want AI. They buy licenses. They announce a rollout. Every department gets access on the same Tuesday. By Friday, IT is drowning in tickets, managers are confused, and the people who were excited last week are now telling everyone it doesn’t work.

This is how SaaS rollouts have worked for 20 years. Email, CRM, project management tools. You flip the switch, everyone gets it, some people figure it out, most muddle through. AI is not like that. It cannot be treated like a new version of Slack.

MIT research found that the overwhelming majority of generative AI implementations are falling short of expectations. And CIO reporting indicates the vast majority of AI proofs of concept never make it to production. These numbers aren’t about bad technology. They’re about bad deployment strategy.

The companies that get AI right don’t deploy everywhere at once. They pick one site, one team, one set of workflows. They go deep instead of wide. They learn before they scale.

That first site is your lighthouse.

What a lighthouse site actually is

The term comes from McKinsey’s Global Lighthouse Network, which originally studied manufacturing facilities that demonstrated how to scale advanced technologies beyond the pilot phase. The concept translates directly to any AI deployment.

A lighthouse site is a single team or location that goes first. Not as a beta test or a science experiment. As a real deployment with real workflows, real measurement, and real stakes. The purpose isn’t just to see if the technology works. It’s to build a complete picture of what adoption looks like: the wins, the friction, the workarounds nobody predicted, the training gaps you didn’t know existed.

Think of it this way. You wouldn’t open 50 restaurants on the same day if you’d never run one before. You’d open one, learn everything about operations, fix the problems, document the recipes, then open the second. Then the tenth. Then the fiftieth.

AI rollouts work the same way. The lighthouse generates proof, process, and momentum. Without it, you’re guessing. With it, you’re scaling from evidence.

How to choose the right team

This is where most companies get the selection wrong. The instinct is to pick your most tech-savvy team. The developers. The data analysts. The people who already have three AI side projects running.

Don’t.

Your lighthouse needs to be representative, not exceptional. If your best engineering team gets great results with AI, that tells you almost nothing about what will happen when you roll it out to accounting, operations, or HR. You need a team whose daily reality resembles most of the organization.

Here are the criteria that actually matter:

Willing leadership. The team lead needs to genuinely want this. Not because they were voluntold, but because they see the potential and are ready to invest their own time in making it work. Skeptical leadership will doom your lighthouse before it starts, and forced participation produces compliance theater.

Manageable size. Fifteen to forty people is the sweet spot. Big enough to generate meaningful data about adoption patterns. Small enough that you can actually observe what’s happening, provide hands-on support, and adjust quickly when something isn’t working.

Representative workflows. The team should do work that resembles what most of your organization does. If you’re a services company, pick a client delivery team. If you’re in manufacturing, pick a shift at a plant that runs standard processes. The point is that lessons from this team need to transfer.

Measurable outputs. You need a team that produces things you can count and compare: reports written, tickets resolved, proposals drafted, analyses completed. Without baseline metrics, your lighthouse produces stories instead of data. Stories don’t survive the budget meeting.

HBR’s research on successful AI pilots backs this up. The organizations that got real results from pilots weren’t the ones with the fanciest technology. They were the ones that matched AI to specific business problems where outcomes were clearly measurable.

The four-to-six-week sprint

Here’s where the AI transformation timeline question gets practical. Your lighthouse doesn’t need six months. Four to six weeks of focused deployment produces enough data to make a real decision about scaling.

Week one: baseline and setup. Measure everything before you change anything. How long do tasks take? What’s the error rate? How many steps does each workflow require? How do people feel about their work? These numbers are your before picture. Without them, every result from the lighthouse is just an opinion.

Install the tools. Configure access. But don’t train anyone yet. Let them poke around on their own for a day or two first. You’ll learn something from watching which features people gravitate toward naturally.
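
If you want that before picture to hold up at the end of the sprint, log it in a structure you can compare against week six. Below is a minimal sketch in Python, assuming a hypothetical CSV with one row per completed task; the file name, columns, and workflow names are illustrative assumptions, and a shared spreadsheet serves the same purpose.

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical week-one log: one row per completed task.
# Assumed columns: workflow, minutes_taken, revision_rounds
def baseline_summary(path):
    timings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            timings[row["workflow"]].append(
                (float(row["minutes_taken"]), int(row["revision_rounds"]))
            )
    # Average time and rework per workflow: this is the "before picture"
    # you will compare against at the end of the sprint.
    return {
        workflow: {
            "avg_minutes": round(mean(t for t, _ in vals), 1),
            "avg_revisions": round(mean(r for _, r in vals), 2),
            "tasks_observed": len(vals),
        }
        for workflow, vals in timings.items()
    }

# Example: baseline_summary("week1_baseline.csv") might return
# {"status_report": {"avg_minutes": 42.0, "avg_revisions": 1.3, "tasks_observed": 18}}
```

The format matters less than the discipline: whatever you record in week one, record the same way in week six so the comparison is apples to apples.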

Weeks two and three: guided adoption. Now you train. Not a four-hour workshop crammed into one afternoon. Short, targeted sessions tied to specific workflows. “Here’s how to use AI to draft your weekly status reports.” “Here’s how to summarize these customer call transcripts.” Concrete use cases, not abstract capability demos.

This is when the real friction appears. Someone’s workflow doesn’t map cleanly to the tool. The AI output needs heavy editing for one type of task but works perfectly for another. These observations are gold. Write them all down.

Weeks four through six: independent operation. Take off the training wheels. Let the team operate with AI as part of their normal workflow. Observe without intervening. Track the metrics. Collect feedback weekly: short surveys and quick conversations, not formal interviews that make people perform.

By the end of week six, you know things that no amount of vendor demos or analyst reports could tell you. You know which use cases genuinely save time. You know which ones people abandoned after day three. You know whether quality improved, stayed the same, or got worse. You know what the real adoption curve looks like.

What to measure and why it matters

The temptation is to measure everything. Resist it. You need four categories of data from your lighthouse, and tracking more than that creates noise.

Adoption rates. What percentage of the team uses AI tools daily? Weekly? Not at all? Track this over time, not as a single snapshot. A tool that starts at 80% adoption but drops to 30% by week four tells a very different story than one that starts at 40% and climbs to 75%.

Time impact. Pick three to five specific workflows and measure the time difference. Don’t trust self-reporting. Use actual timestamps where possible, or have someone observe and record. McKinsey’s lighthouse research found that successful implementations averaged improvements exceeding 50% in conversion cost, cycle times, and defect rates. Your mileage will vary, but you need hard numbers either way.

Quality changes. Are the outputs better, worse, or different? This one is harder to measure but just as important. If AI helps people write reports twice as fast but the reports need twice as much editing from their manager, you haven’t saved time. You’ve moved it.

User sentiment. How do people actually feel about working with AI? Not whether they think AI is “the future” in the abstract. Whether they find it useful today, in their actual job, for the tasks they do. People will tell you the truth in week five that they won’t tell you in week one. Give it time.

The pattern I’ve seen too often is organizations celebrating adoption numbers while ignoring sentiment data that screams trouble. High adoption means nothing if people are using the tool because they were told to, not because it helps.
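
To make the adoption and time-impact tracking above concrete, here is a minimal sketch, assuming you can export usage events from whatever AI tool you deploy. The event format, user names, and team size are hypothetical; check what your tool actually logs before building on it.

```python
# Hypothetical usage export: (user, week_number) pairs from the tool's logs.
events = [("ana", 2), ("ben", 2), ("ana", 3), ("cai", 3), ("ana", 4), ("cai", 4)]
TEAM_SIZE = 25  # lighthouse team headcount

def weekly_adoption(events, team_size):
    """Percent of the team active each week (track the trend, not a snapshot)."""
    active = {}
    for user, week in events:
        active.setdefault(week, set()).add(user)
    return {week: round(100 * len(users) / team_size, 1)
            for week, users in sorted(active.items())}

def minutes_saved(before, after):
    """Average minutes saved per task for one workflow, from observed timings."""
    return round(sum(before) / len(before) - sum(after) / len(after), 1)

print(weekly_adoption(events, TEAM_SIZE))        # {2: 8.0, 3: 8.0, 4: 8.0}
print(minutes_saved([42, 38, 45], [28, 25, 30])) # 14.0
```

The adoption function only earns its keep when you run it week after week; a single percentage tells you almost nothing about whether the tool is sticking.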

But measurement alone isn’t enough. Here is where most lighthouse efforts waste their own results. The team gets good outcomes. Leadership hears about it. Everyone gets excited. Then the next team starts from scratch because nobody bothered to document what actually happened.

Your lighthouse needs to produce a playbook. Not a glossy slide deck for the board. A practical document that the next team can follow. It should include:

What worked and why. Be specific. “Summarizing customer calls using AI saved an average of 22 minutes per call” is useful. “AI was helpful for many tasks” is not.

What failed and why. This is more valuable than the wins. If a particular use case flopped, the next team needs to know before they waste two weeks trying the same thing. Honest failure documentation is rare and extremely useful.

Workarounds that emerged. People will invent processes you never anticipated. Someone figured out that feeding the AI a template before asking it to draft a response cut editing time in half. Someone else discovered that a certain type of query always produces terrible results. These tricks and traps need to be captured.

Training recommendations. What training sequence worked? What fell flat? How long should onboarding be? What questions came up repeatedly? Build this into a repeatable program, not a one-off event.

This playbook becomes the foundation for the adoption flywheel. When team two sees a real playbook from team one, with real metrics and real failures documented alongside the successes, credibility transfers. They don’t feel like guinea pigs. They feel like the second wave of something proven.

From lighthouse to enterprise

The hardest part isn’t the lighthouse. It’s what comes after. Scaling AI to the full enterprise requires a deliberate progression that most organizations try to skip.

Crawl. Your lighthouse is the crawl phase. One team, full support, heavy observation. You’re learning whether this works at all and building the initial playbook.

Walk. Expand to three to five teams simultaneously. These teams use the playbook from the lighthouse but get less hands-on support. This phase tests whether your learnings transfer. It also reveals which parts of the playbook were specific to the lighthouse team and which are universal. Expect surprises. Each new team will surface issues the first team never encountered.

Run. Roll out to entire departments or business units. By now you’ve refined the playbook through multiple iterations. Training is systematized. Support is documented. Success metrics are established. You’re no longer experimenting; you’re executing.

Fly. AI becomes part of how the organization works, not a separate initiative. New employees learn AI tools during onboarding. Workflows assume AI assistance. This phase takes the longest and never really ends. It’s continuous improvement, not a finish line.

Gartner’s research on scaling AI points to a critical finding here: nearly two-thirds of organizations remain stuck in pilot mode, unable to scale beyond their initial experiments. The crawl-walk-run-fly progression exists specifically to prevent that trap. Each phase produces the evidence and infrastructure that makes the next phase possible.

The worst thing you can do is jump from crawl to fly. I get frustrated when I see companies run a successful lighthouse and then immediately announce an “enterprise-wide rollout” the following quarter. That’s not confidence. That’s impatience disguised as ambition. And it usually ends with the same dismal failure rate that organizations were trying to avoid in the first place.

Pick your lighthouse carefully. Run it honestly. Document everything. Then let the results, not the enthusiasm, determine your pace.

Worth discussing for your situation? Reach out.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.