Technology is only a small part of driving the value of AI
DBS Bank made three quarters of a billion dollars from AI last year. Their CEO told a Fortune conference to stop hiring for knowledge and start hiring for attitude. Walmart, Starbucks, JPMorgan, and Caterpillar all arrived at the same conclusion: the technology was the easy part.

The short version
Every major company getting real value from AI says the same thing: the technology was the easy part. DBS Bank, Walmart, Starbucks, JPMorgan, Caterpillar, and Unilever all point to organizational change as the source of their results. Mid-size companies can use this to their advantage because they can change faster than anyone.
- Technology delivers roughly 20% of AI value; the other 80% comes from redesigning workflows and changing how people work
- 42% of companies abandoned most AI initiatives in 2025 because the organization did not change around the technology
- The diagnostic question for any AI engagement: what percentage of this proposal addresses technology versus organizational change?
DBS Bank pulled roughly three quarters of a billion dollars in economic value from AI last year. Fortune reported the number could exceed $780 million this year. Not from some moonshot research project. From production systems running across the entire bank, touching fraud detection, customer service, and credit decisions.
CEO Tan Su Shan doesn’t credit the models or the infrastructure when she talks about what drove that value. She’s said publicly that companies should stop hiring for knowledge and start hiring for attitude. “Whatever I knew up to today is no longer relevant today or tomorrow,” she told a roomful of executives at a Fortune conference. The technology worked. Getting an entire bank of 40,000 people to think differently was the actual fight.
That’s not one CEO’s hot take. I wrote about why AI projects fail a while back, and the pattern has only grown clearer since. Company after company, across completely different industries, keeps arriving at the same conclusion.
What the companies getting results actually say
Walmart’s approach is the clearest example I’ve come across. Their SVP of Enterprise Business Services, David Glick, built a framework, documented by PEX Network, around three words: eliminate, automate, optimize. In that order. Before adding AI to anything, the team first asks whether a process should exist at all. Then whether it can run without humans. Only after both questions are answered does AI enter the conversation.
The first question isn’t “which model should we use?” It’s “should this work even happen?”
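The ordering can be sketched as a simple triage. To be clear, this is my own illustration of the idea, not Walmart’s actual framework; the function name, fields, and rules are all invented for the example:

```python
# A toy triage in the spirit of "eliminate, automate, optimize".
# Every field and rule here is an illustrative assumption.

def triage(process: dict) -> str:
    """Decide what to do with a process before any AI enters the picture."""
    # 1. Eliminate: should this work happen at all?
    if not process.get("serves_customer_or_compliance", False):
        return "eliminate"
    # 2. Automate: can it run without humans, using plain rules?
    if process.get("fully_rule_based", False):
        return "automate"
    # 3. Optimize: only now is AI worth discussing.
    return "optimize_with_ai"

examples = [
    {"name": "weekly status report nobody reads",
     "serves_customer_or_compliance": False, "fully_rule_based": True},
    {"name": "invoice matching",
     "serves_customer_or_compliance": True, "fully_rule_based": True},
    {"name": "customer complaint triage",
     "serves_customer_or_compliance": True, "fully_rule_based": False},
]
for p in examples:
    print(p["name"], "->", triage(p))
```

The point of the ordering is that AI is the last branch, not the first: two cheaper questions get asked and answered before any model is discussed.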
Starbucks learned a related lesson the hard way. CEO Brian Niccol reset the company’s AI approach after early automation efforts stalled. He framed AI as a “co-pilot, not a replacement” for baristas. The distinction matters more than it sounds. Starbucks wasn’t struggling with algorithms. They were struggling with how technology fit into the craft of making coffee and serving people face to face.
JPMorgan Chase went a completely different direction. Tearsheet reported on their dual-pillar approach: executive strategy from the top, grassroots experimentation from the bottom. Over 450 use cases. But the real insight was their “learn by doing” philosophy. The bank didn’t try to plan everything in advance. It built the organizational muscle to experiment, fail fast, and scale what worked.
Unilever trained over 23,000 employees on AI ethics and usage. MIT Sloan’s analysis highlighted their accountability principle: “We will never blame the system; there must be a Unilever owner accountable for every AI decision.” That sentence tells you everything. The risk sits with the people running these systems, not the systems themselves.
John Deere, an agricultural equipment company, spent years breaking data silos across design, production, and service before their AI could deliver real value. Technology was ready long before the organization was.
None of these companies led with model selection.
Nobody learned the factory electrification lesson
Stanford economist Erik Brynjolfsson has a story that should be required reading for every executive buying AI tools. He told Microsoft WorkLab about the transition from steam power to electricity in American factories. When electricity arrived, factory owners did the obvious thing. They ripped out the steam engine, dropped in an electric motor, and kept everything else the same. Same factory floor. Same layout. Same workflow.
Productivity barely moved. For thirty years.
The gains came when manufacturers redesigned the entire factory around what electricity made possible. Smaller motors distributed throughout the building. Assembly lines organized by workflow instead of proximity to a central power shaft. Wider, more open floor plans that steam pipes no longer constrained. Identical technology. Completely different organizational design.
We’re doing the same thing with AI right now. Companies bolt ChatGPT onto existing email workflows. They add summarization to meetings nobody should be having in the first place. Reports that shouldn’t exist get automated anyway. The process stays the same. New power source, same output.
The same pattern shows up in academic research. INSEAD frames AI transformation as fundamentally about reimagining roles and workflows, not deploying technology. Andrew Ng made a similar argument in his AI Transformation Playbook: start small, teach the organization to learn, and let that learning compound before trying to scale. Thomas Davenport, writing in MIT Sloan Management Review, makes the point from the investment side: organizations change far more slowly than AI technology does, and the real work in 2026 is closing that gap rather than chasing the next model upgrade.
I keep coming back to this analogy because it explains something that genuinely frustrates me about how most AI engagements are structured. You can have the best electricity in the world. If your factory floor was designed for steam, you’re just paying more for the same output. In advisory work with mid-size companies, I see this pattern constantly. Expensive tools sitting on top of processes that were broken before anyone mentioned AI. Your readiness assessment is measuring the wrong things if it doesn’t ask how willing the organization is to redesign its workflows.
The investment ratio everyone gets backwards
Over five years, Caterpillar committed more than $100 million to upskilling their workforce on AI and data literacy. That number dwarfs their technology spending. They understood something most companies miss: the bottleneck isn’t computing power. It’s whether your people know what to do with it.
Separate research from the OECD confirms this across industries: 60% of respondents cited lack of internal skills as the primary barrier to AI adoption. Not technology limitations. Skills. An HBR analysis of the “last mile” problem in AI transformation found the same pattern: the technology gets built, tested, and validated, then stalls at the point where the organization needs to actually change.
The World Economic Forum published findings pointing to the same conclusion. Human behavior and workforce adoption determine most of the value companies extract from AI. Model accuracy doesn’t drive it. Data quality doesn’t either. Whether people actually change how they work does.
Meanwhile, 42% of companies abandoned most of their AI initiatives in 2025. Up from 17% the prior year. The technology isn’t immature. Nobody changed the organization around it.
Building Tallyfy taught me this the hard way. The product worked fine. Getting organizations to change how they ran their processes was where every engagement lived or died. Technology was maybe 20% of the work. I think the other 80% was convincing people that the old way wasn’t coming back, and giving them something better to move toward.
For mid-size companies, there’s an advantage hiding in this data. Technology is basically commoditized at this point. Any company can buy the same models, the same tools, the same cloud infrastructure. Your edge isn’t which AI you pick. It’s how fast your organization absorbs the change. Smaller companies can move faster here. If they choose to.
Building the other 80%
What does an AI engagement look like when the emphasis lands on organizational change rather than technology procurement?
The first phase is education and alignment. Executives experience AI directly instead of watching a vendor demo. They find opportunities specific to their operation and set guardrails that reflect their actual risk tolerance. This phase is about getting leadership to agree on what they’re trying to accomplish. Companies skip it at a remarkable rate, and fail at a remarkably similar one.
Second is governance and experimentation. Find your internal champions. Run small pilots with clear kill criteria. Define what success looks like before you start, not after you’ve spent the budget. Communicating these changes effectively across the organization matters more than which model you pick.
Third comes scale. Train everyone who’ll be affected. Build the capability to sustain this independently, so it doesn’t collapse the moment outside support ends.
Notice what’s absent from all three phases. Model selection. Vendor evaluation. Technology procurement. That’s the 20%.
The problems I keep hearing about in conversations with operations teams aren’t technology problems at all. Shadow AI spreading because the approved tools don’t match real workflows. Expensive platforms collecting dust because nobody was trained. No way to prove ROI because nobody defined success before the pilot launched. Every one of these is an organizational problem. Building a champions network to address them matters more than upgrading your language model.
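“Nobody defined success before the pilot launched” has a concrete fix: write the criteria down as data before spending anything, then check them mechanically afterward. A minimal sketch, with metrics and thresholds invented purely for illustration:

```python
# A minimal sketch of "define success before you start": pilot criteria are
# recorded before launch, then evaluated after. All metrics and numbers
# below are invented assumptions for the example.
from dataclasses import dataclass

@dataclass
class KillCriterion:
    metric: str
    minimum: float  # pilot is killed if the measured value falls below this

def evaluate_pilot(criteria: list[KillCriterion], results: dict[str, float]) -> bool:
    """Return True only if the pilot met every criterion agreed before launch."""
    return all(results.get(c.metric, 0.0) >= c.minimum for c in criteria)

# Agreed BEFORE the budget is spent:
criteria = [
    KillCriterion("weekly_active_users_pct", 60.0),
    KillCriterion("hours_saved_per_week", 10.0),
]

# Measured after the pilot:
results = {"weekly_active_users_pct": 72.0, "hours_saved_per_week": 6.5}
print("scale it" if evaluate_pilot(criteria, results) else "kill it")
```

The mechanism is trivial; the discipline is not. The organizational work is getting everyone to agree on the list before launch, so that “kill it” is a pre-committed outcome rather than a negotiation.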
Ask your AI vendor this one question
There’s one diagnostic I’ve found reliable, maybe the single most useful question in this space: ask any vendor, consultant, or AI partner what percentage of their proposal addresses technology versus organizational change.
If the answer is 80% technology and 20% organizational change, they have it exactly backwards. They’re selling you the electric motor without redesigning the factory floor.
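You can run the same check yourself on a proposal’s line items. The category names, figures, and the 50% threshold below are my own assumptions, not a formal methodology:

```python
# Back-of-the-envelope check on a vendor proposal. The line items, category
# labels, and threshold are illustrative assumptions.

def proposal_ratio(line_items: dict[str, float]) -> tuple[float, float]:
    """Return (tech_share, org_share) of total proposed spend."""
    tech_keys = {"licenses", "infrastructure", "model_integration"}
    tech = sum(v for k, v in line_items.items() if k in tech_keys)
    total = sum(line_items.values())
    return tech / total, 1 - tech / total

proposal = {
    "licenses": 400_000,
    "infrastructure": 250_000,
    "model_integration": 150_000,
    "training": 100_000,
    "workflow_redesign": 80_000,
    "governance": 20_000,
}
tech_share, org_share = proposal_ratio(proposal)
print(f"technology: {tech_share:.0%}, organizational change: {org_share:.0%}")
if tech_share > 0.5:
    print("Backwards: this proposal is mostly paying for the electric motor.")
```

This invented proposal comes out 80% technology and 20% organizational change, which is exactly the inverted ratio to watch for.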
Unilever’s accountability principle deserves repeating. “We will never blame the system.” If your AI deployment underperforms, the issue is how the organization adopted it, governed it, and wove it into real work. The algorithm probably works fine.
My prediction, for whatever it’s worth: the companies that get this ratio right won’t just end up with better AI. They’ll end up with better organizations. The disciplines required to absorb AI properly (clear processes, trained people, defined accountability) are the same disciplines that make a company run well regardless of which technologies it uses. Post-transformation reality looks nothing like the vendor pitch. It looks like a company that learned how to change.
That last point is probably the most important one, and the least discussed. AI is not a destination. It’s a forcing function for organizational maturity. The technology will keep evolving. The vendors will keep selling new things. The companies that thrive will be the ones that built the capacity to keep adapting, regardless of what the next cycle brings.
If you want to think through what this ratio looks like for your company, I am happy to talk it through.
Related questions
What percentage of AI value comes from technology versus organizational change?
Research across multiple industries shows technology delivers roughly 20% of AI value. The other 80% comes from redesigning workflows, training people, and changing how the organization operates. Caterpillar’s $100 million workforce investment versus their comparatively smaller technology spend illustrates this ratio.
Why do most AI pilots fail to reach production?
Most pilots fail because they bolt AI onto existing processes without redesigning how work gets done. The technology works fine in controlled environments. It fails when the organization around it has not changed to support it.
How should companies allocate their AI budget?
Successful companies like DBS Bank and Caterpillar allocate the majority of their AI investment to people and process change, not technology procurement. Education, governance, champion networks, and workflow redesign should consume several times what you spend on software and infrastructure.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.