AI anxiety is not about the technology - it is about losing control
Employees are not afraid of AI algorithms. They are afraid of losing agency over their work, their relevance, and their future. Here is how to fix it.

What you will learn
- AI anxiety stems from loss of control, not technology fear: job displacement fears surged from 28% to 40% in two years, driven by employees feeling like passive recipients of change
- Leadership silence amplifies anxiety: fewer than 20% of employees have heard from their manager about how AI will affect their job, making the communication gap itself a source of fear
- Involvement beats reassurance every time: when employees help select and customize AI tools rather than just receive updates, anxiety drops. That matters given that 62% currently feel leaders underestimate AI's impact
- Training gaps create retention risks: only one-third of employees received any AI training last year, and 36% of employees planning to resign cite inadequate development as a driving factor
When I first read EY’s 2023 research on AI anxiety, I almost dismissed it. Another anxiety statistic. But the specifics stopped me: 75% of employees worried that AI will make certain jobs obsolete, and 65% felt anxious about AI replacing their own job. These aren’t vague fears. They’re pointed.
But here’s what I think the numbers miss. Those fears aren’t really about algorithms. They’re about having no say in how those algorithms reshape their work.
That distinction is the whole thing. If you assume AI anxiety is a technology comprehension problem, you'll spend your time writing explainer docs and hosting lunch-and-learns, when what people actually need is the steering wheel, not a better map.
At Tallyfy, we watched automation anxiety evaporate the moment people understood they were becoming workflow designers rather than workflow followers. Not because we explained the technology better. Because we handed over control.
Why AI anxiety is really about control
Research from Nature confirms what I’d suspected: AI adoption significantly undermines psychological safety, and that’s what drives the depression and stress responses. But here’s what the summary stats miss. It’s not the technology itself causing the damage. It’s the powerlessness that arrives with it.
Roll out AI tools without involving your team in selection. Don’t let them customize how things work. Give them no authority over when to use it or when to ignore it. You’ve just sent a clear message: you’re a passive recipient of whatever comes next.
That message is what drives AI anxiety in workplaces. Not the software.
Research has identified five distinct fears employees carry about AI, and most trace back to control. Not fear of robots. Fear of having their job redesigned without their input. Fear that bias or inaccuracy will tank their performance reviews and they’ll have no mechanism to push back.
Mid-size companies have a genuine structural advantage here, I think. You can give people real influence over AI decisions without fighting enterprise-scale bureaucracy where every choice gets approved three layers above the people doing the actual work.
The psychological mechanics of this
Something specific happens when you introduce AI without giving people agency. Worth understanding.
First, professional identity starts to erode. The craft someone spent years developing? Now a tool does it differently, on someone else’s terms. They didn’t choose this. They didn’t shape it. One day they showed up and their job had changed.
Second, you create what researchers call the autonomy-control paradox. Job autonomy satisfies people’s need for control and increases engagement. But algorithmic control disrupts that relationship. The AI starts making decisions that used to belong to them. Even when the tool makes someone more productive, they feel less in charge of their own work.
Third, anticipatory anxiety kicks in. Not about what’s happening today, but about what’s coming. If this decision got made without them, what other decisions will? Mercer’s research shows job displacement fears jumped from 28% to 40% in just two years. Deutsche Bank analysts have warned that “anxiety about AI will go from a low hum to a loud roar.”
What makes it worse: fewer than 20% of employees have heard anything from their direct manager about how AI will affect their job. The silence becomes its own message. And it’s not a reassuring one.
Involvement instead of reassurance
Stop telling people AI won't replace them. Already, 62% of employees feel their leaders underestimate AI's emotional and psychological impact. Empty reassurance doesn't help. It probably makes things worse.
Show them how they’ll direct AI to do better work. That’s a completely different thing.
Reassurance is passive. Agency is active. One makes people feel temporarily better. The other changes the actual power dynamic. Why does almost no one lead with the second one?
When you’re evaluating AI tools, include employees from all levels in the selection process. Not as rubber stamps. As actual decision-makers with real influence over the outcome. EY’s 2023 data backs this up: 77% of employees would be more comfortable with AI if people from all levels were involved in adoption decisions.
Give employees authority to customize how tools work in their specific contexts. Let them set boundaries on what the AI handles versus what stays human. Let them override AI recommendations when their judgment says otherwise. These aren’t small gestures. They’re structural signals about who’s in charge.
I know one company that lets teams vote on whether to adopt specific AI features. Not all features. Just the ones that change how core work gets done. They have slower adoption rates. They also have virtually no AI anxiety problems. Because people chose this.
Support that actually builds confidence
Training helps. But not the way most companies do it.
There’s this study on AI adoption and workplace stress that nails the dynamic: self-efficacy in AI learning moderates the relationship between adoption and job stress. Higher self-efficacy weakens the stress connection. You don’t build self-efficacy through mandatory training sessions where someone talks at people for three hours.
You build it through peer networks and safe spaces to fail.
Set up learning groups where employees teach each other what they’ve figured out. Not formal training. Just spaces where someone who discovered a useful AI workflow shows five colleagues how it works. Where people can ask what feel like basic questions without worrying they should already know the answer. That kind of psychological safety changes everything.
Create tech champions: early adopters who are genuinely enthusiastic but not evangelical. They provide hands-on help. They share what went wrong when they tried something, not just what went right. Organizations with help desks and regular follow-up sessions see significantly lower resistance to technology adoption. The follow-up part matters as much as the initial session.
Make support ongoing rather than one-time. AI tools keep evolving. Your support system has to evolve with them.
The scale of the gap here is worth sitting with: only one-third of employees received any AI training in the past year, even as over 90% of enterprises are projected to face critical skills shortages in the near term. That gap is exactly where anxiety grows.
Building an organization that handles ongoing change
This isn’t about managing a one-time transition. It’s about building an organization that can absorb continuous technological change without generating continuous anxiety.
The pattern you want to establish: employees have real influence over tools and processes, not just nominal input. When that becomes normal, AI anxiety becomes manageable rather than existential.
A few culture shifts that actually work:
Make experimentation explicitly safe. Create spaces, whether that’s a Slack channel or dedicated meeting time, where people can test AI approaches and discuss what failed without any performance implications. Clinical psychologists report increasing numbers of workers discussing AI anxiety in therapy, with the most common fear being “becoming obsolete.” When psychological safety exists, people treat AI as something they can shape rather than something that shapes them.
Build feedback loops that actually change things. When someone flags that an AI tool is creating problems, and you fix it based on their input, you’ve just demonstrated that they have control. When you listen but nothing changes, you’ve proven they don’t.
Give people authority to disconnect from AI when it makes sense. Sometimes the human approach works better. If employees need permission to override the AI, you’re telling them the algorithm has more authority than they do. That’s a problem.
Connect AI adoption to skill development rather than just efficiency. Workers with AI skills earn significantly more than those without. When someone gets good at directing AI, does that open new opportunities for them? Or just make them more efficient at the same job? One creates positive anticipation. The other creates resignation.
Of employees planning to resign within a year, 36% cite inadequate training and development as a driving factor. And 45% of leaders say they'd leave their company if it significantly lagged in AI adoption. The anxiety cuts in both directions.
Mid-size companies can move faster on this than enterprises. You can change actual practices rather than just updating policies. You can give teams real budget authority to choose their tools. You can let someone who finds a better AI approach roll it out to their whole department without eighteen approval layers.
The companies that handle AI transitions well won’t be the ones that explained it best. They’ll be the ones that gave people genuine control over how it changed their work.
AI anxiety is a control problem wearing a technology costume. Fix the control problem. The anxiety takes care of itself.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.