5 AI Risks Every Company Should Be Aware of – and What to Do about Them

Article

September 24, 2025

By Zach Kavanaugh

AI is accelerating, but delivery on its promise is falling behind.

Why? Because transformation is a people challenge, not just a tech race. 

This piece surfaces five often-overlooked risks that quietly stall progress – each one rooted not in code, but in communication. Breakdowns in clarity, coordination and leadership commitment continue to limit adoption and erode trust. 

And yet, these are exactly the areas where strategic communication plays a pivotal role – helping organizations course-correct, contain risk and unlock the value AI is meant to deliver. 

For leaders ready to close the gap, here’s where to focus next. 

1. The AI Narrative Isn’t Moving as Fast as Tech  

What’s happening: AI is rolling out fast, but most employees remain unclear on what it means for their work.

Why it matters: Multiple reports show that companies are investing in AI tools faster than they’re training teams or communicating the impact. The result? Employees feel left behind, unsure where they fit in or how to contribute. 

What to do: Communications should partner with L&D and AI enablement teams to build a clear, role-relevant narrative that connects AI to everyday work. That means going beyond the “what” and “why” to include practical, team-specific examples – and showing what good AI use actually looks like. Managers play a crucial role here and should be equipped to reinforce these messages in regular team settings. 

2. Shadow AI Is Outpacing Governance 

What’s happening: Employees are quietly using unapproved AI tools to stay productive – often because sanctioned options aren’t accessible, intuitive or well-communicated. 

Why it matters: Recent research shows that over half of employees using AI at work are doing so under the radar. Only 47% have received any training, 56% have made mistakes due to misuse and nearly half say they’ve gotten no guidance at all. That creates risk – for the business, the brand and the people trying to do the right thing without clear support. 

What to do: Communications should partner with IT, HR and Compliance to promote trusted tools, clarify what’s allowed and explain why governance matters. Use short, human-centered scenarios that help people understand tradeoffs and risks. Managers should be given clear guidance on how to check in with their teams and normalize asking, “What tools are you using and why?” 

3. People Assume AI Replaces Judgment – So They Stop Using Theirs 

What’s happening: Without the right framing and support, employees may treat AI output as the final answer – not a starting point for critical thinking, refinement or discussion. 

Why it matters: A recent MIT/Wharton study found that while AI boosts performance in creative tasks, workers reported feeling less engaged and motivated when switching back to tasks without it – suggesting that over-reliance on AI can dull ownership and reduce the sense of meaning in work. 

What to do: Communications and L&D teams should align around positioning AI as a co-pilot, not a decision-maker. Messaging should emphasize the value of human input – especially in work that shapes brand, strategy or outcomes that may pose ethical dilemmas. Training should encourage questions like: 

  • “Would I feel confident putting my name on this?” 

  • “Where does this need my voice, perspective or context?” 

By reinforcing the expectation that employees think with AI – not defer to it – organizations can strengthen decision quality, protect brand integrity and keep teams connected to the meaning in their work. 

4. The Organization Is Focused on Activity, Not Maturity 

What’s happening: Many organizations are tracking AI usage – but not its strategic impact. The focus is on activity (how often AI is used), rather than maturity (how well it’s embedded in high-value work). 

Why it matters: According to a Boston Consulting Group survey, 74% of companies struggle to achieve and scale the value of AI – with only a small fraction successfully integrating it into core, high-impact functions. Without a clearer picture of what good looks like, AI efforts risk stalling at the surface. 

What to do: Communications teams should partner with AI program leads to define and share an AI maturity journey – through narrative snapshots, team showcases or dashboard insights that reflect depth, not just breadth. Highlight moments where AI has meaningfully shifted workflows, improved decision-making, unlocked new capabilities or resulted in notable client or business wins. And celebrate progress in stages – from experimentation to strategic integration to measurable ROI – to help the organization see not just what’s happening, but how far it’s come. 

5. Leaders Aren’t Framing the Change – or Making It Visible 

What’s happening: Many leaders say they support AI – but too few are actively learning, using or communicating about it. When leaders aren’t visibly experimenting or sharing what they’re discovering, employees are left to wonder if the change is important or safe to engage with themselves. 

Why it matters: According to Axios, while a quarter of leaders say their AI rollout has been effective, only 11% of employees agree. That’s not just an implementation gap – it’s a trust gap. And the root cause isn’t technical. It’s about clarity, consistency and whether people feel the change is relevant, credible and real. 

What to do: Communications teams should make it easy for leaders to show up – not just with bold vision, but with curiosity and candor. Encourage short, human signals: what they’re trying, what surprised them, what didn’t work. Share safe-fail stories. Invite open conversations. When leaders model vulnerability and visible learning, they normalize experimentation – and create the cultural conditions that AI adoption actually needs to take root. 

Making AI Real – and Communicating What Matters Most 

These risks don’t stem from infrastructure or algorithms – they come from gaps in alignment, communication and visible leadership. And they escalate when left unspoken. 

In the first article of this AI adoption series, we made the case for a people-first approach to AI. In our second article, we unpacked the psychology of hesitation, showing how quiet friction, not overt pushback, is what most often stalls momentum. 

Our hope is that this third piece has connected the dots: Communications may not own every risk – but it’s essential to identifying, navigating and de-escalating them. 

The bottom line: Technology may spark change, but it’s clarity, trust and visible leadership that make it real. FleishmanHillard partners with organizations worldwide to align ambition and action, helping clients avoid pitfalls, contain risk and realize the full value of AI. As the pace accelerates, that human advantage will be the ultimate differentiator.
