Nearly Half of Employees Using Banned AI Tools at Work: A Growing Concern
A recent survey by Anagram, a cybersecurity firm, reveals a startling trend: nearly half (45%) of U.S. employees are using banned artificial intelligence (AI) tools, such as ChatGPT, Gemini, and Copilot, in the workplace — frequently in organizations that have never set clear AI policies in the first place.
Additionally, 58% admit to inputting sensitive data, like client records and internal documents, into these large language models, raising significant cybersecurity, compliance, and reputational risks for organizations.
This phenomenon, dubbed “shadow AI,” highlights a disconnect between employee behavior and corporate governance, with 78% of surveyed workers using AI tools on the job, many covertly.
The significance of this issue lies in the potential for data leaks and legal consequences. As Andy Sen, CTO of AppDirect, notes, data entered into external AI systems could be stored or used to train models, risking proprietary information exposure.
This is particularly concerning in regulated industries where noncompliance could lead to severe penalties. The survey also shows that 40% of employees would knowingly violate policies for efficiency, underscoring a lack of awareness or prioritization of risks.
Generational divides exacerbate the issue. Generation Z workers, often self-taught in AI, are leading adoption, with nearly 50% believing their supervisors don’t understand AI’s benefits, according to a 2025 UKG survey.
However, training has not kept pace — per KPMG, only 47% of global employees have received formal AI education — and the gap leads to risky behaviors: 66% report relying on AI outputs without verifying them, and over 50% admit to making AI-related errors. This lack of guidance fuels "shadow AI" use, as employees under pressure seek efficiency however they can.
For businesses, the impact is clear: without robust AI governance and training, organizations remain exposed to data leaks, compliance failures, and reputational damage.
Experts like Harley Sugarman of Anagram urge companies to prioritize modern, transparent AI training and create “AI playgrounds” to foster innovation while mitigating risks.
HR consultant Bryan Driscoll emphasizes that banning AI outright is ineffective; instead, ethical policies and education are essential to balance productivity with security.
This trend signals an urgent need for companies to adapt. By implementing clear guidelines and investing in training, businesses can harness AI’s potential while safeguarding their operations, ensuring employees use these powerful tools responsibly.
FAQ
Why are employees using banned AI tools at work?
Employees often turn to banned AI tools to boost efficiency and manage workloads, especially when companies lack clear AI policies or approved tools. Many prioritize convenience over compliance due to inadequate training or pressure to perform.
What are the risks of using unauthorized AI tools in the workplace?
Using unauthorized AI tools can lead to data leaks, as sensitive information may be stored or used to train external models. This poses cybersecurity risks, compliance violations, and potential legal consequences, particularly in regulated industries.