What Are Your Workers Telling Chatbots? Combating Shadow AI in the Workplace

A recent study by Harmonic Security revealed widespread “Shadow AI” use. Employees are using generative AI tools regularly, without oversight, policy, or training.

Across 300 platforms, Harmonic analyzed over 1 million prompts and 20,000 file uploads. The results were alarming:

  • 4.4% of prompts included sensitive data
  • 22% of file uploads contained proprietary information

Much of this exposure came from personal or free accounts on tools like ChatGPT, Google Gemini, and Perplexity. Employees, often unknowingly, shared internal code, financial models, legal strategies, and client data.

Even more concerning, tools like Canva, Grammarly, and Replit now embed AI by default, making them easy to overlook in traditional security reviews.

Another article, from Business Insider, cites KPMG’s Trust in AI report, which surveyed 48,340 people across 47 countries between November 2024 and January 2025. It found that more than half of employees surveyed (57%) admitted to hiding their use of AI at work and presenting AI-generated content as their own.

Employees feel pressure to use AI tools to work efficiently and effectively, to remain employable, and to gain a competitive advantage at work.

What Can Businesses Do?

To mitigate Shadow AI risks, businesses need to act decisively:

  • Develop and enforce AI use policies that outline approved tools and data handling rules
  • Block or restrict personal and free-tier AI tools through device and network-level controls
  • Monitor usage across browsers and SaaS platforms to detect unauthorized AI interactions (see the sketch after this list)
  • Train employees on how generative AI works and what constitutes risky behaviour
  • Review third-party SaaS tools for embedded LLM features that may create blind spots
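
As a rough illustration of the monitoring point above, the Python sketch below scans a web-proxy log for requests to well-known generative AI domains and tallies them per user. The log format (a CSV with timestamp, user, and domain columns) and the domain list are assumptions made for the example, not details from the Harmonic study; in practice a secure web gateway, CASB, or DLP platform would handle this detection.

  # Minimal sketch: flag proxy-log traffic to generative AI tools.
  # Assumptions: the log is a CSV with a header row containing
  # "timestamp", "user", and "domain" columns, and the domain list
  # below is illustrative, not exhaustive.
  import csv
  from collections import Counter

  # Illustrative domains for common generative AI tools (assumption).
  AI_DOMAINS = {
      "chat.openai.com",
      "chatgpt.com",
      "gemini.google.com",
      "www.perplexity.ai",
      "claude.ai",
  }

  def flag_ai_usage(log_path: str) -> Counter:
      """Count proxy-log requests to known generative AI domains per user."""
      hits = Counter()
      with open(log_path, newline="") as f:
          for row in csv.DictReader(f):
              domain = row["domain"].strip().lower()
              if domain in AI_DOMAINS:
                  hits[row["user"]] += 1
      return hits

  if __name__ == "__main__":
      for user, count in flag_ai_usage("proxy.log").most_common():
          print(f"{user}: {count} request(s) to generative AI tools")

In a real deployment these counts would feed an alerting pipeline or SIEM rather than being printed, and the domain list would be maintained centrally alongside the approved-tools policy.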

Shadow AI isn’t emerging; it’s already here. Without governance, privacy, trust, and trade secrets are all at stake. It’s like leaving a side door to your business wide open, undetected and unprotected. The fix isn’t more tools; it’s governance, AI policy, and education.

Resources

Morrone, M. (2025, July 31). Workers are spilling secrets to chatbots. Axios. https://www.axios.com/2025/07/31/workers-company-secrets-chatgpt

Thompson, P. (2025, April 28). Researchers asked almost 50,000 people how they use AI. Over half of workers said they hide it from their bosses. Business Insider. https://www.businessinsider.com/kpmg-trust-in-ai-study-2025-how-employees-use-ai-2025-4?

Need support, fast?

Take the next step: contact us today for a free compliance and cybersecurity strategy session, and find out more about our security assessment to ensure your business is fully protected and compliant!

Our Cyntry experts can identify strategies to safeguard your data and systems. At Cyntry, simplifying the compliance journey and strengthening your security posture is what we do best. 

Book a no-cost 30-minute compliance and cybersecurity strategy session at Cyntry.com
