Promoting the use of artificial intelligence at work is typical of many workplaces in 2025, with employers urging workers to optimize their productivity by relying on the freely available technology. This trend has inadvertently fueled the rise of shadow AI: it is increasingly common for workers to use AI without their employer’s explicit permission, giving birth to an invisible AI economy that doesn’t necessarily bode well for businesses.
The normalization of personal chatbot use and the easy availability of such technology have led employees toward unsanctioned AI usage, hiding traces of it from employers to avoid getting in trouble. While this may not appear to be a high-risk issue at first glance, the blurring boundary between when it is and isn’t acceptable to use AI could land businesses in hot water.

Shadow AI Is on the Rise: Where Do We Draw the Line?
The State of AI in Business 2025 report, a new study from MIT’s Project NANDA, recently revealed some fascinating data regarding the use of AI within the workplace. For one, there is evidence that a large majority of businesses are investing in AI, but only 5% of them have seen substantial returns from these investments. The rising investments in AI suggest that the technology should now be a normalized part of any workforce, but the numbers have more to share.
While 40% of the companies revealed that they had purchased official LLM subscriptions for their organizations, over 90% of employees admitted to using personal AI tools for work. These numbers suggest that while organizations are still freeing up resources to invest in the AI tools that are right for their business, employees have already begun using AI in their work. Not only is AI use growing, but most employees use it regularly as part of their day-to-day routine.
Additionally, while 90% of users still prefer human involvement for “mission-critical work,” around 70% find it acceptable to use AI for drafting emails, and 65% believe it is useful for basic analysis. This is where the discussion of shadow AI comes in: employees are becoming comfortable using AI tools in their work, but not all uses are equally beneficial to the organization.
What Are the Risks of Shadow AI Use at Work?
Over the last few years, employees have been flooded with AI-based messaging, and while many continue to be fearful of the technology taking their jobs, others have learnt to swim instead of drowning. Companies like OpenAI, Google, and Microsoft have encouraged individuals to use AI for every little task and query, and the general public has grown comfortable with using these tools in their day-to-day activities. With their work being such a significant part of their daily life, unsanctioned AI use has taken over here as well.
The primary concern with the rise of shadow AI, or the secretive use of AI at work, is that it poses a security risk for the organization. An LLM licensed to the organization, or one built internally, is more likely to protect trade secrets, but using a free AI tool with abandon releases a considerable amount of sensitive data into the wild, data these AI services are not obligated to protect. When employees copy company data into ChatGPT to generate a reply to an email or create a summarized table, that data is put at risk.
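To make the leakage risk concrete, here is a minimal sketch of the kind of pre-submission screening some organizations put in front of external AI services. The pattern names and regexes are purely illustrative assumptions, not a real DLP product; production tooling is far more sophisticated.

```python
import re

# Hypothetical patterns an organization might flag before text leaves
# its network for an external AI service (illustrative only).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt that pastes in a contact address and a credential.
prompt = "Summarize this: contact jane.doe@acme-corp.com, key sk-A1b2C3d4E5f6G7h8"
findings = screen_prompt(prompt)
if findings:
    print("Blocked; prompt contains:", findings)
```

A check like this only catches well-known formats; the broader point of the article stands, since free chatbots see whatever an employee pastes in, screened or not.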
In addition to the security concerns, AI tools are not always factual: they draw from unverified data available on other platforms or hallucinate information to produce a response. Using that output for company activities can be risky, and it also exposes the company to using copyrighted work without permission.
Should You Be Concerned About the Rise of Shadow AI?
While integrating AI into daily operations can be beneficial for an organization, the invisible shadow AI economy does pose a threat to businesses. Providing employees with the right tools and training workers to exclusively rely on the proffered tools is in a company’s best interest. It is equally important to set regulations in place regarding unsanctioned AI use to ensure employees are aware of the conditions and standards of work.
AI can be used to automate and simplify menial tasks; however, some tasks require active human consideration and involvement to get them right. For others, some degree of experimentation with AI may be permissible. Understanding where the boundary lies for each organization is an essential part of planning ahead and ensuring that no unnecessary risks are taken within the workplace. Are you ready to formalize an AI strategy for your organization?
What do you think about the rise of shadow AI? Let us know. Subscribe to The HR Digest for more insights into the evolving landscape of work and employment right now.




