The rise of shadow AI
Gregg Bayes-Brown helped develop the AI policies at a former job in biotech research, but even as a rule maker, he says he couldn't afford not to be a rule breaker. Though he understood the technology and its risks, he used an unapproved personal enterprise Google account at work to access NotebookLM, organizing large chunks of information that would typically require lots of back-and-forth between customer service and other departments. He estimates the shortcut crunched 150 hours of work down to 30 minutes.

As his IT department spent months debating how to regulate AI tools, Bayes-Brown says, the pressure on employees to work more efficiently with AI mounted. He felt that the potential for company intel to leak from his personal NotebookLM account (Google says it does not use data put into NotebookLM to train its models) was lower than the risk of falling behind. "That minuscule risk of data going out — really small," he says. "But the chance of you being eclipsed by a Chinese peer — massive risk."

Shadow AI is the shorthand for skirting company IT policies to prompt your favorite chatbot or have an agent tidy your inbox. Data suggests the practice has spiraled so far out of control that most of us are likely running afoul of those policies. A Microsoft survey of UK workers found that 71% said they have used unapproved consumer AI tools at work, and half are doing so weekly. At midsize companies, there are typically 200 unsanctioned AI tools in use per 1,000 workers, according to Reco, an AI security platform. A 2024 report from Microsoft found that nearly 80% of workers using AI relied on their own tools.

Leslie Nielsen, chief information security officer at Mimecast, likens the rise of shadow AI to "death by a thousand cuts, and people just don't understand it." If someone uploads a document with financial data to an AI tool, a chatbot or agent could regurgitate the data, or an analysis of it, to someone outside the company who uses the right prompts.

Three years ago, Samsung banned employees from using generative AI tools on company devices after software engineers put internal code into ChatGPT. Amazon also grew wary when ChatGPT's responses to some prompts began to look a lot like internal company data. These incidents happened before OpenAI rolled out its enterprise version of the chatbot, which doesn't use inputs to train models. Since then, however, specialized apps with built-in AI functions have flooded the market, creating a whack-a-mole of security threats for IT to monitor. While companies scramble to square the push for growth with security, the race to be a top worker might mean breaking the rules.

Shadow IT — the use of tech shortcuts and software that hasn't been cleared by a company — isn't new. In 2014, the Department of Health and Human Services reached a $4.8 million settlement with Columbia University and New York Presbyterian Hospital after a physician who developed applications for both the hospital and the school deactivated a personally owned computer server on a network that held patient health information, leaving thousands of patient records accessible via Google.

Today, sneakily feeding company information to AI is particularly enticing, and can also be accidental. People are developing emotional attachments to their AI tools. No technology has ever proved so deft at personalizing itself to individuals.
Tech innovation in the office has generally happened from the top down: management picks the tools the company will use, deciding between Gmail or Microsoft 365, Slack or Discord. The AI hype cycle has flipped the imperative. Big Tech companies have unleashed generative technologies and put the onus on white-collar workers to figure out what work the tools can automate.

"The shadow AI problem is worse" than shadow IT, says Nicole Jiang, cofounder of Fable Security. "Companies are actually allowing and pushing for more AI adoption at a rate that we've never seen before." That leaves those in IT "trying to figure out, 'OK, how do we best protect — that's not blocking — but saying yes to letting people explore?'"

IT experts tend to see the nuances of AI adoption: the tools are likely to benefit the bottom line even as they pose potential risks to the company. In a survey of 1,000 IT leaders by the software company Freshworks, nearly 80% said they believed employees who use unapproved AI tools are more productive. But 86% said they had also seen at least one negative incident involving unsanctioned AI use over the past year, from compliance violations to security breaches.

Preferring an AI tool your company doesn't have an account for over the one it does isn't simply the evolution of the iPhone vs. Android debate. The technology is fundamentally changing the way people work, pushing nontechnical workers to experiment with technology in ways they never have before. "Six months ago, the conversation was, 'I use Claude because I think that the output is better,'" says Harley Sugarman, CEO and founder of the security company Anagram. Now, while many companies have approved an enterprise AI tool, workers are seeking out other apps that are more bespoke to their roles, for anything from HR to marketing to coding.

Worker AI use is still a blind spot for many companies. A February survey of 345 company leaders by the consulting firm Protiviti found that about half don't know the extent to which their employees are using AI, and only four in 10 have a formal AI governance policy in place. But companies are increasingly spending on enterprise AI, with 90% of IT leaders at large companies saying their workplaces planned to raise budgets for AI tools this year. Half of white-collar workers are already using agents, according to a survey from the agentic AI platform Writer and the research firm Workplace Intelligence. This month, Microsoft made Agent 365 generally available, taking it out of preview and saying the shift will help companies "take control of agent sprawl" and "observe, govern, and secure agents and their interactions."

"It'll probably get worse before it gets better," Sugarman says of the shadow AI dilemma. "You can imagine the next evolution of this problem is that these agents start self-improving and making more decisions and building more software without even input from end users, and at that point you're in science fiction territory."

But he doesn't think cybersecurity is doomed. IT pros need to understand how people are using AI and how agents are operating, and less technical white-collar workers need training on how to use AI and why security protocols matter. "No one's really been able to solve that right now."

As long as that mixed messaging remains, it's hard to imagine workers will stick to their few approved tools. For all the millions of employees who report using shadow AI, there are few known consequences. But it takes only one slip-up to create an IT nightmare.
Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.
