Why employees smuggle AI into work

2025-02-04 01:40:00

Abstract: Many employees use unapproved AI tools at work for efficiency, despite company concerns. This "shadow AI" risks data leaks, but its usefulness makes it hard to ban.

“It’s easier to ask forgiveness than it is to get permission,” says John, a software engineer at a fintech company. He believes it is better to act first and deal with any fallout later. John is one of many people using personal AI tools at work without their IT department’s approval, which is why we are not using his full name.

According to a survey by Software AG, half of all knowledge workers are using personal AI tools. The study defines knowledge workers as “those who primarily work at a desk or computer.” Some use personal AI tools because their IT teams don’t provide them, while others say they want their own choice of tools. Although John’s company provides GitHub Copilot for AI-assisted software development, he prefers Cursor.

Although Cursor is primarily an advanced autocomplete tool, John finds it very useful. “It does 15 lines of code at a time, and then you check it and it’s like, ‘Yeah, that’s exactly what I would have typed.’ It just feels more fluid,” he says. He says this unapproved use does not violate any policy; it is simply more convenient than going through a lengthy approval process. “I’m too lazy and too highly paid to chase after reimbursements.”

John advises companies to be flexible when choosing AI tools. “I keep telling my colleagues not to renew a team license for a year at a time because the whole landscape will change in three months,” he says. “Everyone wants to do different things and will feel locked in because of sunk costs.” The recent release of free AI models in China, such as DeepSeek, may further expand the range of AI choices.

Peter (not his real name), a product manager at a data storage company, uses ChatGPT through the search tool Kagi, even though his company provides employees with the Google Gemini AI chatbot and forbids the use of external AI tools. He finds the biggest benefit of AI is its ability to challenge his thinking when he asks the chatbot to respond to his plans from different customer perspectives. “AI is less about giving answers and more about providing a sparring partner,” he says. “As a product manager, you carry a lot of responsibility, and there aren’t a lot of good avenues to discuss strategy openly. These tools allow that without restrictions.”

The version of ChatGPT he uses (4o) can analyze videos. “You can get a summary of a competitor’s video and have a conversation with the AI tool about the points in the video and how they overlap with your own product.” Through a 10-minute ChatGPT conversation, he can review material that would require watching two to three hours of videos. He estimates his increased productivity is equivalent to the company getting one-third of an extra person’s work for free.

Peter is unsure why the company forbids the use of external AI. “I think it’s about control,” he says. “Companies want to have a say in the tools that employees are using. It’s a new area of IT, and they just want to be conservative.” The unauthorized use of AI applications is sometimes referred to as “shadow AI.” It is a more specific version of “shadow IT,” which refers to someone using software or services not approved by the IT department.

Harmonic Security helps companies identify shadow AI and prevent corporate data from being improperly entered into AI tools. It is tracking over 10,000 AI applications and has found more than 5,000 of them in use. These include customized versions of ChatGPT as well as business software with added AI features, such as the communication tool Slack. While shadow AI is popular, it also carries risks.

Modern AI tools are built by digesting large amounts of information, a process called training. Harmonic Security has found that around 30% of applications use user-entered information for training. This means user information becomes part of the AI tool and could be output to other users in the future. Companies may worry that their trade secrets could be revealed in an AI tool's answer, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks this is unlikely. “It’s difficult to directly extract data from these [AI tools],” he says.

However, companies do worry that their data is being stored in AI services they neither control nor fully understand, and which might be vulnerable to data breaches. Even so, it is difficult for companies to stop the use of AI tools because they are so useful, especially for younger employees. “[AI] allows you to get five years of experience in 30 seconds of prompt engineering,” says Simon Haighton-Williams, CEO of The Adaptavist Group, a UK software services group. “It doesn’t replace [experience] entirely, but it’s a great force multiplier, like having a good encyclopedia or calculator that allows you to do things you couldn’t do without those tools.”

What should companies do if they discover shadow AI in use? “Welcome to the club,” Haighton-Williams advises. “I think probably everyone is using it. Be patient, understand what people are using and why, and figure out how to embrace it and manage it, rather than demand it be shut down. You don’t want to be an organization that gets left behind by not [adopting AI].”

Trimble provides software and hardware for managing data in the built environment. To help employees use AI safely, the company created Trimble Assistant. This is an internal AI tool based on the same AI model used in ChatGPT. Employees can consult Trimble Assistant for various applications, including product development, customer support, and market research. For software developers, the company provides GitHub Copilot.

Karoliina Torttila, Trimble’s AI lead, says, “I encourage everyone to explore various tools in their personal lives, but to recognize that their professional lives are a different space, where there are some guardrails and considerations.” The company encourages employees to explore new AI models and applications online. “This forces us to develop a skill: we have to be able to understand what sensitive data is,” she says. “There are places you wouldn’t put your medical information, and you have to be able to make similar judgments [about work data].” She believes that employees’ experience with AI at home and on personal projects can help shape company policy as Trimble develops its own AI tools.

She believes there needs to be an “ongoing conversation” about “which tools best serve us.”