Why employees smuggle AI into work


"It's easier to get forgiveness than permission," says John, a software engineer at a financial services technology company. "Just go ahead and do it. If you run into trouble later, you can sort it out then."
He's one of many people using their own AI tools at work without permission from their IT department (which is why we're not using John's full name).
According to a survey by Software AG, half of all knowledge workers use personal AI tools.
The research defines knowledge workers as "those who primarily work at a desk or computer."
Some turn to their own tools because their IT team doesn't provide any; others simply prefer to choose for themselves.
John's company offers GitHub Copilot for AI-supported software development, but he prefers using Cursor.
"It's mostly a fancy autocomplete, but it's very effective," he says. "It completes 15 lines at a time, and then you review it and think, 'yes, that's what I would have typed.' It gives you more freedom. You feel more fluent."
Using Cursor without sign-off doesn't break any explicit rules, he explains; it's simply easier than navigating a long approval process. "I'm too lazy and well-paid to bother with the expenses," he adds.
John suggests that companies should remain flexible in their choice of AI tools. "I've been advising people at work not to renew team licenses for a whole year because in three months, everything can change," he says. "Everyone will want to try something new and might feel stuck because of the sunk cost."
The recent release of DeepSeek, a free AI model from China, is likely to increase the available AI options.
Peter (not his real name) is a product manager at a data storage company that offers its employees the Google Gemini AI chatbot.
External AI tools are banned, but Peter uses ChatGPT through the search tool Kagi. He finds the greatest benefit of AI is when it challenges his thinking by responding to his plans from different customer perspectives.
"The AI isn't just giving you answers; it's like having a sparring partner," he says. "As a product manager, you have a lot of responsibility but not many good ways to openly discuss strategy. These tools let you do that freely and without limits."
The version of ChatGPT he uses (4o) can analyze video. "You can get summaries of competitors' videos and have a full conversation with the AI tool about the points in the videos and how they relate to your own products."
In a 10-minute ChatGPT conversation, he can review material that would take two or three hours to watch in video form.
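As a rough illustration of that workflow (not Peter's actual setup), here is how one might sample frames from a video and ask a multimodal model to summarize them via the OpenAI API. The model name, the SDK calls shown, and the two-second sampling interval are assumptions, and note that GPT-4o-style models ingest video as a series of still frames rather than as a raw video file.

```python
# Sketch: summarizing a competitor's video by sampling frames and sending
# them to a multimodal model. Assumptions: the OpenAI Python SDK, gpt-4o,
# and one frame every two seconds.
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai


def sample_frames(path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Grab one JPEG frame every `every_n_seconds`, base64-encoded."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = int(fps * every_n_seconds)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(jpeg).decode("utf-8"))
        i += 1
    cap.release()
    return frames


client = OpenAI()  # reads OPENAI_API_KEY from the environment

frames = sample_frames("competitor_demo.mp4")[:20]  # cap the token cost
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Summarize the key product claims made in this video."},
            # Each sampled frame is passed as an inline base64 image.
            *[{"type": "image_url",
               "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
              for b64 in frames],
        ],
    }],
)
print(response.choices[0].message.content)
```

Capping the number of frames keeps token costs manageable while still giving the model enough context for a useful summary.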
He estimates that his increased productivity is like the company getting an extra third of a person working for free.
He's unsure why the company has banned external AI. "I think it's about control," he says. "Companies want to decide what tools their employees use. It's a new area for IT, and they just want to be cautious."
The use of unauthorized AI applications is sometimes called 'shadow AI'. It's a more specific form of 'shadow IT', where someone uses software or services not approved by the IT department.
Harmonic Security helps identify shadow AI and prevents corporate data from being inappropriately entered into AI tools. It tracks more than 10,000 AI apps and has observed more than 5,000 of them in use.
These include custom versions of ChatGPT and business software with added AI features, like the communication tool Slack.
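To make that kind of guardrail concrete, here is a deliberately naive sketch of a pre-send filter that redacts likely-sensitive strings before a prompt leaves the company network. The pattern list and the `redact` helper are hypothetical illustrations, not Harmonic Security's implementation; production tools are far more sophisticated.

```python
# Illustrative sketch only: a crude filter of the kind shadow-AI controls
# build on. PATTERNS and redact() are hypothetical, not any vendor's code.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(prompt: str) -> str:
    """Replace matches with placeholder tags before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


print(redact("Contact jane@corp.com, key sk-abc123def456ghi789jkl0"))
# -> Contact [EMAIL REDACTED], key [API_KEY REDACTED]
```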
Despite its popularity, shadow AI comes with risks.
Modern AI tools are developed by processing large amounts of information in a process called training.
About 30% of the applications Harmonic Security has observed use information entered by the user for training.
This means the user's information becomes part of the AI tool and could be shared with other users in the future.
Companies might worry about their trade secrets being exposed by the AI tool's responses, but Alastair Paterson, CEO and co-founder of Harmonic Security, believes that's unlikely. "It's pretty hard to extract the data directly from these [AI tools]," he says.
Companies will still be concerned, however, about their data being stored in AI services they have no control over, may not even know about, and which could be vulnerable to data breaches.