
Shadow IT Goes Beyond Tech: It’s Creating Serious Security Gaps

Shadow IT used to mean installing software without permission. Today, it’s subtler and more dangerous. Employees use unsanctioned AI tools, browser extensions, and cloud platforms every day to speed up their work. But because these tools live in browsers or online accounts, most are completely invisible to IT.

This isn’t just a tech oversight; it’s a major security gap. Data flows out through these tools, often without employees realizing they’ve crossed a line. Regulations are violated. Sensitive documents are exposed. And IT teams are often the last to know.

Shadow IT Is Evolving, and It’s Easy to Miss

Imagine this: an employee installs a Chrome extension that uses ChatGPT to write better emails. They paste in internal customer feedback or financial summaries. It seems harmless, until that data is stored by an external AI platform with no corporate oversight or controls.

This is shadow IT in 2025. It isn’t software someone snuck onto a laptop; it’s tools people access directly through the browser. And it’s everywhere.

Modern workers are often just trying to be efficient. But these decisions can unintentionally leak data, introduce regulatory risks, or expose access credentials.

What Shadow IT Is in 2025, and Why It’s Everyone’s Problem

Historically, shadow IT referred to unapproved software running on company machines. What’s changed is the friction. Employees no longer need to download and install anything. They can open a tab, log in, and start working, often using personal credentials. IT gets no warning, no access logs, and no visibility.

According to a University of Melbourne and KPMG global study, 57% of employees hide their use of AI from managers, and nearly half upload company data into public tools. What’s more concerning is that 66% of those surveyed said they often trust AI-generated responses without verifying them. Only 47% received any formal training on how to use AI tools safely at work.

This behavior isn’t driven by recklessness; most employees just want to be more productive. But when nearly half of all workers are copying sensitive information into tools IT doesn’t control, the risk to company data and compliance multiplies fast.

The Real Risks of Invisible Tools

These seemingly harmless tools open the door to major problems.

There’s data leakage. 

One of the biggest risks with shadow IT is data leakage, and it often starts with a simple copy-paste. Employees trying to save time routinely feed sensitive content (client notes, contracts, internal reports) into AI tools to rewrite, summarize, or brainstorm. But what feels like a shortcut can quietly expose valuable data.

According to Cyberhaven’s 2025 AI Adoption & Risk Report, 83.8% of corporate data shared with AI tools is routed through platforms marked as high or critical risk, meaning they lack key protections like encryption, access control, or compliance guarantees.

A real-world example: In early 2023, Samsung allowed engineers to use ChatGPT for development support. But multiple employees accidentally pasted confidential source code and internal meeting summaries into the tool. Since ChatGPT stores inputs by default, that data left Samsung’s secure environment and became unrecoverable. The company responded by banning all generative AI tools internally, including ChatGPT and Bard.

The problem isn’t just what’s shared; it’s how little control companies have once the data leaves.

Many popular AI platforms, including ChatGPT, Jasper, and Copy.ai, tend to:

  • Retain user input to train future models (unless settings are changed)
  • Lack audit logs unless you’re on an enterprise plan
  • Offer no guarantees on encryption or data residency
  • Operate outside IT oversight, especially when used in personal browsers or accounts

That means if an employee uploads sensitive client data, even with good intentions, it can be stored indefinitely, with no visibility, no control, and no way to pull it back.

Even companies with strong policies are vulnerable if they don’t have the tools to see what’s being shared and where it’s going.
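
To make the risk concrete, here is a minimal sketch of the kind of pattern check a data-loss-prevention (DLP) tool runs before content leaves the company. The patterns, names, and sample text are purely illustrative; production DLP relies on far richer detection, such as exact-data matching, classifiers, and document labels.

```python
import re

# Illustrative patterns only; real DLP tools use far richer detection
# (exact-data matching, classifiers, document labels).
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marking": re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = ("Summary for ACME Corp (internal only): contact jane.doe@acme.com "
             "about card 4111 1111 1111 1111 before Friday.")
    print(flag_sensitive(draft))
    # ['email_address', 'card_number_like', 'confidential_marking']
```

Even a check this simple would flag the kind of copy-paste described above before it ever reaches an external tool.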

Compliance failures are common. 

Most data protection laws—such as GDPR (General Data Protection Regulation) in Europe, HIPAA (Health Insurance Portability and Accountability Act) in the U.S., and SOX (Sarbanes-Oxley Act) for financial data—require organizations to maintain strict control over sensitive data. That includes knowing:

  • Where the data is stored
  • Who has access to it
  • How it’s being used or shared
  • Whether there’s a record (audit trail) of those actions

When employees use unauthorized tools or platforms, that control is immediately compromised. A common example today is using AI tools like ChatGPT or similar platforms to help with tasks like writing reports, emails, or proposals. While the intention may be harmless, the method creates a serious risk.

Let’s say an employee pastes client data like names, account numbers, or internal reports into ChatGPT to improve the wording of a presentation. That data is now:

  • Outside of the company’s secure environment
  • Stored, temporarily or indefinitely, on external servers owned by another company
  • Untraceable, meaning there’s no audit log showing who accessed it, when, or why

From a compliance perspective, this is a red flag. These laws were created to ensure data transparency and accountability. If data ends up on platforms that don’t guarantee compliance with those regulations, the organization could be in violation even if it was a well-meaning employee who caused it.

Worse, the breach often goes unnoticed. Unlike official systems, where usage is monitored and logged, shadow IT tools operate outside the visibility of your security team. You might only discover the issue after a compliance audit, during a breach investigation, or, if you’re unlucky, through a regulator notifying you of a violation.

Why This Matters:

  • GDPR could fine you millions of euros for mishandling personal data.
  • HIPAA could penalize you heavily for exposing health information.
  • SOX violations could lead to audits, fines, and loss of investor trust.

And all it takes is one employee pasting data into the wrong tool.

Shadow IT severely undermines access controls.

Shadow IT doesn’t only put data at risk; it undermines account security too.

Many employees use personal email addresses or social logins like Google or Facebook to sign into unapproved tools. These accounts exist entirely outside the company’s identity systems, meaning IT has no visibility or control.

That creates several problems:

  • Access can’t be revoked when someone leaves the company
  • Permissions can’t be adjusted if roles change or risk levels increase
  • Security features like multi-factor authentication often aren’t enforced
  • Sensitive data stays accessible to anyone with that login, even long after they’ve left

And this isn’t a small issue. According to CSO Online, 94% of employee AI use now happens through accounts not tied to any corporate identity system.

So even if your data security is strong on paper, it can break down quickly in practice if employees are using tools that IT can’t monitor, manage, or shut off.

Why Traditional IT Tools Aren’t Enough Anymore

Firewalls and endpoint protection are designed to guard installed software on devices. But today’s shadow IT hides in browsers and cloud tools, where those protections don’t reach.

Browser extensions, for example, remain widely used yet frequently overlooked. A 2025 Enterprise Browser Extension Security Report found that 99% of employees had at least one extension installed, and over half used more than ten, many with access to sensitive data like cookies, passwords, and page content. The same report noted that 53% of these extensions request “high” or “critical” permissions, and more than half hadn’t been updated in over a year, leaving them vulnerable.

Despite these glaring risks, most IT teams remain blind. Extensions are installed directly by users and operate in browsers, outside endpoint detection tools. A recent independent security analysis highlighted that extensions often silently gather or transmit web page data, affecting millions of users without triggering any alerts.
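
To show how much the extension manifests alone reveal, here is a minimal sketch that inventories locally installed Chrome extensions and flags broad permissions. The profile path and the list of “risky” permissions are assumptions for illustration; at scale, this data would come from managed-browser or endpoint reporting rather than a per-machine scan.

```python
import json
from pathlib import Path

# Default extension directory for Chrome on Linux (an assumption for this
# sketch); Windows and macOS keep extensions under the user profile too.
EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Permissions often treated as high risk because they expose cookies,
# browsing history, or the content of every page the user visits.
HIGH_RISK = {"cookies", "history", "webRequest", "tabs", "clipboardRead", "<all_urls>"}

def audit_extensions(extensions_dir: Path = EXTENSIONS_DIR):
    """List installed extensions that request broad permissions."""
    findings = []
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(errors="ignore"))
        requested = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
        risky = (requested & HIGH_RISK) | {p for p in requested if p.endswith("://*/*")}
        if risky:
            findings.append((manifest.get("name", manifest_path.parts[-3]), sorted(risky)))
    return findings

if __name__ == "__main__":
    for name, risky in audit_extensions():
        print(f"{name}: {', '.join(risky)}")
```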

In short, without monitoring browser activity and cloud logins, IT leaders remain effectively in the dark.

How to Regain Control Without Killing Productivity

Striking the right balance between security and efficiency starts with regaining visibility, without slowing your team down.

Start with Visibility

Modern tools like Netskope, Cisco Umbrella, or Skyhigh Security provide cloud-native monitoring. They can identify when users access AI tools, browser extensions, or SaaS platforms, even without installs. The first step is knowing what’s being used.

Netskope inspects real-time traffic and can detect when a user accesses tools like ChatGPT, unauthorized SaaS platforms, or risky browser extensions, even if nothing is installed on the device. For example, if an employee opens an AI writing tool in their browser and uploads a file, Netskope logs the activity, flags it based on policy, and can even block or quarantine the session. This visibility is the first step to understanding what tools are being used and where sensitive data might be going.
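
The sketch below is not Netskope’s API; it is a rough, generic illustration of the same first step: scanning an exported proxy or DNS log for domains associated with well-known AI tools. The CSV layout (timestamp, user, domain) and the domain list are assumptions made for the example.

```python
import csv
from collections import Counter

# Illustrative mapping only; real monitoring platforms maintain large,
# continuously updated catalogs of AI and SaaS services.
AI_TOOL_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "www.jasper.ai": "Jasper",
}

def summarize_ai_usage(log_path: str) -> Counter:
    """Count accesses to known AI-tool domains per (user, tool) pair.

    Assumes a hypothetical CSV export with columns: timestamp, user, domain.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["domain"].strip().lower())
            if tool:
                usage[(row["user"], tool)] += 1
    return usage

if __name__ == "__main__":
    for (user, tool), count in summarize_ai_usage("proxy_export.csv").most_common():
        print(f"{user} accessed {tool} {count} times")
```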

Use CASBs and IAM Systems to Strengthen Control

To take visibility a step further, organizations can deploy cloud access security brokers (CASBs) and identity and access management (IAM) systems. These tools work in the background to ensure that only approved users can access the right tools, while keeping sensitive data protected.

  • CASBs sit between users and cloud services. They monitor usage, enforce security policies, detect risky behavior, and provide deeper insights into unsanctioned tool access.
  • IAM systems help IT teams manage who has access to what, using features like single sign-on (SSO), multi-factor authentication (MFA), and automated access revocation when employees leave or change roles.

When integrated with existing apps and approved tools, CASB and IAM systems help companies regain control without blocking productivity. They close gaps where browser-based tools and AI platforms might otherwise go unchecked.
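
As a simplified illustration of the access-revocation gap these systems close, the sketch below compares accounts exported from a SaaS admin console against the list of active corporate identities and flags personal or orphaned logins. The names and data formats are hypothetical; in practice an IAM platform with automated (for example, SCIM-based) deprovisioning handles this continuously.

```python
# Minimal sketch: flag SaaS accounts not backed by an active corporate
# identity so access can be reviewed or revoked. All names, domains, and
# data formats below are hypothetical.

active_identities = {          # current employees, e.g. exported from the IdP
    "ana@example.com",
    "ben@example.com",
}

saas_accounts = [              # accounts exported from a SaaS admin console
    {"app": "AI writing tool", "login": "ana@example.com"},
    {"app": "AI writing tool", "login": "carla@example.com"},       # employee has left
    {"app": "Design platform", "login": "ben.personal@gmail.com"},  # personal account
]

def find_orphaned_accounts(accounts, active, corporate_domain="example.com"):
    """Return accounts that use personal logins or belong to departed employees."""
    flagged = []
    for account in accounts:
        login = account["login"].lower()
        personal = not login.endswith("@" + corporate_domain)
        orphaned = login not in active
        if personal or orphaned:
            reason = "personal account" if personal else "no active employee"
            flagged.append({**account, "reason": reason})
    return flagged

if __name__ == "__main__":
    for account in find_orphaned_accounts(saas_accounts, active_identities):
        print(f"{account['app']}: {account['login']} ({account['reason']})")
```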

Train Employees with Real-World Scenarios

Only 47% of workers have received training on how to use AI tools safely, according to the KPMG Trust in AI report. The solution isn’t strict bans; it’s practical guidance. Employees need to understand why pasting a customer contract into a chatbot puts data at risk, or why using personal accounts for AI tools bypasses company safeguards. Short, scenario-based micro-trainings, like spotting red flags before sharing files or knowing when not to use AI at all, are far more effective than vague, broad warnings.

Update Policies with Specific Examples

Avoid generic policy language. Instead of saying “don’t use unapproved software,” write policies like: “Do not use personal accounts to access work data” or “Do not paste sensitive internal content into AI tools like ChatGPT, Claude, or Gemini unless approved.”

Create a Feedback Loop

Employees usually aren’t trying to break rules; they just want better tools to do their jobs. The problem is that if they don’t know how to ask for permission (or assume IT will just say no), they’ll use the tools anyway, without telling anyone.

To prevent this, make the request process easy and clear:

  • Set up a simple form where employees can suggest a tool and explain what it’s for.
  • Make the status visible. Share a list of approved and pending tools so teams don’t keep asking for the same ones.
  • Respond quickly. Even a “not yet” is better than silence. Aim for decisions within a week or two.
  • Involve the right people. Let IT, security, and team leads review tools together so the decision balances risk and usability.

This kind of system encourages openness. Employees feel heard, and IT gets the visibility it needs, without slowing people down.

Use Enterprise Browsers for Deeper Control

Enterprise browsers like Island and Talon secure the browser itself, where most shadow IT now lives. These tools enforce security policies directly in the browser, controlling what users can copy, paste, upload, or access online.

For example, they can block unapproved Chrome extensions, prevent uploads to AI tools, or require step-up authentication when someone accesses a sensitive SaaS platform. Unlike network monitoring, this gives IT granular, real-time control over day-to-day user behavior, without adding friction to how employees work.
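
Dedicated enterprise browsers expose these controls through their own consoles, but even managed Chrome can approximate two of them with built-in policies. The sketch below writes a managed policy file for Chrome on Linux (the path is Chrome’s managed-policy directory; Windows and macOS use the registry or configuration profiles instead) that blocks all extensions except an approved allowlist and blocks navigation to unapproved AI tools. The extension ID and blocked domains are placeholders, and finer-grained copy, paste, and upload controls still require an enterprise browser or a DLP layer.

```python
import json
from pathlib import Path

# Chrome reads managed policies from this directory on Linux; writing here
# requires root. Policy names are Chrome's; the values are placeholders.
POLICY_DIR = Path("/etc/opt/chrome/policies/managed")

policy = {
    # Block every extension except those explicitly allowed below.
    "ExtensionInstallBlocklist": ["*"],
    "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"],  # placeholder ID
    # Block navigation to AI tools that have not been approved.
    "URLBlocklist": ["chatgpt.com", "claude.ai", "gemini.google.com"],
}

if __name__ == "__main__":
    POLICY_DIR.mkdir(parents=True, exist_ok=True)
    target = POLICY_DIR / "shadow_it_controls.json"
    target.write_text(json.dumps(policy, indent=2))
    print(f"Wrote {target}; verify the applied policy at chrome://policy")
```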

It’s Time for IT and Business to Collaborate

Shadow IT is no longer a rogue tech problem; it’s a business-wide visibility problem. With AI tools, browser add-ons, and SaaS accounts flooding the workplace, the risk is now embedded in daily processes.

Instead of tighter lockdowns, implement smarter controls, better training, and open communication. When business and IT teams align, companies can regain control without giving up the speed and creativity these tools bring.
