Saturday, October 25, 2025

Employees Threaten Businesses from the Shadows with AI


The rise of unsanctioned artificial intelligence use by employees, often called "shadow AI", is creating serious cybersecurity and governance risks for organisations across Australia. With an estimated 81 per cent of workers admitting to uploading confidential business data into public AI platforms, companies now face a dual challenge: harnessing AI's productivity benefits while guarding against uncontrolled information exposure. The stakes are high, especially when only one in ten managers believes their teams are properly trained to use AI tools safely.

As businesses integrate generative AI into everyday tasks, from drafting emails and reports to financial analysis and presentation preparation, the lines between official and unofficial tool use are blurring. Many employees bypass approved channels and use free consumer AI platforms for convenience, unaware of the data-leak implications. Organisations that ignore this "undertow" of insider risk may soon find themselves blindsided by an information breach.


Understanding the Shadow AI Threat

Why Unsanctioned AI Use Has Grown and What It Means

Generative AI tools have exploded in popularity within the workplace. Employees use them to speed up repetitive tasks, craft better presentations, handle routine communications and crunch numbers in ways that weren’t possible just a few years ago. Yet it is this very ease and accessibility that has led to “shadow AI” — employee adoption of AI tools without formal approval, oversight or governance. (SmartCompany)

Surge in Unapproved Use

Several recent reports highlight how widespread the phenomenon has become:

  • A global study found that 90 per cent of organisations use AI tools without formal IT approval, and 68 per cent of employees admitted to using public GenAI assistants via personal accounts. (cacm.acm.org)
  • In Australia specifically, between 21 and 27 per cent of workers in white-collar roles report using generative AI tools behind their manager’s back. (thetimes.com.au)
  • A KPMG and University of Melbourne study revealed that almost half of employees had uploaded sensitive company data into unsanctioned platforms. (KPMG)

Key Drivers of the Trend

  1. Speed and convenience: Official enterprise tools often lag behind popular consumer apps. When employees need fast answers, they gravitate toward the tools they already know. (KPMG)
  2. Lack of governance: Many businesses haven’t yet implemented clear AI policies, leaving a vacuum that shadow AI fills. (HCAMag)
  3. Productivity pressure: Staff under tight deadlines may view AI tools as the shortcut they need — even if the shortcut is unauthorised. (TechRadar)

Why the Risk Is Bigger Than It Appears

Unapproved AI use isn’t harmless. It can lead to data leakage, compliance failures, reputational damage, and regulatory risk. Some of the concrete dangers include:

  • Employees uploading sensitive or proprietary data to tools hosted abroad or without clear data-use restrictions. (Ocnus Consulting)
  • Lack of accuracy verification: many users rely on AI outputs without rigorous checks. One survey found that 59 per cent of workers admitted to making mistakes as a result of unverified generative AI output. (Ocnus Consulting)
  • Fragmented tool usage: when employees adopt their own AI solutions, it becomes hard to track and integrate these into formal governance frameworks. (thetimes.com.au)

Given these factors, the threat of shadow AI must be addressed proactively—not merely as a productivity convenience but as a corporate-risk vector.


Actionable Framework: How Organisations Can Respond

Below is a table summarising key steps that organisations can adopt now to mitigate the risks of shadow AI while enabling safe adoption of generative tools.

Action Table

| Step | What to do | Why it matters |
| --- | --- | --- |
| 1. Audit the current AI tool landscape | Identify all AI or GenAI tools in use, both sanctioned and unsanctioned. Map who uses what, where data flows, and under what conditions (a minimal audit sketch follows this table). | Visibility into what is actually happening shows what needs control. Shadow AI thrives in "unknown" corners. |
| 2. Establish and communicate a clear AI policy | Develop a written policy stating which tools are approved, how data may be used, who is responsible, and what training is required. Roll it out company-wide. | A policy sets expectations and ensures employees understand boundaries; no policy means a governance gap. |
| 3. Provide safe, approved alternatives | Rather than forbidding all AI use, make company-approved tools available that meet security, privacy and productivity needs. | If employees have no approved options, they will keep using unapproved ones. Bridging that gap reduces risk. |
| 4. Train and upskill employees | Run training on safe AI use: what data can be entered, how to verify outputs, when to escalate, and who is responsible. | Even the best tools fail without competent users. Training builds trust and accuracy. |
| 5. Monitor and control data flows | Use logging, role-based controls and access governance to manage how data moves into AI tools, especially external ones (see the screening sketch further below). | Data leakage is the core risk. Monitoring ensures data does not slip through unintended pathways. |
| 6. Encourage a culture of reporting and safe experimentation | Create channels where employees can report AI tool use, suggest productivity tools, and experiment in sandboxed environments. | Shadow AI often arises from buried innovation. Harnessing it safely boosts productivity and reduces hidden risk. |
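
To make step 1 concrete, the sketch below scans a web-proxy log export for traffic to well-known consumer GenAI domains and counts distinct users per tool. It is a minimal illustration only: the CSV column names, the "proxy_log.csv" file name and the domain list are assumptions, and a real audit would draw on the organisation's own proxy or DNS telemetry.

```python
# Minimal audit sketch (step 1): flag traffic to public GenAI endpoints
# in a proxy-log export. Assumes a CSV with "user" and "domain" columns;
# the file name and domain list are illustrative, not exhaustive.
import csv
from collections import defaultdict

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def audit_proxy_log(path):
    """Map each flagged GenAI domain to the set of users seen visiting it."""
    usage = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects "user" and "domain" columns
            domain = row["domain"].strip().lower()
            if domain in GENAI_DOMAINS:
                usage[domain].add(row["user"])
    return usage

if __name__ == "__main__":
    for domain, users in sorted(audit_proxy_log("proxy_log.csv").items()):
        print(f"{domain}: {len(users)} distinct users")
```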

Implementing these steps can help organisations turn the tide on shadow AI. The goal is not to eliminate AI use, but to channel it so that data, security and governance remain intact while innovation continues.
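
Beyond perimeter monitoring, step 5 can also be enforced at the point of submission. The hedged sketch below screens outbound text for obviously sensitive markers before it is allowed to reach an external AI tool; the regex patterns and labels are illustrative placeholders, and production data-loss-prevention rules would be far more extensive.

```python
# Illustrative pre-submission screen (step 5): block text carrying obvious
# sensitive markers before it leaves for an external AI tool.
# Example patterns only; real DLP rule sets are far richer and tuned per org.
import re

SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{16}\b"), "possible payment card number"),
    (re.compile(r"\b[A-Z]{2}\d{6}\b"), "possible internal document ID"),
    (re.compile(r"(?i)\b(confidential|internal only|commercial[- ]in[- ]confidence)\b"),
     "classification marker"),
]

def screen_prompt(text):
    """Return the labels of every pattern the text trips; empty means clean."""
    return [label for pattern, label in SENSITIVE_PATTERNS if pattern.search(text)]

findings = screen_prompt("CONFIDENTIAL: Q3 revenue forecast attached")
if findings:
    print("Blocked before submission:", ", ".join(findings))  # escalate, don't send
```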


The Road Ahead: Balancing Innovation and Control

Businesses in Australia are navigating a complex moment. On one hand, generative AI promises efficiency gains and a competitive edge. On the other, the unsupervised use of these tools by employees creates a latent risk that is rapidly escalating.

Many firms make the mistake of viewing shadow AI as simply a discipline issue. But the reality is deeper: a mismatch between employee expectations and corporate infrastructure. As organisations evolve, they must recognise that shadow AI is a symptom of a larger gap — between the tools provided and the tools expected, between governance and agility, between risk-aversion and innovation. (KPMG)

For board-level executives, CISOs, HR leaders and business managers, the message is clear: the arrival of generative AI does not allow for a business-as-usual approach to risk management. Instead, it calls for a revised paradigm, one where innovation is guided, not forbidden; where data flows are secured, not ignored; and where employees are empowered, not demonised.

Concrete steps such as implementing approved AI sandboxes, aligning roles and responsibilities, and revisiting governance structures are part of this new paradigm. Equally important is fostering a culture of transparency around AI use — where employees feel able to disclose tool use, seek guidance, and contribute to governance rather than hide in the shadows.
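
As one illustration of what an approved AI sandbox policy might look like in code, the sketch below maps each tool to the highest data classification it may receive and decides whether a request is allowed, blocked, or escalated. The tool names, tiers and policy table are hypothetical assumptions, not a prescription.

```python
# Hedged sketch of a sandbox gateway policy: each request names the tool
# and the data classification, and the gateway allows, blocks, or escalates.
# Tool names and classification tiers below are hypothetical examples.
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical policy table: the highest classification each tool may receive.
APPROVED_TOOLS = {
    "corp-copilot": DataClass.CONFIDENTIAL,  # enterprise tenant, fully approved
    "sandbox-llm": DataClass.INTERNAL,       # experimentation sandbox only
}

def decide(tool, data):
    ceiling = APPROVED_TOOLS.get(tool)
    if ceiling is None:
        return "escalate: unapproved tool, route to governance review"
    if data.value > ceiling.value:
        return "block: data classification exceeds the tool's approved tier"
    return "allow"

print(decide("sandbox-llm", DataClass.CONFIDENTIAL))  # block
print(decide("corp-copilot", DataClass.INTERNAL))     # allow
print(decide("random-chatbot", DataClass.PUBLIC))     # escalate
```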

In doing so, businesses will not only reduce their exposure to data breaches and regulatory fallout, but they will also build a more resilient, AI-enabled workforce that can leverage generative tools safely and confidently.


Frequently Asked Questions

Q 1: What exactly is "shadow AI"?
Shadow AI refers to the use of artificial intelligence tools—especially generative AI platforms—by employees without formal approval, oversight or governance by their organisation’s IT or security departments. (cacm.acm.org)

Q 2: Why do employees use shadow AI even when official tools exist?
Several reasons: official tools may be slower, harder to access, less familiar; employees may feel pressure to deliver quickly; there may be no clear guidelines; there may be fear of being seen as less productive without AI. Research shows employee convenience and familiarity are key drivers. (KPMG)

Q 3: What are the main risks of shadow AI to businesses?
Key risks include: inadvertent data leakage when sensitive information is input into public AI tools; compliance or regulatory breaches; reliance on output without verification leading to errors; tools operating outside formal security controls; reputational damage. (Ocnus Consulting)

Q 4: Does banning AI use at work solve the problem?
No. Outright bans often fail because employees may continue using tools on personal devices or outside company oversight. The better approach is to provide approved alternatives, training and clear governance. (SmartCompany)

Q 5: What should organisations do to manage shadow AI effectively?
They should audit tool usage, set clear policies, provide secure alternatives, train employees, monitor data flows, and create a culture of transparency and safe experimentation. The action table above gives a detailed step-by-step guide.

Q 6: Are there any specific statistics for Australia?
Yes. For example: 21-27 per cent of Australian white-collar workers report using generative AI tools behind their managers’ backs. (thetimes.com.au) Another study found many workers rely on personal AI tools because corporate tools are not meeting their needs. (Ocnus Consulting)


In summary: the era of employee-managed AI is here, and businesses must adapt. Shadow AI is not just a covert productivity hack; it is a latent risk that demands attention. By implementing thoughtful governance, offering approved tools and engaging employees proactively, organisations can turn a potential threat into a competitive advantage and keep the lights on in an increasingly AI-driven workplace.
