Shadow AI Means Corporate Risk and Sleepless Nights
August 22, 2025
Welcome to Part 3 of our series on Shadow AI. In the first installment, we defined Shadow AI and the risks that the unauthorized use of artificial intelligence tools presents. In our second post, we took a deeper dive into the trifecta of threats that Shadow AI poses by undermining regulatory compliance, weakening security, and causing reputational damage.
Along with the risks outlined in our previous post, the use of unauthorized AI tools raises contractual, regulatory, and intellectual property risks, as well as exposure at the board level. It is just the kind of thing that keeps executives up at night.
Contractual Risks
Shadow AI usage can put companies at risk of breaching confidentiality clauses or vendor and customer terms. Many NDAs and master service agreements (MSAs) contain explicit provisions about how data may be stored and with whom it may be shared. When information covered by these agreements is uploaded to a Shadow AI tool, those provisions may be violated.
The risk extends further into data processing agreements (DPAs), which require strict control over subprocessors and formal approval for any new technologies that handle personal data. If personal or sensitive data is routed through an unapproved AI tool, the company has effectively introduced an unauthorized subprocessor and breached the DPA.
Customer contracts and terms of service are another source of exposure. These agreements often contain warranties that data will be processed only in secure, governed environments. Even where personal data is not involved, uploading customer information to a Shadow AI tool may breach those warranties and trigger indemnification or damages clauses.
Finally, regulated industry agreements such as those in financial services, healthcare, or insurance frequently impose heightened requirements for traceability, audit rights, and compliance with sector-specific standards. The undocumented use of Shadow AI can undermine these obligations, leading to breach claims that are compounded by regulatory scrutiny tied to contractual commitments.
These risks highlight that Shadow AI exposure is not limited to confidentiality breaches but extends across the full spectrum of contractual obligations that organizations rely on in their operations.
Regulatory Risks
Shadow AI presents risk on several regulatory fronts, including new AI-specific laws and their overlap with existing privacy and security laws and enforcement.
When it comes to regulations targeted at AI, regulatory scrutiny is primed to increase next year with several significant AI-specific regulations becoming enforceable. The EU AI Act’s main provisions, setting out the obligations and rules for providers, deployers, and other operators of high-risk and general-purpose AI (GPAI) systems, become enforceable on Aug. 2, 2026. This phase of implementation includes mandated risk classification, conformity assessments, documentation requirements, and human oversight for AI use. Organizations using Shadow AI cannot meet the obligations of this phase of the EU AI Act because there is no traceability or governance of the data being used.

In the United States, a similar wave of oversight will arrive in 2026, when new state laws in California and Colorado take effect. These laws require companies to demonstrate transparency, conduct bias testing, and disclose risks associated with high-risk AI systems. The use of unauthorized AI tools makes compliance with these requirements impossible and may expose organizations to fines or enforcement actions from state regulatory authorities.
As more AI-specific laws come into force, they increasingly overlap with existing privacy and security laws. Noncompliance in one area often cascades into noncompliance in others, creating a compounded risk. For example, when employees use Shadow AI tools that process personal data, the activity occurs outside of official data inventories. This makes it impossible to comply with GDPR Art. 30, which requires organizations to maintain records of all processing activities. The lack of documentation covering Shadow AI tools likely also violates cybersecurity law, because undocumented tools bypass the security monitoring and incident reporting protocols mandated under NIS2.

There are dozens of global data protection laws mandating demonstrable compliance, including audit trails, risk assessments, and due diligence, none of which can be satisfied without inventorying the tools in use and mapping their data flows. Cross-regulatory investigations are also gaining momentum as an enforcement trend; a single Shadow AI-related incident can trigger investigations by both privacy regulators and cybersecurity authorities.
Shadow AI turns AI tools from a manageable internal risk into a regulatory tinderbox. Organizations must inventory, assess, and govern all AI tools in use, whether adopted formally or informally, to avoid a potential avalanche of compliance failures.
IP Risks
The risks to intellectual property from Shadow AI are also significant. Beyond leaking proprietary information by mistakenly or carelessly uploading it into a consumer AI tool, Shadow AI raises the potential for IP contamination, and there are significant open questions about who exactly owns AI outputs. Infringement concerns are also mounting. In February, in Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., Judge Stephanos Bibas, sitting by designation from the Third Circuit in the U.S. District Court for the District of Delaware, rejected AI startup Ross’s defense that its use of Thomson Reuters’ copyrighted material was protected by the fair use doctrine. The risk of similar infringement exposure is compounded in the era of Shadow AI.
Additional Risks
Shadow AI also exposes organizations to reputational and ethical risks that heighten scrutiny at the board level. Even if the use of Shadow AI doesn’t rise to the level of legal or regulatory risk, customers, partners, vendors, and others may have serious concerns about the data-handling practices and overall trustworthiness of organizations that fail to rein in its use.
Prominent standards bodies, such as the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST), have also recognized the risks AI poses and have acted accordingly. ISO released ISO/IEC 42001, the world’s first AI management system standard, and NIST followed a similar path with its AI Risk Management Framework. Organizations face growing pressure not only to meet traditional compliance standards, but also to adopt AI-specific frameworks in order to systematically mitigate the risks that AI poses.
In our next blog post, we’ll talk about moving from risk to action. If you have questions about Shadow AI in the meantime, QuisLex Advisory is here to help.