Danger Lurks in Shadow AI

August 07, 2025

In case there were any doubts about privilege and generative AI, OpenAI CEO Sam Altman recently dispelled them. TechCrunch (and numerous other publications) recently quoted Altman speaking on a podcast: “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

What does this mean for companies whose employees are using unauthorized AI tools at work, often referred to as “Shadow AI”? And what does it mean for IT, cybersecurity, compliance, and other areas?

Shadow AI is the newest form of Shadow IT, in which employees use tools and technology that haven’t been vetted and implemented by the organization and therefore fall outside corporate oversight. Shadow AI may be even riskier than traditional Shadow IT channels such as personal email accounts or cloud storage: as Altman noted, conversations with generative AI tools carry no legal privilege or confidentiality protections, so employees risk exposing sensitive company information to systems that offer neither.

The appeal of logging in to ChatGPT and getting an immediate answer to a legal question rather than waiting for lawyers to respond is easy to understand. Yet doing so exposes the organization to risk, much as when employees conduct business over text messaging, WhatsApp, Signal, or Snapchat instead of approved corporate communication channels. The pattern is familiar: when employees bypass approved systems for convenience, they create gaps in oversight that regulators have repeatedly penalized. In 2022, the U.S. Securities and Exchange Commission and the U.S. Commodity Futures Trading Commission fined 11 Wall Street firms more than $1.8 billion for record-keeping failures. The SEC investigation found “pervasive off-channel communications,” and the CFTC pointed to failures to stop employees, “including those at senior levels, from communicating both internally and externally using unapproved communication methods, including messages sent via personal text, WhatsApp or Signal.”

While employees who turn to Shadow IT probably understand, at some level, that they are circumventing their organization’s privacy and retention policies, they are far less likely to be aware of the intricacies of attorney-client privilege or the legal and cybersecurity risks posed by Shadow AI. This leaves chief legal officers and general counsel navigating a difficult dual role: enabling innovation while protecting the enterprise.

In our next post, we’ll highlight the litigation and privacy risks associated with Shadow AI. This is a topic we discuss frequently with our clients as we work together to shape their technology and compliance strategies. Click here to read more about QuisLex Advisory.
