Moving from Risk to Action: Balancing AI and HI
August 28, 2025
Thank you for following our four-part series on Shadow AI. So far, we've discussed the risks of Shadow AI in Part 1; taken a close look at the threats it poses to compliance, security, and reputation in Part 2; and explored in Part 3 more of the ways Shadow AI keeps executives up at night, from contract and regulatory risks to IP concerns and exposure at the board level. Here, we describe how to begin balancing artificial intelligence with human intelligence so organizations can benefit most from both.
The risks of Shadow AI are real, and it's not just organizational leaders who recognize them; employees do as well. According to a report from Gartner, more than 90% of workers who use unsecured tools and processes recognize the risks but aren't willing to stop using them. This could be for a variety of reasons. Some may not appreciate how much harm Shadow AI can cause, especially given the ubiquity of some of these tools. Others may worry that the tools make them look less vital to the organization and therefore prefer to keep their use under wraps. According to findings from Microsoft, among those who use AI on major tasks at work, slightly more than half (53%) worry that the tools could make them seem replaceable.
It might seem easier for legal departments to ban AI tools outright than to combat unauthorized use. However, a ban isn't just unrealistic; it also puts general counsel at odds with other stakeholders eager to take advantage of the efficiencies and insights that AI provides. Instead, they must find the right balance between AI enablement and human oversight: in other words, a balance between Human Intelligence and Artificial Intelligence. Doing so may include the following steps:
1. Inventory AI Use: Identify all tools employees are using, both approved and unapproved.
2. Prioritize Risks: Focus on the highest-risk activities based on data sensitivity and regulatory exposure.
3. Set Interim Guardrails: Issue quick guidance and provide secure alternatives while long-term solutions are developed.
4. Integrate Governance: Fold AI oversight into existing compliance, risk, and security frameworks.
5. Train Employees: Build awareness of risks and reinforce the safe, approved ways to use AI tools.
6. Update Continuously: Revisit policies and controls as laws evolve, and especially as new AI tools emerge.
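To make the first two steps concrete, the inventory and risk-prioritization exercise can be sketched as a simple scoring pass. This is a minimal illustration, not a prescribed methodology: the tool names, risk factors, and weights below are all hypothetical, and a real program would draw on the organization's own data-classification and regulatory criteria.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One entry in the Shadow AI inventory."""
    name: str
    approved: bool                 # sanctioned by the organization?
    handles_sensitive_data: bool   # touches confidential or personal data?
    regulated_use: bool            # falls under privacy or sector rules?

def risk_score(tool: AITool) -> int:
    """Illustrative scoring: unapproved use, sensitive data, and
    regulatory exposure each add weight. Weights are assumptions."""
    score = 0
    if not tool.approved:
        score += 3
    if tool.handles_sensitive_data:
        score += 2
    if tool.regulated_use:
        score += 2
    return score

# Hypothetical inventory gathered in the "Inventory AI Use" step
inventory = [
    AITool("approved-drafting-assistant", True, False, False),
    AITool("personal-chatbot-account", False, True, True),
    AITool("unvetted-summarizer-plugin", False, True, False),
]

# "Prioritize Risks": address the highest-scoring tools first
for tool in sorted(inventory, key=risk_score, reverse=True):
    print(f"{tool.name}: risk {risk_score(tool)}")
```

Even a rough ranking like this gives legal, IT, and security teams a shared starting point for deciding which tools need interim guardrails first.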
Balancing AI and HI is not just a legal priority. It requires close coordination with IT, cybersecurity, HR, and business leaders to ensure policies are workable, risks are managed, and employees have safe, approved alternatives. Regulators, courts, and even contractual counterparties are already demanding evidence of governance, making cross-functional collaboration essential.
Shadow AI is only the beginning. As AI becomes more deeply embedded into business workflows, legal departments that act now will be better positioned to guide enterprise-wide adoption. The opportunity for legal and compliance leaders is not only to mitigate risk but also to shape how their organizations adopt AI responsibly and effectively.
This is where QuisLex Advisory can help. We advise GCs and CCOs both about instituting appropriate business controls to address enterprise risks and enabling AI adoption within their own legal departments. We work directly with legal leadership to find the right balance between AI and human intelligence, ensuring legal teams themselves are equipped for safe and effective AI use while supporting the business. Contact us to learn more.