Generative AI has moved from a future consideration to a clear boardroom priority for legal teams. Using generative AI in a company carries significant risk, and boards are looking to legal counsel to understand and manage it.
I recently participated in a webinar on what GCs need to know about AI transformation in 2026, where I was joined by two expert panelists with deep, high-profile experience shaping AI, privacy, and governance at some of the world’s most recognized companies.
Jim Shaughnessy is the Chief Legal Officer at Docusign, where he oversees the company’s global legal and corporate affairs functions. He has more than 30 years of experience, including a decade in senior leadership roles at Workday.
Nuala O’Connor is a global Fortune 10 technology governance leader and former SVP and Chief Counsel for Digital Citizenship at Walmart, where she led both legal and operational AI governance. She previously developed Amazon’s early global privacy and compliance strategy and served as the first Chief Privacy Officer at the U.S. Department of Homeland Security, among other roles.
They offered practical, actionable advice for GCs navigating AI.
10 Tips for GCs To Lead AI Transformation in 2026
Create a Safe Test Environment for Experimentation (With Reminders and Guidance)
Give employees a sanctioned place to try generative AI so experimentation happens inside your controls, not out on public models. Couple the sandbox with short, role-specific rules and just-in-time reminders in the interface so people get guidance where they work.
100%, the ‘walled garden approach’ is the right step for where we are right now in the evolution of this technology.
Nuala O’Connor
Formerly Walmart
Give Employees AI Tools They'll Actually Use
If your team lacks easy, approved tools, employees will reach for consumer apps on their phones. As our whitepaper on AI maturity found, surveys show that a large percentage of employees knowingly use AI improperly, including tools their companies have banned.
If you don’t, they’ll use their phones outside the firewall and they’ll put in sensitive data … and it’s gone.
Jim Shaughnessy
Chief Legal Officer at Docusign
Address Fear Directly and Honestly
AI adoption is emotional: people worry about job security. Make sure you confront those concerns openly.
The people in the legal and compliance and operations departments who were the most tech savvy still had discomfort. I said, ‘as long as you're willing to work hard and learn new things, there will always be a job for you.’
Nuala O’Connor
Formerly Walmart
Keep Humans in the Loop for All Consequential Outputs
Treat generative AI as a drafting assistant, not an approver; require human review for anything that has legal, regulatory, or external impact. Put approval gates and clear sign-off rules in place so responsibility is never ambiguous.
For things that matter, you have humans in the loop [for AI output] in the right time, in the right place … think of it as a good first draft.
Jim Shaughnessy
Chief Legal Officer at Docusign
Ask Vendors the Hard Questions
Vendor demos can dazzle, but demand proof. Require evidence on data provenance, auditability, hallucination controls, update cadence, and governance before procurement.
Neither be afraid nor bamboozled. Ask the hard questions, ask the dumb questions.
Nuala O’Connor
Formerly Walmart
Know When Not To Use Generative AI
Generative models are probabilistic and can return different answers to the same prompt. For workflows that must produce identical results every time, use deterministic or retrieval-based systems instead. Map your use cases by the level of repeatability they require and choose the technology accordingly.
If you need the same answer every single time, you want AI, but you don’t need it to be generative.
Nuala O’Connor
Formerly Walmart
Treat Hallucinations as Legal Liabilities
Generative models can produce false citations and invented facts, and these hallucinations can create reputational or legal exposure. Test for them, require human fact-checking, and log outputs so you can trace and correct errors.
And it happens more often than people would think.
Jim Shaughnessy
Chief Legal Officer at Docusign
Set a Clear, Memorable Rule for AI Inputs
Protect sensitive information by enforcing a simple input rule. If you wouldn’t want it published or leaked, don’t paste it into AI.
Don’t put anything there that you don’t want to see on the front cover of the New York Times.
Nuala O’Connor
Formerly Walmart
Prepare for a Multi-Agent Future
Expect a diverse ecosystem of agents and embedded AIs across vendors and applications. Governance will need to scale across your entire AI stack, with clear ownership, data-flow mapping, and oversight.
You're going to have agents from Salesforce or agents from Workday or agents from Docusign ... interacting with another system and it may end up being evaluated by other agents. There's going to be a lot of complexity in these systems.
Jim Shaughnessy
Chief Legal Officer at Docusign
Start Small With Low-Risk Internal Agents
Pilot low-risk agents like knowledge-retrieval or contract lookup tools to deliver quick value and build governance muscle memory. Use those pilots to refine accuracy requirements, review workflows, and adoption metrics before expanding.
The low-risk internal … ‘Has this contract been written before?’ … ‘What’s our policy on X?’ … I think it’s a great way to start out.
Nuala O’Connor
Formerly Walmart
For more in-depth insights, examples, and discussion from the panel, watch the full webinar.
Webinar
AI Transformation: What Every GC Needs to Know in 2026