Responsible AI Management: A Simple Framework For In-House Lawyers

“There’s no way we can stem the AI tide now,” an in-house lawyer told me. And she’s right. Before large language models brought us tools like OpenAI’s ChatGPT, companies already used AI-enabled technology tools for day-to-day tasks and operations. 

Now, after the release of ChatGPT, we’ve been inundated with information about AI’s potential impact on business and the law. 


“Even if I had the time to wade through it all, how can I determine what’s relevant?” my friend asked. “I just want a simple framework to use as a launching pad for managing AI.” 


To that end, here are three overarching areas of concern in-house lawyers can focus on when developing a proactive approach to AI management.

1. Build AI Products Responsibly


Today’s legal teams must collaborate more closely than ever with product development teams to understand how the company’s products and services incorporate AI and to provide guidance on applicable legal requirements such as data privacy, security, and intellectual property rights. 

For example, exaggerated claims about a product’s AI-driven capabilities can lead to legal issues and a loss of consumer trust. Substantiating claims before they are made is critical to keeping them truthful and accurate. 


A cross-functional team might include legal, engineering, IT, product management, sales, and other relevant stakeholders. The more expertise you can tap into, the more likely you are to find the delicate balance between risk and innovation.

2. Properly Onboard Vendors Of AI-Powered Products And Services

For legal, this involves reviewing licensing and service level agreements for appropriate provisions on risk allocation, liability, and indemnification. As AI use expands, you’ll also want to guide your procurement team to ask vendors how their AI models were created, what data they used, how they obtained permission to use the data, and other questions.


Procurement teams must ensure that vendors don’t use your data in ways that violate your data privacy policies and must protect your and your clients’ data from being misused or stolen.

Like so much else in your role, this requires staying current on the rapidly changing regulatory landscape and legal precedents. Participating in initiatives like the MIT Task Force on Responsible Use of Generative AI for Law can help you incorporate responsible AI management and develop best practices for mitigating AI-related risks and limitations.


3. Implement AI Policies And Training

Guide employees to make choices that withstand public scrutiny and the test of time. Implementing company policies, procedures, and training shows you are dedicated to responsible AI management. 

The critical thing to remember is that policies and programs don’t have to be perfect or all-encompassing right now. You will likely revise them several times as the AI ecosystem evolves. Don’t let that stop you from taking the first step.

Trust Your Legal Experience

You don’t necessarily have to create a whole new set of rules for AI. Start by looking at your existing tools and processes for meeting your standard legal and ethical duties and responsibilities. Many of these can be applied to AI and its use.

Trust in your expertise. After all, you’re in this position because you are a talented lawyer with a proven record of legal and ethical decision-making. Your experience is a powerful tool in helping you be a wise AI leader.

How confident are you in handling AI and law? 

How is AI affecting your company? 

Can we use the same tools to deal with AI as other technologies that came before? Share your thoughts in the comments section below!

Olga V. Mack is the VP at LexisNexis and CEO of Parley Pro, a next-generation contract management company that has pioneered online negotiation technology. Olga embraces legal innovation and has dedicated her career to improving and shaping the future of law. She is convinced that the legal profession will emerge even stronger, more resilient, and more inclusive than before by embracing technology. Olga is also an award-winning general counsel, operations professional, startup advisor, public speaker, adjunct professor, and entrepreneur. She founded the Women Serve on Boards movement that advocates for women to participate on corporate boards of Fortune 500 companies. She authored Get on Board: Earning Your Ticket to a Corporate Board Seat, Fundamentals of Smart Contract Security, and Blockchain Value: Transforming Business Models, Society, and Communities. She is working on Visual IQ for Lawyers, her next book (ABA 2023). You can follow Olga on Twitter @olgavmack.



Originally posted on: https://abovethelaw.com/2023/05/responsible-ai-management-a-simple-framework-for-in-house-lawyers/