AI Update: AI In Legal Ed, Suing Copilot, OpenAI’s Unique Investment Deals

// Robot thinking on white background

Law schools must acknowledge and respond to the rise of AI in the legal industry by incorporating relevant training into their curricula, argues research fellow and assistant director of the Stanford Program in Law, Science, and Technology and Stanford Center for Legal Informatics (CodeX) Megan Ma in Bloomberg Law. AI education should cover what AI can and cannot do, as well as the ethical responsibilities that fall upon attorneys who use AI, writes Ma.

googletag.cmd.push( function() { // Display ad. googletag.display( "div-id-for-top-300x250" ); });

Bloomberg Law also spoke with Matthew Butterick, a lawyer, typographer, and computer programmer who is a key figure at the center of several IP lawsuits against Copilot — an AI tool produced through collaborations between tech industry giants like Microsoft, OpenAI, and GitHub and trained on open-source code. “My career as a programmer, as a designer, as a writer, I felt like it was over if this AI thing wasn’t addressed,” Butterick said.

The Financial Times explored the details of Microsoft’s multibillion-dollar “minority economic interest” in OpenAI as the two companies come under increased regulatory scrutiny in the UK. OpenAI has a series of unique deals in place with its financial backers, with investors receiving a share of profits through a specific AI subsidiary rather than holding conventional equity in the company. 

Also from across the pond, Britain’s highest court determined that artificial intelligence cannot be legally named as an inventor to secure patent rights, according to The Guardian. The decision came after US-based technologist Dr. Stephen Thaler attempted to list an AI called DABUS as an inventor of a food or drink container and a light beacon, claiming that he was entitled to the rights over the AI’s creations.

googletag.cmd.push( function() { // Display ad. googletag.display( "div-id-for-middle-300x250" ); }); googletag.cmd.push( function() { // Display ad. googletag.display( "div-id-for-storycontent-440x100" ); });

Red Teaming, a popular traditional cybersecurity strategy, could play a significant role in mitigating the risks that come with rapid AI adoption, write Adam Harrison and Nebu Varghese of FTI Consulting for Legaltech News. The strategy involves simulating a cybersecurity attack to spot potential weak points and fine-tune defense protocols.

Ethan Beberness is a Brooklyn-based writer covering legal tech, small law firms, and in-house counsel for Above the Law. His coverage of legal happenings and the legal services industry has appeared in Law360, Bushwick Daily, and elsewhere.

googletag.cmd.push( function() { // Display ad. googletag.display( "div-id-for-bottom-300x250" ); }); CRM BannerCRM Banner Topics

AI Legal Beat, Artificial Intelligence (AI), Ethan Beberness, Technology


Introducing Jobbguru: Your Gateway to Career Success

The ultimate job platform is designed to connect job seekers with their dream career opportunities. Whether you're a recent graduate, a seasoned professional, or someone seeking a career change, Jobbguru provides you with the tools and resources to navigate the job market with ease. 

Take the next step in your career with Jobbguru:

Don't let the perfect job opportunity pass you by. Join Jobbguru today and unlock a world of career possibilities. Start your journey towards professional success and discover your dream job with Jobbguru.

Originally posted on: https://abovethelaw.com/2023/12/ai-update-ai-in-legal-ed-suing-copilot-openais-unique-investment-deals/