Several top law firms are turning to specialists to beef up their artificial intelligence compliance practices in a way they wouldn’t with more established areas of law.
They’re hiring data scientists and technologists as they test clients’ systems for bias, ensure compliance with emerging regulations and rethink their own legal offerings, which may themselves be enhanced through use of AI.
The emerging field, which has captured the popular imagination with AI’s often lifelike behavior, also gives rise to potential legal snags.
“The legal and the technological issues are inextricably intertwined, and we believed, over five years ago when we launched the practice, that to truly be an AI practice, you needed legal and computational understanding,” said Danny Tobey, partner and global co-chair of the AI and data analytics practice at DLA Piper.
Unlike other areas of law, such as environmental regulation or automotive safety, where legal experts routinely handle intricate details, AI poses unique challenges that require technologists’ expertise, Tobey said.
“AI is unique because we’re not just talking about an incredibly complex and novel technology that is developing every day, but at the same time we are rewiring the infrastructure of how we practice law,” Tobey said in an interview. “A true AI practice combines both legal and computational skill sets.”
DLA Piper is among many multinational firms employing this strategy. Faegre Drinker has a subsidiary called Tritura that employs data scientists to advise clients on using AI, machine learning and other algorithm-driven technologies, according to its website. DLA Piper, which has 23 data scientists on staff, confirmed it hired 10 data scientists away from Faegre Drinker last year.
Faegre Drinker did not respond to emails seeking comment.
Others employ technologists as they incorporate AI into their own practices.
A&O Shearman announced last year that it had launched an AI tool called Harvey, built using OpenAI’s ChatGPT platform, which could “automate and enhance various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance.”
Clifford Chance said in February that it had deployed an in-house AI tool called Clifford Chance Assist that was developed on Microsoft’s Azure OpenAI platform. The tool would be used to automate routine tasks and improve productivity, the firm said.
“Teams of legal technologists in the U.S. and globally are thinking through what automation and AI solutions may be helpful for us as legal professionals,” Inna Jackson, technology and innovation attorney for the Americas at Clifford Chance, said in an interview.
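Neither firm has published implementation details, but tools built on Azure OpenAI typically wrap the service’s chat completions API. Below is a minimal sketch of that pattern using the public openai Python SDK; the endpoint, deployment name and prompt are hypothetical placeholders, not Clifford Chance’s actual code.

```python
# Hypothetical sketch of routine-task automation on Azure OpenAI.
# The endpoint, key, deployment name and prompt are placeholders for
# illustration; this is not Clifford Chance's actual implementation.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarize_contract(text: str) -> str:
    """Ask a deployed model for a short summary of a contract's key obligations."""
    response = client.chat.completions.create(
        model="my-gpt4-deployment",  # the Azure deployment name, chosen at deploy time
        messages=[
            {"role": "system",
             "content": "You summarize legal documents for internal review."},
            {"role": "user",
             "content": f"List the key obligations in this contract:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_contract("The Supplier shall deliver the Goods within 30 days ..."))
```

Production tools of the kind the firms describe would layer document retrieval, access controls and lawyer review on top of a basic call like this.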
Red teaming and governance
To help clients figure out whether their AI models perform within the bounds of regulations and laws, DLA Piper routinely employs so-called red teaming – a practice in which specialists simulate attacks on physical or digital systems to see how they hold up.
“We’re working with a major retailer on testing various facial recognition solutions to make sure not only are they living up to their technical promise, but are they legally compliant and in line with the latest pronouncements from federal agencies and AI-related legislation,” Tobey said.
He noted that companies are rapidly incorporating AI in human resources as well, “from hiring to promotion to termination.”
“It is an incredibly regulated and fraught area that raises the risk of algorithmic bias and discrimination,” he said.
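Bias testing of the kind Tobey describes often comes down to comparing a model’s error rates across demographic groups. The sketch below is a hypothetical illustration of one such check, comparing false match rates against the “four-fifths” disparity threshold used in U.S. hiring analyses; the group labels, scores and thresholds are invented, not any firm’s methodology.

```python
# Hypothetical illustration of a disparate-impact check on a face-matching
# model. Group labels, scores, the match threshold and the four-fifths rule
# applied here are assumptions for the sketch, not any firm's methodology.
from collections import defaultdict

def false_match_rates(trials, threshold=0.8):
    """trials: (group, similarity_score, is_same_person) tuples from test pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [false matches, impostor pairs]
    for group, score, is_same_person in trials:
        if not is_same_person:            # impostor pair: a "match" here is an error
            counts[group][1] += 1
            if score >= threshold:
                counts[group][0] += 1
    return {g: fm / n for g, (fm, n) in counts.items() if n}

def flag_disparity(rates, ratio=0.8):
    """Flag groups whose false match rate breaches the four-fifths threshold."""
    best = min(rates.values())
    return {g: r for g, r in rates.items()
            if r > 0 and (best == 0 or r / best > 1 / ratio)}

if __name__ == "__main__":
    # Toy impostor comparisons: (demographic group, model score, ground truth)
    data = [("group_a", 0.85, False), ("group_a", 0.40, False),
            ("group_a", 0.30, False), ("group_b", 0.90, False),
            ("group_b", 0.82, False), ("group_b", 0.75, False)]
    rates = false_match_rates(data)
    print("False match rate by group:", rates)            # a: 0.33, b: 0.67
    print("Flagged for disparity:", flag_disparity(rates))  # group_b
```

A real red-team exercise would go further, probing adversarial inputs, spoofing and data drift alongside the legal analysis Tobey describes.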
Clients large and small are looking for the proper controls, Jackson said.
Large clients “are interested in figuring out what is the right governance model to use in deploying AI, in building AI, in partnering for AI,” Jackson said. Smaller clients, meanwhile, are likely building governance practices from the ground up, she said.
“And by governance I mean processes, controls, thinking through laws and regulations that may apply, best practices that may apply,” Jackson said. “So everybody’s thinking through the best ways to approach AI.”
DLA Piper and Clifford Chance were among 280 organizations chosen to participate in the Artificial Intelligence Safety Institute Consortium, part of the National Institute of Standards and Technology.
The goal is to develop “science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world,” according to the AI Safety Institute.
Although Congress has yet to pass any broad legislation covering AI use, the European Union’s AI Act, which took effect in August, applies to multinational corporations that deploy AI systems if those systems are used to make decisions affecting EU citizens, Clifford Chance said in an advisory to clients.
The EU law, which prohibits discrimination and bias, “will have a significant impact on employers and HR professionals who use, or plan to use, AI systems in their operations, recruitment, performance evaluation, talent management and workforce monitoring,” Clifford Chance said.
“Clients with a global presence in particular want to know how to think about EU AI Act applicability to their operations, not just in the EU, but maybe broadly outside of the EU as well,” Jackson said. Clients are seeking advice on creating one set of practices that would be acceptable across jurisdictions “because a segmented approach per market obviously wouldn’t be practical,” she said.
Companies also are trying to figure out what AI guardrails will be enacted in the United States, said Tony Samp, head of AI policy at DLA Piper.
“With each company that our data analysts, red-teamers and attorneys work with, there is a parallel need for them to understand the AI regulatory landscape in Washington, D.C., and the direction of congressional interest,” Samp said in an email.
Samp was previously senior adviser to Sen. Martin Heinrich, D-N.M., one of the four lawmakers tapped by Senate Majority Leader Charles E. Schumer to draw up a report on AI innovation and regulation.
Samp said the law firm recently hired former Sen. Richard M. Burr of North Carolina, a Republican who chaired the Intelligence Committee, to advise clients on the direction that U.S. legislation on AI could take.