AI in Europe: An Evolving Landscape

What is "AI," how is it being regulated in Europe, and what does the legal future hold for AI-powered software like CLM?

Whether it’s detecting a visitor at your door with your Ring doorbell, identifying a song on the radio with your Shazam app, or unlocking your phone with facial recognition software – artificial intelligence (AI) is everywhere in our lives. What was once touted as the future is now the reality of today.

As with most major technological advancements, the explosion of AI in our daily lives raises a host of important questions and possibilities, not just for individuals, but also for businesses and organisations. Although there is much to celebrate about the emergence of AI, it also has the potential to pose substantial risks, like magnifying inherent bias, fetching inaccurate data, or, worse, making fatal safety errors. This has led to an emerging regulatory landscape for AI, particularly within the European Union. With it comes a heightened cost and risk of complying with AI regulations, and the threat of missed obligations for organisations of every shape and size.

Let’s start with the basics: what “AI” actually is, how it is being regulated in Europe, and what the legal future holds for AI-powered software like contract lifecycle management (CLM).   

Types of AI

Artificial intelligence,” also known simply as AI, is an umbrella term for a variety of different technologies, including machine learning; natural language processing; review, analysis and extraction; and creative generation.  

The four main types of AI are: 

  • Machine learning (ML): Algorithms that learn patterns from data through user-led training.  
  • Natural language processing (NLP): Extracting intent from unstructured user requests.  
  • Review, analysis & extraction: Using ML algorithms to find, review, and highlight meaningful data within a given data set.
  • Generative AI: Using ML algorithms and deep learning to create new content. A popular example of this is ChatGPT.

How AI Impacts Legal Agreements

Manual contracting and review are time-consuming processes that are prone to human error. CLM systems powered by AI, however, are eliminating manual tasks and changing the way contracting professionals handle contract creation and negotiation.  

The four main ways AI will affect legal agreements are: 

  • Machine learning (ML): Currently, the most common offerings on the marketplace are generic algorithms trained on publicly available files, relying on the judgment of whoever labelled each instance of a given datum. This creates broadly accurate algorithms: machine learning experts estimate accuracy rates of 70%–90% for good-to-excellent algorithms.   
  • Natural language processing (NLP): Just as Bing and Google make searching the Internet easier, CLM vendors are improving access to contract data. Thanks to NLP, authorised non-legal individuals can find contracts – and extract valuable information from within them – without having to learn how to construct jargon-heavy searches. 
  • Review, analysis & extraction: Initial reviews use ML-generated algorithms to capture key terms and clauses, highlight them for further analysis, and extract them as easily reportable metadata. This reduces the time to “onboard” a contract at the front end, freeing contracting professionals to focus on strategy and negotiations, and makes the vital information in stored contracts easy to analyse, report on, and use for strategic purposes. 
  • Generative AI: Although AI still struggles with effective creative generation, there are libraries of clauses that can be connected to one another to create a complete contract. Future iterations of AI may be able to intuit from a natural language request which clauses need to be combined to make a proper contract. 
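To make the review-and-extraction step above concrete, here is a minimal sketch in Python. It uses simple pattern matching as a stand-in for a trained ML model; the clause names and patterns are hypothetical illustrations, not any vendor's actual algorithms:

```python
import re

# Toy patterns standing in for a trained extraction model (hypothetical examples).
CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws of ([A-Z][\w\s]+?)[.,]"),
    "term_months": re.compile(r"term of (\d+) months"),
    "indemnity": re.compile(r"\bindemnif(?:y|ies|ication)\b", re.IGNORECASE),
}

def extract_metadata(contract_text: str) -> dict:
    """Scan a contract and return easily reportable metadata per clause type."""
    metadata = {}
    for name, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            # Capture the matched value if the pattern has a group, else flag presence.
            metadata[name] = match.group(1) if pattern.groups else True
    return metadata

sample = ("This Agreement shall be governed by the laws of England and Wales. "
          "It has an initial term of 24 months. Each party shall indemnify the other.")
print(extract_metadata(sample))
```

A production system would replace the regular expressions with trained models, but the output shape – structured, reportable metadata pulled from unstructured text – is the same.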

Evolving Regulations in Europe

The European Commission proposed the “Artificial Intelligence Act,” the first regulatory framework for AI, in April 2021. The framework proposes that different applications of AI should be analysed and classified according to the risk they pose to users. The risk assessment of each application will determine the level of regulatory requirements imposed. 

For example, a “high-risk” application of AI would be anything related to safety – such as automobiles, medical devices, or elevators. All high-risk AI systems, according to this framework, would need to be assessed before being put on the market, and constantly reevaluated throughout their lifecycle.  

It is not yet clear how regulators will place AI-powered CLM on this risk spectrum, but legal software is not named specifically in the EU’s product safety legislation, nor in the eight other specific use cases outlined in the proposed regulations. It is up to each organisation to balance the risk of possible regulation against the risk of not innovating with AI to transform contracting. In our view, regulations can be managed with a reputable technology partner, but the risk of not innovating in a space that’s moving so fast is far greater. 

It’s worth noting that existing EU frameworks such as the General Data Protection Regulation (GDPR) stipulate that people cannot be subject to a decision with a legal impact on them if that decision is made solely by automated processes, such as AI. Of course, legal agreements aren’t finalised by automated processes, as they are still negotiated and signed by real people. As such, the contracting process likely wouldn’t be covered by these restrictions.

The Artificial Intelligence Act is expected to be finalised by the end of calendar year 2023. Once approved, it will provide the world’s first comprehensive rules on AI, creating a clear set of rules of the road for all as AI innovation accelerates. 

The Biggest Risks of AI

As with any emerging technology, there are risks and limitations associated with using AI.  

For example, generative AI projects still suffer from problems like catastrophic forgetting, in which newly learned information overwrites what the model previously knew, and a lack of transparency, because the AI can’t explain why it made a particular decision. 

Another risk, particularly for European users, relates to language. AI models are built on the natural language of the content they were trained on, which means that sentence structure, grammar, spelling and the characters used (e.g. é, ß, ø, å) can all affect a model’s accuracy. Most AI models for CLM are trained on American English and U.S. law documents, so they may perform extremely poorly when applied to other languages or dialects.  

Some CLM vendors offer to translate documents into English before applying their AI capabilities. This creates the risk of errors being introduced in translation, which in turn can corrupt the AI’s output. For example, the term “bug,” when used in a software contract, could be translated as “insect,” leading to all manner of potential challenges. Before implementing any AI solution, organisations should fully research which languages the models are trained on, and whether they will be able to train the models on their own contract data to improve accuracy.  

Lastly, AI relies on content created by humans, which inevitably carries their biases. This was readily apparent in the resume-sorting algorithm Amazon used for a short time: a tool meant to cut down on manual resume processing produced starkly sexist outcomes for female applicants.  

Opportunities for AI in Legal Environments

To truly harness the power of AI while mitigating risk, it’s critical that legal departments feel empowered to take control of their AI environments, finding an acceptable balance between risk and reward. 

Self-trained AI models bring controls “inside the tent” of legal departments. Legal professionals use their own internal files to “train” the AI models, identifying key terms and clauses that are uniquely valuable to their organisation and industry. For example, a biotech organisation might need to know not just whether a contract has an indemnity clause, but whether it has a specifically formulated pharma R&D indemnity clause.  
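As a toy illustration of that idea, the sketch below “trains” on a handful of internal clause snippets and labels a new clause by its similarity to them. The clause categories and text are invented for illustration; real CLM platforms use far more sophisticated models trained on far more data:

```python
from collections import Counter
import math

# Hypothetical internal clause snippets used as "training" data; a real system
# would use the organisation's own contract library and a proper ML model.
TRAINING = {
    "pharma_rd_indemnity": [
        "sponsor shall indemnify the institution against claims arising from the study drug",
        "indemnification for losses arising out of clinical research activities",
    ],
    "confidentiality": [
        "each party shall keep the confidential information of the other party secret",
        "recipient shall not disclose confidential information to any third party",
    ],
}

def vectorise(text: str) -> Counter:
    """Bag-of-words term counts for a clause."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(clause: str) -> str:
    """Label a new clause by its most similar training example."""
    vec = vectorise(clause)
    best_label, best_score = "", 0.0
    for label, examples in TRAINING.items():
        for example in examples:
            score = cosine(vec, vectorise(example))
            if score > best_score:
                best_label, best_score = label, score
    return best_label

print(classify("the institution shall be indemnified for claims arising from the study drug"))
```

The point of the exercise is the training data, not the algorithm: because the examples come from the organisation’s own contracts, the labels reflect distinctions – such as a pharma R&D indemnity versus a generic one – that a generic, publicly trained model would miss.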

When implemented in a targeted, context-specific way, AI can virtually clone the best of a group’s resources, becoming a force multiplier that allows an organisation’s staff to get most repetitive tasks done quickly and efficiently, freeing them to focus on negotiating the best deals at the lowest possible levels of risk.

Conclusion

The era of artificial intelligence is here in Europe, bringing with it exciting opportunities and challenging risks. AI is poised to transform contract management, especially when embedded in a sophisticated CLM system. As with any emerging technology, the regulatory considerations are still in flux, bringing a range of challenges that will settle only with time, especially in the EU. Yet, for all those challenges, the opportunity to do more with less, using bespoke algorithms built by the very best practitioners, will surely transform the contracting process. 
