AI isn’t magic (and no, it’s not killing CLM)
AI isn’t killing CLM or SaaS. It’s evolving them. Here’s what Anthropic’s legal AI reveal really means for legal teams today.
Every few months, the tech hype cycle delivers a familiar ritual: A big AI company ships a new feature. Markets panic. AI or legal tech industry pundits declare the death of SaaS/CLM/something. Influencers on social media confidently announce that this time, CLM, CRM, ERP, or “insert your choice of enterprise software of the week” is truly finished.
Anthropic’s recent legal plug-in product reveal triggered exactly that reaction. Software stocks slid. Think-pieces on industry publications and social media networks multiplied. “AI will replace lawyers” was a hot take officially back on the menu.
It’s no secret that I work for a CLM vendor, but let me take off my Agiloft Head of Community hat for a few minutes and put on my customer/purchaser hat to respond to this market move.
My take on this (and almost every other AI announcement)? As Shania Twain would say, “That don’t impress me much.” I propose that we all slow down and take a deep breath. Not because AI isn’t powerful. It is. But because most of the discourse around it is sloppy, imprecise, and allergic to reality.
Let’s dive in.
First, a quick AI vocab lesson
I’ll start with a confession. I hate the term “Artificial Intelligence.” It’s cinematic, not descriptive. What we call AI today isn’t intelligence in the human sense.
It doesn’t actually understand. It doesn’t actually reason. It doesn’t know what’s true. What we’re actually talking about is a bundle of technologies, each good at very specific things.
Consider the nuance between the following:
- Machine Learning (ML) is pattern recognition. Common applications of this technology include spam filters, fraud detection, and recommendations. It’s useful, boring, and everywhere.
- Deep Learning is ML at scale. More data, more computing power, and more complexity. Deep Learning is great for vision, speech, and language. This technology is harder to explain, and more expensive to run.
- Large Language Models (LLMs) sit on top of that Machine- and Deep-Learning stack. They predict the next token in a sequence. That’s it. Everything impressive you see is built on that very simple mechanic. LLMs are excellent at summarization, drafting, and producing outputs that look like reasoning. They are also very good at being (confidently) wrong. There is no built-in truth, no native understanding, no guarantee of correctness. Hallucinations aren’t a bug; they are inherent to the structure.
- Generative AI (GenAI) is a term oft-adopted by tech marketers to describe systems that create new content. This includes text, images, code, and audio. Important note: If a tool extracts data from a contract, that’s not generative AI. If it summarizes or drafts based on that data, that is indeed generative AI.
- Agentic AI is where things get interesting, and also wildly overstated for the vast majority of the CLM market. Although these agents are marketed as fully “autonomous,” they do not actually think, nor do they plan like humans. They’re often LLMs connected to external tools, logic, and feedback loops. They interpret inputs, take actions, observe results, and repeat. (If this sounds suspiciously like workflow automation with a better UX, congratulations. You’re paying attention!)
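To make “predict the next token” concrete, here is a toy bigram model in Python. Real LLMs use neural networks trained on vast corpora, but the core task is the same: given context, output a likely next token. The corpus and helper names here are purely illustrative.

```python
# "Predict the next token" in miniature: a bigram frequency model.
# Given a word, return the word that most often follows it in the corpus.
from collections import Counter, defaultdict

corpus = "the party shall pay the fee and the party shall notify".split()

# Count which word follows which.
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def predict(token):
    # Return the most frequent follower of `token`.
    return nexts[token].most_common(1)[0][0]

print(predict("the"))  # "party" — it follows "the" twice, "fee" only once
```

Note there is no notion of truth anywhere in this loop: the model emits whatever is statistically likely, which is exactly why fluent-but-wrong output is structural, not a glitch.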
AI is very good at language-heavy, repetitive, and high-volume work. It is, however, far less suited to judgment, nuance, and accountability.
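The agentic loop described above — interpret, act, observe, repeat — can be sketched in a few lines. Everything here (the `call_llm` stand-in, the tool names) is hypothetical; the point is that “agentic” is a bounded control loop around a model, not magic.

```python
# A minimal "agentic" loop: a model picks a tool, we run it, feed the
# result back into the history, and repeat until the model says done.
# `call_llm` is a fake stand-in, not a real API.

def call_llm(history):
    # Pretend model: look up a clause on the first pass, then finish.
    if len(history) < 2:
        return ("lookup", "limitation of liability")
    return ("done", "Summary ready.")

TOOLS = {
    "lookup": lambda query: f"Found clause matching: {query}",
}

def run_agent(task, max_steps=5):
    history = [f"Task: {task}"]
    for _ in range(max_steps):           # feedback loop, bounded
        action, arg = call_llm(history)  # interpret input, choose an action
        if action == "done":
            return arg
        result = TOOLS[action](arg)      # take the action
        history.append(result)           # observe the result, then repeat
    return "Gave up after max_steps."

print(run_agent("Summarize the liability clause"))
```

Strip away the branding and this is workflow automation with a language model deciding the branches — useful, but not a thinking coworker.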
What Anthropic really announced last week
(and why everyone lost their minds)
Anthropic launched a tool with plugins spanning several areas and practices, including one tailored specifically for legal work. The idea is an AI “coworker” that can read files, organize information, draft documents, and assist with workflows.
The legal tech and wider software markets reacted as if Anthropic was the first to discover SaaS and contracts. Software stocks fell. Headlines declared a reckoning. Investors started asking why anyone would pay for enterprise software if AI can just “do it all.”
Here’s the thing: The tool announced is not conceptually new to legal operations professionals. Contract review. Drafting assistance. Risk flagging. Data extraction. These capabilities have existed in legal tech for years, increasingly so as LLMs have become widely available over the last decade.
What is new, however, is the packaging. This tech company is offering these capabilities directly, without the surrounding systems that enterprises rely on. No lifecycle orchestration. No structured data models. No approvals, integrations, audit trails, or governance frameworks baked in.
That doesn’t make the product bad. But it does render it an incomplete tool for most enterprise use cases.
Will Claude be useful for experimentation? Absolutely. Will it help individuals move faster? Likely. Does it replace contract lifecycle management platforms? Not even close.
Crucially, Anthropic still has to prove that legal teams can rely on this tooling consistently, securely, and at scale, without turning AI management into a new full-time job.
The announcement itself was interesting, and could signal a step in the right direction for making AI more accessible for complex use cases. The market’s reaction, however, was theatrical.
AI only matters if it solves real problems
The question isn’t whether AI is impressive; the question is whether AI delivers meaningful value in real-life legal environments.
Legal teams already have systems, processes, and platforms. Replacing them is expensive. Implementation is painful. Change management is a headache. Ongoing support costs are not theoretical.
A mere 3–5% efficiency gain in a controlled demo environment does not justify uprooting incumbent systems to replace them with the next “magic AI tool” du jour. Many legal operations professionals learned this firsthand in the early days of CLM hype, before platforms matured.
Where AI does shine is in targeted, high-friction moments across the eight distinct stages of the contract lifecycle.
Consider the following examples:
- Intake. People hate forms. AI can reduce friction by extracting information, asking smart follow-ups, flagging risk early, and setting expectations before work even begins.
- Review. AI is very good at spotting deviations from standards, highlighting risk, and proposing redlines. Depending on how you structure the review, attorneys may still need to review the work, but that’s not a failure. That’s just how risk management works.
- Routing and workload balancing. These functions benefit from AI when it goes beyond static rules and starts optimizing for volume, capacity, and turnaround time. If an AI agent is just being used to auto-route a contract to Jimmy in Finance whenever its value exceeds $100K, that’s a waste of AI.
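The routing contrast above can be shown in a few lines. The names, thresholds, and workloads are hypothetical; the point is that the static rule needs no AI at all, while workload-aware assignment is where smarter routing earns its keep.

```python
# Static rule vs. workload-aware routing — a toy contrast.
# Reviewer names and thresholds are made up for illustration.

def static_route(contract):
    # The "rules engine" version: a fixed if/then. No AI required.
    return "Jimmy (Finance)" if contract["value"] > 100_000 else "General queue"

def balanced_route(contract, reviewers):
    # Workload-aware version: pick the qualified reviewer with the
    # lightest current queue — optimizing for capacity and turnaround.
    qualified = [r for r in reviewers if contract["type"] in r["skills"]]
    return min(qualified, key=lambda r: r["open_items"])["name"]

reviewers = [
    {"name": "Jimmy", "skills": {"finance"}, "open_items": 12},
    {"name": "Dana",  "skills": {"finance", "ip"}, "open_items": 3},
]
contract = {"value": 250_000, "type": "finance"}

print(static_route(contract))              # always Jimmy, regardless of load
print(balanced_route(contract, reviewers)) # Dana — she has capacity
```

A real system would fold in turnaround history, deal complexity, and seniority, but even this sketch shows the difference between automating a rule and optimizing an outcome.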
And, of course, what AI should not do is replace judgment-heavy decisions without oversight or be bolted onto workflows with the expectation that intelligence will magically emerge.
AI is most effective when it accelerates humans, not when it pretends to replace them.
To sum things up…
We’ll wrap up with the same thoughts that instigated this industry-wide panic: AI is not killing SaaS. AI is SaaS. It’s just newer, more compute-intensive SaaS.
Every platform worth its salt, whether in legal tech, the broader enterprise tech ecosystem, or beyond, will eventually incorporate AI components. The vast majority already have. The winners won’t be the companies that slap “AI-powered” on a landing page. They’ll be the ones that embed intelligence into real workflows, grounded in domain expertise, governance, and trust.
AI won’t kill CLM. AI won’t kill ERPs. AI won’t kill *insert random thing investors are nervous about this week.* What it will do is punish vendors who don’t solve real problems and reward those who do.
Want to learn more about how to work in tandem, not competition, with AI? Check out Agiloft’s recent report with World Commerce & Contracting: “Humans and AI: Together, Transforming Contract Management.”