3 things you should never use AI for in Legal
Discover 3 critical legal tasks AI should never replace. Learn where human expertise is essential in legal operations.

Artificial Intelligence (AI) feels a bit like magic right now, and it’s quickly becoming a valuable tool in legal and contracting workflows. Drop a contract into a system, and it spits back a summary in seconds. Ask a chatbot to write a clause, and boom – it writes it. It’s tempting to think: if AI’s this good, why not let it run the whole show?
Many agree that AI has the potential to enable lawyers and legal teams to work faster and more efficiently. But with that promise comes risk. Not every application of AI makes sense, and some approaches can cause more harm than good.
So, what should you look out for? Here are some things legal professionals should never do with AI.
1. Never rely on AI for final legal judgement
One of the biggest mistakes you can make is to take AI output at face value. Generative AI (GenAI) models are known to produce hallucinations, and in a lawyer’s case, those hallucinations can look like plausible-sounding but completely fabricated case law or misquoted statutes. Using these in your work can result in serious professional risk or reputational damage. And the courts are responding: federal judges are cracking down on attorneys who cite fake AI-generated cases, with fines reaching up to $15,000, and some are calling for harsher professional penalties.
For example, in 2023, New York attorneys submitted a legal brief that cited six fictitious cases generated by ChatGPT. Despite ChatGPT’s confident presentation – and even after the attorneys asked the AI whether the cases were real (it said yes) – the fabrication was discovered when opposing counsel could not locate the cited cases. Widely covered by The New York Times, this cautionary tale’s lesson is clear: never submit or rely on AI output as final without human legal review and validation against primary legal sources.
This case highlights a phenomenon known as automation bias. The Center for Security and Emerging Technology (CSET) within Georgetown University’s Walsh School of Foreign Service describes it as “the tendency for an individual to over-rely on an automated system, which can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.”
While AI can certainly help speed up research and generate ideas, remember that AI is a tool, not a person, let alone a lawyer. It can help you work faster, but the judgement, responsibility, and nuance still rest with you.
2. Never enter confidential or proprietary information into public AI tools
Client confidentiality is central to the legal profession, but AI platforms don’t always treat your data that way. Some platforms store and reuse the information entered into them. If you paste a confidential agreement or litigation strategy into a public, unsecured AI tool, you may be breaching privilege or disclosure rules.
We’ve seen this play out IRL. Back in 2023, Samsung engineers reportedly leaked trade secrets by pasting confidential code into ChatGPT. If it can happen at a tech giant, it can happen anywhere.
Legal work lives and dies on confidentiality. Using tools that don’t guarantee data protection, encryption, or proper retention policies isn’t just sloppy – it may cross into malpractice territory. An American Bar Association (ABA) article goes as far as to recommend strict internal governance before allowing employees or lawyers to ever touch generative AI.
3. Never delegate your professional judgement to AI
Almost half of employees using AI at work have admitted to using it inappropriately – for example, trusting every answer it gives without checking, or entrusting it with sensitive information. The ABA’s Model Rule 1.1 mandates that lawyers provide competent representation, which includes understanding the benefits and risks of the technologies they use.
Your duty of competence cannot be delegated to an AI. Over-relying on AI can lead to critical oversights, such as misinterpreting jurisdictional nuances, applying one-size-fits-all templates, or overlooking subtle client-specific considerations.
AI is a powerful assistant, but it cannot replace the discernment, strategic thinking, and ethical responsibility that lawyers provide.
Think of AI like a flashlight in a dark room: it illuminates, but you still need to guide the way. Always review, adjust, and tailor AI outputs to your organization’s unique needs, and remember that professional judgement cannot – and should not – be outsourced to a machine.
AI is a partner, not a replacement
AI can be an incredible ally for legal teams, from summarizing long agreements to highlighting risks to speeding up tedious tasks, but one thing is clear: human verification is non-negotiable.
Don’t fear AI. You can still use it confidently to accelerate your work. Tools like Contract Lifecycle Management (CLM) platforms can streamline contracts, automate routine approvals, and surface potential risks – but they should aid your judgement, not replace it.
When AI and CLM systems support your workflow thoughtfully and pragmatically, you can work faster, smarter, and safer, all while maintaining the standards and accountability that the legal profession demands.