Why Lawyers Keep Using ChatGPT Despite Its Risks
Why do lawyers keep using ChatGPT? Legal professionals are increasingly drawn to AI tools like ChatGPT, even as these technologies produce "AI hallucinations": fabricated legal citations that have landed some attorneys in hot water. The main reason? ChatGPT and similar large language models (LLMs) offer speed and convenience in legal research and document preparation, making them an appealing option for time-strapped lawyers. However, these tools are prone to errors that can carry severe professional consequences, including fines and reputational damage.
The Temptation of AI for Legal Professionals
Lawyers juggle heavy caseloads and tight deadlines, making AI’s promise of faster research and drafting hard to resist. Platforms like LexisNexis and Westlaw, which are widely used in the legal industry, have already integrated AI features. A 2024 Thomson Reuters survey found that 63% of lawyers have used AI tools, with 12% reporting regular use for tasks like summarizing case law and generating legal drafts. For many attorneys, ChatGPT acts as a virtual junior associate, capable of quickly sifting through volumes of legal information.
The Risks of AI Hallucinations
While AI offers time-saving benefits, it also brings significant risks. ChatGPT and similar models can produce convincing but inaccurate information. In one case, a motion to dismiss filed by journalists' lawyers in Florida included multiple fake citations generated by ChatGPT, leading the judge to strike the filing. Similarly, lawyers for Anthropic and an expert witness in Minnesota have admitted to submitting AI-assisted filings containing citation errors. Judges are increasingly skeptical of these submissions, and fines or sanctions can follow when the errors are uncovered.
Why Do Lawyers Keep Using AI Despite the Risks?
The persistence of AI use in the legal field stems from a mix of necessity and overconfidence. Many lawyers mistakenly assume that LLMs like ChatGPT function as "super search engines," when in reality these models generate text by predicting plausible word sequences rather than retrieving verified sources, which is why they can produce authentic-looking but entirely fictional citations. Additionally, tight deadlines and the sheer volume of information involved in legal cases push lawyers to lean on AI for preliminary research, even when they know the risks.
Striking a Balance: How Lawyers Can Use AI Safely
Legal experts like Andrew Perlman, dean of Suffolk University Law School, argue that generative AI has its place in the legal profession. Perlman suggests using AI for tasks such as:
Sifting through discovery documents
Reviewing briefs and filings
Brainstorming arguments and strategies
However, Perlman and others stress the importance of rigorous fact-checking and citation verification. Treating AI output as a first draft rather than a final product can mitigate the risks of hallucinations. Alexander Kolodin, an Arizona attorney, likens using ChatGPT to assigning work to a junior associate—reviewing the output is essential.
Regulatory and Ethical Considerations
The American Bar Association (ABA) issued guidelines in 2024, emphasizing lawyers’ duty of technological competence. This includes understanding the strengths and weaknesses of generative AI and maintaining confidentiality when using these tools. The guidance also advises transparency with clients about AI usage in legal work.
The Future of AI in Law: Here to Stay?
AI's integration into legal practice isn't going away. Experts predict that generative AI will become a staple tool in law firms, and that those resistant to adoption risk falling behind. However, skepticism remains. Judges like Michael Wilner, who has sanctioned lawyers over AI-generated errors, argue that attorneys should never outsource legal research and writing to AI without careful verification.