

This article, authored by WLJ patent attorneys Meredith Lowry and MaryScott Polk Timmis, appeared in the Winter 2025 issue of the Arkansas Bar Association’s The Arkansas Lawyer magazine.
“Your Honor, my AI assistant objects!” While we haven’t quite reached that level of artificial intelligence (“AI”) in the courtroom or in business, AI tools are rapidly becoming as essential to law practices as coffee makers in the break room. We have grammar assistants suggesting the next word for a brief, algorithms combing through centuries of case law in seconds, and AI scribes dutifully taking notes during client meetings – they’ll even tell you who talked the most during the conference and how participants reacted to the discussion.
Each day brings a promise from some digital assistant vendor that its new tool will revolutionize the practice of law. However, before entrusting your firm’s sensitive information to an AI vendor, it’s crucial to consider some risk management tips to ensure you’re not accidentally pleading guilty to poor due diligence.[1]
The ethical stakes are high. High-profile incidents involving AI legal research have placed these tools under intense judicial and professional scrutiny. But that’s just the start of the concern. When AI systems are used to process sensitive client communications or privileged information, a misstep in vendor selection isn’t just a technical glitch – it could be a violation of the Arkansas Rules of Professional Conduct or a breach of client confidentiality. AI vendors might promise capabilities that sound like science fiction, but attorneys must approach these tools with the same careful scrutiny we’d apply to cross-examining a witness.
The Arkansas Bar Association (“ABA”) recognizes the challenges Arkansas attorneys face in navigating both the risks and the potential of AI in legal practice. In 2023, the ABA formed the Artificial Intelligence Task Force, which is charged with researching AI and developing recommendations and resources to educate the membership on the use of this technology. The task force’s key areas of study are guardrails for AI in the legal profession, the ways AI can benefit our members, and ethical considerations for the use of AI in legal proceedings. This author has the privilege of leading the task force for the 2024–2025 term and working with a dedicated group of attorneys to provide education and guidance to the Bar on potential changes to the Rules of Professional Conduct and risk management concerns for AI use in the practice of law.
This article focuses on those risk management guardrails, specifically what practitioners need to consider regarding the accuracy and bias of AI-generated work product, the data security and confidentiality of client information, and the ownership of the work product returned by the vendor. Recognizing these concerns is essential not only for maintaining ethical compliance, but also for leveraging AI technology to enhance legal practice while minimizing professional risk.
What Does AI Do?
Before we talk about risks, though, we need to discuss what an AI tool actually does. An AI tool is similar to a highly capable law clerk: it can process vast amounts of information and perform specific tasks, but, like a law clerk, it lacks the experience to spot bias in a witness or to understand the weight of professional ethics.
AI operates through sophisticated pattern recognition rather than genuine understanding. Just as a law clerk identifies relevant precedents by recognizing familiar fact patterns, AI systems recognize patterns in data to perform tasks. Large Language Models (LLMs) are a specific type of AI that specialize in processing and generating human language – and because they generate new text, they are the engines behind what we call generative AI. LLMs can be trained on virtually every legal document, brief, and transcript available and can produce fluent responses drawing on that training data – but they don’t comprehend the law the way human attorneys do. Instead, they use statistical patterns to predict what words should come next in any given context. When an LLM takes notes during your client meeting, it isn’t really “understanding” the conversation the way a human would. Instead, it is using its pattern recognition capabilities to identify important points based on repeated topics and predicting the next word that makes sense in that context.
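For the technically curious, a toy model makes this “prediction, not comprehension” point concrete. The short Python sketch below is a deliberately crude stand-in for an LLM: it counts which word follows which in a made-up, sixteen-word “training corpus” and then “writes” by always choosing the most common next word. Everything here is an invented illustration – real LLMs learn billions of parameters over far longer contexts – but the underlying move is the same.

```python
from collections import Counter, defaultdict

# A made-up "training corpus" -- a stand-in for the billions of words a
# real LLM is trained on.
corpus = (
    "plaintiff filed a motion to dismiss "
    "defendant filed a motion to compel "
    "plaintiff filed a brief"
).split()

# Count which word follows each word: the simplest possible version of
# learning "statistical patterns" from training data.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """'Write' by repeatedly choosing the most common next word.
    No understanding of motions or briefs is involved."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("plaintiff"))
# Prints: plaintiff filed a motion to dismiss defendant filed a
```

When the toy model “drafts” that the plaintiff filed a motion to dismiss, it isn’t reasoning about any case; it is echoing the most frequent pattern in its training data – which is precisely why such output can sound right while being wrong.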
Accuracy & Bias
Accuracy and bias are typically the first concerns for a practitioner considering the use of an AI tool. The legal profession has witnessed cautionary tales of AI-generated legal research, most notably the infamous ChatGPT case in which fabricated citations led to significant professional embarrassment.[2] These AI-generated falsehoods, known as “hallucinations,” are not mere errors, but often elaborately constructed fictional legal narratives complete with seemingly legitimate case citations.
The root of this problem lies in AI’s predictive nature. Trained on vast datasets of legal materials, these models can remarkably approximate legal language, creating case summaries that appear credible at first glance. However, the training process also reveals a more insidious challenge: the potential perpetuation of systemic biases embedded in historical legal decisions. An AI model trained on decades of case law applying an older, now-overruled decision risks amplifying outdated precedent or long-standing discriminatory patterns in areas critical to justice – from criminal sentencing recommendations to civil rights interpretations and employment law analyses.
While we may scoff at the use of a mainstream AI model like ChatGPT or Claude for legal research, our traditional tools of LexisNexis and Westlaw have recently started offering AI-assisted research and may pose similar risks. These tools pull from a narrower data set focused on the law, so a dedicated legal AI tool delivers more precise results. But recent research shows that while Lexis+ AI and Westlaw AI-Assisted Research hallucinate less than general-purpose AIs, each still hallucinated between 17% and 33% of the time, and those errors ranged from misunderstood holdings to outright fabrications.[3] These established platforms also still grapple with bias – their AI features may inadvertently favor certain jurisdictions, legal theories, or demographic outcomes based on their training data.
Dedicated legal AI tools represent the most promising current option for attorneys. Anyone considering these tools should therefore conduct thorough due diligence of both the vendor and the solution. Vendors should be asked to disclose not only the solution’s accuracy and hallucination rates, but also their strategies and processes for minimizing hallucination and bias risks.
The goal is not to reject AI technology, but to implement it intelligently. By demanding transparency and rigorous validation, attorneys can leverage AI as a powerful research tool while maintaining the highest standards of professional responsibility.
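Part of that validation can even start with a simple script. As a purely illustrative sketch – the regular expression below covers only the common “volume reporter page” citation format and is nowhere near a complete citation grammar – a few lines of Python can pull the citation-shaped strings out of an AI-drafted document so a human can verify each one against an authoritative source:

```python
import re

# A deliberately simplified pattern for "volume Reporter page" citations,
# e.g. "678 F.Supp.3d 443". Real citation formats are far more varied,
# so treat this as an illustration, not a complete citation grammar.
CITATION = re.compile(
    r"\b(\d{1,4})\s+([A-Z][A-Za-z0-9.]*(?:\s[A-Za-z0-9.]+)*?)\s+(\d{1,5})\b"
)

def extract_citations(draft: str) -> list[str]:
    """Pull citation-shaped strings out of an AI-drafted document."""
    return [" ".join(m.groups()) for m in CITATION.finditer(draft)]

draft = (
    "As the court held in Mata v. Avianca, Inc., 678 F.Supp.3d 443 "
    "(S.D.N.Y. 2023), submitting fabricated authority has consequences."
)

for cite in extract_citations(draft):
    # The script only builds the list; a human must still verify each
    # citation against Westlaw, Lexis, or the reporter itself.
    print("verify:", cite)
# Prints: verify: 678 F.Supp.3d 443
```

The point of such a tool is narrow by design: it builds the verification list, and the lawyer – not the script – confirms that each authority actually exists and says what the draft claims it says.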
Confidentiality & Data Security
In the legal profession, client confidentiality is not just an ethical obligation – it’s a sacred trust. The integration of AI tools introduces complex new challenges to this fundamental principle. LLMs operate through statistical pattern recognition, drawing from vast training datasets that can potentially include user-provided information. This creates a critical risk: the inadvertent exposure of sensitive client data.[4]
Let’s jump back to our explanation of how AI works: AI uses statistical patterns to predict what words should come next in any given context, and those patterns are based on the training data provided to it. If an AI tool incorporates client communications, case details, or strategic discussions into its learning model, there’s a genuine risk of that confidential information being reproduced in responses to other users. Alarming research has demonstrated that AI models can memorize and potentially reproduce granular details – even sensitive information like social security numbers – if prompted strategically.[5] Fortunately, many AI vendors with legal tools tell prospective users where the training data comes from and whether the tool incorporates user inputs into its training data.
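One practical safeguard follows directly from this risk: scrub obvious identifiers from a document before it ever reaches a vendor’s system. The Python sketch below is a minimal, hypothetical illustration – it catches only formatted social security numbers and a name list the user supplies, and it is no substitute for choosing a vendor whose terms exclude user inputs from training:

```python
import re

# Matches SSNs in the common 123-45-6789 format. Other identifier formats
# (unformatted SSNs, account numbers, dates of birth) need their own rules.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(text: str, client_names: list[str]) -> str:
    """Redact formatted SSNs and known client names before the text
    is sent to any third-party AI service."""
    text = SSN.sub("[SSN REDACTED]", text)
    for name in client_names:
        text = text.replace(name, "[CLIENT]")
    return text

note = "Jane Doe (SSN 123-45-6789) asked about the settlement terms."
print(scrub(note, ["Jane Doe"]))
# Prints: [CLIENT] (SSN [SSN REDACTED]) asked about the settlement terms.
```

Real redaction tooling also handles unformatted numbers, dates of birth, and addresses; this sketch shows only the shape of the safeguard.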
Beyond the risk of inadvertent disclosure by the AI lies the risk posed by the security of the vendor itself. Many AI vendors are startup companies with varying levels of security sophistication, and their terms of service range from robust protection to placing significant liability on the user. For example, a number of note-taking integrations for video conferencing platforms record meetings and then provide AI-generated transcriptions and summaries to the account holder. These note-takers are not marketed specifically to attorneys, so client confidentiality is not a primary concern for the vendors – yet this author has had clients start using these integrations to memorialize legal strategy sessions.
Some AI solutions do provide HIPAA compliance certifications and other security certifications, but the free tiers often have limited security and, as mentioned above, this author is aware of at least one note-taking solution that requires the user to indemnify the vendor for any and all liabilities incurred as a result of the user’s use of the service.
Attorneys, fortunately, are more familiar than the average business owner with reading the vendor agreements, terms of use, and privacy policies that AI providers supply. To vet whether an AI vendor is suitable, attorneys need to review the confidentiality provisions, the indemnification and warranty terms offered by the vendor, and the data security standards the vendor employs.
Intellectual Property Rights
Most attorneys are familiar with the recurrence of standard clauses in common agreements, like boilerplate language for confidentiality provisions or standard export law provisions in technology agreements. However, the ubiquity of certain phrases and clauses does not mean that legal documents are entirely excluded from copyright protection, nor does it diminish law firms’ rights to protect their intellectual property. Recent legal developments underscore the complexities of document reuse in the digital age.
A recent federal court case highlights the growing tensions around document originality. A Boston law firm representing a defendant brought suit against Winston & Strawn LLP, alleging unauthorized copying of a motion to dismiss that had been filed against a common plaintiff.[6] This case is not unique; similar suits filed in the past have likewise raised the issue of intellectual property rights in the legal profession.
The emergence of AI technologies has introduced an even more nuanced and potentially treacherous challenge to traditional document creation practices. Unlike previous concerns about junior associates directly copying from sources like PACER, AI presents a more sophisticated risk: strategically prompted AI tools can reproduce work product verbatim, or with minimal alterations, from source material. This raises significant legal and ethical concerns, particularly given that U.S. copyright law considers a new legal brief based on an existing copyright-protected brief to be a derivative work – a right exclusively reserved for the copyright owner.[7]
In the era of increasingly powerful AI tools capable of drafting opening statements and legal briefs, the potential for unintentional infringement has escalated dramatically. To minimize risks associated with AI in legal work, attorneys must approach the AI output with the mindset of a supervising attorney and with rigorous review of AI-generated content. This means treating AI tools as drafting assistants rather than final document creators, and implementing a comprehensive review process that maintains the highest standards of originality and professional integrity.
Navigating the AI Frontier in Legal Practice
As AI technologies continue to evolve at an unprecedented pace, the legal profession finds itself at a critical juncture. The integration of artificial intelligence into legal practice has already begun with predictive text and voice dictation of emails, but, as generative AI continues to develop, we face new risk management challenges. The challenges outlined throughout this article – from accuracy and bias to confidentiality and intellectual property concerns – are not insurmountable barriers, but rather critical considerations that demand thoughtful, proactive management.
Attorneys must neither blindly embrace AI as a panacea nor reflexively reject its potential. Instead, the most effective strategy lies in intelligent, measured integration. This means approaching AI tools as sophisticated assistants that augment – but never replace – professional legal judgment. Rigorous vendor vetting, comprehensive understanding of AI limitations, and maintaining an unwavering commitment to ethical standards are paramount.
- [1] The jokes for this introduction were suggested by Claude, an artificial intelligence language model, with review and begrudging approval by the author. See Anthropic, Claude (Version 3.5 Sonnet) (2024), https://anthropic.com/claude. While it is daunting to teach humor to AI, it is apparently easier than teaching a middle-aged patent attorney.
- [2] See Mata v. Avianca, Inc., 678 F.Supp.3d 443 (S.D.N.Y. 2023).
- [3] Varun Magesh et al., Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools (on file with Stanford University), https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf.
- [4] Lauren Leffer, Your Personal Information Is Probably Being Used to Train Generative AI Models, Sci. Am. (Oct. 19, 2023), https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/.
- [5] Nicholas Carlini et al., The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks, USENIX Security Symposium (Jul. 16, 2019), https://arxiv.org/pdf/1802.08232.
- [6] Complaint, Hsuanyeh Law Group, PC v. Winston & Strawn LLP et al., No. 1:23-cv-11193 (S.D.N.Y. Dec. 26, 2023).
- [7] 17 U.S.C. § 106 (2002).