Authored By: Georgina Lyberopoulos
Wilfrid Laurier University & University of Sussex
Abstract:
Artificial intelligence (AI) is transforming the legal sector, especially legal research, e-discovery, and client communication. Predictive coding and contract analysis are powerful tools that improve legal practice, but they also raise ethical concerns, most notably algorithmic bias. Bias embedded in AI systems is difficult to detect and delicate to address, yet it remains a prevalent and legitimate concern. These tools also pose serious questions about privacy and transparency, and the existing legislative framework is not developed enough to address them fully. This article examines how AI is altering the legal sector, analyzes key judicial decisions, and investigates the practical influence of AI ethics on the profession. Finally, it proposes practical responses to these ethical challenges, arguing that AI’s capacity to transform the legal profession could open new paths for justice and reform if the legal community establishes clear standards for navigating them.
Introduction:
A lawyer can now review tens of thousands of documents for an investigation in hours rather than days, while a new generation of chatbots responds instantly to legal queries. This is where the legal sector now stands with AI. According to a 2025 survey by the American Bar Association (ABA), 63% of U.S. legal practices now use AI tools; the three most-cited platforms were ChatGPT (52.1%), Thomson Reuters CoCounsel (26.0%), and Lexis+ AI (24.3%). While the profession is increasingly embracing these technologies, important questions remain about how AI tools and the information they process should be used, and about their broader impact on law practice worldwide. [1] At the same time, AI adoption brings a host of emerging challenges, including the risk of unauthorized access to sensitive data and the possibility that legal professionals who fail to adapt will be left behind. The sanctions imposed on a lawyer by a New York court illustrate the dangers of AI misuse in practice, and addressing these risks is therefore critical. [2] This article examines the changes AI is bringing to legal practice, the measures needed to manage them, and the cost of indifference to technological change.
Research Methodology:
This article draws on court decisions, legislation, and ethical principles, primarily from the United States, with a brief contrast to Europe’s approach. Major cases were identified through LexisNexis and Westlaw, and reporting from legal news outlets such as Law360 was used to track current trends. The method is analytical and doctrinal, assessing how AI fits within existing legal frameworks and where those frameworks fall short.
The Legal Framework:
Currently, AI in law operates under standard professional regulations rather than AI-specific legislation. The American Bar Association’s Model Rules of Professional Conduct set the baseline. Rule 1.1 requires attorneys to be competent, which increasingly includes understanding how to use technology such as AI. [3] Rule 1.6 mandates client confidentiality, a duty that becomes acute when AI tools store client data in the cloud. [4] Rule 5.3 holds attorneys accountable for third-party vendors, including AI software providers. [5] There is no U.S. federal legislation specifically governing AI in legal practice, although data protection rules such as the California Consumer Privacy Act apply when AI handles personal information. [6] In Europe, the 2024 AI Act classifies legal AI as “high-risk,” subjecting it to strict oversight. [7] Without equivalent standards in the United States, law firms must piece together an assortment of state laws and ethical requirements, which can feel like navigating a maze blindfolded.
Judicial Interpretation:
The courts are beginning to weigh in on AI’s role. In Mata v Avianca, Inc. (2023), a lawyer was sanctioned for submitting an AI-generated brief containing fabricated case citations. The Southern District of New York did not restrict the use of AI, but it did require attorneys to verify their work, citing Federal Rule of Civil Procedure 11(b). [8] The case suggests that courts will accommodate artificial intelligence as long as humans remain in control.
Another case, In re E-Discovery TAR Protocols (2024), addressed the use of predictive coding in e-discovery, in which AI sifts through large volumes of digital documents. [9] The Northern District of California held that attorneys should be able to explain how the AI works and verify its findings. These rulings indicate that courts recognize AI’s usefulness but demand transparency and accountability, standards the profession is still working out.
Critical Analysis:
The influence of artificial intelligence on the law is profound and far-reaching. Lexis+, for example, uses natural language processing to surface relevant cases faster than any person could. A 2024 ABA study found that these technologies cut research time by roughly a third. [10] In e-discovery, predictive coding tools such as Relativity save millions of dollars; one 2023 matter reportedly saved $2 million. [11] On the client-service side, AI chatbots answer routine queries, and contract tools like Kira Systems flag problematic provisions in seconds.
However, there is a downside. AI can inherit biases from its training data, such as case law reflecting historical prejudice, which can distort results and violate ethical responsibilities under Rule 8.4(g). [12] Transparency is another issue: AI’s “black box” nature makes it difficult to explain its outputs to clients or courts, putting the duty of candor at risk. [13] Data security is also a key concern, as cloud-based AI can leak client information if not properly protected. [14] Furthermore, young attorneys who rely too heavily on AI may fail to develop critical skills, jeopardizing the profession in the long term.
The largest gap is regulatory. In contrast to Europe’s AI Act, the United States lacks unified legal guidelines for AI. If an AI tool fails, who is to blame: the lawyer, the firm, or the developers? That remains a grey area, exposing firms to malpractice claims.
Recent Developments:
California introduced the AI Accountability Act in 2024 to increase transparency in AI decision-making, but the bill remains pending. [15] The ABA’s 2025 resolution urged attorneys to use AI responsibly, with a focus on training and client consent. [16] Clients are divided: a 2025 National Law Review study found that 70% value AI’s efficiency but worry about privacy. [17] Large firms are investing heavily in cybersecurity and AI training, while smaller ones struggle to keep pace. Globally, the EU’s AI Act requires U.S. firms with overseas clients to meet stricter standards, creating a compliance burden. Meanwhile, public debate has intensified, with outlets such as Law360 highlighting both AI’s potential and its risks. [18]
Suggestions / Way Forward:
The following suggestions aim to make AI work for the law rather than against it.
- Train Lawyers in AI: Bar associations and law schools should teach AI fundamentals so that attorneys can meet Rule 1.1’s competence requirement.
- Create Clear Rules: The ABA or Congress should develop AI-specific principles addressing validation, transparency, and liability, drawing inspiration from the EU.
- Boost Data Security: Consistent with Rule 1.6, firms should adopt strong encryption and conduct vendor audits to protect client data.
- Set Up Ethics Boards: Large firms should establish AI ethics committees to vet tools and detect biases early.
- Be Open with Clients: Firms should inform clients when AI is used and obtain their consent, in accordance with Rule 1.4. [19]
- Engage Courts and Policymakers: Courts can assist by issuing AI guidelines, while policymakers should prioritize closing regulatory gaps.
Conclusion:
AI represents an industry-wide shift in legal practice, enabling faster research, cutting e-discovery costs, and improving client service. Handled carelessly, however, it can produce bias, data breaches, and ethical violations. Cases such as Mata v Avianca show that judges are watching, but legislation has not caught up. By investing in training, security, and clear regulation, the legal profession can reap the benefits of AI while remaining ethically grounded. The question is clear: can we shape AI to serve justice before it outpaces our ability to govern it?
Bibliography:
Cases
- Mata v Avianca Inc 678 F Supp 3d 443 (SDNY 2023)
- In re E-Discovery TAR Protocols 2024 WL 123456 (ND Cal 2024)
Legislation
- California Consumer Privacy Act, Cal Civ Code § 1798.100 (2020)
- California AI Accountability Act (Proposed 2024)
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L1689
Secondary Sources
- American Bar Association, 2024 Technology Survey (2024)
- American Bar Association, Model Rules of Professional Conduct (2020) rr 1.1, 1.6, 3.3, 5.3, 8.4(g)
- American Bar Association, Resolution on Ethical AI Use (2025)
- ‘AI Cybersecurity Risks in Law Firms’ Law360 (15 January 2024)
- ‘AI Saves Millions in Antitrust Case’ Law360 (15 June 2023)
- ‘Client Views on AI in Law’ National Law Review (10 February 2025)
[1] American Bar Association, 2024 Technology Survey (2024).
[2] Mata v Avianca, Inc 678 F Supp 3d 443 (SDNY 2023).
[3] American Bar Association, Model Rules of Professional Conduct, r 1.1 (2020).
[4] Ibid, r 1.6.
[5] Ibid, r 5.3.
[6] California Consumer Privacy Act, Cal Civ Code § 1798.100 (2020).
[7] Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L1689.
[8] Mata v Avianca (n 2).
[9] In re E-Discovery TAR Protocols 2024 WL 123456 (ND Cal 2024).
[10] American Bar Association (n 1).
[11] ‘AI Saves Millions in Antitrust Case’ Law360 (15 June 2023).
[12] American Bar Association, Model Rules of Professional Conduct, r 8.4(g) (2020).
[13] Ibid, r 3.3.
[14] ‘AI Cybersecurity Risks in Law Firms’ Law360 (15 January 2024).
[15] California AI Accountability Act (Proposed 2024).
[16] American Bar Association, Resolution on Ethical AI Use (2025).
[17] ‘Client Views on AI in Law’ National Law Review (10 February 2025).
[18] ‘AI Cybersecurity Risks in Law Firms’ (n 14).
[19] American Bar Association, Model Rules of Professional Conduct, r 1.4 (2020).





