Authored By: Rownak Morshed
Earth House Alternative School
Introduction:
It is no secret that AI is revolutionizing the world and taking industries by storm, from self-driving cars to the world’s very first AI-generated minister, introduced to Albania’s cabinet by its prime minister.[i]
The legal field is no exception: law students and lawyers are adopting AI for greater efficiency. This can be exemplified by Notre Dame Law School’s recent partnership with Harvey AI, a generative AI tool built specifically for the legal profession, as a first step towards showing its law students how AI is integrated into legal practice.[ii] Why, then, is an international law firm granting its staff access to legal AI tools only through a request process and tightening its AI usage policies?[iii] This tension leads to the central question this paper seeks to answer: Is employing AI in the legal field doing more harm than good to law students, lawyers, and law firms?
This paper aims to answer that question through three objectives: (a) how AI benefits law students, lawyers, and law firms; (b) whether any challenges of employing AI in the legal field undermine those benefits; and (c) whether my Australian jurisdiction has proper regulations and ethics for combating such challenges.
Main Body:
Although the legal field has historically been slow to adopt new technologies, the incorporation of AI has brought undeniable benefits. A University of Oxford study found that the use of AI has changed how law firms operate, making them more efficient and collaborative.[iv] Consider the ways generative AI is turning traditional law firm practices into more effective ones. One of the most compelling applications is legal drafting: legal professionals are using generative AI tools like ChatGPT to create preliminary drafts of legal documents, such as contracts, pleadings, and memos, which saves time for more productive work. Productivity and efficiency are further optimized when AI is used internally to streamline the day-to-day operations of legal practitioners, for example by automating routine, time-consuming tasks so that legal professionals can focus on more complex, value-added work. These uses also act as cost-cutting drivers. Repetitive, time-consuming processes make up a large share of law firm work and are normally handled by junior lawyers; with AI, such tasks can be completed in a matter of seconds, dramatically reducing costs. Lower costs, in turn, allow law firms to offer more competitive services and prices to clients, which can ultimately improve access to justice. Finally, AI can streamline the analysis of legal documents and case law, retrieving relevant information that would otherwise take hours of manual research to find, thereby enhancing legal research and document review for legal professionals, from lawyers and judges to clerks and law students.
That being said, integrating AI into law has its challenges. The fundamental drawback is an AI literacy skills gap: as AI and other innovative technologies rapidly transform industries, legal practitioners, including attorneys and judges, may fall short for lack of AI literacy skills.[v] Another drawback that may haunt law firms is AI hallucination, a phenomenon in which a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are non-existent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.[vi]
This phenomenon is not just a possibility, as the following cases show. Mata v. Avianca, Inc.[vii] began like any other personal injury case: the plaintiff, Roberto Mata, sued Avianca Airlines, claiming he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York. The case took a turn when Mata’s lawyers submitted a brief citing more than half a dozen supposedly relevant court decisions, among them “Martinez v. Delta Air Lines,” “Zicherman v. Korean Air Lines,” and, of course, “Varghese v. China Southern Airlines.” The issue came to light when no one, including the judge, P. Kevin Castel, could find the decisions or the quotations cited and summarized in the brief. The lawyer who prepared the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, revealed that he had used an AI chatbot, ChatGPT, to conduct his legal research. Neither the opposing lawyers nor the judge could find the cited decisions because the chatbot had generated fictitious cases; it had even answered “yes” when Schwartz asked it to verify that the cases were real.[viii] Judge Castel imposed sanctions on the lawyers for submitting a legal brief that included six fictitious case citations generated by ChatGPT: Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman were ordered to pay a $5,000 fine in total.[ix]
Another case highlights how AI “hallucinations” continue to haunt legal practitioners. A Texas federal judge, Marcia Crone, sanctioned an attorney, Brandon Monk, for submitting a court filing containing non-existent cases and quotations generated by artificial intelligence. The attorney was fined $2,000 and ordered to attend a course on generative AI in the legal field.[x]
These two recent cases highlight the limitations of using AI in the legal field. They underscore the need to maintain ethics when using AI in legal work, to adopt stricter regulations surrounding AI, and to urgently upskill legal practitioners, including judges, attorneys, and even law students, in AI literacy and the skills required for using AI in law.
The Law Society of South Australia sets out the key ethical duties that Australian lawyers must balance against the opportunities of AI, many of which are codified in the Australian Solicitors’ Conduct Rules (ASCR). Key ethical considerations include:
Competency and Diligence (Rule 4.1.3): The use of AI cannot justify a breach of diligence or care. Lawyers must verify the integrity and accuracy of any AI-assisted work. Generative AI can enhance productivity, but it can also introduce errors, especially when hallucinated legal precedents or misinterpreted facts are unknowingly relied upon.
Transparency: Disclosing the use of AI to clients, courts, and colleagues.
Client Confidentiality (Rule 9): Only use AI platforms with strict data security protocols.
Independence and Judgement (Rule 17): AI cannot replicate the nuance, ethics, or context-specific analysis required for sound legal judgment. Relying excessively on AI may compromise a lawyer’s ability to deliver truly independent advice. Lawyers must take responsibility for the final content and should treat AI-generated suggestions as supportive tools, not authoritative sources.[xi]
These are a few of the key ethical obligations Australian lawyers must follow when using AI in their legal work, including legal drafting and research. Adhering to them can significantly reduce the limitations of AI in the legal field discussed earlier.
The paper now turns to the governance and regulation of generative AI in Australian litigation and their importance.
Australian courts and regulators are moving from passive observation to active governance of generative AI in the legal field, as the following illustrates: (1) According to the New South Wales Bar Association, the Supreme Court of NSW issued a Practice Note on the use of generative AI, effective from 3 February 2025, which provides one of the most detailed frameworks on AI use in legal proceedings to date. It delineates acceptable and prohibited uses of generative AI. Prohibited uses: generative AI must not be used to draft affidavits, witness statements, or expert reports; these must reflect personal knowledge, without AI-generated content. Permitted uses: generative AI may assist with non-evidentiary tasks like chronologies, indexing, or summarising documents, subject to human verification and disclosure. (2) The Supreme Court of Victoria has also issued comprehensive guidelines, titled “Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation,” which outline ethical expectations for both legal practitioners and self-represented litigants. This guidance aligns with national trends in emphasising transparency and ethical accountability: AI use must be disclosed when relevant to document reliability; practitioners are accountable for AI-assisted content and must not use generative AI in evidence documents like affidavits or expert reports; caution is urged regarding confidentiality; and lawyers must understand the limitations of AI tools.[xii]
This demonstrates that the Australian jurisdiction has introduced stricter and more effective regulatory measures surrounding the use of generative AI in the legal field. These measures ensure human accountability while preserving the flexibility to use AI for supporting tasks within the ethical framework discussed earlier, leading to improved productivity.
To conclude, let me compile what this paper has unpacked. Technology is evolving at a rapid rate, transforming industries from manufacturing to services, and even producing an AI-generated minister. The legal industry is no different: once skewed towards a traditional approach, it too is being stirred by revolutionary AI. So, to answer the central question: does AI offer companionship to legal practitioners and students, or is it an adversary? AI has the potential to be a companion to law students, lawyers, and judges if they are trained in AI literacy; that is, judges, lawyers, and students will be able to utilize AI well only if they have the skills to do so. In addition, if proper ethics are made obligatory when using AI in the legal field, as seen in the Australian jurisdiction, the possible adversity of hallucinations discussed in this paper would be significantly reduced. Lastly, if jurisdictions across the globe introduce stricter regulations on the application of AI in law, similar to the Australian jurisdiction, cases involving misuse of AI in the legal field can be expected to drop significantly. To sum up, AI can be a companion for us legal folks if we equip ourselves with the competencies required to leverage it, follow the ethics while doing so, and remain within the framework of the proper regulations issued.
REFERENCE(S)/ BIBLIOGRAPHY:
[i] Guy Delauney, World’s first AI minister will eliminate corruption, says Albania’s PM, BBC News (2025), https://www.bbc.com/news/articles/cm2znzgwj3xo (last visited Sep 23, 2025).
[ii] Denise Wager, Notre Dame Law School Becomes First Law School to Partner with Harvey AI to Integrate Artificial Intelligence into Legal Education, Notre Dame Law School (2025), https://law.nd.edu/news-events/news/notre-dame-law-school-becomes-first-law-school-to-partner-with-harvey-ai-to-integrate-artificial-intelligence-into-legal-education/ (last visited Sep 23, 2025).
[iii] Angus Tiffin & Graham Fraser, Law firm restricts AI after “significant” staff use, BBC News (2025), https://www.bbc.com/news/articles/cglyjn7le2ko (last visited Sep 23, 2025).
[iv] John Armour et al., New research finds that AI is improving the way the legal sector operates, Saïd Business School (2021), https://www.sbs.ox.ac.uk/news/new-research-finds-ai-improving-way-legal-sector-operates (last visited Sep 23, 2025).
[v] Javier Luna, The role of AI in law, DataCamp (2025), https://www.datacamp.com/blog/ai-in-law (last visited Sep 23, 2025).
[vi] IBM, What are AI hallucinations?, IBM (2025), https://www.ibm.com/think/topics/ai-hallucinations (last visited Sep 24, 2025).
[vii] Mata v. Avianca, Inc., 1:2022cv01461 (S.D.N.Y. 2023).
[viii] Benjamin Weiser, A Man Sued Avianca Airline. His Lawyer Used ChatGPT., The New York Times (2023), https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html (last visited Sep 24, 2025).
[ix] Sara Merken, New York lawyers sanctioned for using fake ChatGPT cases in legal brief, Reuters (2023), https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ (last visited Sep 24, 2025).
[x] Sara Merken, AI “hallucinations” in court papers spell trouble for lawyers, Reuters (2025), https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/ (last visited Sep 24, 2025).
[xi] Evyenia Walton, Ethical Uses of Artificial Intelligence in the Australian Legal Profession, The Law Society of South Australia (2025), https://bulletin.lawsocietysa.asn.au/Bulletin/Bulletin/Content/Articles/2025/June/Ethical_Uses_of_Artificial_Intelligence_in_the_Australian_Legal_Profession.aspx (last visited Sep 24, 2025).
[xii] Evyenia Walton, Ethical Uses of Artificial Intelligence in the Australian Legal Profession, The Law Society of South Australia (2025), https://bulletin.lawsocietysa.asn.au/Bulletin/Bulletin/Content/Articles/2025/June/Ethical_Uses_of_Artificial_Intelligence_in_the_Australian_Legal_Profession.aspx (last visited Sep 24, 2025).





