Is Generative AI Helping or Hindering Legal Aid?

Generative AI can serve as a legal aid, but its limitations need to be carefully taken into account.

By Simran Mann

Although artificial intelligence (AI) technologies have been a subject of research since the 1950s, the implications of AI’s mainstream application remain unclear. Nonetheless, traditional and generative (Gen) AI systems have become an essential part of everyday life – the legal field being no exception. While traditional AI excels at analyzing data and performing specific tasks, generative AI focuses on creating new content.

Lawyers and paralegals are using AI to automate routine tasks like contract review and formation, legal research, and document management. According to the 2024 Canadian Law Firm Market report, a third of law firms expect to increase their investment in legal-specific technology over the next year. Notably, 26% of law firm lawyers said they are already using Gen AI, 24% plan to use it within the next three years, 23% were unsure, and 27% had no plans to use it. Similarly, more U.S. law firms are looking to technology to improve efficiency: 20.7% of small firms and 14.6% of solo attorneys are following larger firms in their current use of, or interest in, AI tools. Despite recent Gen AI advancements, 37.7% of small firms and 45.8% of solo attorneys said they did not know enough about AI to consider implementing it. While the Canadian and American legal markets are willing to adopt AI, many law firms and lawyers remain hesitant.

AI is also increasingly being used in court administration and the justice system to detect crime, identify suspects, automate decision-making and filing, and transcribe hearings and trials. As AI use becomes more prevalent, courts have begun outlining policies on appropriate use, such as disclosure requirements. In June 2023, the Court of King’s Bench of Manitoba became the first court in Canada to require that litigants disclose if and how they used AI to prepare materials. Courts in Yukon, Alberta, Newfoundland and Labrador, Quebec, and Nova Scotia followed shortly after. The Federal Court of Canada issued the latest guidelines, stating that parties to legal proceedings must make a declaration if they have used Gen AI to prepare their materials.

Similarly, courts across the U.S. have made decisions regarding the use of Gen AI. For example, the District Court for the Northern District of Illinois adopted a new requirement that “any party using any generative AI tool to conduct legal research or to draft documents for filing with the Court must disclose in the filing that AI was used.” State judicial councils are also getting involved by forming task forces to help govern the use of Gen AI within their respective jurisdictions. With AI developing rapidly, courts are realizing that it is better to understand how current technological advancements can enhance legal processes than to advocate against their use. Arguably, if technology has evolved to a point where the practice of law can be reliably improved through efficiency gains, then courts and legal professionals should explore where and how to engage with such tools. Understanding the reliability of AI tools can also inform corporate governance strategies and the need for transparency, helping to ensure that AI is not obscuring harmful and potentially illegal behaviours.

Similarly, the general public is also turning to digital sources for legal guidance and advice. According to Statistics Canada, the majority of Canadians seek resolution for their most serious problems without involving the formal justice system. Only 33% contacted a legal professional and 8% contacted a court or tribunal. Among those who did not use the justice system, the most common actions involved searching the internet (51%), taking advice from friends or relatives (51%), and/or contacting the other party involved in the dispute (47%). With the number of pro bono inquiries doubling between 2020 and 2022, access to justice remains a major concern. For those who cannot afford a lawyer or attorney, free legal answers serve as a critical resource. Therefore, prioritizing opportunities for intentional investment is essential to reducing the justice gap.

With the launch of ChatGPT and similar platforms, we can expect to see an increase in the number of users seeking free legal advice online. ChatGPT surpassed one million users within five days, making it one of the fastest-growing applications in history. Its explosive popularity comes as no surprise, as it promises to make information more accessible. In theory, this would also promote access to justice by providing greater opportunities for underrepresented and/or low-income individuals. Both the American and Canadian Bar Associations agree that AI has the potential to reduce the justice gap by improving efficiency, ultimately allowing lawyers to serve more clients.

However, with legislation on the use of AI in North America still limited or pending, the door is open to inaccuracies, biases, and confidentiality breaches, ultimately magnifying the barriers that people face in receiving legal aid. Below are some examples demonstrating how Gen AI has misbehaved and why it is important for legal professionals and the general public to be mindful of their reliance on such tools.

Inaccuracies

One of Gen AI’s biggest hurdles to widespread adoption is its lack of accuracy. Legal inaccuracies often manifest as hallucinations: outputs that present false or misleading information as fact. Unfortunately, the large language models (LLMs) underlying common legal AI tools such as Lexis+ AI, Thomson Reuters’ offerings, and GPT-4 are also prone to hallucinations. According to one recent study of these tools, Lexis+ AI provides the most accurate responses (65% accuracy) and has the lowest hallucination rate (17%). While Lexis+ AI and Thomson Reuters’ tools are less prone to hallucination than GPT-4 (43%), users of these products should still be cautious.

Hallucinations are not limited to popular legal Gen AI tools. In October 2023, New York City launched its “MyCity” chatbot to help small business owners obtain advice on the legal obligations and regulations they must adhere to. However, five months after its launch, the chatbot was found to be endorsing illegal activity, such as telling users that bosses can take workers’ tips and that landlords can discriminate based on source of income. Users had little reason to distrust the service, as it claims to draw on information published by the NYC Department of Small Business Services. Since then, the city has updated the disclaimers on the MyCity chatbot website, stating that it “may occasionally provide incomplete or inaccurate responses.”

Examples of Gen AI hallucinations have also led to legal consequences in Canada. In February 2024, Air Canada lost a case before a British Columbia tribunal after its chatbot gave incorrect information about policies relating to discounts for bereaved families. The chatbot told a customer that they could retroactively apply for a last-minute funeral travel discount, which was at odds with Air Canada’s actual policy. Ultimately, companies are responsible for taking reasonable care to ensure their representations are accurate, regardless of whether they come from a human representative or an automated chatbot.

Another issue is fabricated case citations, which are making headlines not only in the U.S. but also in Canada. In February 2024, disciplinary action was taken against a British Columbia (B.C.) lawyer for citing fake cases invented by ChatGPT. While the lawyer said they were not aware the materials were fictitious, they were still held accountable. The B.C. Law Society reminded lawyers that they have an ethical obligation to ensure the accuracy of the materials they submit to the court.

While Gen AI can serve as a legal aid, it remains unreliable in its current form. AI-based chatbots can give inaccurate answers due to limitations in their data or algorithms, or if they are not properly trained or updated. It is therefore imperative that legal professionals and the general public verify sources. Platforms also have a role to play by reminding end users of potential inaccuracies and of the need to verify outputs.

Biases

Most Gen AI systems are trained on extensive data sets collected from external sources. As a model learns patterns from this data, it can reproduce and even amplify biases embedded in it, including biases that were never explicitly stated. When biased outputs are used in the practice of law, they can lead to unfair outcomes and perpetuate discrimination.

In October 2023, a lawsuit was filed against a U.S. property management group for using a chatbot to illegally screen out a renter who had a housing voucher. Elizabeth Richardson asked the chatbot whether the company accepted renters with housing vouchers, to which it said no. However, in Illinois it is illegal to discriminate against tenants based on their source of income. With African Americans making up 78% of voucher holders in Illinois, Richardson filed a lawsuit alleging discrimination. In January 2024, the parties settled, and the defendants agreed not to deny applicants based solely on their source of income.

Racial discrimination is also present in courtrooms via “predictive justice” AI tools. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool is infamously used by some U.S. criminal judges in assessing the risk of recidivism. The algorithm has been shown to overestimate reoffending among Black Americans, who are often labeled as high risk, whereas white Americans are often labeled as low risk. Unfortunately, these assessments help to inform judicial decisions in some states, raising concerns about wrongful detention and imprisonment.

Despite pre-existing concerns about traditional AI tools like COMPAS, some judges have begun using Gen AI tools to assist their decision-making. For example, Lord Justice Birss of the Court of Appeal of England and Wales used ChatGPT to summarize an area of law. He said, “I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment.” Arguably, while there is nothing inherently wrong with testing the efficiency of Gen AI, taking the path of least cognitive resistance risks fostering automation bias: an over-reliance on automated aids and decision-support systems. As exhibited by Judge Kevin Newsom of the 11th U.S. Circuit Court of Appeals, perhaps it is best to limit judicial use of Gen AI to consulting the ordinary meaning of words until the technology improves and/or adequate guidelines are put in place.

With AI biases disproportionately affecting marginalized groups, lawyers and judges should question the validity of Gen AI responses. Explaining how Gen AI systems make decisions and why they produce certain results can provide greater transparency about their limitations. Transparency is important because discriminatory biases that make fair outcomes less accessible risk eroding confidence in the justice system.

Confidentiality & Data Breaches

The advent of LLMs and their associated chatbots also poses security challenges for companies and law firms. Gen AI’s insatiable appetite for extensive personal data, and the potential for unauthorized access to that data, raise confidentiality and privacy concerns. According to a 2023 Cisco study, 62% of surveyed consumers expressed concerns about how organizations are using their personal data for AI, and 60% said they have already lost some trust in organizations because of their AI use. Below are two examples demonstrating some of the significant and ongoing vulnerability risks of Gen AI systems.

In April 2023, three Samsung employees accidentally leaked confidential data by entering it into ChatGPT. One employee asked ChatGPT for a solution after finding a bug in the company’s source code, the second used ChatGPT to optimize a test sequence, and the third fed a recording of an internal company meeting to ChatGPT to generate meeting minutes. Samsung subsequently banned the internal use of ChatGPT.

Internal chatbots are also susceptible to data breaches. In December 2023, Amazon’s newly launched AI chatbot, Amazon Q, reportedly experienced severe hallucinations and leaked confidential data. Although Amazon Q was developed as an alternative to other Gen AI chatbots, leaked documents allegedly exposed the locations of Amazon Web Services data centers, internal discount programs, and unreleased features.

Law firms utilizing Gen AI must take extensive precautions to ensure client confidentiality. Likewise, people seeking legal advice online should be mindful of their digital footprint. Users may unknowingly increase their susceptibility to data breaches through normalcy bias: the belief that one’s actions won’t contribute to a negative security event and that, if one were to occur, the damage would be insignificant. Such oversights can be detrimental.

Where Do We Go from Here?

Gen AI can serve as a legal aid; however, organizations need to be cautious about how they proceed, as there are currently significant limitations that must be accounted for. The limitations discussed in this article can be summarized as the ABCs (accuracy, bias, and confidentiality):

  • Accuracy – While online searches may be helpful, it is important to maintain human oversight by contacting a legal professional and engaging in fact-checking. Always question the source. Law firms with their own Gen AI tool(s) should work on building ethical guardrails and a robust knowledge base. Creating these parameters and developing an AI tool from the ground up can help to reduce erroneous information.

  • Bias – To mitigate bias, those building or fine-tuning a model should ensure that the data used to train it is as free of bias as possible. Data should be diverse and representative of the population. Consulting a third party and engaging in educational workshops on best practices can help to prevent biased input. Given that small biases or skewed data can have a “butterfly effect” on Gen AI systems, law firms and courts should implement regular audits and ongoing monitoring to detect and correct any emerging biases in their AI (a minimal illustrative audit sketch follows this list). Outputs also need to be closely monitored, as unconscious biases can emerge despite controls on data inputs.

  • Confidentiality – Avoiding data breaches requires vigilance, so legal professionals should stay up to date on technological advancements. Understanding the technology you work with can help to inform the necessary safeguards. The average person seeking legal advice from Gen AI should also take proactive measures by not disclosing sensitive information, especially on public platforms. Alongside these proactive measures, law firms and courts should develop an incident response plan (IRP). Tabletop exercises (i.e., simulations of real-world scenarios in a controlled environment) can serve as a beneficial tool in developing an IRP.
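
To make the bias point above more concrete, the sketch below shows one possible audit metric: comparing favourable-outcome rates across groups in a hypothetical log of AI-assisted screening decisions. It is a minimal illustration in Python, not a prescribed method; the field names ("group", "approved") and the 0.8 “four-fifths” threshold are assumptions made for demonstration, and any real audit would need legal and statistical expertise.

    # Minimal, illustrative sketch (not a production audit tool) of one way a
    # firm might monitor AI-assisted screening decisions for disparate impact.
    # The record fields ("group", "approved") and the 0.8 "four-fifths" rule of
    # thumb are assumptions made for illustration only.
    from collections import defaultdict

    def impact_ratios(decisions, group_key="group", outcome_key="approved"):
        """Return each group's approval rate relative to the highest-rate group."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for record in decisions:
            group = record[group_key]
            totals[group] += 1
            approvals[group] += 1 if record[outcome_key] else 0
        rates = {g: approvals[g] / totals[g] for g in totals}
        top_rate = max(rates.values())
        return {g: rate / top_rate for g, rate in rates.items()}

    # Hypothetical log of chatbot-screened applications.
    log = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]
    for group, ratio in impact_ratios(log).items():
        print(f"group {group}: impact ratio {ratio:.2f}"
              f" ({'review' if ratio < 0.8 else 'ok'})")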

By considering the above ABCs, legal professionals and other users of Gen AI can optimize the quality of the legal aid they provide or receive. Given that Gen AI has the potential to undermine judicial independence by influencing legal reasoning, AI use in courts and other legal processes should be disclosed. Moreover, platforms have an obligation to demonstrate how biases and inaccuracies are being mitigated, along with how user confidentiality is being protected. Essentially, a multidisciplinary approach is necessary to tackle the core issue of access to justice. Many people cannot attend to the above ABCs when their primary concern is navigating a difficult legal situation. Disadvantaged individuals face several barriers to justice – financial, geographic, linguistic, logistical, and gender-specific, to name a few. U.S. Supreme Court Chief Justice John Roberts warns that

 “Any use of AI requires caution and humility.” 

Therefore, in addition to addressing the limitations of Gen AI, organizations and legal professionals should continue to provide educational resources and emotional support, engage in pro bono work, and/or offer accessible service options. This two-pronged approach will allow the legal field not only to reap the benefits of innovation but also to adopt it sustainably.

Arguably, if the general public becomes a more sophisticated user of Gen AI and gains greater access to affordable resources, concerns about job security in the legal profession may also grow. Forty-six percent of surveyed Canadian law firms believe that AI will result in increased competition from DIY legal websites and services over the next few years. While AI can be a valuable tool for legal research, document review, and other tasks, the legal profession also entails other responsibilities. It involves ethical considerations, judgment calls, complex human interactions, and knowing how to elicit the right information, which AI has yet to fully replicate. Does this mean AI can never fulfill these responsibilities? Not necessarily, but it certainly will not in the short term. With growing awareness of the flaws in current AI applications, complete automation seems unlikely in the near future.


As job security continues to be threatened by AI, the federal government must implement fair and equitable regulations. This involves assessing the efficiency of AI tools while ensuring that jobs and employee rights stay protected. Although the Artificial Intelligence and Data Act is a step towards AI regulation in Canada, many groups have been excluded from its formation. Cross-functional collaboration and consultation are important in bringing new insights and addressing potential gaps in the Act.
