Fake Law, Real Trouble: How One Illinois Court is Responding to ChatGPT’s Hallucinated Cases
Kelly M. Libbra & Brittany D. Shoaff

The Takeaway

A Madison County, Illinois, judge has offered practical guidance on the use of AI in (and even outside) the courtroom:

  • Ensure the accuracy of any representations made in discovery responses.
  • Safeguard private information.
  • Certify that answers used in discovery are accurate, complete, and not generated solely by AI.
  • Use AI during depositions and court proceedings only if:
    • all parties consent to the use of the specific AI tool
    • the AI tool meets privacy restrictions and doesn’t store any information
  • Disclose AI-generated evidence to the opposing party 90 days before the start of trial.
  • Identify AI-generated evidence used at trial.
  • Disclose in advance the use of AI for jury selection.
  • Disclose an expert’s use of or reliance on AI to formulate an opinion.

AI Can Be Quite Helpful – Until It’s Not

Many of us have embraced ChatGPT as a useful tool in our everyday lives to promote efficiency or spark creativity. And the more you use ChatGPT, the more it learns your preferences. For instance, ChatGPT knows that our families prefer certain foods when we ask for dinner suggestions. If we ask for Halloween costume suggestions, it immediately understands we’re looking for coordinating costumes for kids of certain ages—without needing a reminder about our families’ demographics.

When it comes to using AI in the legal profession, some may think it’s just as easy. The hour is late, and deadlines are looming. You enter a research query into your favorite AI assistant and voilà—it returns the perfect precedent that makes the exact argument you need!

It seemed perfect, except the precedent doesn’t exist or misrepresents the law.

These problems arise when an AI tool gets too “smart”: it knows just what you’re looking for and invents the sought-after legal authority.

Attorneys face serious legal and ethical consequences if they rely on AI-generated (“hallucinated”) legal authority.

Attorneys in an Illinois Case Ran Afoul of Ethics by Relying on an AI-Generated Citation

In the summer of 2025, attorneys for the Chicago Housing Authority (CHA) cited the Illinois Supreme Court case Mack v. Anderson in their post-trial motion to reconsider a multi-million-dollar verdict. Unfortunately, Mack v. Anderson doesn’t exist.

Cook County Circuit Judge Thomas Cushing held a special hearing to further examine the use of this false precedent. The attorney responsible for the brief, who had used AI to find precedent, stated she didn’t think ChatGPT was capable of creating false precedent and therefore didn’t check whether the case was legitimate. She said she had no “intent to deceive the court.” (She’s since been relieved of her position because using AI was against her firm’s policy at the time.) As a result of the mistake, Plaintiff’s attorney was granted permission to file a motion for sanctions.[i]

Plaintiff’s Motion for Sanctions not only pointed out the citation of false precedent but also argued that several other misrepresentations existed throughout the proceedings. Among its arguments:

  • Plaintiff pointed to the integrity and high standards to which attorneys are held in order to uphold public confidence, and argued that opposing counsel’s actions violated the 2010 Rules of Professional Conduct by failing to provide competent representation.
  • Plaintiff directed the court to case law on the use of artificial intelligence in various jurisdictions, including a 2025 case from the Northern District of Illinois, In re: Marla C. Martin, 24 B 13368, D.E. 78. The court in Martin noted that generative AI’s production of false cases and precedents is not new and has been discussed in detail since 2023. Martin, at 12-13.
  • Plaintiff argued there is evidence that CHA’s counsel knew of the serious risks associated with using AI in research, including the firm’s internal policy regarding its use. Further, Plaintiff pointed out that the attorney who used the false precedent from ChatGPT had previously published an article on the ethical considerations of using AI in the legal profession.[ii] Plaintiff also noted that CHA’s counsel told the court it was just one citation out of 50 cases.
  • Counsel later failed to produce the policies and publications ordered by the Court and, even after being made aware of the false case and misrepresentations, failed to account for the mistakes. Plaintiff argued this showed a complete disregard of counsel’s duty of candor to the Court.
  • Plaintiff noted that the pleading citing the false precedent had not been withdrawn as of the filing of the Motion for Sanctions.

The motion requests sanctions against four attorneys: the attorney who drafted the document citing the false precedent, the attorney who signed the pleading, and two supervising attorneys within the firm. It also requests sanctions against the Chicago Housing Authority and the firm it retained.[iii]

Ironically, in the Motion for Sanctions, Plaintiff’s attorney cited a quote from In re Smith, 168 Ill. 2d 269 (1995), a reference found in response to a ChatGPT query for “Illinois Supreme Court quotes on candor.” Like the results of their counterparts’ query, this quote is an AI-generated hallucination that doesn’t appear in the opinion.

(The motion for sanctions was recently argued, but no order has yet been issued.)

Other Illinois Attorneys Receive Discipline for Poor AI Use

The CHA case is not the only one in which Illinois attorneys have faced discipline for misusing AI. For example:

  • A Springfield attorney in a parental rights case was fined for citing eight non-existent cases in an appellate court filing in July 2025.[iv]
  • An attorney and his firm were fined after citing, in bankruptcy proceedings, four matters that were misrepresented or fictitious. The attorney admitted to using AI (specifically ChatGPT) to generate at least portions of his argument and stated he didn’t verify whether the citations were accurate or legitimate before filing.[v]

Clearly, the issue of AI use without verification is an emerging problem in the legal profession.

Illinois Policy on Using AI

The rapid advancement of AI technology and the expansion of its use have created opportunities for enhanced efficiency and creativity. However, it’s not without risks. Over the past two years, reports of AI tools generating fake legal citations have become widespread, both within the legal community and beyond.

  • In early 2024, an Illinois Judicial Conference Task Force on Artificial Intelligence was created to make recommendations regarding the use of AI in Illinois courts.
  • The Illinois Supreme Court’s Policy on Artificial Intelligence (effective January 1, 2025) recognizes that AI presents new challenges for attorneys and courts to ensure the protection of private information, avoid bias and misrepresentation, and maintain the court’s integrity. One of the Supreme Court’s primary concerns is “upholding the highest ethical standards in the administration of justice.”
  • In October 2025, the Illinois Attorney Registration & Disciplinary Commission (ARDC) released The Illinois Attorney’s Guide to Implementing AI. This Guide provides attorneys with background information on AI tools, discusses their risks, and offers strategies for ethically and effectively integrating AI tools into the practice of law. The Guide also offers examples of checklists, office policies, best practices, and informed consent forms.

A Standing Order Offers Practical Guidelines

In September 2025, Judge Sarah D. Smith, a presiding circuit judge in Madison County, Illinois, entered a “Standing Order on Use of Artificial Intelligence (AI) in Civil Cases” in her courtroom; the order has since been extended to other courtrooms in Madison County.

In line with the Illinois Supreme Court’s AI Policy, Judge Smith “embraces the advancement of AI” but also mandates that these tools “remain consistent with professional responsibilities, ethical standards and procedural rules.” Judge Smith’s Order offers guidance that’s practical for use in (and even outside) the courtroom, including the principle that human oversight and legal judgment are required with any use of AI.

Her Order mandates that attorneys bear the responsibility for:

  1. Ensuring the legal and factual accuracy of any submissions filed in their name, even if the submission is prepared by another person or AI.
  2. Ensuring the accuracy of any representations made in discovery responses.
  3. Ensuring that private information is safeguarded.
  4. Certifying that answers used in discovery are accurate, complete, and don’t rely solely on AI if AI is used to review and summarize factual information.
  5. Only using AI during depositions and court proceedings if all parties consent to the use of the specific AI tool, the AI tool meets privacy restrictions, and the AI tool doesn’t store any information.
  6. Disclosing any AI-generated evidence to the opposing party 90 days before the start of trial.
  7. Identifying AI-generated evidence used at trial or in jury selection in advance of such use.
  8. Disclosing an expert’s use of or reliance on AI to formulate an opinion.

In addition to outlining an attorney’s responsibilities, Judge Smith’s order states that an attorney may be subject to sanctions if a submission includes “case law hallucinations, [inappropriate] statements of law, or ghost citations.”

Conclusion

While Judge Smith’s order is limited to civil cases filed in Madison County, the principles it sets forth offer practical guidance to all attorneys practicing civil litigation. The overriding principle of her order, together with the recent court cases on this issue, makes it evident: reliance on AI is no excuse for failing to abide by the ethical rules binding on all attorneys.

[i] https://www.yahoo.com/news/lawyers-chicago-housing-authority-used-164200574.html

[ii] See D. Malaty, Artificial Intelligence in the Legal Profession: Ethical Considerations, Goldberg Segalla (Sept. 4, 2024).

[iii] https://websitedc.s3.amazonaws.com/documents/Goldberg-Sanctions-Motion.pdf

[iv] https://www.wcia.com/news/sangamon-county/springfield-attorney-fined-for-using-ai-citing-nonexistent-cases/

[v] https://www.ddg.fr/actualite/fake-law-real-sanctions-when-ai-generated-citations-collapse-in-an-illinois-bankruptcy-court
