
Artificial Intelligence Makes Up Evidence in Court Brief


The role of artificial intelligence (AI) has been a frequent topic of discussion. While some see AI as the next step in technological advancement, others fear it will replace many jobs and create new problems. A recent case sparked controversy over the role of AI in the legal world after an AI program produced completely fabricated information and presented it as true. A lawyer had relied on ChatGPT, an artificial intelligence program, and filed a brief without realizing that the cases it cited were made up. The lawyer was sanctioned, and the episode prompted a standing order from a federal judge in Texas restricting the use of AI in court filings, raising concerns about the bias, prejudice, and lack of allegiance to the law and the truth that these technologies may carry. If you were harmed by a defective product, it is crucial to call the seasoned Chicago-based attorneys of Moll Law Group to discuss your legal options.

Consult Moll Law Group About Your Product Liability Lawsuit

A shift in the labor landscape due to technological advancement is not a new concept. Every major innovation, like the internet or robots that can perform factory work, brings unprecedented change. AI is different in that it challenges knowledge-based professions. It can generate complex responses because it draws on vast amounts of information from the internet, more than a human could absorb in a lifetime of reading and searching.

The Case of Roberto Mata v. Avianca Inc.: AI Fabricates Information in Court Brief

A recent case in the legal world sparked controversy regarding the role of AI. In Roberto Mata v. Avianca Inc., a lawyer represented a man suing an airline. When the case faced dismissal, the lawyer filed a brief citing what appeared to be relevant past cases. However, no one could find any record of those cases. It was later revealed that the lawyer had used ChatGPT, a public and easily accessible AI program, to write the brief. When he asked the program whether the cited cases were real, it responded "yes." The episode highlighted the unreliability and risk of using AI in the legal system.

The lawyer involved was Steven A. Schwartz of Levidow, Levidow & Oberman, who had been practicing law in New York for over thirty years. The judge who sanctioned Schwartz, Judge P. Kevin Castel, was alerted by Avianca's lawyers that much of the court filing was bogus, as they could not find any evidence of the cited cases.

Texas Judge's Order Restricts the Use of AI in Court Briefings

Artificial intelligence programs such as ChatGPT generate responses based on billions of text examples drawn from the internet. These platforms can research, generate questions, proofread, and summarize far faster than a human can, which makes them compelling for any company looking to maximize efficiency. AI relies on a statistical model that lets it process enormous amounts of information at speeds no human could match. That seems like an advantage at first, but, as Roberto Mata v. Avianca Inc. shows, it may come at the cost of truth.

Attorneys swear an oath to abide by the law and faithfully represent their clients. AI programs like ChatGPT swear no such oath and can therefore operate with bias or prejudice. AI has no allegiance to the law or to a client; it merely functions according to an algorithm designed by people who likewise owe no such allegiance. Roberto Mata v. Avianca Inc. showed the problems that can arise from relying on AI. In response, Judge Brantley Starr of Texas issued a standing order requiring attorneys to certify that no portion of a filing was drafted by generative AI, or that any AI-drafted language was checked for accuracy by a human. He explained that bias, hallucinations, and a lack of allegiance to the truth are the reasons AI has no place in a legal briefing.

Call Our Seasoned Defective Product Attorneys

It is important to retain knowledgeable legal counsel for your claim. If you were harmed by defective products or technology, you should contact the seasoned Chicago-based product liability lawyers of Moll Law Group about whether you have a claim. We represent injured people across the country. Please complete our online form or call us at 312.462.1700.
