The pitfalls of AI that could predict the outcome of court cases


(Image source: Reuters/Chip East)

Companies have long sought technologies that promise an advantage in fighting litigation. For most enterprises, casework is a major drain on resources. In 2020, U.S. businesses spent a total of $22.8 billion on litigation; law firm Fulbright & Jaworski estimated in 2005 that nearly 90% of businesses are engaged in some type of litigation and that the average company balances a docket of 37 lawsuits.

With the democratization of AI and analytics tools, it was perhaps inevitable that startups would begin applying predictive techniques to the legal field — particularly given the enormous market opportunity. (According to Statista, the legal tech segment’s revenues could reach $25.17 billion in 2025.) For example, Ex Parte, a predictive analytics company founded by former lawyer Jonathan Klein, claims to use AI and machine learning to predict the outcome of litigation and recommend actions companies can take to “optimize their odds of winning.”

But experts are skeptical that AI can predict events as complex as how legal cases will unfold. As Mike Cook, a member of the Knives & Paintbrushes research collective, told VentureBeat, there are a number of factors in litigation that the data used to “teach” an AI system might not capture, leading to flawed or potentially biased predictions. “You can certainly train an AI to predict things — you can train an AI to predict anything you like — but it doesn’t always learn what you think it’s learning,” he said in an interview. “When it comes to legal cases, there’s a lot we’re not seeing, and a lot that we can’t give to an AI to train on, which means it might end up learning only part of the picture.”
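Cook’s point is easy to see in miniature. The sketch below is a toy Python example, not Ex Parte’s (or any vendor’s) actual system; the case summaries and outcomes are invented. It trains a simple text classifier to “predict” outcomes, and the limitation is visible immediately: the model learns only from the features it is handed, and will emit a confident-looking probability even when the decisive factors never appear in its inputs.

```python
# Toy sketch (not any vendor's method): an outcome "predictor" trained
# on invented case summaries. It can only learn correlations present in
# its inputs -- judge temperament, settlement pressure, or off-record
# facts are invisible to it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-made training data.
summaries = [
    "breach of contract, late delivery, documented damages",
    "patent infringement claim, weak prior art, no damages shown",
    "employment dispute, strong documentation, witness testimony",
    "trademark claim, no evidence of consumer confusion",
]
outcomes = [1, 0, 1, 0]  # 1 = plaintiff prevailed, 0 = plaintiff lost

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(summaries, outcomes)

# The model happily scores any new filing, whether or not the factors
# that will actually decide the case appear in the text it sees.
print(model.predict_proba(["breach of contract, disputed damages"]))
```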

A growing market

Ex Parte isn’t the only vendor that claims to be able to predict the outcome of legal cases. Toronto-based Blue J Legal says it can estimate litigation outcomes with 90% accuracy, drawing on models trained on the corpora of relevant precedent and a case’s fact pattern. ArbiLex — a competitor — focuses on arbitration, presenting companies with metrics like how much a case will cost and which party is likely to win.

“The legal market has seen an explosion of AI products, especially in machine learning and natural language processing. Uses include improving contract management, gaining insight into legal department and law firm operating data, and analyzing U.S. public law,” Gartner senior research director Ron Friedmann told VentureBeat via email. “The U.S. legal system has a vast volume of publicly accessible law, including court decisions, agency rulings, and motions and briefs. Starting in the 1980s, portions of US law became available online, mainly to retrieve documents. Around 2010, startups emerged that offered deeper insight into publicly available law.”

Ex Parte — which recently raised $7.5 million in Series A financing, including from Ironbound Partners and former Illinois governor Bruce Rauner’s R8 Capital — generates recommendations for any corporate litigation, like whether to settle and which claims to assert, where to file or defend a lawsuit, and which attorneys and law firms might provide the best chance of success. Klein says that the platform can correctly forecast the outcome of cases about 85% of the time.

“There are many technical and conceptual challenges associated with building a robust model of a lawsuit. Legal data is by its nature disparate, unstructured, and semantic,” Klein said in a statement. “We have solved these problems by combining a highly specialized understanding of the legal field with advanced expertise in AI, machine learning, and natural language processing. Our mission is to be the leading worldwide provider of data-driven decision-making solutions in the field of law and provide our customers with a winning advantage.”

Recent history, however, is filled with examples of how problematic data can influence AI legal tech’s predictions in unfavorable ways.

The U.S. justice system has adopted AI tools that have later been found to exhibit bias against defendants belonging to certain demographic groups. Perhaps the most infamous of these is Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which is designed to predict a person’s likelihood of becoming a recidivist. A ProPublica report found that COMPAS was far more likely to incorrectly judge black defendants to be at higher risk of recidivism than white defendants, while at the same time flagging white defendants as low risk more often than black defendants.
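ProPublica’s finding boils down to comparing error rates across demographic groups. Below is a minimal sketch of that style of audit in Python, using made-up records and the standard false-positive-rate definition; it is illustrative only, not ProPublica’s actual methodology or data.

```python
# Minimal fairness-audit sketch: compare false positive rates (flagged
# "high risk" but did not reoffend) across groups. All records invented.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1, 1, 0, 1, 1, 0, 0, 0],  # tool's prediction
    "reoffended": [0, 1, 0, 0, 0, 0, 1, 0],  # observed outcome
})

# False positive rate per group: P(flagged high risk | did not reoffend)
did_not_reoffend = df[df["reoffended"] == 0]
fpr = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr)  # markedly unequal rates are the red flag ProPublica raised
```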

Published last December, a separate study from researchers at Harvard and the University of Massachusetts found that the Public Safety Assessment (PSA), a risk-gauging tool that judges can opt to use when deciding whether a defendant should be released before trial, tends to recommend sentencing that’s too severe. The PSA is also more likely to recommend a cash bond for male arrestees than for female arrestees, according to the researchers — a potential sign of gender bias.

In July 2018, more than 100 civil rights and community-based organizations including the ACLU and the NAACP signed a statement urging against the use of algorithmic risk assessment tools like COMPAS and PSA.

Potential risks

The stakes are lower in the cases that Ex Parte and other corporate-focused predictive analytics startups currently deal with, of course. (Klein says that Ex Parte’s customers include hedge funds, law firms, insurance companies, litigation finance firms, and universities.) And researchers like Cook acknowledge that some litigation, like common real estate disputes and other “generic” law work, might be within the realm of possibility to predict. But Cook cautions that if the technology were to be widely adopted, it might create a “strange” and unpredictable feedback loop as future AI learns from the outcomes of cases predicted by today’s AI.

“One thing we’ve yet to explore properly with AI systems like this is the impact they have on their ecosystem simply by existing. If this system became the standard, 20 years from now, we’d be training these systems on case outcomes decided by AI … So, even if this works for now, it’s potentially setting up some strange situations in the future,” Cook said. “[I]f you imagine this as being applied to boring, low-stakes, harmless legal stuff, it’s maybe not so bad. But if we imagine it scaling up to huge trials, cases involving real people’s lives, and important issues, I think it becomes a much sadder and more unpredictable idea.”
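A toy simulation makes the feedback-loop worry concrete. Assume, purely for illustration, that a model systematically over-predicts plaintiff wins and that some fixed share of recorded outcomes (settlements taken, claims asserted, venues chosen) come to follow its advice rather than the underlying merits. Each generation then trains on data that has drifted further from reality:

```python
# Illustrative feedback-loop simulation; all parameters are assumptions.
true_win_rate = 0.50   # the "real" base rate of plaintiff wins
model_bias = 0.70      # a model that systematically over-predicts wins
influence = 0.30       # share of recorded outcomes steered by the model

observed_rate = true_win_rate
for generation in range(5):
    # Outcomes steered by the model inherit its bias; the rest still
    # reflect the merits as recorded in the prior generation's data.
    observed_rate = (1 - influence) * observed_rate + influence * model_bias
    print(f"generation {generation}: training-data win rate = {observed_rate:.3f}")

# The recorded win rate converges toward the model's bias, not reality,
# so each successive model is trained on increasingly distorted data.
```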

Os Keyes, an AI ethicist at the University of Washington, also expressed concern about how case-predicting AI could perpetuate inequality in the court system. In a future where wealthier clients — whether businesses or people — tap into AI-powered case recommendations, assuming the technology works, those who can’t spring for the same could be disadvantaged.

Well-funded defendants already have advantages in the court system, as outlined by trial lawyer Kiernan McAlpine in a recent Quora post. Wealthy clients can afford to settle cases that they’re likely to lose and pay for expensive discovery and pre-trial lawyers. They also have the budgets to pay experts with high-level knowledge and testimonial skills, as well as firms with arcane legal knowledge.

Some experts argue that these advantages hold even at the highest levels of the judiciary. Adam Cohen, the author of Supreme Inequality: The Supreme Court’s 50-Year Battle for a More Unjust America, asserts that conservative Supreme Court justices — who rarely vote to reverse convictions of poor criminal defendants — have shown a clear sympathy for rich ones. One study found that the conservative Justice Antonin Scalia voted for defendants in about 7% of non-white-collar criminal cases and 82% of white-collar ones.

“The result of [this AI case-predicting technology] working will be — and I do not say this lightly — societally appalling,” Keyes told VentureBeat via email. “[W]e already know that there’s a vast difference in outcome between poor and indigent clients and wealthy ones, and the reason for this is, largely, resourcing. The promise of this [technology] … is to make that worse — to stack even more resources in the corner of those who already have it.”

Reposted from: VentureBeat
