Let me begin with a disclaimer: I am not a columnist, nor am I particularly interested in murder cases. Yet I found myself drawn into the media storm surrounding the recent murder of UnitedHealth Group’s CEO in the US. My interest stems not from any fascination with America’s corporate crime or Luigi Mangione’s irresistible smile,1 but rather from my concern about the societal impact of algorithmic decision-making.
While this case has exposed public outrage against America’s trillion-dollar healthcare insurance industry,2 I believe the media has overlooked a crucial aspect: UnitedHealthcare’s use of automated decision-making systems to systematically deny healthcare coverage.
The case: when algorithms decide who lives and who dies
In November 2023, a lawsuit that reads more like a dystopian novel than a legal filing landed on the desk of the US District Court in Minnesota.3 The case exposed how UnitedHealthcare, America’s largest healthcare insurer,4 deployed an AI algorithm called nH Predict from naviHealth to systematically deny post-acute care to elderly Medicare patients. What makes this case particularly chilling is not just the use of automated decision-making in healthcare, but the apparent systematic effort to conceal it.
The Plaintiffs’ arguments paint a disturbing picture of algorithmic governance gone wrong, in three key points (save the bullets for the conclusion):5
- Improper delegation of claims review to the algorithm: The Plaintiffs allege that UnitedHealth Group improperly delegates its claims review function to the nH Predict algorithm, in violation of state insurance laws. The algorithm generates coverage determinations by comparing patient data to a database of six million past cases, while deliberately ignoring individual medical needs and physician recommendations. The Plaintiffs highlight that the algorithm has a 90% error rate in denials, meaning these automated decisions are overwhelmingly overturned when appealed. Instead of addressing this systemic failure, UnitedHealthcare enforced algorithmic compliance by requiring medical review employees to stay within 1% of nH Predict’s projected care durations. Employees who sided with doctors’ recommendations over the algorithm’s decisions faced discipline or termination, effectively subordinating medical judgment to automated cost-cutting measures.
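To make the alleged mechanics concrete, the two moving parts described above, a projection built from similar past cases and a 1% deviation rule policing reviewers, can be sketched in a few lines of Python. This is a hypothetical illustration only: the real nH Predict model, its features, and its case database are proprietary and undisclosed, so the nearest-neighbor logic, the `age` and `mobility_score` features, and the sample data below are all invented for the sketch.

```python
from statistics import median

def project_care_days(patient, past_cases, k=3):
    """Hypothetical sketch: project a care duration from the k most
    similar historical cases. (The real nH Predict model is undisclosed;
    this naive nearest-neighbor scheme is an illustrative stand-in.)"""
    # Rank past cases by a crude distance over two invented features.
    ranked = sorted(
        past_cases,
        key=lambda c: abs(c["age"] - patient["age"])
                      + abs(c["mobility_score"] - patient["mobility_score"]),
    )
    return median(c["care_days"] for c in ranked[:k])

def within_tolerance(projected_days, reviewer_days, tolerance=0.01):
    """The complaint alleges reviewers had to stay within 1% of the
    algorithm's projection; this checks that constraint."""
    return abs(reviewer_days - projected_days) <= tolerance * projected_days

past = [
    {"age": 80, "mobility_score": 2, "care_days": 20},
    {"age": 82, "mobility_score": 3, "care_days": 25},
    {"age": 79, "mobility_score": 2, "care_days": 18},
    {"age": 60, "mobility_score": 9, "care_days": 5},
]
patient = {"age": 81, "mobility_score": 2}

projected = project_care_days(patient, past)
print(projected)                         # 20: median of the 3 nearest cases
print(within_tolerance(projected, 40))   # False: a doctor's 40-day plan fails the 1% rule
```

Note how narrow a 1% band is in practice: on a 20-day projection, a reviewer could not approve even a single extra day without breaching it, which is why the complaint frames the rule as subordinating medical judgment to the model.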
- Deliberate opacity: The Plaintiffs argue that UnitedHealthcare maintained a veil of secrecy around its decision-making process. The company concealed the very existence of nH Predict from patients and refused to provide even basic information about denial decisions, claiming “proprietary” protection. When patients like Mrs. Kell, who was unable to walk, repeatedly asked which doctor had denied their care, they were never told that an algorithm, not a physician, had made the decision. This lack of transparency made it impossible for patients to effectively challenge the denials or understand the true basis for their rejected claims.
- Manipulation of the appeals process: The Medicare Act establishes a five-step appeals process for patients to challenge denied coverage for medical services, but the Plaintiffs contend that UnitedHealth Group systematically undermined successful appeals through a calculated strategy. When patients managed to win their appeals, UnitedHealthcare would immediately issue new denials, forcing them to restart the entire five-step process from the beginning.
- The Defendants move to dismiss the case,6 mainly on procedural grounds, and barely engage with the substance of the algorithmic decision-making: The Defendants assert that the Plaintiffs failed to exhaust their administrative remedies before filing the lawsuit, arguing that they should first have pursued their claims through the Medicare appeals process (the five-step process mentioned above, after the completion of which, the Plaintiffs allege, the Defendants would nonetheless issue new denials). On the merits, the Defendants claim only that the Plaintiffs’ allegations are factually inaccurate and lack evidentiary support, without elaborating further; even this much comes not from the Defendants’ memoranda but from the court’s Rule 26(f) Report, which states that “Although Defendant naviHealth uses a data tool to assist front line staff to coordinate care with members’ families, only physicians make coverage determinations based on their clinical judgment and the good faith application of criteria mandated by government regulations”.7
The REAL story: when algorithms say ‘no’ and humans won’t say why
The Plaintiffs’ complaint reads like a checklist of everything that could go wrong when algorithms hold the keys to healthcare: breach of contract, bad faith dealings, unjust enrichment – you name it, UnitedHealthcare allegedly did it all. Beyond the legal jargon, as this is not the appropriate venue for an in-depth comparative legal analysis,8 there are three issues that should make us all pause and think:
1) Transparency, or rather, the spectacular lack of it: The fact that patients weren’t even aware an algorithm was deciding their healthcare coverage isn’t just a procedural hiccup – it’s a tragedy with often fatal consequences. A tragedy – hence, the public rage – that could have been avoided if companies showed just a smidgen of ethical proclivity in running their multi-billion-dollar operations. While across the pond, the EU’s General Data Protection Regulation (GDPR) demands basic courtesy – like telling people when algorithms are making decisions about their lives (check out Articles 13(2)(f), 14(2)(g), 15(1)(h)) – American patients are kept in the dark.
2) The Kafkaesque nightmare of administrative appeals: The defendants’ argument about “not exhausting administrative remedies” might sound reasonable in a courtroom, but let’s get real: we’re talking about sick patients who can barely take it one day at a time, forced to navigate a bureaucratic maze where the decision-maker speaks binary and knows only one word: Denied. Even worse? The human “translators” are instructed not to argue, not to contest, and not even to listen – unless they fancy losing their jobs in an already uncertain market. However, it should be a basic right for every human being to be heard and understood. The EU’s GDPR recognizes this through the right to object (Article 21) and to have one’s point of view expressed and the decision contested in front of a human being meaningfully involved (Article 22). These aren’t just procedural safeguards, but essential steps toward justice in algorithmic decision-making.
3) The critical need for AI regulation (my personal favorite): The Plaintiffs and Defendants in the case present contrasting views on the role of employees in overriding the nH Predict algorithm’s output. The Plaintiffs argue that UnitedHealth Group actively discourages and even punishes employees who deviate from the algorithm’s recommendations, effectively limiting their ability to exercise independent judgment. Conversely, the Defendants argue that physicians have the ultimate authority in making coverage determinations, implying that employees can override the algorithm if necessary. However, their defense strategy may not entirely alleviate concerns about transparency and potential bias. The contrasting arguments highlight a key point of contention in the case: the extent to which the algorithm influences coverage decisions and whether employees have the autonomy to override its output. The Plaintiffs’ claims, if proven true, suggest a concerning scenario where a profit-driven company employs a proprietary algorithm that dictates patient care, with employees facing pressure to comply. Even in a system where physicians nominally have the final say, the algorithm’s influence on their initial assessments and recommendations remains shady (the CJEU’s recent SCHUFA judgment offers valuable insights on this very issue). Without greater transparency about nH Predict’s role in the process, it’s impossible to assess the true balance of power between human judgment and algorithmic output in UnitedHealth Group’s coverage determinations. This fundamental question of accountability is precisely what the EU’s AI Act addresses through its risk-management requirements.
Delay, Deny, Defend: Case Dismissed
In conclusion, this case exposes a fundamental problem plaguing both corporate America and our algorithmic society: the absence of accountability. Justifications in the fashion of computers as scapegoats, ownership without liability, and the relentless pursuit of profit over everything are as old as the hills surrounding Silicon Valley – and just as immovable. Without adequate regulation, these practices will continue to shield companies from responsibility while their algorithms determine who receives care and who doesn’t.9
In UnitedHealthcare’s case, the mathematics is (un)surprisingly simple: every denied claim translates to increased profits. The alleged 90% error rate in denials isn’t a bug in the system – it’s a feature, designed to exhaust patients before they can access the care they paid for through their premiums.
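That “simple mathematics” can be spelled out as a two-line expected-value calculation. The 90% overturn rate comes from the complaint; the appeal rate used below is purely an illustrative assumption of mine (not a figure from the case), chosen only to show the structure of the incentive: when few patients appeal, even a sky-high overturn rate leaves almost every denial standing.

```python
def retained_denials(denials, appeal_rate, overturn_rate):
    """Of all denials issued, how many are never reversed?
    Only appealed denials can be overturned, so the insurer keeps
    the rest regardless of how wrong the original decision was."""
    overturned = denials * appeal_rate * overturn_rate
    return denials - overturned

# overturn_rate=0.90 is the complaint's alleged error rate in denials;
# appeal_rate=0.002 is an invented illustrative figure.
kept = retained_denials(denials=100_000, appeal_rate=0.002, overturn_rate=0.90)
print(kept)  # 99820.0: nearly all denials stick despite a 90% overturn rate
```

Under these assumed numbers, only 180 of 100,000 denials are ever reversed, which is the structural point: exhausting patients before they appeal matters far more to the bottom line than being right.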
The tragic irony manifested itself in the most brutal way when three bullets, each inscribed with “delay”, “deny”, “depose”, ended UnitedHealthcare CEO Brian Thompson’s life a few weeks ago. While Luigi Mangione’s alleged actions cannot be justified, the words on those bullets echo a decades-old critique of the insurance industry’s tactics, famously documented in Feinman’s 2010 book “Delay, Deny, Defend.” In the end, it seems the only case that wasn’t dismissed was the final one – written in bullets rather than legal briefs.
in memory of (dis)UnitedHealthcare’s CEO Brian Thompson
- https://www.vanityfair.it/article/luigi-mangione-cosi-ti-costruisco-la-favola-nera ↩︎
- https://www.bbc.com/news/articles/cm2eeeep0npo ↩︎
- For all the procedural matters, see https://litigationtracker.law.georgetown.edu/litigation/estate-of-gene-b-lokken-the-et-al-v-unitedhealth-group-inc-et-al/ ↩︎
- https://www.forbes.com/sites/brucejapsen/2024/01/12/unitedhealth-group-profits-hit-23-billion-in-2023/ with profits – NOT REVENUES, BUT PROFITS – of over 20 BILLION dollars in 2023. ↩︎
- https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/11/PLAINTIFFS-MEMORANDUM-OF-LAW-IN-OPPOSITION-TO-DEFENDANTS-MOTION-TO-DISMISS.pdf ↩︎
- https://www.statnews.com/2024/05/22/unitedhealth-class-action-lawsuit-algorithm-motion-to-dismiss/ ↩︎
- https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/11/Estate-of-Gene-B.-Lokken_6.18.24_RULE-26f-REPORT.pdf ↩︎
- For the legal geeks: the complaint is founded on five causes of action: 1) breach of contract for failing to fulfill fiduciary duties and properly investigate claims before denial through the use of the nH Predict AI Model, 2) breach of the implied covenant of good faith and fair dealing for improperly delegating claims review to an automated system and failing to conduct thorough investigations, 3) unjust enrichment for retaining premiums while failing to deliver promised services through improper delegation to the AI system, 4) violation of Wisconsin Administrative Code Insurance § 6.11 for failing to properly investigate and settle claims in good faith, and 5) insurance bad faith (in various States). ↩︎
- See other cases currently pending concerning AI in healthcare: https://litigationtracker.law.georgetown.edu/issues/artificial-intelligence/ ↩︎