Stanley Zhong's story first landed as an admissions outlier: a Palo Alto student with a 4.42 weighted GPA, 3.97 unweighted GPA, a 1590 SAT, and a software project called RabbitSign that Amazon praised as "one of the most efficient and secure accounts" it had reviewed. He was rejected by 16 of 18 colleges in the 2022–2023 cycle, accepted by UT Austin and the University of Maryland, enrolled at UT Austin, then left after Google extended him a full-time software engineering offer in September 2023.
Now the case is becoming something broader. Zhong and his father, Nan Zhong, say they could not find lawyers willing to take the matter, so they used ChatGPT and Gemini to help research and draft a nearly 300-page complaint. The first lawsuit, against the University of California system and individual campuses, was filed on February 11, 2025. A second suit against Cornell followed in early April 2025. As of April 12, 2026, both are still ongoing.
That makes this less a settled legal story than a revealing test case: not just for post-affirmative-action admissions challenges, but for what happens when consumer AI tools lower the cost of bringing a complicated discrimination claim.
What is actually alleged
The Zhong family's theory is straightforward, even if proving it is not. After Stanley's rejections were finalized in spring 2023, they began investigating what they saw as disproportionately high rejection rates among highly qualified Asian American applicants. Their complaints argue, in part, that Zhong's credentials were enough for a Google software engineering role but somehow not enough for undergraduate admission at many top schools.
The UC case is real and docketed. A court order confirms the caption Zhong et al. v. The Regents of the University of California, et al., No. 2:25-cv-00495-DAD-CSK. But the legal status matters here: these are still allegations, not findings accepted by a court. In November 2025, the judge denied Nan Zhong's motion for sanctions as premature because discovery had not begun. That is a useful reality check. The case is active, but it has not yet reached the point where the family's claims have been tested through full evidence exchange.
UC, for its part, has denied the allegations. The system has stated that race has not been a factor in admissions since 1996 because of Proposition 209, and that Asian Americans were 36.3% of its undergraduate population in fall 2024.
Why the Harvard case keeps showing up
A lot of the public framing around Zhong's lawsuits points back to Students for Fair Admissions v. Harvard and the companion UNC case. That is not because Zhong is suing Harvard. He is not. But the Supreme Court's June 29, 2023 decision in the Harvard and UNC cases changed the legal temperature around admissions challenges, especially those involving Asian American applicants and subjective review criteria, as Britannica and Wikipedia both explain.
In that litigation, Students for Fair Admissions alleged that Harvard used race as a specific and influential admissions factor and that Asian American applicants were harmed in part through lower "personal ratings"—subjective scores tied to qualities like likability, courage, and kindness. SFFA also alleged that Harvard's class composition reflected de facto racial balancing.
The Supreme Court ultimately ruled Harvard's and UNC's race-conscious admissions practices unconstitutional, citing problems including failure of strict scrutiny, racial stereotyping, and no meaningful endpoint for the use of race. A legal analysis of the majority's reasoning under strict scrutiny is outlined by the Duke Undergraduate Law Review.
That backdrop matters because it gave future plaintiffs a vocabulary and a litigation roadmap. But it does not mean every disappointed applicant now has a Harvard-style case.
The analytical gap in Zhong's lawsuits
This is where the distinction between signal and proof matters.
The Harvard record was unusually deep. It included years of internal data, dueling expert testimony, and detailed modeling about how applicants were scored across academic, extracurricular, athletic, and personal categories. SFFA's expert, Peter Arcidiacono, said he found a statistically significant penalty against Asian Americans in Harvard's personal and overall ratings. Harvard's expert, David Card, disputed that and said there was no statistically significant negative effect when the model was specified properly. In 2019, district judge Allison D. Burroughs ruled that Harvard did not intentionally discriminate against Asian Americans, even while acknowledging disparities in personal ratings. The Supreme Court later struck down the broader admissions framework on constitutional grounds anyway.
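Why could two credentialed experts look at the same applicant data and reach opposite conclusions? Much of the dispute came down to model specification: which control variables enter the regression. A toy sketch with synthetic data (invented for illustration, with no connection to the actual Harvard record or to either expert's model) shows how a group coefficient can look large or vanish depending on whether a correlated covariate is included:

```python
import numpy as np

# Synthetic illustration only: how omitting a covariate that is
# correlated with group membership can make a group dummy look like
# a large "effect" even when the outcome does not depend on group
# membership at all. All variable names here are hypothetical.
rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n).astype(float)     # hypothetical group indicator
quality = 2.0 * group + rng.normal(0, 1, n)     # covariate correlated with group
rating = 3.0 * quality + rng.normal(0, 1, n)    # outcome depends only on quality


def ols_coefs(y, cols):
    """OLS coefficients for y regressed on an intercept plus cols."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta


# Sparse model: rating ~ group. The group dummy absorbs the quality
# gap between groups and shows up as a large coefficient (around 6).
b_simple = ols_coefs(rating, [group])[1]

# Fuller model: rating ~ group + quality. The group coefficient
# collapses toward zero once the confounder is controlled for.
b_full = ols_coefs(rating, [group, quality])[1]

print(f"group coefficient, no control:   {b_simple:.2f}")
print(f"group coefficient, with control: {b_full:.2f}")
```

The same mechanism runs in reverse: adding or dropping controls can also make a real disparity disappear. That is why "was the model specified properly" was the live question between the experts, and why a complaint's statistics rarely settle anything before discovery and cross-examination.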
Zhong's UC and Cornell cases do not appear, from the materials available, to be at that stage. There is no indication in the reviewed reporting that a court has yet accepted the family's statistical theory or found discriminatory conduct. There is also no evidence in the available reporting that the family used specialized legal AI systems or proprietary admissions-analysis models. The reporting describes something more familiar: consumer frontier models used for research and drafting.
That matters because drafting a complaint and proving a complaint are different jobs. AI can help with the first one much faster than with the second.
Why AI changes this story even if it does not change the legal standard
The novel part here is not that AI found a hidden answer in admissions data. There is no public sign of that. The novel part is procedural: AI appears to have helped two pro se litigants assemble a complicated, document-heavy case after they failed to get legal representation.
That lowers one barrier to entry. It does not lower the burden of proof.
For universities, this probably means more complaints that are more polished than the typical self-filed case. The nearly 300-page complaint drafted with help from ChatGPT and Gemini already points that way. Courts may see more litigants arrive with long complaints, embedded citations, and statistical arguments that would once have required a law firm or a determined amateur with a lot of time.
But there is a catch. AI can expand access to filing; it does not create access to discovery, expert testimony, or admissible evidence. In discrimination cases, those later stages often determine whether a lawsuit survives. That is especially true in admissions disputes, where schools can point to holistic review, institutional constraints, and applicant pools far deeper than any one résumé.
UC has a different legal posture than Harvard did
Another reason not to overread this case: the University of California is not litigating on the same terrain Harvard did.
Harvard's case centered on explicit race-conscious admissions policies. UC says race has not been considered since Proposition 209 barred that in 1996. So if the Zhong family is trying to prove unlawful discrimination, the challenge is not simply to say "the Supreme Court banned affirmative action." That part was already old law in California. The harder question would be whether race-neutral systems still produced unlawful bias in practice, whether through proxies, subjective review, implementation, or some combination.
There is at least one piece of public material that could become relevant. A California State Auditor report (2019-113) found weaknesses in UC admissions oversight, including that some campuses allowed readers to see names and other personal attributes and had not taken enough steps to guard against reader bias. That is not the same as proof of anti-Asian discrimination. But it does suggest a more specific line of inquiry than the broad claim that elite admissions outcomes alone must be discriminatory.
In other words, the strongest analytical version of this lawsuit may not be "Stanley was too qualified to reject." Plenty of highly qualified students get rejected from hyper-selective schools. The more legally durable version, if one emerges, would likely have to show how a supposedly race-blind system translated subjective review into a measurable disadvantage.
The Google angle is striking, but limited
The Google detail is what makes this story travel. Zhong was turned down by many top universities, then hired by Google as a full-time software engineer shortly after turning 18. The complaints use that contrast aggressively, arguing that his credentials were enough for a role typically associated with PhD-level qualifications or equivalent experience.
That is compelling rhetoric. It is not necessarily a legal comparator.
Companies and universities select for different things, on different timelines, with different constraints. A software engineering offer says something real about Zhong's technical ability. It does not, by itself, establish that any particular admissions office discriminated against him. Nor is there any public indication, as of April 12, 2026, that Google has commented on the lawsuits or on the family's use of AI.
Still, the contrast does have one analytical use: it sharpens public skepticism about highly subjective admissions systems. If a student can move from mass rejections to a Google engineering offer in the same year, it becomes easier for critics to argue that elite admissions are measuring something other than pure academic or technical promise. Whether that "something" is unlawful bias, institutional preference, unpredictability, or simply competition from thousands of similarly exceptional applicants is exactly the unresolved question.
A short timeline
Spring 2023: Zhong's rejections are finalized; he is turned down by 16 of 18 colleges and enrolls at UT Austin.
June 29, 2023: The Supreme Court decides the Harvard and UNC cases, striking down race-conscious admissions.
September 2023: Google extends Zhong a full-time software engineering offer; he leaves UT Austin.
February 11, 2025: The Zhongs file suit against the University of California system and individual campuses.
Early April 2025: A second suit is filed against Cornell.
November 2025: The judge in the UC case denies Nan Zhong's motion for sanctions as premature, discovery not yet having begun.
What to watch next
There are really three separate stories here, and they should not be collapsed into one.
First, there is the admissions discrimination claim. That will rise or fall on evidence the public has not yet seen in full. The key thing to watch is whether discovery produces internal admissions data, reviewer guidance, or statistical patterns that can be tested in court.
Second, there is the post-SFFA litigation wave. Zhong's case suggests the Supreme Court's 2023 ruling did more than change policy. It also encouraged applicants to revisit old assumptions about what can be challenged, even at institutions that say they no longer consider race.
Third, there is the AI-for-litigation question. Here the lesson is already clearer. Consumer AI tools appear to be making it easier for people without counsel to file complex cases and sustain them longer than they otherwise could. That does not mean the claims are stronger. It means the courthouse may get busier.
For readers trying to make sense of it now, the practical takeaway is narrow: treat the lawsuits as live tests, not verdicts. The filings show how one family is using AI to push an admissions-bias theory into federal court. What they do not yet show is whether that theory can survive the harder stages of litigation, where polished drafting stops mattering and evidence starts doing the work.