Legal Codes and Artificial Intelligence

by Jason Morgan

On July 4th, 2025, President Donald Trump signed into law the “One Big Beautiful Bill Act,” a massive hunk of legislation that had cleared the 119th Congress just days before. Trump’s Big, Beautiful Bill crossed the finish line at 940 pages, somewhat slimmer than the more than 1,100 pages of text introduced into the House of Representatives by Jodey Arrington of Texas in May. As has become customary, very few, if any, of the congressmen and senators who voted for or against the legislation had actually read it. It was only after the bill had become part of the federal legislative panoply that most people, including legislators, began to understand what the words in the bill entailed.

The ritual of discovering what legislation contains and means only after it has acquired the force of law by a putatively democratic process is more than just an embarrassment or a symptom of over-reliance on textual rules. Deferring to stacks of paper containing words that have gone largely unread, even by the people elected to write and debate them, is not only irresponsible and dangerous: it will have dire consequences for any attempt to achieve a planetary order in which the liberties of the individual are guaranteed. If we human beings cannot get hold of the words that bind us to one another in society, and to the consequences of defying those words’ meanings (whatever those meanings might be), then we seem destined to be ruled not by the words themselves, nor by the people who neither write nor read them into law, but by another intelligence sufficient to the task. Either we, as a species, master our legal codes and marshal them into liberty-and-peace-enhancing forces at both a human and a global scale, or Artificial Intelligence (AI) or Artificial General Intelligence (AGI), probably at the behest of a baffled humanity, will treat our legal codes as operating codes, excising humans from legislation and legal interpretation and turning our welter of laws into a realm of cogitation and interpretation entirely beyond human life.

The difficulty of instituting a planetary federal law can be gleaned by examining the complexity of the law in just one country, the United States. In America, legal codes already resemble, in some ways, operating codes to be implemented by machines. The past century’s worth of United States Code spans more than fifty titles running to nearly one million pages, more than any person could read and comprehend in a lifetime. Of the Federal Register, “the daily repository of all proposed and final federal rules and regulations,” Clyde Wayne Crews writes:

During the 2010s, 775,734 pages were added to the Federal Register (a simple average of 77,573 pages each year). Five years into the 2020s, which includes Trump’s final calendar year and Biden’s four, the average is 87,092 annually. Figure 11’s extrapolation for the remainder of the 2020s shows an expected inventory of 870,922 pages, approaching twice the level of the 1970s, when overregulation was a concern and liberalization in transportation and financial services occurred.

To this glut of words must be added legislation passed in the various states and territories, as well as the reams of rulings emanating from state and federal courts. We humans are producing laws and law-like texts at an extraordinary rate. Could it not be that a higher intelligence will interpret these legal codes as operating codes and take over the practice of law from human beings?

Indeed, the fact that it is difficult to determine how many state and federal laws, regulations, and other textual guidelines there are indicates that “the law” has taken on a life of its own. Average Americans do not understand what the law is, do not know how to begin to understand it, and, even when presented with a given passage from federal or state law, have little to no idea how to make sense of its wording. America is said to be a highly litigious society. Ironically, this may be because of the law’s impenetrability and sheer volume. Without appealing to specialists, few Americans can hope to navigate the ocean of words that govern the minutiae of their existence. But it is doubtful whether specialists are of much help any longer. Human beings have created a textual mesh that no single human mind could ever untangle, or even comprehend.

Perhaps it is the hopelessness one might feel on approaching a legal question in this milieu, one before which even Supreme Court justices quail, that has led many legal professionals to embrace AI. “AI tech has reached nearly every aspect of the criminal justice system,” a July 30, 2025, news report declares. Not only is that system “not ready” for the consequences of this AI invasion, as the report suggests; it also remains an unasked question whether the laws are now being written for human beings or for AI to interpret and apply.

Applicability

There is no shortage of research on AI and law. There are entire journals dedicated to the field. But as AI moves from a secondary, assistant role in judicial proceedings to a dominant one, with AIs acting as judges in cases, questions arise: can, should, and will laws be written by humans for AI adjudication? Given the complexity of law already, will humans be able to regain control of the legal codes they have produced, or is legal interpretation and application already destined to be taken over by AI, or AGI, in the near future?

For now, these questions lie largely unasked. Humans continue to express confidence that the practice of law will remain a human task. Even Supreme Court Chief Justice John Roberts thinks so. But consider that a 2016 state court case (State v. Loomis, decided by the Wisconsin Supreme Court) found that the use of algorithms in making decisions about recidivism risks does not violate due process. If so, then the door has already been opened for AIs to take over more and more of the work typically done by human minds acting on principles of human reason and justice. There are many other concerns about AI, of course, such as possible bias in AIs and the underlying large language models they use, and the way the impenetrability of law imperils the functioning of democracy. But the fundamental problem lies with infringements on the exercise of “discretionary authority” by human judges. Once this discretionary authority is peeled back, as it already has been, the overwhelmingly superior ability of AIs to examine, understand, and process huge amounts of written information means that it is only a matter of time before the peeling-back continues. At some point, the judge will find him- or herself a bystander in a system wholly under the sway of AI.

Many argue that AI can act as a paralegal or secretary, sifting through pages of legal documents to find the key information germane to a particular case or legal question. Perhaps this is true at a layman’s level, as AI has proven helpful to regular people confronting complex situations that seem to call for litigated solutions. For example, when Nan Zhong was preparing to take legal action on behalf of his son, Stanley, against several American universities over allegedly discriminatory admissions practices, using AI was a “game changer” in helping the Zhongs to level the playing field. Nan Zhong writes:

For highly politicized lawsuits like ours, the lawyers leaning left don’t want to take them, and the lawyers on the right think that the courts in California are too biased for us to possibly win. So we are forced to represent ourselves.
Our ‘legal team’ consists of ChatGPT and Gemini. They did a fantastic job of drafting the legal complaints. For $20 a month, 24/7 access, and no conflict of interest to worry about, we can hardly expect more! When the lawyer of one of the universities objected to the scope of our litigation hold notice, our AI-drafted response compelled them to back down and fully comply with our document retention request.
Furthermore, we are developing a trial preparation tool named TrialGPT. It would run trial simulations where different AI agents take the roles of the plaintiff, defendant, judge, jury, witness, etc. The goal is to find the optimal litigation strategy […], anticipate the defence moves, and maximize our win rate.
As pro se litigants with no legal background, our battle against well-resourced universities and their top legal teams is undeniably a David vs. Goliath fight. AI may just be the sling we need.

These are positive developments, as long as AI is used as a way to help overcome inherent biases in a given legal system. The problem is not with average people pushing through the thicket of legal codes toward greater equity. The problem is with legal practitioners using AI, even casually, to assist with information gathering and, worse, decision-making. Judge Julien Xavier Neals of the US District Court for the District of New Jersey recently made headlines when a ruling issued in his name was withdrawn because it contained non-existent quotes, purportedly from legal rulings, apparently concocted by a “hallucinating” AI. Pollyannaish assurances that “better tools” will solve this problem seem almost like hallucinations themselves. Judges are charged with understanding law and using their whole persons in rendering just decisions. Justice is not a matter of cold logic. It requires a heart to temper logic with mercy, and an experiential field, a universe of thoughts and emotions and modes of understanding, in which to work out how best to apply law, especially in difficult cases. Using AI as a secretary of sorts, as the Zhongs did, to increase fairness of access to procedural justice is one thing. Giving the reins of justice over to non-human intelligences, as Judge Neals may have done, is another thing entirely.

A concomitant problem is that human beings seem to have reached, or exceeded, their natural limits of information processing when it comes to the law. There is just too much law and too little time, so Judge Neals will surely not be the last judge who decides, or whose clerks decide, to let AI do the heavy lifting. This spells the end of the law as we have known it and the beginning of the law as something fit for machines and not human minds. Some analysts have raised concerns about software codes replacing legal codes as controlling texts for certain aspects of human behavior. These concerns are well founded, but the problem goes well beyond software codes blocking out, as it were, legal codes in human societies. The problem is that the laws we already write are being interpreted by AIs as software codes of a kind. The legal codes we have produced are now operating codes for AIs. We have bound ourselves to and by our laws. AIs know this, and so, by shutting us out of the social structures we have created (with millions and millions of words none of us now understand), it will be a simple matter for them to impinge on our liberties should they so choose.

It would seem that the time to think about how AIs will affect, probably deleteriously, human liberty in a global framework is now.


Jason Morgan is an Associate Professor at Reitaku University. Send him mail.