by Jason Morgan
In Isonomia Quarterly, vol. 3, issue 3, I published a short warning about AI’s ill effects on legal reasoning. My argument was that the complexity of legal frameworks, and the sheer number of laws, had already put comprehension of the body of law beyond the reach of human intellects. And so, I concluded, it is possible that artificial intelligence, whose powers of research, recall, and synthesis vastly exceed our own, may begin to interpret human legal codes as operating codes, taking over jurisprudence from humans and turning the law into an instrument of control over people planetwide. Attention to the possible takeover of law by AI seems, to me, to be a matter of great urgency.
I shared the piece with some colleagues who work in the fields of legal history and practice. They disagreed with my premise and conclusions. With their permission, I have combined their comments into one rebuttal, which I present below. My colleagues wish to remain anonymous, but I think it is important to mention that they each have decades of experience as lawyers and as legal academics, and that they each were trained at some of the best law schools and academic institutions in the world.
My responses to their comments follow some of the paragraphs. For ease of reading, I have prefaced my colleagues’ paragraphs and my own paragraphs with the initials “LE,” for “legal experts,” and “JM,” standing of course for my name. [Editor’s note: I have also italicized the voice of the Legal Experts.]
LE: You tend to overestimate legal complexity. The intersection of law and individual lives is pretty simple. We couldn’t get through the day if we did not have a pretty clear idea of what we must do and what we may not do.
JM: It very well may be that I overestimate legal complexity. However, I don’t think there is a connection between legal complexity and how regular people live their lives each day. John Hasnas’ “The Obviousness of Anarchy” strikes me as soundly argued and empirically true. The reason I don’t commit murder, don’t drive on the wrong side of the road, and don’t pour bleach on my neighbor’s flowers is that those things are morally wrong, in the case of murder and the destruction of property, and would cause chaos, in the case of all three. In other words, social order and the moral law are not derived from the law, whether we posit that the law is simple or complex.
My concern is that AI will exploit this gap between lived experience and the written law in order to control the former by means of the latter.
LE: The complexity of law arises in areas that do not directly affect the everyday lives of individuals. A large part of the complexity arises because the law cannot anticipate every possibility. Novel facts require interpolation, guessing at what the law’s “intent” was. There is also a lot of intentional complexity that supports lawyer rent-seeking in protected specialties. There are some things that only big firms with specialized departments can do. That is why they can charge $1,000 an hour. But this does not directly affect the everyday lives of individuals.
JM: Rent-seeking by big law firms may not directly affect me, but it almost certainly indirectly affects me. Companies that must pay high legal fees as a cost of doing business do not eat those costs; they pass them on to the consumer. (The same is true of the cost of lobbying, which is related to the cost of lawyering.)
One of the appeals of AI is that using it promises to reduce these kinds of costs. But whereas human beings tend to sin by seeking rents and otherwise maximizing personal advantage even at the expense of others, AI does not have even a flawed moral compass. It has no moral compass at all. What AI will do with the law, and what fallen humans have been doing with the law since law was invented, are, potentially, very different things. I will take the human fallenness I know over the AI I don’t know any day.
The incompleteness of law is a very good point, but this, too, is a liability when AI has entered the equation. The incompleteness of law is a function of the contingency of human life. Nobody knows what happens tomorrow, or even ten minutes from now. The law must guide without predicting, be a moral force without being epistemologically closed-off. Worming into such epistemological aporia is one of AI’s strong suits. All the worse that the law’s complexity can be said to have little to do with the lives of regular people. That separation only makes AI’s illicit work much easier.
LE: AI makes the process of finding the law, finding the right citations and cases, much easier than it was 50 years ago, when associates spent much of their time in law libraries, not in front of a PC. As you say, it is a super-paralegal, a super search engine. It has made legal work much more efficient just in the last few years.
JM: I would argue that AI is not just a paralegal or a search engine. It is also of a different type than either human or machine. One can see this indirectly in how it changes human interfaces with the law. Spending time in law libraries is a human endeavor. It is human work. Finding the right citations and cases in a library entails sifting through many other citations and cases that are also potentially related, and that all teach the seeker how human beings in other times have thought about the law and how it is supposed to work in human societies.
Also, to harp on my go-to chord again, efficiency is the backdoor through which AI enters, promising convenience but delivering enslavement.
LE: You seem to be fearful of a future in which “discretionary authority” of judges is surrendered wholesale to machines. I personally think it unlikely, at least during my lifetime. The two cases you cite, Loomis v. Wisconsin and the Julien Xavier Neals case, do not strike me as precursors of wholesale delegation of judicial authority. I don’t understand why Loomis is an AI case at all. Are we suspicious of using the SAT as a predictor of GPA? If not, then why should we be suspicious of using proven predictors of recidivism in sentencing? The methodology and reliability of the “algorithm” need to be tested, of course. Judge Neals’ bad cites, if anything, are a warning against wholesale delegation.
JM: I am very much fearful of such a future, and I think it is already partly here. The Loomis case and the Neals episode are not determinative, to be sure. They are both very minor affairs. But what they portend is enormous. Opening the door just a little to algorithmic legal reasoning (which I believe to be a contradiction in terms, because legal reasoning is a human art, an act of phronesis and not an act of arithmetic and averages) is tantamount to opening the door all the way. Human beings can be lazy. Shortcuts are hard to take when one must do the hard work of finding cases in big books in law libraries or scrolling through text on a screen, but are infinitely easier and more tempting when AI is standing by, promising to make things so much easier.
Judge Neals’ case and the AI hallucinations that it foregrounds are, I agree, very good reasons to avoid wholesale delegation. But given human nature, I think that Neals will not be the last person to cut corners, nor, in the end, will he prove the most egregious offender.
LE: Furthermore, for at least 100 years, statutes were the result of multi-point negotiations (and logrolls) in committees — and the legislature then deferred. There’s nothing particularly wrong with this — not theoretically ideal, perhaps, but it worked reasonably well.
In particular, it means that a statute that passed could make 5 rules, of which a majority of Congress opposed all. This is because the intensity of preference varied. Picture five committee members. Committee member Able was strongly in favor of Rule 1, but opposed to Rules 2-5. Baker was in favor of Rule 2 but opposed to the rest, etc. You can create scenarios in which there were 4 votes against each of the 5 rules, but all 5 passed because each rule had a strong proponent. The committee packaged all of the provisions favored by at least one of the committee members into a set, and all committee members agreed to support it. Congress in turn voted in favor of the logroll packages passed out of committee (largely without knowledge of specific content) because they could count on other congressmen favoring their own committee’s output in due course. This was standard procedure for at least 100 years. And it sort of worked.
Note that it makes a mockery of the question: what is the legislative intent behind rule 3? The legislative intent was that 80 percent of the committee thought it was a bad idea. But the other committee members (who opposed rule 3) voted in favor because — in exchange — the other committee members agreed to favor each of their favored rules.
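The arithmetic of the scenario above can be made concrete in a short simulation. A minimal sketch, with purely illustrative utility numbers (the +3/-0.5 intensities, and the member names beyond Able and Baker, are my assumptions, not anything from the committee example itself):

```python
# Illustrative logroll simulation: five committee members, five rules.
# Member i strongly favors rule i (utility +3) and mildly opposes each
# of the other four (utility -0.5). All numbers are hypothetical.

members = ["Able", "Baker", "Charlie", "Dog", "Easy"]

def utility(member, rule):
    """Hypothetical intensity of preference of a member for a rule."""
    return 3.0 if member == rule else -0.5

# Voted on one by one, every rule fails 1 yes to 4 no.
for rule in range(5):
    yes = sum(1 for m in range(5) if utility(m, rule) > 0)
    print(f"Rule {rule + 1}: {yes} yes, {5 - yes} no")

# Bundled into one package, each member's net utility is
# +3 - (4 * 0.5) = +1, so the package passes 5 to 0.
package_yes = sum(1 for m in range(5)
                  if sum(utility(m, r) for r in range(5)) > 0)
print(f"Package: {package_yes} yes, {5 - package_yes} no")
```

The point the sketch captures is that intensity, not headcount, drives the outcome: each rule is individually a 1-4 loser, yet the bundle is unanimously preferred because each member’s one strong gain outweighs four mild losses.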
Over the past 10 years, this system of working out logrolls broke down. It used to be that the logroll was worked out in the committee. The rest of the Congress went along with this, so it passed with bipartisan support. But the notion that one defers to the committee seems dead now. This means that the logroll is no longer bipartisan. Instead, it’s only within one party. This means, though, that you need virtually unanimous support from the majority party because otherwise it won’t pass. Given that the minority party is locked out, you need everyone from the majority party to play along in order to get a majority in Congress.
In turn, however, this means that each member of the majority can now destroy the logroll by threatening to defect. They can each threaten to vote no — and this matters because the majority party has such narrow margins that it can’t afford to lose even one vote. As a result, potentially every member of the majority party can threaten to defect and extort large gains (in terms of district-specific pork) in exchange for supporting the majority statute.
What does this mean? I guess my point is partly that it was never the case that legislators knew the details of a statute. If you were part of the relevant committee, you made your demands, and got your demanded pork made part of the logroll. You read that part of the statutory package, but largely ignored the rest. You voted for the package as a whole because it included your demanded pork provision. The rest of Congress went along, because that’s the way the machine worked — they knew that, over the long haul, everyone would play along with everyone else’s favored pork, and that their own turn would arrive in due course.
The machine has broken down. The parties are no longer playing a repeated game in which everyone cooperates in anticipation of future favorable votes for one’s own log-rolled provision. There is no across-the-aisle cooperation. Instead, each logroll is no longer bipartisan — it’s within the majority party, and it’s whatever is necessary for the entire party to go along. Given the difficulty involved, there are many fewer statutes, and each one is vastly longer than before.
JM: This is an excellent insight into how laws are made. I take all of the above points. But I think that, if anything, this behind-the-curtains look at legislative sausage-making (to mix metaphors) makes the threat of AI all the more existential to human liberty. Legislatures may very well be committee-level logroll schemes. I have no reason to doubt it. The upshot may be that laws are not expressions of deep cogitation about moral philosophy, but, rather, the product of grimy compromise among people whose job is to deliver pork in exchange for votes. As the logrolling mechanism has jammed, the laws have gotten longer, hence the glut of words that legislatures spit out each month.
But my point is that, for AI, it doesn’t matter where words come from. Legislative will is irrelevant. What matters is that there are lines and lines of text that, by convention and by consensus, both longstanding, govern human conduct. The more words the better, as this simply provides AI with more cotton for its gin. At any rate, given human nature, legislatures are unlikely to delegitimize the laws they churn out, even if the laws are specious and badly overwrought. If AI can interpolate, taking the words from legislatures and interpreting them in ways deleterious to liberty, then we are in a bad position.
Legislative processes matter, in other words, in human contexts, but with AI, the bypass is already built in. Logrolling or no logrolling, it is liberty that stands to get steamrolled in the end.
LE: Congress has been captured by special interests that control campaign finance. Congress people are not voting their conscience to do the right thing. They are voting for spending and regulation approved by their sponsors. Congress delegates the details to the bureaucracy (aka The Swamp) which in turn is run by revolving door appointees also captured by the special interests. The breakdown of the committee structure is a symptom of a larger phenomenon.
JM: Completely agreed. The question then becomes, What takes the place of politics, however dirty and corrupt it was, once politics has broken down? This is where my fears show their faces. If it is true that we already live in an age of superintelligent agents, and if those agents are growing more powerful by the day, and if, on a separate track, our politics is broken and getting broken-er and broken-er, then the convergence of the two trends, superintelligence and political dysfunction, seems almost inevitable. What happens next, to my mind, is unpredictable, except that it will surely not redound to the benefit of mankind.
Jason Morgan is an Associate Professor at Reitaku University. Send him mail.
