Can AI Improve My Brief?


Noah Coco '26 
Managing Editor 


In the previous edition of the paper, I authored an article covering an event hosted at the Law School highlighting the impending impact of artificial intelligence (AI) technology on legal practice. In short, AI technologies are expected to increasingly perform legal tasks and displace the legal workforce. Attorneys who can most effectively deploy AI technologies will be best positioned to succeed in the transforming legal industry.

With that portent looming large, I wanted to experiment with some of the legal AI tools currently available. As a new entrant into the space, I was unfamiliar with the current landscape of legal AI tools, so I began with one that recently showed up in my email inbox: Westlaw’s Quick Check.

Westlaw pitches Quick Check as “[c]utting-edge AI combined with Westlaw's editorial excellence [delivering] relevant authority traditional research might miss.” As the description suggests, Quick Check is a document analysis tool powered by an AI model that, among other useful features, purports to analyze uploaded documents and suggest authorities that may be relevant to the legal issues identified in the document but that were not cited in it. The tool generates a report that lists relevant authorities organized by the headings from the original document, and it displays the outcome of the recommended cases along with excerpts of case text relevant to the legal issue analyzed.

I approached Quick Check with a simple challenge: can it improve my LRW brief? Or, failing that, can it improve a hypothetically bad version of my brief, one emblematic of having disregarded two semesters’ worth of LRW class sessions? Although I presume that at least one-third of the Law School has by now analyzed this same legal issue, I will give a brief crash course on the legal question. The case concerns whether digital sampling of sound recordings constitutes per se copyright infringement or whether a de minimis exception applies. The first main argument presented in the brief supports the rule that digital sampling constitutes per se copyright infringement. Assuming the court does not adopt this rule, however, the second main argument maintains that the particular instance of digital sampling in the case is not de minimis as a matter of law, first under a test called the fragmented literal similarity test, and second under a test called the observability test.

I first uploaded a moderately complete draft of my own LRW brief. Due to the gracious beneficence and tutelage of the Law School’s own Professor Ruth Buck ’85, I was very confident that nearly every authority relevant to my analysis was already accounted for. The results of the Quick Check report confirmed my suspicions.

As a preliminary matter, Quick Check correctly identified the headings labeling the two main arguments, as well as the three sub-arguments contained under each. However, as expected, the suggestions bore meager relevance to the precise legal issues analyzed in the brief. For example, although the recommended cases for the first sub-argument of the first main argument did all pertain to music and copyright infringement, they all dealt with different forms of infringement, none of which concerned digital sampling. I was nonetheless impressed that the top-recommended cases were all within the jurisdiction of the Second Circuit—four of the cases were decided in the Southern District of New York, and the fifth was decided by the Second Circuit itself. If not a coincidence,[1] then the AI model’s ability to recognize the relevant jurisdiction from the brief is admittedly impressive.

Although the failure to identify additional relevant cases is excusable, since nearly all relevant cases were likely already included in the brief, less excusable was Quick Check’s failure to recommend any relevant or useful secondary sources. Of the scant twelve recommendations across all the headings, just two authorities—one, an alphabetically listed table of case names from a treatise on copyright; the other, the digital sampling portion of American Jurisprudence Proof of Facts—accounted for seven of the total, and neither was particularly helpful. This is a striking result, since a basic targeted search in Westlaw’s generic search bar yields hundreds of relevant law review articles and other authorities. It is surprising that not even the most cited law review articles on the topic were recommended.

Although the initial test of the “control” brief produced unsurprisingly mediocre results, I next challenged the Quick Check tool with an “experimental” brief from which I removed key text and citations. First, I completely removed the discussion of the provision of the Copyright Act of 1976 that is most relevant to sound recording copyright infringement.[2] Second, I removed the discussion of one of the few cases in the Southern District of New York (and the Second Circuit broadly) where the de minimis exception had been applied to digital sampling of a sound recording copyright.[3] Third, I omitted one of two Second Circuit cases applying the fragmented literal similarity test.[4] Finally, I omitted three cases where the fragmented literal similarity test had been applied to digital samples of sound recordings.[5] Maybe I was a little heavy-handed with the omissions, but I wanted to see how much Quick Check could help me if I had been completely unconscious in every LRW class of the year.

After uploading the lackluster brief, I first observed that recommending relevant statutory provisions is not actually a feature offered by Quick Check. Pity. Quick Check did, however, provide a backdoor of sorts: the results for the first sub-argument of the first main argument recommended the same relevant case twice, each time highlighting text that cited the missing statutory provision. Perhaps if I had not read the cases carefully enough the first time, these results could have offered a second chance. The results did impress me for another reason, however. Again assuming no coincidence, Quick Check properly recommended the case most beneficial to my side of the argument, rather than the competing case for the opposing side that unflinchingly eviscerated the statutory argument put forward in the first.

The recommendations fared marginally better in identifying the omitted cases. Only one of the omitted cases appeared among the top five recommended cases displayed on the main page of the report.[6] However, three more of the omitted cases surfaced when I clicked on the “See additional cases” links appended to the main report.[7] Although these cases were not recommended under the same headings from which they had originally been removed, it is difficult to fault that lack of precision in what is otherwise a nuanced legal issue. More troubling were the dozens of irrelevant cases recommended and the failure of Quick Check to recommend the final omitted case, which actually applied the fragmented literal similarity test to six different digital samples.[8] Moreover, it should be unsurprising by now that Quick Check again failed to recommend any appreciably beneficial secondary sources.

In conclusion, Quick Check will likely not meaningfully improve either a relatively good or a relatively bad LRW brief (yet). The main problem was that the top recommendations were generally not relevant and missed the main legal issue. When Quick Check did identify relevant cases missing from the brief, they were not matched to the appropriate argument headings, and by the time I found them, the process felt no more efficient than working through the results of a targeted search in the generic search bar. Nonetheless, some characteristics of the results did impress me, and the tool should not be entirely discounted. The results of this dubiously rigorous study should also be taken with a grain of salt because they seem incongruous with the general narrative surrounding generative AI, particularly models used for legal research.[9] I would highly recommend testing the tool for yourself. Perhaps you will have more success than I had, but at the very least you will be preparing yourself to adopt the technologies that will likely shape your legal career.


---
cmz4bx@virginia.edu 


[1] I find it improbable that this was purely a coincidence since many music copyright cases naturally arise out of California in the Ninth Circuit.

[2] For those of you in the know, 17 U.S.C. § 114.

[3] Again, for those who care, TufAmerica, Inc. v. WB Music Corp., 67 F. Supp. 3d 590, 591-98 (S.D.N.Y. 2014).

[4] Same disclaimer, Ringgold v. Black Ent. Television, Inc., 126 F.3d 70, 75 (2d Cir. 1997).

[5] For the last time, TufAmerica, Inc. v. Diamond, 968 F. Supp. 2d 588, 603 (S.D.N.Y. 2013); New Old Music Group, Inc. v. Gottwald, 122 F. Supp. 3d 78, 97 (S.D.N.Y. 2015); Williams v. Broadus, No. 99 CIV. 10957 MBM, 2001 WL 984714, at *4 (S.D.N.Y. Aug. 27, 2001).

[6] TufAmerica, Inc. v. WB Music Corp.

[7] Ringgold v. Black Ent. Television, Inc.; New Old Music Group, Inc. v. Gottwald; Williams v. Broadus.

[8] TufAmerica, Inc. v. Diamond.

[9] For a more redeeming experience with legal AI tools, check out Westlaw’s other AI model, Ask Practical Law AI. In hindsight that may have been a more successful article.