Abstract

Excerpted From: Anjali Vats, (White) Racial Arithmetic as Intellectual Property Architecture, 103 Texas Law Review 1581 (2025) (120 Footnotes)

 

In The Signal and the Noise, a manifesto for our cognitively dissonant post-fact, pro-statistics era, Nate Silver writes: “Data-driven predictions can succeed--and they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.” He continues: “[O]ur bias is to think that we are better at prediction than we really are.” The devil, of course, is in the details of determining which data-driven predictions are failures and which ones are successes. Maria Kuecken observes in the LSE Review of Books:

[A] data-driven claim does not a good prediction make. Much of the information out there is simply noise “which [sic] distracts us from the truth.” If we sift through enough of this noise, we are likely to come up with relationships that seem meaningful when they don't truly exist or predictions that are way off the mark from reality.

Silver's suggestion for minimizing “noise” is to aspire “to be less subjective, less irrational, and less wrong.” Yet these are neither straightforward nor universal aims.

Public cultural conversations about prediction came to a head in the 2016 United States presidential election, when political pundits, including Silver, came under fire for forecasting that Hillary Clinton would likely be the next President of the United States. While the nuance of those debates is beyond the scope of this Essay, public investment in election predictions showcases just how much epistemic credibility is attached to data and statistics despite their profound limitations. This Essay focuses on how quantitative data functions in the context of intellectual property, specifically vis-à-vis racial justice. In order to anchor arguments for intellectual property equity against frequently racialized claims of infringement, scholars and activists have increasingly turned to empirical research (e.g., data regarding demographic inequity in copyright and patent registrations, economic costs of infringement, and benefits of distributive justice frameworks) as evidence for their claims. These statistics are often deployed in public policy conversations to advocate for diversity, inclusion, and equity, as well as reform of intellectual property systems. They are frequently used to counter industry and lobby claims about the catastrophic costs, particularly to the United States economy, of practices such as sampling, piracy, and counterfeiting of music, goods, and pharmaceuticals, as well as to highlight the continuing need for copyright, patent, and trademark interventions focused on racial justice.

Theodore Porter observes that the Cold War fascination with particle physics also produced a commitment to administrative quantification, a practice of bureaucratically applying social scientific methods to populations with the goal of “achieving some kind of impersonal validity by following the rules.” Porter argues that this form of measurement created a critical distance between quantifier and subject, an air of objectivity through which subjectivity itself could be negated. Recently, critical race theorists and ethnic studies scholars have taken up similar critiques. Their approach, described as QuantCrit, raises questions about how and when numbers are leveraged in public cultural conversations in the service of some policy outcomes but not others. In 2018, a special issue of Race, Ethnicity and Education examined how quantitative methods could assist critical race theory research--a full fifteen years after asking that same question of qualitative methods. The authors observe: “By exploring how quantitative methods are (mis)applied, (mis)interpreted, and often (mis)characterized, these articles remind us that quantitative approaches can't simply be adopted for racial justice aims.”

The political mis-deployment of racial statistics produces specious arguments that can be weaponized against particular policy outcomes, even when the underlying data may support those outcomes. The concerns I want to raise in this Essay, then, relate to how architectures of racial proof operate in intellectual property law. I ask two questions: (1) How has quantitative data been used to justify racism and inequity, notably anti-Asian and anti-Arab rhetorics, in intellectual property contexts? and (2) How have scholars attempted to combat racism and defend racial equity using quantitative data? While I cannot comprehensively answer these questions in this short space, I can offer a set of observations from which to start a larger conversation. These observations also necessarily encourage evaluation of discipline and methodology, e.g., how humanistic and empirical data fit in conversations about intellectual property maximalism/minimalism, and how we ought to deploy them in future struggles for a more equitable world. Answering the questions I raise here also suggests a need to consider how persuasiveness operates in practice, i.e., when and how the illusion of objectivity produced by quantification sways people more than “mere” qualitative data.

Elizabeth J. Kennedy recently argued in the MIT Sloan Management Review that “[r]acial equity strategies must be systemic, race-explicit, and outcome-oriented if they are to succeed.” Highlighting the need to ensure that quantitative data is used without bias, she offers a five-step plan that includes: (1) collecting, disaggregating, and analyzing race and ethnicity data; (2) identifying racial disparities in workforce outcomes; (3) naming race when speaking about disparities; (4) investigating structural causes of racial disparities; and (5) developing strategies to eliminate patterns that produce differential outcomes by race. Kennedy's approach is ostensibly the best-case use of quantitative data to correct racial inequity because it is explicit about identifying racial causation and eliminating racial bias in a verifiable manner. Yet ethnic studies scholars illustrate how the process of producing statistics can go wrong in the context of racial justice, through the deployment of “racial arithmetic” that weaponizes numbers against those who most need assistance. Michael Rodríguez-Muñiz writes that “[e]thnoracial statistics, or what political scientist Kenneth Prewitt has aptly called 'statistical races,’ are political abstractions that represent a way of thinking and enacting 'race’ in numerical, aggregate terms.” This double entendre calls attention to the statistical production of race as well as the use of numerical argumentation to pit racial groups against one another. Racial arithmetic describes what Porter might characterize as the deployment of quantitative data in ways that prey upon political and cultural desires to “idealize automatic or mechanical standards of knowledge, such as the reduction of judgment to a calculation.” Rodríguez-Muñiz's argument suggests that the production of quantitative data itself socially constructs race, through relational comparisons between socially constructed groups. This suggests a need for the relational study of racial inequity, i.e., how racial categories are described relative to one another.
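To make the first two steps of Kennedy's plan concrete, consider a minimal sketch in Python. The dataset, column names, and threshold below are illustrative assumptions rather than part of Kennedy's framework; the code simply shows what disaggregating an outcome by race and screening it for disparities looks like in practice:

# A minimal, hypothetical sketch of steps (1) and (2): disaggregate a
# workforce outcome by race and flag disparities. All data and thresholds
# here are illustrative assumptions, not Kennedy's specification.
import pandas as pd

workforce = pd.DataFrame({
    "race": ["Black", "Black", "Latino", "White", "White", "Asian"],
    "promoted": [0, 1, 0, 1, 1, 1],
})

# Step 1: report the outcome for each group rather than a single aggregate rate.
rates = workforce.groupby("race")["promoted"].mean()

# Step 2: compare each group's rate to the highest group's rate; ratios below
# 0.8 echo the EEOC's "four-fifths" screen for adverse impact.
disparity = rates / rates.max()
print(disparity[disparity < 0.8])

Even this toy example suggests why step (3), naming race, matters: the aggregate promotion rate here is roughly 0.67, a figure in which the group-level disparities are entirely invisible.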

I argue here that racial arithmetic is a common tactic in intellectual property infringement loss analyses that report on costs to the United States economy. These assessments often invoke racial and/or national identity to create an enemy that purportedly threatens the United States. For instance, the 2024 Review of Notorious Markets for Counterfeiting and Piracy (hereinafter Notorious Markets) begins: “Commercial-scale copyright piracy and trademark counterfeiting cause significant financial losses for U.S. right holders and legitimate businesses, undermine critical U.S. comparative advantages in innovation and creativity to the detriment of American workers, and pose significant risks to consumer health and safety.” It then goes on to list these “notorious markets.” Peppered with seemingly alarming but largely decontextualized data, Notorious Markets pits nations against each other in a racialized infringement competition. The entry for China states: “Counterfeit and pirated goods from China, together with transshipped goods from China to Hong Kong, China, accounted for 84% of the value (measured by manufacturer's suggested retail price) and 90% of the total quantity of counterfeit and pirated goods seized by U.S. Customs and Border Protection (CBP) in 2023.” The entry for Peru notes: “In 2024, Peru's National Police conducted 36 police operations on Gamarra Emporium, reportedly seizing counterfeit items with a total street value of $15.7 million and resulting in 96 arrests.” The racial arithmetic highlighted in Notorious Markets makes a case for cracking down on foreign infringers. Numbers give the document an air of objectivity and legitimacy--yet the quantitative data employed is both difficult to verify and difficult to compare despite marking some groups, e.g., Asians, as worse infringers than others, e.g., Latinos. Adding statistics appears to substantiate claims of infringement--without evidence of injury to consumers, tradeoff with United States sales of the same goods, or harm to the United States economy. Good intellectual property citizens are distinguished from bad ones through this numerical shell game.

This is precisely the type of administrative quantification that Porter critiques because it stands in for contextualized and historicized argumentation. Making standalone numerical infringement claims suggests a monumental problem that might not be so troubling when read in the context of economic development or industrial production more generally. Asking questions about these numbers shows how quickly they unravel: What was the actual value of the goods seized by CBP from China and Hong Kong in 2023? How does a street value of $15.7 million compare to the value of counterfeit items produced globally? What goods were represented? Why were consumers so eager to obtain these particular goods? How, if at all, would consumers obtain these goods otherwise? What tangible harms did the infringements cause? These are the types of questions that reveal the slipperiness of quantitative data, as well as its embeddedness in historically evolving logics of “rationality” and “science.” Such data, even when poorly analyzed, stands in for independent critical thinking and good judgment.
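The base-rate problem can be made concrete with a deliberately hypothetical calculation; the dollar figures below are illustrative assumptions, not numbers drawn from Notorious Markets. If CBP seized counterfeit and pirated goods with a total claimed value $V$, then China's reported share is $0.84V$; whether that sum is alarming depends on a denominator the report never supplies, such as the total value $T$ of imports from the same countries over the same period:

\[
\text{seizures as a share of trade} = \frac{0.84\,V}{T}.
\]

Assuming, purely for illustration, $V = \$1$ billion and $T = \$500$ billion, the share is $0.84/500 \approx 0.17\%$--a figure that reads very differently from the headline “84%.”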

Administrative quantification--and the air of objectivity associated with it--has a history in the context of intellectual property. Authors were first granted limited monopolies in their works under the Statute of Anne of 1710--a term of at most twenty-eight years--to protect those works against infringement. The same history unfolded for inventors with respect to patents, established through English letters patent in the 1700s. By the late 1800s, copyrights and patents had become entrenched in the U.S. and U.K. as economic objects that could be propertized and monetized for sale in commerce. Their growth accelerated on both sides of the Atlantic in the 1900s, as the demand for culture industries, scientific knowledge, medical treatments, and military technologies exploded. By the 1960s, intellectual property in the United States was well established as a political and economic object. Perhaps no one better exemplified the politics of intellectual property, the theme of this symposium issue, than Jack Valenti, a central figure in the development of the American film industry. His career trajectory and comments to Congress on the Betamax and semiconductors in the 1980s illustrate the central role of copyrights and patents in U.S. economic and racial politics.

The rise of intellectual property as commodity prompted the emergence of copyright and patent valuation as a cottage industry with its own claims to objectivity. As copyrights and patents have been increasingly drawn into calculative discourses, scholars and activists have sought to respond to industry valuations, including those making racialized claims, using their own quantitative data. For instance, Michael Masnick of Techdirt has written extensively on the inflation of copyright infringement loss numbers. These empirical approaches to intellectual property research, which have emerged in the past twenty-five years, merit closer examination. This Essay, then, proposes an intellectual history of the valuation of infringement of copyrights and patents, particularly as it has drawn lines based on race and nation, before considering how academics, particularly those who are invested in racial equity, are now responding to the uptake of statistics in those spaces.

This Essay proceeds in four parts: Part I, “A Short History of Intellectual Property and Economic Loss,” traces how copyrights and patents became properties subject to data collection from the 1850s to the present. Part II, “The Math Isn't Mathing, or How I Learned to Stop Worrying and Love Racial Arithmetic,” thickens the concept of “racial arithmetic” as a tool of critical legal analysis, specifically with respect to racial justice and intellectual property. Part III, “Economic Valuation in the Liberatory Politics of Intellectual Property,” considers how critical race intellectual property scholars have leveraged empirical methods and quantitative data to advocate for racial justice. Finally, Part IV, “Equity Mathematics and the Futures of Racial Equity in Intellectual Property,” considers how scholars and activists might more effectively use their racial justice scholarship to critique the racial arithmetic that undergirds copyrights and patents.

In Quants & Crits: Using Numbers for Social Justice (or, How Not to Be Lied to with Statistics), Claire E. Crawford, Sean Demack, David Gillborn, and Paul Warmington remind their readers: “Even when people have a gut-feeling that the numbers (or their interpretation) are not correct, many lack the skills to seriously explore and critique quantitative data.” This becomes problematic when quantitative data is used to produce a certain policy outcome--as in the case of intellectual property lobbying--instead of to make good-faith arguments about the nature of the policies required to achieve racial equity.

 

[. . .]

 

This Essay has laid out how intellectual property, specifically copyrights and patents, is justified through administrative quantification, a practice that imposes social scientific approaches on quantitative data produced through complex government and industry collaborations. When quantitative data about race and ethnicity is deployed for political purposes, without the consent or awareness of audiences, it is rightly described as racial arithmetic intended to persuade and even manipulate. Demack, Gillborn, and Warmington observe that “[t]here are no inherent reasons why critical race theorists should dispense with quantitative approaches entirely but they should adopt a position of principled ambivalence, neither rejecting numbers out of hand nor falling into the trap of imagining that numeric data have any kind of enhanced status or value.” This may seem outrageous to some, but it is the only path forward that attends to the complex cultural and political histories of administrative quantification and cost-benefit analysis with healthy skepticism and grounded honesty. As copyright and patent valuation is increasingly reduced to mere numbers, quantitative data will remain important--but so too will humanistic critiques.

Achieving genuine racial justice in the context of intellectual property requires confronting the historical emergence of cost-benefit analysis as the norm for evaluating policy decisions, as well as deconstructing the overarching ideological systems in which it is grounded. Racial capitalism, neoliberal rights, and property ownership all threaten racial justice goals, especially when they are upheld through racial arithmetic left uncontested by racial justice advocates. QuantCrit, an approach that attends to biases in racial justice-related quantitative data, provides one path for addressing these issues, particularly when coupled with trenchant humanistic critiques. However, QuantCrit is only effective when quantitative data is produced and deployed with awareness of, and accountability for, its likely consequences in policy conversations. While scholars have taken a number of distinct approaches to producing empirical research about copyright and patent inequity, I contend that this scholarship can operate as a more powerful tool for addressing racism in this moment if its authors draw definitive conclusions and directly engage with the racial arithmetic that drives copyright and patent policy. Investigating, understanding, theorizing, and addressing how the federal government and culture industries leverage racial arithmetic for their political and cultural benefit, particularly via rally-around-the-flag nationalism, will make intellectual property scholars invested in racial justice more effective in dismantling the status quo commitments that continue to impede racial justice.

This will, in turn, serve all of us in the battles ahead.


Anjali Vats is Associate Professor of Law with a secondary appointment in Communication at the University of Pittsburgh.