Sonia Katyal and Charles Graves Explore the Use of Trade Secrecy to Conceal Algorithmic Decisionmaking
Publication Date: April 27, 2022

“Corporate and government actors have pushed to transform the law of trade secrecy into one of the most — if not the most — powerful tools to ensure concealment of information,” law professors Charles Graves and Sonia Katyal wrote in an article published in the Georgetown Law Journal in June 2021.
“The irony is that this has happened at the very same time that the opaque nature of algorithmic decisionmaking, coupled with the new interplay between government agencies and private technologies, has created a crisis regarding access to information by journalists, regulators and others working in the public interest.” - quoted in the Los Angeles Times article, “Waymo Sues State DMV to Keep Robotaxi Safety Details Secret”
In their article, “From Trade Secrecy to Seclusion,” law professor Sonia Katyal, UC Berkeley, and adjunct law professor Charles Graves, UC Hastings Law, argue that trade secret law is being applied beyond its intended purpose of protecting intellectual property against misappropriation and is increasingly being used as a tool for open-ended concealment. The authors state, “The law is moving from trade secrecy to trade seclusion,” a shift that “raises serious concerns, ultimately distorting the flow of information that should be available to the public.”
Professors Katyal and Graves have collected and identified a variety of nontraditional cases in order to bring to light the breadth of trade secrecy arguments being made in different areas of law. They then classify these diverse claims into three categories:
- investigatory concerns involving journalists and whistleblowers;
- delegative concerns where the government relies on private technologies, such as automated decisionmaking and artificial intelligence;
- dignitary concerns where employers seek control over employee attributes, such as diversity data and workplace harms, beyond the normal context of employer/employee trade secret lawsuits.
“From Trade Secrecy to Seclusion” provides several examples of trade secrecy arguments in each of the above categories. This post focuses on the second category: the use of algorithmic decisionmaking and artificial intelligence technologies developed by private entities for use in government functions such as criminal justice, voting, and education.
Below are select excerpts from “From Trade Secrecy to Seclusion” by Charles Tait Graves and Sonia Katyal (Georgetown Law Journal, Vol. 109, Issue 6, June 2021).
Government Entities Delegating to Software and Algorithmic Decisionmaking
Today, as several scholars have noted, government entities rely on an ever-widening range of private parties for any number of purposes—from the management of detention facilities, to the provision of voting machines, to reliance on algorithms to calculate Medicaid benefits and bail amounts and to assess educator performance.
Example: Voting Machine
In 2005, a voting machine company, Diebold Election Systems, refused to follow a North Carolina law that required electronic voting machine manufacturers to place their source code in escrow with an agent approved by the state board of elections. The law was designed to ensure fair elections by providing for limited government oversight of the tabulation process. However, Diebold chose to withdraw from servicing the state’s elections altogether rather than reveal its source code.
As this example shows, governance of our fundamental freedoms—the right to vote—has been outsourced to private companies, stripping the public (let alone the state) of the possibility of investigation or oversight, even with a protective order in place.
Criminal Justice and Algorithms
Although the intermingling of private engagement with public functions is not entirely new, what is unprecedented is the degree to which trade secrecy has more recently impeded public oversight. This produces a delegation of a government function—law enforcement and prosecution—to a private entity, where trade secrecy grants even further immunity to the prosecution within the criminal justice system. Because fact-finding and investigation become insulated from adversarial scrutiny through trade secrecy, this delegation also raises classic concerns about the reach of the Confrontation Clause of the Sixth Amendment.
Today, Automated Suspicion Algorithms (ASAs) apply machine learning to data with the purpose of identifying individuals who may be engaged in criminal activity, conflicting with the requirement of individualized suspicion under the Fourth Amendment. Aside from these constitutional concerns, trade secrecy makes it difficult to even discover, let alone investigate, these technologies and their implications. For example, as Elizabeth Joh has discussed, some companies require police to sign nondisclosure agreements about new surveillance technologies like “stingrays” (cellphone surveillance tools), promising not to disclose the existence of these technologies to defendants, courts, legislators, or the public.
These concerns are not limited to surveillance and predictive policing technologies; they extend to nearly every stage in the life cycle of a criminal justice case, including bail investigations, pretrial and trial evidence, sentencing, and parole. Today, algorithms, and the trade secrecy that envelops them, surface throughout many types of forensic technologies, including fingerprint analysis, ballistic analysis, firearm and cartridge matching, facial recognition, DNA analysis, and other AI-related tools. In addition, algorithms that are used to sentence defendants or parole prisoners have raised significant issues of racial bias. A ProPublica report studied Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), one of the most popular algorithms used to assess a defendant’s risk of recidivism and subsequently sentence that defendant based on this risk. When ProPublica tested the results from the proprietary algorithm used to predict recidivism, it discovered that the scores were wrong almost forty percent of the time and seriously “biased against black defendants, who were falsely labeled future criminals at almost twice the rate of white defendants.” Trade secrecy assertions can hobble oversight of these technologies, with significant implications for the rights of defendants.
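To make the kind of disparity ProPublica measured concrete, the sketch below (in Python, using invented records rather than ProPublica’s data or COMPAS’s model) shows how one would compute false positive rates separately for two groups. A defendant cannot run even this simple check on a tool like COMPAS when the model, inputs, and validation data are withheld as trade secrets.

```python
# Minimal sketch: false positive rates by group. The records are invented
# for illustration only; they are not ProPublica's dataset or COMPAS output.

def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend but were
    nonetheless labeled high risk (a "false positive")."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    false_positives = [r for r in non_reoffenders if r["labeled_high_risk"]]
    return len(false_positives) / len(non_reoffenders)

# Hypothetical records: each person has a group, a risk label, and an outcome.
records = [
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": False, "reoffended": False},
    {"group": "A", "labeled_high_risk": False, "reoffended": False},
    {"group": "A", "labeled_high_risk": True,  "reoffended": True},
    {"group": "B", "labeled_high_risk": True,  "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": True,  "reoffended": True},
]

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
# Prints 0.5 for group A and 0.25 for group B: group A's false positive rate
# is double group B's -- the kind of disparate error rate ProPublica reported.
```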
Example: COMPAS - Algorithm Used to Assess Defendant’s Risk of Recidivism
Despite the problems that ProPublica documented, the Wisconsin Supreme Court, in State v. Loomis, upheld the use of COMPAS in sentencing in July 2016, although it recognized the potential for overreliance on such tools. In that case, Eric Loomis was charged in 2013 with crimes related to a drive-by shooting. He was sentenced to eleven years in prison; in arriving at that sentence, the court considered a COMPAS risk assessment report that labeled Loomis high risk for pretrial recidivism, general recidivism, and violent recidivism. Loomis appealed the sentence on due process grounds. The court rejected his concerns, reasoning that, “to the extent that Loomis’s risk assessment is based upon his answers to questions and publicly available data about his criminal history,” he could verify the accuracy of his answers.
Tellingly, however, the Wisconsin court did not discuss trade secrecy at all, even though Loomis was unable to determine how COMPAS arrived at its conclusion because the company refused to reveal its proprietary algorithm. Loomis is just one example of how trade secrecy has created insurmountable obstacles for defendants caught in the criminal justice system.
Here, the government essentially delegates its responsibilities—factfinding, investigation, pretrial, trial, and sentencing administration—to a software program. Moreover, the private status of the manufacturer facilitates the striking dismissal of core constitutional protections regarding the right to confront witnesses at trial. And judges further aid this process by insulating the state’s evidence, and related information, within an impermeable layer of trade secrecy.
Fallibility of Software and Algorithms
Ironically, in many such cases, state and federal courts often presume the reliability and accuracy of the techniques they rely upon. And yet computer scientists would argue exactly the reverse: the programs themselves do not automatically or inherently ensure reliability. As Christian Chessman writes, “computer programs are not more reliable than human statements because they are human statements—and no more than human statements.” Because they are tools of human design, they are often subject to human error, faulty assumptions, and mistakes, just like any other kind of evidentiary tool. This is perhaps the strongest reason why machine testimony deserves the benefit of adversarial scrutiny. These errors are structural in nature because they stem from the nature of computer programming itself—ranging from accidental errors (including technical coding errors) to outdated code, software rot, intentional and unintentional forms of bias baked into the code, failures of self-testing, and other processing issues.
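A purely hypothetical illustration (not drawn from any actual forensic or risk-assessment product) shows how a single coding mistake can silently change a program’s output while the program appears to run correctly:

```python
# Hypothetical illustration of a silent coding error -- not the code of any
# real forensic or risk-assessment product. The program raises no error,
# yet one operator quietly changes the result it reports.

def risk_score(prior_arrests, age):
    # Intended formula (as a designer might specify it):
    #   score = prior_arrests / age * 100
    # Bug: floor division (//) truncates to zero whenever prior_arrests
    # is smaller than age, so almost every score silently comes out as 0.
    return prior_arrests // age * 100

def risk_score_fixed(prior_arrests, age):
    # True division, as the formula intended.
    return prior_arrests / age * 100

print(risk_score(3, 40))        # 0   -- looks like "no risk"
print(risk_score_fixed(3, 40))  # 7.5 -- the value the formula was meant to yield
```

Without access to the code, neither result looks any more or less authoritative than the other; the error is only discoverable through the kind of scrutiny trade secrecy forecloses.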
Administrative Agencies and Use of Automated Decisionmaking
Today, as several scholars have observed, machine learning algorithms have been deployed to decide whom the Internal Revenue Service should audit, to manage and set social security and other public benefits, to interpret DNA evidence, to assess teacher performance, and to perform tasks in a host of other areas. The idea of delegation in the administrative state has been thoroughly explored by Danielle Citron in an early article, Technological Due Process, which described automated decisionmaking as “de facto delegations of rulemaking power.” In a later work, Citron and Ryan Calo explain that administrative agencies have come to rely on increasingly automated decisionmaking as a way of navigating an increasingly complex workload. [see: Ryan Calo & Danielle Keats Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 EMORY L.J. 797, 816–17 (2021)]
The rise of automated decisionmaking has significant costs regarding due process. It “impair[s] individualized process, making decisions about individuals without notice and a chance to be heard, and embedding rules that lack democratic imprimatur.” Delegations to private industry are especially troubling, several scholars have argued, because they circumvent the general practices of notice-and-comment rulemaking and other mechanisms that ensure deliberative participation. Less clear, but equally important, is the way in which trade secrecy concretizes the absence of due process—foreclosing not only transparency but also accountability and explainability. In these situations, trade secrecy—and the deference afforded to trade secrets’ owners—creates a double bind of deference, where the deference enjoyed by the trade secret owner can be readily mapped and extended to the results of these instruments of automated decisionmaking as well.
Example: Arkansas DHS Home Care Services
In one case, the Arkansas Department of Human Services (DHS) decided to replace its system of individualized, nurse-led evaluations for home care services with an automated system from a nonprofit that licenses its “Resource Utilization Group system” to various state agencies; the system is a machine learning algorithm that uses classifications and statistical calculations to arrive at a result. The new system, while promising efficiency, also “produced arbitrary and illogical results,” according to Calo and Citron. For example, the algorithm would indicate that a person had “no foot problem” if the person was a foot amputee, even though an amputee would need more assistance rather than less, and it underestimated the cost of multiple conditions.
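A simplified, hypothetical sketch (not the actual Resource Utilization Group logic) shows how a checklist-style assessment can produce exactly this result: if the instrument only scores foot problems that can be observed, an amputee with no foot registers as having none, and the computed care hours fall rather than rise.

```python
# Hypothetical, simplified checklist-style needs assessment -- not the actual
# Resource Utilization Group system. It illustrates how scoring only
# "observed foot problems" rates a foot amputee as having no foot problem,
# lowering their allotted care hours despite greater actual need.

BASE_HOURS = 30

def weekly_care_hours(assessment):
    hours = BASE_HOURS
    # The checklist asks only whether foot problems (ulcers, wounds,
    # infections) were observed. With no foot, nothing can be "observed".
    if assessment.get("foot_problems_observed"):
        hours += 8
    if assessment.get("needs_help_walking"):
        hours += 6
    return hours

amputee = {"foot_problems_observed": False, "needs_help_walking": True}
diabetic_with_ulcer = {"foot_problems_observed": True, "needs_help_walking": True}

print(weekly_care_hours(amputee))             # 36
print(weekly_care_hours(diabetic_with_ulcer)) # 44 -- more hours than the amputee
```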
After a number of physically disabled Arkansas residents discovered that their home care had been reduced by forty-three percent under the new system, they sued. The litigation led to an injunction preventing DHS from using the automated system until it was able to explain its reasoning, and it eventually culminated in a ruling that the state had “failed to follow [its own] rulemaking procedures” by failing to provide adequate notice to affected parties of the switch to the new methodology.
Conclusion
We have identified several areas of law where would-be trade secret owners are pushing the boundaries of laws designed to protect certain commercial, marketplace information, not to protect business reputations or forestall investigations of wrongdoing. Because claimants can assert trade secrecy in misappropriation lawsuits and confidentiality in open-records disputes with ease, we anticipate that the problems we discuss here may worsen in the years to come. Those working in disparate areas of legal scholarship and practice need solutions that are generalizable to new situations as they arise, in order to speak with a common voice about problems that share common elements. We offer practical paths toward reform in the service of a more balanced approach to trade secret law that takes better account of the many public interests at stake.
Read the entire article: “From Trade Secrecy to Seclusion” by Charles Tait Graves and Sonia Katyal (Georgetown Law Journal, Vol. 109, Issue 6, June 2021).
Sonia Katyal is the Roger J. Traynor Distinguished Professor of Law, Co-Director of the Berkeley Center for Law & Technology, and Associate Dean of Faculty Development and Research at the University of California, Berkeley School of Law. Professor Katyal’s work focuses on the intersection of technology, intellectual property, and civil rights (including antidiscrimination, privacy, and freedom of speech). Her current projects focus on artificial intelligence and intellectual property; the intersection between the right to information and human rights; trademark law and branding; and the intersection between museums, cultural property, and new media.
Charles Tait Graves is Adjunct Professor of Law at UC Hastings Law. He has taught a course on trade secret law since 2009. A partner at Wilson Sonsini in San Francisco, he has more than twenty years of experience handling countless trade secret matters in Silicon Valley and around the country.
Further Reading
The links below open TAP’s summaries of the articles, each of which provides a link to the full article online.
- Trademark Search, Artificial Intelligence and the Role of the Private Sector by Sonia Katyal and Aniket Kesari (Berkeley Technology Law Journal, Vol. 35, pp. 501-586, 2021)
- Private Accountability in the Age of Artificial Intelligence by Sonia Katyal (UCLA Law Review, Vol. 66, pp. 54-141, 2019)
- The Paradox of Source Code Secrecy by Sonia Katyal (Cornell Law Review, Vol. 104, No. 5, pp. 1184-1279, 2019)
- The Automated Administrative State: A Crisis of Legitimacy by Ryan Calo and Danielle Citron (Emory Law Journal, Vol. 70, Issue 4, pp. 797-846, 2021)