Frank Pasquale Examines the Hidden Algorithms that Determine Reputation, Credit Score, and the Price of Vacation Packages in “The Black Box Society”

By TAP Staff Blogger

Posted on January 23, 2015



In his new book, The Black Box Society: The Secret Algorithms That Control Money and Information, law professor Frank Pasquale explores the practice of data collection for profit.


Every day, corporations connect the dots about individuals’ personal behavior by scrutinizing the clues left behind by work habits, Internet use, and cellphone use. The data compiled, and the portraits that algorithms assemble from it, are incredibly detailed; Professor Pasquale argues they have reached the point of being invasive. And data-intensive technology continues to evolve and grow more ubiquitous.


The Black Box Society shows how hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy.


In an interview with Fleishman-Hillard’s TRUE, Professor Pasquale explains why he chose to use the black box metaphor:


Engineers use the term black box to describe a system where we can watch inputs going in and see outputs coming out, but in between exists some opaque, intangible process that transforms the inputs into outputs under a veil of secrecy. Google sometimes refers to its black box algorithm as the secret sauce that gives them a competitive advantage. Businesses are increasingly relying on this secrecy strategy when it comes to data, using the rationalization that people would game the system if they knew how it worked. So that’s the most obvious explanation of the term black box.


But when I came up with the title, I was also thinking about the black box on airplanes that most often comes into play in crashes. That black box is something that records all that’s going on in the plane and monitors something like 30,000 variables. If you look at the Internet of Things, big data and the ubiquity of sensor networks, it’s almost like we are all being monitored just like the plane. We all have our own black boxes. We expect our Internet use to be tracked, sliced and diced, and monitored, but now with the Internet of Things, even our real space beyond the Internet is being monitored as well.
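The engineers’ definition above is easy to make concrete. Here is a minimal illustrative sketch (not from the book; every feature name and weight is invented): the inputs and the output are visible to the person being scored, while the hidden weights stand in for the opaque, proprietary transformation in between.

```python
# Toy illustration of a "black box" scorer: inputs and the output are
# visible, but the transformation in between is hidden.
# All feature names and weights here are hypothetical.

def _secret_transform(inputs: dict) -> float:
    # Proprietary logic: the person being scored never sees this.
    weights = {"web_history": -0.4, "purchases": 0.7, "zip_code": -0.2}
    return sum(weights.get(k, 0.0) * v for k, v in inputs.items())

def score(inputs: dict) -> float:
    """Public interface: inputs go in, a number comes out."""
    return _secret_transform(inputs)

print(score({"web_history": 1.0, "purchases": 0.5, "zip_code": 1.0}))
# Output: -0.25 -- the subject sees only the number, never the weights.
```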


Professor Pasquale provides an example of how a lack of transparency in data mining can be harmful (also from the TRUE interview, “How Big Data’s Inaccuracy Hurts People”):


People need to be aware that their health data, and even health profiles of them, are being compiled with no protection under HIPAA. For anything not covered by HIPAA — and that’s virtually anything that isn’t collected by a health provider — there’s no regulation. Detailed profiles can be put together based on the sites you visit and the searches you perform. Most of this is used for marketing purposes, but it is filtering into other uses. What if prospective employers have access to lists of possible diabetics or people who are depressed or alcoholics? There are laws that restrict that. But in the world of big data in which we live, it’s impossible for people to find out everything that an employer is looking at.


For instance, if an employer tells you he is not hiring you because you’re a diabetic, that’s clearly illegal. But what if some scoring entity puts out a number about your robustness as an employee, or your potential costs to the company, and secretly includes data on a potential diagnosis? That’s problematic and almost impossible to prove, because employees almost never know what goes into an employer’s decision not to interview someone or not to give them a job, and the employer doesn’t know what goes into the score.


Now go one step further and ask: what if the information the scoring entity is using is wrong? For instance, let’s say they have concluded you are suffering from depression based on Google searches, sites you have visited, or information you have requested. That may have been for a friend or colleague, or maybe just research; they don’t have access to your actual health data because of HIPAA.
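Pasquale’s hypothetical can be sketched in a few lines of code. The toy example below is purely illustrative (the inference rule, feature names, and penalty are all invented): a crude, possibly wrong health flag is folded into an “employability” number, and neither the applicant nor the employer can see it happen.

```python
# Hypothetical "employability" score that folds in an inferred health flag.
# Every name and number here is invented for illustration.

def inferred_depression_flag(search_terms: list) -> bool:
    # Crude inference from search history; easily wrong (the searches
    # may have been for a friend, a colleague, or research).
    return any("depression" in term for term in search_terms)

def employability_score(years_experience: float, search_terms: list) -> float:
    score = 50.0 + 2.0 * years_experience
    if inferred_depression_flag(search_terms):
        score -= 15.0  # hidden penalty; the employer sees only the total
    return score

# Two otherwise identical candidates; one once searched "depression symptoms".
print(employability_score(5, []))                       # 60.0
print(employability_score(5, ["depression symptoms"]))  # 45.0
```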


In an op-ed piece he wrote for The New York Times, Professor Pasquale shows how the practices of data miners, data brokers, and resellers erode privacy and can harm individuals’ reputations.


Having eroded privacy for decades, shady, poorly regulated data miners, brokers and resellers have now taken creepy classification to a whole new level. They have created lists of victims of sexual assault, and lists of people with sexually transmitted diseases. Lists of people who have Alzheimer’s, dementia and AIDS. … There are lists of “impulse buyers.” Lists of suckers. … And lists of those deemed commercially undesirable because they live in or near trailer parks or nursing homes.


The market in personal information offers little incentive for accuracy; it matters little to list-buyers whether every entry is accurate — they need only a certain threshold percentage of “hits” to improve their targeting. But to individuals wrongly included on derogatory lists, the harm to their reputation is great.
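A back-of-the-envelope calculation makes the asymmetry vivid. The figures below are invented for illustration, not taken from the op-ed: a buyer can profit handsomely from a list that is wrong about thousands of people.

```python
# Back-of-the-envelope economics of an inaccurate marketing list.
# The numbers are invented to show the asymmetry, not taken from the op-ed.

list_size = 10_000
hit_rate = 0.60        # only 60% of entries actually belong on the list
value_per_hit = 2.00   # revenue per correctly targeted person
cost_per_entry = 0.10  # price the buyer pays per name

buyer_profit = list_size * (hit_rate * value_per_hit - cost_per_entry)
people_mislabeled = int(list_size * (1 - hit_rate))

print(f"Buyer profit: ${buyer_profit:,.2f}")            # $11,000.00 despite 40% errors
print(f"People wrongly listed: {people_mislabeled:,}")  # 4,000
```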




Frank Pasquale is Professor of Law at the University of Maryland Francis King Carey School of Law. His research agenda focuses on challenges posed to information law by rapidly changing technology, particularly in the health care, Internet, and finance industries.

 

