Frank Pasquale Defends Human Expertise in the Age of AI

By TAP Staff Blogger

Posted on December 17, 2020



In New Laws of Robotics: Defending Human Expertise in the Age of AI, Brooklyn Law professor Frank Pasquale explores the myriad ways that technological advances affect how we work, what media we consume, and how law is made and enforced, and he addresses the problems that arise when robots advance into hospitals, schools, and the military.


New Laws of Robotics tells the story of educated professionals – doctors, nurses, teachers, home health aides, journalists, and others – who collaborate with roboticists and computer scientists to develop the kind of technological advances that could bring better health care, education, and other services to all of us. Rather than using data to ‘teach’ automated systems to do professionals’ work and thus replace their jobs, a focus on augmentation could enhance services and let professionals concentrate on the meaningful work best done by humans. Professor Pasquale emphasizes that public policy, grounded in new laws of robotics, can help guide the development of artificial intelligence (AI) toward a sustainable balance between machines and humans.


Referencing sci-fi author Isaac Asimov’s three laws of robotics, introduced in the 1942 short story “Runaround,” Professor Pasquale developed four new laws of robotics to navigate AI’s growing role in society.


Professor Pasquale’s Laws of Robotics

  • Employ robotics and AI in ways that complement professionals, instead of replacing them.
  • Prevent robots from mimicking human qualities like empathy or emotion.
  • Avoid “arms races” between robotics researchers.
  • Keep a record of who designed or controls a robot, so that we know whom to contact, and whom to hold responsible, when something goes wrong.

Below are excerpts from two articles by Professor Pasquale that draw on New Laws of Robotics. The first, “‘Machines set loose to slaughter’: the dangerous rise of military AI,” looks at the military’s use of artificial intelligence and robotics amid a mindset of domination abroad and mutual deterrence. The second, “When medical robots fail: Malpractice principles for an era of automation,” explores the advance of robotics and automation in health care and proposes answers to the question: who should be responsible when these devices fail?


AI in the Military

In “‘Machines set loose to slaughter’: the dangerous rise of military AI,” Professor Pasquale examines the prevalence of autonomous machines in modern warfare. Because many of these machines are capable of deadly force, their use raises serious ethical concerns.


Below are a few excerpts from “‘Machines set loose to slaughter’: the dangerous rise of military AI.”


Can the Experiences of Soldiers Be Exported into Datasets?


Most soldiers would testify that the everyday experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardising accounts of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked best where there is a massive dataset with clearly understood examples of good and bad, right and wrong. For example, credit card companies have improved fraud detection mechanisms with constant analyses of hundreds of millions of transactions, where false negatives and false positives are easily labelled with nearly 100% accuracy. Would it be possible to “datafy” the experiences of soldiers in Iraq, deciding whether to fire at ambiguous enemies? Even if it were, how relevant would such a dataset be for occupations of, say, Sudan or Yemen (two of the many nations with some kind of US military presence)?


A Machine’s Power of Discrimination


International humanitarian law, which governs armed conflict, poses even more challenges to developers of autonomous weapons. A key ethical principle of warfare has been one of discrimination: requiring attackers to distinguish between combatants and civilians. But guerrilla or insurgent warfare has become increasingly common in recent decades, and combatants in such situations rarely wear uniforms, making it harder to distinguish them from civilians. Given the difficulties human soldiers face in this regard, it’s easy to see the even greater risk posed by robotic weapons systems.


Proponents of such weapons insist that the machines’ powers of discrimination are only improving. Even if this is so, it is a massive leap in logic to assume that commanders will use these technological advances to develop just principles of discrimination in the din and confusion of war. As the French thinker Grégoire Chamayou has written, the category of “combatant” (a legitimate target) has already tended to “be diluted in such a way as to extend to any form of membership of, collaboration with, or presumed sympathy for some militant organization”.


Is It Possible to Impart Ethics to Military Robots?


At present, the military-industrial complex is speeding us toward the development of drone swarms that operate independently of humans, ostensibly because only machines will be fast enough to anticipate the enemy’s counter-strategies. This is a self-fulfilling prophecy, tending to spur an enemy’s development of the very technology that supposedly justifies militarisation of algorithms. To break out of this self-destructive loop, we need to question the entire reformist discourse of imparting ethics to military robots. Rather than marginal improvements of a path to competition in war-fighting ability, we need a different path – to cooperation and peace, however fragile and difficult its achievement may be.


Read the full article, “‘Machines set loose to slaughter’: the dangerous rise of military AI,” written by Frank Pasquale (The Guardian, October 15, 2020).


AI in Medicine

Examining technological change in medicine, Professor Pasquale emphasizes the importance of promoting accountability for AI vendors and supporting the domain expertise of physicians. In “When medical robots fail: Malpractice principles for an era of automation,” Professor Pasquale says, “distinguishing between technology that substitutes for human expertise and that which complements professionals is fundamental to both labor policy and the political economy of automation.”


Below are a few excerpts from “When medical robots fail: Malpractice principles for an era of automation.”


Technological Changes in Medicine


In recent years, researchers have developed medical robots and chatbots to monitor vulnerable elders and assist with some basic tasks. Artificial intelligence-driven therapy apps aid some mentally ill individuals; drug ordering systems help doctors avoid dangerous interactions between different prescriptions; and assistive devices make surgery more precise and safer—at least when the technology works as intended. And these are just a few examples of technological change in medicine.


The gradual embrace of AI in medicine also raises a critical liability question for the medical profession: Who should be responsible when these devices fail? Getting this liability question right will be critically important not only for patient rights, but also to provide proper incentives for the political economy of innovation and the medical labor market.


Holding the AI Manufacturer Liable


Under a strict liability standard, in the case of an adverse event, the manufacturer, distributor, and retailer of the product may be liable, even if they were not negligent. In other words, even a system that was well-designed and implemented may still bear responsibility for error. Think, for instance, of a manufacturing process which, while well-designed, still via some inadvertent mistake or happenstance ended up producing a defective product that harmed someone. In such a case, strict product liability can result in a judgment against the manufacturer, even without a finding of negligence. This may seem like an unduly harsh standard. However, the doctrine incentivizes ongoing improvements in technology, which could remain unduly error-prone and based on outdated or unrepresentative data sets if tort law sets unduly high standards for recovery.


A strict liability standard would also function to deter the premature automation of fields where human expertise is still sorely needed. In the medical field, there has long been a standard of competent professional supervision and monitoring of the deployment of advanced technology. When substitutive automation short-circuits that review and a preventable adverse event occurs, compensation is due. The amount of compensation may be limited by state legislatures to avoid over-deterring innovation. But compensation is still due because a “person in the loop” might have avoided the harm.


Ensuring the Medical Workforce Is a Lasting Partner in Developing Automation


The sequence and shape of automation in health care cannot simply be dictated from on high by engineers. Rather, domain experts (including physicians, nurses, patients’ groups, and other stakeholders) need to be consulted, and they need to buy into a larger vision of progress in their field. Perhaps more of medicine should indeed be automated, but we should ensure that the medical workforce is a lasting partner in that process. In most cases, they should be helped, not replaced, by machines—both for present aims (such as overriding errant machines), and for the future (to develop new and better ones).


As courts develop such evolving standards of care, they will also face predictable efforts by owners of AI to deflect liability. Policymakers are struggling to keep pace with the speed of technological development. Legislators are fearful of inhibiting growth and innovation in the space. However, there is increasing public demand for policy interventions and protections regarding critical technology. These demands do not necessarily impede economic or technological advance. Some innovation may never get traction if customers cannot be assured that someone will be held accountable if an AI or robot catastrophically fails. Developing appropriate standards of responsibility along the lines prescribed above should reassure patients while advancing the quality of medical AI and robotics.


Read the full article, “When medical robots fail: Malpractice principles for an era of automation,” written by Frank Pasquale (Brookings Tech Stream, November 9, 2020).


Read More:

Frank Pasquale is Professor of Law at Brooklyn Law School. He is a noted expert on the law of artificial intelligence (AI), algorithms, and machine learning. His work focuses on how information is used across a number of areas, including health law, commerce, and technology. His wide-ranging expertise encompasses the rapidity of technological advances, the unintended consequences of the interaction of privacy, intellectual property, and antitrust law, and the power of private-sector intermediaries to influence health care and education finance policy.

