John Palfrey and Urs Gasser Discuss the Promise and Perils of Interconnected Systems

By TAP Staff Blogger

Posted on July 11, 2012


John Palfrey and Urs Gasser, both directors of Harvard’s Berkman Center for Internet & Society, recently published their latest book, “Interop: The Promise and Perils of Highly Interconnected Systems.” TAP had the opportunity to talk with the authors about interoperability and a few key points their newest collaboration presents.
 

Would you talk about the importance of ‘interoperability’ for those of us who rely on our computers and smartphones to work, but have no desire to know how they work?
 
As consumers, we usually recognize the importance of interoperability in our everyday lives when things don’t work together as we hope. Maybe the slide presentation you created on one computer doesn’t display properly on another. Or maybe a DVD sent as a gift from a friend in Europe doesn’t play on your DVD player here in the US. Your digital music or books purchased from one company won’t work on your newest gadget. Or perhaps the emails in your old email program can’t be exported to a new one without big hassles. Such instances in which we lack interoperability illustrate how the right degree of interconnectedness benefits us as consumers, at the very least in terms of ease of use and consumer choice. Whether we are aware of it or not, we all depend heavily on complex systems working together.
 
 
In a very big picture way, what do you see as the biggest perils of highly interconnected systems? And on the flip side, what are the greatest potentials?
 
Poorly designed interoperability comes with high costs for users. Privacy and security concerns are the most pressing challenges when information flows across systems and components, but also between organizations. Higher degrees of interop might increase the vulnerability of systems by giving more people access to sensitive information – access that can also be misused, as in the case of hacker attacks, to name just one example. From a larger societal perspective, we need to make sure that interop is designed in ways that don’t lead to too much uniformity. Interop is actually about diversity. It is important that we preserve high degrees of diversity in complex systems – that’s true in biological systems, but also in the digital environment. In most cases, however, these challenges are outweighed by the benefits of high levels of interop. Interoperability is generally a good thing, as it increases user autonomy and improves the user experience (put another way: our ease of use). Moreover, our research demonstrates that higher levels of interop usually increase competition, foster innovation, and result in economic growth.
 
 
Can you provide an example from history of how implementing an effective interoperable system greatly improved safety or efficiencies or quality of life in some way?
 
Air traffic control systems are a great illustration of how more interop has massively improved safety, efficiency, and quality of service. It took a long time and much effort to go from the first flight of the Wright brothers to the global system of air traffic we know today, with hundreds of thousands of flight operations daily. This case study is also particularly important because it shows that interop has to occur not only at the technical layer – for instance, by standardizing radio frequencies or implementing interoperable radar technology – to improve a system. Organizational and human as well as legal interoperability is often needed to increase the safety and efficiency of a complex system. In the air traffic example, the development of a standard language used among pilots and air traffic controllers was, for instance, a key element in making the system both safer and more efficient. That said, the air traffic control story also shows that it can be very hard to maintain the optimum level of interop over time. Due to its complexity and legacy problems, it has been very hard to upgrade the air traffic control system with advanced communication and information technologies such as GPS.
 
 
Can you envision a system that would thrive without interconnecting to other systems?
 
Especially in markets with strong network effects, we can observe tech companies successfully deploying strategies based on non-interoperability. Consider situations in which a firm has a strong user base or another initial advantage over its competitors. Apple is a good case in point. Starting with the iPod and the iTunes Store, it built a relatively closed ecosystem of devices and services that, by and large, interoperate well within Apple’s system, but not so well with components made by its competitors. Economic theory suggests that firms may have strong incentives to innovate in such cases, where they see the possibility of competing for the entire market – a “winner takes all” situation. However, in the digital space we often observe a shift towards increased interoperability over time, even if a company started with a strategy largely based on non-interoperability.
 
 
What about innovation? Wouldn’t establishing standards to facilitate interoperability stifle innovation and perhaps even prevent the next great technological advancement from getting off the ground?
 
Here as elsewhere, the devil is in the details. First, it’s important to understand that standards are only one instrument for working towards higher levels of interoperability. We need to evaluate which approach is best suited to the interop problem we seek to solve. Second, different types of standards exist – including government-mandated standards and open standards. Each type comes with its own advantages and disadvantages regarding potential lock-in – a “freezing” of the state of technology at the time the standard was adopted. One of the big challenges is to design standard-setting processes in such a way that standards can evolve over time and adjust periodically to changing technologies. Open standards are particularly promising in this respect.
 
 
Has your research for this book led you to see hopeful possibilities for upholding security – both personal privacy and national cybersecurity? Or is the glass now half empty and you have a healthy dose of fear for security in our interconnected future?
 
The research for this book has highlighted how important it is to aim for optimum, rather than maximum, levels of interoperability. The search for optimal interoperability includes a careful and pro-active analysis of the privacy and security challenges in a given environment. Such risks might be very different from case to case – compare, for instance, the case of e-health records with the online distribution of music. Once the threat model is determined for each use case, one has to think hard about the design of the system – and the best instruments that can be used to work towards interoperability. In short, we see privacy and security concerns not as an argument against interoperable systems per se – but as a serious design challenge.
 
 
In your book, you say that “society needs interoperability, but systems must be designed to harness its benefits while minimizing its costs.” Do you have any proposals for policy makers that walk this fine line to consider?
 
Yes. In an earlier paper on this topic, we developed a process-oriented framework with six steps that can serve as a starting point for policy makers. We emphasize a thorough case-by-case analysis of the interoperability problem at stake and the facts of the situation, including the timeline; the maturity of the relevant technologies and markets; and user practices and norms. Another important step in the framework is the evaluation of the different approaches to interoperability and the “match-making” between these approaches and the interop problem one seeks to address. In many areas – take health care or smart energy grids as examples – the private sector and the state have to work together to increase interop in such complex systems. In our book, we present a broad framework that can work in a wide range of circumstances and then offer suggestions for how to get there in a few specific scenarios.
 
 
Did anything surprise you during your research for this book?
 
We have been surprised to learn how much we all depend, in our personal and professional lives, on technical systems, humans, organizations, and institutions working together – all without having a theory about, or much insight into, the “mechanics” of the underlying links and connections. We face very hard choices when designing systems with optimum levels of interoperability.
 
 
How did the two of you come together to write this book?
 
We have been long-time friends and collaborators at the Berkman Center for Internet & Society at Harvard University, where we have studied interoperability and innovation together for some years. After the great experience of “Born Digital,” our first co-authored book, we decided to turn our interop research into a book as well, as we grew more and more excited about this important and multi-faceted topic. We both think that our work is stronger by virtue of our working together – in fact, a core interop theme in itself.
