Milling the F/LOSS: Export Controls, Free and Open Source Software, and the Regulatory Future of the Internet

This Note investigates U.S. export controls as they relate to free and open source software (FOSS), arguing that the U.S. government has responded to the challenges of modern software by attempting to force an ill-fitting framework to accommodate FOSS. A contemporary reexamination of the state of export controls over FOSS can help map how national security interests have responded to the challenges of the internet. In particular, the Note offers a detailed account of the ways in which federal export controls have excluded FOSS from their regulatory purview through a powerful public availability exemption. In doing so, regulators have essentially labeled publicly available software as unthreatening to national security, regardless of the potential uses of any particular code.

This paper has been published by the NYU Journal of Legislation & Public Policy, Vol. 23, Issue 3 (2021). It originated in the Guarini Colloquium: Regulating Global Digital Corporations and also contributed to the Open Source Software as Digital Infrastructure project.

Indicators 2.0: From Ranks and Reports to Dashboards and Databanks

The World Bank Headquarters Atrium as depicted by Jaakko H., licensed CC-BY-SA.

In September 2021, the World Bank Group’s management announced its decision to discontinue one of its most notable and controversial products – the Doing Business Report. Michael Riegner welcomed the death of indicators as a technology of governance, noting that we are now in the era of “governance by data”. The proliferation of digital data, increased reliance on sensing technologies, the creation of digital products by international organizations, and the funding of large-scale digital infrastructure projects (e.g., e-government, e-health) by multilateral development banks, including the World Bank, are ushering in new forms of global governance. Riegner suggests that this turn to digital technologies and computational capacity for big data analytics is one of the reasons for indicators’ demise:

“why use aggregated indicators based on expert surveys when you can digitally collect and process actual raw data, disaggregated all the way down to the smallest unit of relevance?”

If by this question Riegner intimates that indicators – understood as “named collection of rank-ordered data that purports to represent the past or projected performance of different units…[wherein] data are generated through a process that simplifies raw data about a complex social phenomenon” (see here) – can be written off as a technology of governance, his dismissal may be too swift. First, the kind of “raw” data that would be required to make accurate assessments may not be readily available. Moreover, if commensurability is to be achieved, one would require access to roughly similar types of data for each unit of analysis – no small feat given the unequal availability and distribution of data across countries, within countries, and between public and private actors. Second, even if global governance actors increasingly embrace differentiated governance that is tailored to specific actors or entities, there will be continued demand for metrics and representations that simplify and translate complex data into legible and comparable information. Third, as Riegner himself acknowledges, other prominent indicators like PISA, the Human Development Index, the Rule of Law Index, and Freedom Scores continue to exist. Whether their influence is declining, as Riegner suggests, remains to be seen.

The World Bank itself is showing no sign of giving up on the production of indicators. At the same time, how indicators are disseminated has changed: the World Bank has turned to dashboards as a means of presenting and contextualizing indicators, and has “datified” indicators, making them accessible as data through the DataBank. The Bank has also begun experimenting with new methodologies, embracing open-source “big data” to construct indicators.

These changes – dashboardization, datafication, and the turn to “big data” as a source for indicators – alter not only how indicators are produced and used (and by whom), but also how they govern, shifting and re-constituting the sites of expertise and power. The cancellation of the Doing Business Report thus might not be evidence of the demise of indicators but a consequence of a shift (begun several years earlier) towards a different process of indicator construction and dissemination that, in turn, implicates different means by which governance effects are achieved.

This blog post was published by the Völkerrechtsblog as a response to Michael Riegner’s “The End of Indicators”. It draws on ideas developed in the Institute for International Law & Justice projects on indicators as a technology of global governance and on infrastructures-as-regulation.

The Evolution of European Data Law

This new chapter for the 3rd edition of Paul Craig and Gráinne de Búrca’s Evolution of EU Law conceptualizes European data law as an area of EU law that gravitates around but transcends data protection law. It traces the origins of the EU’s data protection law to national and international antecedents, stresses the significance of recognizing data protection and privacy in the EU’s Charter of Fundamental Rights, and explores the gradual institutionalization of data protection law through exceptionally independent data protection authorities, firmly embedded data protection officers, and emergent structures for supranational coordination. It then contrasts the EU law on personal data with the EU law on non-personal data and scrutinizes two other domains of European data law that intersect in complicated ways with data protection law: data ownership laws and access to data laws. European data protection law has been globally diffused through extraterritorial application, conditionalities for transfers of personal data, international agreements, and the “Brussels Effect”, but whether the EU will retain its role as a global data regulator is far from certain. As the European Commission executes its data strategy, it needs to move beyond simplistic understandings of data as a resource, recognize the salience of data infrastructures, and confront the reality that data is more than a regulatory object.

The chapter draws on ideas from Guarini Global Law & Tech’s Global Data Law project.

Artificial Intelligence and International Economic Law

Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.)

Artificial intelligence (AI) technologies are transforming economies, societies, and geopolitics. Enabled by the exponential increase of data that is collected, transmitted, and processed transnationally, these changes have important implications for international economic law (IEL). This edited volume examines the dynamic interplay between AI and IEL by addressing an array of critical new questions, including: How to conceptualize, categorize, and analyze AI for purposes of IEL? How is AI affecting established concepts and rubrics of IEL? Is there a need to reconfigure IEL, and if so, how? Contributors also respond to other cross-cutting issues, including digital inequality, data protection, algorithms and ethics, the regulation of AI use cases (autonomous vehicles), and systemic shifts in e-commerce (digital trade) and industrial production (fourth industrial revolution).

This book is available as a physical object (hardcover) for purchase from Cambridge University Press and freely available (open access) as an electronic copy on Cambridge Core.

A book review by Anupam Chander and Noelle Wurst has been published by the Journal of International Economic Law. They conclude: “This book is an important contribution to our understanding of the way that international economic law governs AI. It will certainly be a foundational text for future work.”

A further book review by Gabrielle Marceau and Federico Daniele has been published by the World Trade Review. They say: “… Artificial Intelligence and International Economic Law promises to become a seminal work on AI and international law and to open the path for future research and publishing on the matter.”

China’s Influence in Global Data Governance Explained: The Beijing Effect

In today’s global economy, digital data enable transnational communication, serve as a resource for commercial gain and economic development, and facilitate decision-making by private and public entities alike. As questions of control over digital data have become flashpoints in global governance, Chinese technology companies and the government of the People’s Republic of China (PRC) increasingly shape and influence these contests. The “Digital Silk Road”, through which the PRC promises “connectedness” in the digital domain alongside the physical transport capacity of the land- and sea-based planks of the Belt and Road Initiative (BRI), manifests the PRC’s aspirations to facilitate digital development in host states. The prerequisite digital infrastructure investments are orchestrated by China’s gigantic technology companies, which are acquiring an increasingly prominent presence abroad.

In our article “The Beijing Effect: China’s ‘Digital Silk Road’ as Transnational Data Governance”, which is forthcoming with the New York University Journal of International Law and Politics, we analyze China’s growing influence in global data governance. The term “Beijing Effect” pays homage to Anu Bradford’s account of the EU’s global regulatory influence as the “Brussels Effect”, which is said to be particularly prominent in the digital domain, where the EU’s General Data Protection Regulation (GDPR) has been heralded as a global benchmark for multinational corporations and a template to be emulated by countries without comprehensive data protection laws. Even the PRC sometimes follows in the GDPR’s footsteps, as illustrated by the draft Personal Information Protection Law (PIPL), which – together with the Data Security Law – is set to complement China’s existing data governance framework that revolves around cybersecurity. Like the GDPR, the PIPL is set to apply to personal information handling outside PRC borders when the purpose is to provide products or services to people within the territory of the PRC or to analyze or assess their activities. In this way, both the GDPR and the PIPL apply extraterritorially in recognition of the Internet’s cross-jurisdictional reach. While such parallels must be recognized, their effects must not be overstated or equated. We concur with Professor Bradford that Beijing will not be able to replicate the Brussels Effect, which occurs when globally operating corporations choose to amplify European law. However, we posit that a Beijing Effect of a different kind is already materializing and might gain further strength now that the COVID-19 pandemic has revealed the global economy’s reliance on digital infrastructures.

Our account of the Beijing Effect explains how the PRC increasingly influences data governance outside its borders, in particular in developing countries that need digital infrastructures but have only nascent data governance frameworks. Indeed, the most consequential vector may be the construction, operation, and maintenance of digital infrastructure by major Chinese technology companies. More than twenty years after Lawrence Lessig’s famous insight that “code is law,” the creators of the hardware and software that penetrate and regulate our increasingly digitally mediated lives are ever more often based in Beijing, home to Baidu and ByteDance, in Hangzhou, where Alibaba is based, or in Shenzhen, where Huawei and Tencent are headquartered. As their digital infrastructures become ingrained in the social, economic, and legal structures of host states, they affect where and how data flows, and, by extension, how people communicate and transact with, and generally relate to, other individuals, the private sector, and public authorities.

At the same time, the PRC challenges the Silicon Valley Consensus, which heralded the unconditional desirability of the “free flow” of data, and instead promotes “data sovereignty” as a leitmotif for international and domestic data governance. This tension materializes in the “digital trade” and “electronic commerce” chapters of recent megaregional trade agreements: while members of the Trans-Pacific Partnership (TPP) can challenge the necessity of data transfer restrictions and data localization requirements under threat of dispute settlement proceedings, the Regional Comprehensive Economic Partnership (RCEP) agreement allows its members to self-assess which restrictions they deem necessary.

As some governments in BRI host states seem drawn towards the dual promise of social control and economic development as reflected in the PRC’s transition towards a digitally-advanced techno-authoritarian society, a critical reevaluation of extant digital development narratives and China’s self-representation as an alternative center for global governance is warranted. Our account of the Beijing Effect is one piece in this larger puzzle, which requires more theoretically informed and empirically grounded research into China’s unique approach to law and development.

This blog post was initially published by the Machine Lawyering Blog hosted by the Chinese University of Hong Kong (CUHK). It is reposted here with permission since the original post is no longer available.


Personalization of Smart-Devices: Between Users, Operators, and Prime-Operators

Your relationships with your devices are about to get complicated. Remote operability of smart-devices introduces new actors into the previously intimate relationship between the user and the device—the operators. The Internet of Things (IoT) also allows operators to personalize a specific smart-device for a specific user. This Article discusses the legal and social opportunities and challenges that remote operability and personalization of smart-devices bring forth.

Personalization of smart-devices combines the dynamic personalization of code with the influential personalization of physical space. It encourages operators to remotely modify the smart-device and influence specific users’ behaviors. This has significant implications for the creation and enforcement of law: personalization of smart-devices facilitates the application of law to spaces and activities that were previously unreachable, thereby also paving the way for the legalization of previously unregulated spaces and activities.

The Article also distinguishes between two kinds of smart-devices operators: ordinary and prime-operators. It identifies different kinds of ordinary operators and modes of constraints they can impose on users. It then normatively discusses the distribution of first-order and second-order legal powers between ordinary operators.

Finally, the Article introduces the prime-operators of smart-devices. Prime-operators have informational, computational, and economic advantages that uniquely enable them to influence millions of smart-devices and extract considerable social value from their operation. They also hold unique moderating powers—they govern how other operators and users operate the smart-devices, and thereby influence all interactions mediated by smart-devices. The Article discusses the nature and role of prime-operators and explores paths to regulate them.

Published in the DePaul Law Review, Vol. 70, Issue 3 (Spring 2021), pp. 497-549. This paper originated in the Global Tech Law: Selected Topics Seminar.

Transparency as a First Step to Regulating Data Brokers

Over the past few years, a number of legislative bodies have turned their focus to ‘data brokers.’ Data brokers hold huge amounts of data, both personally identifiable and otherwise, but attempts at data regulation have failed to bring them sufficiently out of the shadows. A few recent regulations, however, aim to increase transparency in this secretive industry. While transparency alone will not fully address concerns surrounding the data brokerage industry without additional actionable consumer rights, it is an important and necessary first step.

These bills chart a new course for legislatures interested in protecting consumer privacy. Their primary effect is to heighten transparency. The data brokerage industry lacks transparency because these companies have no direct relationship with the consumers whose data they buy, package, analyze, and resell, and there is no opportunity for consumers to opt out, correct, or even learn of the data being sold. For companies regulated by the Fair Credit Reporting Act (FCRA), such as traditional credit bureaus, customers have the right to request their personal data and to request corrections if anything is wrong. But most collectors of data are not covered by the FCRA, and in those instances consumers often agree to click-wrapped Terms of Service that include buried provisions allowing the collecting company to resell their data. Customers are left unaware that they have signed up to have their data sold, and with no assurances that the data is accurate.

Concerns with data brokers center on their relative opacity and the lack of public scrutiny over their activities. They control data about consumers with whom they have no relationship, and in turn, consumers do not know which data brokers may have their data or what they are doing with it. Standard Terms of Service contracts allow the original data collector to sell collected data to third parties and allow those buyers to sell the data in turn, creating a rapid cascade in which consumers, by agreeing to the terms of service of one company, have allowed their personal data to proliferate to numerous companies of whose existence they may not even be aware. Proposed legislation would increase consumers’ access to information about how their data is being used, shining a light on the data brokerage industry and enabling consumers to limit the unfettered sharing of their data.

This paper was published by the NYU Journal of Legislation & Public Policy. Dillon took the first iteration of the Global Data Law course and worked subsequently as a Student Research Assistant in the Global Data Law project.

The Global “Last Mile” Solution: High-Altitude Broadband Infrastructure

This paper explains the historical reasons for communications infrastructure underdevelopment, taking into account the myriad ways governments, usually through national universal service mechanisms, have attempted to correct the underprovision, and posits why the opportunity to create global broadband infrastructure has now surfaced. In essence, this portion of the paper explains the last mile problem that innovative infrastructure projects purport to solve. It then describes the broadband infrastructure projects, the consequences of multi-jurisdictional regulatory complexities for bringing the projects to market, and the disruptive potential of the infrastructure to change the economics of broadband access and provision. Lastly, it considers whether the companies are indeed solving the last mile problem beyond mere provision. Accordingly, the potential impacts of Internet access are surveyed using Amartya Sen’s capability approach, which seeks to place the individual and his or her freedom at the center of development.

The paper originated in what was then the IILJ Colloquium “International Law of Google” (now the Guarini Colloquium: Regulating Global Digital Corporations). It was published in the Georgetown Law Technology Review, Vol. 4 (2019), pp. 47-123.

Safe Sharing Sites

Lisa M. Austin & David Lie

In this Article, Lisa Austin and David Lie argue that data sharing is an activity that sits at the crossroads of privacy concerns and the broader challenges of data governance surrounding access and use. Using the Sidewalk Toronto “smart city” proposal as a starting point for discussion, they outline these concerns to include resistance to data monopolies, public control over data collected through the use of public infrastructure, public benefit from the generation of intellectual property, the desire to broadly share data for innovation in the public interest, social—rather than individual—surveillance and harms, and the demand that data use be held to standards of fairness, justice, and accountability. Data sharing is sometimes the practice that generates these concerns and sometimes the practice that is involved in the solution to these concerns.

Their safe sharing site approach to data sharing focuses on resolving key risks associated with data sharing, including protecting the privacy and security of data subjects, but aims to do so in a manner that is independent of the various legal contexts of regulation and governance. Instead, they propose that safe sharing sites connect with these different contexts through a legal interface consisting of a registry that provides transparency in relation to key information that supports different forms of regulation. Safe sharing sites could also offer assurances and auditability regarding the data sharing, further supporting a range of regulatory interventions. The safe sharing site is therefore not an alternative to these interventions but an important tool that can enable effective regulation.

A central feature of a safe sharing site is that it offers an alternative to the strategy of de-identifying data and then releasing it, whether within an “open data” context or in a more controlled environment. In a safe sharing site, computations may be performed on the data in a secure and privacy-protective manner without releasing the raw data, and all data sharing is transparent and auditable. Transparency does not mean that all data sharing becomes a matter of “public” view, but rather that there is the ability to make these activities visible to organizations and regulators in appropriate circumstances while recognizing the potential confidentiality interests in data uses.

In this way, safe sharing sites facilitate data sharing in a manner that manages the complexities of sharing while reducing the risks and enabling a variety of forms of governance and regulation. As such, the safe sharing site offers a flexible and modular piece of legal-technical infrastructure for the new economy.

This paper was prepared for and presented at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. It was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 581-623.

The False Promise of Health Data Ownership

In recent years there have been increasing calls by patient advocates, health law scholars, and would-be data intermediaries to recognize personal property interests in individual health information (IHI). While the propertization of IHI appeals to notions of individual autonomy, privacy, and distributive justice, implementing a workable property system for IHI presents significant challenges. This Article addresses the issues surrounding the propertization of IHI from a property law perspective. It first observes that IHI does not satisfy recognized judicial criteria for personal property, as IHI defies convenient definition, is difficult to possess exclusively, and lacks justifications for exclusive control. Second, it argues that if IHI property were structured along the lines of traditional common law property, as suggested by some propertization advocates, prohibitive costs could be imposed on socially valuable research and public health activity, and IHI itself could become mired in unanticipated administrative complexities. Third, it discusses potential limitations and exceptions on the scope, duration, and enforceability of IHI property, both borrowed from intellectual property law and created de novo for IHI.

Yet even with these limitations, inherent risks arise when a new form of property is created. When owners are given broad rights of control, subject only to enumerated exceptions that seek to mitigate the worst effects of that control, constitutional constraints on governmental takings make the subsequent refinement of those rights difficult if not impossible, especially when rights are distributed broadly across the entire population. Moreover, embedding a host of limitations and exceptions into a new property system simply to avoid the worst effects of propertization raises the question whether a property system is needed at all, particularly when contract, privacy, and anti-discrimination rules already exist to protect individual privacy and autonomy in this area. It may be that one of the principal results of propertizing IHI would be enriching would-be data intermediaries with little net benefit to individuals or public health. This Article concludes by recommending that the propertization of IHI be rejected in favor of sensible governmental regulation of IHI research coupled with existing liability rules to compensate individuals for violations of their privacy and abusive conduct by data handlers.

Ideas contained in this paper were discussed during the roundtable on data ownership at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. The paper was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 624-661.