Artificial Intelligence and International Economic Law

Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz (eds.)

Artificial intelligence (AI) technologies are transforming economies, societies, and geopolitics. Enabled by the exponential increase of data that is collected, transmitted, and processed transnationally, these changes have important implications for international economic law (IEL). This edited volume examines the dynamic interplay between AI and IEL by addressing an array of critical new questions: How should AI be conceptualized, categorized, and analyzed for the purposes of IEL? How is AI affecting established concepts and rubrics of IEL? Is there a need to reconfigure IEL, and if so, how? Contributors also engage with other cross-cutting issues, including digital inequality, data protection, algorithms and ethics, the regulation of AI use cases (such as autonomous vehicles), and systemic shifts in e-commerce (digital trade) and industrial production (the fourth industrial revolution).

This book is available in hardcover from Cambridge University Press and freely available (open access) as an electronic edition on Cambridge Core.

A book review by Anupam Chander and Noelle Wurst has been published by the Journal of International Economic Law. They conclude: “This book is an important contribution to our understanding of the way that international economic law governs AI. It will certainly be a foundational text for future work.”

A further book review by Gabrielle Marceau and Federico Daniele has been published by the World Trade Review. They say: “… Artificial Intelligence and International Economic Law promises to become a seminal work on AI and international law and to open the path for future research and publishing on the matter.”

China’s Influence in Global Data Governance Explained: The Beijing Effect

In today’s global economy, digital data enable transnational communication, serve as a resource for commercial gain and economic development, and facilitate decision-making by private and public entities alike. As questions of control over digital data have become flashpoints in global governance, Chinese technology companies and the government of the People’s Republic of China (PRC) increasingly shape and influence these contests. The “Digital Silk Road”, through which the PRC promises “connectedness” in the digital domain alongside the physical transport capacity of the land- and sea-based components of the Belt and Road Initiative (BRI), manifests the PRC’s aspiration to facilitate digital development in host states. The prerequisite digital infrastructure investments are orchestrated by China’s giant technology companies, which are acquiring an increasingly prominent presence abroad.

In our article “The Beijing Effect: China’s ‘Digital Silk Road’ as Transnational Data Governance”, which is forthcoming with the New York University Journal of International Law and Politics, we analyze China’s growing influence in global data governance. The term “Beijing Effect” pays homage to Anu Bradford’s account of the EU’s global regulatory influence as the “Brussels Effect”, which is said to be particularly prominent in the digital domain, where the EU’s General Data Protection Regulation (GDPR) has been heralded as a global benchmark for multinational corporations and a template to be emulated by countries without comprehensive data protection laws. Even the PRC is sometimes following in the GDPR’s footsteps, as illustrated by the draft Personal Information Protection Law (PIPL), which, together with the Data Security Law, is set to complement China’s existing data governance framework revolving around cybersecurity. Like the GDPR, the PIPL is set to apply to personal information handling outside PRC borders when the purpose is to provide products or services to people within the territory of the PRC or when conducting analysis or assessment of their activities. In this way, both the GDPR and the PIPL apply extraterritorially in recognition of the Internet’s cross-jurisdictional reach. While such parallels must be recognized, their effects must not be overstated, nor should the two regimes be equated. We concur with Professor Bradford that Beijing will not be able to replicate the Brussels Effect, which occurs when globally operating corporations choose to amplify European law. However, we posit that a Beijing Effect of a different kind is already materializing and may gain further strength now that the COVID-19 pandemic has revealed the global economy’s reliance on digital infrastructures.

Our account of the Beijing Effect explains how the PRC is increasingly influencing data governance outside its borders, in particular in developing countries in need of digital infrastructures with only nascent data governance frameworks. Indeed, the most consequential vector may be the construction, operation, and maintenance of digital infrastructure by major Chinese technology companies. More than twenty years after Lawrence Lessig’s famous insight that “code is law,” the creators of the hardware and software that penetrate and regulate our increasingly digitally-mediated lives globally are increasingly based in Beijing, home to Baidu and ByteDance, Hangzhou, where Alibaba is based, or Shenzhen, where Huawei and Tencent are headquartered. As their digital infrastructures become ingrained in the social, economic, and legal structures of host states, they affect where and how data flows, and, by extension, how people communicate and transact with, and generally relate to, other individuals, the private sector, and public authorities.

At the same time, the PRC challenges the Silicon Valley Consensus, which heralded the unconditional desirability of the “free flow” of data, and instead promotes “data sovereignty” as a leitmotif for international and domestic data governance. This tension materializes in the “digital trade” and “electronic commerce” chapters of recent megaregional trade agreements: While members of the Trans-Pacific Partnership (TPP) can challenge the necessity of data transfer restrictions and data localization requirements under threat of dispute settlement proceedings, the Regional Comprehensive Economic Partnership (RCEP) agreement allows its members to self-assess which restrictions they deem necessary.

As some governments in BRI host states seem drawn towards the dual promise of social control and economic development as reflected in the PRC’s transition towards a digitally-advanced techno-authoritarian society, a critical reevaluation of extant digital development narratives and China’s self-representation as an alternative center for global governance is warranted. Our account of the Beijing Effect is one piece in this larger puzzle, which requires more theoretically informed and empirically grounded research into China’s unique approach to law and development.

This blog post was initially published by the Machine Lawyering Blog hosted by the Chinese University of Hong Kong (CUHK). It is reposted here with permission since the original post is no longer available.


Personalization of Smart-Devices: Between Users, Operators, and Prime-Operators

Your relationships with your devices are about to get complicated. Remote operability of smart-devices introduces new actors into the previously intimate relationship between the user and the device—the operators. The Internet of Things (IoT) also allows operators to personalize a specific smart-device for a specific user. This Article discusses the legal and social opportunities and challenges that remote operability and personalization of smart-devices bring forth.

Personalization of smart-devices combines the dynamic personalization of code with the influential personalization of physical space. It encourages operators to remotely modify the smart-device and influence specific users’ behaviors. This has significant implications for the creation and enforcement of law: personalization of smart-devices facilitates the application of law on spaces and activities that were previously unreachable, thereby also paving the way for the legalization of previously unregulated spaces and activities.

The Article also distinguishes between two kinds of smart-devices operators: ordinary and prime-operators. It identifies different kinds of ordinary operators and modes of constraints they can impose on users. It then normatively discusses the distribution of first-order and second-order legal powers between ordinary operators.

Finally, the Article introduces the prime-operators of smart-devices. Prime-operators have informational, computational, and economic advantages that uniquely enable them to influence millions of smart-devices and extract considerable social value from their operation. They also hold unique moderating powers—they govern how other operators and users operate the smart-devices, and thereby influence all interactions mediated by smart-devices. The Article discusses the nature and role of prime-operators and explores paths to regulate them.

Published in the DePaul Law Review, Vol. 70, Issue 3 (Spring 2021), pp. 497-549. This paper originated in the Global Tech Law: Selected Topics Seminar.

Transparency as a First Step to Regulating Data Brokers

Over the past few years, a number of legislative bodies have turned their focus to ‘data brokers.’ Data brokers hold huge amounts of data, both personally identifiable and otherwise, but attempts at data regulation have failed to bring them sufficiently out of the shadows. A few recent regulations, however, aim to increase transparency in this secretive industry. While transparency alone will not fully address concerns surrounding the data brokerage industry without additional actionable consumer rights, it is an important and necessary first step.

These bills chart a new course for legislatures interested in protecting consumer privacy. Their primary effect is to heighten transparency. The data brokerage industry lacks transparency because these companies have no direct relationships with the consumers whose data they buy, package, analyze, and resell, and consumers have no opportunity to opt out of, correct, or even learn of the data being sold. For companies regulated by the Fair Credit Reporting Act (FCRA), such as traditional credit bureaus, customers have the right to request their personal data and to request corrections if anything is wrong. But most collectors of data are not covered by the FCRA, and in those instances consumers often agree to click-wrap Terms of Service that include buried provisions allowing the collecting company to resell their data. Customers are left unaware that they have signed up to have their data sold, and with no assurance that the data is accurate.

Concerns with data brokers center on brokers’ relative opacity and the lack of public scrutiny over their activities. They control data from consumers with whom they have no relationship, and in turn, consumers do not know which data brokers may have their data or what they are doing with it. Standard Terms of Service contracts allow the original data collector to sell collected data to third parties, and allow those buyers to sell the data in turn, which creates a rapid cascade: consumers, by agreeing to the terms of service of one company, have allowed their personal data to proliferate to numerous companies of whose existence they may not even be aware. Proposed legislation would increase consumers’ access to information about how their data is being used, shining a light on the data brokerage industry and enabling consumers to limit the unfettered sharing of their data.

This paper was published by the NYU Journal of Legislation & Public Policy. Dillon took the first iteration of the Global Data Law course and worked subsequently as a Student Research Assistant in the Global Data Law project.

The Global “Last Mile” Solution: High-Altitude Broadband Infrastructure

This paper explains the historical reasons for communications infrastructure underdevelopment, takes into account the myriad ways governments, usually through national universal service mechanisms, have attempted to correct this underprovision, and posits why the opportunity to create global broadband infrastructure has now surfaced. In essence, this portion of the paper explains the last mile problem that innovative infrastructure projects purport to solve. It then describes the broadband infrastructure projects, the consequences of multi-jurisdictional regulatory complexities for bringing the projects to market, and the disruptive potential of the infrastructure to change the economics of broadband access and provision. Lastly, it considers whether the companies are indeed solving the last mile problem beyond mere provision. Accordingly, the potential impacts of Internet access are surveyed using Amartya Sen’s capability approach, which places the individual and his or her freedom at the center of development.

The paper originated in what was then the IILJ Colloquium: “International Law of Google” and is now the Guarini Colloquium: Regulating Global Digital Corporations. It was published in the Georgetown Law Technology Review, Vol. 4 (2019), pp. 47-123.

Safe Sharing Sites

Lisa M. Austin & David Lie

In this Article, Lisa Austin and David Lie argue that data sharing is an activity that sits at the crossroads of privacy concerns and the broader challenges of data governance surrounding access and use. Using the Sidewalk Toronto “smart city” proposal as a starting point for discussion, they outline these concerns, which include resistance to data monopolies, public control over data collected through the use of public infrastructure, public benefit from the generation of intellectual property, the desire to broadly share data for innovation in the public interest, social (rather than individual) surveillance and harms, and the demand that data use be held to standards of fairness, justice, and accountability. Data sharing is sometimes the practice that generates these concerns and sometimes part of the solution to them.

Their safe sharing site approach to data sharing focuses on resolving key risks associated with data sharing, including protecting the privacy and security of data subjects, but aims to do so in a manner that is independent of the various legal contexts of regulation and governance. Instead, they propose that safe sharing sites connect with these different contexts through a legal interface consisting of a registry that provides transparency in relation to key information that supports different forms of regulation. Safe sharing sites could also offer assurances and auditability regarding the data sharing, further supporting a range of regulatory interventions. The safe sharing site is therefore not an alternative to these interventions but an important tool that can enable effective regulation.

A central feature of a safe sharing site is that it offers an alternative to the strategy of de-identifying data and then releasing it, whether within an “open data” context or in a more controlled environment. In a safe sharing site, computations may be performed on the data in a secure and privacy-protective manner without releasing the raw data, and all data sharing is transparent and auditable. Transparency does not mean that all data sharing becomes a matter of “public” view, but rather that there is the ability to make these activities visible to organizations and regulators in appropriate circumstances while recognizing the potential confidentiality interests in data uses.

In this way, safe sharing sites facilitate data sharing in a manner that manages the complexities of sharing while reducing the risks and enabling a variety of forms of governance and regulation. As such, the safe sharing site offers a flexible and modular piece of legal-technical infrastructure for the new economy.

This paper was prepared for and presented at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. It was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 581-623.

The False Promise of Health Data Ownership

In recent years, there have been increasing calls by patient advocates, health law scholars, and would-be data intermediaries to recognize personal property interests in individual health information (IHI). While the propertization of IHI appeals to notions of individual autonomy, privacy, and distributive justice, the implementation of a workable property system for IHI presents significant challenges. This Article addresses the issues surrounding the propertization of IHI from a property law perspective. It first observes that IHI does not satisfy established judicial criteria for recognition as personal property, as IHI defies convenient definition, is difficult to possess exclusively, and lacks justifications for exclusive control. Second, it argues that if IHI property were structured along the lines of traditional common law property, as suggested by some propertization advocates, prohibitive costs could be imposed on socially valuable research and public health activity, and IHI itself could become mired in unanticipated administrative complexities. Third, it discusses potential limitations and exceptions on the scope, duration, and enforceability of IHI property, both borrowed from intellectual property law and created de novo for IHI.

Yet even with these limitations, inherent risks arise when a new form of property is created. When owners are given broad rights of control, subject only to enumerated exceptions that seek to mitigate the worst effects of that control, constitutional constraints on governmental takings make the subsequent refinement of those rights difficult if not impossible, especially when rights are distributed broadly across the entire population. Moreover, embedding a host of limitations and exceptions into a new property system simply to avoid the worst effects of propertization raises the question whether a property system is needed at all, particularly when contract, privacy, and anti-discrimination rules already exist to protect individual privacy and autonomy in this area. It may be that one of the principal results of propertizing IHI is enriching would-be data intermediaries with little net benefit to individuals or public health. This Article concludes by recommending that the propertization of IHI be rejected in favor of sensible governmental regulation of IHI research coupled with existing liability rules to compensate individuals for violations of their privacy and abusive conduct by data handlers.

Ideas contained in this paper were discussed during the roundtable on data ownership at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. The paper was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 624-661.

Contracting for Personal Data

Is contracting for the collection, use, and transfer of data like contracting for the sale of a horse or a car, or licensing a piece of software? Many are concerned that conventional principles of contract law are inadequate when some consumers do not know, or misperceive, the full consequences of their transactions. Such concerns have led to proposals for reform that deviate significantly from general rules of contract law. However, the merits of these proposals rest in part on testable empirical claims. We explore some of these claims using a hand-collected data set of privacy policies that dictate the terms of the collection, use, transfer, and security of personal data. We explore the extent to which those terms differ across markets before and after the adoption of the General Data Protection Regulation (GDPR). We find that compliance with the GDPR varies across markets in intuitive ways, indicating that firms take advantage of the flexibility offered by a contractual approach even when they must also comply with mandatory rules. We also compare terms offered to more and less sophisticated subjects to see whether firms may exploit information barriers by offering less favorable terms to more vulnerable subjects.

This paper was prepared for and presented at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. It was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 662-705.

Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation

In previous work, I wrote about how trade secrecy drives the plot of Roald Dahl’s novel Charlie and the Chocolate Factory, explaining how the Oompa-Loompas are the ideal solution to Willy Wonka’s competitive problems. Since publishing that piece, I have been struck by the proliferating Oompa-Loompas in contemporary life: computing machines filled with software and fed on data. These computers, software, and data might not look like Oompa-Loompas, but they function as Wonka’s tribe does: holding their secrets tightly and internally for the businesses that deploy them.

Computing machines were not always such effective secret-keeping Oompa-Loompas. As this Article describes, at least three recent shifts in the computing industry—cloud computing, the increasing primacy of data and machine learning, and automation—have turned these machines into the new Oompa-Loompas. While new technologies enabled this shift, trade secret law has played an important role here as well. Like other intellectual property rights, trade secret law has a body of built-in limitations to ensure that the incentives offered by the law’s protection do not become so great that they harm follow-on innovation—new innovation that builds on existing innovation—and competition. This Article argues that, in light of the technological shifts in computing, the incentives that trade secret law currently provides to develop these contemporary Oompa-Loompas are excessive in relation to their worrisome effects on follow-on innovation and competition by others. These technological shifts allow businesses to circumvent trade secret law’s central limitations, thereby overfortifying trade secrecy protection. The Article then addresses how trade secret law might be changed—by removing or diminishing its protection—to restore balance for the good of both competition and innovation.

Ideas contained in this paper were discussed during the roundtable on data ownership at the NYU Law Review Symposium 2018 on “Data Law in a Global Digital Economy”. The paper was published by the NYU Law Review in Volume 94, Number 4 (October 2019), pp. 706-736.

Digital Megaregulation Uncontested? TPP’s Model for the Global Digital Economy

The United States championed the creation of new rules for the digital economy in TPP. Analyzing this effort as “digital megaregulation” foregrounds aspects that the conventional “digital trade” framing tends to conceal. On both accounts, TPP’s most consequential rules for the digital economy relate to questions of data governance. In this regard, TPP reflects the Silicon Valley Consensus of uninhibited data flows and permissive privacy regulation. The paper argues that the CPTPP parties endorsed the Silicon Valley Consensus due to a lack of alternatives and persistent misperceptions about the realities of the global digital economy, partly attributable to the dominant digital trade framing. It suggests a new approach for the inclusion of data governance provisions in future international trade agreements that offers more flexibility for innovative digital industrial policies and experimental data regulation.

This paper was published in Megaregulation Contested: Global Economic Ordering After TPP (edited by Benedict Kingsbury, David M. Malone, Paul Mertenskötter, Richard B. Stewart, Thomas Streinz, and Atsushi Sunami, Oxford University Press 2019), chapter 14 (pp. 312-342).