The former General Counsel of the US Department of Commerce discusses challenges in privacy, artificial intelligence and information technology law.
Privacy law is a subject that has interested me for a long time. Even as a college student – although I was the paragon of a classic liberal arts major who avoided hard sciences – my best paper was on comparative law issues between French and American rights to privacy. However, it was not until I began working as a lawyer that I started engaging with cybersecurity and data protection as anything other than abstract concepts.
In my early career I was a communications lawyer and a litigator in the cable television and telecommunications industries. These are sectors that have had privacy protections for customer data for some time – in the case of cable television these protections date back to 1984. Working in that field gave me a lot of exposure to communications technologies and helped me to understand how various systems operate, the type of data flowing over them and what sort of information is captured by providers.
When I joined the Department of Commerce as general counsel in 2009, I was aware that privacy and cybersecurity were becoming increasingly important issues. Even before I was confirmed by the Senate, we spent time working on these topics, thinking about what we should be doing. Very early in the Obama administration, after I had deepened my familiarity with the matter, I advocated for action to deal with privacy issues.
The government seemed interested, and the White House empowered me to lead an inter-agency committee to look at this more closely, which led to the development of what ultimately became the Consumer Privacy Bill of Rights Act in 2015. This was a significant step forward.
I resigned as Acting Secretary of Commerce in late 2013, since which time I have been a visiting scholar at the Massachusetts Institute of Technology Media Lab and at the Brookings Institution, where I am a member of the Center for Technology Innovation. My work at these institutions follows the ways in which public policy and the law are adapting to the evolution of technology, and also seeks to design better governance for advanced and transformational technologies such as artificial intelligence.
Over the past decade or so, I have been involved in high-level exchanges on artificial intelligence policy among several countries and jurisdictions – the US, the UK, Canada, Singapore, Australia, Japan and the EU. Along with other experts, I have been looking at opportunities for stronger international cooperation on this front. The appreciation that such cooperation is necessary has certainly grown over this time, and the channels allowing for inter-governmental cooperation have become much more sophisticated.
My experience in politics and familiarity with legislative processes have undoubtedly helped me in this work – it is impossible to design good governance without appreciating how things get done at a governmental level, how to gauge what is possible, and how to frame issues in ways that speak to members of Congress or to the public.
This is especially important when it comes to topics such as analytics and big data. Because of their ability to discern unique patterns in a data set, or to link one data set with others, these technologies are turning information that has traditionally not been regarded as personal into powerful and exploitable data sets.
In such an environment, defining limits and setting legal requirements can be more complicated than ever before. There is so much value in data now that society and enterprises have increasingly important interests in how it is used. That is why, even after a life spent in the field, I still consider the legal implications of technology to be among the most important questions we face today.