A federal privacy law in the U.S. seems increasingly likely. When? That is not yet clear. However, we can say with considerable certainty that in the coming months we will see many draft bills joining the ones we have already seen from Senators, members of Congress, Intel, CDT and others. The current series in The New York Times, The Privacy Project, helps illustrate why legislation is needed. However, it also illustrates how complex that legislation will be. That complexity is one of the reasons the IAF will issue its own model legislation late this spring. Another reason is that while fair processing principles are a useful way to describe a framework, for legislators to translate a framework into legislation, the principles need to be couched in the language of legislation.
A key question is what legislation should seek to provide. Legislation should strive to make us safer and create guideposts for digital innovation. So where might one start when reviewing privacy legislation? At the IAF, we believe a good place to start is with key foundational issues.
The day after the Federal Trade Commission privacy hearings on April 9 and 10, the IAF held a small table discussion on the topic of foundational issues. In many ways, the IAF discussion was informed by the FTC hearings, where questions such as the current effectiveness of consent, the utility of accountability, and the role the FTC might play in any new legislative framework were raised. The IAF small table discussion included a diverse group of stakeholders. It was held under the Chatham House Rule, and we are still reviewing what we learned. These issues will be explored further at the IAPP Global Privacy Summit on May 3 in a session entitled “Not Grand’s Privacy Law: Fundamental Questions for Comprehensive U.S. Regime.” In the meantime, we believe it is useful to share the foundational issues we raised with the broader privacy community.
- When we say privacy, what do we mean? Rather than defining information privacy, we tend to frame it based on concerns. From a functional perspective, those concerns may be grouped as intrusion upon seclusion; autonomy, or control over the information that defines us; or the fair processing of data that pertains to us. Linking solutions to the privacy interest in play tends to yield the best results. For example, consent (autonomy) will not solve discrimination problems linked to insufficient data to train AI systems (fair processing). How should we think about manipulation across this spectrum? Which interests are we attempting to resolve with new legislation?
- Are we creating comprehensive privacy protection or amending consumer protections? Section 5 of the FTC Act and most U.S. sector-specific laws have treated privacy as a consumer protection issue. Restrictions on government data use have been based on constitutional rights. Are we attempting to amend or add to those protections, or are we looking towards something more focused on a broad range of individual interests related to a digital economy and society? This question begins to define the structure of a new law and, more importantly, how it might be overseen and enforced.
- Do we intend the law to be administered through enforcement only, or do we expect a federal entity to oversee implementation of complex legal provisions? Comprehensive privacy laws tend to require public authorities to provide some level of ongoing oversight as well as enforcement when the law is broken. Provisions such as prohibitions on unfair and deceptive practices have tended to be enforced after abuses occur rather than overseen on an ongoing basis. If one has a comprehensive law that requires privacy by design, risk assessments that drive corporate decision making, and transparency around how one might fairly conduct AI and other advanced analytics, one is pushing towards a model that requires oversight. That raises the question of what type of oversight. If one houses this function at an existing law enforcement agency, does this change the nature of that agency?
- How much will it cost to oversee comprehensive legislation, and are we prepared to pay that price? The UK Information Commissioner has a staff of 500, with at least half dedicated to privacy. Ireland has a staff of over 100 dedicated to privacy. The FTC’s privacy staff is much smaller. What is a proportionate staff level for a U.S. agency? What should its role and staffing mix look like? If that staff were added to an existing agency, would it change the nature of that agency?
- How do we reconcile U.S. free expression values, which have included the freedom to observe in the public forum and the ability to do research with data under limited restrictions, with the concerns that drive legislation: a sense that we are over-observed and that algorithmic decisions have been unfair? Some attribute the U.S. competitive advantage to the ability of American companies to think freely about what data might predict and how those insights might be used to drive commerce. The U.S. likely wants to maintain this competitive advantage. So how do we reconcile the interests of knowledge-driven value creation with a desire for more trustworthiness?
- How do we avoid stifling new data-driven knowledge creation? Most international data protection and privacy laws require permission before data is processed in any form or fashion; repurposing data for research, whether commercial or academic, must be permitted by the law. Today in the U.S., for the most part, using data to create new insights is not regulated by law. Organizations are free to think and learn with data. It is only when the data is used that explicit sector-specific laws, or general protections against unfair and deceptive practices, kick in. Yet some of the applications coming from analytics have increasingly been seen as harmful. How do we protect data-driven knowledge creation while staying sensitive to provisions that, whether intentionally or not, make thinking with data much more difficult? IAF’s recent blog addressed many of these issues.
- How do we prevent advanced analytics and AI from hiding prohibited discrimination? As a society, we have decided that certain factors may not be used in decision making. For example, one’s gender, age and race may not be used in making a credit decision, even if those factors are predictive of performance. Both AI and big data may obscure the factors that lead to a decision. How do we structure law that protects against inappropriate discrimination without creating restrictions that are overly prescriptive?
- Should new legislation primarily drive legal liability avoidance or proactive accountable behavior? Privacy law may be structured as a list of prohibitions, a description of processes and objectives, or a combination of the two. While not a law, the Canadian regulators’ guidance on using a comprehensive privacy management program to drive accountability is an example of regulatory guidance that describes objectives and the processes to achieve them. Since data use is dynamic, lists of prohibited activities lead to legal structures that are often dated by the time they go into effect. Some have suggested an approach that achieves legal certainty by creating white lists and black lists of activities. Others try to square the circle by governing fair processing through data subject sovereignty. Still others favor specifying goals and demonstrable processes. What approach makes sense in a dynamic environment?
- Is the subject in question personal data or impactful data? The definition of personal data is often used to set the domain of privacy law. As data not captured by that definition become consequential to decisions about people, regulators often look for ways to treat that non-personal data as personal data. Some have suggested that the jurisdictional boundary should instead be based on data that has an impact on a distinct individual. Such a concept is more dynamic but less certain. What data should a new privacy law cover?
- Which organizations should be subject to rigorous requirements? Rigorous accountability programs are expensive, and perhaps not all companies should have to create them. But what are the criteria for determining who is in or out? A small company with ten employees may create real consequences for people, while some large organizations make relatively simple use of data. How does one design the criteria that determine which types of companies the more rigorous accountability provisions should apply to?
Are there questions we missed? Please let us know. This discussion will continue over the coming months. Our intent is to move beyond issues to legislative building blocks, and that will be the focus of the dialogue at our June 26 Summit. Stay tuned for details on this event.