Fairness has become a major data protection policy driver in Europe and the Americas. Fairness is often hard to define precisely, but its parameters are well known. A fair data application creates identifiable value for individuals, mitigates risks to those individuals, and keeps the data within the context of the relationship between the individuals and the data user. Fairness has become more important as consent has lost some of its effectiveness for governing data use. As fairness has gained importance, privacy regimes globally have increasingly looked for means to establish that processing is fair. As sensors collect data, insights come from big data processing, and artificial intelligence is applied in a growing number of circumstances, assessments that gauge risks and determine fairness have grown in popularity as well. We see this reach for assessments to assure fairness in the European General Data Protection Regulation, the consent consultation in Canada, and even the draft legislation in Argentina. The Information Accountability Foundation's work on Effective Data Protection Governance is grounded in the concept of fairness, and our assessment processes are built to give data users a means to demonstrate that what they are doing is legal, fair and just.
The United States' unique approach to privacy has always been a challenge to global data flows. Free expression is guaranteed by the First Amendment to the Constitution, so an organization gets the benefit of the doubt as it observes what it is free to monitor in the public commons and uses that data to think creatively and communicate. That freedom has boundaries: one cannot use observation and insights to cause substantial injury, and one cannot deceive people for commercial gain.
Deception as an enforcement norm was fully explored in the 1990s. The Clinton Administration and the Federal Trade Commission pushed companies hard to disclose what they were collecting and observing, how they would use that data, and how it might be shared. Once a company published its privacy policy, if it lied, the FTC had grounds for an enforcement action. Section 5 of the FTC Act prohibits unfair or deceptive acts or practices, and the FTC was more than willing to bring deception cases.
However, unfairness was a different story. Robert Pitofsky, FTC Chairman in the 1990s, told me personally that the FTC would not use unfairness in privacy enforcement. Chairman Pitofsky believed that unfairness required the FTC to prove that a specific data use ran the risk of causing substantial injury, and he believed that test was a bridge too far.
That reluctance changed when Timothy Muris became Chairman of the FTC in 2001. Chairman Muris believed protection should occur where there was substantial injury, part of the test required for unfairness under the FTC Act. He also found that substantial injury could be an obnoxious intrusion into one's life that the consumer could not avoid. He found endless phone calls at dinner hawking all sorts of goods and services to be substantial injury that could not be counterbalanced by benefits to competition. While the injury to each individual was small, aggregated over millions of people, the combined injury was substantial. Ninety-two percent of the American public found telemarketing to be always intrusive, so the Telemarketing Sales Rule was revised under his leadership and the Do-Not-Call list was created. For the next fifteen years, the concept of unfairness at the FTC, mostly related to data security, slowly expanded on a case-by-case basis.1
Unfairness has always been a tricky concept for the FTC. In the 1980s, when Congress believed the FTC was overusing unfairness without establishing substantial injury, the agency ran into difficulties and had its budget and staff cut. As unfairness began to emerge as a more important enforcement tool for privacy and security, many argued that substantial injury requires something more than a sense of moral outrage; there must be some empirical means to measure potential injury so that any abridgement of free enterprise is warranted.
On February 6, 2017, the FTC settled with Vizio in a case related to the second-by-second collection of data from smart TVs on the programs consumers were watching. The FTC asserted that Vizio's behavior was both deceptive and unfair. Acting FTC Chairman Maureen Ohlhausen, in a concurring statement, agreed Vizio was deceptive but questioned the unfairness allegation. The FTC staff argued that Vizio's collection of sensitive viewing habits was unfair. Ohlhausen countered that, from a policy perspective, the information might indeed be sensitive, but the FTC Act requires that for a behavior to be unfair it must be "a practice that causes substantial injury that is not reasonably avoidable by the consumer and is not outweighed by the benefits to competition or consumers." She went on to say she would "launch an effort to examine this important issue [what constitutes substantial injury] further."
In the telemarketing matter, Muris found that millions of intrusions, aggregated over millions of consumers, created collective substantial injury. Where is the quantifiable injury in the Vizio case? As a consumer who owns a Vizio smart TV (I actually do), I do not believe the behavior was fair. However, I also do not believe Vizio's behavior meets the unfairness test in the FTC Act, because something more than a sense of moral outrage is necessary.
The European Working Party 29 issued an Opinion on Legitimate Interests in 2014 that essentially said it is up to a company that wants to rely on legitimate interests to demonstrate that the processing will be fair. In Canada, under the accountability principle, companies using data robustly must demonstrate the use is fair. Draft legislation in Argentina places the burden on the company to demonstrate fairness. As concepts of unfairness were slowly expanding in the United States, and as governance outside the United States was increasingly relying on assessments to demonstrate fairness, the differences in consumer privacy protections were narrowing.

1 The concept of privacy harm also became part of the privacy mix when Muris was FTC Chairman. The Asia-Pacific Economic Cooperation ("APEC") forum adopted a privacy framework that added a ninth principle, prevention of harm, to the OECD's eight. The concept that prevention of harm is a core data protection goal was reaffirmed in the new European General Data Protection Regulation. Harm seems to be a broader concept than injury, not requiring the same level of empirical evidence.
However, there has always been a sense that the test for unfairness, risk of substantial injury, creates a high bar in the privacy area. Furthermore, in the United States the burden is on the FTC to prove unfairness, whereas in other jurisdictions the burden is on the organization to prove its activities are fair. Ohlhausen's project to better define what constitutes substantial injury in relation to personal data is prudent and timely. I support it. However, at least for some period of time, this issue will widen the divide between the U.S. and these other jurisdictions.
There is never a good time for the privacy divide between the United States and the balance of the world to widen. New technology is creating new stresses, and highly innovative smaller companies are pioneering applications that make logical sense only when expanded to a global environment.
The U.S. was not deemed adequate by Europe before this case, and Ohlhausen's concurring statement will not change that. So it is up to companies to demonstrate that they manage data in a fair fashion. For companies providing global services, the direction of Effective Data Protection Governance is prudent. If one can demonstrate that data is processed in a legal, fair and just manner, the processing will almost surely not be unfair. On the other hand, if one manages to a standard based on substantial injury that is not avoidable by the consumer or counterbalanced by benefits to markets and consumers, one may be safe from enforcement in the United States but out of step with global expectations. While regime gaps might widen, accountability gives companies guidance that minimizes global corporate risk.