Freight trains heading through a tunnel are pretty hard to miss. The equivalent of a freight train in the world of privacy is a trust deficit related to information use in an observational age driven by analytics. The many, many indicators that have piled up over recent days, weeks and months are coming at us like a “trust deficit train”, and it is pretty hard to miss. A trust deficit simply means an organisation does not get the benefit of the doubt: the doubters believe the organisation will put its own interests ahead of those of individuals and society when making robust use of data.
The U.S. Federal Communications Commission’s (FCC) new privacy regulations for broadband service providers are just the latest example that trust deficits lead to bright-line rules, which are generally ineffective in protecting individuals in today’s observational, data-driven world. The FCC has essentially said it will place oversight of broadband-related data use on the backs of individuals, who are not equipped to decide what is appropriate and what is not. The IAF’s public comments on the draft regulation contain more detail on our views, but, as my Australian colleague Malcolm Crompton recently argued in Marrakech, “this (privacy) is the only regulatory environment in which policy makers so burden individuals.”
That does not mean I do not support consent, or individuals having meaningful control over data that pertains to them. I simply believe the means of governance should be effective, and consent is not always an effective means of governance.
Organisations are smart; they will figure out how to get consent and overcome the “take it or leave it” prohibition. Just look at Europe and the ePrivacy Directive. I visit a site, a pop-up appears, and I grant consent. There is no thought; I just do so. Backward-facing regulation that relies heavily on consent is disturbing at a time when the limits of consent are being recognised.
We see some dissonance among privacy and data protection regulators. On one hand, they increasingly understand the limits of consent, as we see in the consent consultation by the Office of the Privacy Commissioner of Canada. They understand society’s need to assure both protection for individuals and the beneficial impacts on health, education, safety and economic growth that come from robust data use. This increasingly means data protection by design that looks at the full range of human interests, not just autonomy. On the other hand, the trust deficit that exists in all markets drives regulators to question whether processes that are subjective in nature (all assessment processes that require judgement are subjective) can ever be carried out consistently and with integrity. That leads to the conclusion that subjective processes must have some form of scalable, independent verification. At present, we have no good models for how such verification might work, which leads back to bright-line rules, like consent.
The IAF has concluded that effective data governance requires a family of assessment frameworks that range from simple triage of new processing to determine the level of assessment necessary, through traditional privacy impact assessments, to ethical big data assessments, and finally, to comprehensive data impact assessments. It is about utilising the right assessment for the specific governance challenge. We further believe that assessment processes must be supported by some oversight process to create trust, turning self-regulation into co-regulation.
I, personally, cut my teeth on co-regulatory processes with the Individual Reference Services Group (IRSG) that developed a set of governance principles in 1997. The principles covered the use of data for the purpose of creating reference and look-up services on individuals. The IRSG principles required organisations to assert they were following those principles so that they would be enforceable under the deceptive practices authority of the U.S. Federal Trade Commission (FTC). Furthermore, the principles required an annual external assessment using criteria developed in cooperation with PricewaterhouseCoopers. A statement of compliance for that assessment would then be made public and sent to the FTC, creating compliance certainty across the ecosystem.
The fourteen companies that came together to create the IRSG principles did not do so for altruistic reasons. An industry player had created a product that provided social security numbers (SSNs) to customers without either masking the data or qualifying the users and the uses. One could buy one’s neighbour’s SSN. Public concern threatened a legislative solution that would ban SSNs in a bright-line fashion. The co-regulatory process offered a far more nuanced governance over SSNs and fixed the trust deficit at the time. IRSG eventually disappeared after the Gramm-Leach-Bliley Act was passed, regulating the primary sources of SSNs previously covered under the IRSG principles. However, the lessons learned from that process are still applicable today.
Today’s trust deficits are related to many public concerns. Data security concerns have led to data breach notification laws. Adoption of broad observation technologies has led to rules, such as the FCC privacy regulations, that are intended to stop tracking by certain entities.
The European General Data Protection Regulation (GDPR) creates some flexibility for data-driven research that was not part of the Directive. However, that flexibility requires clarifying guidance from the EU member states. It also requires the types of assessments that we are developing at the IAF. Viktor Mayer-Schonberger and Yann Padova argue, in “Regime Change? Enabling Big Data Through Europe’s New Data Protection Regulation,” that this limited flexibility in the GDPR is the opening for further reform in data governance. That will only happen if there is an oversight process for assessments that breaks through the trust deficit.
Towards that end, the IAF is proposing a project to create options for oversight of the assessments required by modern data-driven processing. We need corporate participation first, and then participation by other stakeholders, including regulators. The driver for business is not altruism but the corporate need for rigorous processes that are trusted. As we have seen with the FCC, trust deficits produce backward-looking bright-line rules that neither protect people nor facilitate innovation that improves our lives.