Writing privacy legislation that is fit for its intended purpose is not easy. Even the term “privacy legislation” can be counterproductive. “Privacy” implies the ability to control the assumptions others make about individuals based on the history of their interactions. The observational age has made effective control of personal data, and the insights those data beget, almost impossible. Any suggestion that individuals are able to block observation runs into the growing practical reality of how things work and the corresponding necessity of observation. Smart phones have to connect, and smart cars need to stop. Pandemics need to be traced, and personalized medicine needs to be personalized. Much modern technology depends on observation.
Yet observation often begets manipulation, and manipulation shapes the individuals people become. Further, manipulation at scale shapes the world they live in. Solutions increasingly need to be found, or the environment individuals live in will grow more and more hostile.
Despite all evidence that individual control is not fully effective, privacy regulation and privacy legislation tend to slot back into the same old groove: the belief that individuals can govern the information market through the choices they make, if only they have the opportunity to understand. Many individuals lack the time, knowledge base, and focus to fully understand information disclosed even with the intent to be as transparent as possible. As irrational as it may seem to return to the same groove, such an approach is reflected in pending legislation in Canada, newly enacted legislation in Virginia, and proposed legislation in the United States. It is further amplified in guidance from the European Data Protection Board. If all else fails, fall back on the same old explanations and make the same mistakes again.
An excellent description of this conundrum appears in the newly published paper by Professor Julie E. Cohen, “How (Not) to Write a Privacy Law”, published by the Knight First Amendment Institute at Columbia University. In her introductory section, Professor Cohen defines the problem:
“Current approaches to crafting privacy legislation are heavily influenced by the antiquated private law ideal of bottom-up governance via assertion of individual rights, and that approach, in turn, systematically undermines prospects for effective governance of networked processes that operate at scale. . . . Effective privacy governance requires a model organized around problems of design, networked flow, and scale.”
Professor Cohen goes on to analyze most of the proposed federal privacy bills and finds that they settle back into the same groove first cut by Alan Westin back in 1967. If there truly is an interest in understanding the problem before discussing solutions, Cohen’s paper is mandatory reading.
The IAF’s mission is not enacting legislation; its mission is research that begins to spell out what outside-the-box legislation might look like. To that end, the IAF has pursued accountability-based fair processing legislation. The further it has ventured into this endeavor, the more complicated the puzzle has become. Accountability-based fair processing legislation would require not only a retrofit of corporate processes but also a reset of the expectations of those who oversee the fair governance of data.
To get there, the IAF has returned to the basic principles that would drive the process:
PRINCIPLES FOR FAIR PROCESSING ACCOUNTABILITY
A risk-centered approach to fair data processing, necessary to achieve shared goals for beneficial innovation, trust, and fairness, bases decisions on the likelihood and severity of harm and on the degree of benefit to people, groups of people, society, and organizations if data are processed or not processed.
Accountable and Measured
Such a risk-based approach requires that organizations be accountable, with accountability defined as being responsible for how data are used and answerable to others for the means taken to be responsible. While organizations have primary responsibility for fair processing, individuals retain control where uses are impactful and individual controls are effective. A decision is not risk-based unless the risks and benefits at issue are measured and the integrity of the assessment is demonstrable to others. Risk/benefit decisions are not always intuitive. They require assessments that identify the parties that might be impacted by the use of data, how they might be impacted, and how the risks and benefits map to people, groups of people, and society. The matching of risks to benefits might not be one-to-one, but discrepancies must be understood and reasonable. Decisions must be explainable to others based on objective measures. While loss of individual autonomy is a risk factor, risks and enhancements to other fundamental human interests, such as health, employment, education, and the ability to conduct a business, must also be part of an assessment.
Informing and Empowering
Organizations have a proactive obligation to inform stakeholders about the data processed and the processes used to assess and mitigate risk. While fair data processing is less dependent on individuals’ decisions, where individuals do have rights, those rights should be transparent and easily exercisable. This relationship between individual rights and fair data processing facilitates holding organizations to account.
Competency, Integrity and Enforcement
Organizations are evaluated by the competency they demonstrate in reaching decisions to process, their honesty in making decisions that serve the stakeholders who are impacted, and the alignment of their disclosures and actions. All organizations will make mistakes, and some of those mistakes will impact people, groups of people, or society. Organizations are responsible for those outcomes, but there is a difference between systematically bad decisions and anomalies. A well-resourced regulatory enforcement mechanism is necessary for a risk-centered, accountability-based governance system to be trusted.
The IAF believes these principles can be mapped to the language contained in its model legislation. Such legislation, the IAF believes, would avoid the well-worn groove that imperils privacy law meant to be fit for tomorrow.
The IAF will explore these principles during day 2 of the IAF Summit to be held on April 14.