For the past decade, the privacy ecosystem (company CPOs, policymakers, regulators, academics, advocates, the courts, and others) has been using the term "risk-based" without any clear definition of what that term should mean in practice. The EU General Data Protection Regulation (GDPR) was the first privacy and data protection law that, from its inception, was intended to be risk-based; EU Commissioner Viviane Reding said so when she introduced the regulation in 2012. Ten years later, it is crystal clear that while a risk-based approach was the GDPR's intent, there was no real discussion of "risk of what?" This failure is not just a European issue, since almost every privacy law enacted since the GDPR took effect has been framed as risk-based. Risk management requires a definition of the negative outcomes to be avoided or mitigated. If there is disagreement on what is at issue, the consequences will include, at a minimum, wasteful allocation of people, time, and resources.
In the spring of 2021, the IAF set out to answer the question, "risk of what?" That effort culminated in a September 2021 workshop, and the IAF has continued to pursue this fundamental question since. The IAF's interim report, "Risk of What?," issued on April 2, 2022, provides some needed context. It is an interim report because the more the IAF delved into the "risk of what?" question, the more the answer came to resemble the Boggart from "Harry Potter and the Prisoner of Azkaban": a shape-shifting creature that assumes the form of whatever most frightens the person who encounters it. Each stakeholder's biggest fears frame that stakeholder's answer to "risk of what?" The IAF also observed frequent confusion over whose Boggart should be addressed by the party charged with managing risk, whether that party is a controller or processor, a regulator, a news reporter, an advocate, or an academic.
Today, data protection and privacy management are intended to be risk-based in their execution. The identification of risk therefore shapes how a privacy program is structured and implemented, how that program is overseen, and how individual issues trigger investigations or enforcement actions. The same is true for data and cyber security programs. The IAF's hypothesis was that the failure to reach consensus on the risks to be managed has led to a less-than-optimal implementation of the risk-based approach to data protection, and that building consensus on clearly defined risks would enable their prioritization and then their strategic management. The "Risk of What?" project and subsequent developments showed that this hypothesis was much too simple.
So, this is an interim report. For today, the IAF staff believes that the answer to "risk of what?" should be grounded in the negative outcomes to be avoided, and therefore that the identification of "adverse processing impacts" offers promise. The term "adverse processing impact" is defined in the IAF's model legislation, the FAIR and OPEN USA Act:
"Adverse Processing Impact" means detrimental, deleterious, or disadvantageous consequences to an Individual arising from the Processing of that Individual's Personal Data or to society from the Processing of Personal Data, including—

1. direct or indirect financial loss or economic harm;
2. physical harm, harassment, or threat to an Individual or property;
3. psychological harm, including anxiety, embarrassment, fear, and other mental trauma;
4. inconvenience or expenditure of time;
5. a negative outcome or decision with respect to an Individual's eligibility for a right, privilege, or benefit related to—
   a. employment, including hiring, firing, promotion, demotion, reassignment, or compensation;
   b. credit and insurance, including denial of an application, obtaining less favorable terms, cancellation, or an unfavorable change in terms of coverage;
   c. housing;
   d. education admissions;
   e. financial aid;
   f. professional certification;
   g. issuance of a license; or
   h. the provision of health care and related services;
6. stigmatization or reputational injury;
7. disruption and intrusion from unwanted commercial communications or contacts;
8. discrimination in violation of Federal antidiscrimination laws or antidiscrimination laws of any State or political subdivision thereof;
9. loss of autonomy through acts or practices that are not reasonably foreseeable by an Individual and that are intended to materially—
   i. alter that Individual's experiences;
   ii. limit that Individual's choices;
   iii. influence that Individual's responses; or
   iv. predetermine results or outcomes for that Individual; or
10. other detrimental or negative consequences that affect an Individual's private life, privacy affairs, private family matters or similar concerns, including actions and communications within an Individual's home or similar physical, online, or digital location, where an Individual has a reasonable expectation that Personal Data or other data will not be collected, observed, or used.
"Adverse processing impact," as defined in the IAF's model legislation, is broad enough to encompass each stakeholder's Boggart. It is flexible enough to be useful across a full range of contexts, and it is specific enough to support setting controls and objectives for privacy by design, creating links to the full range of human interests, and possibly serving as the basis for oversight and enforcement.
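To illustrate how a definition like this could feed privacy-by-design work, the sketch below encodes the ten enumerated impact categories as a machine-readable structure that an assessment tool might consume. This is a minimal, hypothetical illustration only; the names ImpactCategory, ProcessingAssessment, and unmitigated are assumptions for this sketch and are not part of the IAF's model legislation or any existing tool.

```python
# Hypothetical sketch: encoding the "adverse processing impact" taxonomy
# so a privacy-by-design review can reference it consistently. All names
# here are illustrative assumptions, not defined by the FAIR and OPEN USA Act.
from dataclasses import dataclass, field
from enum import Enum, auto


class ImpactCategory(Enum):
    FINANCIAL_LOSS = auto()          # item 1: direct or indirect financial/economic harm
    PHYSICAL_HARM = auto()           # item 2: physical harm, harassment, threats
    PSYCHOLOGICAL_HARM = auto()      # item 3: anxiety, embarrassment, fear, trauma
    INCONVENIENCE = auto()           # item 4: inconvenience or expenditure of time
    ELIGIBILITY_DECISION = auto()    # item 5: employment, credit, housing, etc.
    REPUTATIONAL_INJURY = auto()     # item 6: stigmatization or reputational injury
    UNWANTED_CONTACT = auto()        # item 7: unwanted commercial communications
    DISCRIMINATION = auto()          # item 8: violations of antidiscrimination law
    LOSS_OF_AUTONOMY = auto()        # item 9: unforeseeable manipulation of choices
    PRIVATE_LIFE_INTRUSION = auto()  # item 10: intrusion into private life or home


@dataclass
class ProcessingAssessment:
    """One row of a privacy-by-design review for a single processing activity."""
    activity: str
    impacts: set[ImpactCategory] = field(default_factory=set)
    mitigations: dict[ImpactCategory, str] = field(default_factory=dict)

    def unmitigated(self) -> set[ImpactCategory]:
        # Impacts flagged for this activity that have no recorded control.
        return {i for i in self.impacts if i not in self.mitigations}


# Usage: flag impacts, record controls, surface what remains open.
review = ProcessingAssessment("ad targeting based on browsing history")
review.impacts = {ImpactCategory.LOSS_OF_AUTONOMY, ImpactCategory.UNWANTED_CONTACT}
review.mitigations[ImpactCategory.UNWANTED_CONTACT] = "opt-out honored at collection"
print(review.unmitigated())  # surfaces LOSS_OF_AUTONOMY as still unmitigated
```

The design point is simply that a shared, enumerated taxonomy gives engineers, reviewers, and overseers a common vocabulary, which is what the text above means by "setting controls and objectives for privacy by design."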
Over the next year, the IAF will explore, with partners, how this answer to the "risk of what?" question can be better framed in terms of enterprise risk management, privacy-by-design engineering, and oversight based on external standards and protections. Please join the work.