New State Privacy Laws Square the Assessment and Controls Circle

Demand for advanced data analytic impact assessments has moved from a nice-to-have to a legal requirement. Twelve new state privacy laws in the United States require impact assessments that weigh the benefits to all stakeholders against the potential risks the data processing poses to the rights of consumers. These rights are broader than the data protection rights more commonly associated with existing privacy laws. To comply with these new laws, many organizations will have to evolve their assessment and governance processes to meet the new legal requirements. That much is clear. As yet, however, there is no defined standard or common regulatory expectation to direct what the new demonstrable processes should look like. The Information Accountability Foundation (IAF) believes that we can help clarify the evolving regulatory environment by developing a framework of standards for demonstrable accountability.

The advent of artificial intelligence (AI) generated the demand for AI impact assessments. Academics, NGOs, and some policymakers increasingly have recommended that organizations enhance their governance systems. Many have pointed to the need for Algorithmic Impact Assessments (AIAs), which assess the potential benefits, risks, and controls needed to achieve responsible and ethical AI. In response, for example, the IAF and PwC in 2021 drafted Evolving to an Effective Algorithmic Impact Assessment. The 2021 AIA Paper reflected the drafters’ view that the risks associated with AI call for impact assessments that are much more expansive and rigorous than those required by current data protection and privacy laws. Such an assessment, however, soon could be mandated by new laws governing the use of AI and its fairness implications for people, e.g., the EU Proposed Artificial Intelligence Regulation and Canada’s Bill C-27 (specifically the Artificial Intelligence and Data Act (AIDA) part of C-27). In the U.S., standalone AI-related provisions appear in some proposed and enacted federal and state legislation.

No consensus has emerged to give us reliable best practices for structuring AI impact assessments. We can say, however, that their scope is broader than the requirements found in, for example, GDPR Article 35’s Data Protection Impact Assessments or the conformity assessments outlined in the EU’s Proposed AI Regulation. While work continues on a federal U.S. AI regulation, the EU Proposed Artificial Intelligence Regulation, and Canada’s Bill C-27, several U.S. states have passed privacy laws that in effect regulate AI through their broad and comprehensive data protection assessment (DPA) requirements for processing activities that “present a heightened risk of harm to consumers” (i.e., risky processing). Key to this particular requirement are the Colorado Rules promulgated under the Colorado Privacy Act (CPA), which took effect July 1, 2023, and California’s Draft Risk Assessment Regulations (Draft California Regulations). The IAF conducted a rich analysis of this impact in our blog US State Privacy Laws Will Fundamentally Change the Way Businesses Assess Harm and our related Assessment Framework.

These new state laws require that a DPA identify and weigh the benefits that may flow, directly or indirectly, to the controller, the consumer, other stakeholders, and the public. The assessments must account for potential risks to the rights of consumers that could result from data processing, while documenting a plan to mitigate risks through commensurate safeguards. The benefit versus risk analysis required by these state privacy laws is different from the weighing required by privacy laws anywhere else in the world.
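To make the shape of such an assessment concrete, the sketch below is a minimal, hypothetical illustration (not a compliance tool) of how an organization might record the elements these laws name: benefits by beneficiary, risks to consumer rights, and the safeguards planned to mitigate them. All class and field names are our own assumptions for illustration, not terms defined in any statute or rule.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, minimal record of a data protection assessment (DPA).
# Field names are illustrative assumptions only.

@dataclass
class Benefit:
    beneficiary: str      # e.g., "controller", "consumer", "other stakeholders", "public"
    description: str
    direct: bool          # benefits may flow directly or indirectly

@dataclass
class Risk:
    consumer_right: str   # the consumer right to which the processing poses a risk
    description: str
    safeguards: List[str] = field(default_factory=list)  # planned, commensurate mitigations

@dataclass
class DataProtectionAssessment:
    processing_activity: str
    benefits: List[Benefit] = field(default_factory=list)
    risks: List[Risk] = field(default_factory=list)

    def unmitigated_risks(self) -> List[Risk]:
        """Risks documented without any corresponding safeguard."""
        return [r for r in self.risks if not r.safeguards]


# Illustrative usage: every identified risk should carry a documented mitigation plan.
dpa = DataProtectionAssessment(
    processing_activity="Profile scoring for targeted advertising",
    benefits=[Benefit("consumer", "More relevant offers", direct=True)],
    risks=[Risk("privacy", "Inference of sensitive traits",
                safeguards=["data minimization", "honoring opt-out requests"])],
)
assert not dpa.unmitigated_risks()
```

The point of the sketch is only that the weighing these laws require is structured: benefits are attributed to named beneficiaries, risks are tied to specific consumer rights, and each risk is paired with the safeguards intended to address it.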

These closely aligned developments in the world of AI governance have, in effect, squared the many different impact assessment and control requirement circles from public policy, legislative, and responsible AI standpoints. And so, these twelve new state privacy laws have the potential to influence the way current laws, regulations, and rules are applied. Further, they are well positioned to shape the way future laws will be structured globally. Beyond that, these new laws and the proposed AI laws, such as those in Europe and Canada, could have a much broader impact in at least two respects.

First, these new or emergent pieces of legislation require, either explicitly or implicitly, enhanced governance controls and processes that go beyond those envisioned by the original Essential Elements of Accountability and the Accountability Guidance issued by several regulators.[1] For example, the Colorado Rules and Draft California Regulations require the explicit description and communication of the specific processes the organization has employed to mitigate risk. The Guidance on AI and Data Protection from the Information Commissioner’s Office in the United Kingdom and the NIST AI Risk Management Framework are two examples of policymakers and regulators expecting more explicit and demonstrable accountability processes. In many organizations, these controls do not exist, so organizations are caught playing catch-up in a quickly evolving regulatory landscape.

Second, and relatedly, the new set of laws coming out of the U.S. could pave the way for increased interest from regulators in impact assessments. A regulator, for example a state attorney general, can ask to receive a DPA, and it is therefore likely that it would also ask for details on the processes and controls associated with key risk mitigators. The same requirement to produce an assessment to a regulator exists in the proposed EU AI Act and Canada’s C-27. This potential for production invites the types of questions outlined in the IAPP’s Regulators’ rulebook for AI: Bit by bit (iapp.org):

  • What policies, procedures and people do you have in place to assess AI risk and safety?
  • Who is involved in assessing AI risks and determining whether they have been sufficiently mitigated for product release? What are those individuals’ roles, reporting structures, titles, departments, and relevant expertise?
  • What risks did you take into consideration?
  • What risk mitigation measures did you implement?
  • What methods did you use to train or retrain your models?

New requirements for DPAs will trigger regulators’ requests that organizations show or demonstrate how they meet new accountability requirements. The immediate impact likely will be felt first in the U.S. as a result of the new state privacy laws.

Today, there is no common standard or common regulatory expectation as to what these new demonstrable processes should consist of. The unknown factors include how DPAs and other assessment requirements should be structured. This lack of clarity creates uncertainty for businesses that wish to increase their use of data as part of their strategies. And so, we risk regulators stepping in and setting standards that may not reflect a full understanding of business imperatives or how the technology works, all without the involvement of business. Let us hope that in the months and years ahead we do not affirm the saying that “bad facts make for bad law”. But let us do more than hope. By working together, business and regulators can develop effective “demonstrable accountability” standards of practice for regulatory guidance, as was the result of the original accountability dialogue in 2009-11, and provide some clarity for business.

The IAF intends to help bring that clarity by developing such a framework of standards for demonstrable accountability. Look for a specific proposal from the IAF in the coming months as our research progresses.


[1] In April 2012, the Office of the Privacy Commissioner of Canada (OPC) and the Offices of the Information and Privacy Commissioners (OIPCs) of Alberta and British Columbia worked together to develop “Getting Accountability Right with a Privacy Management Program” (Canadian Guidance), which set forth the policies and procedures an accountable organization must have in place to promote good practices that, taken as a whole, constitute a privacy management program.