Assessments in an AI World – Requirements for US State Privacy Laws

By: Lynn Goldstein and Peter Cullen

Data protection assessments required by U.S. state privacy laws call for balancing all stakeholders' risks and benefits but do not explain how to balance them.

The Information Accountability Foundation (IAF) has for many years advocated for a multi-dimensional stakeholder, benefits, and risks balancing assessment. This multi-dimensional balancing assessment ensures all interests are reflected in today's complex data use scenarios. The IAF's development of these assessments began with the big data revolution, and in 2015, the IAF developed the Big Data Assessment Framework. This work evolved, and in 2016 the Office of the Privacy Commissioner of Canada gave the IAF a grant to develop a Canadian version of the Big Data Assessment. In 2018, the Office of the Privacy Commissioner for Personal Data in Hong Kong funded the IAF's drafting of an Ethical Accountability Framework for Hong Kong, China. The growing use of Artificial Intelligence (AI) and of data associated with Large Language Models (LLMs) reinforced the need for an end-to-end governance approach to the use of data and an evolved way of assessing impact. Therefore, in 2021, the IAF published a Model Algorithmic Impact Assessment (Model AIA) that included an evolved framework for multi-dimensional stakeholder, risks, and benefits balancing. Each of these research projects advanced and matured multi-dimensional assessments.

Each of these assessments reinforced two core facets:

  • A methodology is needed to identify and demonstrate, in an orderly and repeatable fashion, the three components of multi-dimensional balancing: stakeholders, benefits, and risks.
  • As new laws and regulations increasingly require assessments that include multi-dimensional balancing, a “normative framework” will benefit both businesses and regulators by establishing a set of standard expectations.

As many new U.S. state privacy laws explicitly require this type of multi-dimensional balancing, the next obvious iteration of the IAF’s assessment is the Demonstrable Assessments to Meet U.S. State Privacy Laws (U.S. Assessment). These new assessments are broader than typical privacy or data protection requirements and are better built for purpose in an AI world.

The requirement in two of the U.S. state privacy laws to produce assessments for regulators on demand, or to provide them at least annually, will trigger requests to demonstrate compliance with new accountability requirements. To meet these laws’ explicit and implicit requirements, the IAF believes organizations will have to adopt new governance processes, including substantive guidelines, policies, and procedures. These requirements are in some cases much more closely aligned with the governance associated with responsible AI. The IAF’s U.S. Assessment expands upon the requirements of the U.S. state privacy laws and their implementing regulations.

To develop the U.S. Assessment, the IAF created the Demonstrable Accountability Project (the Project).

The Project Process – Consistent with the IAF’s project methodology, a group of interested parties that included business, regulators, and the NGO community was convened. The project process consisted of the following:

  • The IAF developed a draft solution framework addressing key problem areas.
    • The IAF incorporated relevant U.S. state laws, rules, draft regulations, and other developing models into a draft solution model.
    • This stage was supported by individual dialogue with select business participants and regulatory authorities.
  • The draft framework was reviewed first through a convening meeting with participating business organizations.
  • Once the draft framework was fine-tuned, the IAF convened a multi-stakeholder session that engaged the business, regulatory, and NGO communities. The framework was presented and discussed, and suggestions were made to finalize it.

Going Forward – There are two possible separate work streams going forward:

  • Further socialization of the U.S. Assessment with other state regulators and with the NGO community. The current version of the U.S. Assessment was developed over a period of several months with input from business and academics as part of a Multi-Stakeholder Session. The IAF hopes this dialogue will expand to include regulators so that the makeup and format of the U.S. Assessment will help advance any forthcoming guidance and will enable businesses to meet the requirements of these new and future U.S. state privacy laws. It was noted that U.S. states’ future enactment of AI laws may affect the scope and content of the U.S. Assessment.
  • Potential New Projects – As additional requirements are added by new U.S. state privacy laws and regulations, the U.S. Assessment could be updated. Attendees at the Multi-Stakeholder Session also thought use cases would be helpful, covering, for example, AI model development, training, bias removal, anti-money laundering, fraud, and direct marketing and advertising. Furthermore, since the U.S. Assessment addresses only predictive AI, it could be expanded to address generative AI as well.

The new report issued by the IAF shows how to do the balancing. It can be accessed here.
