IAF responds to the Executive Order on Safe, Secure, and Trustworthy AI

This blog reflects the views of policy staff and does not necessarily reflect the views of the IAF corporate and policy boards.

The Biden-Harris Administration issued an expansive and ambitious Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO). The order articulates boundaries and expectations for U.S. government agencies – and for much of the private sector (directly or indirectly) – regarding the responsible development and use of Artificial Intelligence (AI).

The IAF commends the EO’s emphasis on risk assessment and risk mitigation. Those foundational concepts underpin the IAF’s decade-long work to promote fair and responsible uses of data. We further applaud the Administration’s efforts to assert global leadership and raise the bar for ethical-AI practices internationally. The order is a long and complex document, one that will help guide other governments around the world (as well as those of U.S. states) now drafting rules and passing laws to regulate AI. Because AI remains a burgeoning field of innovation, however, laws and regulations written now can carry unintended consequences.

It is worth noting that the definition of AI in the EO is very broad. It is not limited to systems that have the capacity to “learn” or adapt to novel scenarios without following explicit instructions. In effect, the EO covers “automated processing” generally: making decisions with limited or no human involvement.

The EO calls for a government-wide, coordinated effort involving dozens of initiatives and work streams to promote baseline standards for responsible AI. It also mandates consistent mechanisms for assessing and mitigating AI-related risk. The EO includes:

  • enhanced security standards;
  • red-team testing and reporting protocols;
  • expectations around protecting Americans’ privacy, equity and safety (specifically for consumers, patients and students); and
  • steps to prevent discrimination and biased outcomes.

While promoting innovation, the EO also calls for the passage of enhanced data privacy legislation. To advance American leadership abroad, the order directs a wide range of government actions, from adjusting federal procurement to bolstering national security.

Notably, the EO invokes the Defense Production Act of 1950 to mobilize government agency action. That Act gives the President broad authority to compel U.S. companies to support efforts related to national defense. Going forward, national security interests must be considered within AI risk assessments.

The IAF has promoted risk-based approaches to data use in advanced analytics and AI for years. The new, forward-looking EO aligns with our long-established view: policymakers and industry leaders must develop frameworks for responsible innovation. Measures within the EO seek to maximize data-driven benefits from AI while minimizing harm. Industry has been put on notice: risk assessments must precede the release of products or services that deploy AI.

In light of the EO, organizations will want to develop or evolve many governance processes and systems. The new reality confirms the IAF’s findings from our analysis of recently enacted privacy laws and rules. For example, organizations are now familiar with required privacy impact assessments, but, in the realm of AI, assessments will have to evolve to incorporate a much broader set of risks and a much broader range of stakeholders.

How much impact will the EO actually have? The order acknowledges that Congress still needs to enact AI legislation. Meanwhile, the EO takes a measured, “down the middle” approach to mitigating the risks of AI while allowing potential innovations to bear fruit. The details of how to conduct an assessment will be addressed by NIST and other agencies through a transparent process that incorporates stakeholder input. Because it is an order and not a law, the EO lacks a detailed discussion of how to govern generative AI and machine learning. Perhaps government agencies will spell out those governance expectations in the future.

What is a foreseeable next step? Likely, the incorporation of new standards developed under the EO into federal procurement requirements. The results may end up mirroring the already strict requirements of FISMA (the Federal Information Security Modernization Act).

The EO demonstrates that the IAF should remain actively engaged, going forward, by advancing our work on multi-stakeholder risk assessments.

For further reading, see these IAF resources:

Artificial Intelligence, Ethics and Enhanced Data Stewardship

Advanced Data Analytic Processing

Risks Raised by Data Uses in Algorithms

Multistakeholder Data Protection Risk Assessment

The IAF Team