On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released its Blueprint for an AI Bill of Rights (“Blueprint”), which identifies five principles to minimize potential harms stemming from certain applications of AI.  The Blueprint recognizes the “extraordinary benefits” that AI can provide and states that harms stemming from AI are not inevitable.

OSTP clarifies that the systems within scope of the Blueprint are (1) automated systems that (2) have the potential to “meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”  These rights, opportunities, and access include civil rights, civil liberties, and privacy; equal opportunities, including in education, housing, credit, employment, and other programs; and access to critical resources and services, such as healthcare, financial services, safety, social services, “non-deceptive information about goods and services,” and government benefits.

  1. Safe and Effective Systems

The Blueprint states that individuals should be protected from unsafe or ineffective AI.  To implement this principle, the Blueprint recommends proactive and ongoing consultation with the public and experts, risk identification and mitigation, and oversight.  Additionally, OSTP recommends avoiding the use of inappropriate, low-quality, or irrelevant data for AI, and it recommends that data derived from other data by the AI system be identified and tracked to avoid feedback loops, compounded harms, and inaccurate results.  Notably, the Blueprint recommends that entities be able to demonstrate the safety and effectiveness of the AI system, including through independent evaluation by a third party and reporting on the results of those evaluations.
  2. Algorithmic Discrimination Protections

OSTP underscores that individuals should not face discrimination by AI based on a characteristic protected by law and that systems should be designed and used in an equitable way.  The Blueprint explains that systems should be subject to a proactive assessment of equity in design and ongoing disparity assessments, reflect a representative and robust data set used for the development of AI, ensure accessibility in design, and guard against the use of proxies that contribute to algorithmic discrimination.  The Blueprint recommends that entities be able to demonstrate that a system protects against algorithmic discrimination, including through independent evaluation of potential algorithmic discrimination and reporting, with assessments made public “whenever possible.”
  3. Data Privacy

The third principle reflects that individuals should have agency over how their data is used and should not be subject to unchecked surveillance.  OSTP details that AI systems should process data consistent with the individual’s reasonable expectations and that sensitive inferences should be used only for necessary functions.  The Blueprint states that systems should incorporate privacy by design, including by assessing privacy risks throughout the life cycle of the system, employing data minimization, and proactively identifying and mitigating privacy risks.  According to the Blueprint, systems should be designed to provide users with meaningful consent, access, and control over the data used by an AI system; specifically, systems should not use design decisions that “obfuscate user choice or burden users with defaults that are privacy invasive.”  Additionally, the Blueprint states that surveillance and monitoring systems should be subject to heightened oversight, including an assessment of potential harms, and that surveillance should not be used in contexts like housing, education, or employment, or where it would monitor the exercise of democratic rights in a way that limits civil rights and civil liberties.
  4. Notice and Explanation

The Blueprint emphasizes the importance of clear notices, explanations of why a decision was made by AI, and demonstrated protections for notice and explanation.  To implement this principle, OSTP recommends that individuals be provided with clear, timely, understandable, and accessible notice of how AI will be used, which identifies the individual responsible for designing the AI system and is brief and clear.  Additionally, OSTP recommends explanations of decisions made by the system that are tailored to the purpose for which the user is expected to use the explanation, to the specific audience, and to the level of risk, so that higher-risk systems are subject to explanation before a final decision is rendered.  The Blueprint also recommends that notice and explanation be demonstrable, including through summary reporting.
  5. Human Alternatives, Consideration, and Fallback

The Blueprint recommends that individuals be able to opt out of the use of AI and have access to a person who can quickly consider and remedy problems.  In particular, the Blueprint recommends that entities provide a “mechanism to conveniently opt out from automated systems in favor of a human alternative, where appropriate,” though the Blueprint does not explain when an opt-out would be appropriate.  Additionally, if the automated system fails, produces an error, or the individual would like to contest the decision, the Blueprint recommends a process of human consideration, and for the use of AI within sensitive domains such as criminal justice, employment, education, and health, it recommends additional human oversight and safeguards, including human consideration before any high-risk decision is made.

The Biden Administration’s AI Bill of Rights follows updates from prior administrations on the development and deployment of AI technologies.  Specifically, former President Trump signed an Executive Order in February 2019, titled “Maintaining American Leadership in Artificial Intelligence,” setting forth five pillars for a government strategy to increase American AI competitiveness.  And the Obama administration released two reports on AI, both of which focused on the anticipated effects of AI on the American workforce. 

This latest executive development joins a number of legislative proposals, at both the U.S. federal and state levels, to regulate AI.  For example, at the federal level, the American Data Privacy and Protection Act would require certain types of businesses developing and operating AI to undertake risk assessments, and at the state level, regulators in California and Colorado are engaging in rulemaking processes to define requirements for profiling using automated processing.  At the same time, the Federal Trade Commission issued an Advance Notice of Proposed Rulemaking that solicits input on questions related to automated decision-making.  Although these efforts remain underway, they signal a continued trend of increasing interest in regulating AI technologies.

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.