On April 8, 2020, the Federal Trade Commission (“FTC”) released a blog post about the use of artificial intelligence (“AI”) and algorithms in automated decisionmaking. The post highlighted the potentially great benefits and risks presented by increasingly sophisticated technologies, particularly in the “Health AI” space. However, it also emphasized that automated decisionmaking is not a new phenomenon, and the FTC already has a long history of assessing and addressing its challenges. Based on prior FTC enforcement actions, studies, reports, and other sources of guidance, the post outlined five general principles for using AI and algorithms while adequately managing consumer protection risks:

  1. Be transparent. Entities should be upfront with consumers about how they use their AI solutions. For example, if automated tools, such as chatbots, are used to interact with consumers, the nature of this interaction should not be deceptive (i.e., it should be clear to consumers that they are interacting with an AI tool). In particular, entities should be transparent when collecting sensitive data, as secretly using sensitive data to “feed” an algorithm could give rise to an FTC action. Finally, entities should consider whether certain notices are required when they make automated decisions based on information obtained from a third-party vendor that may be considered a “consumer reporting agency” under the Fair Credit Reporting Act (“FCRA”). For example, when using data obtained from a credit reporting agency to deny someone an apartment, an “adverse action” notice must inform consumers of their right to see and contest the information reported about them.
  2. Explain your decision to the consumer. When denying consumers something of value based on an algorithmic decision, entities should be able to explain what data was used in the model and how that data was used to arrive at the decision. Similarly, entities that use algorithms to generate “scores” about consumers should disclose the factors that went into the score and their relative weight in determining it (a minimal sketch of such a disclosure appears after this list). Importantly, if automated tools may alter the terms of an existing deal (such as tools that might reduce consumers’ credit limits based on their purchasing habits), this must be disclosed to consumers as well.
  3. Ensure that your decisions are fair. Entities should ensure that their use of AI does not result in discrimination against protected groups, which is prohibited by several existing antidiscrimination laws. The post emphasized that when evaluating whether one of these laws has been violated, the FTC will look at both the inputs into the AI algorithm (e.g., whether the model contains ethnically based factors or proxies for such factors, such as census tracts) and the outcomes the model produces (i.e., whether a facially neutral tool results in a discriminatory outcome; see the second sketch after this list). Finally, the FTC notes that consumers are entitled under the FCRA to obtain a copy of the information on file about them and to have the ability to correct it.
  4. Ensure that your data and models are robust and empirically sound. In certain use cases, entities will be legally obligated to ensure that their data and models are robust and empirically sound. For example, entities acting as consumer reporting agencies are required under the FCRA to implement “reasonable measures” to ensure that the information provided is as accurate as possible. Even if an entity is not a consumer reporting agency, it may still be considered a “furnisher” if it provides data to consumer reporting agencies, and furnishers must have written policies and procedures in place to ensure that the data they provide is “accurate and has integrity.” In all cases, the FTC recommends that a company’s AI models be statistically “validated and revalidated to ensure that they work as intended, and do not illegally discriminate” (see the third sketch after this list).
  5. Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination. Before using an algorithm, the FTC’s blog post recommends asking four key questions: (1) how representative is the data set? (2) does the data model account for biases? (3) how accurate are the predictions based on big data? and (4) does a company’s reliance on big data raise ethical or fairness concerns?
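
To make the second principle concrete, below is a minimal sketch of the kind of disclosure it contemplates: a simple linear scoring model that reports each factor’s contribution to the final score, ranked by influence. The feature names and weights are hypothetical illustrations, not any actual scoring methodology.

```python
# Minimal sketch: disclosing the factors behind a consumer "score."
# The feature names and weights are hypothetical, not a real model.

WEIGHTS = {
    "payment_history": 0.40,
    "credit_utilization": 0.30,
    "account_age_years": 0.20,
    "recent_inquiries": -0.10,
}

def score_with_explanation(features):
    """Return the score plus each factor's contribution, largest first."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Rank factors by the size of their influence on this decision
    # (the kind of relative-importance disclosure the FTC post describes).
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, factors = score_with_explanation({
    "payment_history": 0.9,
    "credit_utilization": 0.5,
    "account_age_years": 0.3,
    "recent_inquiries": 2.0,
})
print(f"score = {score:.2f}")
for name, contribution in factors:
    print(f"  {name}: {contribution:+.2f}")
```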
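
On the third principle, one widely used heuristic for testing whether a facially neutral tool produces a discriminatory outcome is the “four-fifths rule” from the employment context: if a protected group’s selection rate falls below 80% of the highest group’s rate, the disparity warrants scrutiny. The sketch below applies that heuristic to invented approval counts; it is one possible outcome test, not a method the FTC post prescribes.

```python
# Minimal sketch: outcome testing with the "four-fifths rule" heuristic.
# Group labels and approval counts are invented for illustration.

def selection_rates(outcomes):
    """outcomes maps group -> (approved, total); returns approval rates."""
    return {g: approved / total for g, (approved, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose rate is below `threshold` of the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return [g for g, rate in rates.items() if rate / top < threshold]

outcomes = {
    "group_a": (480, 600),  # 80% approved
    "group_b": (270, 500),  # 54% approved
}
print("potential disparate impact:", four_fifths_check(outcomes))
# -> potential disparate impact: ['group_b']
```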
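
The fourth principle’s call for models to be “validated and revalidated” can be read as an ongoing statistical check rather than a one-time step. The sketch below, which assumes scikit-learn, synthetic data, and a hypothetical accuracy floor, gates a model on held-out performance; the same check would be rerun on fresh labeled data as the input population drifts.

```python
# Minimal sketch: validating (and later revalidating) a model against
# held-out data. The synthetic data and 0.85 floor are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.85  # hypothetical "works as intended" threshold

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def revalidate(model, X_new, y_new):
    """Acceptance check to rerun whenever the input population may drift."""
    acc = accuracy_score(y_new, model.predict(X_new))
    print(f"holdout accuracy: {acc:.3f}")
    return acc >= ACCURACY_FLOOR

if not revalidate(model, X_holdout, y_holdout):
    print("model fails validation; retrain before continued use")
```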

In addition, when developing AI for others to use, entities should ensure that appropriate access and use controls are put in place to prevent misuse (e.g., contractual obligations, such as terms of use for the AI tool, and technical measures, such as running the technology on the developer’s own servers). Further, entities should ensure they have appropriate accountability mechanisms in place, and should consider using tools and services to test algorithms for potential problems.
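
As a purely illustrative reading of the “technical measures” point, a developer might keep its model on its own servers and expose it only through an authenticated endpoint, so that access can be revoked and usage audited. The Flask app and header-token scheme below are assumptions made for the sketch, not controls the FTC post prescribes.

```python
# Minimal sketch: server-side deployment with access controls, so the
# developer (not the customer) holds the model and can cut off misusers.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In practice these keys would live in a secrets store and be tied to
# signed terms-of-use agreements; hard-coded here only for illustration.
AUTHORIZED_KEYS = {"demo-key-123": "customer_a"}

def run_model(text):
    """Placeholder for the actual AI model, which never leaves the server."""
    return {"label": "positive" if "good" in text.lower() else "negative"}

@app.route("/predict", methods=["POST"])
def predict():
    customer = AUTHORIZED_KEYS.get(request.headers.get("X-API-Key", ""))
    if customer is None:
        abort(401)  # unknown or revoked key: access denied
    payload = request.get_json(force=True)
    app.logger.info("prediction for %s", customer)  # audit trail for misuse review
    return jsonify(run_model(payload.get("text", "")))

if __name__ == "__main__":
    app.run()
```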

As the FTC’s blog post noted at the outset, these five principles undoubtedly will come into play as AI is increasingly deployed in critical industries, such as the healthcare sector. As we mentioned in a previous blog post, AI and other digital health technologies have the potential to play an integral role in managing the current COVID-19 pandemic. In particular, researchers are considering whether AI can be applied to patient monitoring, infection control, and vaccine development. As these and other technologies are developed to address the global health crisis, it will be critical to ensure that regulatory guidance (including the FTC’s blog post) is considered and applied throughout the product lifecycle.

For companies developing AI and other technology solutions to aid in the efforts against COVID-19, please take a look at our Coronavirus/COVID-19 Checklist to better understand some of the potential regulatory and other legal considerations. We also have posted some simple steps companies can take to mitigate their product liability risk as they develop these innovative new technologies.

To learn more about AI, please access our AI toolkit.

Terrell McSweeny

Terrell McSweeny, former Commissioner of the Federal Trade Commission (FTC), has held senior appointments in the White House, Department of Justice (DOJ), and the U.S. Senate. At the FTC and DOJ Antitrust Division, she played key roles on significant antitrust and consumer protection enforcement matters. She brings to bear deep experience with regulations governing mergers and non-criminal, anti-competitive conduct, as well as issues relating to cybersecurity and privacy facing high-tech, financial, health care, pharmaceutical, automotive, media, and other industries. Terrell is internationally recognized for her work at the intersection of law and policy with cutting-edge technologies including Artificial Intelligence (“AI”), Digital Health, Fintech, and the Internet of Things (“IoT”). Clients benefit considerably from her extensive relationships with other enforcement agencies around the world.

Prior to joining the Commission, Terrell served as Chief Counsel for Competition Policy and Intergovernmental Relations for the U.S. Department of Justice, Antitrust Division. She joined the Antitrust Division after serving as Deputy Assistant to the President and Domestic Policy Advisor to the Vice President from January 2009 until February 2012, advising President Obama and Vice President Biden on policy in a variety of areas.

Terrell’s government service also includes her work as Senator Joe Biden’s Deputy Chief of Staff and Policy Director in the U.S. Senate, where she managed domestic and economic policy development and legislative initiatives, and as Counsel on the Senate Judiciary Committee, where she worked on issues such as criminal justice, innovation, women’s rights, domestic violence, judicial nominations, immigration, and civil rights.