On December 1, the Washington State AI Task Force (“Task Force”) released its Interim Report with AI policy recommendations to the Governor and legislature. Established by the legislature in 2024, the Task Force is responsible for evaluating current and potential uses of AI in Washington and recommending regulatory and legislative actions to “ensure responsible AI usage.”

The Interim Report notes that the federal government has largely maintained a “hands-off approach” to the AI sector, creating a “crucial regulatory gap that leaves Washingtonians vulnerable.”  Building on the findings in a 2024 preliminary report, and in the absence of “meaningful federal action,” the Interim Report identifies several recommendations for balancing the promotion of technological innovation with the protection of individual rights, privacy, and economic stability, including:  

  • Adoption of NIST AI Principles.  The Task Force recommends that Washington formally adopt the principles for ethical and trustworthy AI in the National Institute of Standards and Technology (NIST)’s 2023 AI Risk Management Framework as the “guiding policy framework” for the development, deployment, and use of AI in Washington.
  • AI Developer Transparency and Disclosure Requirements.  The Task Force recommends, among other things, requiring AI developers to make information publicly available regarding the provenance, quality, quantity, and diversity of datasets used for training AI models, including explanations of the sources of data and methods of data acquisition, the types and volume of data processed, and the processes used to prepare and annotate data prior to processing. The Task Force further recommends requiring disclosures about how training data is processed to mitigate errors and biases during AI model development, with appropriate protections for trade secrets and proprietary information protected by law.
  • AI Governance Requirements for Developers and Deployers.  The Task Force distinguishes between “low-risk” and “high-risk” uses of AI, and describes “high-risk AI systems” as those with the potential to significantly impact people’s lives, health, safety, or fundamental rights. The report recommends mandating that developers and deployers of high-risk AI systems adopt and implement recognized AI governance frameworks, such as the NIST AI Risk Management Framework and ISO/IEC 42001, and publicly disclose their risk management practices and risk mitigations. The Task Force also calls on the legislature to “carefully evaluate” whether high-risk uses of AI should require “additional safeguards, restrictions, or outright bans.”
  • AI in Education.  The Task Force recommends investment in education related to AI, as well as financial support for educators and students to integrate AI tools into their curriculum.
  • AI and Healthcare Regulations.  Among other recommendations, the Task Force calls for legislation requiring that any decision to deny, delay, or modify health services based on a determination of medical necessity be made only by qualified clinicians, while permitting the use of AI to facilitate, but not as the “sole means” for, such decisions.  According to the Task Force, any AI tools used to facilitate prior authorization requests should be required to apply the same clinical criteria as licensed healthcare professionals.
  • AI Workplace Guidelines.  In addition to creating a “multi-stakeholder advisory group” to establish “AI workplace guiding principles,” the Task Force recommends requiring employers to disclose when AI is being “used in ways that directly affect employees,” including uses of AI for employee monitoring, discipline, termination, and promotion.

The Task Force’s Final Report is due by July 1, 2026, and is expected to contain additional recommendations related to AI companion chatbot safeguards and the climate and energy impacts of AI infrastructure. The Task Force is also considering additional recommendations regarding the use of AI in education, labor, consumer protection, and healthcare. Task Force subcommittee meetings are open to the public, with comments accepted at least 24 hours in advance; written comments are accepted at any time. If the Washington legislature enacts legislation codifying some or all of these recommendations, Washington would join California, Texas, and other states that have enacted new state AI laws in recent years.

Jennifer Johnson
Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Jayne Ponder
Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon
August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experience to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.

Rosie Moss
Rosie Moss is an associate in the firm’s Washington, DC office. She is a member of the Data Privacy and Cybersecurity Practice Group and the Technology and Communications Regulation Practice Group.

Rosie advises clients on a wide range of data privacy and technology regulatory issues, including emerging artificial intelligence compliance matters. She assists clients in complying with federal and state privacy laws and Federal Communications Commission (FCC) regulations. Rosie also maintains an active pro bono practice.