On December 16, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (“Cyber AI Profile” or “Profile”).  According to the draft, the Cyber AI Profile is intended to “provide guidelines for managing cybersecurity risk related to AI systems [and] identify[] opportunities for using AI to enhance cybersecurity capabilities.”  The draft Profile uses the existing voluntary NIST Cybersecurity Framework (“CSF”) 2.0 — which “provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks” — and overlays three AI Focus Areas (Secure, Defend, Thwart) on top of the CSF’s outcomes (Functions, Categories, and Subcategories) to suggest considerations for organizations to prioritize when securing AI implementations, using AI to enhance cybersecurity defenses, or defending against adversarial uses of AI.  This draft guidance will likely be familiar to organizations that already leverage the CSF 2.0 in their cybersecurity programs and may be complementary to frameworks that organizations already have in place.  Even so, the outcomes are designed to be flexible such that a range of organizations (with mature or novel programs) can leverage the guidance to help manage AI-related cybersecurity risk.  

For entities or stakeholders that might be interested in offering feedback on the preliminary draft, NIST is planning to host a workshop on January 14, 2026, to discuss the draft.  The Profile is also open for comment until January 30, 2026.  Below, we briefly summarize the Profile’s organizational structure, as well as areas on which NIST is seeking public comment.

Focus Areas

The Cyber AI Profile is organized into three Focus Areas that address AI-related cybersecurity risk from different but overlapping angles.

  • Secure – “[F]ocuses on managing cybersecurity challenges when” organizations integrate an AI system into their environment.  Examples of the use of AI that fall within the scope of Secure include the use of AI by “[p]ower grids to balance loads” and “[c]ustomer service organizations to perform initial interactions with customers.”
  • Defend – Aims to identify opportunities for the uses of AI that support cybersecurity processes and activities.  For example, AI can enhance cyber defense capabilities related to mission assurance, proactive risk management, predictive maintenance and risk forecasting, “[a]dvanced threat detection and analysis,” adversarial training and simulation, and automated incident response.
  • Thwart – Emphasizes building resilience to protect against AI-enabled threats.  For example, AI-enabled spear-phishing attacks exploit users through more realistic manipulation using deepfakes and generative AI.  Such AI-enabled attacks underscore the need to update personnel training and to deploy automated defenses that bolster security measures.

Cybersecurity Framework 2.0 Core

As indicated above, the Cyber AI Profile is organized around the six CSF Functions: Govern, Identify, Protect, Detect, Respond, and Recover.  Within each CSF Function, the Cyber AI Profile offers sample focus area considerations.  Like the CSF 2.0, the Cyber AI Profile does not offer specific instructions for how to achieve the recommended outcomes but instead provides references that organizations can consult.  Example informative references include sources such as technical papers, the European Union Agency for Cybersecurity’s (“ENISA”) Threat Landscape 2025 paper, NIST Special Publication (“SP”) 800-218, and the Databricks AI Security Framework Version 2.0.  The Cyber AI Profile also assigns recommended priorities to help organizations determine which Subcategories to prioritize:  “1” for High Priority, “2” for Moderate Priority, and “3” for Foundational Priority.

Govern

The Govern profile captures the establishment, communication, and monitoring of “[t]he organization’s cybersecurity risk management strategy, expectations, and policy.”  This profile is further divided into the CSF categories of (1) organization context, (2) risk management strategy, (3) roles, responsibilities, and authorities, (4) policy, (5) oversight, and (6) cybersecurity supply chain risk management.  AI considerations under these categories include, for example:

  • Secure/Defend: Identifying and communicating AI dependencies.
  • Secure/Defend: “[I]ntegrat[ing] AI-specific risks into the organization’s formal risk appetite and tolerance statements.”
  • Defend/Thwart: Developing AI-specific threat information sharing channels.
  • Defend: Augmenting cybersecurity teams with AI agents.
  • Defend: Using AI to conduct governance checks to identify operational conflicts.
  • Defend: Conducting threat detection of supplier-provided AI models.

Identify

The Identify profile focuses on ensuring that “[t]he organization’s cybersecurity risks are understood.”  This profile includes the CSF categories of (1) asset management, (2) risk assessment, and (3) improvement.  AI considerations under these categories include, for example:

  • Defend: Inventorying software and systems to include “AI models, APIs, keys, agents, data . . . and their integrations and permissions.”
  • Defend/Thwart: Incorporating AI-specific attacks as part of one’s vulnerability management program.
  • Defend: “Defin[ing] conditions for disabling AI autonomy during risk response.”
  • Defend: Integrating “AI-specific procedures for containment” during incident response.

Protect

The Protect profile serves as a framework to ensure that “[s]afeguards to manage the organization’s cybersecurity risks are used” and is divided into sections focused on (1) identity management, authentication, and access control, (2) awareness and training, (3) data security, (4) platform security, and (5) technology infrastructure resilience.  AI considerations under these categories include, for example:

  • Secure: Issuing AI systems unique identities and credentials.
  • Secure/Defend/Thwart: Developing AI-related awareness and training for personnel.
  • Secure/Defend: “Maintain[ing] protected, regularly test[ed] backups of critical AI assets.”
  • Secure: Restricting the execution of arbitrary code by AI agent systems.
  • Defend: “Implement[ing] AI-specific resilience mechanisms.”

Detect

The Detect profile aims to provide a standard to ensure that “[p]ossible cybersecurity attacks and compromises are found and analyzed.”  This profile is divided into two categories: (1) continuous monitoring, and (2) adverse event analysis.  AI considerations under these categories include, for example:

  • Defend: “[F]lagging anomalies, correlating suspicious behaviors, and spotting unusual patterns faster than humans and other automated tools.”
  • Thwart: “Personnel may be subject to AI-enabled phishing or deepfake attacks.”
  • Thwart: “AI-enabled cyber attacks could identify and exploit” vulnerabilities introduced by “[t]hird-part[ies] . . . [through] updates and/or patches to software and systems.”
  • Secure: Determining what “new monitoring is needed to track actions taken by AI.”

Respond

The Respond profile provides guidance to make sure “[a]ctions regarding a detected cybersecurity incident are taken” and includes the CSF categories of (1) incident management, (2) incident analysis, (3) incident response reporting and communication, and (4) incident mitigation.  AI considerations under these categories include, for example:

  • Secure: “Establish[ing] criteria for triaging and validating AI-related incidents.”
  • Defend: “Integrat[ing] AI-driven analytics into incident categorization and prioritization to identify and flag AI-influenced events.”
  • Secure: Diagnosing complex attacks with new tools and methods.
  • Thwart: “[S]earch[ing] for indicators of adversary AI usage in the incident.”

Recover

The Recover profile covers restoring “[a]ssets and operations affected by a cybersecurity incident” and includes (1) incident recovery plan execution, and (2) incident recovery communication.  AI considerations under these categories include, for example:

  • Defend: Using AI to “accelerate[] recovery by calculating which systems to restore first, track[] progress, and draft[] clear updates to keep stakeholders informed.”
  • Defend: Using AI to “forecast hardware failures and system degradation.”
  • Defend: Evaluating “how AI defense systems performed” after an incident.

Although still in preliminary draft form, the Cyber AI Profile joins a growing list of AI-related guidance, such as the NIST AI Risk Management Framework, as well as guidance under development, such as the NIST SP 800-53 Control Overlays for Securing AI Systems (“COSAiS”).

Request for Public Comment

As noted above, NIST is accepting comments on the draft until January 30, 2026, in addition to planning a workshop on January 14, 2026, to discuss the draft.  The preliminary draft states that NIST is specifically seeking public comment on the draft in the following areas:

  1. Document structure and topics:
    • How do you envision using this publication?  What changes would you like to see to increase/improve that use?
    • How do you expect this publication to influence your future practices and processes?
    • Are the proposed topics in this document sufficient to help your organization prioritize cybersecurity outcomes for AI?
  2. Focus Area descriptions (Section 2.1):
    • How well do the Focus Area descriptions reflect the scope and characteristics of AI usage?  Are any characteristics missing, and if so, what are they and how should we describe them?
  3. Profile content (Sections 2.3–2.8):
    • When thinking about applying the Cyber AI Profile, how useful (or not) is it for all three Focus Areas to be shown alongside each other (as they are currently reflected)?  What value might there be in providing Profile content for each Focus Area separately?
    • What format(s) would be useful for providing the information in the Cyber AI Profile (e.g., a spreadsheet/workbook, the NIST Cybersecurity and Privacy Reference Tool (CPRT))?
    • How well do the priorities and considerations discussed in Sections 2.3–2.8 relate to existing practices and standards leveraged by your organization?  Are there significant gaps between current practices and those that are necessary to address unique characteristics of AI in each Focus Area that this publication should address?  How should the AI-specific considerations inform the prioritization of each Subcategory?
    • NIST published the Cybersecurity Framework (CSF) 2.0 Informative References and Implementation Examples to show potential ways to achieve the outcome in each Subcategory.  This preliminary draft includes examples of Informative References for the Cyber AI Profile.  Further literature review is in progress and NIST is seeking more input on Informative References to include.  Which additional AI cybersecurity guidelines, standards, best practices, or mappings are you using that you recommend adding as Informative References for the Cyber AI Profile?  For any Informative References you recommend, please share with us why you recommend them as well as how and why you would prioritize them for this document.
  4. Glossary (Appendix B):
    • NIST welcomes requests and suggestions for terms that should be added to this document’s Glossary.
Caleb Skeath

Caleb Skeath helps companies manage their most complex and high‑stakes cybersecurity and data security challenges, combining deep regulatory insight, technical fluency, and practical judgment informed by leading incident response matters.

Caleb Skeath advises in‑house legal and security teams on the full lifecycle of cybersecurity and privacy risk—from governance and preparedness through incident response, regulatory engagement, and follow‑on litigation. A Certified Information Systems Security Professional (CISSP), he is trusted by clients across highly regulated and technology‑driven sectors to provide clear, practical guidance at moments when legal judgment, technical understanding, and business realities must be aligned.

Caleb has deep experience leading and overseeing responses to complex cybersecurity incidents, including ransomware, data theft and extortion, business email compromise, advanced persistent threats and state-sponsored threat actors, insider threats, and inadvertent data loss. He regularly helps in‑house counsel structure and manage investigations under attorney‑client privilege; coordinate with internal IT, information security, and executive stakeholders; and engage with forensic firms, crisis communications providers, insurers, and law enforcement. A central focus of his practice is advising on notification obligations and strategy, including the application of U.S. federal and state data breach notification laws and requirements along with contractual notification obligations, and helping companies make defensible, risk‑informed decisions about timing, scope, and messaging.

In addition to his work responding to cybersecurity incidents, Caleb works closely with clients’ legal, technical, and compliance teams on cybersecurity governance, regulatory compliance, and pre‑incident planning. He has extensive experience drafting and reviewing cybersecurity policies, incident response plans, and vendor contract provisions; supervising cybersecurity assessments under privilege; and advising on training and tabletop exercises designed to prepare organizations for real‑world incidents. His work frequently involves translating evolving regulatory expectations into actionable guidance for in‑house counsel, including in highly regulated sectors such as the financial sector (including compliance with NYDFS cybersecurity regulations, the Computer Security Incident Notification Rule, and GLBA guidelines and guidance) and the pharmaceutical and healthcare sector (including compliance with GxP standards, FDA medical device guidance, and HIPAA).

Caleb’s practice also addresses evolving and emerging areas of cybersecurity and data security law, including advising clients on compliance with the Department of Justice’s Data Security Program, CISA‑related security requirements for restricted transactions, and preparation for new regulatory regimes such as the CCPA cybersecurity audit requirements and federal incident reporting obligations. He regularly counsels clients on how artificial intelligence and connected devices intersect with cybersecurity, privacy, and consumer protection risk, and how to support innovation while managing regulatory exposure.

Caleb also has extensive experience helping clients navigate high-stakes cybersecurity-related inquiries from the Federal Trade Commission, state Attorneys General, and other sector-specific regulators, including incident-specific inquiries as well as broader inquiries related to an entity’s cybersecurity practices and the security of product or service offerings. For companies that have entered into cybersecurity-related settlement agreements with regulators, Caleb has helped guide them through compliance with settlement agreement obligations, including navigating required third-party assessments and strategically responding to cybersecurity incidents that can arise while a company is subject to a settlement agreement. Caleb also routinely works hand-in-hand with colleagues in Covington’s class action litigation, commercial litigation, and insurance recovery practices to prepare for and successfully navigate incident-related disputes that can devolve into litigation.

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group and as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Alexandra Bruer

Alexandra Bruer is an associate in the firm’s Washington, DC office. She is a member of the Data Privacy and Cybersecurity and CFIUS Practice Groups.

Bryan Ramirez

Bryan Ramirez is an associate in the firm’s San Francisco office and is a member of the Data Privacy and Cybersecurity Practice Group. He advises clients on a range of regulatory and compliance issues, including compliance with state privacy laws. Bryan also maintains an active pro bono practice.