On April 10, 2019, the European Commission's Directorate-General for Health and Food Safety issued a revised Q&A analyzing the interplay between the EU Clinical Trials Regulation (“CTR”) and the EU General Data Protection Regulation (“GDPR”).  The revised Q&A takes into account the opinion on this interplay issued by the European Data Protection Board (“EDPB”) on January 23, 2019.

In April 2019, the U.S. Department of Justice released a white paper and FAQ on the Clarifying Lawful Overseas Use of Data (“CLOUD”) Act, which was enacted in March 2018 and creates a new framework for government access to data held by technology companies worldwide.  The paper, titled “Promoting Public Safety, Privacy, and the Rule of Law Around the World: The Purpose and Impact of the CLOUD Act,” addresses the scope and purpose of the CLOUD Act and responds to 29 frequently asked questions about the Act.

On April 3, 2019, the Association of German Supervisory Authorities (“Datenschutzkonferenz” or “DSK”) issued a paper (available here in German) on the interpretation of “broad consent” for scientific research in Recital 33 of the GDPR and the interplay with the definition of consent and the principle of purpose limitation.

According to the DSK, broad consent

On April 2, 2019, FDA released a discussion paper entitled “Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” (the “AI Framework”). The AI Framework is the Agency’s first policy document describing a potential regulatory approach for medical devices that use artificial intelligence (“AI”) and machine learning (“ML”). The AI Framework does not establish new requirements or an official policy, but rather was released by FDA to seek early input prior to the development of a draft guidance. FDA acknowledges that the approach “may require additional statutory authority to implement fully.”

In an accompanying press release, former FDA Commissioner Scott Gottlieb outlined the need for a “more tailored” regulatory paradigm for algorithms that learn and adapt in the real world. FDA’s medical device regulation scheme was not designed for dynamic machine learning algorithms, as the Agency traditionally encounters products that are static at the time of FDA review. The AI Framework is FDA’s attempt to develop “an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet [FDA’s] gold standard for safety and effectiveness throughout the product’s lifecycle.”

April began with Washington learning of the first-quarter fundraising hauls of Democratic presidential hopefuls, many of whom are current or former senators and House members. Meanwhile, several additional potential presidential candidates continue to weigh their options for jumping into the race, with much of the attention on former Vice President Joe Biden, who is trying

In March 2019, the Department of Defense Inspector General announced that it was undertaking an audit of the Foreign Military Sales (FMS) Agreement Development Process.  The audit will assess how the Defense Security Cooperation Agency (DSCA), Military Departments, and other organizations coordinate foreign government requirements for defense articles and services and whether DoD maximizes the results of the FMS agreement development process.

The audit is in response to a congressional reporting requirement included in the House Report to the National Defense Authorization Act for Fiscal Year 2019.  The House Report noted Congressional concern that the FMS process is “slow, cumbersome, and overly complicated,” and that the acquisition decisions supporting the FMS process are “stovepiped,” leading to an FMS program that is “not coordinated holistically across [DoD] to prioritize resources and effort in support of U.S. national security objectives and the defense industrial base.”  Consequently, Congress directed DoD to conduct this audit of the FMS program and submit a final report to Congress.  The tone and language of the House Report indicate that Congress is seeking to streamline the process for all stakeholders, including the U.S. military, foreign partners, and industry.  The House Report specifically calls out precision guided munitions as a focal point for additional foreign military sales that may mitigate risk to the U.S. industrial base.

On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the same core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as the interaction with existing legislation and the reconciliation of tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

The recent passage of the Justice Against Corruption on K Street Act of 2018 (“JACK Act” or the “Act”) imposes new requirements on those registering and filing reports under the Lobbying Disclosure Act (“LDA”). The Act amends the LDA to require that LDA registrants disclose listed lobbyists’ convictions for criminal offenses involving bribery, extortion, embezzlement,