The European Commission (the “Commission”) issued a White Paper on Outbound Investments (the “White Paper”) on 24 January 2024, setting out non-binding proposals for a detailed analysis of EU outbound investment. With this initiative, the Commission aims to understand whether the currently limited regulation of outbound investments is allowing the leakage of strategic technologies and creating potential security risks. The conclusions of any review would inform possible EU policy responses, including whether to adopt EU-level rules on the screening of outbound investment to third countries. The White Paper is one of five initiatives set out in the Commission’s European Economic Security Package (the “EESP”) that aim to address the national security and public order concerns the Commission has identified (see our Global Policy Watch blog).

In this blog post, we discuss the key aspects of the outbound investment White Paper at the EU level. These are the main takeaways:

  • The White Paper does not introduce any immediate change to legislation or create an EU-level outbound investment screening framework, but it is a step towards the EU identifying whether (and what) legislation may be necessary to close perceived ‘gaps’ in the regulation of outbound investments by EU businesses that could give rise to security risks.
  • The White Paper envisages a joint effort by the Commission and Member States to explore the need to regulate and control outbound investment, prompted by the Commission’s perception of growing geopolitical tensions and technological shifts.
  • The Commission suggests a multi-stage process to evaluate the risks potentially associated with outbound investments. This process began with a public consultation launched alongside the White Paper, to be followed by a monitoring period. Based on the findings of both the consultation and the monitoring exercise, the Commission will assess the need for, and possible content of, any policy response in Autumn 2025.

On January 24, 2024, the European Commission (“Commission”) announced that, following the political agreement reached in December 2023 on the EU AI Act (“AI Act”) (see our previous blog here), the Commission intends to proceed with a package of measures (“AI Innovation Strategy”) to support AI startups and small and medium-sized enterprises (“SMEs”) in the EU.

Alongside these measures, the Commission also announced the creation of the European AI Office (“AI Office”), which is due to begin formal operations on February 21, 2024.

This blog post provides a high-level summary of these two announcements, in addition to some takeaways to bear in mind as we draw closer to the adoption of the AI Act.

In a unanimous decision, the U.S. Supreme Court rejected an argument that would have made it harder for whistleblowers to prevail on retaliation claims under the Sarbanes-Oxley Act (“SOX”). The decision, Murray v. UBS Securities, LLC, No. 22-660, may be welcome news to whistleblowers, but as a practical matter, employers will likely not see a significant change in SOX whistleblower retaliation claims or awards.

On February 9, the Third Appellate District of California vacated a trial court’s decision that held that enforcement of the California Privacy Protection Agency’s (“CPPA”) regulations could not commence until one year after the finalized date of the regulations.  As we previously explained, the Superior Court’s order had prevented the CPPA from enforcing the regulations until one year after their finalization.

On 26 January 2024, the European Medicines Agency (EMA) announced that it had received a €10 million grant from the European Commission to support regulatory systems in Africa, and in particular the setting up of the African Medicines Agency (AMA). Although still in its early stages as an agency, the AMA shows significant promise.

Significant changes are on the horizon for clinical trials in Germany. At the end of January 2024, the German Federal Health Ministry presented the draft of a “Medical Research Act” (Medizinforschungsgesetz or MFG). The draft bill proposes legislative amendments in several areas, including clinical trials and GMP issues.

On February 6, the Federal Communications Commission (“FCC”) announced that it had sent a letter to Lingo Telecom, LLC (“Lingo”) to demand that Lingo “immediately stop supporting unlawful robocall traffic on its networks.”  As background, Lingo is a Texas-based telecommunications provider that, according to the FCC’s letter, was the originating provider for “deepfake” calls made by Life Corp. to New Hampshire voters on January 21, 2024.  The calls, which imitated President Biden’s voice and falsified caller ID information, took place two days before the New Hampshire presidential primary and reportedly advised Democratic voters to refrain from voting in the primary.  

This blog post summarizes recent telemarketing developments emerging at the federal level and from Missouri, Wisconsin and West Virginia.

Federal Legislation

On January 29, 2024, Congressman Frank Pallone (D-NJ), Ranking Member of the U.S. House Energy and Commerce Committee, introduced H.R. 7116, the “Do Not Disturb Act.”  A press release accompanying the bill’s introduction stated that Congressman Pallone introduced the bill “to protect consumers from the bombardment of dangerous and unwanted calls and texts that have been exacerbated by the Supreme Court’s decision in Facebook, Inc. v. Duguid . . .”  If enacted, the bill would, among other things, do the following:

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

On the sidelines of November’s APEC meetings in San Francisco, Presidents Joe Biden and Xi Jinping agreed that their nations should cooperate on the governance of artificial intelligence. Just weeks prior, President Xi unveiled China’s Global Artificial Intelligence Governance Initiative to world leaders, the nation’s bid to put its stamp on the global governance of AI. This announcement came a day after the Biden Administration revealed another round of restrictions on the export of advanced AI chips to China.

China is an AI superpower. Projections suggest that China’s AI market is on track to exceed US$14 billion this year, with ambitions to grow tenfold by 2030. Major Chinese tech companies have unveiled over twenty large language models (LLMs) to the public, and more than one hundred LLMs are fiercely competing in the market.

Understanding China’s capabilities and intentions in the realm of AI is crucial for policymakers in the U.S. and other countries to craft effective policies toward China, and for multinational companies to make informed business decisions. Irrespective of political differences, China’s experience as an early mover in AI policy and regulation can offer useful lessons for jurisdictions currently weighing their own policy responses to this transformative technology.

This article aims to advance such understanding by outlining key features of China’s emerging approach toward AI.

On February 6, the U.S. Department of Health and Human Services (“HHS”), Office for Civil Rights (“OCR”), announced that it had settled a cybersecurity investigation with Montefiore Medical Center (“Montefiore”), a non-profit hospital system based in New York City, for $4.75 million.  As brief background, OCR is responsible for administering and enforcing the Health Insurance Portability and Accountability Act of 1996, as amended, and its implementing regulations (collectively, “HIPAA”).  Among other things, HIPAA requires that regulated entities take steps to protect the privacy and security of patients’ protected health information (“PHI”).