The Federal Communications Commission (FCC) recently adopted two Notices of Apparent Liability (NALs) in connection with its investigation into AI-generated “deepfake” calls made to New Hampshire voters on January 21, 2024. The NALs follow a February 6 cease-and-desist letter, which we previously blogged about here, demanding that Lingo Telecom, LLC (Lingo), the voice service provider that originated the calls, stop originating unlawful robocall traffic on its network.
The first NAL was issued to the political consultant alleged to have been responsible for the calls, citing apparent violations of the Truth in Caller ID Act, which makes it unlawful to “cause any caller identification service to knowingly transmit misleading or inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value.”
That first NAL proposes to fine the consultant $6,000,000 for “perpetuating an illegal robocall campaign,” which “carried a deepfake generative artificial intelligence (AI) voice message that imitated U.S. President Joseph R. Biden, Jr.’s voice and encouraged potential voters not to vote in the upcoming Primary Election.”
The second NAL was issued to Lingo for apparent violations of the FCC’s rules implementing the STIR/SHAKEN caller ID authentication framework in its Internet Protocol (IP) networks.
That second NAL proposes to fine Lingo $2,000,000 for “falsely authenticating spoofed traffic with the highest level of attestation permitted under the STIR/SHAKEN rules” before transmitting that traffic over its network to New Hampshire voters.
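For readers unfamiliar with STIR/SHAKEN, the “level of attestation” at issue is the `attest` claim inside the signed PASSporT token that an originating provider attaches to a call: “A” (full attestation) asserts that the provider both knows the customer and has verified its right to use the calling number, which is why signing spoofed traffic at level “A” is the core of the alleged violation. The sketch below, with placeholder phone numbers and identifiers, shows roughly what the claims portion of such a token looks like under RFC 8588; it is illustrative only, not the actual call data from the investigation.

```python
import json

# STIR/SHAKEN attestation levels (RFC 8588). "A" is the strongest claim a
# signing provider can make about a call's origin.
ATTESTATION_LEVELS = {
    "A": "Full: provider knows the customer and its right to the calling number",
    "B": "Partial: provider knows the customer but not the number's provenance",
    "C": "Gateway: provider merely received the call from another network",
}

def build_passport_claims(orig_tn: str, dest_tn: str, attest: str, origid: str) -> dict:
    """Assemble the claims of a SHAKEN PASSporT (the payload that gets signed).

    All values passed in here are illustrative placeholders.
    """
    if attest not in ATTESTATION_LEVELS:
        raise ValueError(f"unknown attestation level: {attest}")
    return {
        "attest": attest,            # attestation level asserted by the signer
        "dest": {"tn": [dest_tn]},   # called number(s)
        "iat": 1705833600,           # issued-at timestamp (fixed for the example)
        "orig": {"tn": orig_tn},     # calling number as presented
        "origid": origid,            # opaque identifier used for traceback
    }

# A level-"A" claim set for a hypothetical call; signing these claims for
# traffic whose origin was never verified is the conduct the NAL describes.
claims = build_passport_claims("12025550100", "16035550199", "A", "example-origid-1")
print(json.dumps(claims, indent=2))
```

Under the rules, a provider that cannot verify the caller’s right to the number should attest at “B” or “C” instead, which signals to downstream networks that the calling number may not be trustworthy.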