The newspaper for young people, by young people

Taylor Swift files trademarks to protect her voice and image

In April 2026, Taylor Swift, through her company TAS Rights Management, filed several trademark applications with the United States Patent and Trademark Office (USPTO) for elements associated with her public identity. According to several industry media outlets, these applications concern two sound trademarks beginning with “Hey, it’s Taylor” and “Hey, it’s Taylor Swift,” as well as an application related to a specific image of the singer on stage during the Eras Tour. At this stage, these are trademark applications currently under review, not trademarks that have already been definitively granted.


Photo: Ronald Woan, CC BY-SA 2.0, via Wikimedia Commons

Sound and visual trademark applications filed with the USPTO

The filings concern three distinct marks: two sound marks and one figurative mark. The audio clips are said to consist of phrases in which Taylor Swift introduces herself in a promotional context tied to her artistic persona. According to available information, the visual application pertains to a specific photograph of the singer holding a pink guitar with a black strap against a pink stage backdrop. These elements suggest that the filings do not seek broad protection for her appearance or voice, but rather the precise identification of specific signs.

These filings do not mean that Taylor Swift already has absolute protection against all vocal or visual imitations generated by artificial intelligence. Under U.S. trademark law, registering a mark can help oppose deceptive or commercial uses, but it does not in itself constitute a general safeguard against deepfakes or identity clones. At this stage, the USPTO has not yet issued a final decision on these applications. Their actual scope will therefore depend on the outcome of the examination and, if necessary, on the courts’ interpretation.

This approach comes at a time when certain public figures are exploring new legal avenues to regulate the use of their voice, image, or other identifying characteristics in the digital environment. Actor Matthew McConaughey, for example, has reportedly explored trademark protections related to his voice and signature catchphrases, in order to protect himself against unauthorized use. These initiatives illustrate a trend toward exploring trademark law as a complementary tool, but they do not constitute a legally established precedent that is systematically applicable to all cases.

The value of this approach lies in the fact that, in certain scenarios, it can provide an additional lever when third parties use a voice, an identifying phrase, or a visual likeness closely resembling a well-known personality. However, it does not replace the right of publicity, nor the protections potentially offered by other branches of law, nor specific federal regulations on digital replicas. Its effectiveness will depend on how the USPTO reviews applications and, subsequently, how courts interpret these registrations.

A context where deepfakes are already visible and widely reported

This initiative comes at a time when the misuse of generative AI involving the images or voices of public figures has already sparked strong reactions. In January 2024, pornographic images generated by artificial intelligence and depicting Taylor Swift without her consent circulated widely on X, sparking a scandal and public reactions from several political figures as well as the White House. These incidents illustrated how quickly artificial content can be created, disseminated, and made to go viral before legal or technical responses can effectively contain it.

Taylor Swift has also been the subject of other political uses of AI. In 2024, for example, Donald Trump posted AI-generated images on Truth Social appearing to support his campaign, which fueled concerns about visual disinformation and image manipulation. These incidents have helped make the protection of digital identity a particularly sensitive issue, not only in the singer’s case but also in the broader public debate on deepfakes.

The Taylor Swift case has helped fuel the U.S. legislative debate on deepfakes and unauthorized digital replicas. The NO FAKES Act, introduced in the Senate in July 2024 and then in the House of Representatives in September 2024, aims to establish a federal framework to protect individuals from certain realistic digital replicas of their voice or image when produced or distributed without authorization. Another bill, the No AI FRAUD Act, was also introduced in this context as a potential response to the abuses brought to light by cases such as those involving Taylor Swift.

To date, none of these initiatives has resulted in the passage of a comprehensive federal law that has fully entered into force. U.S. states also have a variety of measures in place: some provide remedies against pornographic deepfakes, against deceptive use of images, or against certain infringements of personality rights, but legal coverage remains uneven from one state to another. This article should therefore be read with the understanding that a legal framework exists, but that it remains fragmented and incomplete.

Taylor Swift’s trademark filings could be interpreted as an attempt to adapt protection mechanisms to a rapidly changing digital environment. Several legal commentators believe that this move could be closely watched by the entertainment industry, but it is still too early to gauge its concrete effects. The applications filed with the USPTO do not guarantee that a fully effective right against vocal or visual clones will be recognized in all cases, nor that they will automatically prevent all AI-based imitations.

As it stands, this initiative can be interpreted as an exploration of a legal tool that has yet to be widely used at this scale to address the misuse of generative AI technologies. It does not constitute a definitive solution to the issue of deepfakes, but rather one avenue among others for strengthening the protection of individuals in an increasingly generative digital environment. Beyond the case of Taylor Swift, the issue concerns anyone whose voice, image, or certain identifying characteristics might be reproduced without consent, placing the question of digital identity protection at the heart of contemporary debates on artificial intelligence.

