Taylor Swift is taking bold steps to safeguard her voice and image against misuse by AI. The global pop superstar filed trademark applications for two audio clips of her voice on Friday. In one clip, she says, "Hey, it’s Taylor Swift, and you can listen to my new album, ‘The Life of a Showgirl,’ on demand on Amazon Music Unlimited." In the other, she speaks in a lower register: "Hey, it’s Taylor. My brand new album ‘The Life of a Showgirl’ is out on Oct. 3 and you can click to presave it so you can listen to it on Spotify." Swift also filed for a third trademark to protect an image of her onstage, wearing one of her signature sparkly bodysuits and strumming a pink guitar.
Swift has been the target of numerous deepfakes in recent years. Fake clips of her promoting a brand of cookware have tricked fans online, sexually suggestive deepfakes of her have gone viral on social media, and even President Donald Trump shared manipulated photos of her supporting his candidacy.

Swift is one of many celebrities confronting the issue as AI content generation tools become ever more sophisticated, even as AI companies add guardrails to prevent harmful uses of their models. In January, Matthew McConaughey became the first A-lister to file a series of trademarks, including images, video, and audio of himself, to protect his own likeness as AI-generated deepfakes become increasingly realistic and easy to create.
AI experts have suggested that individual trademarks from celebrities like Swift could become more common as stars attempt to attain stronger legal standing to sue if their likenesses were replicated without explicit permission. Sound marks, or trademarks of distinctive audio cues, have historically been filed to protect iconic brand sounds such as MGM’s lion roar, NBC’s chimes, or the Pillsbury Doughboy’s giggle.
But trademark attorney Josh Gerben, who first reported Swift’s new trademark applications, wrote in a blog post that “attempting to register a celebrity’s spoken voice is a new use of trademark registration that has not been tested in court before.” He added: “Historically, singers relied on copyright law to protect their recorded music. But AI technologies now allow users to generate entirely new content that mimics an artist’s voice without copying an existing recording, creating a gap that trademarks may help fill. By registering specific phrases tied to her voice, Swift could potentially challenge not only identical reproductions but also imitations that are ‘confusingly similar,’ a key standard in trademark law.”
Swift has filed hundreds of trademarks throughout her career, but most have aimed to protect her name, lyrics, merchandise, and other components of her brand identity. This appears to be the first instance where Swift has pursued sound mark protection. The attorney listed on the applications, Rebecca Liebowitz, did not immediately respond to a request for comment. A spokesperson for Swift also did not immediately respond to a request for comment.
Celebrities continue to sound the alarm over AI misuse, with some calling for stronger protections. In 2023, Scarlett Johansson’s attorney demanded that an AI app stop using her likeness in an advertisement. The actor also called out OpenAI in 2024 for using a voice “eerily similar” to hers for its GPT-4o chatbot despite having declined the company’s request to provide her voice. OpenAI subsequently announced it would no longer use the voice but did not say why. In 2024, Tom Hanks called out the “multiple ads over the internet falsely using my name, likeness, and voice promoting miracle cures and wonder drugs.”
“Breaking Bad” star Bryan Cranston raised similar concerns last year about OpenAI’s Sora 2 product and its ability to replicate his and other celebrities’ likenesses without permission. (OpenAI announced last month that it was shutting down its Sora video creation app.) In January, more than 700 creatives, including Johansson, Cate Blanchett, and Joseph Gordon-Levitt, backed a campaign called “Stealing Isn’t Innovation.” Organized by a coalition called The Human Artistry Campaign (composed of a mix of unions representing creators, artists’ rights groups, and trade associations), the movement calls on tech companies to stop training their generative AI tools on copyrighted works without express permission from creatives.
As likeness protection becomes a growing concern for public figures, some companies appear eager to partner on the issue. Last week, YouTube unveiled a deal with several talent agencies that opens up its proprietary deepfake detection tool to celebrities and entertainers, who can now more easily request that their unauthorized likenesses be removed from the platform. Some entertainers, meanwhile, are cautiously leaning into AI as the technology becomes more mainstream.