Is This AI? Taylor Swift, Drake, and a Music Industry in Trouble

Here are some signs users can look for as pop culture begins to embrace AI.

In the rapidly evolving world of AI tools, celebrities have become common subjects of people’s curiosity and creativity, which raises the question: how do you tell what’s AI and what’s real? From voice dupes to visual depictions, from Katy Perry’s fake Met Gala look to Drake’s use of Tupac’s voice in his “Taylor Made Freestyle,” artificial intelligence has found its way into pop culture, and the use of stars’ likenesses has raised concerns about the ethics and legality of generative AI models.

Nevertheless, pop culture has seemed to open its arms to this fast-evolving technology — so here’s everything you need to know about how to identify AI-generated content. (And if you thought that Taylor Swift “Fortnight” leak was real, you might need the help.)

With images, every small detail matters

When film distributor A24 unveiled the promotional posters for its new film Civil War on social media, users quickly called out inaccuracies and oddities in the illustrations, which showed different American cities overtaken by warring forces. The faces looked weird, the arms stretched too long, the cars had too many doors; the details, generally, were off. One poster depicted the swan-shaped pedal boats in California’s Echo Park lake as oversized real-life swans, while another showed Chicago’s iconic Marina City towers, which stand next to each other in real life, separated by a river that does not actually cut between them.

Sam Smith, a graphic artist who has created movie posters for distributors like A24 and the Criterion Collection, commented on the posters and the controversy surrounding them on X (formerly known as Twitter).

“This is sloppy work,” Smith tells Teen Vogue. “There are glaring errors in the images, like unintelligible signs or comically large life-like swans, sloppy parts of these images that could have easily been cleaned up by a human designer very quickly.”

The mistakes, however, reveal the truth of the AI used in their creation. They show us what to look for. In addition to inaccurately depicting locations, warping text, and misunderstanding objects, AI-generated images featuring human faces tend to have a slightly off, deeply unsettling look to them.

In other images, the absence and inaccuracy of visual details like light refraction, shadows, and reflections can be telltale signs of generative AI usage.

“Current generative AI available may not always have a good understanding or coverage of such regularities in the world,” says Siwei Lyu, a computer science and engineering professor at the University at Buffalo (SUNY) and the director of the school’s Media Forensic Lab.

But many of these details can slip past the human eye, Lyu says. Instead, he suggests fighting AI with AI and using algorithms trained on computer-generated media to detect patterns and signals that might not be perceptible to us. Detection tools can work by picking up unique patterns in the metadata or in the output media, like intentional watermarks or subtle imperfections, that can signal AI generation.
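To make the metadata side of that idea concrete, here is a minimal sketch in Python, assuming the Pillow imaging library, that scans an image’s embedded tags for strings some generators leave behind. The generator names below are illustrative guesses rather than an authoritative list, and metadata is easily stripped or forged, so a clean result proves nothing.

```python
# Minimal sketch: look for generator signatures in an image's metadata.
# Assumes Pillow (pip install pillow); the hint list is illustrative only.
from PIL import Image
from PIL.ExifTags import TAGS

GENERATOR_HINTS = ("dall-e", "midjourney", "stable diffusion", "firefly")

def metadata_hints(path: str) -> list[str]:
    hits = []
    with Image.open(path) as img:
        # Format-level metadata, e.g. PNG text chunks
        for key, value in (img.info or {}).items():
            blob = f"{key}={value}".lower()
            hits += [h for h in GENERATOR_HINTS if h in blob]
        # EXIF fields such as Software, where some tools identify themselves
        for tag_id, value in img.getexif().items():
            blob = f"{TAGS.get(tag_id, tag_id)}={value}".lower()
            hits += [h for h in GENERATOR_HINTS if h in blob]
    return sorted(set(hits))
```

Finding a hint is a reason to look closer; finding none proves nothing, which is why researchers like Lyu pair metadata checks with detectors trained on the pixels themselves.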

With music, hearing someone’s voice might not be enough

Weeks before Taylor Swift’s The Tortured Poets Department was released, “leaked” songs started circulating on social media. Almost immediately, fans suggested the tracks could be AI-generated, and their concerns turned out to be valid.

One of these AI-generated tracks, an upbeat snippet imagining what “Fortnight,” Swift’s collaboration with Post Malone, could sound like, gathered the attention of thousands of fans on TikTok, with many believing it to be an authentic leak. It didn’t help that a clearly AI joke Swift song about her relationship with Travis Kelce (“so happy that my Travvy made it to the big game”) had circulated shortly before; surely, some fans thought, they’d be able to tell a real from a fake if the fakes were that obvious.

When the album came out and fans heard the actual song, some were left disappointed; the AI snippet may have been fake, but hey, it was catchy.

“Justice for fortnight AI version,” a commenter wrote under a video comparing the actual song to the AI snippet. One user claiming to be the snippet’s creator, who later released a full version despite what they described in one post as “Tree Paine nightmares,” referring to the singer’s publicist, did not reply to Teen Vogue’s request for comment by the time of publication.

Paying attention to the quality of the vocals can often reveal significant signs of AI manipulation, says Tom Collins, an associate professor of music engineering who leads the Music Computing and Psychology Lab at the University of Miami’s Frost School of Music.

Timbre transfer models, which apply a set of existing vocal characteristics (like a famous artist’s) to existing voice recordings, can generate the most natural results among AI vocal tools because the underlying performance is human. Still, the resulting duped vocals tend to be “rough around the edges” and have a robotic sound to them, Collins says.

The quality of the sound mixing, the musical phrasing of consonants and connecting words, and the consistency of vocal tone are other factors that can hint at AI usage. In some cases, AI-generated vocals won’t differentiate between sung and instrumental parts, resulting in odd-sounding creations.

In the AI “Fortnight” snippet, both singers’ voices have a robotic, emotionless quality, and there is no audible natural breathing between phrases, both signs of AI-generated vocals. Since AI models can only pull from the data they’ve been given, the snippet also sounds like previous work from each of the artists, lacking the fresh and unique sound of a new, legitimate collaboration.
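For listeners who want to poke at audio themselves, here is a rough, hypothetical sketch, assuming the librosa audio library, of quantifying two of the cues above: unnaturally steady pitch and an absence of breath-like pauses between phrases. The thresholds are illustrative assumptions, not validated values.

```python
# Hypothetical sketch of two vocal cues: "robotic" pitch steadiness and
# missing breath pauses. Assumes librosa and numpy; thresholds are guesses.
import librosa
import numpy as np

def vocal_cues(path: str) -> dict:
    y, sr = librosa.load(path, sr=None, mono=True)

    # Human singing drifts and uses vibrato; extremely low frame-to-frame
    # pitch variation can be a red flag for synthesized vocals.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6")
    )
    f0 = f0[~np.isnan(f0)]
    pitch_jitter = float(np.std(np.diff(f0))) if f0.size > 1 else 0.0

    # Singers audibly inhale between phrases; count quiet, breath-like gaps.
    rms = librosa.feature.rms(y=y)[0]
    quiet = rms < 0.1 * np.median(rms)  # illustrative threshold
    gap_count = int(np.sum(np.diff(quiet.astype(int)) == 1))

    return {
        "pitch_jitter_hz": pitch_jitter,  # very low = suspiciously steady
        "quiet_gap_count": gap_count,     # near zero = no breaths heard
    }
```

Neither number is proof on its own; they are prompts to listen again with Collins’s cues in mind.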

More generally, users commented that the generic pop production and lyrics missed the melancholic sound and literary aesthetic that Swift seemed to be hinting at for the new album. Context and eagle-eyed fan perception are things AI can’t yet replicate.

Given their ethical and technical flaws, what is it that draws fans to these kinds of AI dupes? Familiarity, says musician David Montesi. Earlier this year, the music producer employed timbre transfer tools he found through a simple Google search to create his own Drake and Tupac collaboration, which he released, and saw taken down, before the controversial release of Drake’s “Taylor Made Freestyle.”

To Montesi, the artists’ voices function almost as musical instruments in this context. “It's no longer a voice. It's no longer a person,” he told Teen Vogue. “[People] are just so familiar with the tone of their voice that [people] can put out something that's less than industry standard, and it will still be popular.”

Consider the behind-the-scenes of music creation

On April 19, hours after dropping “Push Ups,” Drake took to Instagram to release a second track dissing Kendrick Lamar. On “Taylor Made Freestyle,” the Toronto rapper prominently featured AI-generated voice clones of Tupac and Snoop Dogg, two of Lamar’s inspirations, to taunt him.

Days later, on April 24, Tupac’s estate sent the rapper a cease-and-desist letter threatening legal action and calling the unauthorized use of Tupac’s voice a “blatant abuse of the legacy of one of the greatest hip-hop artists of all time.” By April 29, the song was taken off the platform.

After the controversy over the use of Tupac’s voice, the Drake and Kendrick Lamar feud produced another round of discourse over the use of AI, this time from a sample in Metro Boomin’s response to the Canadian rapper’s diss.

Metro Boomin’s “BBL Drizzy,” released on SoundCloud on May 5, layers 808s over an AI-generated sample. Comedian Willonius Hatcher, who created and released the sample, wrote the lyrics himself, then used Udio, an AI music creation tool, to set them to AI-generated ’70s soul-inspired melodies, instrumentals, and vocals.

Josh Antonuccio, director of the School of Media Arts and Studies at Ohio University, believes that tools like Udio, which allow non-musicians to create songs, will revolutionize the music industry and usher in a new era of “hyper remixing” culture and direct-to-artist fan connection.

Antonuccio believes that Gen Z audiences in particular prefer collaborating with content over passively consuming it. Whether it’s slowing down or speeding up a song or using their favorite artist’s voice to create a form of “remix,” younger audiences on platforms like TikTok are eager to interact hands-on with the content they engage with, and AI tools can give them a hand in the production side. As these tools become more widely accessible, music production will become more of a “conversational process” between artists and fans, Antonuccio says.

In turn, the relationship between artists and fans changes, encouraging a direct connection with “superfans,” whether through ticket sales or streaming.

“The sheer magnitude of what's going to be available for music is just going to become gargantuan,” Antonuccio adds. “For artists, I think you're gonna see a narrowing of where they create pipelines to get through this massive ocean of noise.”

As for a litmus test for determining whether a song’s production was done by AI, the evidence is subjective and the models are likely to improve in the near future. Still, fuzzy details in the audio mix, like grainy, distorted, or congested sounds, remain a signature of AI-composed audio, comparing sonically to early .mp3 files, Antonuccio says. Users can listen for overlapping sounds, unintelligible instruments, ill-measured beats, and awkward cutoffs in vocals or melodies when dissecting suspected AI production.
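That .mp3 comparison suggests one cue you can actually measure: early low-bitrate .mp3 encoders famously discarded high frequencies, and a supposedly pristine track whose spectral energy dies off far below what its sample rate allows has a similar band-limited character. A minimal sketch, again assuming librosa; the cutoff comparison is a heuristic, not a rule.

```python
# Minimal sketch: estimate where a track's spectral energy effectively ends.
# Assumes librosa and numpy; the 15-16 kHz comparison point is a heuristic.
import librosa
import numpy as np

def effective_bandwidth_hz(path: str) -> float:
    y, sr = librosa.load(path, sr=None, mono=True)
    # Frequency below which 99% of spectral energy sits, per analysis frame
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.99)
    return float(np.median(rolloff))

# A 44.1 kHz file can carry content up to ~22 kHz; a median rolloff stuck
# near 15-16 kHz hints at heavy lossy compression somewhere in the chain.
```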

Rather than relying on individual-based AI detection, which edges toward obsolescence as AI music becomes nearly impossible to identify, fans’ knowledge of their favorite artist’s style and taste will become the filter for authenticity online, says Tracy Chan, a veteran music executive and CEO of Splash, an AI music company. “This is where fandom is actually quite important,” Chan says. “Fans do know their artists, and they pay attention to how they produce songs, or how they sing or use vocal effects.”

When an AI cover of Taylor Swift’s “this is me trying” using Kanye West’s voice went viral on TikTok last year, fans of the rapper were far from fooled, even when listening with no reference to the original Swift song.

One viral post shows a fan’s thought process while listening: despite the vocals resembling West’s, the style and production of the song didn’t match the sound that loyal listeners have come to associate with the rapper.

Despite not being fooled at all by the AI cover, fans on social media found themselves enjoying “West’s Version,” both ironically and not.

With writing, think about how people talk

Users have noticed that AI text tools like ChatGPT favor certain oddly specific words; among the top ten most-used, according to one dataset, is “delve.” Along with terms like “captivating,” “explore,” and “tapestry,” “delve” is one of several words people are starting to use as litmus tests for AI usage. It is arguably the most notorious, with figures like Paul Graham, co-founder of the startup accelerator Y Combinator, even claiming in a post on X that people don’t use the word in spoken English.
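For the curious, here is a toy version of that litmus test in Python: counting how often the reputed tell words appear per thousand words of a text. The word list, and whatever cutoff you apply to the result, are assumptions drawn from the discourse above, not a validated detector; plenty of humans delve, and plenty of AI text avoids these words entirely.

```python
# Toy litmus test: rate of reputed "AI words" per 1,000 words of a text.
# The word list is an assumption based on popular discourse, not a dataset.
import re
from collections import Counter

TELL_WORDS = {"delve", "delves", "delving", "captivating", "explore", "tapestry"}

def tell_word_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in TELL_WORDS)
    return 1000 * hits / len(words)
```

A high rate is a reason to read more carefully, not proof that a machine wrote anything.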

The claim is controversial, but musician Marina Sneider notes that the idea that AI has a writing tone that doesn’t quite align with conversational English does hold some truth, at least in songwriting.

The singer-songwriter, who makes commercial music for film and TV, noticed that songwriting generated by large language models, AI systems trained on large amounts of text to generate their own, tends to be cheesy, overly descriptive, and rhyme-focused to a fault.

“The machines have learned how to rhyme, but they don't know what rhymes people will think are cringe,” she says.

In some cases, the output will feature words that technically fit a rhyme scheme but don’t make sense semantically or thematically within the piece. Rather than the human craft of composing, performing and producing music, AI simply “spits out sounds,” Sneider explains.

Algorithms, by nature, can’t appeal to the universal human experiences that writing and music reflect.

“Songwriting is so much about being human,” Sneider says. “AI can't relate.”

One common giveaway in unedited AI-generated text is a disclaimer left behind by the large language model that generated it. The phrase “as an AI language model” has found its way into social media, user reviews, and academic journals, as reported by The Verge.

When the tech publication Gizmodo published a story in June of last year listing the chronological order of Star Wars movies and TV shows under a “Gizmodo Bot” byline, staffers and readers were quick to criticize the move and point out inaccuracies in the copy.

James Whitbrook, a deputy editor at the publication, pointed out 18 unique “concerns, corrections and comments” in an email to the publication’s editor-in-chief. The biggest one, as many pointed out: getting the chronological order wrong.

Text generated by AI can often include factual inaccuracies, replicate and amplify biases or omit information. When engaging with text suspected to be written by AI, staying mindful of language flow and awkward phrasing, as well as fact-checking, can help spot AI generation.

With video, seeing isn’t always believing

Video deepfakes have come a long way from the Snapchat face-swapping days, but some visual clues can still indicate when a video has been AI-manipulated. Generative AI models tend to struggle with teeth, skin tones, hair texture, facial expressions, and movement such as blinking and breathing, and with syncing lip movements to the words being said.

With models improving exponentially, however, these visual clues only point to where the technology falls short right now, and those shortcomings can, and likely will, be fixed sooner rather than later.

Ben Colman, the CEO of Reality Defender, an AI detection platform used by enterprises, governments, and platforms to identify AI-generated and manipulated media, warns that the technologies are quickly advancing far beyond what humans can detect.

“In the last six months, it's gotten so advanced that even the PhDs on my team can't tell the difference with their own ears and eyes,” says Colman.

For now, however, looking out for inconsistent shadows and lighting, awkward blurs or camera motion, and paying close attention to body language and human patterns like blinking or breathing can help differentiate real videos from AI-generated ones. As always, taking sources and context into consideration can also help determine the legitimacy of a video — if something looks too outlandish to be true, it probably isn't true.

Stay vigilant — but know the tools for AI-spotting are evolving fast

While remaining skeptical of social media content and communally inspecting and questioning media can be useful skills in our AI-filled feeds, relying only on individuals to spot cues leaves room for human error.

On the other hand, AI detection tools like Reality Defender’s can scan media, such as pictures or videos, and filter or flag content with AI-manipulated elements at a scale, and a level of subtlety, far beyond what the human eye and ear can perceive.

“We actually believe that consumers should not be required themselves to know the difference between real and fake,” Colman says. “It should be on the platforms.”

Generative AI tools are evolving quickly, and the mistakes that could have easily been spotted with a simple glance months ago are becoming harder and harder to identify. At the University at Buffalo, Lyu has worked on research to use “AI to combat AI,” but he worries that investment in AI detection technologies falls far behind that in AI generation. “There is a gap between what the detection methods can catch and what generative models can create,” he says.

In engaging with content online, Lyu suggests always considering the contextual information available and “using common sense.” Beyond cues that might suggest poor AI executions, paying attention to the context in which content appears might help users more.

Investigating the source, finding additional coverage, and tracing the context in which a piece of media appears can help users better understand whether or not something is legitimate. Reading what other users are saying in the comments, looking up additional sources, and employing general fact-checking practices can help discern AI-generated fakes from real, human-made media.

If we can still take a page from the early internet’s media literacy book: try not to believe everything you see online, and don’t immediately assume a human made it.