Several weeks ago, Andy Hall, the Grammy Award-winning dobro player and member of bluegrass jam band The Infamous Stringdusters, went on Instagram to explain “what it’s like to be replaced by AI.” While listening to a song on which he had been asked to overdub, he said, he discovered that it already had “the most . . . incredible beautiful dobro that I’ve ever heard in my life.” Thinking there must be a mistake, Hall emailed the songwriter: “I guess . . . you got somebody to play . . . dobro on it?” The songwriter responded: “No! I made this on SUNO”—an AI music generator—“as an example of what I want it to sound like.” Hall commented, “now I have to go home and record and play as good as this absolutely virtuosic dobro . . . What’s going on?”
Hall isn’t the only one asking. Millions of workers in the United States are in danger of being replaced by AI, including many artists and musicians. The recent deals struck by major record labels with generative AI companies like Suno, Udio, and Klay will allow AI to train on the entire history of recorded music.
Some labels are claiming that AI-generated content will remain within a “walled garden.” But such claims about new recording formats have proven false in the past. Digital Rights Management, for example, failed to prevent CDs from being “shared” online. Suno’s subscriptions, which today range from $8 to $24 per month, currently include user rights to “commercial use” of any tracks produced.
These legally bulletproof AI-produced “ghost tracks” will flood an already oversaturated streaming market and harm the human musicians who record music and license it for use in films, commercials, television programs, and video games.
Major labels have been in a rush to assure their copyright holders—the songwriters and signed artists—that none of their music will be ingested by AI without their consent. But there is a wrinkle: session musicians, like many creative workers, often do work-for-hire and hence are not copyright holders. For example, indie labels often hire session musicians on a commission basis or without written contracts at all. AI could result in the mass displacement of musicians and the devaluation and degradation of the work of those who remain.
AI apologists have spent substantial sums on marketing to convince the public that such displacement is mere “technological displacement”: that AI is simply more efficient than what came before, and therefore morally acceptable. Indeed, we may shed a tear when poor John Henry dies in his competition with a steam-powered drill, but no train passenger today longs for tunnels dug by workers driving spikes by hand.
Before being told that the “beautiful dobro” was AI-generated, Hall had guessed that it was recorded by the Nashville session musician Jerry Douglas, because “only a handful of people on the planet can play like that.” Hall, who has spent his life mastering the nuances of the dobro, believed he was hearing Jerry Douglas because he was: SUNO had ingested Douglas’s work, along with thousands of others’, and then prioritized it in a “fine tuning” process in which a human curator, knowledgeable about the handful of top recording instrumentalists and vocalists in each genre, matched those specific musicians’ work with the specific prompt of Hall’s employer.
That isn’t “technological displacement” or “creative destruction.” It doesn’t transform natural resources into useful commodities. It’s a sleight-of-hand that turns Douglas’s labor into SUNO’s property without any concern for consent, credit, or compensation. To the extent that the process involves a human doing the fine tuning, it isn’t even technological. It’s just plain wrong.
Musicians are all too familiar with this type of wrong. In a December 2025 article for Forbes, tech writer Virginie Berger writes that AI companies profiting from material used without permission “face lawsuits only from those powerful enough to sue,” then eventually sanitize their model “through selective licensing deals while the work of countless other creators remains in [their] training data, uncompensated and unacknowledged.”
Berger’s article goes on to note: “This isn’t an oversight but the architecture of the settlements. The major labels had the legal resources and financial leverage to force negotiations.” Independent artists and labels, who constitute the large majority of the workforce and 46.7 percent of the global recorded music market, do not.
In June 2024, Warner Music Group (WMG) and others sued SUNO for copyright infringement. SUNO settled the lawsuit by licensing WMG’s music instead of illegally ingesting it without consent, as it had done previously. WMG now claims it will obtain the consent of its copyright-holding artists and songwriters before permitting SUNO to ingest their work. But what of musicians who are not WMG copyright holders? Where can they turn for redress?
In normal times, the belief that such practices violated laws against unfair competition would have garnered bipartisan support. But a little more than a year ago, FTC chair Lina Khan resigned after encountering pushback from the Trump Administration for having pursued AI regulation that would have, among other antitrust goals, protected musicians from being unfairly forced to compete with their own work.
Union action is a key line of defense. SUNO’s ingestion of major label recordings may violate the labels’ “Sound Recording Labor Agreement” (SRLA) with the American Federation of Musicians (AFM). That contract covers all major label music recorded since 1948. The SRLA requires a separate payment to musicians for every “new use” of their work, and every new derivative commercial use after that, and calls for negotiation on how those terms will apply whenever a new technology exploiting recordings made under the contract emerges.
Major labels may well contest the above interpretation in the ongoing SRLA contract negotiations with the AFM. But musicians are fed up with being roadkill on the information highway. “We Need a Union,” a rank-and-file coalition of musicians, is pressuring the AFM to address the issue and also calling on the public to boycott all AI-generated music unless the record company, streaming service, or AI service profiting from it can certify that every musician performing on the tracks used to train the model was given a right to consent, to credit, and to compensation. Whether the AFM prevails depends largely on how the public responds.
There are tens of thousands of working recording musicians like Andy Hall. I happen to be one of them. Sometimes our job involves imitation of the handful of studio greats who invented the language of our instruments. If a machine can do that job reasonably well, why (other than the fact that AI’s power and water needs are accelerating environmental collapse) should the public care if we disappear?
The answer to that question requires us to acknowledge not only the limits of so-called artificial “intelligence” but also the complexity of our human culture. According to Harold Bloom, the late Yale literary critic, even our attempts to imitate are expressions of creativity. Our failure to imitate perfectly, our little mistakes, and our “misreadings” are not random errors but expressions of subconscious needs. What we individual “mistakers” and “misreaders” have produced over time is a culture capable of changing in accordance with human needs.
So, when an AI system imitates something perfectly, it doesn’t produce culture. It kills it. And when an AI system fails to imitate perfectly, its statistically random hallucinations and glitches are as unlikely to meet human needs as countless monkeys banging on typewriters are to reproduce the works of William Shakespeare. That kills culture too.
We all know what happens when crops fail, or when the fuel supply runs out. But what happens when a culture fails? Jacques Attali, a cultural critic and economic counselor to French President François Mitterrand, claims in his book Noise: The Political Economy of Music that concert music originated in the need of early societies to control communal violence by channeling it into ritual. Early human sacrifice rituals were accompanied by music. Over time the music replaced the sacrifices, but it still had to fulfill the ritual’s original function of channeling social violence.
This explains why each new iteration of music, from Joseph Haydn’s Symphony No. 94 (the “Surprise” Symphony) and Igor Stravinsky’s The Rite of Spring to big band jazz, punk rock, and the latest condemnations of “gangster rap,” has been greeted by critics wailing that it is too violent, sexualized, and earsplitting, and a dangerous breakdown of the social order. The next generation thinks it just sounds tame.
If Attali is correct, new music is needed to channel new social conflict into ritual. What happens when the wonderful AI machines, by definition, hear only the past? When their errors no longer grope in the subconscious dark towards the next socially necessary outrage? When the pool of human gropers has been trimmed to a ghost crew—and a deadly combination of high costs and low returns assures that those from the disempowered margins best able to intuit and articulate the social conflicts of our times (yes, I’m talking about the Black, poor white Appalachian, Latine, and other working-class heroes whose hard work, desire, and rage have fueled our pop revolutions) will be the least able to afford participation? What happens when the mountains of AI music can’t channel violence? When the music is hollow, boring, and fake? Will the ritual break, and unchanneled violence return?
If we don’t fight back now, I guess we’ll see. Maybe we’re already seeing.
