More stories

  • Bourdain Documentary’s Use of A.I. to Mimic Voice Draws Questions

    The documentary “Roadrunner” by Morgan Neville uses 45 seconds of a voice that sounds like Bourdain, generated with artificial intelligence. Is it ethical?

    The new documentary about Anthony Bourdain’s life, “Roadrunner,” is one hour and 58 minutes long — much of which is filled with footage of the star throughout the decades of his career as a celebrity chef, journalist and television personality. But on the film’s opening weekend, 45 seconds of it is drawing much of the public’s attention.

    The focus is on a few sentences of what an unknowing audience member would believe to be recorded audio of Bourdain, who died by suicide in 2018. In reality, the voice is generated by artificial intelligence: Bourdain’s own words, turned into speech by a software company that had been given several hours of audio that could teach a machine how to mimic his tone, cadence and inflection.

    One of the machine-generated quotes is from an email Bourdain wrote to a friend, David Choe. “You are successful, and I am successful,” Bourdain’s voice says, “and I’m wondering: Are you happy?”

    The film’s director, Morgan Neville, explained the technique in an interview with The New Yorker’s Helen Rosner, who asked how the filmmakers could possibly have obtained a recording of Bourdain reading an email he sent to a friend. Neville said the technology is so convincing that audience members likely won’t recognize which of the other quotes are artificial, adding, “We can have a documentary-ethics panel about it later.”

    The time for such a panel appears to be now. Social media has erupted with opinions on the issue — some find it creepy and distasteful, others are unbothered. And documentary experts who frequently consider ethical questions in nonfiction films are sharply divided. Some filmmakers and academics see the use of the audio without disclosing it to the audience as a violation of trust and as a slippery slope when it comes to the use of so-called deepfake videos, which include digitally manipulated material that appears to be authentic footage.

    “It wasn’t necessary,” said Thelma Vickroy, chair of the Department of Cinema and Television Arts at Columbia College Chicago. “How does the audience benefit? They’re inferring that this is something he said when he was alive.”

    Others don’t see it as problematic, considering that the audio pulls from Bourdain’s own words, and regard it as an inevitable use of evolving technology to give voice to someone who is no longer around. “Of all the ethical concerns one can have about a documentary, this seems rather trivial,” said Gordon Quinn, a longtime documentarian known for executive producing titles like “Hoop Dreams” and “Minding the Gap.” “It’s 2021, and these technologies are out there.”

    Using archival footage and interviews with Bourdain’s closest friends and colleagues, Neville looks at how Bourdain became a worldwide figure and explores his devastating death at the age of 61.
    The film, “Roadrunner: A Film About Anthony Bourdain,” has received positive reviews: A film critic for The New York Times wrote, “With immense perceptiveness, Neville shows us both the empath and the narcissist” in Bourdain.

    In a statement about the use of A.I., Neville said on Friday that the filmmaking team received permission from Bourdain’s estate and literary agent. “There were a few sentences that Tony wrote that he never spoke aloud,” Neville said in the statement. “It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive.”

    Ottavia Busia, the chef’s second wife, with whom he shared a daughter, appeared to criticize the decision in a Twitter post, writing that she would not have given the filmmakers permission to use the A.I. version of his voice. A spokeswoman for the film did not immediately respond to a request for comment on who gave the filmmakers permission.

    Experts point to historical re-enactments and voice-over actors reading documents as examples of documentary filmmaking techniques that are widely used to provide a more emotional experience for audience members. For example, the documentarian Ken Burns hires actors to voice long-dead historical figures. And the 1988 documentary “The Thin Blue Line,” by Errol Morris, generated controversy among film critics when it re-enacted the events surrounding the murder of a Texas police officer; the film received numerous awards but was left out of Oscar nominations.

    But in those cases, it was clear to the audience that what they were seeing and hearing was not authentic. Some experts said they thought Neville would be ethically in the clear if he had somehow disclosed the use of artificial intelligence in the film. “If viewers begin doubting the veracity of what they’ve heard, then they’ll question everything about the film they’re viewing,” said Mark Jonathan Harris, an Academy Award-winning documentary filmmaker.

    Quinn compared the technique to one that the director Steve James used in a 2014 documentary about the Chicago film critic Roger Ebert, who, when the film was made, could not speak after losing part of his jaw in cancer surgery. In some cases, the filmmakers used an actor to read Ebert’s own words from his memoir, or they relied on a computer that spoke for him when he typed his thoughts into it. But unlike in “Roadrunner,” it was clear in the context of the film that it was not Ebert’s real voice.

    To some, part of the discomfort about the use of artificial intelligence is the fear that deepfake videos may become increasingly pervasive. Right now, viewers tend to automatically believe in the veracity of audio and video, but if audiences begin to have good reason to question that, it could give people plausible deniability to disavow authentic footage, said Hilke Schellmann, a filmmaker and assistant professor of journalism at New York University who is writing a book on A.I.

    Three years after Bourdain’s death, the film seeks to help viewers understand both his virtues and vulnerabilities, and, as Neville puts it, “reconcile these two sides of Tony.” To Andrea Swift, chair of the filmmaking department at the New York Film Academy, the use of A.I. in these few snippets of footage has overtaken a deeper appreciation of the film and Bourdain’s life. “I wish it hadn’t been done,” she said, “because then we could focus on Bourdain.”

    Christina Morales contributed reporting.

  • Robots Can Make Music, but Can They Sing?

    At an international competition called the A.I. Song Contest, tracks exploring the technology as a tool for music making revealed the potential — and the limitations.

    LONDON — For its first 30 seconds, the song “Listen to Your Body Choir” is a lilting pop tune, with a female voice singing over gentle piano. Then everything starts to fracture, as twitchy beats and samples fuse with bizarre lyrics like “Do the cars come with push-ups?” and a robotic voice intertwines with the human sound. The transition is intended to evoke the song’s co-writer: artificial intelligence.

    “Listen to Your Body Choir,” which won this year’s A.I. Song Contest, was produced by M.O.G.I.I.7.E.D., a California-based team of musicians, scholars and A.I. experts. They instructed machines to “continue” the melody and lyrics of “Daisy Bell,” Harry Dacre’s tune from 1892 that became, in 1961, the first song to be sung using computer speech synthesis. The result is a track that sounds both human and machine-made.

    The A.I. Song Contest, which started last year and takes its format from the Eurovision Song Contest, is an international competition exploring the use of A.I. in songwriting. After an online ceremony broadcast on Tuesday from Liège in Belgium, a judging panel led by the musician Imogen Heap and including academics, scientists and songwriters praised “Listen to Your Body Choir” for its “rich and creative use of A.I. throughout the song.” In a message for viewers of the online broadcast, read out by a member of M.O.G.I.I.7.E.D., the A.I. used to produce the song said that it was “super stoked” to have been part of the winning team.

    The contest welcomed 38 entries from teams and individuals around the world working at the nexus of music and A.I., whether in music production, data science or both. They used deep-learning neural networks — computing systems that mimic the operations of a human brain — to analyze massive amounts of music data, identify patterns and generate drumbeats, melodies, chord sequences, lyrics and even vocals.

    The resulting songs included Dadabots’ unnerving 90-second sludgy punk thrash and Battery-operated’s vaporous electronic dance instrumental, made by a machine fed 13 years of trance music over 17 days. The lyrics to STHLM’s bleak Swedish folk lament for a dead dog were written using a text generator known for being able to create convincing fake news.

    While none of the songs are likely to crack the Billboard Hot 100, the contest’s lineup offered an intriguing, wildly varied and oftentimes strange glimpse into the results of experimental human-A.I. collaboration in songwriting, and the potential for the technology to further influence the music industry.

    Karen van Dijk, who founded the A.I. Song Contest with the Dutch public broadcaster VPRO, said that since artificial intelligence was already integrated into many aspects of daily life, the contest could start conversations about the technology and music, in her words, “to talk about what we want, what we don’t want, and how musicians feel about it.”

    Many millions of dollars are invested in artificial intelligence research in the music industry, by niche start-ups and by branches of behemoth companies such as Google, Sony and Spotify. A.I. is already heavily influencing the way we discover music by curating streaming playlists based on a listener’s behavior, for example, while record labels use algorithms that study social media to identify rising stars.

    Using artificial intelligence to create music, however, has yet to fully hit the mainstream, and the song contest also demonstrated the technology’s limitations. While M.O.G.I.I.7.E.D. said that they had tried to capture the “soul” of their A.I. machines in “Listen to Your Body Choir,” only some of the audible sounds, and none of the vocals, were generated directly by artificial intelligence.

    “Robots can’t sing,” said Justin Shave, the creative director of the Australian music and technology company Uncanny Valley, which won last year’s A.I. Song Contest with the dance-pop song “Beautiful the World.” “I mean, they can,” he added, “but at the end of the day, it just sounds like a super-Auto-Tuned robotic voice.”

    Only a handful of entries to the A.I. Song Contest consisted purely of raw A.I. output, which has a distinctly misshapen, garbled sound, like a glitchy remix dunked underwater. In most cases, A.I. — informed by selected musical “data sets” — merely proposed song components that were then chosen from and performed, or at least finessed, by musicians. Many of the results wouldn’t sound out of place on a playlist among wholly human-made songs, like AIMCAT’s “I Feel the Wires,” which won the contest’s public vote.

    A.I. comes into its own when churning out an infinite stream of ideas, some of which a human may never have considered, for better or for worse. In a document accompanying their song in the competition, M.O.G.I.I.7.E.D. described how they worked with the technology both as a tool and as a collaborator with its own creative agency.

    That approach is what Shave called “the happy accident theorem.” “You can feed some things into an A.I. or machine-learning system and then what comes out actually sparks your own creativity,” he said. “You go, ‘Oh my god, I would never have thought of that!’ And then you riff on that idea.” “We’re raging with the machine,” he added, “not against it.”

    Hendrik Vincent Koops is a co-organizer of the A.I. Song Contest and a researcher and composer based in the Netherlands. In a video interview, he also talked of using the technology as an “idea generator” in his work. Even more exciting to him was the prospect of enabling people with little or no prior experience to write songs, leading to a much greater “democratization” of music making. “For some of the teams, it was their first time writing music,” Koops said, “and they told us the only way they could have done it was with A.I.”

    The A.I. composition company Amper already lets users of any ability quickly create and purchase royalty-free bespoke instrumentals as a kind of 21st-century music library. Another service, Jukebox, created by a company co-founded by Elon Musk, has used the technology to create multiple songs in the style of performers such as Frank Sinatra, Katy Perry and Elvis Presley that, while messy and nonsensical, are spookily evocative of the real thing.

    Songwriters can feel reassured that nobody interviewed for this article said they believed A.I. would ever be able to fully replicate, much less replace, their work. Instead, the technology’s future in music lies in human hands, they said, as a tool perhaps as revolutionary as the electric guitar, the synthesizer or the sampler before it.

    Whether artificial intelligence can reflect the complex human emotions central to good songwriting is another question. One standout entry for Rujing Huang, an ethnomusicologist and member of the jury panel for the A.I. Song Contest, was by the South Korean team H:Ai:N, whose track is the ballad “Han,” named after a melancholic emotion closely associated with the history of the Korean Peninsula. Trained on influences as diverse as ancient poetry and K-pop, A.I. helped H:Ai:N craft a song intended to make listeners hear and understand a feeling.

    “Do I hear it?” said Huang. “I think I hear it. Which is very interesting. You hear very real emotions. But that’s kind of scary, too, at the same time.”