The documentary “Roadrunner” by Morgan Neville uses 45 seconds of a voice that sounds like Bourdain, generated with artificial intelligence. Is it ethical?

The new documentary about Anthony Bourdain’s life, “Roadrunner,” is one hour and 58 minutes long — much of which is filled with footage of the star throughout the decades of his career as a celebrity chef, journalist and television personality.

But on the film’s opening weekend, 45 seconds of it is drawing much of the public’s attention.

The focus is on a few sentences of what an unknowing audience member would believe to be recorded audio of Bourdain, who died by suicide in 2018. In reality, the voice is generated by artificial intelligence: Bourdain’s own words, turned into speech by a software company that had been given several hours of audio to teach a machine how to mimic his tone, cadence and inflection.

One of the machine-generated quotes is from an email Bourdain wrote to a friend, David Choe.

“You are successful, and I am successful,” Bourdain’s voice says, “and I’m wondering: Are you happy?”

The film’s director, Morgan Neville, explained the technique in an interview with The New Yorker’s Helen Rosner, who asked how the filmmakers could possibly have obtained a recording of Bourdain reading an email he sent to a friend. Neville said the technology is so convincing that audience members likely won’t recognize which of the other quotes are artificial, adding, “We can have a documentary-ethics panel about it later.”

The time for such a panel appears to be now. Social media has erupted with opinions on the issue — some find it creepy and distasteful, others are unbothered.

And documentary experts who frequently consider ethical questions in nonfiction films are sharply divided. Some filmmakers and academics see the use of the audio without disclosing it to the audience as a violation of trust and as a slippery slope when it comes to the use of so-called deepfake videos, which include digitally manipulated material that appears to be authentic footage.

“It wasn’t necessary,” said Thelma Vickroy, chair of the Department of Cinema and Television Arts at Columbia College Chicago. “How does the audience benefit? They’re inferring that this is something he said when he was alive.”

Others don’t see it as problematic, considering that the audio pulls from Bourdain’s own words, and view it as an inevitable use of evolving technology to give voice to someone who is no longer around.

“Of all the ethical concerns one can have about a documentary, this seems rather trivial,” said Gordon Quinn, a longtime documentarian known for executive producing titles like “Hoop Dreams” and “Minding the Gap.” “It’s 2021, and these technologies are out there.”

Using archival footage and interviews with Bourdain’s closest friends and colleagues, Neville looks at how Bourdain became a worldwide figure and explores his devastating death at the age of 61.
The film, “Roadrunner: A Film About Anthony Bourdain,” has received positive reviews: A film critic for The New York Times wrote, “With immense perceptiveness, Neville shows us both the empath and the narcissist” in Bourdain.

In a statement on Friday about the use of A.I., Neville said that the filmmaking team received permission from Bourdain’s estate and literary agent.

“There were a few sentences that Tony wrote that he never spoke aloud,” Neville said in the statement. “It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive.”

Ottavia Busia, the chef’s second wife, with whom he shared a daughter, appeared to criticize the decision in a Twitter post, writing that she would not have given the filmmakers permission to use the A.I. version of his voice.

A spokeswoman for the film did not immediately respond to a request for comment on who gave the filmmakers permission.

Experts point to historical re-enactments and voice-over actors reading documents as examples of documentary filmmaking techniques that are widely used to provide a more emotional experience for audience members.

For example, the documentarian Ken Burns hires actors to voice long-dead historical figures. And the 1988 documentary “The Thin Blue Line,” by Errol Morris, generated controversy among film critics when it re-enacted the events surrounding the murder of a Texas police officer; the film received numerous awards but was left out of the Oscar nominations.

But in those cases, it was clear to the audience that what they were seeing and hearing was not authentic. Some experts said they thought Neville would be ethically in the clear if he had disclosed the use of artificial intelligence in the film.

“If viewers begin doubting the veracity of what they’ve heard, then they’ll question everything about the film they’re viewing,” said Mark Jonathan Harris, an Academy Award-winning documentary filmmaker.

Quinn compared the technique to one that the director Steve James used in a 2014 documentary about the Chicago film critic Roger Ebert, who, when the film was made, could not speak after losing part of his jaw in cancer surgery. In some cases, the filmmakers used an actor to read Ebert’s own words from his memoir; in others, they relied on a computer that spoke for him when he typed his thoughts into it. But unlike in “Roadrunner,” it was clear in the context of the film that it was not Ebert’s real voice.

To some, part of the discomfort about the use of artificial intelligence is the fear that deepfake videos may become increasingly pervasive. Right now, viewers tend to automatically believe in the veracity of audio and video, but if audiences begin to have good reason to question that, it could give people plausible deniability to disavow authentic footage, said Hilke Schellmann, a filmmaker and assistant professor of journalism at New York University who is writing a book on A.I.

Three years after Bourdain’s death, the film seeks to help viewers understand both his virtues and vulnerabilities, and, as Neville puts it, “reconcile these two sides of Tony.”

To Andrea Swift, chair of the filmmaking department at the New York Film Academy, the use of A.I. in these few snippets of audio has overshadowed a deeper appreciation of the film and of Bourdain’s life.

“I wish it hadn’t been done,” she said, “because then we could focus on Bourdain.”

Christina Morales contributed reporting.