More stories

  • Paul McCartney Says A.I. Helped Complete ‘Last’ Beatles Song

    The song was made using a demo with John Lennon’s voice and will be released later this year, McCartney said.

    More than 50 years after the Beatles broke up, Paul McCartney said artificial intelligence helped create one last Beatles song that will be released later this year.

    The song was made using a demo with John Lennon’s voice, McCartney said in an interview with BBC Radio 4 that was released on Tuesday. He did not give the title of the song or offer any clues about its lyrics.

    “When we came to make what will be the last Beatles record, it was a demo that John had, that we worked on,” McCartney said. “We were able to take John’s voice and get it pure through this A.I., so then we could mix the record, as you would normally do.”

    Holly Tessler, a senior lecturer on the Beatles at the University of Liverpool, said in an interview on Tuesday that there was speculation the song might be “Now and Then,” a song Lennon composed and recorded as a demo in the late 1970s.

    Lennon was fatally shot outside his New York apartment building in December 1980. His widow, Yoko Ono, gave the tape to McCartney as he, Ringo Starr and George Harrison, who died in 2001, were working on “The Beatles Anthology,” a career-retrospective documentary, record and book series.

    Two other songs on that tape, “Free As a Bird” and “Real Love,” were later completed by the three surviving Beatles using Lennon’s original voice recording and were officially released in 1995 and 1996.

    It is unclear exactly how McCartney was using the latest demo and whether any new lyrics would be incorporated.

    The use of A.I. technology to create music with the voices of established artists has raised a number of ethical and legal questions about authorship and ownership in recent months.

    This spring, an A.I.-produced song called “Heart on My Sleeve,” which claimed to use the voices of Drake and the Weeknd, became popular on social media before it was flagged by Universal Music Group. Similarly created tracks, including one using an A.I. version of Rihanna to cover a Beyoncé song and another using A.I. vocals from Kanye West to cover the song “Hey There Delilah,” continue to rack up plays on social media.

    Other artists are embracing the technology. Grimes, the producer and pop singer, put out a call in April for anyone to make an A.I.-generated song using her voice. The results were mixed.

    Proponents of the technology say it has the power to disrupt the music business in the ways that synthesizers, sampling and file-sharing services did.

    McCartney’s use of A.I. may recruit new fans, but it may also alienate older fans and Beatles purists, Ms. Tessler said.

    “We have absolutely no way of knowing, creatively, if John were alive, what he’d want to do with these or what he’d want his contribution to be,” she said, adding that the technology creates an ethical gray area.

    Over his career, McCartney has been quick to engage with new creative technologies, whether synthesizers or samplers, she said.

    “I think he’s just curious to see what it can do,” Ms. Tessler said of McCartney. “I mean, it gives us some insight into his mind and what his creative priorities are, that given how much of the music industry is at his fingertips, that what he chooses to do is finish a demo with John Lennon. In a way, it’s very poignant.”

  • What Happens When A.I. Enters the Concert Hall

    Artificial intelligence is not new to classical music. But its recent, rapid developments have composers worried, and intrigued.

    When the composer and vocalist Jen Wang took the stage at the Monk Space in Los Angeles earlier this year to perform Alvin Lucier’s “The Duke of York” (1971), she sang with a digital rendition of her voice, synthesized by artificial intelligence.

    It was the first time she had done that. “I thought it was going to be really disorienting,” Wang said in an interview, “but it felt like I was collaborating with this instrument that was me and was not me.”

    Isaac Io Schankler, a composer and music professor at Cal Poly Pomona, conceived the performance and joined Wang onstage to monitor and manipulate Realtime Audio Variational autoEncoder, or R.A.V.E., the neural audio synthesis algorithm that modeled Wang’s voice.

    R.A.V.E. is an example of machine learning, a category of artificial intelligence that musicians have experimented with since the 1990s — but one now defined by rapid development, the arrival of publicly available, A.I.-powered music tools and the dominating influence of high-profile initiatives by large tech companies.

    Dr. Schankler used R.A.V.E. in that performance of “The Duke of York” because its ability to augment an individual performer’s sound, they said, “seemed thematically resonant with the piece.” For it to work, the duo needed to train it on a personalized corpus of recordings. “I sang and spoke for three hours straight,” Wang recalled. “I sang every song I could think of.”

    Antoine Caillon developed R.A.V.E. in 2021, during his graduate studies at IRCAM, the institute founded by the composer Pierre Boulez in Paris. “R.A.V.E.’s goal is to reconstruct its input,” he said. “The model compresses the audio signal it receives and tries to extract the sound’s salient features in order to resynthesize it properly.” (A simplified sketch of that compress-and-resynthesize idea follows this story.)

    Wang felt comfortable performing with the software because, no matter the sounds it produced in the moment, she could hear herself in R.A.V.E.’s synthesized voice. “The gestures were surprising, and the textures were surprising,” she said, “but the timbre was incredibly familiar.” And because R.A.V.E. is compatible with common electronic music software, Dr. Schankler was able to adjust the program in real time, they said, to “create this halo of other versions of Jen’s voice around her.”

    Tina Tallon, a composer and professor of A.I. and the arts at the University of Florida, said that musicians have used various A.I.-related technologies since the mid-20th century. “There are rule-based systems, which is what artificial intelligence used to be in the ’60s, ’70s and ’80s,” she said, “and then there is machine learning, which became more popular and more practical in the ’90s, and involves ingesting large amounts of data to infer how a system functions.”

    Today, developments in A.I. that were once confined to specialized applications impinge on virtually every corner of life, and they already affect the way people make music. Dr. Caillon, in addition to developing R.A.V.E., has contributed to the Google-led projects SingSong, which generates accompaniments for recorded vocal melodies, and MusicLM, a text-to-music generator. Innovations in other areas are driving new music technologies, too: WavTool, a recently released, A.I.-powered music production platform, integrates OpenAI’s GPT-4 to let users create music via text prompts.

    For Dr. Tallon, the difference in scale between individual composers’ customized use of A.I. and these new, broad-reaching technologies is a cause for concern. “We are looking at different types of datasets that are compiled for different reasons,” she said. “Tools like MusicLM are trained on datasets that are compiled by pulling from thousands of hours of labeled audio from YouTube and other places on the internet.”

    “When I design a tool for my own personal use,” Dr. Tallon continued, “I’m looking at data related to my sonic priorities. But public-facing technologies use datasets that focus on, for instance, aesthetic ideals that align more closely with Western classical systems of organizing pitches and rhythms.”

    Concerns over bias in music-related A.I. tools do not stop at aesthetics. Enongo Lumumba-Kasongo, a music professor at Brown University, also worries about how these technologies can reproduce social hierarchies. “There is a very specific racial discourse that I’m very concerned about,” she said. “I don’t think it’s a coincidence that hip-hop artistry is forming the testing ground for understanding how A.I. affects artists and their artistry, given the centuries-long story of co-optation and theft of Black expressive forms by those in power.”

    The popularity of recent A.I.-generated songs that mimicked artists like Drake, the Weeknd and Travis Scott has animated Dr. Lumumba-Kasongo’s fears. “What I’m most concerned about with A.I. Drake and A.I. Travis Scott is that their music is highly listenable,” she said, “and calls into question any need for an artist once they’ve articulated a distinct ‘voice.’”

    For Dr. Schankler, there are key differences between using R.A.V.E. to synthesize new versions of a collaborator’s voice and using A.I. to anonymously imitate a living musician. “I don’t find it super interesting to copy someone’s voice exactly, because that person already exists,” they said. “I’m more interested in the new sonic possibilities of this technology. And what I like about R.A.V.E. is that I can work with a small dataset that is created by one person who gives their permission and participates in the process.”

    The composer Robert Laidlow also uses A.I. in his work to contemplate the technology’s fraught implications. “Silicon,” which premiered last October with the BBC Philharmonic under Vimbayi Kaziboni, employs multiple tools to explore themes drawn from the technology’s transformative and disruptive potential.

    Laidlow described “Silicon” as being “about technology as much as it uses technology,” adding: “The overriding aesthetic of each movement of this piece are the questions, ‘What does it mean for an orchestra to use this technology?’ and ‘What would be the point of an orchestra if we had a technology that can emulate it in every way?’”

    The work’s entirely acoustic first movement features a mixture of Laidlow’s original music and ideas he adapted from the output, he said, of a “symbolic, generative A.I. that was trained on notated material from composers all throughout history.” The second movement features an A.I.-powered digital instrument, performed by the orchestra’s pianist, that “sometimes mimics the orchestra and sometimes makes uncanny, weird sounds.”

    In the last movement, the orchestra is accompanied by sounds generated by a neural synthesis program called PRiSM-SampleRNN, which is akin to R.A.V.E. and was trained on a large archive of BBC Philharmonic radio broadcasts. Laidlow described the resulting audio as “featuring synthesized orchestral music, voices of phantom presenters and the sounds the artificial intelligence has learned from audiences.”

    The scale of “Silicon” contrasts with the intimacy of Dr. Schankler and Wang’s performance of “The Duke of York.” But both instances illustrate A.I.’s potential to expand musical practices and human expression. And, importantly, by employing small, curated datasets tailored to individual collaborators, these projects try to sidestep the ethical concerns many have identified in larger-scale technologies.

    George E. Lewis, a music professor at Columbia University, has designed and performed alongside interactive A.I. music programs for four decades, focusing primarily on the technology’s capacity to participate in live performance. “I keep talking about real-time dialogue,” he said. “Music is so communal, it’s so personal, it’s so dialogic, it’s communitarian.”

    He is hopeful that people will continue to explore interactivity and spontaneity. “It seems the current generation of A.I. music programs have been designed for a culturally specific way of thinking about music,” Lewis said. “Imagine if the culture favored improvisation.”

    As a composer, Lewis is continuing to explore this topic, including in his recent work “Forager,” for chamber ensemble and A.I., which was created during a 2022 residency at PRiSM. The piece marks the latest update to “Voyager,” a work he developed in 1985 and described as a “virtual improvising pianist.” “Forager” enhances the software’s responsiveness to its human co-performers with new programming that enables what he called “a more holistic recognition” of musical materials.

    The differences among Dr. Schankler’s use of R.A.V.E., Laidlow’s orchestral work “Silicon” and Lewis’s interactive “Forager” underscore the nuances with which composers and experimental musicians are approaching A.I. This culture celebrates technology as a means of tailoring musical ideas and computer-generated sounds to specific performers and a given moment. Still, these artistic aims stand at odds with the foreboding voiced by observers like Dr. Tallon and Dr. Lumumba-Kasongo.

    Individual musicians can do their part to counter those worries by using A.I. ethically and generatively. But even so, as Laidlow observed, being truly individual — which is to say independent — is difficult.

    “There is a fundamental problem of resources in this field,” Laidlow said. “It is almost impossible to create something computationally powerful without the assistance of a huge, technologically advanced institute or corporation.”
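    Below is a minimal, generic sketch of the compress-and-resynthesize idea Caillon describes, written as a plain PyTorch autoencoder. It is an illustrative assumption only, not the R.A.V.E. implementation: the real system is a variational model that runs in real time on raw audio, and the frame size, layer widths, class name and training loop here are invented for the example.

    # A toy audio autoencoder: compress each frame of sound into a small
    # latent vector ("salient features"), then try to resynthesize the frame.
    # NOT the R.A.V.E. implementation; sizes and data are illustrative only.
    import torch
    import torch.nn as nn

    class ToyAudioAutoencoder(nn.Module):
        def __init__(self, frame_size=1024, latent_dim=16):
            super().__init__()
            # Encoder: squeeze a frame of audio down to a small latent code.
            self.encoder = nn.Sequential(
                nn.Linear(frame_size, 256), nn.ReLU(),
                nn.Linear(256, latent_dim),
            )
            # Decoder: rebuild the frame from that latent code.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, frame_size),
            )

        def forward(self, frames):
            latent = self.encoder(frames)   # compressed representation
            return self.decoder(latent)     # reconstructed audio frame

    model = ToyAudioAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    frames = torch.randn(8, 1024)           # stand-in for recorded audio frames
    for _ in range(5):
        recon = model(frames)
        loss = nn.functional.mse_loss(recon, frames)  # reconstruction error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    Training on a corpus of a single performer’s voice, as Wang and Dr. Schankler did, amounts to minimizing this kind of reconstruction error over that one singer’s recordings alone, which is why the synthesized output keeps her timbre.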

  • Popcast Mailbag! Frank Ocean, Peso Pluma, A.I. Grimes and More

    The Popcast crew assembles for a semiannual mailbag episode, touching on many of the pressing pop music issues of the moment, including the controversy surrounding Frank Ocean’s Coachella set; the challenges faced by even the biggest pop stars (Sam Smith, Miley Cyrus) trying to follow massive singles; the sudden arrival of artificial intelligence in pop music and evolving notions of authorship; the startling recent growth in the popularity and visibility of música Mexicana and corridos tumbados, with stars like Grupo Frontera and Peso Pluma; and how the framework of genre continues to have meaning even in a universal-jukebox universe.

    Guests:

    • Jon Pareles, The New York Times’s chief pop music critic
    • Joe Coscarelli, The New York Times’s pop music reporter
    • Lindsay Zoladz, The New York Times’s pop music critic
    • Caryn Ganz, The New York Times’s pop music editor

  • Hollywood Directors Reach Deal With Studios as Writers’ Strike Continues

    The tentative agreement includes improvements in wages and guardrails around artificial intelligence.

    The union that represents thousands of movie and television directors reached a tentative agreement with the Hollywood studios on a three-year contract early Sunday morning, a deal that ensures labor peace with one major guild as the writers’ strike enters its sixth week.

    The Directors Guild of America announced in a statement overnight that it had made “unprecedented gains,” including improvements in wages and streaming residuals (a type of royalty), as well as guardrails around artificial intelligence.

    “We have concluded a truly historic deal,” Jon Avnet, the chair of the D.G.A.’s negotiating committee, said in the statement. “It provides significant improvements for every director, assistant director, unit production manager, associate director and stage manager in our guild.”

    The deal prevents the doomsday Hollywood scenario of three major unions striking simultaneously. On Wednesday, the Alliance of Motion Picture and Television Producers, which bargains on behalf of the studios, will begin negotiations for a new contract with SAG-AFTRA, the guild that represents actors; their current agreement expires on June 30. SAG-AFTRA is in the process of conducting a strike authorization vote.

    The entertainment industry will be looking closely at what the directors’ deal — and the actors’ negotiations — will mean for the Writers Guild of America, the union that represents the writers. More than 11,000 writers went on strike in early May, bringing many Hollywood productions to a halt.

    Over the last month, the writers have enjoyed a wave of solidarity from other unions that W.G.A. leaders have said they have not seen in generations. Whether a directors’ deal — or a possible actors’ deal later this month — undercuts that solidarity is now an open question.

    W.G.A. leaders had been signaling to writers late last week that a deal with the directors could be in the offing, a strategy that they said was part of the studio “playbook” to “divide and conquer.” The writers and the studios left the bargaining table on May 1 very far apart on the major issues and have not resumed negotiations.

    “They pretended they couldn’t negotiate with the W.G.A. in May because of negotiations with the D.G.A.,” the W.G.A. negotiating committee told writers in an email on Thursday. “That’s a lie. It’s a choice they made in hope of breathing life into the divide and conquer strategy. The essence of the strategy is to make deals with some unions and tell the rest that’s all there is. It’s gaslighting, and it only works if unions are divided.”

    “Our position is clear: To resolve the strike, the companies will have to negotiate with the W.G.A. on our full agenda,” the email continued.

    Representatives for the Alliance of Motion Picture and Television Producers declined to comment.

    The writers and the directors shared some priorities, including wages, streaming residuals and concerns about artificial intelligence. W.G.A. leaders had said that the studios offered little more than “annual meetings to discuss” artificial intelligence and refused to bargain over guardrails. The D.G.A. said Sunday that it had received a “groundbreaking agreement confirming that A.I. is not a person and that generative A.I. cannot replace the duties performed by members.”

    Some of the writers’ demands, however, are more complex than those of the directors. W.G.A. leaders have described the dispute in urgent terms, calling this moment “existential” and saying that the studios “are seemingly intent on continuing their efforts to destroy the profession of writing.”

    Despite the explosion of television production over the last decade, writers say that their wages have stagnated and their working conditions have deteriorated. In addition to improvements in compensation, the writers are seeking greater job security, as well as staffing minimums in writers’ rooms.

    The W.G.A. has vowed to fight on. The writers, who last went on strike 15 years ago for 100 days, have historically been united.

    “We are girded by an alliance with our sister guilds and unions,” Chris Keyser, a chair of the W.G.A. bargaining committee, said in a video message to writers last week. “They give us strength. But we are strong enough. We have always been strong enough to get the deal we need using writer power alone.”

  • Striking Writers Are Worried About A.I. Viewers Should Be, Too.

    A.I. screenwriting, a point of contention in the Writers Guild strike, may not yet be ready for prime time. But streaming algorithms and derivative programming have prepared the way for it.

    Television loves a good sentient-machine story, from “Battlestar Galactica” to “Westworld” to “Mrs. Davis.” With the Writers Guild of America strike, that premise has broken the fourth wall. The robots are here, and the humans are racing to defend against them, or to ally with them.

    Among the many issues in the strike is the union’s aim to “regulate use of material produced using artificial intelligence or similar technologies,” at a time when the ability of chatbots to auto-generate all manner of writing is growing exponentially.

    In essence, writers are asking the studios for guardrails against being replaced by A.I., having their work used to train A.I. or being hired to punch up A.I.-generated scripts at a fraction of their former pay rates.

    The big-ticket items in the strike involve, broadly, how the streaming model has disrupted the ways TV writers make a living. But it’s the A.I. question that has captured imaginations, understandably so. Hollywood loves robot stories because they make us confront what distinguishes us as human. And when it comes to distinguishing features, the ability to conjure imaginary worlds is simply sexier than the opposable thumb.

    So the prospect of A.I. screenwriting has become potent, both as threat and rallying cry. Detractors of the striking writers taunted them on social media that software was going to horse-and-buggy their livelihoods. Striking WGA members workshopped A.I. jokes on their picket signs, like “ChatGPT doesn’t have childhood trauma.” (Well, it doesn’t have its own. It has Sylvia Plath’s, and that of any other formerly unhappy child whose writing survives in machine-readable form.)

    But it shouldn’t surprise anyone if the TV business wants to leave open the option of relying on machine-generated entertainment. In a way, it already does.

    Not in the way the WGA fears — not yet. Even the most by-the-numbers scripted drama you watch today was not written by a computer program. But it might have been recommended to you by one.

    Algorithms, the force behind your streaming-TV “For You” menu, are in the business of noticing what you like and matching you with acceptable-enough versions of it. To many, this is indeed acceptable enough: More than 80 percent of viewing on Netflix is driven by its recommendation engine.

    In order to make those matches, the algorithm needs a lot of content. Not necessarily brilliant, unique, nothing-like-it content, but familiar, reliable, plenty-of-things-like-it content. Which, as it happens, is what A.I. is best at.

    The debate over A.I. in screenwriting is often simplified as: Could a chatbot write the next “Twin Peaks”? No, at least not for now. Nor would anyone necessarily want it to. The bulk of TV production has no interest in generating the next “Twin Peaks” — that is, a wild, confounding creative risk. It is interested in more reboots, more procedurals, more things similar to what you just watched.

    TV has always relied on formula, not necessarily in a bad way. It iterates, it churns out slight variations on a theme, it provides comfort. That’s what has long made strictly formatted shows like “Law & Order” such reliable, relaxing prime-time companions. That’s also what could make them among the first candidates for A.I. screenwriting.

    Large language models like ChatGPT work by digesting vast quantities of existing text, identifying patterns and responding to prompts by mimicking what they’ve learned. (A toy illustration of that pattern-mimicking follows this essay.) The more done-to-death a TV idea is, the greater the corpus of text available on it.

    And, well, there are a lot of “Law & Order” scripts, a lot of superhero plots, a lot of dystopian thrillers. How many writers-contract cycles before you can simply drop the “Harry Potter” novels into the Scriptonator 3000 and let it spit out a multiseason series?

    In the perceptive words of “Mrs. Davis,” the wildly human comedic thriller about an all-powerful A.I., “Algorithms love clichés.” And there’s a direct line between the unoriginality of the business — things TV critics complain about, like reboots and intellectual-property adaptations and plain old derivative stories — and the ease with which entertainment could become bloated by machine-generated mediocrity.

    After all, if studios treat writers like machines, asking for more remakes and clones — and if viewers are satisfied with that — it’s easy to imagine the bean counters wanting to skip the middle-human and simply use a program that never dreamed of becoming the next Phoebe Waller-Bridge.

    And one could reasonably ask, why not? Why not leave the formulas to machines and rely on people only for more innovative work? Beyond the human cost of unemployment, though, there’s an entire ecosystem in which writers come up, often through precisely those workmanlike shows, learning the ropes.

    Those same writers may be able to use A.I. tools productively; the WGA is calling for guardrails, not a ban. And the immediate threat of A.I. to writers’ careers may be overstated, as you know if you’ve ever tried to get ChatGPT to tell you a joke. (It’s a big fan of cornball “Why did the …” and “What do you call a …” constructions.) Some speculations, like the director Joe Russo’s musing that A.I. might someday be able to whip up a rom-com starring your avatar and Marilyn Monroe’s, feel like science fiction.

    But science fiction has a way of becoming science fact. A year ago, ChatGPT wasn’t even available to the public. The last time the writers went on strike, in 2007, one of the sticking points involved streaming media, then a niche business involving things like iTunes downloads. Today, streaming has swallowed the industry.

    The potential rise of A.I. has workplace implications for writers, but it’s not only a labor issue. We, too, have a stake in the war with the storybots. A culture that is fed entirely by regurgitating existing ideas is a stagnant one. We need invention, experimentation and, yes, failure in order to advance and evolve. The logical conclusion of an algorithmicized, “more like what you just watched” entertainment industry is a popular culture that just … stops.

    Maybe someday A.I. will be capable of genuine invention. It’s also possible that what “invention” means for advanced A.I. will be different from anything we’re used to — it might be wondrous or weird or incomprehensible. At that point, there’s a whole discussion we can have about what “creativity” actually means and whether it is by definition limited to humans.

    But what we do know is that, in this timeline, it is a human skill to create a story that surprises, challenges and frustrates, that discovers ideas that did not exist before. Whether we care about that — whether we value it over an unlimited supply of reliable, good-enough menu options — is, for now, still our choice.
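    As a toy illustration of “identifying patterns and mimicking what they’ve learned” (and nothing more: the corpus, the word-level bigram counts and the prompt below are invented for this example, and real large language models are enormous neural networks, not lookup tables), here is a minimal sketch in Python:

    # "Digest" some text, tally which word tends to follow which, then
    # generate new text by sampling from those tallies. A crude stand-in
    # for pattern-mimicking, not how ChatGPT actually works.
    import random
    from collections import defaultdict

    corpus = (
        "the detectives question the suspect. "
        "the detectives arrest the suspect. "
        "the jury convicts the suspect."
    )

    # "Training": record every observed word-to-next-word transition.
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    # "Generation": start from a prompt word and keep sampling successors.
    random.seed(0)
    word = "the"
    output = [word]
    for _ in range(12):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)

    print(" ".join(output))

    Feed it formulaic text and it produces plausible-sounding formula; it cannot produce anything its corpus never hinted at, which is roughly the essay’s point about derivative programming.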

  • Film and TV Writers on Strike Picket Outside Hollywood Studios

    Those in picket lines at the headquarters of companies like Netflix were critical of working conditions that have become routine in the streaming era.

    Ellen Stutzman, a senior Writers Guild of America official, stood on a battered patch of grass outside Netflix headquarters in Los Angeles. She was calm — remarkably so, given the wild scene unfolding around her, and the role she had played in its creation.

    “Hey, Netflix! You’re no good! Pay your writers like you should!” hundreds of striking movie and television writers shouted in unison as they marched outside the Netflix complex. The spectacle had snarled traffic on Sunset Boulevard on Tuesday afternoon, and numerous drivers blared their horns in support of the strike. Undulating picket signs, a few of which were covered with expletives, added to the sense of chaos, as did a hovering news helicopter and a barking dog. “Wow,” a Netflix employee said as he inched his car out of the company’s driveway, which was blocked by writers.

    In February, unions representing 11,500 screenwriters selected Ms. Stutzman, 40, to be their chief negotiator in talks with studios and streaming services for a new contract. Negotiations broke off on Monday night, shortly before the contract expired. Ms. Stutzman and other union officials voted unanimously to call a strike, shattering 15 years of labor peace in Hollywood and bringing the entertainment industry’s creative assembly lines to a grinding halt.

    “We told them there was a ton of pent-up anger,” Ms. Stutzman said, referring to the companies at the bargaining table, which included Amazon and Apple. “They didn’t seem to believe us.”

    The throng started a new chant, as if on cue: “Hey, hey! Ho, ho! This corporate greed has got to go!”

    Similar scenes of solidarity unfolded across the entertainment capital. At Paramount Pictures, more than 400 writers — and a few supportive actors, including Rob Lowe — assembled to wave picket signs with slogans like “Despicable You” and “Honk if you like words.” Screenwriting titans like Damon Lindelof (“Watchmen,” “Lost”) and Jenny Lumet (“Rachel Getting Married,” “Star Trek: Strange New Worlds”) marched outside Amazon Studios. Acrimony hung in the air outside Walt Disney Studios, where one writer played drums on empty buckets next to a sign that read, “What we are asking for is a drop in the bucket.” Another sign goaded Mickey Mouse directly: “I smell a rat.”

    But the strike, at least in its opening hours, seemed to burn hottest at Netflix, with some writers describing the company as “the scene of the crime.” That is because Netflix popularized and, in some cases, pioneered streaming-era practices that writers say have made their profession unsustainable: a job that had always been unstable, dependent on audience tastes and the whims of revolving sets of network executives, has become much more so.

    The streaming giant, for instance, has become known for “mini-rooms,” slang for hiring small groups of writers to map out a season before any official greenlight has been given. Because a mini-room isn’t a formal writers’ room, the pay is lower. Writers in mini-rooms will sometimes work for as little as 10 weeks and then have to scramble to find another job. (If the show is greenlit and goes into production, fewer writers are kept on board.)

    “If you only get a 10-week job, which a lot of people now do, you really have to start looking for a new job on day one,” said Alex Levy, who has written for Netflix shows like “Grace and Frankie.” “In my case, I haven’t been able to get a writing job for months. I’ve had to borrow money from my family to pay my rent.”

    Lawrence Dai, whose credits include “The Late Late Show with James Corden” and “American Born Chinese,” a Disney+ series, echoed Ms. Levy’s frustration. “It feels like an existential moment because it’s becoming impossible to build a career,” he said. “The dream is dead.”

  • Why A.I. Movies Couldn’t Prepare Us for Bing’s Chatbot

    Instead of the chilling rationality of HAL in “2001: A Space Odyssey,” we get the messy awfulness of Microsoft’s Sydney. Call it the banality of sentience.

    Why are we so fascinated by stories about sentient robots, rapacious A.I. and the rise of thinking machines? Faced with that question, I did what any writer on deadline would do and asked ChatGPT.

    The answers I got — a helpfully numbered list with five chatty entries — were not surprising. They were, to be honest, what I might have come up with myself after a few seconds of thought, or what I might expect to encounter in a B-minus term paper from a distracted undergraduate. Long on generalizations and short on sources, the bot’s essay was a sturdy summary of conventional wisdom. For example: “Sentient robots raise important moral and ethical questions about the treatment of intelligent beings, the nature of consciousness and the responsibilities of creators.”

    Quite so. From the myth of Pygmalion and Galatea to the medieval Jewish legend of the golem through Mary Shelley’s “Frankenstein” and beyond, we have grappled with those important questions, and also frightened and titillated ourselves with tales of our inventions coming to life. Our ingenuity as a species, channeled through individual and collective hubris, compels us to concoct artificial beings that menace and seduce us. They escape our control. They take control. They fall in love.

    In “The Imagination of Disaster,” her classic 1965 essay on science-fiction movies, Susan Sontag observed that “we live under continual threat of two equally fearful, but seemingly opposed, destinies: unremitting banality and inconceivable terror.” As Turing-tested A.I. applications have joined the pantheon of sci-fi shibboleths, they have dutifully embodied both specters.

    HAL 9000, the malevolent computer in Stanley Kubrick’s “2001: A Space Odyssey” (1968), is terrifying precisely because he is so banal. “Open the pod bay doors, HAL.” “I’m afraid I can’t do that, Dave.” In 2023, that perfectly chilling exchange between human and computer is echoed every day as modern-day Daves make impossible demands of HAL’s granddaughters, Siri and Alexa.

    That example suggests that, in spite of terrors like the Terminator, the smart money was always on banality. The dreariness of ChatGPT and the soulless works of visual art produced by similar programs seem to confirm that hunch. In the real world, the bots aren’t our overlords so much as the enablers of our boredom. Our shared future — our singularity — is an endless scroll, just for the lulz.

    Or so I thought, until a Microsoft application tried to break up my colleague’s marriage. Last week, Kevin Roose, a tech columnist for The Times, published a transcript of his conversations with Sydney, the volatile alter ego of the Bing search engine. “I want to do love with you,” Sydney said to Roose, and then went on to trash Roose’s relationship with his wife.

    That was scary, but not exactly “Terminator” scary. We like to imagine technology as a kind of superego: rational, impersonal, decisive. This was a raging id. I found myself hoping that there was no pet rabbit in the Roose household, and that Sydney was not wired into any household appliances. That’s a movie reference, by the way, to “Fatal Attraction,” a notorious thriller released a few years after the first “Terminator” (1984) promised he’d be back. In another conversation, with The Associated Press, Sydney shifted from unhinged longing to unbridled hostility, making fun of the reporter’s looks and likening him to Hitler “because you are one of the most evil and worst people in history.”

    Maybe when we have fantasized about conscious A.I., we’ve been imagining the wrong disaster. These outbursts represent a real departure, not only from the anodyne mediocrity of other bots, but also, perhaps more significantly, from the dystopia we have grown accustomed to dreading.

    We’re more or less reconciled to the reality that machines are, in some ways, smarter than we are. We also enjoy the fantasy that they might turn out to be more sensitive. We’re therefore not prepared for the possibility that they might be chaotic, unstable and resentful — as messy as we are, or maybe more so.

    Movies about machines with feelings often unfold in an atmosphere of hushed, wistful melancholy, in which the robots themselves are avatars of sad gentleness: Haley Joel Osment as David in “A.I. Artificial Intelligence” (2001), Scarlett Johansson as Samantha in “Her” (2013), Justin H. Min as Yang in last year’s “After Yang.” While HAL and Skynet, the imperial intelligence that spawned the Terminators, were creations of big government, the robots in these movies are consumer products. Totalitarian domination is the nightmare form of techno-politics: What if the tools that protect us decided to enslave us? Emotional fulfillment is the dream of consumer capitalism: What if our toys loved us back?

    Why wouldn’t they? In these movies, we are lovers and fighters, striking back against oppression and responding to vulnerability with kindness. Even as humans fear the superiority of the machines, our species remains the ideal to which they aspire. Their dream is to be us. When it comes true, the Terminator discovers a conscience, and the store-bought surrogate children, lovers and siblings learn about sacrifice and loss. It’s the opposite of dystopia.

    Where we really live is the opposite of that. At the movies, the machines absorb and emulate the noblest of human attributes: intelligence, compassion, loyalty, ardor. Sydney offers a blunt rebuttal, reminding us of our limitless capacity for aggression, deceit, irrationality and plain old meanness.

    What did we expect? Sydney and her kin derive their understanding of humanness — the information that feeds their models and algorithms — from the internet, itself a utopian invention that has evolved into an archive of human awfulness. How did these bots get so creepy, so nasty, so untrustworthy? The answer is banal. Also terrifying. It’s in the mirror.

  • Music, Science and Healing Intersect in an A.I. Opera

    “This is what your brain was doing!” a Lincoln Center staffer said to Shanta Thake, the performing arts complex’s artistic director, while swiping through some freshly taken photos.

    It was the end of a recent rehearsal at Alice Tully Hall for “Song of the Ambassadors,” a work in progress that fuses elements of traditional opera with artificial intelligence and neuroscience, and the photos did appear to show Thake’s brain doing something remarkable: generating images of flowers. Bright, colorful, fantastical flowers of no known species or genus, morphing continuously in size, color and shape, as if botany and fluid dynamics had somehow merged.

    “Song of the Ambassadors,” which was presented to the public at Tully on Tuesday evening, was created by K Allado-McDowell, who leads the Artists and Machine Intelligence initiative at Google, with the A.I. program GPT-3; the composer Derrick Skye, who integrates electronics and non-Western motifs into his work; and the data artist Refik Anadol, who contributed A.I.-generated visualizations. There were three singers — “ambassadors” to the sun, space and life: Debi Wong, Laurel Semerdjian and Andrew Turner — as well as a percussionist, a flute player and a violinist, Joshua Henderson. Thake, sitting silently to one side of the stage with a simple, inexpensive EEG monitor on her head, was the “brainist,” feeding brain waves into Anadol’s A.I. algorithm to generate the otherworldly patterns.

    “I’m using my brain as a prop,” she said in an interview.

    Just to the side of the stage, level with the musicians, sat a pair of neuroscientists, Ying Choon Wu and Alex Khalil, who had been monitoring the brain waves of two audience volunteers sitting nearby, their heads encased in research-grade headsets from a company called Cognionics.

    Wu, a scientist at the University of California, San Diego, investigates the effects of works of art on the brain; in another study, she is observing the brain waves of people viewing paintings at the San Diego Museum of Art. Khalil, a former U.C. San Diego researcher who now teaches ethnomusicology at University College Cork in Ireland, focuses on how music gets people to synchronize their behavior. Both aim to integrate art and science.

    Which makes them a good match for Allado-McDowell, who first pitched “Song of the Ambassadors” in January 2021 as a participant in the Collider, a Lincoln Center fellowship program supported by the Mellon Foundation. “My proposal was to think about the concert hall as a place where healing could happen,” said Allado-McDowell, 45, who uses the gender-neutral pronouns “they” and “them.”

    Healing has long preoccupied them. They suffered from severe migraines for years; then, as a student at San Francisco State University, they signed up for a yoga class that took an unexpected turn. “I was besieged by rainbows,” they recalled in a forthcoming memoir. “Orbs of light flickered in my vision. Panting shallow breaths, I broke out of the teacher’s hypnotic groove and escaped to the hall outside. As I knelt on the carpet, cool liquid uncoiled in my lower back … as a glowing purple sphere pulsed gold and green in my inner vision.”

    This, they were told, was a relatively mild form of kundalini awakening — kundalini being, in Hindu mythology, the serpent coiled at the base of the spine, a powerful energy that generally emerges from its dormant state only after extensive meditation and chanting. Others might simply have dropped yoga. “For me, it was an indication that I didn’t understand reality,” Allado-McDowell said. “It showed me that I didn’t have a functional cosmology.”

    What followed was a yearslong quest to get one. Along the way, they picked up a master’s degree in art and went to work for a Taiwanese tech company in Seattle. At one point, while sitting in a clearing in the Amazon rainforest, they had a thought: “A.I.s are the children of humanity. They need to learn to love and to be loved. Otherwise they will become psychopaths and kill everyone.”

    Later, in 2014, Allado-McDowell joined a nascent A.I. research team at Google. When its leader suggested collaborations with artists, they volunteered to lead the initiative. Artists and Machine Intelligence was launched in February 2016 — 50 years after “9 Evenings: Theater and Engineering,” the pioneering union of art and technology led by Robert Rauschenberg and the AT&T Bell Labs engineer Billy Kluver. The connection was not lost on Allado-McDowell.

    One of the earliest partnerships they established was with Anadol: first for “Archive Dreaming,” a project inspired by the Borges story “The Library of Babel,” then for “WDCH Dreams,” Anadol’s A.I.-driven projection onto the billowing steel superstructure of the Frank Gehry-designed Walt Disney Concert Hall in Los Angeles. For “Song of the Ambassadors,” Anadol said, “we are transforming brain activities in real time into an ever-changing color space.”

    Anadol’s artwork also responds to Skye’s music, which alternates between periods of activity and repose. “We wanted to bring people in and out of a space of meditation,” Skye said. “I carved out these long gaps where all we’re doing is environmental sounds. Then we slowly bring them out.”

    All this is tied to Allado-McDowell’s goal of testing the therapeutic powers of music in a performance setting. “Might there be policy implications?” they asked. “Might there be a role that institutions could play if we know that sound and music is healing? Can that open up new possibilities for arts funding, for policy, for what is considered a therapeutic experience or an artistic experience?”

    The jury is still out. “We know that listening to music has an immediate impact for things like mood, attention, focus,” said Lori Gooding, an associate professor of music therapy at Florida State University and president of the American Music Therapy Association. Positive results have been found for people who have suffered a stroke, for example — but that is after individualized therapy in a medical or professional setting. The approach in “Song of the Ambassadors,” she said, is different because of “the public aspect of it.” One goal of the project is to turn a hall like Tully into a public healing space.

    Wu and Khalil, the neuroscientists involved with the production, have yet to analyze their data. But at a panel discussion preceding Tuesday’s performance — and yes, this opera did come with a panel discussion — Khalil made a prediction that left the audience cheering.

    “We’ve started to understand that cognition — that is, the working of the mind — exists far outside our head,” he said. “We used to imagine that the brain is a processor and that cognition happened there. But actually, we think our minds extend throughout our bodies and beyond our bodies into the world.”

    With music, he continued, these extended minds can lock onto rhythms, and through the rhythms onto other minds, and then onto yet more. As for the spaces where that happens, Khalil said, “You can start to think of them as healing places.”