Composer Tod Machover speaks on collaborative AI, progress of media technology
While discussing his City Symphony projects, musician and inventor Tod Machover presents an image of a New York Times headline reading, “How a Philly Cheesesteak Goes From the Grill to Carnegie Hall.” The MIT Media Lab faculty member spoke about the future of generative artificial intelligence in the field of music at Lani Hall on Wednesday evening. (Leydi Cris Cobo Cordon/Daily Bruin senior staff)
Dec. 7, 2023 8:22 p.m.
Tod Machover is generating new collaborations.
On Wednesday evening, the MIT Media Lab faculty member and Musical America’s 2016 Composer of the Year gave a two-hour presentation at the Herb Alpert School of Music. Titled “Collaborative Composing: Sharing Authorship with People and AI,” the event explored the utility of generative artificial intelligence for music composition and production. Machover was introduced by a lineup that included Inaugural Dean Eileen Strempel, Chair of the Music Industry Program Robert Fink and adjunct professor Don Franzen. The event was attended by the Forensic Musicology class, a course that Franzen said focuses on the intersection of music, law and technology. At 6:18 p.m., Machover took the stage and reflected on the ideas presented in the opening speeches.
“What I’d like to do is give you a little bit of a through line of things that I’ve been interested in,” Machover said. “The sort of through line that I wanted to connect things with is this idea of collaboration because, after all, everything we do has to do with collaborating with something or with someone – whether it’s with people we care about (or) whether it’s with machines.”
To kick off the conversation, Machover showed attendees a screengrab of what he said was an AI-generated article recapping the evening’s event before it had even commenced. Its existence, he added, made him question what the future of the technology would look like without collaboration. With that in mind, he said the only way to ensure a productive world is by maintaining partnerships with these new machines, and that his talk would outline the ways he believes the technology can be used to move in a positive direction.
Reflecting on his time at Juilliard, Machover said composing was initially seen as an act done in isolation. Later on, technology became a source of freedom for him, he said, as machines allowed him to shape and mold music in an interactive manner. However, he said the newfound accessibility of music technology can also harm the art form, as it can lead to rushed works.
“If a machine can write an article about the talk I haven’t given yet, a machine also … (can) generate hours and hours of music without very much instruction at all,” Machover said. “So what is the kind of collaboration that this new AI world allows us to participate in? And what would we have to change to make it an even more ideal world?”
One such advancement in the blend of music and technology is hyperinstruments, which the MIT Media Lab describes as instruments that offer more controls for performers. After summarizing multiple related opera projects, Machover introduced several works created at the intersection of the fields of music and medicine. Other technologies, such as Hyperscore, give more individuals the ability to create their own pieces, he said. Complete with bright colors and lines, he said the simple interface can even be used by children. Furthermore, Machover introduced the collection of City Symphony projects, which he said take inspiration from both the music scenes and natural sounds of any given location.
After reflecting on major advancements in music technology, including the development of the Musical Instrument Digital Interface, Machover said generative AI is a rapidly moving technology. He said although it is able to replicate and continue notes, AI struggles to capture the humanity of emotion in music. Machover said the technology currently being developed has two primary downfalls: its goal of replicating pre-existing musicians and its tendency to generate derivative works without rhyme or reason.
“They don’t know anything about musical structure,” Machover said. “They don’t really know how and why music works. They don’t know how musical emotion works, why music affects us the way it does. They don’t really know anything about the model or how performance and interpretation and collaboration and improvisation work. They’re not adaptable, so they don’t learn as they progress.”
To remedy these issues, Machover said musical common sense needs to be incorporated into these technologies. He said there are various forms these systems could take, such as a technology that is reactive to inputs from the performer. Ultimately, Machover said the right balance in collaboration with AI is unique to each individual.
“I’m not particularly interested, personally, in pushing a button and having a piece of music just generate on its own,” Machover said. “If you put some of your own personality in to shape something, it’s much more powerful. I’m really interested in using these systems to discover something, to find the sound, to find an idea, to find a relationship that’s really interesting … There’s a sweet spot, which isn’t random and which isn’t exactly copying, and it’s a different sweet spot, depending on who you are.”