After teaching AI to draw and paint with AutoDraw, Google has set its sights on conquering another art form: music.
The company’s AI research team, Google Magenta, announced a new project in April called Neural Synthesizer, or NSynth, which generates audio using deep neural networks. That technology will be demonstrated at Durham, North Carolina’s annual arts and technology festival, Moogfest, later this week.
To create music, NSynth uses a dataset of sounds from individual instruments and then blends them to create hybrid sounds. According to the announcement, NSynth gives artists "intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer."
The resulting sound is not like playing two of the individual sounds together, Cinjon Resnick, a member of the Magenta team, told Wired. Instead, the software produces an entirely new sound that would be impossible or nearly impossible to create otherwise. The end product resembles sounds that are "in between" other instruments, combined in a way that can only be done digitally.
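The "in between" blending Resnick describes can be pictured as interpolation in the model's latent space: each note is encoded into an embedding, the embeddings are mixed, and the mix is decoded back into audio. The sketch below is a toy illustration of that mixing step with stand-in NumPy arrays, not the actual NSynth API; all names and shapes are illustrative.

```python
import numpy as np

def interpolate_embeddings(emb_a, emb_b, weight=0.5):
    """Linearly blend two latent embeddings.

    weight=0.0 returns pure A, weight=1.0 returns pure B,
    and values in between yield a hybrid embedding that a
    decoder could turn into an "in between" sound.
    """
    return (1.0 - weight) * emb_a + weight * emb_b

# Stand-in embeddings for, say, a flute note and a bass note
# (shape: time steps x embedding channels — illustrative only).
flute = np.random.default_rng(0).normal(size=(125, 16))
bass = np.random.default_rng(1).normal(size=(125, 16))

hybrid = interpolate_embeddings(flute, bass, weight=0.5)
print(hybrid.shape)  # same shape as the inputs: (125, 16)
```

The key point is that the blend happens on the learned representations, not on the raw waveforms, which is why the result is a new timbre rather than two sounds layered on top of each other.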
The code for the project is open source, which means anyone can download, modify, and use it.
The Magenta team has already produced several interesting pieces of music, one of which even won Best Demo at NIPS, an industry conference for neural networks and machine learning projects.
Several other systems, such as IBM's Watson, have been working on similar projects for music made by or with AI. For now, Google itself is not offering the software as a product; NSynth is meant to be a dataset for other developers to play with and try creative projects.
But don't worry, Google is not trying to get rid of human musicians with NSynth. The team is focused on making new sounds that are "intuitive" and "expressive," and wants to work with musicians rather than replace them.