Google built a hardware interface for its AI music maker

Music and technology go hand in hand; drum machines and modular synths are just two of the more recent music technologies to emerge. Last year, Magenta, a Google Brain project, created NSynth (Neural Synthesizer), a set of AI and machine learning tools that learn the characteristics of sounds and generate entirely new sounds from those attributes. Now, in collaboration with Google Creative Lab, the team has built NSynth Super, a hardware interface for NSynth that combines up to four source sounds at once to algorithmically create new ones.

The team recorded 16 sound sources across a 15-pitch range as input to the NSynth algorithm, which produced more than 100,000 newly created sounds rather than simple blends of the originals. These new sounds were then loaded onto the NSynth Super, which has a touch screen that musicians can drag their fingers across to play them. It's still early days for this music tech, but the project is open source; the code and design files are on GitHub if you want to build your own.
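The touch-screen idea can be sketched roughly in code: each corner of the pad holds one source sound, and the finger position mixes them. The minimal Python sketch below assumes four pre-computed latent embeddings and a separate decoder step; the instrument names, array shapes, and `blend_embeddings` helper are illustrative stand-ins, not the actual Magenta/NSynth API.

```python
import numpy as np

# Hypothetical latent embeddings for four corner source sounds.
# In an NSynth-style setup each sound is encoded into a temporal
# embedding (here: [time, channels]); random arrays stand in for them.
rng = np.random.default_rng(0)
corner_embeddings = {
    "flute":  rng.normal(size=(125, 16)),  # corner (0, 0)
    "organ":  rng.normal(size=(125, 16)),  # corner (1, 0)
    "bass":   rng.normal(size=(125, 16)),  # corner (0, 1)
    "mallet": rng.normal(size=(125, 16)),  # corner (1, 1)
}

def blend_embeddings(x, y, corners):
    """Bilinearly mix four corner embeddings for a touch position (x, y) in [0, 1]^2."""
    weights = {
        "flute":  (1 - x) * (1 - y),
        "organ":  x * (1 - y),
        "bass":   (1 - x) * y,
        "mallet": x * y,
    }
    # Weights sum to 1, so the result stays on the same scale as the inputs.
    return sum(w * corners[name] for name, w in weights.items())

# Dragging a finger across the pad corresponds to moving (x, y); each
# position yields a distinct blended embedding, which a decoder (in
# NSynth's case, a WaveNet-style model) would turn back into audio.
mixed = blend_embeddings(0.25, 0.7, corner_embeddings)
print(mixed.shape)  # (125, 16)
```

In practice the blended embeddings would be precomputed and decoded offline, which is why the device ships with the sounds already generated rather than synthesizing them live.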

Source: Google Brain, Google

Source: Engadget

Author: Daily Tech Whip

