This could be bad news for songwriters: they may be replaced by machine learning within a few years. Google has already taken a step in that direction with a technology called “Magenta”. Soon, writing and composing a song may require very little human time; it could be possible at the click of a button.

So how does it work?

The first and most important requirement for writing and composing a song is a dataset (previously known data used to produce new results). Magenta first looks for matches in the dataset and then produces results based on those prior inputs and outputs. Music is all about emotion, so the output is a set of chords representing the emotions you wanted.
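To give a feel for the idea of learning from prior data and then generating something new, here is a deliberately simple sketch. Magenta uses deep learning models, not this toy Markov chain, and the chord progressions below are invented for illustration:

```python
import random

# A tiny "dataset" of chord progressions (invented for illustration).
progressions = [
    ["C", "G", "Am", "F"],
    ["C", "Am", "F", "G"],
    ["Am", "F", "C", "G"],
]

# Learn which chord tends to follow which in the dataset.
transitions = {}
for prog in progressions:
    for a, b in zip(prog, prog[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Generate a new chord sequence from the learned transitions."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        options = transitions.get(chords[-1])
        if not options:
            break  # no known continuation from this chord
        chords.append(rng.choice(options))
    return chords

print(generate("C", 4))
```

The point is the workflow, not the model: ingest prior examples, learn the patterns, then produce a new sequence in one call.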

What Does the Google Brain Team Say?

Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it’s also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models.

Datasets Available

Maestro Dataset:

MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) is a dataset composed of over 172 hours of virtuosic piano performances! The Magenta team partnered with the organizers of the International Piano-e-Competition for the raw data used in this dataset. It contains over a week of paired audio and MIDI recordings from nine years of the International Piano-e-Competition.
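MAESTRO ships with a metadata file describing each performance. The sketch below shows how you might browse records of that shape; the field names follow the dataset's published metadata (composer, split, year, paired MIDI/audio filenames, duration in seconds), but the sample records themselves are made up:

```python
# Invented sample records in the style of MAESTRO metadata rows.
records = [
    {"canonical_composer": "Franz Liszt", "split": "train",
     "year": 2011, "midi_filename": "2011/liszt.midi",
     "audio_filename": "2011/liszt.wav", "duration": 612.5},
    {"canonical_composer": "Frédéric Chopin", "split": "test",
     "year": 2014, "midi_filename": "2014/chopin.midi",
     "audio_filename": "2014/chopin.wav", "duration": 489.0},
]

def total_hours(rows, split=None):
    """Sum performance durations (in hours), optionally for one split."""
    secs = sum(r["duration"] for r in rows
               if split is None or r["split"] == split)
    return secs / 3600.0

print(round(total_hours(records), 3))          # all performances
print(round(total_hours(records, "test"), 3))  # test split only
```

Note the pairing: every record points at both a MIDI file and the matching audio file, which is what makes MAESTRO useful for transcription work.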

Demo application: Wave2Midi2Wave

Download Library

NSynth Dataset:

A large-scale and high-quality dataset of annotated musical notes. NSynth is an audio dataset containing 305,979 musical notes, each with a unique pitch, timbre, and envelope, drawn from 1,006 instruments in commercial sample libraries.

They also annotated each of the notes with three additional pieces of information based on a combination of human evaluation and heuristic algorithms:

Source: The method of sound production for the note’s instrument. This can be acoustic, electronic, or synthetic.

Family: The high-level family of which the note’s instrument is a member. Each instrument is a member of exactly one family.

Qualities: Sonic qualities of the note. Each note is annotated with zero or more qualities.
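In practice those annotations look like per-note records you can filter. The sketch below mirrors that structure; the field names reflect NSynth's documented features, but the example notes are invented:

```python
# Invented sample notes in the style of NSynth annotations.
notes = [
    {"pitch": 60, "instrument_source_str": "acoustic",
     "instrument_family_str": "guitar", "qualities_str": ["bright"]},
    {"pitch": 64, "instrument_source_str": "electronic",
     "instrument_family_str": "keyboard", "qualities_str": []},
    {"pitch": 67, "instrument_source_str": "synthetic",
     "instrument_family_str": "guitar",
     "qualities_str": ["dark", "long_release"]},
]

def by_family(rows, family):
    """Return all notes whose instrument belongs to the given family."""
    return [n for n in rows if n["instrument_family_str"] == family]

guitars = by_family(notes, "guitar")
print([n["pitch"] for n in guitars])  # → [60, 67]
```

Because each instrument belongs to exactly one family, filters like this partition the dataset cleanly, while the zero-or-more qualities allow more fine-grained slicing.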

Read here to learn how to use it.

Quick, Draw! Dataset

You must try the demo here. They have collected 50 million drawings and are collecting more as people play their game. You can select any drawing and see the many ways it has been drawn. When you play the game, you have 20 seconds to draw the object it names. It recognizes the structure of your drawing as soon as you sketch it, announces its guesses aloud, and immediately shows you the matching drawings it has collected. This is kind of cool!
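Under the hood, each Quick, Draw! drawing is stored as a sequence of pen strokes. In the released simplified files a drawing is a list of strokes, each stroke being a pair of aligned x/y coordinate lists; the record below follows that shape, with invented values:

```python
# A Quick, Draw!-style record (coordinate values invented).
drawing = {
    "word": "cat",
    "recognized": True,
    "drawing": [
        [[10, 40, 90], [5, 60, 20]],  # stroke 1: three points
        [[15, 30], [80, 85]],         # stroke 2: two points
    ],
}

def point_count(record):
    """Count how many points the player drew across all strokes."""
    return sum(len(xs) for xs, ys in record["drawing"])

print(point_count(drawing))  # → 5
```

Representing drawings as stroke sequences rather than bitmaps is what lets the game recognize a sketch while it is still being drawn.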

Play the game

Demo/Working Projects:

Piano Genie: An intelligent controller that maps 8-button input to a full 88-key piano in real time. You can try it out yourself via their interactive web demo.

Piano Transcription in the Browser: Try Demo. This app converts raw audio to MIDI using Onsets and Frames, a neural network trained for polyphonic piano transcription. Record yourself playing piano or choose an audio file with solo piano from your device to transcribe!

Multitrack MusicVAE: Try Demo. The model generates individual measures with up to 8 different instruments, conditioned on the underlying chord and a latent vector. By holding the latent vector fixed and changing the underlying chord, the model can generate an arrangement over a chord progression with a consistent style.

Find More Here
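To appreciate what Piano Genie's learned model does, compare it with the most naive alternative: simply spacing the 8 buttons evenly across the 88 keys. Piano Genie instead uses a neural network that picks musically sensible keys based on context; the sketch below is only the naive baseline:

```python
NUM_BUTTONS = 8
NUM_KEYS = 88

def naive_key_for_button(button):
    """Map button 0..7 to a key 0..87 by even spacing.

    This is NOT what Piano Genie does; its neural model chooses keys
    dynamically so that the same button can play different notes
    depending on the musical context.
    """
    if not 0 <= button < NUM_BUTTONS:
        raise ValueError("button must be in 0..7")
    return round(button * (NUM_KEYS - 1) / (NUM_BUTTONS - 1))

print([naive_key_for_button(b) for b in range(NUM_BUTTONS)])
# → [0, 12, 25, 37, 50, 62, 75, 87]
```

The fixed mapping can only ever reach 8 of the 88 keys; the learned controller is what makes all 88 reachable from 8 buttons.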
