
New generative AI tools open the doors of music creation


This work was made possible by core research and engineering efforts from Andrea Agostinelli, Zalán Borsos, George Brower, Antoine Caillon, Cătălina Cangea, Noah Constant, Michael Chang, Chris Deaner, Timo Denk, Chris Donahue, Michael Dooley, Jesse Engel, Christian Frank, Beat Gfeller, Tobenna Peter Igwe, Drew Jaegle, Matej Kastelic, Kazuya Kawakami, Pen Li, Ethan Manilow, Yotam Mann, Colin McArdell, Brian McWilliams, Adam Roberts, Matt Sharifi, Ian Simon, Ondrej Skopek, Marco Tagliasacchi, Cassie Tarakajian, Alex Tudor, Victor Ungureanu, Mauro Verzetti, Damien Vincent, Luyu Wang, Björn Winkler, Yan Wu, and Mauricio Zuluaga.

MusicFX DJ was developed by Antoine Caillon, Noah Constant, Jesse Engel, Alberto Lalama, Hema Manickavasagam, Adam Roberts, Ian Simon, and Cassie Tarakajian in collaboration with our partners from Google Labs including Obed Appiah-Agyeman, Tahj Atkinson, Carlie de Boer, Phillip Campion, Sai Kiran Gorthi, Kelly Lau-Kee, Elias Roman, Noah Semus, Trond Wuellner, Kristin Yim, and Jamie Zyskowski. We give our deepest thanks to Jacob Collier, Ben Bloomberg, and Fran Haincourt for their valuable feedback throughout the development process.

Music AI Sandbox was developed by Andrea Agostinelli, George Brower, Ross Cairns (xWF), Michael Chang, Yeawon Choi, Chris Deaner, Jesse Engel, Reed Enger, Beat Gfeller, Tom Hume, Tom Jenkins, Max Edelmann (xWF), Drew Jaegle, DY Kim, David Madras, Hema Manickavasagam, Ethan Manilow, Yotam Mann, Colin McArdell, Chris Reardon, Felix Riedel, Adam Roberts, Arathi Sethumadhavan, Eleni Shaw, Sage Stevens, Amy Stuart, Luyu Wang, Pawel Wluka, and Yan Wu in collaboration with our partners in YouTube and Tech & Society.

Dream Track was developed by Andrea Agostinelli, Zalán Borsos, Geoffrey Cideron, Timo Denk, Michael Dooley, Christian Frank, Sertan Girgin, Myriam Hamed Torres, Matej Kastelic, Pen Li, Brian McWilliams, Matt Sharifi, Ondrej Skopek, Marco Tagliasacchi, Mauro Verzetti, and Mauricio Zuluaga, in collaboration with our partners in YouTube.

Special thanks to Aäron van den Oord, Tom Hume, Douglas Eck, Eli Collins, Mira Lane, Koray Kavukcuoglu, and Demis Hassabis for their insightful guidance and support throughout the research process. Thanks to Mahyar Bordbar and DY Kim for helping coordinate these efforts, as well as the YouTube Artist Partnerships team for their support in partnering with the music industry.

We also acknowledge the many other individuals who contributed across Google DeepMind and Alphabet, including our partners at YouTube.

