Institute of Computational Perception

Whither Music?

Exploring Musical Possibilities via Machine Simulation

Putting the project into context: from "Con Espressione" to "Whither Music?":


Project Summary

Vision

"Whither Music?" was the motto of Leonard Bernstein's 1973 Norton Lectures at Harvard, opens an external URL in a new window ("The Unanswered Question"), where he analysed the musical developments that led to what he called the 20th Century Crisis of Music: the gradual decline of tonality, driven by a takeover of tonal ambiguity in the late 19th and early 20th centuries, eventually leading to complete abandonment of tonality in Schönberg's dodecaphony - a historical process that Bernstein portrays as equally inevitable and problematic.

WHITHER MUSIC? is a project that aims to establish model-based computer simulation (via methods of AI, (deep) Machine Learning, and probabilistic modelling) as a viable methodology for asking questions about musical processes, developments, possibilities, and alternatives - for music research, for didactic purposes, and for creative music exploration scenarios. Computer simulation here means the design of predictive or generative computational models of music (of certain styles), learned from large corpora, and their purposeful and skilful application to answer, e.g., "what if" questions, make testable predictions, or generate musical material for further musicological or aesthetic analysis. We believe that this would open new possibilities for music research, education, and creative engagement with music, some of which will be explored further in the project.

Research Goals

This vision of purposeful application of computational models dictates the central methodological principles for our research:
veridical modeling and simulation require stylistically faithful, tightly controllable, transparent, and explainable models. These requirements, in turn, motivate us to develop and pursue a musically informed approach to computational modeling, as an alternative to the currently prevailing trend of end-to-end learning with huge, opaque neural networks. The cornerstones of our approach will be structured modeling (rather than end-to-end learning), multi-level and multi-scale modeling and structural projection (rather than note-by-note prediction), and exploiting musical knowledge (rather than purely data-driven inductive learning) at all levels - including the design of appropriately informed model architectures and loss functions.
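To give a purely illustrative idea of what a "musically informed" loss function could look like (a hypothetical sketch, not the project's actual method): one might combine a standard next-note cross-entropy term with a penalty on the probability mass a model assigns to out-of-key pitches. All names and the weighting factor below are assumptions made for this example.

# Hypothetical illustration only: a next-note prediction loss with an added
# music-theoretic penalty term (probability mass placed on out-of-key pitches).
import torch
import torch.nn.functional as F

def informed_loss(logits, targets, in_key_mask, key_weight=0.1):
    # logits: (batch, num_pitches) model scores for the next pitch
    # targets: (batch,) ground-truth pitch indices
    # in_key_mask: (batch, num_pitches), 1.0 for pitches in the current key, else 0.0
    ce = F.cross_entropy(logits, targets)  # purely data-driven term
    probs = torch.softmax(logits, dim=-1)
    out_of_key = (probs * (1.0 - in_key_mask)).sum(dim=-1).mean()  # knowledge-based term
    return ce + key_weight * out_of_key

In the same spirit, musical knowledge could also enter through the model architecture itself, for instance by structuring predictions around keys, phrases, or voices rather than raw note sequences.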

Modeling Domains and Applications

In terms of modeling domains, we will be concerned with three types of computational models: models of music generation, of expressive performance, and of musical expectancy, mirroring the three major components in the system of music: the composer, the performer, and the listener.

In addition to developing fundamental machine learning and modeling methods, we will explore concrete simulation and application scenarios for our computer models, in the form of musicological studies, creative and didactic tools and exhibits, and public educational events, in cooperation with musicologists, music educators, and institutions from the creative arts and sciences sector.

At a fundamental level, the goal of this project is thus really two-fold: beyond developing the technology for, and demonstrating, controlled musical simulation for serious purposes, we wish to develop and propagate an alternative approach to AI-based music modeling, hoping to contribute to a re-orientation of the field of Music Information Research (MIR) towards more musically informed modeling - a mission we already started in our previous ERC project Con Espressione.

[And a final disclaimer, in case it is needed: of course, we will never seriously attempt to address Bernstein's Unanswered Question, or the possible alternative paths that music could have taken, with computational methods. This only served as a grand motivation, and to give our project its name.]

Project Details

Call identifier: ERC-2020-AdG

Project Number: 101019375

Project Period: Jan 2022 - Dec 2026

Funding Amount: € 2,500,000.00

Enjoy creative and fun experiments via our PAOW! Live Stream, directly from our music lab!

Trailer / Teaser Video:


Project Results: Publications, Resources, Presentations

Want to know more about the scientific work and results of the project?

Here's an up-to-date list of our scientific publications related to the project.

In addition to publishing our research code and experimental data along with our scientific papers (which is the norm nowadays in our field), we also provide a number of specific resources to the scientific community, in order to support and stimulate future research. These come in the form of meticulously curated, annotated datasets and of software libraries that we develop and maintain. The resources currently offered are described in the following papers (the titles and abstracts of the linked papers should give you a good idea of what they are about):

Datasets

1. Note-level Alignments to the ASAP Piano Performance Dataset:

2. The Batik-plays-Mozart Corpus:

Software Libraries and Specifications

Demonstrators, Videos, Online Interactive Demos

PAOW! - The Live Video Channel from our Piano Lab:

THE ACCOMPANION: (Co-)Expressive Human-AI Co-Performance:

SCHmUBERT: Constrained Music Generation:

"ON A JOURNEY TOGETHER" - Human-AI Co-Creation for the AI Song Contest 2022:

 

Science Shows for the General Public

WIENER VORLESUNGEN (Vienna Lectures), 2024:

ZIRKUS DES WISSENS (Circus of Knowledge), 2023:

SCIENCE MATINEE @ MAINS, HEIDELBERG, 2022:

2024-06-03: Gerhard Widmer and the Whither Music? team presented a scientific evening event at the Vienna City Hall, as part of the Wiener Vorlesungen lecture series; the event was broadcast live on TV by W24, and the archived livestream can also be watched on YouTube.

2023-03-02: Gerhard Widmer gave an evening show on "Measurable and Non-measurable Things in Music" at JKU's Circus of Knowledge (with two guest stars at the piano).

2022-10-23: Gerhard Widmer and team gave a public presentation ("Matinee") at the Mathematics-Informatics Station (MAINS) in Heidelberg, with a live demonstration of our ACCompanion, featuring Carlos Cancino Chacon on the piano. Organized by the Heidelberg Laureate Forum Foundation and IMAGINARY.

2022-07-25: Gerhard Widmer and the Whither Music? team gave a keynote presentation at the 31st International Joint Conference on Artificial Intelligence and 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022), Vienna, Austria. Here's the full presentation video. (Title: "AI & Music: On the Role of AI in Studying a Human Art Form")

Project Team & Research Opportunities

There are currently no open positions in this project.

News & Media Coverage

Acknowledgments

This project receives funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No 101019375 (Whither Music?).