AMPACT
AUTOMATIC MUSIC PERFORMANCE ANALYSIS AND COMPARISON TOOLKIT
pyAMPACT is a Python package for aligning score and audio representations of music and estimating performance parameters.
This NEH-funded project not only facilitates the estimation of performance data, but also provides a suite of tools for reading, audio-linking, and writing symbolic music files in a variety of formats.
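To illustrate the kind of score-to-audio alignment the toolkit is built around (this is a generic sketch using librosa and pretty_midi, not pyAMPACT's own API; file names and parameters are illustrative assumptions), one common approach aligns chroma features from a symbolic score and a recorded performance with dynamic time warping:

```python
# Generic score-to-audio alignment sketch (not pyAMPACT's API):
# chroma features from a MIDI "score" and an audio recording are
# aligned with dynamic time warping (DTW).
import librosa
import pretty_midi

AUDIO_PATH = "performance.wav"  # hypothetical recording
MIDI_PATH = "score.mid"         # hypothetical symbolic score
HOP = 512

# Chroma from the recorded performance
y, sr = librosa.load(AUDIO_PATH)
chroma_audio = librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=HOP)

# Chroma from the symbolic score, sampled at the same frame rate
frame_rate = sr / HOP
midi = pretty_midi.PrettyMIDI(MIDI_PATH)
chroma_score = librosa.util.normalize(midi.get_chroma(fs=frame_rate), axis=0)

# DTW yields a frame-level correspondence between score and audio
_, wp = librosa.sequence.dtw(X=chroma_score, Y=chroma_audio, metric="cosine")

# Convert the warping path to (score_time, audio_time) pairs in seconds
alignment = [(s / frame_rate, librosa.frames_to_time(a, sr=sr, hop_length=HOP))
             for s, a in wp[::-1]]
print(alignment[:5])
```

Once such a correspondence exists, note-level performance parameters (timing, dynamics, tuning) can be read off the audio at the aligned score positions.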
Computational Bioacoustics
DEVELOPING DEEP LEARNING MODELS FOR BIOACOUSTIC MONITORING
Across North America, Arctic and boreal regions have been warming at a rate two to three times higher than the global average. At the same time, human development continues to encroach and intensify, primarily due to demand for natural resources, such as oil and gas.
This NSF-funded project is developing computational techniques to analyze large volumes of soundscape data from autonomous recording networks installed in Arctic-boreal Alaska and northwestern Canada.
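A typical building block for this kind of analysis (shown here as a minimal sketch under assumed settings, not the project's actual model) is a small convolutional network that labels fixed-length clips from the recorders based on their log-mel spectrograms:

```python
# Illustrative clip classifier for bioacoustic monitoring (assumed
# architecture and parameters, for exposition only).
import torch
import torch.nn as nn
import torchaudio

class ClipClassifier(nn.Module):
    """Small CNN that labels recorder clips, e.g. target call present/absent."""

    def __init__(self, sample_rate=16000, n_classes=2):
        super().__init__()
        # Log-mel spectrograms are a common input representation for
        # bioacoustic event detection.
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_mels=64)
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, waveform):
        logmel = torch.log(self.melspec(waveform) + 1e-6).unsqueeze(1)
        return self.classifier(self.features(logmel).flatten(1))

# Example: score a batch of four 10-second clips sampled at 16 kHz.
clips = torch.randn(4, 16000 * 10)
print(ClipClassifier()(clips).shape)  # torch.Size([4, 2])
```

Models of this kind can be run over continuous recordings in sliding windows to flag sound events across very large archives.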
Integrating Domain Knowledge into Deep Learning Models
CONSTRAINING DEEP LEARNING MODELS FOR MUSIC AND AUDIO
Humans are able to learn with greater efficiency than machine learning models, in large part because they learn not just from exposure, but also from domain knowledge, which includes codified knowledge and guided practice.
The goal of the project is to help machines learn more efficiently by mimicking the ways in which humans learn, as well as to develop models with increased accuracy and interpretability. A central hypothesis underlying this project is that the types of pedagogies that are useful for efficiently teaching humans are also useful for teaching machines.
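One concrete way domain knowledge can constrain a model (a minimal sketch of an assumed setup, not the project's actual method) is to add a penalty that encodes a codified rule alongside the standard data loss; here, a hypothetical mask of pitch classes permitted by the notated key discourages predictions that violate it:

```python
# Illustrative knowledge-constrained loss (hypothetical example).
import torch
import torch.nn.functional as F

def knowledge_constrained_loss(logits, targets, allowed_pitch_classes, weight=0.1):
    """logits: (batch, 12) pitch-class scores; targets: (batch,) class indices;
    allowed_pitch_classes: (12,) binary mask derived from domain knowledge."""
    data_loss = F.cross_entropy(logits, targets)
    probs = logits.softmax(dim=-1)
    # Penalize probability mass assigned to pitch classes the rule disallows.
    rule_penalty = (probs * (1.0 - allowed_pitch_classes)).sum(dim=-1).mean()
    return data_loss + weight * rule_penalty

# Example: a C-major mask (C D E F G A B) acts as the "codified knowledge".
mask = torch.tensor([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1], dtype=torch.float32)
logits = torch.randn(8, 12)
targets = torch.randint(0, 12, (8,))
print(knowledge_constrained_loss(logits, targets, mask))
```

The weight on the rule term plays the role of how strongly the "teacher" insists on the rule: a larger weight makes the model defer more to the codified knowledge and less to the data alone.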
SingWell
GROUP SINGING FOR PEOPLE LIVING WITH COMMUNICATION CHALLENGES
Our lab contributes acoustic analysis expertise and techniques to the SSHRC-funded SingWell partnership.