Sound Technology for a Noisy World
Whether you're a small company looking to add sophisticated features to your products, a media production team that needs a custom audio engine, or an artist exploring what's possible with sound, Heaviside Research can help. We have deep expertise in the latest techniques and tools, as well as thorough knowledge of classic approaches that have withstood the test of time.
We have hands-on experience building and deploying real-world systems, from data pipelines to art installations. We've worked with Fortune-100 companies, top universities, scrappy startups, and independent artists.
Get in touch with us at info@heavisideresearch.com.
Services
Algorithm Development
Extract Maximum Value from Signals and Time-Series Data
Hybrid DSP/AI Solutions
Balance Mathematical Soundness and Pragmatic Engineering
System Implementation
Interactive Installations
High-Performance Data Pipelines
Custom Hardware
Mechanical Design
Education and Training
Level Up Your Team's Signal Processing Knowledge
Adopt Data and Code Best Practices
Experience teaching at MIT, Columbia University, and Berklee College of Music
Selected Projects
spatial audio installation
Collective Echos
Collective Echos is a sound art installation that premiered at Ars Electronica 2023. The MIT Center for Constructive Communication approached us looking for a way to highlight the incredible stories they have recorded. The concept was to create separate spatial zones, each corresponding to a different voice telling its story.
We extended a video game audio engine to support multiple listeners and render the sonic landscape. A custom computer vision system tracks the position and orientation of each visitor's head, moving a matching listener in the virtual world. The spatialized audio is then transmitted wirelessly to the listeners' headphones.
Despite the technology under the hood, the user experience is simple: put on a pair of modified headphones and walk freely through the space. The tech becomes invisible, putting the focus wholly on the stories. Follow your ears.
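Under the hood, each tracked visitor becomes an independent listener in the virtual scene, and the renderer needs each source's direction expressed in that visitor's head frame. Here is a minimal Python sketch of that coordinate transform; the pose format and function names are illustrative, not the installation's actual Unity/Wwise code:

import numpy as np

def rotate_world_to_head(q, v):
    """Rotate vector v from world coordinates into the head frame,
    given the head orientation as a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([-x, -y, -z])  # conjugate: rotates world -> head
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def source_direction(listener_pos, listener_quat, source_pos):
    """Unit direction to a sound source in the listener's head frame,
    the per-listener input a binaural renderer needs every frame."""
    rel = np.asarray(source_pos, dtype=float) - np.asarray(listener_pos, dtype=float)
    rel_head = rotate_world_to_head(listener_quat, rel)
    return rel_head / np.linalg.norm(rel_head)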
Computer Vision
Unity Game Engine
Wwise Audio Engine
Mechanical Design
3D Printing
advanced algorithm research
Soundfield Resynthesis
For decades, engineers have had the technology to record sound in all directions simultaneously, enabling us to capture a surround-sound experience from a single listening position.
Soundfield Resynthesis takes this one step further: it captures a whole volumetric field of sound, freeing the listener to move through the space and listen from any perspective.
Movement is a fundamental part of how we hear the world. There has been abundant research on synthesizing these soundscapes in VR, AR, and gaming. This complementary technology lets us record the world the same way we experience it.
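For context, the classic all-directions capture referenced above is first-order Ambisonics: a plane wave with signal S arriving from azimuth theta and elevation phi is encoded into four channels (traditional B-format). This is a textbook grounding example, not the resynthesis technique itself:

\[
W = \frac{S}{\sqrt{2}}, \quad
X = S\cos\theta\cos\phi, \quad
Y = S\sin\theta\cos\phi, \quad
Z = S\sin\phi
\]

Soundfield Resynthesis goes beyond this single-point encoding to reconstruct the field over a volume.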
Source Separation
Acoustic Localization
Spatial Audio Rendering
Microphone Arrays
deep learning and ai
Music De-Mixing
In collaboration with an industry-leading developer of audio plugins and tools, we developed and trained an AI model that un-mixes a music track into its individual instruments. It was productized and released as part of a world-class tool for audio post-production and repair.
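For a sense of the general approach (a common mask-based technique, not necessarily the shipped product's architecture): a network predicts one soft mask per instrument over the mixture's spectrogram, and each masked spectrogram is inverted back to audio. In this Python sketch, model stands in for a hypothetical trained network:

import numpy as np
from scipy.signal import stft, istft

def demix(mix, model, fs=44100):
    # Analyze the mixture into a complex spectrogram.
    _, _, X = stft(mix, fs=fs, nperseg=4096)
    # Hypothetical network call: magnitude spectrogram in,
    # one soft mask in [0, 1] per instrument out.
    masks = model.predict(np.abs(X))
    stems = {}
    for name, mask in masks.items():
        # Apply the mask and resynthesize that instrument's audio.
        _, stem = istft(mask * X, fs=fs, nperseg=4096)
        stems[name] = stem
    return stems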
Source Separation
Deep Neural Networks and AI
Critical Listening
Multichannel DSP
data science
Biomedical Analysis
A client came to us with a collection of features extracted from on-body motion trackers. They wanted deeper insight into the structure of their data and whether it could be used to characterize injuries.
We organized and analyzed their data, delivering insights and visualizations that gave them a more complete understanding of it. Not only was this analysis useful internally, but the client included much of it directly in presentations to their customers, adding significant value to the work of their teams.
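A minimal sketch of this kind of clustering pipeline using scikit-learn; the feature matrix and cluster count here are hypothetical, not the client's data:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_recordings(features, k=4):
    # Put every motion-tracker feature on a comparable scale.
    X = StandardScaler().fit_transform(features)
    # Group recordings into k clusters whose structure can then
    # be inspected and visualized.
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return model.labels_, model.cluster_centers_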
Clustering
Statistical Analysis
Data Manipulation
Visualization
cross-reality
Hakoniwa
Our team built an augmented-reality experience, miniaturizing a real-world wetland and overlaying it on the user's surroundings. The name, Hakoniwa [箱庭], is an homage to Japanese miniature gardens.
The experience is a window into the real-time state of the wetland. Sensor data from the Tidmarsh Living Observatory is visualized on a living three-dimensional map. Environmental conditions such as temperature and humidity are sonified into a dynamic music composition, creating a listening experience that is both aesthetic and rooted in a distant natural space.
Participants can use their gaze to navigate a rich library of recordings collected during the wetland's restoration, facilitating natural exploration and a deeper connection to this place and its history.
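As a toy illustration of data sonification (the installation's actual mapping is richer; every range and parameter below is invented):

def sonify(temperature_c, humidity_pct):
    # Clamp to a plausible outdoor range, then map warmer
    # temperatures to a higher register (MIDI notes 48-84).
    t = max(0.0, min(temperature_c, 35.0))
    pitch = 48 + round(t / 35.0 * 36)
    # Map higher humidity to a denser musical texture.
    density = 20 + humidity_pct / 100.0 * 100
    return {"midi_pitch": pitch, "notes_per_minute": density}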
Augmented Reality
Unity Game Engine
Real-Time Sensor Data Pipeline
Data Sonification
This work was a collaboration with members of the Responsive Environments Group at the MIT Media Lab.
musical interface development
Snyderphonics Manta
The Snyderphonics Manta is a highly sensitive and expressive musical controller. It can drive synthesis, video playback, audio processing, or whatever else the player wants to control expressively in real time.
In collaboration with the inventor, Jeff Snyder, we wrote the core software driver that enables communication with the device over USB. We also created modules that make it easy to integrate the Manta into the Max/MSP computer music system.
Our software is the foundation of several user-facing tools used by the Manta community.
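One common shape for such a driver layer is a callback-registration API, where host software subscribes to pad events. A hypothetical Python analogue for illustration (the real driver is not written in Python, and its names differ):

class MantaDriver:
    """Hypothetical sketch of a callback-based driver API."""

    def __init__(self):
        self._pad_handlers = []

    def on_pad(self, handler):
        # Register a callback fired whenever a touch pad value changes.
        self._pad_handlers.append(handler)

    def _dispatch(self, pad_index, value):
        # Called from the USB read loop on each incoming pad message.
        for handler in self._pad_handlers:
            handler(pad_index, value)

driver = MantaDriver()
driver.on_pad(lambda pad, value: print(f"pad {pad}: {value}"))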
USB Communication
Low-Latency Hardware Interfaces
Custom Max/MSP Development
API Design
hardware and sensors
HearThere
HearThere is a head-tracking system intended for spatial audio applications. Our team designed a custom board for prototyping and validated its accuracy and performance with extensive experiments.
The device transmits the position and orientation of a listener's head to a mobile device over a low-latency BLE connection, allowing the phone to render realistic binaural spatial audio. Paired with bone-conduction headphones, the system lets the listener hear the rendered spatial soundscape mixed seamlessly with their actual environment. This combination of head tracking and audio transparency anticipated the mainstream adoption of similar technology (such as Apple's AirPods Pro) by five years.
HearThere was miniaturized and deployed at the Tidmarsh Living Observatory, allowing visitors to extend their hearing by tapping into a network of microphones placed throughout the wetland.
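To make the data path concrete, here is a decoder for a hypothetical head-pose notification. The packet layout is invented for illustration and is not HearThere's actual BLE characteristic format:

import struct

# Assumed little-endian payload: uint32 timestamp (ms),
# 4x float32 quaternion (w, x, y, z), 3x float32 position (m).
POSE_PACKET = struct.Struct("<I4f3f")

def parse_pose(payload: bytes):
    t_ms, qw, qx, qy, qz, px, py, pz = POSE_PACKET.unpack(payload)
    return {
        "timestamp_ms": t_ms,
        "orientation": (qw, qx, qy, qz),  # unit quaternion
        "position_m": (px, py, pz),
    }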
Inertial Sensing (IMU)
Ultra-Wideband Localization
Bluetooth Low-Energy (BLE)
PCB Design and Fabrication
This work was a collaboration with members of the Responsive Environments Group at the MIT Media Lab.
About the Founder
Spencer Russell has been a full-time touring musician, written firmware at a startup, received a Ph.D. from the MIT Media Lab, and designed advanced signal processing algorithms for smart speakers deployed by the millions. He has had three patents granted in the fields of acoustic localization and ultrasonics.
He has performed, given talks, and installed sound art throughout the US, UK, Europe, and the Middle East. He is a research affiliate at the MIT Media Lab and an Assistant Professor at Berklee College of Music.
You can get his CV here.
Contact
Meeting Availability