Learning Synths and RNBO
How RNBO helped the Learning Synths team take a high-quality synthesizer and run it in the browser.
Contributed by Sam Tarakajian
We've seen several groundbreaking milestones in Web Audio, including Patatap, Typatone, Gibber, Pink Trombone, Leimma & Apotome, and Google's Shared Piano. To that list I’d also add Learning Synths, released by Ableton in 2019: a completely free, interactive tool for learning about sound synthesis. When you first navigate to Learning Synths, you’re presented with a big, inviting XY pad with a one-word prompt: drag.
As soon as you do, your computer starts to make a gleefully wobbly, electronic bass sound with lots of satisfying filter modulation. Impressively, it’s not a recording—it’s synthesized in real time. It’s a tantalizing invitation to dive deeper into the site, where you’ll learn how simple oscillators, envelopes, and filters all work together to build a rich sound. Here's their real-life example of an audio filter.
For anyone who has built anything for the web, it’s clear that a lot of hard work, technical ability, and artistic vision came together to produce something special. What might not be obvious is the inside story: how diverse skills came together around technologies that were still under development at the same time as the site itself. I'm not just talking about WebAssembly and Audio Worklets; I'm also talking about RNBO, the tool for exporting embeddable code from Max. In fact, RNBO was developed in collaboration with Learning Synths, and the synthesizer that the website uses was exported from a Max patch, adapted from a Max for Live device called Poli (scroll down, you'll see it).
Learning Synths is that special kind of teaching resource that’s brimming over with love for its subject matter. It’s obvious that everyone on the team has a close relationship with music, and an affection for electronic music in particular. Dennis DeSantis, who heads the team in addition to “writing content: learning materials, musical and sound design examples,” has a particular passion for helping people to learn to make music. In addition to maintaining the manual for Live, he’s also the author of Making Music: 74 Creative Strategies for Electronic Music Producers. Explaining the motivation behind Learning Synths, he says:
It felt like a pretty logical place to go after Learning Music, partly because it just made sense to repeat that pattern for other topics around music making. Synthesis is a big topic and it’s pretty hard to get your head around. What is an envelope actually doing, for example? If you try to learn about this, you come across a lot of language that is technically true but difficult to really get.
Dennis is referencing here an earlier project, Learning Music, which was officially released in 2017. Similar to Learning Synths in scope, Learning Music uses samples drawn from a contemporary corpus, including Beyoncé's “Single Ladies (Put a Ring on It),” to teach the principles of producing electronic music. That project grew out of a close collaboration between Dennis and Jack Schaedler, who had just finished a website called Seeing Circles, Sines, and Signals.
Jack also highlights the importance of visualization and experimentation:
People learn in different ways. I think in many cases it's hard to develop a feel or intuition for some concept unless you're able to play around with it yourself. It's great to read some concise text that explains how (for example) an audio filter works. It's also great to be able to play around with an audio filter yourself to develop a more intuitive understanding for how it feels and sounds and operates.
As work on Learning Synths started to accelerate, Chris Peck and Maya Shenfeld would join the team. A songwriter and music educator who was once led by the German faux-Andes group Cusco “down a years-long detour of learning to play the flute,” Chris explains his role: “My official title is ‘software engineer,’ but I end up contributing to the pedagogical design as well because of my background as a musician and teacher.” Maya joined a bit later in Learning Synths' development, coming from a background in music education. “Dennis’s ideas really resonated with me, especially the part about getting learners to experiment with music making from the very beginning, as opposed to learning theory or a set of skills first and getting creative later.”
Cycling '74 collaborated with the Learning Synths team from the start, though at first the RNBO team was very small. My colleague Stefan Brunner, a “guitar player who went rogue by diving into experimental music and media art,” was deeply involved in getting the first Learning Synths patch to run in the browser. As the RNBO team grew, we kept refining exactly how the RNBO export connected to the web browser. To understand why that’s significant, we need to talk about Web Audio.
Audio in the Browser
All of the examples on Learning Synths are powered by the same synthesizer, and if you skip ahead to the Playground then you can see how everything fits together. Experimenting in the Playground, watching all the animations and automations play off each other, the most impressive thing is probably what you don’t notice. On the right hardware, there are no clicks, pops, or audio discontinuities. With everything running so smoothly, even while the site is rendering dynamic visuals, it's easy to forget that you're looking at a website, and not a native application.
The secret to all of this is WebAssembly and Audio Worklets. The former is a highly efficient, portable binary format that can run in a browser, and the latter is a browser technology that lets sound processing happen on its own high-priority thread.
Of course, that summary glosses over a lot of work: compiling a synthesizer to WebAssembly isn't necessarily easy. This is where RNBO comes in. One of the goals for RNBO was to eliminate as much of this work as possible. The Learning Synths team could build a patch in Max, export the complete patch directly to WebAssembly, and then use the rnbo.js library to integrate that export into the web app.
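To make that concrete, here is a minimal sketch of the glue code involved, using the documented rnbo.js workflow. The file name `patch.export.json` is an assumption for illustration; the actual name comes from your export settings in Max.

```javascript
// Minimal sketch: load a RNBO WebAssembly export in the browser with rnbo.js.
// Assumes the patch was exported from Max as "patch.export.json" and that the
// @rnbo/js package is installed; both names are assumptions, not from the site.
async function setupRNBODevice(patcherUrl = "export/patch.export.json") {
  const { createDevice } = await import("@rnbo/js");
  const context = new AudioContext();        // Web Audio context
  const response = await fetch(patcherUrl);  // fetch the exported patcher JSON
  const patcher = await response.json();
  // createDevice compiles the export and wraps it in a Web Audio node
  const device = await createDevice({ context, patcher });
  device.node.connect(context.destination);  // route synth output to speakers
  return device;
}
```

In a real page you would call `setupRNBODevice()` from a click or keypress handler, since browsers require a user gesture before an AudioContext can start producing sound.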
The Poli Synthesizer
Behind the scenes, everything that you hear on the Learning Synths website is driven by a single synthesizer. That synthesizer is adapted from a Max for Live synth called Poli: a subtractive synthesizer driven by a blend of a square wave oscillator, a sawtooth oscillator, and noise. The pieces are simple, but they come together in a precisely measured way. Browsing through the recipes on the Learning Synths site, you can get a sense of just what a wide range of sounds these components are able to make.
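The source section of a subtractive voice like this can be sketched in a few lines of plain JavaScript. This is a deliberately naive, aliasing illustration of the blend itself, not Poli's actual DSP:

```javascript
// Naive sketch of a subtractive synth's source section: blend a square wave,
// a sawtooth, and noise. In a real synth this mix would then be shaped by a
// filter and an envelope. Illustration only, not Poli's actual implementation.
function sourceSample(freqHz, timeSec, mix) {
  const phase = (freqHz * timeSec) % 1;    // oscillator phase in 0..1
  const square = phase < 0.5 ? 1 : -1;     // naive (aliasing) square wave
  const saw = 2 * phase - 1;               // naive (aliasing) sawtooth
  const noise = 2 * Math.random() - 1;     // white noise
  return mix.square * square + mix.saw * saw + mix.noise * noise;
}

// One sample of a 110 Hz tone: mostly square, a little saw, no noise
const s = sourceSample(110, 0.001, { square: 0.7, saw: 0.3, noise: 0 });
```

Changing the three `mix` weights is what moves the timbre between hollow (square), buzzy (saw), and breathy (noise) territory.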
The Poli synthesizer was designed by Christian Kleine as a Max for Live instrument, which meant that it was already working as a Max patch. Often, sound designers at Ableton will prototype a synthesizer in Max, rapidly iterating on their designs until it’s time to refine a final version. Sometimes, the project then gets handed off to programmers who realize the synthesizer in hand-written code. However, the Learning Synths team wanted to do something that no team at Ableton had done before: release a version of their synthesizer that could run in the browser. This is where RNBO can help. In fact, you can download the same Poli-based synthesizer that was used to build Learning Synths right here, as a RNBO patch in Max.
Building the synthesizer in Max allows the team to collaborate much more effectively. Jack explains the process more in depth:
Roughly, the site is split into two major components: the user interface and the audio processing engine for the synthesizer. We use standard tools like Node, TypeScript, HTML, and CSS to develop the interface, and RNBO is our tool for developing the synthesizer engine.
Some team members would work on our synthesizer patch in RNBO and then export the WebAssembly code as a JSON file for inclusion into the site. Then, other team members would write TypeScript code to add visualizations, controls, and pages which made use of the synth's features.
Because the RNBO synth can just be treated like a single JSON asset, and RNBO provides a nice SDK in the form of an npm package for sending and receiving messages from the synthesizer, integrating it into our project was fairly straightforward.
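The two channels Jack mentions, parameters and messages, look roughly like this through the rnbo.js SDK. The parameter name `filterCutoff` and the outport tag `lfoState` are hypothetical stand-ins; the real names would come from the exported patch:

```javascript
// Sketch of talking to a loaded RNBO device through rnbo.js.
// "filterCutoff" and "lfoState" are hypothetical names for illustration.
function updateAnimation(payload) {
  // stand-in for the site's real visualization code
  console.log("animation state:", payload);
}

function wireUpSynth(device) {
  // 1. Parameters: set a named parameter exposed by the exported patch
  const cutoff = device.parametersById.get("filterCutoff");
  if (cutoff) cutoff.value = 800;

  // 2. Messages: react to general-purpose message output from the patch
  device.messageEvent.subscribe((ev) => {
    if (ev.tag === "lfoState") updateAnimation(ev.payload);
  });
}
```

The nice part is that the TypeScript side never needs to know anything about the DSP: it only sees named parameters and tagged messages.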
To see how they did that integration, let’s look at some working code. Learning Synths is full of cute interactive examples. My personal favorite is the carousel analogy for LFO modulation—the animation of a speaker on a carousel explains how a low-frequency change can shape the amplitude of a continuous tone.
But how do they do it? This is where the RNBO API steps in. When you design a RNBO patch, you can choose how that patch will communicate with its host environment after you export it. You can use audio inputs and outputs, parameters, and general purpose message inputs/outputs. To make an example like this carousel:
- Create a parameter that determines the frequency of an LFO in RNBO.
- Use a snapshot~ object to convert that audio-rate LFO to a series of control-rate messages.
- Make a general purpose message output to report the state of the LFO.
- Use that message output to drive the state of an animation.
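Outside of Max, the snapshot~ step amounts to sampling a continuous LFO at a slow, regular interval. A rough JavaScript model of the steps above, with names of my own invention:

```javascript
// Model of the carousel's data flow: a parameter sets the LFO rate, the LFO
// is sampled at control rate (the role snapshot~ plays in the patch), and
// each sample becomes a message that drives the animation's state.
function lfoValue(rateHz, timeSec) {
  // Unipolar sine in [0, 1]: convenient for amplitude or animation position
  return 0.5 + 0.5 * Math.sin(2 * Math.PI * rateHz * timeSec);
}

function controlRateFrames(rateHz, durationSec, intervalSec) {
  const frames = [];
  for (let t = 0; t < durationSec; t += intervalSec) {
    frames.push({ time: t, value: lfoValue(rateHz, t) }); // one "message" per tick
  }
  return frames;
}

// A 1 Hz LFO sampled every 250 ms over one second: four animation frames
const frames = controlRateFrames(1, 1, 0.25);
```

Each frame's `value` would set both the speaker's position on the carousel and the gain applied to the tone, which is exactly why the picture and the sound stay in lockstep.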
Here’s a CodePen that shows it all coming together, with my own very minimal impression of how Learning Synths is doing it.
Exporting and the Future
In 2021, Learning Synths got a big update: the ability to take the exact state of the synthesizer in the browser and open it in Ableton Live. After exporting, you get a Live set containing a MIDI Instrument track with a special Max for Live device in it.
When you click and drag the XY pad labeled “Perform,” you’ll hear your synth sound exactly as you configured it on the Learning Synths website. The visual presentation had to be translated by hand, but the synthesizer itself is based on the exact same RNBO patch. To run in the browser, the Poli patch is exported to WebAssembly; to run in a Max for Live device, the same patch is exported as a Max external. Whether the host application is the web browser or Max running inside Live, the patch sounds exactly the same.
For me, this is a tiny taste of what’s so exciting about RNBO. On some level, RNBO doesn’t really do anything new. It’s already possible to build an audio synthesizer by writing C++ code, compiling that code to WebAssembly, and then running it in the browser. But any time there’s a real creative flourishing around a new technology, it comes only after specialized tools make that technology easier to work with. I’m old enough to remember a time when a lot of the interesting art on the internet was happening over at Newgrounds. Newgrounds was (is still?) the capital of Macromedia Flash-based art. Even before broadband internet was a thing—I vividly remember waiting literal hours for a Flash game to load on a rainy Sunday—internet users were drowning in a flood of animations, games, playable novels, and other forms of interactive art. Jack captures the sentiment well:
I'm most excited about RNBO's ability to export to the web. Similar to Max, I think the web is a great platform for independent and hobbyist developers. Since the Max community is so creative, quirky, and artistic, I'm really looking forward to seeing lots of novel and interesting sonic experiences coming to the web in the next few years. It seems like these two platforms are a good match for one another vibe-wise.
Flash is mostly a relic of the past now, but a glut of new technologies has stepped in to replace and extend it. The power of the browser, leveraged by libraries like p5.js, Three.js, Paper.js, PixiJS, Tone.js, ammo.js, TensorFlow.js, and others, creates an explosive venue for the singular and the weird. My personal hope is that RNBO can join these technologies in helping people make and share art. I mentioned Patatap, Pink Trombone, Leimma, and others, all amazing pieces of art, but I hope to watch the list of internet-based audio artwork get much longer. Learning Synths is already an amazing showcase of what’s possible, and I’m excited to see what comes next.