Working with Web Audio Contexts

In order to play audio in the browser, you need to create a Web Audio AudioContext. The AudioContext is responsible for managing audio processing, as well as creating and connecting the AudioNodes that make up the audio processing graph. A RNBO device needs access to an AudioContext in order to process audio, and must be connected to an input node and an output node in order to receive audio from an audio input device and send it to an audio output device.

// Some browsers may use an older version of WebKit as their browser engine,
// and may implement the Web Audio specification using webkitAudioContext
// instead of AudioContext. This accounts for both.
let WAContext = window.AudioContext || window.webkitAudioContext;
let context = new WAContext();

Importantly, an AudioContext must be resumed before it will start processing audio. Browser autoplay policies only allow an AudioContext to be resumed from a user-initiated event, like clicking on a button or pressing a key.

let button = document.getElementById("some-button");
button.onpointerdown = () => { context.resume() };

If at some point you expect to hear audio but don't, one of the first things to check is your browser's developer console. If you see a message like "The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page.", then you likely forgot to call resume on your AudioContext from a user-initiated event handler.
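As a defensive pattern, you can check the context's state before resuming. The helper below is a sketch (the name maybeResume is hypothetical), meant to be called from a user-initiated event handler:

```javascript
// Hypothetical helper: resume the context only if it is still suspended.
// Call this from a user-initiated event handler, like a click or keypress.
async function maybeResume(context) {
    if (context.state === "suspended") {
        await context.resume();
    }
    return context.state;
}
```

You could wire this up to a pointerdown handler just like the resume call above; calling it repeatedly is harmless, since it does nothing once the context is running.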

The AudioContext will always have a destination that you can connect other audio processing nodes to. Once you've created a RNBO device, you can get its node and connect to the AudioContext destination in order to get sound out of RNBO.

// Optionally, you can create a gain node to control the level of your RNBO device
const gainNode = context.createGain();
gainNode.connect(context.destination);
// Assuming you've created a device already, you can connect its node to other web audio nodes
device.node.connect(gainNode);
// This connects the RNBO device to the gain node, which is connected to audio output. Now sound
// coming from the RNBO device should reach the speakers.
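If you haven't created a device yet, that step might look roughly like the following sketch. It assumes the @rnbo/js package is loaded and exposed as a global RNBO object, and that your patcher was exported to a JSON file; the path "export/patch.export.json" is a placeholder:

```javascript
// Sketch: fetch an exported patcher description and create a RNBO device
// from it. "export/patch.export.json" is a placeholder path, and RNBO
// refers to the @rnbo/js package (e.g. loaded from a script tag).
async function createRNBODevice(context, patcherUrl) {
    const response = await fetch(patcherUrl);
    const patcher = await response.json();
    const device = await RNBO.createDevice({ context, patcher });
    // Connect the device's node to the destination so you can hear output
    device.node.connect(context.destination);
    return device;
}
```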

In order to receive audio input, you'll first need to create an input node from a media stream. You can do this by calling getUserMedia, which returns a promise that resolves with a media stream you can connect to your RNBO device.

// Assuming you have a RNBO device and an audio context already
const handleSuccess = (stream) => {
    const source = context.createMediaStreamSource(stream);
    source.connect(device.node);
};
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess)
    .catch((err) => console.error("Could not get audio input:", err));
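When you no longer need audio input (for example, when tearing down the page), it's good practice to stop the microphone tracks and disconnect the source node. A minimal sketch, assuming you kept references to the stream and source (the function name releaseInput is just for illustration):

```javascript
// Hypothetical cleanup: stop microphone capture and detach the source node.
function releaseInput(stream, source) {
    // Stopping every track releases the microphone (and its indicator light)
    stream.getTracks().forEach((track) => track.stop());
    // Disconnecting removes the source from the audio graph
    source.disconnect();
}
```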