Feeding the Audio Graph

In 2004, I was given an iPod.

I count this as one of the most intuitive pieces of technology I’ve ever owned. It wasn’t because of the snazzy (colour!) menus or circular touchpad. I loved how smoothly it fitted into my life. I could plug in my headphones and listen to music while I was walking around town. Then when I got home, I could plug it into an amplifier and carry on listening there.

There was no faff. It didn’t matter if I couldn’t find my favourite mix tape, or if my WiFi was flaky - it was all just there.

Nowadays, when I’m trying to pair my phone with some Bluetooth speakers, or can’t find my USB-to-headphone jack, or can’t even access any music because I don’t have cellular reception, I really miss this simplicity.

The Web Audio API

I think the Web Audio API feels kind of like my iPod did.

It’s different from most browser APIs - rather than throwing around data, or updating DOM elements, you plug together a graph of audio nodes, which the browser uses to generate, process, and play sounds.

The thing I like about it is that you can plug it into whatever you want, and it’ll mostly just work.

So, let’s get started. First of all we want an audio source.

<audio src="night-owl.mp3" controls></audio>

(Song - Night Owl by Broke For Free)

This totally works. However, it’s not using the Web Audio API, so we can’t access or modify the sound it makes.

To hook this up to our audio graph, we can use a MediaElementAudioSourceNode. This captures the sound from the element, and lets us connect it to other nodes in the graph.

const audioCtx = new AudioContext()

const audio = document.querySelector('audio')
const input = audioCtx.createMediaElementSource(audio)

input.connect(audioCtx.destination)

Great.
We’ve made something that looks and sounds exactly the same as it did before. Go us.

Gain

Let’s plug in a GainNode - this allows you to alter the amplitude (volume) of an audio stream.

We can hook this node up to an <input> element by setting the gain property of the node. (The syntax for this is kind of weird because gain is an AudioParam, which has options to set values at precise intervals.)

const gain = audioCtx.createGain()

const slider = document.querySelector('input')
slider.oninput = () => gain.gain.value = parseFloat(slider.value)

input.connect(gain)
gain.connect(audioCtx.destination)

You can now see a range input, which can be dragged to update the state of our graph. This input could be any kind of element, so you’re free to build the volume control of your dreams.

There are a number of nodes that let you modify/filter an audio stream in more interesting ways. Head over to the MDN Web Audio page for a list of them.

Analysers

Something else we can add to our graph is an AnalyserNode. This doesn’t modify the audio at all, but allows us to inspect the sounds that are flowing through it. We can put it into our graph between our media element source and the GainNode.

const analyser = audioCtx.createAnalyser()

input.connect(analyser)
analyser.connect(gain)
gain.connect(audioCtx.destination)

And now we have an analyser. We can access it from elsewhere to drive any kind of visuals.
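One thing worth knowing about that gain value: it’s a linear amplitude multiplier, while volume sliders often feel more natural in decibels. As a small sketch (the `dbToGain` helper name is my own, not from the article):

```javascript
// Convert decibels to a linear gain multiplier.
// 0 dB leaves the signal unchanged; -6 dB is roughly half amplitude.
const dbToGain = db => Math.pow(10, db / 20)

// e.g. a dB-scaled slider could be wired up as:
// slider.oninput = () => gain.gain.value = dbToGain(parseFloat(slider.value))
```

Setting `gain.gain.value` directly works, but for click-free volume changes the AudioParam scheduling methods (such as setTargetAtTime) are usually a better fit.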
For instance, if we wanted to draw lines on a canvas we could totally do that:

const waveform = new Uint8Array(analyser.fftSize)
const frequencies = new Uint8Array(analyser.frequencyBinCount)
const ctx = canvas.getContext('2d')

const loop = () => {
    requestAnimationFrame(loop)
    analyser.getByteTimeDomainData(waveform)
    analyser.getByteFrequencyData(frequencies)

    ctx.beginPath()
    waveform.forEach((f, i) => ctx.lineTo(i, f))
    ctx.lineTo(0, 255)
    frequencies.forEach((f, i) => ctx.lineTo(i, 255 - f))
    ctx.stroke()
}
loop()

You can see that we have two arrays of data available (I added colours for clarity):

The waveform - the raw samples of the audio being played.
The frequencies - a Fourier transform of the audio passing through the node.

What’s cool about this is that you’re not tied to any specific functionality of the Web Audio API. If it’s possible for you to update something with an array of numbers, then you can just apply it to the output of the analyser node.

For instance, if we wanted to, we could definitely animate a list of emoji in time with our music.

spans.forEach(
  (s, i) => s.style.transform = `scale(${1 + (frequencies[i] / 100)})`
)

🔈🎤🎤🎤🎺🎷📯🎶🔊🎸🎺🎤🎸🎼🎷🎺🎻🎸🎻🎺 …

Generating Audio

So far, we’ve been using the <audio> element as a source of sound.

There are a few other sources of audio that we can use. We’ll look at the AudioBufferSourceNode - which allows you to manually generate a sound sample, and then connect it to our graph.

First we have to create an AudioBuffer, which holds our raw data, then we pass that to an AudioBufferSourceNode, which we can then treat just like our media element source. This can get a bit boring, so we’ll use a helper method that makes it simpler to generate sounds.

const generator = (audioCtx, target) => (seconds, fn) => {
  const { sampleRate } = audioCtx

  const buffer = audioCtx.createBuffer(
      1, sampleRate * seconds, sampleRate
  )
  const data = buffer.getChannelData(0)

  for (var i = 0; i < data.length; i++) {
    data[i] = fn(i / sampleRate, seconds)
  }

  return () => {
    const source = audioCtx.createBufferSource()
    source.buffer = buffer

    source.connect(target || audioCtx.destination)
    source.start()
  }
}

const sound = generator(audioCtx, gain)

Our wrapper lets us provide a function that maps time (in seconds) to a sample (between -1 and 1). This generates a waveform, like we saw before with the analyser node.

For example, the following will generate 0.75 seconds of white noise at 20% volume.

const noise = sound(0.75, t => Math.random() * 0.2)

button.onclick = noise

Noise

Now we’ve got a noisy button! Handy.

Rather than having a static set of audio nodes, each time we click the button we add a new node to our graph. Although this feels inefficient, it’s not actually too bad - the browser can do a good job of cleaning up old nodes once they’ve played.

An interesting property of defining sounds as functions is that we can combine multiple functions to generate new sounds.
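The playback half of that generator needs a browser, but its inner sampling loop is plain JavaScript, so we can check it in isolation. A minimal sketch that renders the samples to an array instead of scheduling them (the `renderSamples` helper is mine, not the article’s):

```javascript
// Fill an array with fn(time) samples - the same loop as inside `generator`,
// but returning the raw data rather than wrapping it in a buffer source.
const renderSamples = (sampleRate, seconds, fn) => {
  const data = new Float32Array(Math.round(sampleRate * seconds))
  for (let i = 0; i < data.length; i++) {
    data[i] = fn(i / sampleRate, seconds)
  }
  return data
}

// 0.75 seconds of the same white noise, as raw sample data:
const samples = renderSamples(44100, 0.75, () => Math.random() * 0.2)
```

At a 44100 Hz sample rate that’s 33,075 samples, each sitting in the 0–0.2 range the noise function produces.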
So if we wanted to fade our noise in and out, we could write a higher order function that does that.

const ease = fn => (t, s) =>
  fn(t) * Math.sin((t / s) * Math.PI)

const noise = sound(0.75, ease(t => Math.random() * 0.2))

button.onclick = noise

Noise

And we can do more than just white noise - if we use Math.sin, we can generate some nice pure tones.

// Math.sin with period of 0..1
const wave = v => Math.sin(Math.PI * 2 * v)
const hz = f => t => wave(t * f)

const _440hz = sound(0.75, ease(hz(440)))
const _880hz = sound(0.75, ease(hz(880)))

440Hz
880Hz

We can also make our functions more complex. Below we’re combining several frequencies to make a richer sounding tone.

const harmony = f => t => [4, 3, 2, 1].reduce(
    // weight the lower harmonics more heavily; /10 keeps the sum within ±1
    (v, h, i) => v + hz(f * h)(t) * (i + 1) / 10, 0
)

const a440 = sound(0.75, ease(harmony(440)))

440Hz
880Hz

Cool.

We’re still not using any audio-specific functionality, so we can repurpose anything that does an operation on data. For example, we can use d3.js - usually used for interactive data visualisations - to generate a triangular waveform.

const triangle = d3.scaleLinear()
    .domain([0, .5,  1])
    .range([-1,  1, -1])

const wave = t => triangle(t % 1)

const a440 = sound(0.75, ease(harmony(440)))

440Hz
880Hz

It’s pretty interesting to play around with different functions. I’ve plonked everything in jsbin if you want to have a play yourself.

A departure from best practice

We’ve been generating our audio from scratch, but most of what we’ve looked at can be implemented by a series of native Web Audio nodes. This would be way more performant (because the synthesis wouldn’t happen on the main thread), and more flexible in some ways (because you can set timings dynamically whilst the note is playing).
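d3’s scaleLinear is doing very little work here - the same -1 → 1 → -1 ramp can be written as a couple of lines of arithmetic if you’d rather not pull in a library. A sketch matching the domain/range above:

```javascript
// Piecewise-linear triangle wave: rises from -1 to 1 over [0, 0.5],
// falls back to -1 over [0.5, 1], then repeats with period 1.
const triangle = v => {
  const p = v % 1
  return p < 0.5 ? 4 * p - 1 : 3 - 4 * p
}
```

Plugged into the same hz-style wrapper (t => triangle(t * f)), this gives the brighter, buzzier tone that triangle waves are known for.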
But we’re going to stay with this approach because it’s fun, and sometimes the fun thing to do might not technically be the best thing to do.

Making a keyboard

Having a button that makes a sound is totally great, but how about lots of buttons that make lots of sounds? Yup, totally greater-er.

The first thing we need to know is the frequency of each note. I thought this would be awkward because pianos were invented more than 250 years before the Hz unit was defined, so surely there wouldn’t be a simple mapping between the two?

const freq = note => 27.5 * Math.pow(2, (note - 21) / 12)

This equation blows my mind; I’d never really figured how tightly music and maths fit together. When you see a chord or melody, you can directly map it back to a mathematical pattern.

Our keyboard is actually an SVG picture of a keyboard, so we can traverse its elements and map each one to a sound generated by one of the functions we came up with before.

Array.from(svg.querySelectorAll('rect'))
  .sort((a, b) => a.x.baseVal.value - b.x.baseVal.value)
  .forEach((key, i) =>
    key.addEventListener('touchstart',
      sound(0.75, ease(harmony(freq(i + 48))))
    )
  )

rect {stroke: #ddd;}
rect:hover {opacity: 0.8; stroke: #000}

Et voilà. We have a keyboard.

What I like about this is that it’s completely pure - there are no lookup tables or hardcoded attributes; we’ve just defined a mapping from SVG elements to the sound they should probably make.

Doing better in the future

As I mentioned before, this could be implemented more performantly with Web Audio nodes, or even better - use something like Tone.js to be performant for you.

Web Audio has been around for a while, though we’re getting new challenges with immersive WebXR experiences, where spatial audio becomes really important.
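It’s worth sanity-checking that mapping: the note numbers follow the MIDI convention, where 21 is A0 (27.5 Hz) and 69 is A4. Each semitone multiplies the frequency by the twelfth root of two, so twelve steps double it:

```javascript
const freq = note => 27.5 * Math.pow(2, (note - 21) / 12)

// 69 - 21 = 48 semitones = 4 octaves above A0, so 27.5 * 2^4 = 440 Hz
const a4 = freq(69)
```

This is also why the keyboard code starts at freq(48) - it shifts the leftmost SVG key up to the C below middle C rather than the bottom of a full piano.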
There are also always support and API improvements (if you like AudioBufferSourceNode, you’re going to love AudioWorklet).

Conclusion

And that’s about it. Web Audio isn’t some black box - you can easily link it with whatever framework or UI you’ve built (whether you should is an entirely different question).

If anyone ever asks you “could you turn this SVG into a musical instrument?” you don’t have to stare blankly at them any more.

Ben Foxall · 24 ways, 17 December 2017 · https://24ways.org/2017/feeding-the-audio-graph/