February 17, 2026

Data Sonification Archive

project

Northeastern University Center for Design · 2020

  • A curated collection of nearly 400 sonification examples across domains — climate, health, astronomy, cybersecurity — organized as a browsable reference for designing sound-based data representations
  • Maintained by the same researchers behind the Datascapes anomaly-detection framework, grounding the archive in active research on sonification for real-world monitoring
  • Treats sonification as a design discipline with its own patterns and anti-patterns, the same way a visualization gallery teaches chart selection — except the medium is sound

Listen to Wikipedia

project

Hatnote · 2013

  • Turns the real-time Wikipedia edit stream into an ambient soundscape — bells for additions, strings for deletions, pitch mapped to edit size — proving that a continuous data feed becomes legible the moment you route it through the ear
  • Demonstrates peripheral monitoring through sound: you can leave it running in the background and still notice when edit activity spikes, exactly the way an ops team should hear their infrastructure
  • Won an Information is Beautiful Award not for a chart but for a sonification, evidence that the auditory channel can carry the same informational weight as visualization when the mapping is well designed
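The mapping described above can be sketched in a few lines. This is a toy version with made-up scaling constants, not Hatnote's actual implementation: larger edits ring lower, and the sign of the byte change picks the instrument family.

```python
import math

def edit_to_note(byte_delta: int, lo: int = 36, hi: int = 96) -> int:
    """Map the absolute size of a wiki edit to a MIDI note number.

    Toy mapping in the spirit of Listen to Wikipedia: bigger edits get
    lower pitches, so large changes stand out as deep tones. The real
    project's scaling constants are not reproduced here.
    """
    size = max(1, abs(byte_delta))
    # Log scale: edit sizes span orders of magnitude (1 byte to ~100 kB).
    t = min(1.0, math.log10(size) / 5.0)  # 0.0 for tiny edits, 1.0 at 100 kB
    return round(hi - t * (hi - lo))      # invert: bigger edit -> lower note

def instrument_for(edit: dict) -> str:
    """Additions ring like bells, deletions bow like strings."""
    return "bell" if edit["byte_delta"] >= 0 else "string"
```

Routing each edit event through these two functions is all a minimal player needs: the ear does the anomaly detection.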

TwoTone — Data-Driven Music from Tabular Data

project

Sonify / Google News Initiative · 2019

  • Open-source web app that turns any CSV into a multi-track sonification without writing code — upload a spreadsheet, map columns to instruments, and hear your data — lowering the barrier to treating sound as a data analysis tool
  • Outputs standard MIDI, so sonifications can feed directly into a DAW or a live-coded performance pipeline
  • Demonstrates that sonification tools belong in the same everyday toolkit as charting libraries, not locked in a research lab
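The column-to-instrument step boils down to quantizing values onto a musical scale. A minimal sketch of that idea, assuming a major scale and linear normalization — illustrative only, not TwoTone's actual algorithm:

```python
import csv, io

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def column_to_notes(csv_text: str, column: str, root: int = 60, octaves: int = 2):
    """Quantize one numeric CSV column onto a major scale of MIDI notes."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    lo, hi = min(values), max(values)
    # All scale degrees across the requested octave range, in semitones.
    degrees = [MAJOR[i % 7] + 12 * (i // 7) for i in range(7 * octaves + 1)]
    notes = []
    for v in values:
        # Normalize into [0, 1], then snap to the nearest scale degree.
        t = 0.0 if hi == lo else (v - lo) / (hi - lo)
        notes.append(root + degrees[round(t * (len(degrees) - 1))])
    return notes
```

Because the output is plain MIDI note numbers, the result can be written to a MIDI file or streamed to any synth.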
February 16, 2026

node-serum2-preset-packager

project

GitHub · 2025

  • Encodes and decodes Xfer Serum 2 preset files to/from JSON, making opaque binary synth state human-readable and programmatically manipulable
  • Treats sound design parameters as structured data — the same move sonification makes in reverse, mapping data to audio parameters
  • Demonstrates that synth presets are just serialized configuration; unpacking them into JSON is the first step toward generating and transforming sounds through code rather than GUIs
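The round-trip idea can be shown with a deliberately simplified, hypothetical preset layout — Serum 2's real format is far more involved, and reverse-engineering it is exactly what the packager does:

```python
import json, struct

# Hypothetical fixed layout for illustration only: a name, two filter
# parameters, and a wavetable index. Not Serum 2's actual format.
LAYOUT = struct.Struct("<8s f f B")

def decode(blob: bytes) -> str:
    """Binary preset -> JSON: opaque synth state becomes editable text."""
    name, cutoff, res, wt = LAYOUT.unpack(blob)
    return json.dumps({"name": name.rstrip(b"\x00").decode(),
                       "cutoff": round(cutoff, 4),
                       "resonance": round(res, 4),
                       "wavetable": wt})

def encode(doc: str) -> bytes:
    """JSON -> binary preset: the inverse, so edits round-trip."""
    p = json.loads(doc)
    return LAYOUT.pack(p["name"].encode().ljust(8, b"\x00"),
                       p["cutoff"], p["resonance"], p["wavetable"])
```

Once state is JSON, generating or batch-transforming presets is ordinary data manipulation.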

Sonic Pi — The Live Coding Music Synth

project

sonic-pi.net · 2012

  • Live-coding environment that treats music as a programming problem: sequences, loops, concurrency, and timing
  • Built on SuperCollider but exposes a Ruby DSL that makes audio synthesis accessible without a signal processing background
  • Used in education to teach programming through immediate auditory feedback rather than visual output
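Sonic Pi keeps loops in time by advancing a virtual clock when code calls `sleep`, rather than measuring wall-clock time from "now", so slow loop bodies never accumulate drift. A sketch of that scheduling idea (the function names here are illustrative, not Sonic Pi's API):

```python
def beat_times(bpm: float, beats: int, start: float = 0.0):
    """Logical timestamps for a loop, Sonic Pi style.

    Each `sleep` advances a virtual clock by a fixed step; a real player
    would wait until each logical deadline before triggering a sample,
    so timing stays exact no matter how long each loop body takes.
    """
    step = 60.0 / bpm
    t = start
    times = []
    for _ in range(beats):
        times.append(round(t, 9))
        t += step  # advance logical time, independent of work done
    return times
```

Scheduling against logical deadlines instead of `now + delay` is the difference between a loop that stays locked to the beat and one that slowly slips.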
February 15, 2026

SuperCollider

project

supercollider.github.io · 1996

  • Platform for audio synthesis and algorithmic composition with a real-time audio server (scsynth) controlled by a client language (sclang)
  • Client-server architecture decouples sound generation from control logic — the audio graph runs at audio rate while patterns and scheduling run at control rate
  • The foundation that Sonic Pi, TidalCycles, and many other live-coding tools build on top of
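The control messages sclang sends to scsynth are OSC datagrams, typically over UDP. A minimal encoder for the common string/int case, sketched here for illustration — a real client would use a library such as python-osc:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message of the kind sclang sends to scsynth.

    Covers only int and string arguments; real OSC has more types.
    """
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # OSC ints are big-endian
        else:
            tags += "s"
            payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# /s_new spawns a synth on the server: def name, node ID, add action, target.
msg = osc_message("/s_new", "default", 1001, 0, 0)
```

Because control is just messages over a socket, any language — not only sclang — can drive the audio server, which is exactly the seam Sonic Pi and TidalCycles exploit.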
February 12, 2026

TidalCycles

project

tidalcycles.org · 2009

  • Live-coding environment for music that treats patterns as first-class values — sounds are described by composing, transforming, and combining pattern functions
  • Built in Haskell but sends OSC messages to SuperCollider; the language is the sequencer, not a GUI
  • Demonstrates that complex rhythmic and timbral structures emerge from simple pattern transformations applied in sequence
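The "patterns as first-class values" idea can be modeled with plain functions: a pattern maps a cycle number to timed events, and transformations like `fast` and `rev` are just combinators over those functions. This is a simplified model — real Tidal patterns query time spans, not whole cycles:

```python
def seq(*values):
    """Spread values evenly across one cycle, like Tidal's "bd sn hh"."""
    n = len(values)
    return lambda cycle: [(i / n, v) for i, v in enumerate(values)]

def fast(factor, pat):
    """Squeeze `factor` repeats of the pattern into each cycle."""
    def go(cycle):
        return [((r + onset) / factor, v)
                for r in range(factor)
                for onset, v in pat(cycle * factor + r)]
    return go

def rev(pat):
    """Reverse the order of each cycle's events, keeping the onset grid."""
    def go(cycle):
        events = pat(cycle)
        return [(o, v) for o, (_, v) in zip([o for o, _ in events],
                                            reversed(events))]
    return go
```

Composing these — `fast(2, rev(seq("bd", "sn")))` — is the whole interaction model: the language is the sequencer.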
February 11, 2026

NASA Data Sonification

project

NASA · 2020

  • Translates multi-wavelength astronomical observations (X-ray, optical, infrared) into sound — each wavelength band mapped to a different audio parameter
  • Demonstrates that sonification reveals patterns invisible in visual representations: periodicities, gradients, and anomalies become audible textures
  • Validates the core thesis that audio is an underexplored observability channel — if it works for galaxy clusters, it works for system metrics
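The per-wavelength mapping can be sketched as a toy function in which each band's brightness drives one audio parameter. The parameter assignments below are invented for illustration; NASA's actual mappings vary from release to release.

```python
def sonify_scan(xray, optical, infrared, base_hz=220.0):
    """Turn one scan line of a multi-wavelength image into note parameters.

    Toy mapping: X-ray brightness -> pitch, optical -> loudness,
    infrared -> stereo position. Inputs are per-pixel values in [0, 1].
    """
    notes = []
    for x, o, i in zip(xray, optical, infrared):
        notes.append({
            "freq": base_hz * (1.0 + 3.0 * x),  # brighter X-ray -> higher pitch
            "gain": o,                           # optical brightness -> loudness
            "pan": 2.0 * i - 1.0,                # infrared -> left/right position
        })
    return notes
```

Swap the three input arrays for CPU, memory, and error-rate series and the same scan becomes a system-metrics sonifier — the mapping machinery is identical.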