DJ Reactor

Reachy reacts to your music

An audio-reactive experience where Reachy Mini responds to music in real time. The robot analyzes audio frequencies, detects beats, and translates sound into synchronized movements — head bobs, antenna waggles, and LED color changes.

Python · Gradio · NumPy · librosa · Reachy SDK

Demo Coming Soon

We're working on recording a demo. In the meantime, try running it yourself!

Features

Real-time Audio Analysis

Processes audio input in real time using a fast Fourier transform (FFT). Separates the signal into bass, mid, and treble bands for nuanced reactions.
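A minimal sketch of this kind of band split using NumPy's FFT. The band edges (250 Hz and 4 kHz) and the normalization are illustrative assumptions, not the app's actual values:

```python
import numpy as np

def band_energies(frame: np.ndarray, sample_rate: int = 44100):
    # Magnitude spectrum of one audio frame
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / sample_rate)
    # Assumed band edges: bass < 250 Hz, mid 250 Hz - 4 kHz, treble above
    bass = spectrum[freqs < 250].sum()
    mid = spectrum[(freqs >= 250) & (freqs < 4000)].sum()
    treble = spectrum[freqs >= 4000].sum()
    total = bass + mid + treble or 1.0
    return bass / total, mid / total, treble / total  # normalized shares

# A pure 100 Hz tone (50 ms at 44.1 kHz) should land almost entirely in bass
t = np.linspace(0, 0.05, 2205, endpoint=False)
bass, mid, treble = band_energies(np.sin(2 * np.pi * 100 * t))
```

Normalizing to shares (rather than raw energies) makes the downstream reactions independent of overall volume.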

Beat Detection

Identifies beats and tempo changes. Reachy bobs its head on the beat and adjusts movement intensity to the music's energy.
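The app's stack lists librosa, which provides onset and tempo tracking; as a simplified stand-in, the core idea can be sketched with NumPy alone — flag a beat when a frame's energy jumps well above the recent average. The window size (8 frames) and threshold (1.5×) are assumptions for illustration:

```python
import numpy as np

def detect_beats(frames: np.ndarray, threshold: float = 1.5) -> list:
    # Mean energy per frame
    energies = (frames ** 2).mean(axis=1)
    beats = []
    for i in range(1, len(energies)):
        # Compare against the average of up to 8 preceding frames
        recent = energies[max(0, i - 8):i].mean()
        if recent > 0 and energies[i] > threshold * recent:
            beats.append(i)
    return beats

# Quiet noise with loud bursts at frames 4 and 9
rng = np.random.default_rng(0)
frames = rng.normal(0, 0.01, size=(12, 512))
frames[4] *= 30
frames[9] *= 30
beats = detect_beats(frames)
```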

LED Visualization

Antenna LEDs change color based on frequency spectrum. Bass pulses red, mids glow green, highs shimmer blue.
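A hypothetical sketch of the mapping described above: normalized bass/mid/treble energies become the red/green/blue channels. The 0–255 channel scale and the direct linear mapping are assumptions:

```python
def bands_to_rgb(bass: float, mid: float, treble: float) -> tuple:
    # Clamp each normalized energy into a 0-255 LED channel
    def channel(x: float) -> int:
        return max(0, min(255, int(x * 255)))
    return (channel(bass), channel(mid), channel(treble))

# Pure bass -> red; pure highs -> blue
assert bands_to_rgb(1.0, 0.0, 0.0) == (255, 0, 0)
```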

Movement Library

Pre-choreographed movement patterns that blend based on audio characteristics. From subtle vibes to full party mode.
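One way such blending can work (pattern names and the linear mix are illustrative assumptions, not the app's actual choreography): each pattern is a function of time returning a head angle, and the output interpolates between them by audio intensity:

```python
import math

def subtle_vibe(t: float) -> float:
    return 5 * math.sin(t)       # small, slow head sway (degrees)

def party_mode(t: float) -> float:
    return 25 * math.sin(2 * t)  # bigger, faster bobs

def blended_head_angle(t: float, intensity: float) -> float:
    """intensity in [0, 1]: 0 = subtle vibes, 1 = full party mode."""
    return (1 - intensity) * subtle_vibe(t) + intensity * party_mode(t)
```

Because both patterns are continuous in `t` and the weights change gradually with intensity, the blend avoids discontinuous jumps between states.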

How It Works

1

Audio Input

Feed audio from your microphone, system audio, or an audio file. The app captures a continuous audio stream.

2

Frequency Analysis

FFT breaks the audio into frequency bands. Each band maps to different robot behaviors — bass to head movement, highs to antenna speed.

3

Beat Sync

Onset detection identifies beats. The robot's movements synchronize to the rhythm, staying on beat even when you can't.

4

Expressive Output

Head position, antenna angles, and LED colors all update in real time. The result: a dancing robot DJ.

Getting Started

Prerequisites

  • Reachy Mini Lite (physical robot or simulation)
  • Python 3.10+
  • Audio input (microphone or system audio)
  • Reachy daemon running on port 8000
# Clone the repo
git clone https://github.com/BioInfo/reachy.git
cd reachy/apps/dj-reactor

# Install dependencies
pip install -r requirements.txt

# Run with microphone input
python app.py --input mic

# Run with audio file
python app.py --input file --file path/to/song.mp3

The Build Story

Claude Contributions

Audio Pipeline Architecture

Designed the real-time audio processing pipeline with buffering, FFT analysis, and movement generation running in parallel threads.

"How do I process audio in real-time without blocking the robot control loop?"

Movement Choreography

Created parametric movement functions that blend smoothly based on audio intensity. Prevents jerky transitions between states.
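One simple way to get that smoothness (an assumption about the approach, not the app's exact code) is exponential smoothing of the target pose, so a sudden jump in audio intensity eases in over a few frames:

```python
def smooth(current: float, target: float, alpha: float = 0.2) -> float:
    """Move `current` a fraction `alpha` toward `target` each frame."""
    return current + alpha * (target - current)

# Starting at 0, a 30-degree target is approached without jumping
angle = 0.0
for _ in range(20):
    angle = smooth(angle, 30.0)
```

Smaller `alpha` gives softer motion at the cost of lag behind the beat.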

What I Learned

  • Real-time audio requires careful buffer management — too small and you miss beats, too large and you add latency
  • Robots dancing is surprisingly delightful, even with limited degrees of freedom
  • The antenna waggle is the secret weapon — it's expressive with minimal motor wear
