Focus Guardian
Your robot accountability partner
A productivity body-double app that uses Reachy Mini as an accountability partner. The robot watches you work, notices when you get distracted, and provides gentle encouragement through expressions and movements. Based on the body-doubling technique used for ADHD focus.
Demo Coming Soon
We're working on recording a demo. In the meantime, try running it yourself!
Features
Attention Tracking
Uses head pose detection to notice when you look away from your screen. Reachy responds with curious or encouraging expressions.
Pomodoro Sessions
Built-in focus timer with customizable work/break intervals. Reachy celebrates completed sessions with antenna wiggles.
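The work/break cadence above could be generated with a simple schedule like the sketch below. The defaults and names here are assumptions for illustration, not the app's actual configuration.

```python
from dataclasses import dataclass
from typing import Iterator, Tuple

# Hypothetical defaults -- the app lets you customize these intervals.
@dataclass
class PomodoroConfig:
    work_minutes: int = 25
    short_break_minutes: int = 5
    long_break_minutes: int = 15
    sessions_before_long_break: int = 4

def interval_schedule(cfg: PomodoroConfig) -> Iterator[Tuple[str, int]]:
    """Yield (label, minutes) pairs: work, break, work, break, ..."""
    session = 0
    while True:
        session += 1
        yield ("work", cfg.work_minutes)
        # Every Nth session earns a long break instead of a short one.
        if session % cfg.sessions_before_long_break == 0:
            yield ("long_break", cfg.long_break_minutes)
        else:
            yield ("short_break", cfg.short_break_minutes)

sched = interval_schedule(PomodoroConfig())
print([next(sched) for _ in range(4)])
# → [('work', 25), ('short_break', 5), ('work', 25), ('short_break', 5)]
```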
Expressive Feedback
A library of robot expressions — encouraging nods, playful tilts, celebratory antenna dances — that respond to your focus state.
Session Analytics
Track your focus patterns over time. See when you're most productive and what breaks your concentration.
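One lightweight way to persist sessions for this kind of analytics is an append-only JSONL log. The file name, record fields, and helper names below are assumptions sketched for illustration; the real app may store sessions differently.

```python
import json
import time
from pathlib import Path

# Hypothetical log location and record format.
LOG_PATH = Path("focus_sessions.jsonl")

def log_session(duration_min: int, distractions: int, completed: bool,
                path: Path = LOG_PATH) -> None:
    """Append one finished session as a JSON line."""
    record = {
        "ended_at": time.time(),
        "duration_min": duration_min,
        "distractions": distractions,
        "completed": completed,
    }
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def focus_rate(path: Path = LOG_PATH) -> float:
    """Fraction of logged sessions that ran to completion."""
    if not path.exists():
        return 0.0
    records = [json.loads(line) for line in path.read_text().splitlines() if line]
    if not records:
        return 0.0
    return sum(r["completed"] for r in records) / len(records)
```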
How It Works
Start a Focus Session
Launch the app and set your work duration. Reachy settles into 'focus mode' — attentive but calm.
Work While Watched
The camera tracks your head pose. As long as you're focused, Reachy stays supportively still with occasional encouraging movements.
Get Gentle Nudges
Look away too long? Reachy notices and gives you a gentle reminder — a head tilt, an antenna waggle, nothing aggressive.
Celebrate Completion
Finish your session and Reachy celebrates with you. Take your break knowing you earned it.
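The steps above boil down to a small state machine: track how long the user has been looking away, and trigger a nudge once a threshold is crossed. This is a minimal sketch of that logic; the class name and threshold are assumptions, not the app's actual implementation.

```python
import time

# Hypothetical threshold -- the real app's value may differ.
LOOK_AWAY_NUDGE_SECONDS = 8.0

class FocusMonitor:
    """Tracks how long the user has looked away and decides when to nudge."""

    def __init__(self, nudge_after: float = LOOK_AWAY_NUDGE_SECONDS):
        self.nudge_after = nudge_after
        self.away_since = None  # timestamp when the user first looked away

    def update(self, looking_at_screen: bool, now: float = None) -> bool:
        """Feed one camera-frame observation; return True when a nudge is due."""
        now = time.monotonic() if now is None else now
        if looking_at_screen:
            self.away_since = None  # attention restored, reset the clock
            return False
        if self.away_since is None:
            self.away_since = now   # just looked away, start the clock
            return False
        return (now - self.away_since) >= self.nudge_after
```

A nudge here would map to one of the gentle expressions (head tilt, antenna waggle) rather than anything alarming.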
Getting Started
Prerequisites
- Reachy Mini Lite (physical robot or simulation)
- Python 3.10+
- Webcam for head pose detection
- Reachy daemon running on port 8000
```bash
# Clone the repo
git clone https://github.com/BioInfo/reachy.git
cd reachy/apps/focus-guardian

# Install dependencies
pip install -r requirements.txt

# Run with simulation
python app.py --simulation

# Run with physical robot
python app.py
```
The Build Story
Claude Contributions
Expression System Design
Designed the mapping between focus states and robot expressions. Created a library of 12 distinct expressions with smooth transitions.
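Such a mapping can be as simple as a table from focus states to candidate expressions. The state and expression names below are hypothetical placeholders standing in for the real 12-expression library.

```python
import random
from enum import Enum, auto

class FocusState(Enum):
    FOCUSED = auto()
    DRIFTING = auto()
    DISTRACTED = auto()
    SESSION_COMPLETE = auto()

# Hypothetical expression names; the actual library defines 12 expressions.
EXPRESSIONS = {
    FocusState.FOCUSED: ["encouraging_nod", "calm_settle"],
    FocusState.DRIFTING: ["curious_tilt"],
    FocusState.DISTRACTED: ["gentle_head_tilt", "antenna_waggle"],
    FocusState.SESSION_COMPLETE: ["antenna_dance", "celebration_wiggle"],
}

def pick_expression(state: FocusState) -> str:
    """Pick one expression appropriate to the current focus state."""
    return random.choice(EXPRESSIONS[state])
```

Keeping several candidates per state and choosing randomly helps the robot avoid repeating the exact same motion, which would read as mechanical rather than supportive.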
"Help me design expressions that feel supportive, not judgmental"
Head Pose Integration
Integrated MediaPipe face mesh for real-time head pose estimation. Handles edge cases like partial face visibility.
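One common trick with face-mesh landmarks is estimating horizontal head turn from the nose position relative to the eye corners, in the normalized coordinates MediaPipe returns. This sketch shows that heuristic only (not the app's actual estimator), including a guard for the partial-visibility edge case:

```python
def estimate_yaw_ratio(nose_x: float, left_eye_x: float, right_eye_x: float) -> float:
    """Estimate horizontal head turn from normalized landmark x-coordinates.

    Returns ~0.0 when the nose sits midway between the eye corners
    (facing the screen) and approaches +/-1.0 as the head turns.
    """
    eye_span = right_eye_x - left_eye_x
    if abs(eye_span) < 1e-6:
        return 0.0  # degenerate frame, e.g. partial face visibility
    midpoint = (left_eye_x + right_eye_x) / 2.0
    return (nose_x - midpoint) / (eye_span / 2.0)

def looking_at_screen(nose_x: float, left_eye_x: float, right_eye_x: float,
                      threshold: float = 0.5) -> bool:
    """Hypothetical threshold: treat small yaw as 'facing the screen'."""
    return abs(estimate_yaw_ratio(nose_x, left_eye_x, right_eye_x)) < threshold

print(looking_at_screen(0.50, 0.40, 0.60))  # nose centered → True
print(looking_at_screen(0.58, 0.40, 0.60))  # head turned   → False
```

Lighting sensitivity mostly shows up upstream, in whether the face mesh detects landmarks at all, which is why the degenerate-frame guard matters.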
What I Learned
- Body-doubling works even with robots — the sense of 'being watched' helps focus
- Expressions need to be subtle; too much movement becomes distracting
- Head pose detection is surprisingly sensitive to lighting conditions