Personal DJ

AI-powered music mashup creator that understands rhythm, harmony, and flow.

AI · Music Analysis · Web Audio · Next.js · Essentia.js
Live

Problem

I spent over a year helping friends edit music for Indian weddings—cutting songs, building medleys, and hunting for perfect transitions for sangeet performances. After hours in Audacity manually searching for beat boundaries, the question became: could AI do this automatically? The challenge isn't just stitching songs together. A good mashup requires musical intelligence—matching BPM, aligning rhythms to the measure, finding compatible keys, and creating smooth transitions that feel intentional.

Solution

Personal DJ is a web-based music mashup creator that analyzes songs and generates seamless transitions automatically. Upload two tracks and it handles the rest—detecting beats, matching keys, and intelligently suggesting where and how to blend them.

Impact

Live at personal-dj-nine.vercel.app. Client-side 2-track mixer running entirely in-browser (no server, no upload time). V2 in development with 3-4 track backend processing.

Tech Stack

Next.js · TypeScript · Web Audio API · Essentia.js · Custom DSP algorithms

How It Works

Personal DJ analyzes musical characteristics to find optimal blend points.

  • Intelligent Beat Detection — identifies BPM and splice points where songs naturally blend.
  • Multi-Dimensional Scoring — evaluates 100+ transition candidates across rhythm alignment, harmonic compatibility, and energy flow.
  • Creative Mixing Styles — applies 9 different DJ techniques (Smooth Crossfade, Energetic Build, Drop Switch, etc.) depending on the vibe you want.
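To make the scoring idea concrete, here is a minimal sketch of multi-dimensional candidate scoring. The project's actual 7-factor algorithm isn't shown in this page, so the factor names, weights, and the Camelot-style key distance below are illustrative assumptions, not the real implementation.

```typescript
// Hypothetical transition-candidate scoring sketch. Three of the
// dimensions mentioned above: tempo, harmonic compatibility, energy flow.

interface Candidate {
  bpmA: number;     // tempo of the outgoing track
  bpmB: number;     // tempo of the incoming track
  camelotA: number; // key position on the Camelot wheel (1-12)
  camelotB: number;
  energyA: number;  // normalized energy at the splice point (0-1)
  energyB: number;
}

// Tempo compatibility: 1.0 when BPMs match, falling off as they diverge.
// Halved/doubled tempos count as equivalent (a common DJ trick).
function tempoScore(bpmA: number, bpmB: number): number {
  const ratios = [bpmB / bpmA, (2 * bpmB) / bpmA, bpmB / (2 * bpmA)];
  const best = Math.min(...ratios.map((r) => Math.abs(Math.log2(r))));
  return Math.max(0, 1 - best / 0.1); // ~7% tempo gap scores near zero
}

// Harmonic compatibility: distance around the 12-position Camelot wheel.
function keyScore(a: number, b: number): number {
  const d = Math.min(Math.abs(a - b), 12 - Math.abs(a - b));
  return d <= 1 ? 1 : d === 2 ? 0.5 : 0;
}

// Energy flow: smooth transitions prefer similar energy levels.
function energyScore(a: number, b: number): number {
  return 1 - Math.abs(a - b);
}

// Weighted blend of the factors (weights here are made up).
function scoreTransition(c: Candidate): number {
  return (
    0.4 * tempoScore(c.bpmA, c.bpmB) +
    0.35 * keyScore(c.camelotA, c.camelotB) +
    0.25 * energyScore(c.energyA, c.energyB)
  );
}
```

Ranking every candidate splice point with a function like this is what lets the app pick the best of 100+ options rather than the first one it finds.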

Architecture

V1.0 (Live): Client-side 2-track mixer running entirely in the browser. Fast, free, and private—no server costs, no waiting. Perfect for quick mashups. Uses Essentia.js for music analysis directly in JavaScript.
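One building block an in-browser mixer like this needs is an equal-power crossfade, the standard way to blend two tracks without a perceived dip in loudness. The helper below is an illustrative sketch (not the app's actual code); the gains it computes would typically drive two Web Audio GainNode.gain parameters over the transition.

```typescript
// Equal-power crossfade curve. At progress t in [0, 1], the two gains
// satisfy outgoing^2 + incoming^2 = 1, so the combined power stays
// constant, unlike a linear crossfade, which dips at the midpoint.

function equalPowerGains(t: number): { outgoing: number; incoming: number } {
  const x = Math.min(1, Math.max(0, t)); // clamp progress to [0, 1]
  return {
    outgoing: Math.cos((x * Math.PI) / 2), // fades 1 -> 0
    incoming: Math.sin((x * Math.PI) / 2), // fades 0 -> 1
  };
}
```

A linear crossfade loses roughly 3 dB of perceived loudness halfway through; the cosine/sine pair avoids that, which is why it's the default choice for musical transitions.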

V2.0 (In Development): Backend-powered system using Python (librosa, Demucs) for 3-4 track mashups with AI stem separation. Targets users who need pro-quality output and are willing to wait 1-2 minutes for server processing.

Key Technical Insights

  • Audio Engineering ≠ Just Code — Understanding music theory (key signatures, time signatures, phrase structure) was essential. BPM detection is unreliable without understanding downbeats vs. arbitrary beats.
  • Quality is Subjective — Built a 7-factor scoring algorithm, but what sounds 'good' varies by genre. The 'vibe' selector lets users override the algorithm when their taste differs.
  • Constraints Drive Creativity — Browser limitations pushed toward client-side ML and efficient algorithms. Running full audio analysis in JavaScript required optimizations I wouldn't have explored with unlimited server resources.
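The downbeat insight above can be sketched in code: raw beat detection returns every beat, but a musical splice should land on beat 1 of a bar. This helper is a hypothetical illustration (the project's actual logic isn't shown here), assuming the analysis step has already produced the tempo and the time of the first downbeat.

```typescript
// Snap a candidate splice time to the nearest downbeat, given the tempo,
// the time of the first detected downbeat, and the time signature.
// Illustrative sketch; parameter names are assumptions, not the app's API.

function snapToDownbeat(
  candidateSec: number,
  bpm: number,
  firstDownbeatSec: number, // time of the first detected downbeat
  beatsPerBar = 4           // assume 4/4 unless analysis says otherwise
): number {
  const barSec = (60 / bpm) * beatsPerBar; // duration of one bar
  const bars = Math.round((candidateSec - firstDownbeatSec) / barSec);
  return firstDownbeatSec + Math.max(0, bars) * barSec;
}
```

Without this step, a splice can land on beat 3 of a bar, and the mashup feels subtly "off" even when the BPMs match perfectly.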