
Rhythm Platform

Turning organizational data into team intelligence and expert discovery.

Full-Stack · Django · React · AI · Enterprise
MVP In Progress

Problem

HR teams struggle to understand team dynamics, identify expertise, and track productivity. Traditional approaches rely on surveys and manual data collection — time-consuming, inaccurate, and incomplete. Building team intelligence requires understanding who has expertise in what areas, how productive the team really is, and where collaboration bottlenecks exist.

Solution

Rhythm Platform combines GitHub data analysis with AI-powered insights to create a comprehensive people intelligence system. It analyzes code contributions and pull request patterns, identifies expertise domains automatically using Claude API, and tracks productivity metrics like focus time, meeting load, and flow state — all without requiring any additional data collection from teams.

Impact

Analyzed 500+ GitHub repositories. Profiled 50+ team members with expertise mapping. Expert recommendations were validated by HR teams at 95%+ accuracy.

Tech Stack

React · Django · Claude API · PostgreSQL · TypeScript · Tailwind CSS

Architecture

Rhythm follows a clean separation between data extraction, AI analysis, and presentation. The frontend is built with React and TypeScript for real-time dashboards. A Django REST API handles business logic and data orchestration. Claude API powers the intelligence layer — analyzing developer profiles, recommending experts based on skill match, and providing context about team capabilities. GitHub integration extracts contribution data, and PostgreSQL stores everything for historical tracking.
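The layering described above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: the names (`DeveloperProfile`, `extract`, `analyze`) are hypothetical, and the analysis step only renders the kind of contribution summary the intelligence layer would receive.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperProfile:
    """What the intelligence layer sees: contribution patterns, never code."""
    name: str
    repos: dict[str, int] = field(default_factory=dict)  # repo name -> event count

def extract(events: list[dict]) -> list[DeveloperProfile]:
    """Extraction layer: fold raw GitHub events into per-developer profiles."""
    profiles: dict[str, DeveloperProfile] = {}
    for e in events:
        p = profiles.setdefault(e["author"], DeveloperProfile(e["author"]))
        p.repos[e["repo"]] = p.repos.get(e["repo"], 0) + 1
    return list(profiles.values())

def analyze(profile: DeveloperProfile) -> str:
    """Intelligence layer stand-in: in production this summary would be
    part of the context sent to Claude for expertise analysis."""
    top_repo = max(profile.repos, key=profile.repos.get)
    return f"{profile.name}: most active in {top_repo}"
```

Keeping extraction and analysis as separate steps means the GitHub integration can be re-run or swapped without touching the AI layer, and the database only ever stores the distilled profiles.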

GitHub Intelligence

The core data source is GitHub — most developers already use it, so there's no additional data collection overhead. The system analyzes code contributions, pull request patterns, and collaboration flows to identify expertise domains automatically. Privacy is built in: only repository names and contribution patterns are analyzed, never actual code content.
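The privacy guarantee is easiest to enforce as an explicit boundary: strip every incoming payload down to contribution metadata before anything is stored or analyzed. A rough sketch of that filter, with illustrative field names (the real allow-list would depend on the GitHub API payloads consumed):

```python
# Fields considered safe contribution metadata; everything else
# (diffs, patches, file contents, commit messages) is dropped.
ALLOWED_FIELDS = {"repo", "author", "language", "additions", "deletions"}

def sanitize(event: dict) -> dict:
    """Privacy boundary: keep only repo names and contribution counts,
    never actual code content."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
```

Because the filter is a single choke point, auditing what data the system can ever see reduces to reviewing one allow-list.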

AI-Powered Expert Finder

Finding the right expert in a large organization is a surprisingly painful problem. The expert finder uses Claude API to analyze developer profiles and recommend matches based on skill requirements. It provides context about team capabilities and explains why someone is a good match — not just that they are. The combination of algorithmic metrics (lines of code, PR frequency) with AI insights (expertise analysis) produces recommendations that HR teams validated at 95%+ accuracy.
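As noted under Key Technical Decisions, the model needs specific context to perform well. One way to picture that is the prompt assembly step: algorithmic metrics are rendered into a structured context string, which in production would be sent to Claude via the Anthropic API. The function and profile fields below are hypothetical illustrations of that idea.

```python
def build_expert_prompt(requirement: str, profiles: list[dict]) -> str:
    """Assemble the context the model receives: the skill requirement
    plus per-developer contribution summaries (metrics, not code)."""
    lines = [f"Find the best expert for: {requirement}", "", "Candidates:"]
    for p in profiles:
        lines.append(
            f"- {p['name']}: top repos {', '.join(p['repos'])}; "
            f"{p['prs']} PRs merged in the last 90 days"
        )
    lines.append("")
    lines.append("Recommend one candidate and explain why they match.")
    return "\n".join(lines)
```

Asking the model to explain the match is what turns a ranked list into a recommendation an HR team can actually act on.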

Productivity Metrics

The dashboard tracks several key metrics: Flow Index measures the share of the day spent in deep work versus meetings. Maker vs. Manager time splits the week between heads-down building and coordination work. Calendar integration analyzes meeting load. Together, these create a picture of how a team actually works — not how they say they work in surveys.
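One plausible formulation of the Flow Index (an assumption for illustration, not the platform's exact formula): the fraction of the workday spent in uninterrupted free blocks long enough for deep work, given the gaps between calendar events.

```python
def flow_index(free_blocks_hours: list[float],
               workday_hours: float = 8.0,
               deep_work_threshold: float = 2.0) -> float:
    """Share of the workday spent in blocks long enough for deep work.
    Blocks shorter than the threshold count as fragmented time, not flow."""
    deep = sum(b for b in free_blocks_hours if b >= deep_work_threshold)
    return deep / workday_hours
```

Under this definition, four one-hour gaps between meetings score a zero: plenty of nominal free time, none of it usable for focused work, which is exactly the distinction surveys tend to miss.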

Key Technical Decisions

  • GitHub as primary data source — No additional data collection required from teams. Developers already use it, making adoption friction near zero.
  • Claude API for semantic analysis — LLMs understand code context and expertise in ways that keyword matching can't. But they need specific context to perform well; generic prompts produce mediocre results.
  • Batch processing with caching — GitHub analysis runs as background jobs with an incremental update strategy. Caching frequently accessed metrics keeps dashboards fast.
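The caching decision above can be sketched as a small TTL cache in front of the metric computations. This is a minimal, hypothetical version (the real system would likely back this with Redis or the Django cache framework rather than an in-process dict):

```python
import time

class MetricCache:
    """Recompute a dashboard metric only when its cached value has expired."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}  # key -> (timestamp, value)

    def get_or_compute(self, key: str, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # fresh enough: skip the expensive recomputation
        value = compute()
        self._store[key] = (now, value)
        return value
```

The same shape applies to the Claude calls: cache per-repository analysis results and only re-analyze repositories with new activity, so API cost scales with change volume rather than repository count.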

What I Learned

  • Start with user research. I built features I assumed HR teams needed. Some were less valuable than expected. Talking to users earlier would have saved significant development time.
  • Privacy is a feature, not a constraint. When dealing with HR and team data, privacy concerns need to be front and center. Building privacy-by-design from day one is far easier than retrofitting it.
  • Simple workflows beat feature-rich dashboards. The basic expert finder was more valuable to users than the complex analytics dashboard. Start with the simplest workflow that solves the problem.
  • API cost management matters. Claude API calls add up quickly when analyzing hundreds of repositories. Batch processing, caching, and incremental updates are essential for keeping costs predictable.