Confidential client / 2024–2025

Brand monitoring, structured — not a wall of links.

A social-listening backend that polls public conversation for tracked keywords across multiple languages, runs sentiment analysis through OpenAI, and surfaces what matters to ops teams — built so a non-technical team could see what’s being said without scrolling Twitter all day.

Role
Full-Stack Developer
Stack
Express 5 · TypeScript · Next.js 16 · NextAuth v5 · Supabase · OpenAI · TwitterAPI.io · Resend · Helmet · express-rate-limit
Tags
Backend Systems · AI Pipelines · Scheduled Jobs · Sentiment Analysis · Brand Monitoring
§ 01 / Context

Context

The team needed to track conversation around a defined set of topics across multiple languages, with real classification (sentiment, narrative themes) — not just a feed of links. Manual monitoring didn’t scale past a few keywords. Off-the-shelf social listeners cost more than the budget allowed and didn’t speak the languages they cared about.

§ 02 / Approach

Approach

Built a polling service in Express 5 + TypeScript that pulls tweets every 30 minutes per tracked keyword across two languages (English and Farsi), with configurable filters (minimum 3 favorites, minimum 1 retweet, minimum 300 followers) to cut noise. Each new tweet is dedup-checked against the database, then queued for sentiment analysis through OpenAI with structured-output prompting — sentiment, narrative theme, urgency, all returned as typed JSON. Per-keyword error handling means one bad keyword doesn’t take down the cycle.

The frontend (Next.js 16 + NextAuth v5) renders dashboards with daily briefs, narrative breakdowns, and per-keyword analytics. Cost visibility was a first-class feature — every cycle reports estimated API cost so ops can tune the system economically. Supabase RLS protects admin-only views; the admin allow-list is itself an RLS-protected table.
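
The classification step can be sketched roughly as below — a minimal sketch assuming the OpenAI Node SDK’s JSON-schema response format; the model name, prompt, and field names are illustrative, not the production values:

```typescript
import OpenAI from "openai";

// Shape the pipeline stores per tweet. Field names are illustrative.
interface TweetAnalysis {
  sentiment: "positive" | "neutral" | "negative";
  narrative_theme: string;
  urgency: "low" | "medium" | "high";
}

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function classifyTweet(text: string): Promise<TweetAnalysis> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumption: any structured-output-capable model
    messages: [
      {
        role: "system",
        content:
          "Classify the tweet for a brand-monitoring dashboard. " +
          "Return sentiment, narrative theme, and urgency.",
      },
      { role: "user", content: text },
    ],
    // Structured outputs: the model is constrained to this JSON schema,
    // so the response parses straight into TweetAnalysis without cleanup.
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "tweet_analysis",
        strict: true,
        schema: {
          type: "object",
          properties: {
            sentiment: { type: "string", enum: ["positive", "neutral", "negative"] },
            narrative_theme: { type: "string" },
            urgency: { type: "string", enum: ["low", "medium", "high"] },
          },
          required: ["sentiment", "narrative_theme", "urgency"],
          additionalProperties: false,
        },
      },
    },
  });

  return JSON.parse(completion.choices[0].message.content ?? "{}") as TweetAnalysis;
}
```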

§ 03 / Outcome

Outcome

The team now has automated, structured visibility into the conversations that matter to them. Daily briefs hit their inbox. Surprises get flagged within the polling cycle, not days later from manual scrolling.

§ 04 / Metrics

What it does

  • 01 · 30-minute polling cycle across 2 languages (English + Farsi) with configurable engagement-threshold filters
  • 02 · Per-cycle cost telemetry — fetched, new, duplicate, errors, estimated API cost — surfaced for ops tuning (sketched below)
  • 03 · Replaced manual scrolling with a structured, cost-visible pipeline the ops team can trust day to day
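
A rough shape for that per-cycle report — the type, names, and the idea of passing per-token rates in are assumptions for illustration, not the project’s actual schema or pricing:

```typescript
// Hypothetical per-cycle report shape; field names are illustrative only.
interface CycleReport {
  keyword: string;
  fetched: number;     // tweets returned by the search call
  fresh: number;       // passed the dedup check and were queued for analysis
  duplicates: number;  // already present in the database
  errors: number;      // per-keyword failures that were caught, not fatal
  estimatedCostUsd: number;
}

// Token prices vary by model and change over time, so they are passed in
// rather than hard-coded; the cycle just multiplies usage by the configured rates.
function estimateCost(
  usage: { promptTokens: number; completionTokens: number },
  rates: { promptPerMillionUsd: number; completionPerMillionUsd: number },
): number {
  return (
    (usage.promptTokens / 1_000_000) * rates.promptPerMillionUsd +
    (usage.completionTokens / 1_000_000) * rates.completionPerMillionUsd
  );
}
```
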
§ / Building something similar?

Tell us what you’re working on.

We’re happy to talk through what we learned on this project and whether we’re the right partner for yours.