Alpstein isn't just a dashboard—it's a distributed system built with microservices, real-time data pipelines, and AI analysis working in concert. Here's how information flows from news sources to actionable insights.
The journey starts with specialized scrapers monitoring crypto news sources around the clock. Three scraping services run continuously:
These scrapers export Prometheus metrics and are monitored with Grafana dashboards and Jaeger traces for performance tracking. Every scraped article gets timestamped and queued for processing.
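The timestamp-and-queue step can be sketched in a few lines of Go. This is illustrative, not Alpstein's actual code: the `Article` fields are hypothetical, and the buffered channel stands in for whatever shared queue (e.g. a Redis list or stream) the real scrapers write to.

```go
package main

import (
	"fmt"
	"time"
)

// Article is a hypothetical shape for a scraped item; the field
// names are illustrative, not Alpstein's actual schema.
type Article struct {
	Source    string
	URL       string
	Title     string
	ScrapedAt time.Time
}

// enqueue stamps the article and hands it to the processing queue.
// Here the queue is a buffered channel; in production it would be
// a shared store visible to all scraper and processor instances.
func enqueue(queue chan<- Article, a Article) {
	a.ScrapedAt = time.Now().UTC()
	queue <- a
}

func main() {
	queue := make(chan Article, 100)
	enqueue(queue, Article{Source: "example-news", URL: "https://example.com/post", Title: "BTC breaks out"})
	got := <-queue
	fmt.Println(got.Title, !got.ScrapedAt.IsZero())
}
```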
Raw articles flow into two parallel services that prepare them for AI analysis:
Both services cache their work in Redis for fast access, ensuring the same article isn't processed twice and responses stay lightning-fast.
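The "never process the same article twice" guarantee comes down to a content-keyed cache check. A minimal sketch, with the caveat that the real service keys into Redis (e.g. a `SETNX`-style write with a TTL) so every replica shares the same view; here an in-memory map stands in for the cache:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// Deduper mimics the Redis-backed cache with an in-memory map. In
// production the key would live in Redis with a TTL so all service
// replicas agree on what has already been processed.
type Deduper struct {
	mu   sync.Mutex
	seen map[string]bool
}

func NewDeduper() *Deduper { return &Deduper{seen: make(map[string]bool)} }

// Seen hashes the article body and reports whether it was already
// processed, marking it as seen either way.
func (d *Deduper) Seen(body string) bool {
	sum := sha256.Sum256([]byte(body))
	key := hex.EncodeToString(sum[:])
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[key] {
		return true
	}
	d.seen[key] = true
	return false
}

func main() {
	d := NewDeduper()
	fmt.Println(d.Seen("ETH rallies on ETF news")) // false: first sighting, process it
	fmt.Println(d.Seen("ETH rallies on ETF news")) // true: duplicate, skip
}
```

Hashing the body (rather than the URL) also catches the same story syndicated under different links.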
The processed articles, combined with live market data from Binance (prices, RSI, EMA, SMA, volume), flow into the OpenAI-LLM service. This is where the magic happens.
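For a sense of what those market inputs look like, here is how SMA and EMA are conventionally computed from a price series. This is the standard textbook formulation, not necessarily Alpstein's exact implementation (which presumably reads these values from Binance or computes them per candle):

```go
package main

import "fmt"

// SMA returns the simple moving average of the last n prices.
func SMA(prices []float64, n int) float64 {
	sum := 0.0
	for _, p := range prices[len(prices)-n:] {
		sum += p
	}
	return sum / float64(n)
}

// EMA returns the exponential moving average with period n, using
// the common convention of seeding with the SMA of the first n
// values, then weighting each newer price by k = 2/(n+1).
func EMA(prices []float64, n int) float64 {
	k := 2.0 / (float64(n) + 1.0)
	ema := SMA(prices[:n], n)
	for _, p := range prices[n:] {
		ema = p*k + ema*(1-k)
	}
	return ema
}

func main() {
	prices := []float64{100, 102, 101, 105, 107, 110}
	fmt.Printf("SMA(3)=%.2f EMA(3)=%.2f\n", SMA(prices, 3), EMA(prices, 3))
}
```

The EMA reacts faster to the newest prices, which is why trend-following signals usually pair it with the slower SMA.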
The AI doesn't just summarize—it analyzes. It considers:
The output is a structured opinion with transparent reasoning: position type (long/short/unclear), entry levels, take-profit zones, stop-loss points, and risk-reward ratios. Every recommendation explains why, not just what.
Everything gets stored for later analysis and audit trails:
Redis also uses Append-Only File (AOF) persistence, so even cache data survives server restarts.
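Enabling AOF is a two-line change in `redis.conf` (shown here with the common `everysec` fsync policy, which trades at most one second of writes for much lower latency; Alpstein's exact policy isn't stated):

```conf
appendonly yes
appendfsync everysec
```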
The GOLANG HTTP Server orchestrates everything, serving as the API gateway between the backend services and the Next.js frontend. It handles:
When new analysis is ready, it's pushed instantly over WebSocket. No polling. No delays. Just live data flowing to your screen.
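The push model boils down to a fan-out hub: one publisher, many subscribers, no client ever asking "anything new yet?". A minimal sketch of that pattern, where plain channels stand in for the WebSocket connections the real gateway would write to:

```go
package main

import (
	"fmt"
	"sync"
)

// Hub fans new analyses out to every connected client. In the real
// service each subscriber channel would feed a WebSocket connection;
// here they are plain channels to show the push (not poll) pattern.
type Hub struct {
	mu   sync.Mutex
	subs []chan string
}

// Subscribe registers a client and returns its delivery channel.
func (h *Hub) Subscribe() <-chan string {
	ch := make(chan string, 8)
	h.mu.Lock()
	h.subs = append(h.subs, ch)
	h.mu.Unlock()
	return ch
}

// Publish pushes a finished analysis to all subscribers immediately.
func (h *Hub) Publish(msg string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for _, ch := range h.subs {
		select {
		case ch <- msg: // delivered
		default: // slow client: drop rather than block the pipeline
		}
	}
}

func main() {
	h := &Hub{}
	a, b := h.Subscribe(), h.Subscribe()
	h.Publish(`{"symbol":"BTCUSDT","position":"long"}`)
	fmt.Println(<-a == <-b) // both clients receive the same push
}
```

The non-blocking send is the important design choice: one stalled browser tab must never hold up delivery to everyone else.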
Running a distributed system means things can break in unexpected ways. Alpstein uses a full observability stack:
If something breaks, you see exactly where and why. If performance degrades, you know which service is the bottleneck.
Alpstein runs in production on a Hetzner VM, with every service containerized via Docker and deployed with a CI/CD pipeline:
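One representative slice of that setup is a multi-stage Docker build for a Go service. The paths and base images below are hypothetical, not Alpstein's actual Dockerfile, but the pattern (compile in a full Go image, ship only the static binary) is what keeps production images small:

```dockerfile
# Build stage: compile a static binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/server ./cmd/server

# Runtime stage: ship only the binary, no toolchain.
FROM gcr.io/distroless/static
COPY --from=build /bin/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```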
The result? A production-grade platform that handles real traffic, scales horizontally, and stays available 24/7.
Every piece serves a purpose. Microservices allow independent scaling—if scraping load increases, only those services scale up. Redis sorted sets enable distributed rate limiting that works across multiple backend instances. The RAG service ensures AI recommendations have historical context, not just current headlines.
It's built to be reliable, observable, and fast—because in crypto markets, a few seconds can mean the difference between catching a move and missing it entirely.