How Alpstein works under the hood

Alpstein isn't just a dashboard—it's a distributed system built with microservices, real-time data pipelines, and AI analysis working in concert. Here's how information flows from news sources to actionable insights.

1. News Collection & Scraping

The journey starts with specialized scrapers monitoring crypto news sources around the clock. Three scraping services run continuously:

  • CoinTelegraph scraper pulls breaking news and analysis from one of crypto's most trusted publications
  • The Block scraper tracks institutional movements and regulatory developments
  • NewsAPI handler aggregates stories from multiple sources for broader coverage

The scrapers expose metrics that Prometheus collects, with Grafana dashboards and Jaeger traces tracking their performance. Every scraped article gets timestamped and queued for processing.
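
As a rough sketch of this stage, here's what one scraper's fetch-and-queue loop might look like in Go. The polling interval, the Article fields, and the channel standing in for the queue are all assumptions for illustration, not Alpstein's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// Article is a simplified shape for a scraped story.
type Article struct {
	Source    string
	URL       string
	Title     string
	ScrapedAt time.Time
}

// scrapeLoop polls one source on a fixed interval and pushes timestamped
// articles onto the processing queue (a channel stands in for it here).
func scrapeLoop(source string, fetch func() ([]Article, error), queue chan<- Article) {
	ticker := time.NewTicker(5 * time.Minute) // interval is an assumption
	defer ticker.Stop()
	for range ticker.C {
		articles, err := fetch()
		if err != nil {
			fmt.Printf("scrape %s failed: %v\n", source, err)
			continue // skip this cycle; the error would also show up in metrics
		}
		for _, a := range articles {
			a.ScrapedAt = time.Now().UTC() // timestamp before queueing
			queue <- a
		}
	}
}

func main() {
	queue := make(chan Article, 100)
	go scrapeLoop("cointelegraph", func() ([]Article, error) {
		return []Article{{Source: "cointelegraph", Title: "stub headline"}}, nil
	}, queue)
	for a := range queue {
		fmt.Println("queued:", a.Title, a.ScrapedAt)
	}
}
```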

2. Article Processing Pipeline

Raw articles flow into two parallel services that prepare them for AI analysis:

  • LLM SERVICE preprocesses articles—cleaning HTML, extracting key information, and structuring data for the AI model
  • RAG SERVICE runs retrieval-augmented generation to add context. It searches through historical articles and market data to find relevant patterns and similar past events

Both services cache their work in Redis, ensuring the same article isn't processed twice and repeated lookups stay fast.
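
Here's a minimal Go sketch of that dedup check using the go-redis client; the key scheme and 24-hour TTL are assumptions, not the services' real configuration:

```go
package main

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// alreadyProcessed marks an article as seen using SETNX: the first caller
// sets the key and proceeds; later callers see it exists and skip the work.
func alreadyProcessed(ctx context.Context, rdb *redis.Client, articleURL string) (bool, error) {
	sum := sha256.Sum256([]byte(articleURL))
	key := "article:seen:" + hex.EncodeToString(sum[:])
	set, err := rdb.SetNX(ctx, key, 1, 24*time.Hour).Result()
	if err != nil {
		return false, err
	}
	return !set, nil // if SETNX didn't set, someone processed it already
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	dup, err := alreadyProcessed(context.Background(), rdb, "https://example.com/story")
	if err != nil {
		panic(err)
	}
	fmt.Println("duplicate:", dup)
}
```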

3. AI Analysis & Opinion Generation

The processed articles, combined with live market data from Binance (prices, RSI, EMA, SMA, volume), flow into the OpenAI-LLM service. This is where the magic happens.

The AI doesn't just summarize—it analyzes. It considers:

  • Sentiment from multiple news articles (are narratives aligned or conflicting?)
  • Technical indicators showing momentum, overbought/oversold conditions
  • Historical context from the RAG service (has this pattern happened before?)
  • Current market structure (support/resistance levels, volume patterns)

The output is a structured opinion with transparent reasoning: position type (long/short/unclear), entry levels, take-profit zones, stop-loss points, and risk-reward ratios. Every recommendation explains why, not just what.
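
To make that shape concrete, here's one plausible Go representation of such an opinion. The field names and the risk-reward arithmetic (written for a long position) are illustrative, not Alpstein's actual schema:

```go
package main

import "fmt"

// PositionType covers the three outcomes described above.
type PositionType string

const (
	Long    PositionType = "long"
	Short   PositionType = "short"
	Unclear PositionType = "unclear"
)

// Opinion is one plausible shape for the structured output.
type Opinion struct {
	Position   PositionType
	Entry      float64
	TakeProfit []float64 // one or more take-profit zones
	StopLoss   float64
	RiskReward float64
	Reasoning  string // the "why" behind the call, not just the "what"
}

// riskReward computes reward-to-risk for a long position:
// distance to the first target divided by distance to the stop.
func riskReward(entry, stop, target float64) float64 {
	risk := entry - stop
	if risk <= 0 {
		return 0 // for a long, the stop must sit below entry
	}
	return (target - entry) / risk
}

func main() {
	op := Opinion{
		Position:   Long,
		Entry:      64000,
		TakeProfit: []float64{66000, 68000},
		StopLoss:   63000,
		Reasoning:  "aligned bullish sentiment; RSI recovering from oversold",
	}
	op.RiskReward = riskReward(op.Entry, op.StopLoss, op.TakeProfit[0])
	fmt.Printf("%+v\n", op) // RiskReward: 2 (risking 1000 to make 2000)
}
```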

4. Data Storage & Persistence

Everything gets stored for later analysis and audit trails:

  • PostgreSQL holds the permanent record—articles, AI opinions, market snapshots, and metadata with timestamps
  • Redis acts as the fast cache layer, storing recent articles, rate-limiting data (using sorted sets for distributed rate limiting), and session information

Redis also uses Append-Only File (AOF) persistence, so even cache data survives server restarts.
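
A minimal sketch of the PostgreSQL write path using Go's standard database/sql package; the table and column names are invented for this example, not Alpstein's real schema:

```go
package main

import (
	"context"
	"database/sql"
	"time"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// saveOpinion writes one AI opinion to PostgreSQL for the audit trail.
func saveOpinion(ctx context.Context, db *sql.DB, articleID int64, positionType, reasoning string) error {
	_, err := db.ExecContext(ctx,
		`INSERT INTO opinions (article_id, position_type, reasoning, created_at)
		 VALUES ($1, $2, $3, $4)`,
		articleID, positionType, reasoning, time.Now().UTC())
	return err
}

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/alpstein?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := saveOpinion(context.Background(), db, 42, "long", "example reasoning"); err != nil {
		panic(err)
	}
}
```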

5. Real-Time Updates to Frontend

The GOLANG HTTP Server orchestrates everything, serving as the API gateway between the backend services and the Next.js frontend. It handles:

  • Authentication via OAuth 2.0 with Google (JWT tokens for session management)
  • Rate limiting using Redis sorted sets (sliding window algorithm, distributed across instances; see the sketch after this list)
  • WebSocket connections for live updates—prices, news, and AI opinions stream to your dashboard
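
Here's roughly how a sliding-window limiter over a Redis sorted set works, sketched in Go with the go-redis client. The key scheme, limits, and two-step pipeline are assumptions; a production version would fold the check-and-add into one Lua script so concurrent requests can't slip past the limit:

```go
package main

import (
	"context"
	"fmt"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
)

// allow implements a sliding-window rate limiter on a Redis sorted set:
// members are request timestamps, scores are those same timestamps, so
// trimming by score drops everything outside the current window.
func allow(ctx context.Context, rdb *redis.Client, userID string, limit int64, window time.Duration) (bool, error) {
	key := "ratelimit:" + userID
	now := time.Now().UnixNano()
	cutoff := now - window.Nanoseconds()

	// Trim expired entries, then count what remains in the window.
	pipe := rdb.TxPipeline()
	pipe.ZRemRangeByScore(ctx, key, "0", strconv.FormatInt(cutoff, 10))
	count := pipe.ZCard(ctx, key)
	if _, err := pipe.Exec(ctx); err != nil {
		return false, err
	}
	if count.Val() >= limit {
		return false, nil // over the limit for this window
	}

	// Record this request and let the key expire with the window.
	pipe = rdb.TxPipeline()
	pipe.ZAdd(ctx, key, redis.Z{Score: float64(now), Member: now})
	pipe.Expire(ctx, key, window)
	_, err := pipe.Exec(ctx)
	return err == nil, err
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ok, err := allow(context.Background(), rdb, "user123", 100, time.Minute)
	fmt.Println("allowed:", ok, "err:", err)
}
```

Because the sorted set lives in Redis rather than in any one process's memory, every backend instance sees the same counts, which is what lets the limit hold across multiple instances.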

When new analysis is ready, it's pushed instantly over WebSocket. No polling. No delays. Just live data flowing to your screen.
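
A bare-bones version of that push model, sketched with the gorilla/websocket package (the library choice is an assumption; the register-and-broadcast pattern is the point):

```go
package main

import (
	"net/http"
	"sync"

	"github.com/gorilla/websocket"
)

var (
	// CheckOrigin is wide open here for brevity; production would verify it.
	upgrader = websocket.Upgrader{CheckOrigin: func(*http.Request) bool { return true }}
	mu       sync.Mutex
	clients  = map[*websocket.Conn]bool{}
)

// wsHandler upgrades the HTTP connection and registers the client.
func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	mu.Lock()
	clients[conn] = true
	mu.Unlock()
}

// broadcast pushes a finished analysis to every connected dashboard,
// so no client ever has to poll.
func broadcast(opinion any) {
	mu.Lock()
	defer mu.Unlock()
	for conn := range clients {
		if err := conn.WriteJSON(opinion); err != nil {
			conn.Close()
			delete(clients, conn) // drop dead connections
		}
	}
}

func main() {
	http.HandleFunc("/ws", wsHandler)
	http.ListenAndServe(":8080", nil)
}
```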

6. Monitoring & Observability

Running a distributed system means things can break in unexpected ways. Alpstein uses a full observability stack:

  • Prometheus collects metrics from every service (request rates, error rates, latencies)
  • Grafana visualizes those metrics in real-time dashboards
  • OpenTelemetry & Jaeger provide distributed tracing, letting you follow a single request as it flows through scrapers → processors → AI → database → frontend

If something breaks, you see exactly where and why. If performance degrades, you know which service is the bottleneck.
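
For instance, exposing a request counter from a Go service with the official Prometheus client looks roughly like this; the metric name and labels are illustrative:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal counts handled requests by path and status.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "alpstein_http_requests_total",
		Help: "HTTP requests handled, by path and status.",
	},
	[]string{"path", "status"},
)

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/api/opinions", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues(r.URL.Path, "200").Inc()
		w.Write([]byte("ok"))
	})

	// Prometheus scrapes this endpoint on its own schedule.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```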

7. Deployment & Infrastructure

Alpstein runs in production on a Hetzner VM, with every service containerized via Docker and deployed with a CI/CD pipeline:

  • NGINX acts as the reverse proxy, routing traffic to the right services and handling TLS/SSL encryption
  • GitHub Actions manages continuous deployment—every code push triggers automated builds and deployments
  • Secrets are managed through environment variables and GitHub Secrets, never hardcoded

The result? A production-grade platform that handles real traffic, scales horizontally, and stays available 24/7.

Why this architecture?

Every piece serves a purpose. Microservices allow independent scaling—if scraping load increases, only those services scale up. Redis sorted sets enable distributed rate limiting that works across multiple backend instances. The RAG service ensures AI recommendations have historical context, not just current headlines.

It's built to be reliable, observable, and fast—because in crypto markets, a few seconds can mean the difference between catching a move and missing it entirely.