Velox Documentation

Explore the technical architecture and implementation details of a high-performance leveraged trading engine.

System Architecture

[Diagram] Velox system architecture:
• Price Poller (Binance WS)
• Liquidation Engine (in-memory state)
• HTTP API (Express 5)
• WS Server (real-time)
• DB Worker (persistence)
• Batch Uploader (TimescaleDB)

Velox follows a distributed microservices architecture with six services communicating through Redis streams. Real-time price data flows from Binance via WebSocket, gets processed by the liquidation engine with in-memory state, and triggers automatic liquidations based on leverage and risk parameters. All state changes are event-sourced for crash recovery.

Core Components & Implementation

Liquidation Engine

Order Processing

The engine processes orders through Redis streams with real-time price validation:

• Validates leverage (1–100x) and user balance
• Fetches current bid/ask from Redis
• Volume = (margin × leverage) / price
• Deducts required margin from balance
• Calculates liquidation price (90% loss threshold)
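The margin and volume math above can be sketched with the 10⁸ BigInt scale used elsewhere in the engine. `validateLeverage` and `computeQty` are illustrative names, not the engine's actual API:

```typescript
// Illustrative sketch of the order-validation math (names are assumptions).
const SCALE = 100_000_000n; // 10^8 fixed-point scale

// Leverage must be a whole number between 1x and 100x.
function validateLeverage(leverage: number): boolean {
  return Number.isInteger(leverage) && leverage >= 1 && leverage <= 100;
}

// volume = (margin × leverage) / price, with margin and price at 10^8 scale.
function computeQty(marginInt: bigint, leverage: bigint, priceInt: bigint): bigint {
  return (marginInt * leverage * SCALE) / priceInt;
}
```

For example, $100 of margin at 10x against a $50,000 price yields 0.02 BTC (2_000_000n at 10⁸ scale).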

Liquidation Mechanism

Runs on every price tick with priority-ordered triggers:

• Margin Call: remaining margin ≤ 10% of initial
• Stop Loss: user-defined loss threshold reached
• Take Profit: user-defined profit target hit
• LONG closes at BID, SHORT closes at ASK
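The priority order above can be sketched as a pure function evaluated on each tick. The `Position` shape and `checkTriggers` name are assumptions for illustration, not the engine's actual types:

```typescript
// Hedged sketch of the priority-ordered trigger check run on each price tick.
type Position = {
  type: "LONG" | "SHORT";
  marginInt: bigint;    // initial margin, 10^8 scale
  pnlInt: bigint;       // current PnL, 10^8 scale (negative = loss)
  stopLossInt?: bigint; // optional price thresholds, 10^8 scale
  takeProfitInt?: bigint;
};

function checkTriggers(pos: Position, bidInt: bigint, askInt: bigint): string | null {
  // LONG closes at BID, SHORT closes at ASK
  const closeInt = pos.type === "LONG" ? bidInt : askInt;

  // 1. Margin call: remaining margin ≤ 10% of initial
  if (pos.marginInt + pos.pnlInt <= pos.marginInt / 10n) return "MARGIN_CALL";

  // 2. Stop loss: user-defined loss threshold reached
  if (pos.stopLossInt !== undefined &&
      (pos.type === "LONG" ? closeInt <= pos.stopLossInt : closeInt >= pos.stopLossInt))
    return "STOP_LOSS";

  // 3. Take profit: user-defined profit target hit
  if (pos.takeProfitInt !== undefined &&
      (pos.type === "LONG" ? closeInt >= pos.takeProfitInt : closeInt <= pos.takeProfitInt))
    return "TAKE_PROFIT";

  return null;
}
```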

Liquidation Price Formulas

Long Position
Pliq = P₀ × (1 − 90 / (100 × L))
Liquidates when the price drops to this level

Short Position
Pliq = P₀ × (1 + 90 / (100 × L))
Liquidates when the price rises to this level

where P₀ is the entry price and L is the leverage.
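As a quick sanity check, the formulas can be computed directly. Plain floats are used here for readability; the engine itself works in 10⁸-scaled BigInt:

```typescript
// Liquidation price at a 90% loss threshold (matches the formulas above).
function longLiqPrice(entry: number, leverage: number): number {
  return entry * (1 - 90 / (100 * leverage));
}
function shortLiqPrice(entry: number, leverage: number): number {
  return entry * (1 + 90 / (100 * leverage));
}
```

At 10x, a long opened at $100 liquidates near $91, and the equivalent short near $109; at 1x the long survives all the way down to $10.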

Redis Stream Communication

request:stream
• Message types: PRICE_UPDATE · PLACE_ORDER · CLOSE_ORDER · REGISTER_USER · GET_BALANCE · GET_USER_ORDERS
• Consumed by the Liquidation Engine

response:stream
• Order confirmations, liquidation events, balance responses
• Consumed by DB Worker

response:queue
• Request-response callbacks (5-second timeout)
• Entries deleted after dispatch
• Consumed by HTTP API

Price Poller & WebSocket Integration

Binance Connection

Connects to Binance for real-time trade feeds on three assets:

wss://stream.binance.com:9443/ws
Subscriptions: btcusdt@trade · ethusdt@trade · solusdt@trade

House Spread (0.1%)

A market-maker spread is applied to all prices before distribution:

bidPrice = price × (1 − 0.001)
askPrice = price × (1 + 0.001)
Honest price (no spread) is sent to TimescaleDB for candle aggregation
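A minimal sketch of the spread calculation in 10⁸ BigInt form. The basis-point constants are an illustrative encoding of the 0.1% factor, not the poller's actual code:

```typescript
// 0.1% house spread, expressed in basis points to stay in integer math.
const SPREAD_BPS = 10n;  // 0.1% = 10 basis points
const BPS = 10_000n;

// priceInt is the honest price at 10^8 scale; bid/ask get the spread applied.
function applySpread(priceInt: bigint): { bidInt: bigint; askInt: bigint } {
  return {
    bidInt: (priceInt * (BPS - SPREAD_BPS)) / BPS,
    askInt: (priceInt * (BPS + SPREAD_BPS)) / BPS,
  };
}
```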

Web Server & API Layer

Authentication

Dual authentication with JWT in httpOnly cookies:

• Password-based: bcrypt hashing + JWT
• Magic link: Resend API + Redis token store
• httpOnly secure cookies (no localStorage)
• One-time WS tickets (30s TTL)

EngineClient Service

Singleton for engine communication via Redis:

• Publishes to request:stream (XADD)
• Subscribes to response:queue for callbacks
• 5-second timeout per request
• Checks engine READY status before each call

API Reference

Authentication

POST /api/v1/user/signup
POST /api/v1/user/signin
POST /api/v1/user/magic-link
GET /api/v1/user/auth/verify
POST /api/v1/user/signout
GET /api/v1/user/me
GET /api/v1/user/ws-ticket

Trading

POST /api/v1/order/open
POST /api/v1/order/close/:orderId
GET /api/v1/order/user/orders
GET /api/v1/order/user/balance
GET /api/v1/order/:orderId

Market Data

GET /candles?asset=&duration=
Timeframes: 30s · 1m · 5m · 15m · 1h · 4h · 1d

WebSocket

WS ws://localhost:3006
Auth flow: GET /ws-ticket → connect → send auth message

Database Schema

User

id uuid pk
email unique
phone bigint unique
password string

ClosedOrder

orderId pk
userId, asset, orderType
leverage int
marginInt, executionPriceInt
closePriceInt, qtyInt
stopLossInt, takeProfitInt
finalPnLInt bigint
closeReason string?
createdAt, closedAt

Trade

id cuid
time timestamptz
symbol string
priceInt bigint 10⁸
qtyInt bigint 10⁸
TimescaleDB hypertable

Snapshot

id uuid pk
timestamp datetime
lastStreamId string
data json
Crash recovery snapshots
Assets: BTCUSDT · ETHUSDT · SOLUSDT
Order Types: LONG · SHORT
Status: OPEN · CLOSED · LIQUIDATED

Technical Deep Dive

Real-time Data Flow

Price Update Sequence

Binance WS (trade feed) → Price Poller (adds 0.1% spread) → Redis Stream (PRICE_UPDATE) → Engine (checks liquidations) → WS Server (broadcasts to clients)

Order Creation Sequence

Frontend (user places order) → HTTP API (validates + authenticates) → request:stream (PLACE_ORDER) → Engine (checks balance, executes) → response:queue (callback to API)

Request-Response Architecture

Async Communication Pattern

The system uses an async request-response pattern with Redis streams and an in-memory callback registry. This enables non-blocking communication while maintaining request-response semantics.

```typescript
// 1. HTTP API sends the order to the engine
const requestId = crypto.randomUUID();
await redis.xadd("request:stream", "*",
  "requestId", requestId,
  "type", "PLACE_ORDER",
  "payload", JSON.stringify({ userId, asset, leverage, qty })
);

// 2. Simultaneously register a callback with a timeout
const response = await subscriber.waitForMessage(requestId);
// → Promise resolves when the engine publishes to response:queue
// → Rejects after 5 seconds if no response
```

Callback Registration

```typescript
waitForMessage(id: string) {
  return new Promise((resolve, reject) => {
    this.callbacks[id] = resolve;
    setTimeout(() => {
      if (this.callbacks[id]) {
        delete this.callbacks[id];
        reject(new Error("Timeout"));
      }
    }, 5000);
  });
}
```

Engine Processing Loop

```typescript
while (true) {
  const entries = await redis.xread(
    "BLOCK", 0, "STREAMS", "request:stream", lastId
  );
  for (const [id, data] of entries) {
    switch (data.type) {
      case "PLACE_ORDER":
        await processOrder(data.payload);
        break;
      case "PRICE_UPDATE":
        await checkLiquidations(data.payload);
        break;
    }
    lastId = id;
  }
}
```

Crash Recovery & Event Sourcing

Snapshot + Replay Strategy

The engine persists full state snapshots to PostgreSQL every 15 seconds, including the last processed stream entry ID.

• Snapshot: users, balances, all orders
• lastStreamId: replay cursor
• On restart: load snapshot → replay events
• Max data loss: 15 seconds
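A simplified sketch of the replay step. The snapshot shape is an assumption, and stream-ID comparison is treated as lexicographic for brevity (real Redis IDs need per-segment numeric comparison):

```typescript
type Event = { id: string; type: string; payload: unknown };

// Restore state from the latest snapshot, then re-apply only the events
// the snapshot has not already absorbed.
function replay<S>(
  snapshot: { lastStreamId: string; state: S },
  events: Event[],
  apply: (state: S, event: Event) => void
): S {
  const state = structuredClone(snapshot.state); // never mutate the snapshot
  for (const event of events) {
    if (event.id <= snapshot.lastStreamId) continue; // already in snapshot
    apply(state, event);
  }
  return state;
}
```

Because the snapshot records the cursor alongside the state, every event is applied exactly once even if the engine crashes mid-replay and restarts.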

Engine Lifecycle

STARTING: connecting to Redis & PostgreSQL
REPLAYING: loading snapshot + replaying stream events
READY: accepting requests, processing prices
SHUTDOWN: graceful shutdown, final snapshot
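The lifecycle can be viewed as a small state machine. The transition table below is inferred from the stage descriptions, not taken from the engine's source:

```typescript
// Illustrative state machine for the engine lifecycle (transitions assumed).
type EngineState = "STARTING" | "REPLAYING" | "READY" | "SHUTDOWN";

const transitions: Record<EngineState, EngineState[]> = {
  STARTING: ["REPLAYING"],
  REPLAYING: ["READY"],
  READY: ["SHUTDOWN"],
  SHUTDOWN: [],
};

function canTransition(from: EngineState, to: EngineState): boolean {
  return transitions[from].includes(to);
}
```

This is why the EngineClient checks for READY before each call: requests arriving during STARTING or REPLAYING would observe incomplete state.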

BigInt Arithmetic (10⁸ Scale)

Why BigInt?

JavaScript's Number type uses IEEE 754 double-precision floating point, which introduces rounding errors in financial calculations. All prices, quantities, and margins therefore use BigInt with a 10⁸ scale factor.

```
PRICE_SCALE = 100_000_000n
54820.50  → 5_482_050_000_000n
0.001 BTC → 100_000n
```

Core Operations

```
toInteger(54820.50)       → 5482050000000n
toDecimal(5482050000000n) → 54820.50
multiply(a, b)            → (a × b) / SCALE
divide(a, b)              → (a × SCALE) / b
calculateLongPnL(current, entry, qty) → (current − entry) × qty / SCALE
```
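A runnable sketch of these helpers under the 10⁸ convention. Signatures are assumptions based on the list above, not the actual price-utils package:

```typescript
const SCALE = 100_000_000n; // 10^8 fixed-point scale

// Decimal number → scaled BigInt (exact for values within double precision).
function toInteger(x: number): bigint {
  return BigInt(Math.round(x * 1e8));
}

// Scaled BigInt → decimal number (for display only; may lose precision).
function toDecimal(x: bigint): number {
  return Number(x) / 1e8;
}

// Fixed-point multiply/divide keep results at the same 10^8 scale.
function multiply(a: bigint, b: bigint): bigint {
  return (a * b) / SCALE;
}
function divide(a: bigint, b: bigint): bigint {
  return (a * SCALE) / b;
}

// PnL for a long: (current − entry) × qty, rescaled back to 10^8.
function calculateLongPnL(current: bigint, entry: bigint, qty: bigint): bigint {
  return ((current - entry) * qty) / SCALE;
}
```

Note that `multiply` and `divide` rescale after every operation, so intermediate products never silently drift away from the 10⁸ convention.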

Docker & Infrastructure

docker-compose.yml

```yaml
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg16
    ports:
      - "5433:5432"
    environment:
      POSTGRES_DB: trading_db
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - timescale_data:/var/...
  redis:
    image: redis:7-alpine
    ports:
      - "6380:6379"
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
```

Service Ports

• Frontend (Next.js 16): 3000
• HTTP API (Express 5): 3005
• WS Server (WebSocket): 3006
• PostgreSQL (TimescaleDB): 5433
• Redis (Streams + Pub/Sub): 6380

Quick Start

```shell
docker compose up -d
cd packages/prisma-client && bunx prisma migrate deploy
bun start:dev
```

Monorepo Structure

Apps (frontend + 6 services)

apps/web/ — Next.js frontend
apps/http-backend/ — REST API gateway
apps/liquidation-engine/ — Stateful trading engine
apps/realtime-server/ — WebSocket server
apps/price-poller/ — Binance integration
apps/db-worker/ — Order persistence
apps/batch-uploader/ — Trade tick batching

Packages (shared libraries)

prisma-client/ — ORM schema + client
redis-client/ — Stream utilities + subscriber
redis-stream-types/ — Type-safe message definitions
price-utils/ — BigInt arithmetic (10⁸)
validation/ — Zod schemas + middleware
ui/ — Shared React components

Ready to explore?

Start trading with $1,000 in virtual funds on a platform built with event sourcing, in-memory state, and real-time liquidation.