AI Trading Platform

Interactive System Architecture & Workflow Documentation

5-Layer System Architecture

Microservices architecture with clear separation: presentation layer (React), API gateway (Flask), message queue (RabbitMQ), ML workers (Python), and storage (PostgreSQL).

graph TB
  subgraph L1["Frontend - React"]
    U1[Trader Dashboard]
    U2[Investor Dashboard]
    U3[Manager Dashboard]
    U4[Quant Dashboard]
  end
  subgraph L2["API Gateway - Flask"]
    A1["POST /predict"]
    A2["POST /trade-signal"]
    A3["POST /create-model"]
  end
  subgraph L3["Message Queue - RabbitMQ"]
    Q1[rpc_queue]
    Q2[result_queue]
  end
  subgraph L4["Workers - Python ML"]
    W1[Data Loader] --> W2[Feature Engine]
    W2 --> W3[AI Model]
    W3 --> W4[Signal Generator]
  end
  subgraph L5["Storage"]
    DB[(PostgreSQL)]
    FS[Model Weights]
  end
  L1 --> L2
  L2 --> L3
  L3 --> L4
  L4 --> L5
  style L1 fill:#E8F5E9,stroke:#4CAF50
  style L2 fill:#E3F2FD,stroke:#2196F3
  style L3 fill:#FFF3E0,stroke:#FF9800
  style L4 fill:#F3E5F5,stroke:#9C27B0
  style L5 fill:#FFEBEE,stroke:#F44336

Frontend

React/Next.js with TypeScript. Four user-segment dashboards with forms, charts, and real-time updates.

API Gateway

Flask REST API with Pydantic schema validation, JWT auth, and RabbitMQ RPC client.
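A minimal sketch of the gateway's request validation, using a plain dataclass as a simplified stand-in for the actual Pydantic schema. Field names follow the User JSON Configuration contract; the validation rules shown are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PredictRequest:
    # Subset of the User JSON Configuration fields
    user_segment: str
    stocks: list = field(default_factory=list)
    confidence_level: float = 0.7

    def validate(self) -> list:
        """Return a list of validation errors (empty means valid)."""
        errors = []
        if not self.stocks:
            errors.append("stocks must be a non-empty list")
        if not 0.0 < self.confidence_level <= 1.0:
            errors.append("confidence_level must be in (0, 1]")
        return errors

# A valid request passes; an invalid one would be rejected (e.g. HTTP 422)
ok = PredictRequest("short_term_trader", ["AAPL", "MSFT"], 0.7)
assert ok.validate() == []

bad = PredictRequest("short_term_trader", [], 1.5)
assert len(bad.validate()) == 2
```

In the real gateway, Pydantic raises a `ValidationError` automatically on malformed input; the gateway only forwards requests that pass schema validation to RabbitMQ.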

Message Queue

RabbitMQ decouples API from workers, enables horizontal scaling, and provides reliability.
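The decoupling follows the classic RPC-over-queue pattern: the API publishes a request tagged with a correlation ID to `rpc_queue` and waits for a matching reply on `result_queue`. The sketch below illustrates the pattern with in-memory stdlib queues standing in for the broker; the payload fields are illustrative.

```python
import queue
import threading

# In-memory stand-ins for the broker's queues (names mirror the diagram)
rpc_queue: queue.Queue = queue.Queue()
result_queue: queue.Queue = queue.Queue()

def worker():
    """Consume one prediction request and publish a result back."""
    req = rpc_queue.get()
    result = {
        "correlation_id": req["correlation_id"],  # lets the API match replies
        "predicted_price": 187.25,                # placeholder inference output
    }
    result_queue.put(result)

threading.Thread(target=worker, daemon=True).start()

# API side: publish a request, block until the correlated reply arrives
rpc_queue.put({"correlation_id": "abc-123", "stocks": ["AAPL"]})
reply = result_queue.get(timeout=5)
assert reply["correlation_id"] == "abc-123"
```

Because workers only ever pull from the queue, adding more worker processes scales throughput horizontally without touching the API, and unconsumed messages survive worker restarts when the broker persists them.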

ML Workers

Python workers with custom AI models (LSTM, CNN, Transformer), feature engineering, and signal generation.

Storage

PostgreSQL for metadata and audit trail, file system for model weights and historical CSV data.
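A sketch of what the audit-trail table could look like. SQLite is used here only to keep the example self-contained (production uses PostgreSQL), and the table and column names are assumptions, not the platform's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE predictions (
        id              INTEGER PRIMARY KEY,
        symbol          TEXT NOT NULL,
        predicted_price REAL,
        confidence      REAL,
        trade_signal    TEXT,
        created_at      TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Worker saves each prediction before publishing the result
conn.execute(
    "INSERT INTO predictions (symbol, predicted_price, confidence, trade_signal) "
    "VALUES (?, ?, ?, ?)",
    ("AAPL", 187.25, 0.85, "Enter"),
)

row = conn.execute("SELECT symbol, trade_signal FROM predictions").fetchone()
assert row == ("AAPL", "Enter")
```

Model weights stay on the file system (or object storage) rather than in the database; the table would only hold a path or identifier pointing at them.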

Prediction Request Flow

End-to-end sequence from user form submission to prediction result display, showing all system components and their interactions.

sequenceDiagram
  participant U as User
  participant FE as Frontend
  participant API as Flask API
  participant MQ as RabbitMQ
  participant W as Worker
  participant ML as LSTM Model
  participant DB as PostgreSQL
  U->>FE: Fill prediction form
  FE->>FE: Validate input
  FE->>API: POST /api/v1/predict
  API->>API: Validate schema
  API->>MQ: Publish to rpc_queue
  MQ->>W: Consume request
  W->>W: Load market data
  W->>W: Compute features
  W->>ML: Run AI inference
  ML->>W: Return prediction
  W->>W: Generate trade signal
  W->>W: Generate suggestion text
  W->>DB: Save prediction
  W->>MQ: Publish to result_queue
  MQ->>API: Consume result
  API->>FE: Return JSON response
  FE->>U: Display result and chart

JSON Input-Output Contract

Simple three-box diagram showing the complete JSON transformation: User Configuration → Processing Pipeline → System Response. This is the API contract, with the exact field names listed in the JSON Examples section below.

graph LR
  A["User JSON Configuration"] --> B["Processing Pipeline"]
  B --> C["System JSON Response"]
  style A fill:#e8f5e9,stroke:#4CAF50,stroke-width:2px
  style B fill:#f3e5f5,stroke:#9C27B0,stroke-width:2px
  style C fill:#e3f2fd,stroke:#2196F3,stroke-width:2px

The full request and response payloads are reproduced in the JSON Examples section below.

User Configuration (13 fields)

User segment, stocks list, timing (data frequency, prediction horizon, entry/exit windows), risk parameters (confidence level, stop loss, capital at risk), feature selection, sentiment analysis toggle, model preferences.

Processing Pipeline (6 steps)

Validate Schema → Load Market Data → Compute Technical Features (SMA, EMA, MACD, volatility) → Run AI Model → Generate Trade Signals → Calculate Risk Metrics.
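The technical-feature step can be sketched with minimal pure-Python versions of two of the named indicators; real workers would compute these over streaming market data (typically with pandas/NumPy), and the sample prices are invented.

```python
def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def ema(prices, window):
    """Exponential moving average, seeded with the first price."""
    k = 2 / (window + 1)
    value = prices[0]
    for p in prices[1:]:
        value = p * k + value * (1 - k)
    return value

def macd(prices):
    """MACD line: difference of the 12- and 26-period EMAs."""
    return ema(prices, 12) - ema(prices, 26)

closes = [184.0, 185.0, 186.0, 185.5, 187.0]
assert abs(sma(closes, 3) - (186.0 + 185.5 + 187.0) / 3) < 1e-9
```

Each feature becomes one column of the input window fed to the AI model in the "Run AI Model" step.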

System Response (8 fields)

Price prediction, confidence score, trade signal (Enter/Exit/Hold), direction (Long/Short), suggested exit time, position size, human-readable suggestion text, risk metrics (max loss, risk/reward ratio).
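The position size and `max_loss` fields can be derived from the request's risk parameters. A sketch, assuming an account capital of $35,000 (the contract above does not carry account capital explicitly; this value is chosen to reproduce the example response):

```python
def size_position(account_capital, capital_at_risk, entry_price, stop_loss):
    """Size a position so the stop-loss caps the dollar loss at
    capital_at_risk * account_capital."""
    risk_per_share = abs(entry_price - stop_loss)   # $3.50 in the example
    risk_budget = account_capital * capital_at_risk  # dollars put at risk
    shares = int(risk_budget / risk_per_share)
    max_loss = shares * risk_per_share
    return shares, max_loss

shares, max_loss = size_position(35_000, 0.02, 185.50, 182.00)
assert (shares, max_loss) == (200, 700.0)  # matches the example response
```

With these inputs, 2% of $35,000 is a $700 risk budget, and $3.50 of risk per share yields the 200-share position and $700 `max_loss` shown in the System JSON Response.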

End-to-End Flow

Complete JSON-to-JSON transformation showing exactly what the frontend sends and what the backend returns. This is the API contract between UI and ML pipeline.

JSON Examples

User JSON Configuration

{
  "user_segment": "short_term_trader",
  "stocks": ["AAPL", "MSFT"],
  "data_frequency": "1min",
  "prediction_horizon": "1h",
  "confidence_level": 0.7,
  "entry_window": {"start": "10:00", "end": "11:30"},
  "exit_window": {"end": "14:00"},
  "entry_price": 185.50,
  "stop_loss": 182.00,
  "capital_at_risk": 0.02,
  "features": ["SMA", "EMA", "MACD", "volatility"],
  "sentiment_enabled": true,
  "model_preferences": {
    "architecture": "LSTM",
    "epochs": 20
  }
}

System JSON Response

{
  "predicted_price": 187.25,
  "confidence": 0.85,
  "trade_signal": "Enter",
  "direction": "Long",
  "suggested_exit_time": "14:00",
  "position_size": 200,
  "suggestion_text": "Entering LONG on AAPL at $185.50 with stop-loss $182.00. Trade size: 200 shares. Confidence: 0.85.",
  "risk_metrics": {
    "max_loss": "$700",
    "risk_reward_ratio": 1.5
  }
}

Full Application Workflow - Inference + Creation

Complete state machine showing two workflows: Inference (synchronous predictions using custom-trained AI models) and Creation (asynchronous agent training with external market data). Users need zero AI knowledge: the platform handles all ML complexity and supports multiple architectures (LSTM, CNN, Transformer).

stateDiagram-v2
  [*] --> Idle: Application Start
  Idle --> Authenticating: User Opens App
  Authenticating --> Login: Not Authenticated
  Authenticating --> Dashboard: Token Valid
  Login --> Authenticating: Submit Credentials
  Dashboard --> InferenceForm: Run Prediction
  Dashboard --> AgentBuilder: Create AI Agent
  Dashboard --> HistoryView: View History
  Dashboard --> Logout: Logout
  Logout --> Idle: Invalidate JWT
  HistoryView --> Dashboard: Back
  state INFERENCE {
    InferenceForm --> SelectAgent: Choose Agent
    SelectAgent --> InferenceValidation: Submit
    InferenceValidation --> InferenceForm: Validation Failed
    InferenceValidation --> InferenceAPI: Validation Passed
    InferenceAPI --> RateLimitCheck: POST predict
    RateLimitCheck --> RateLimitError: Limit Exceeded
    RateLimitCheck --> CacheCheck: Limit OK
    RateLimitError --> WaitRetry: Return 429
    WaitRetry --> InferenceAPI: Retry
    CacheCheck --> CachedResult: Cache Hit
    CacheCheck --> InferenceQueue: Cache Miss
    InferenceQueue --> FetchLiveData: Publish to RabbitMQ
    FetchLiveData --> ComputeFeatures: Refinitiv API
    ComputeFeatures --> RunModel: SMA EMA MACD
    RunModel --> ConfidenceCheck: AI Model Prediction
    ConfidenceCheck --> LowConfidence: Below 0.60
    ConfidenceCheck --> GenerateSignal: Above 0.60
    LowConfidence --> GenerateSignal: Add Warning
    GenerateSignal --> PositionSizing: BUY SELL HOLD
    PositionSizing --> BuildSuggestion: Calculate Size
    BuildSuggestion --> SavePrediction: Generate Text
    SavePrediction --> PublishResult: Save to DB
    CachedResult --> DisplayResults: Return Cache
    PublishResult --> DisplayResults: Return via RabbitMQ
  }
  state CREATION {
    AgentBuilder --> ConfigureAgent: New Agent Form
    ConfigureAgent --> SelectEquity: Pick Stock or Equity
    SelectEquity --> SelectDataSource: Refinitiv or Bloomberg
    SelectDataSource --> SetDateRange: Choose Training Period
    SetDateRange --> SelectFeatures: Pick Indicators
    SelectFeatures --> SetRiskParams: Risk Preferences
    SetRiskParams --> ReviewConfig: Review Settings
    ReviewConfig --> ConfigureAgent: Edit Config
    ReviewConfig --> SubmitTraining: Confirm and Submit
    SubmitTraining --> TrainingQueue: POST create-agent
    TrainingQueue --> FetchHistoricalData: Publish to RabbitMQ
    FetchHistoricalData --> DataValidation: External API Call
    DataValidation --> TrainingFailed: Bad or Missing Data
    DataValidation --> InitializeModel: Data OK
    InitializeModel --> TrainingLoop: Build Model Architecture
    TrainingLoop --> EvaluateModel: Epochs Complete
    EvaluateModel --> TrainingFailed: Does Not Converge
    EvaluateModel --> SaveAgent: Meets Threshold
    TrainingFailed --> NotifyUserFailed: Set FAILED in DB
    NotifyUserFailed --> AgentBuilder: User Notified
    SaveAgent --> SetAgentReady: Save Weights to Storage
    SetAgentReady --> NotifyUserSuccess: Set READY in DB
    NotifyUserSuccess --> AgentReady: User Notified
    AgentReady --> Dashboard: Agent Available for Inference
  }
  DisplayResults --> Dashboard: Back to Dashboard

Inference (Synchronous)

User selects their custom-trained agent, submits prediction request. System fetches live data via Refinitiv, computes features, runs custom-trained AI model (LSTM, CNN, or Transformer architecture), returns signal in seconds.
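The signal step from the state machine can be sketched as follows: predictions below the 0.60 confidence floor still produce a signal but carry a warning. The decision rule (comparing the predicted price to the entry price) is an illustrative assumption, not the platform's actual model logic.

```python
def generate_signal(predicted_price, entry_price, confidence, floor=0.60):
    """Turn a prediction into an Enter signal with direction and,
    below the confidence floor, a warning flag."""
    direction = "Long" if predicted_price > entry_price else "Short"
    signal = {
        "trade_signal": "Enter",
        "direction": direction,
        "confidence": confidence,
    }
    if confidence < floor:
        signal["warning"] = "low confidence"  # LowConfidence state in the diagram
    return signal

sig = generate_signal(187.25, 185.50, 0.85)
assert sig["direction"] == "Long" and "warning" not in sig

weak = generate_signal(184.00, 185.50, 0.55)
assert weak["direction"] == "Short" and weak["warning"] == "low confidence"
```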

Creation (Asynchronous)

User configures agent (equity, data source, date range, features, risk params). Training job queued to RabbitMQ. Worker fetches historical data, trains AI model with selected architecture. Takes minutes to hours.

Training Lifecycle

Submit → Queue → Fetch Data → Validate → Initialize Model → Training Loop → Evaluate. On failure: set FAILED in the DB and notify the user. On success: save weights, set READY, notify the user.
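The lifecycle can be sketched as guarded status transitions. FAILED and READY are the terminal statuses named above; the intermediate status names are assumptions chosen to mirror the lifecycle steps.

```python
# Allowed next statuses for each training-job status; every non-terminal
# step may also fail.
ALLOWED = {
    "QUEUED": {"FETCHING", "FAILED"},
    "FETCHING": {"VALIDATING", "FAILED"},
    "VALIDATING": {"TRAINING", "FAILED"},
    "TRAINING": {"EVALUATING", "FAILED"},
    "EVALUATING": {"READY", "FAILED"},
}

def advance(current, nxt):
    """Move a job to its next status, rejecting illegal jumps."""
    if nxt not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

state = "QUEUED"
for step in ("FETCHING", "VALIDATING", "TRAINING", "EVALUATING", "READY"):
    state = advance(state, step)
assert state == "READY"
```

Persisting the status column on every transition is what lets the frontend poll for FAILED/READY and notify the user, since training runs asynchronously for minutes to hours.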

User Needs Zero AI Knowledge

The form is finance-focused: pick stocks, date ranges, indicators. System handles all ML complexity (architecture, hyperparameters, training). User only sees results and performance metrics.

Data Sources

System fetches data from Refinitiv or Bloomberg APIs automatically. Historical data for training, live data for inference. User never uploads data manually.

Connection: Creation feeds Inference

Agents created in the Creation workflow become available in the Inference workflow. Both share the same database - Creation writes agent weights, Inference reads them.