helium-server Crate
The Helium server is designed as a multi-mode worker system that can run different components independently or together, enabling flexible deployment strategies. Each worker mode serves a specific purpose in the overall system architecture.
Architecture
Worker Modes
The server supports six distinct worker modes:
| Worker Mode | Port | Description | Use Case |
|---|---|---|---|
| grpc | 50051 | gRPC API server | Main API for client applications and admin panels |
| subscribe_api | 8080 | RESTful subscription API | Public subscription endpoints |
| webhook_api | 8081 | RESTful webhook handler | Payment provider callbacks, third-party integrations |
| consumer | - | Background message consumer | Processing async tasks from message queue |
| mailer | - | Email service worker | Sending emails and notifications |
| cron_executor | - | Scheduled task executor | Running periodic maintenance tasks |
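The six modes in the table map naturally onto a dispatch enum. A minimal sketch of how `WORK_MODE` selection could be modeled (the `WorkMode` name here is hypothetical; the server's actual dispatch lives in `src/worker/mod.rs`):

```rust
use std::str::FromStr;

// Hypothetical mirror of the six worker modes; illustrative only.
#[derive(Debug, PartialEq)]
enum WorkMode {
    Grpc,
    SubscribeApi,
    WebhookApi,
    Consumer,
    Mailer,
    CronExecutor,
}

impl FromStr for WorkMode {
    type Err = String;

    // Map the WORK_MODE string onto a variant, rejecting unknown values.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "grpc" => Ok(WorkMode::Grpc),
            "subscribe_api" => Ok(WorkMode::SubscribeApi),
            "webhook_api" => Ok(WorkMode::WebhookApi),
            "consumer" => Ok(WorkMode::Consumer),
            "mailer" => Ok(WorkMode::Mailer),
            "cron_executor" => Ok(WorkMode::CronExecutor),
            other => Err(format!("unknown WORK_MODE: {other}")),
        }
    }
}

fn main() {
    let mode: WorkMode = "grpc".parse().expect("valid mode");
    println!("selected mode: {mode:?}");
}
```

Rejecting unknown values at parse time keeps a typo in `WORK_MODE` from silently starting the wrong worker.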
Dependencies
The server requires three core infrastructure components:
- PostgreSQL: Primary database for persistent data
- Redis: Caching, session storage, and temporary data
- RabbitMQ (AMQP): Message queuing for async processing
Module Integration
The server integrates all business logic modules:
- auth: Authentication and authorization
- shop: E-commerce and billing
- telecom: VPN node management and traffic handling
- market: Affiliate and marketing systems
- notification: Announcements and messaging
- support: Customer support tickets
- manage: Administrative functions
- shield: Security and anti-abuse measures
Deployment Guide
Prerequisites
- PostgreSQL, Redis, and RabbitMQ servers accessible
- SQLx CLI: `cargo install sqlx-cli --no-default-features --features postgres`
- Environment variables configured (see below)
Environment Configuration
The server is configured entirely through environment variables:
Required Variables
# Worker mode selection
WORK_MODE="grpc" # or subscribe_api, webhook_api, consumer, mailer, cron_executor
# Database connection
DATABASE_URL="postgres://user:password@localhost/helium_db"
# Message queue connection
MQ_URL="amqp://user:password@localhost:5672/"
# Redis connection
REDIS_URL="redis://localhost:6379"
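Reading the four required variables at startup can be sketched with the standard library alone (the `require_env` helper and its error format are illustrative, not the server's actual code):

```rust
use std::env;

// Illustrative helper: fetch a required variable or fail with a
// message naming it, mirroring the four required settings above.
fn require_env(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("{name} not set"))
}

fn main() {
    // In the real server, a missing variable would abort startup.
    for name in ["WORK_MODE", "DATABASE_URL", "MQ_URL", "REDIS_URL"] {
        match require_env(name) {
            Ok(v) => println!("{name}={v}"),
            Err(e) => eprintln!("config error: {e}"),
        }
    }
}
```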
Optional Variables
# Server listen address (for API workers)
LISTEN_ADDR="0.0.0.0:50051" # grpc mode default
LISTEN_ADDR="0.0.0.0:8080" # subscribe_api mode default
LISTEN_ADDR="0.0.0.0:8081" # webhook_api mode default
# Cron executor scan interval (seconds)
SCAN_INTERVAL="60" # cron_executor mode only
# OpenTelemetry Collector endpoint (optional, for observability)
OTEL_COLLECTOR="http://otel-collector:4317" # See Observability guide
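Optional values like SCAN_INTERVAL fall back to a default when unset. A stdlib-only sketch of that pattern (the 60-second fallback matches the example above; the `scan_interval` function name is illustrative):

```rust
use std::env;
use std::time::Duration;

// Parse SCAN_INTERVAL (seconds) with a 60s fallback, as in the
// example configuration above. Unset or invalid values both fall back.
fn scan_interval() -> Duration {
    let secs = env::var("SCAN_INTERVAL")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(60);
    Duration::from_secs(secs)
}

fn main() {
    println!("cron scan interval: {:?}", scan_interval());
}
```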
Note: For comprehensive observability with distributed tracing and metrics, see the Observability with OpenTelemetry guide.
Database Migration
⚠️ CRITICAL: Database migrations must be run before starting the application.
# Install SQLx CLI
cargo install sqlx-cli --no-default-features --features postgres
# Apply all pending migrations
sqlx migrate run --database-url $DATABASE_URL
# Verify migration status
sqlx migrate info --database-url $DATABASE_URL
Basic Deployment
Running the Server
# Apply database migrations first
sqlx migrate run --database-url $DATABASE_URL
# Start the server with desired worker mode
WORK_MODE=grpc ./helium-server
Multiple Worker Deployment
For production, run different worker modes as separate processes:
# Terminal 1: Main gRPC API
WORK_MODE=grpc ./helium-server
# Terminal 2: Background consumer
WORK_MODE=consumer ./helium-server
# Terminal 3: Email worker
WORK_MODE=mailer ./helium-server
# Terminal 4: Cron jobs
WORK_MODE=cron_executor ./helium-server
Logging
The server uses structured logging:
# Enable debug logging
RUST_LOG=debug ./helium-server
# Production logging (default)
RUST_LOG=info ./helium-server
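RUST_LOG directives are comma-separated `target=level` pairs, where a bare level such as `info` applies globally. A simplified parser to illustrate the directive format (actual filtering is handled by the tracing/env_logger ecosystem, not code like this):

```rust
// Split a RUST_LOG-style string into (target, level) pairs; a bare
// level like "info" becomes a global ("", level) entry. This only
// illustrates the directive syntax, not real log filtering.
fn parse_directives(spec: &str) -> Vec<(String, String)> {
    spec.split(',')
        .filter(|d| !d.is_empty())
        .map(|d| match d.split_once('=') {
            Some((target, level)) => (target.to_string(), level.to_string()),
            None => (String::new(), d.to_string()),
        })
        .collect()
}

fn main() {
    let spec = "helium_server::worker::grpc=trace,info";
    for (target, level) in parse_directives(spec) {
        println!("target={target:?} level={level}");
    }
}
```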
Developer Guide
Project Structure
server/
├── Cargo.toml # Dependencies and metadata
├── src/
│ ├── main.rs # Entry point and startup logic
│ ├── worker/ # Worker mode implementations
│ │ ├── mod.rs # Worker configuration and dispatch
│ │ ├── grpc.rs # gRPC server implementation
│ │ ├── consumer.rs # Background message consumer
│ │ ├── mailer.rs # Email service worker
│ │ ├── cron_executor.rs # Scheduled task executor
│ │ ├── subscribe_api.rs # Subscription REST API
│ │ └── webhook_api.rs # Webhook REST API
│ └── hooks/ # Extension points (currently unused)
│ └── mod.rs
Building from Source
# Development build
cd server
cargo build
# Release build (optimized)
cargo build --release
# Run with specific worker mode
WORK_MODE=grpc cargo run
Adding New Worker Modes
- Create worker implementation:
// src/worker/new_worker.rs
pub struct NewWorker {
    // worker fields (connections, config, ...)
}

impl NewWorker {
    pub async fn initialize(args: YourArgs) -> anyhow::Result<Self> {
        // initialization logic
        Ok(Self { /* ... */ })
    }

    pub async fn run(&self) -> anyhow::Result<()> {
        // worker main loop
        Ok(())
    }
}
- Add to worker configuration:
// src/worker/mod.rs
pub enum WorkerArgs {
// existing variants...
NewWorker(YourArgs),
}
impl WorkerArgs {
    pub fn load_from_env() -> anyhow::Result<Self> {
        let work_mode = std::env::var("WORK_MODE")?;
        match work_mode.as_str() {
            // existing modes...
            "new_worker" => {
                // parse environment variables into `args`
                Ok(WorkerArgs::NewWorker(args))
            }
            other => anyhow::bail!("unknown WORK_MODE: {other}"),
        }
    }
pub async fn execute_worker(self) -> anyhow::Result<()> {
match self {
// existing modes...
WorkerArgs::NewWorker(args) => {
let worker = NewWorker::initialize(args).await?;
worker.run().await
}
}
}
}
gRPC Service Development
The gRPC worker automatically integrates all modules. To add new services:
- Implement your service in the appropriate module (e.g., modules/your_module/)
- Add to gRPC worker:
// src/worker/grpc.rs
impl GrpcWorker {
pub async fn initialize(args: GrpcWorkModeArgs) -> Result<Self, anyhow::Error> {
// ... existing initialization ...
let your_service = YourService::new(database_processor.clone());
Ok(Self {
// ... existing fields ...
your_service,
})
}
pub fn server_ready(self) -> Router<...> {
tonic::transport::server::Server::builder()
// ... existing services ...
.add_service(YourServiceServer::new(self.your_service))
}
}
Database Migrations
Database schema is managed through SQLx migrations in the migrations/ directory. When adding new features:
- Create migration files:
# Create new migration
sqlx migrate add your_feature_name
# This creates:
# migrations/TIMESTAMP_your_feature_name.up.sql
# migrations/TIMESTAMP_your_feature_name.down.sql
- Run migrations:
# Apply migrations
sqlx migrate run --database-url $DATABASE_URL
# Revert last migration
sqlx migrate revert --database-url $DATABASE_URL
Testing
# Run all tests
cargo test
# Run specific module tests
cargo test --package helium-server
# Integration tests with database
DATABASE_URL=postgres://user:password@localhost/helium_test cargo test
Performance Considerations
- Memory Usage: Each worker typically uses 40-200MB RAM
- CPU Efficiency: Optimized for single-core throughput; a single worker can sustain 1000+ RPS
- Connection Pooling: Database connections are shared across services
- Async Processing: All I/O operations are non-blocking
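The connection-pooling point can be illustrated with a toy pool. The server actually shares database connections via its pool type (e.g., sqlx's PgPool); this stdlib stand-in only shows the check-out/check-in shape:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Conceptual sketch of connection sharing: a queue of idle
// connections that callers borrow and return. Not the server's
// real pool implementation.
struct Pool<T> {
    idle: Mutex<VecDeque<T>>,
}

impl<T> Pool<T> {
    fn new(conns: impl IntoIterator<Item = T>) -> Arc<Self> {
        Arc::new(Self {
            idle: Mutex::new(conns.into_iter().collect()),
        })
    }

    // Take a connection if one is idle; callers return it when done.
    fn acquire(&self) -> Option<T> {
        self.idle.lock().unwrap().pop_front()
    }

    fn release(&self, conn: T) {
        self.idle.lock().unwrap().push_back(conn);
    }
}

fn main() {
    let pool = Pool::new(vec!["conn-1", "conn-2"]);
    if let Some(c) = pool.acquire() {
        println!("using {c}");
        pool.release(c);
    }
}
```

A real pool adds waiting, health checks, and reconnection on top of this shape; sharing one pool across services is what keeps per-worker connection counts bounded.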
Troubleshooting
Common Issues
Service won’t start:
# Check environment variables
env | grep -E "(DATABASE_URL|MQ_URL|REDIS_URL|WORK_MODE)"
# Verify database migrations are applied
sqlx migrate info --database-url $DATABASE_URL
Database migration issues:
# Check migration status
sqlx migrate info --database-url $DATABASE_URL
# Apply any pending migrations (often resolves a stuck state)
sqlx migrate run --database-url $DATABASE_URL
# Revert last migration if needed
sqlx migrate revert --database-url $DATABASE_URL
# Reset database (CAUTION: destroys all data)
sqlx database reset --database-url $DATABASE_URL
Performance issues:
# Enable request tracing
RUST_LOG=helium_server=trace ./helium-server
# Profile with flamegraph
cargo flamegraph --bin helium-server
Logs and Debugging
# Debug logging
RUST_LOG=debug ./helium-server
# Trace specific modules
RUST_LOG=helium_server::worker::grpc=trace,info ./helium-server
Configuration Validation
Ensure all required environment variables are properly set:
#!/bin/bash
# Validate Helium server configuration
set -e
echo "Validating Helium server configuration..."
# Check required variables
: "${WORK_MODE:?WORK_MODE not set}"
: "${DATABASE_URL:?DATABASE_URL not set}"
: "${MQ_URL:?MQ_URL not set}"
: "${REDIS_URL:?REDIS_URL not set}"
# Validate work mode
case "$WORK_MODE" in
grpc|subscribe_api|webhook_api|consumer|mailer|cron_executor)
echo "✓ Valid WORK_MODE: $WORK_MODE"
;;
*)
echo "✗ Invalid WORK_MODE: $WORK_MODE"
exit 1
;;
esac
# Check if migrations are applied
if command -v sqlx >/dev/null 2>&1; then
if sqlx migrate info --database-url "$DATABASE_URL" | grep -q "pending"; then
echo "⚠ Warning: Pending database migrations found"
echo "Run: sqlx migrate run --database-url $DATABASE_URL"
else
echo "✓ Database migrations are up to date"
fi
else
echo "⚠ Warning: sqlx CLI not found - cannot verify migrations"
echo "Install with: cargo install sqlx-cli --no-default-features --features postgres"
fi
echo "Configuration validation complete!"