Configuration Reference
The SMS Gateway Proxy uses two configuration files:
- config_local.yaml — main application configuration (server, security, infrastructure connections, rate limiters, MCP filters, monitoring)
- smpp_rules.toml — SMPP link and pool definitions (connections to SMSCs, rate limits, routing)
Application Configuration — config_local.yaml
This is the primary configuration file that controls the behavior of the proxy application. It is located next to the proxy binary or mounted as a volume in Docker.
General Settings
```yaml
env: "production"                # Environment name: local, dev, staging, production
log_level: "info"                # Log verbosity: trace, debug, info, warn, error, fatal, panic
server_name: "proxy-01"          # Unique identifier for this proxy instance
storage_path: "./storage/sso.db" # Path to the local SQLite database (user accounts)
customer: "acme-corp"            # Customer identifier for licensing
instance_id: "enterprise"       # Instance identifier for licensing
```
| Parameter | Type | Description |
|---|---|---|
| env | string | Environment name. Affects log formatting and behavior. Use production for deployed systems |
| log_level | string | Logging verbosity. In production, use info or warn. For diagnostics, use debug |
| server_name | string | Unique name for this proxy instance. Used in logs and monitoring to identify the node |
| storage_path | string | Path to the SQLite database file for user accounts and application state |
| customer | string | Customer identifier used for license validation |
| instance_id | string | Instance identifier used for license validation |
Aerospike — Message State Storage
```yaml
aerospike:
  hosts: "aerospike-host"
  port: 3000
  namespace: "proxy"
  sweep_time: 1h
  write_policy_default_ttl: 86400
  write_policy_index_default_ttl: 86400
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| hosts | string | — | Aerospike server address |
| port | integer | 3000 | Aerospike server port |
| namespace | string | — | Aerospike namespace for message state data |
| sweep_time | duration | 1h | How often to sweep expired records |
| write_policy_default_ttl | integer | 86400 | Default TTL for message records (seconds). 86400 = 24 hours |
| write_policy_index_default_ttl | integer | 86400 | Default TTL for index records (seconds) |
The write_policy_default_ttl determines how long message delivery states are retained. For environments requiring longer tracking windows, increase this value accordingly.
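For example, under a hypothetical 7-day retention requirement, both TTLs could be raised together so message records and their index records expire in step:

```yaml
aerospike:
  # 7 days = 7 * 86400 = 604800 seconds
  write_policy_default_ttl: 604800
  write_policy_index_default_ttl: 604800
```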
Rate Limiters
The proxy supports a layered rate limiting architecture. Rate limiters are defined as a list under the rate_limiters section.
```yaml
rate_limiters:
  - name: "l1"
    type: "memory"
    description: "In-process rate limiter for global API requests"
    config:
      rps: 800
      burst: 960
  - name: "l2"
    type: "redis"
    description: "Distributed rate limiter for SMPP links"
    config:
      redis:
        addr: "redis-host:6379"
        password: ""
        db: 0
        dial_timeout: 5s
        read_timeout: 10s
        write_timeout: 500ms
        pool_size: 20
        min_idle_conns: 5
        retry_attempts: 5
        retry_delay: 2s
```
L1 — In-Process Rate Limiter
| Parameter | Type | Description |
|---|---|---|
| rps | integer | Maximum requests per second across all API endpoints |
| burst | integer | Maximum burst size. Should be ≥ rps |
The L1 limiter runs in memory and protects the proxy from API-level overload. It applies to all incoming requests (REST + gRPC) before any further processing.
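The rps/burst pair follows standard token-bucket semantics: tokens refill at rps per second up to a capacity of burst, and each request spends one token. The proxy's internal implementation is not shown here; the following is a minimal, self-contained sketch of that behavior, using an injectable clock so the demo is deterministic:

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch: tokens refill at `rps` per second,
    capped at `burst` (the bucket capacity). One token is spent per request."""

    def __init__(self, rps: float, burst: float, clock=time.monotonic):
        self.rps, self.burst, self.clock = rps, burst, clock
        self.tokens = burst           # start with a full bucket
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, never exceeding capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic demo with a fake clock: a full burst of 960 requests passes
# at once, then one second of refill admits another 800.
t = [0.0]
bucket = TokenBucket(rps=800, burst=960, clock=lambda: t[0])
print(sum(bucket.allow() for _ in range(1000)))  # 960
t[0] += 1.0
print(sum(bucket.allow() for _ in range(1000)))  # 800
```

This is why burst should be ≥ rps: a bucket smaller than one second's refill would reject traffic that the steady-state rate permits.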
L2 — Distributed Rate Limiter (Redis)
| Parameter | Type | Description |
|---|---|---|
| addr | string | Redis server address with port |
| password | string | Redis password (empty if no auth) |
| db | integer | Redis database number |
| dial_timeout | duration | Connection timeout |
| read_timeout | duration | Read operation timeout |
| write_timeout | duration | Write operation timeout |
| pool_size | integer | Maximum number of Redis connections in the pool |
| min_idle_conns | integer | Minimum idle connections maintained in the pool |
| retry_attempts | integer | Number of retry attempts on connection failure |
| retry_delay | duration | Delay between retry attempts |
The L2 limiter enforces per-link and per-pool rate limits defined in smpp_rules.toml. It uses Redis to coordinate limits across multiple proxy instances in a cluster. The actual RPS values per link are set via esme_rate in the SMPP rules — see the SMPP Links section below.
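A common pattern for Redis-coordinated limiting (shown here for illustration, not necessarily the proxy's exact internal algorithm) is a fixed-window counter: every instance increments a shared per-link key whose name embeds the current time window, so counts from all nodes accumulate in one place. The sketch below uses an in-memory dict as a stand-in for Redis:

```python
import time

class FixedWindowLimiter:
    """Fixed-window counter sketch. In a real deployment `self.store`
    would be Redis (INCR + EXPIRE on the window key), so every proxy
    instance in the cluster shares the same per-link counts."""

    def __init__(self, limit_per_sec: int, clock=time.monotonic):
        self.limit = limit_per_sec
        self.clock = clock
        self.store = {}  # stand-in for Redis: {window_key: count}

    def allow(self, link_id: str) -> bool:
        window = int(self.clock())            # 1-second windows
        key = f"rate:{link_id}:{window}"      # shared key, e.g. rate:100:1699999999
        count = self.store.get(key, 0) + 1    # INCR
        self.store[key] = count
        return count <= self.limit

# Deterministic demo: link "100" limited to 200 msg/s, as in esme_rate = 200.
t = [0.0]
limiter = FixedWindowLimiter(limit_per_sec=200, clock=lambda: t[0])
print(sum(limiter.allow("100") for _ in range(250)))  # 200
```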
Security
```yaml
security:
  token_ttl: 12h
  dedup_ttl: 1m
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| token_ttl | duration | 12h | JWT token lifetime. After expiry, the client must re-authenticate |
| dedup_ttl | duration | 1m | Deduplication window for tokens. Prevents replay attacks within this time window |
Web (REST API)
```yaml
web:
  bind: "0.0.0.0"
  port: 8080
  tls: false
  admin_api_key: "your-secure-admin-key"
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| bind | string | 0.0.0.0 | IP address to bind the HTTP server. Use 0.0.0.0 to listen on all interfaces |
| port | integer | 8080 | HTTP server port for the REST API |
| tls | boolean | false | Enable TLS for the REST API. Requires certificate configuration |
| admin_api_key | string | — | Shared secret for Admin API endpoints (/api/v1/admin/*). Must be changed in production |
The admin_api_key provides full administrative access to the platform. Always change the default value and use a strong, randomly generated secret in production environments.
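One simple way to produce such a secret (any cryptographically secure random generator works equally well):

```python
import secrets

# 32 random bytes, hex-encoded: a 64-character key with ~256 bits of entropy
admin_api_key = secrets.token_hex(32)
print(admin_api_key)
```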
gRPC
```yaml
grpc:
  port: 44044
  timeout: 1h
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| port | integer | 44044 | gRPC server port |
| timeout | duration | 1h | Maximum duration for gRPC streaming connections. Controls how long a bidirectional stream (e.g. SendMessageStream) can remain open |
SMPP Protocol Settings
```yaml
smpp:
  config_path: "./smpp_rules"
  config_file: "smpp_rules.toml"
  client_library: "v2"
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| config_path | string | ./smpp_rules | Directory containing the SMPP rules file |
| config_file | string | smpp_rules.toml | SMPP rules filename |
| client_library | string | v2 | SMPP client library version: v1 or v2. Use v2 for production (supports TRX transceiver connections) |
MCP Filters (Contact Policy)
MCP filters define connections to the Message Control Platform instances. The filter name follows the pattern <strategy>_filter — for example, cp1_filter, cp2_filter. You can define as many filters as needed.
```yaml
cp1_filter:
  active: true
  token: "mcp-access-token"
  cp_endpoint: "/api/v1/check"
  blacklist: false
  hosts:
    - "http://mcp-1:1880"
    - "http://mcp-2:1880"
    - "http://mcp-3:1880"
  blacklist_endpoint: "/api/v1/check"
  health_check_endpoint: "/api/v1/health"
  description: "Frequency-based contact policy (daily + weekly limits)"

cp2_filter:
  active: true
  token: "mcp-access-token"
  cp_endpoint: "/api/v1/check"
  blacklist: true
  hosts:
    - "http://mcp-dedup:1880"
  blacklist_endpoint: "/api/v1/check"
  health_check_endpoint: "/api/v1/health"
  description: "Deduplication with blacklist check"
```
| Parameter | Type | Description |
|---|---|---|
| active | boolean | Enable or disable this filter. Inactive filters are ignored (requires service reload) |
| token | string | Authentication token for MCP API requests |
| cp_endpoint | string | MCP endpoint for policy check requests |
| blacklist | boolean | Whether this filter includes blacklist verification |
| hosts | list | List of MCP instance URLs. Traffic is distributed across all listed hosts |
| blacklist_endpoint | string | MCP endpoint for blacklist-specific checks |
| health_check_endpoint | string | MCP health check endpoint |
| description | string | Human-readable description of this filter's purpose |
How Filter Names Map to Strategies
The filter name prefix (before _filter) corresponds to the strategy name used in the API. When sending a message with "strategy": "cp1", the proxy uses the configuration from cp1_filter. This allows you to create custom strategy names — for example, premium_filter would be invoked with "strategy": "premium".
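The lookup amounts to a simple name mapping. The sketch below illustrates the rule (it is not the proxy's actual code, and the filter sections shown are hypothetical examples):

```python
def resolve_filter(config: dict, strategy: str) -> dict:
    """Map an API strategy name to its <strategy>_filter config section."""
    key = f"{strategy}_filter"
    if key not in config:
        raise ValueError(f"unknown strategy {strategy!r}: no section {key!r}")
    return config[key]

# Hypothetical filter sections, mirroring the naming convention above.
config = {
    "cp1_filter": {"active": True, "blacklist": False},
    "premium_filter": {"active": True, "blacklist": True},
}
print(resolve_filter(config, "premium"))  # {'active': True, 'blacklist': True}
```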
Multiple MCP Instances
You can list multiple hosts for a single filter to achieve horizontal scaling:
```yaml
cp1_filter:
  hosts:
    - "http://mcp-1:1880"
    - "http://mcp-2:1880"
    - "http://mcp-3:1880"
    - "http://mcp-4:1880"
    - "http://mcp-5:1880"
```
All instances must share the same flow definitions and connect to the same Redis backend. The proxy distributes policy check requests across all listed hosts.
Statistics Exporter
The statistics exporter uploads CDR (Call Detail Record) data to external S3-compatible storage for auditing and analytics.
```yaml
statistic_exporter:
  enabled_export: false
  local_path: "./journal/stat"
  port: 30188
  host: "s3-host"
  hostname: "smpp-proxy-01"
  prefix: "statistics"
  tls: false
  s3Region: "us-east-1"
  access: "your-s3-access-key"
  secret: "your-s3-secret-key"
  bucket: "sms-stat"
  subpath: "sms"
```
| Parameter | Type | Description |
|---|---|---|
| enabled_export | boolean | Enable or disable S3 export |
| local_path | string | Local directory for temporary journal files before upload |
| host | string | S3-compatible storage host |
| port | integer | S3-compatible storage port |
| hostname | string | Identifier for this proxy instance in exported data |
| prefix | string | Prefix for S3 object keys |
| tls | boolean | Use TLS for S3 connection |
| s3Region | string | S3 region |
| access | string | S3 access key |
| secret | string | S3 secret key |
| bucket | string | S3 bucket name |
| subpath | string | Sub-path within the bucket |
S3 credentials (access, secret) provide access to your storage. Store them securely and do not expose them in logs or version control.
Monitoring (OpenTelemetry)
```yaml
monitoring:
  otel: false
  otel_samling: 100
  otel_collector_endpoint: "otel-collector:4318"
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| otel | boolean | false | Enable OpenTelemetry tracing |
| otel_samling | integer | 100 | Sampling rate (percentage). 100 = trace all requests, 10 = trace 10% |
| otel_collector_endpoint | string | — | OpenTelemetry Collector endpoint (OTLP/HTTP) |
When enabled, the proxy sends distributed traces to the configured collector, allowing you to trace message processing across all internal components.
SMPP Configuration — smpp_rules.toml
This file defines the physical SMPP connections and virtual pools. It is located in the directory specified by smpp.config_path in the application config.
SMPP Links
Each link represents a physical SMPP connection to an SMSC (Short Message Service Center). Links are defined under the [links.N] section, where N is a unique numeric identifier (ESME ID).
Link Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| esme_addr | string | Yes | SMSC host address |
| esme_port | integer | Yes | SMSC port number |
| esme_systemid | string | Yes | SMPP system ID for authentication |
| esme_password | string | Yes | SMPP password for authentication |
| esme_disabled | boolean | No | Disable this link without removing it from config. Default: false |
| esme_rate | integer | Yes | Maximum messages per second (RPS) for this link |
| esme_rate_burst | integer | Yes | Maximum burst size for rate limiting |
| esme_enquirelink | integer | No | Enquire Link interval in seconds. Default: 15 |
| esme_enquirelink_timeout | integer | No | Enquire Link response timeout in seconds. Default: 40 |
| esme_resp_timeout | integer | No | Submit SM response timeout in seconds. Default: 20 |
| esme_routes_path | string | No | Path to routing rules file for this link |
| esme_route_filter | boolean | No | Enable route-based message filtering. Default: false |
| esme_use_payload | boolean | No | Use message_payload TLV instead of short_message field for long messages. Default: false |
| esme_ip_whitelist | array | No | List of allowed source IP addresses for this link |
Link Example
```toml
[links.100]
esme_disabled = false
esme_addr = "smsc1.example.com"
esme_port = 2775
esme_systemid = "smppclient1"
esme_password = "password"
esme_enquirelink = 15
esme_enquirelink_timeout = 40
esme_rate = 200
esme_rate_burst = 200
esme_resp_timeout = 20
esme_routes_path = "smpp_rules/routes.ini"
esme_route_filter = false
esme_use_payload = false
esme_ip_whitelist = ["127.0.0.1"]
```
ESME IDs should be numeric values starting from 100. The ID 10 is reserved for internal dry-run testing and should not be used in production configurations.
Virtual Pools
Pools group multiple physical SMPP links into a single logical sender with weighted load balancing. Messages sent to a pool are automatically distributed across member links using a Smooth Weighted Round-Robin algorithm.
Pools are defined under the [pools.N] section, where N is a unique numeric pool identifier. Inside the section, each key is a link ESME ID and the value is its weight.
Pool Configuration
```toml
[pools.1001]
100 = 1000
101 = 1000
```
In this example, pool 1001 consists of links 100 and 101, each with a weight of 1000. Since weights are equal, traffic is distributed evenly (50/50). If you set different weights — for example 100 = 2000 and 101 = 1000 — link 100 will receive approximately twice as much traffic as link 101.
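The classic Smooth Weighted Round-Robin algorithm (as popularized by nginx) can be sketched in a few lines; this illustrates the selection order and proportions, not the proxy's actual source:

```python
def smooth_wrr(weights: dict, n: int) -> list:
    """Smooth Weighted Round-Robin: each step, every member's current
    score grows by its weight; the highest-scoring member is picked and
    its score is reduced by the total weight. This interleaves picks
    smoothly instead of sending long runs to the heaviest member."""
    current = {member: 0 for member in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for member, weight in weights.items():
            current[member] += weight
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# Link 100 weighted 2000, link 101 weighted 1000: 100 gets ~2/3 of picks,
# and the picks are interleaved rather than batched.
print(smooth_wrr({100: 2000, 101: 1000}, 6))  # [100, 101, 100, 100, 101, 100]
```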
Pool IDs should start from 1000 to avoid conflicts with link IDs. A pool does not create new SMPP connections — it reuses existing links defined in the [links] section.
Multiple Pools Example
```toml
# --- Links ---
[links.100]
esme_addr = "smsc1.operator-a.com"
esme_port = 2775
esme_systemid = "client_a1"
esme_password = "pass_a1"
esme_rate = 200
esme_rate_burst = 200

[links.101]
esme_addr = "smsc2.operator-a.com"
esme_port = 2775
esme_systemid = "client_a2"
esme_password = "pass_a2"
esme_rate = 200
esme_rate_burst = 200

[links.102]
esme_addr = "smsc1.operator-b.com"
esme_port = 2775
esme_systemid = "client_b1"
esme_password = "pass_b1"
esme_rate = 100
esme_rate_burst = 100

# --- Pools ---
[pools.1001] # Operator A — balanced across two links
100 = 1000
101 = 1000

[pools.1002] # Operator B — single link
102 = 1000
```
Configuration Files Summary
| File | Format | Purpose | Hot reload |
|---|---|---|---|
| config_local.yaml | YAML | Application settings, infrastructure, security, MCP filters | Requires restart |
| smpp_rules.toml | TOML | SMPP link connections and pool definitions | Requires restart |
| routes.ini | INI | Per-link routing rules (optional) | Requires restart |
Next Steps
- Installation Guide — deploy the platform using Docker Compose
- Load Balancing — detailed documentation on virtual pools and traffic distribution
- Rate Limiting & Throttling — multi-layer rate limiting architecture
- REST API — manage SMPP links and pools via the HTTP API