Configuration Reference

📦 v1.0.0 · 📅 2026-03-10 · 🔄 Updated 2026-04-28 · 👤 Admin Team

The SMS Gateway Proxy uses two configuration files:

  1. config_local.yaml — main application configuration (server, security, infrastructure connections, rate limiters, MCP filters, monitoring)
  2. smpp_rules.toml — SMPP link and pool definitions (connections to SMSCs, rate limits, routing)

Application Configuration — config_local.yaml

This is the primary configuration file that controls the behavior of the proxy application. It is located next to the proxy binary or mounted as a volume in Docker.

General Settings

env: "production"          # Environment name: local, dev, staging, production
log_level: "info"          # Log verbosity: trace, debug, info, warn, error, fatal, panic
server_name: "proxy-01"    # Unique identifier for this proxy instance
storage_path: "./storage/sso.db"  # Path to the local SQLite database (user accounts)
customer: "acme-corp"      # Customer identifier for licensing
instance_id: "enterprise"  # Instance identifier for licensing

| Parameter | Type | Description |
|---|---|---|
| env | string | Environment name. Affects log formatting and behavior. Use production for deployed systems |
| log_level | string | Logging verbosity. In production, use info or warn. For diagnostics, use debug |
| server_name | string | Unique name for this proxy instance. Used in logs and monitoring to identify the node |
| storage_path | string | Path to the SQLite database file for user accounts and application state |
| customer | string | Customer identifier used for license validation |
| instance_id | string | Instance identifier used for license validation |

Aerospike — Message State Storage

aerospike:
  hosts: "aerospike-host"
  port: 3000
  namespace: "proxy"
  sweep_time: 1h
  write_policy_default_ttl: 86400
  write_policy_index_default_ttl: 86400

| Parameter | Type | Default | Description |
|---|---|---|---|
| hosts | string | — | Aerospike server address |
| port | integer | 3000 | Aerospike server port |
| namespace | string | — | Aerospike namespace for message state data |
| sweep_time | duration | 1h | How often to sweep expired records |
| write_policy_default_ttl | integer | 86400 | Default TTL for message records (seconds). 86400 = 24 hours |
| write_policy_index_default_ttl | integer | 86400 | Default TTL for index records (seconds) |

The write_policy_default_ttl determines how long message delivery states are retained. For environments requiring longer tracking windows, increase this value accordingly.


Rate Limiters

The proxy supports a layered rate limiting architecture. Rate limiters are defined as a list under the rate_limiters section.

rate_limiters:
  - name: "l1"
    type: "memory"
    description: "In-process rate limiter for global API requests"
    config:
      rps: 800
      burst: 960

  - name: "l2"
    type: "redis"
    description: "Distributed rate limiter for SMPP links"
    config:
      redis:
        addr: "redis-host:6379"
        password: ""
        db: 0
        dial_timeout: 5s
        read_timeout: 10s
        write_timeout: 500ms
        pool_size: 20
        min_idle_conns: 5
        retry_attempts: 5
        retry_delay: 2s

L1 — In-Process Rate Limiter

| Parameter | Type | Description |
|---|---|---|
| rps | integer | Maximum requests per second across all API endpoints |
| burst | integer | Maximum burst size. Should be ≥ rps |

The L1 limiter runs in memory and protects the proxy from API-level overload. It applies to all incoming requests (REST + gRPC) before any further processing.
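The rps/burst pair describes token-bucket semantics: the bucket holds up to burst tokens and refills at rps tokens per second, so short spikes up to burst pass immediately while sustained throughput is capped at rps. A minimal sketch of that behavior (illustrative only, not the proxy's actual implementation):

```python
import time

class TokenBucket:
    """Illustrative token bucket: refills at `rps` tokens/sec, holds at most `burst`."""
    def __init__(self, rps: float, burst: float):
        self.rps = rps
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With rps=800, burst=960 (the example above), up to 960 requests can pass
# immediately; sustained throughput then settles at roughly 800 requests/second.
limiter = TokenBucket(rps=800, burst=960)
```

This is why burst should be ≥ rps: a burst smaller than the per-second rate would throttle traffic below the intended RPS ceiling.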

L2 — Distributed Rate Limiter (Redis)

| Parameter | Type | Description |
|---|---|---|
| addr | string | Redis server address with port |
| password | string | Redis password (empty if no auth) |
| db | integer | Redis database number |
| dial_timeout | duration | Connection timeout |
| read_timeout | duration | Read operation timeout |
| write_timeout | duration | Write operation timeout |
| pool_size | integer | Maximum number of Redis connections in the pool |
| min_idle_conns | integer | Minimum idle connections maintained in the pool |
| retry_attempts | integer | Number of retry attempts on connection failure |
| retry_delay | duration | Delay between retry attempts |

The L2 limiter enforces per-link and per-pool rate limits defined in smpp_rules.toml. It uses Redis to coordinate limits across multiple proxy instances in a cluster. The actual RPS values per link are set via esme_rate in the SMPP rules — see the SMPP Links section below.


Security

security:
  token_ttl: 12h
  dedup_ttl: 1m

| Parameter | Type | Default | Description |
|---|---|---|---|
| token_ttl | duration | 12h | JWT token lifetime. After expiry, the client must re-authenticate |
| dedup_ttl | duration | 1m | Deduplication window for tokens. Prevents replay attacks within this time window |
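The dedup_ttl replay-protection window can be pictured with a small in-memory sketch (the TokenDeduplicator class below is hypothetical, for illustration only; it is not part of the proxy):

```python
import time

class TokenDeduplicator:
    """Reject a token that was already seen within the dedup window (sketch)."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.seen: dict[str, float] = {}  # token -> first-seen timestamp

    def accept(self, token: str) -> bool:
        now = time.monotonic()
        # Drop entries older than the TTL so memory stays bounded.
        self.seen = {t: ts for t, ts in self.seen.items() if now - ts < self.ttl}
        if token in self.seen:
            return False  # replay within the window is rejected
        self.seen[token] = now
        return True

dedup = TokenDeduplicator(ttl_seconds=60)  # dedup_ttl: 1m
assert dedup.accept("jwt-abc")      # first use passes
assert not dedup.accept("jwt-abc")  # replay within 1m is rejected
```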

Web (REST API)

web:
  bind: "0.0.0.0"
  port: 8080
  tls: false
  admin_api_key: "your-secure-admin-key"

| Parameter | Type | Default | Description |
|---|---|---|---|
| bind | string | 0.0.0.0 | IP address to bind the HTTP server. Use 0.0.0.0 to listen on all interfaces |
| port | integer | 8080 | HTTP server port for the REST API |
| tls | boolean | false | Enable TLS for the REST API. Requires certificate configuration |
| admin_api_key | string | — | Shared secret for Admin API endpoints (/api/v1/admin/*). Must be changed in production |
⚠️

The admin_api_key provides full administrative access to the platform. Always change the default value and use a strong, randomly generated secret in production environments.


gRPC

grpc:
  port: 44044
  timeout: 1h

| Parameter | Type | Default | Description |
|---|---|---|---|
| port | integer | 44044 | gRPC server port |
| timeout | duration | 1h | Maximum duration for gRPC streaming connections. Controls how long a bidirectional stream (e.g. SendMessageStream) can remain open |

SMPP Protocol Settings

smpp:
  config_path: "./smpp_rules"
  config_file: "smpp_rules.toml"
  client_library: "v2"

| Parameter | Type | Default | Description |
|---|---|---|---|
| config_path | string | ./smpp_rules | Directory containing the SMPP rules file |
| config_file | string | smpp_rules.toml | SMPP rules filename |
| client_library | string | v2 | SMPP client library version: v1 or v2. Use v2 for production (supports TRX transceiver connections) |

MCP Filters (Contact Policy)

MCP filters define connections to the Message Control Platform instances. The filter name follows the pattern <strategy>_filter — for example, cp1_filter, cp2_filter. You can define as many filters as needed.

cp1_filter:
  active: true
  token: "mcp-access-token"
  cp_endpoint: "/api/v1/check"
  blacklist: false
  hosts:
    - "http://mcp-1:1880"
    - "http://mcp-2:1880"
    - "http://mcp-3:1880"
  blacklist_endpoint: "/api/v1/check"
  health_check_endpoint: "/api/v1/health"
  description: "Frequency-based contact policy (daily + weekly limits)"

cp2_filter:
  active: true
  token: "mcp-access-token"
  cp_endpoint: "/api/v1/check"
  blacklist: true
  hosts:
    - "http://mcp-dedup:1880"
  blacklist_endpoint: "/api/v1/check"
  health_check_endpoint: "/api/v1/health"
  description: "Deduplication with blacklist check"

| Parameter | Type | Description |
|---|---|---|
| active | boolean | Enable or disable this filter. Inactive filters are ignored (requires service reload) |
| token | string | Authentication token for MCP API requests |
| cp_endpoint | string | MCP endpoint for policy check requests |
| blacklist | boolean | Whether this filter includes blacklist verification |
| hosts | list | List of MCP instance URLs. Traffic is distributed across all listed hosts |
| blacklist_endpoint | string | MCP endpoint for blacklist-specific checks |
| health_check_endpoint | string | MCP health check endpoint |
| description | string | Human-readable description of this filter's purpose |

How Filter Names Map to Strategies

The filter name prefix (before _filter) corresponds to the strategy name used in the API. When sending a message with "strategy": "cp1", the proxy uses the configuration from cp1_filter. This allows you to create custom strategy names — for example, premium_filter would be invoked with "strategy": "premium".
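Given that naming convention, the lookup amounts to appending _filter to the strategy name. A sketch using the example filters from this page (the config dict below is a stand-in for the parsed YAML):

```python
# Parsed filter sections from config_local.yaml (abbreviated for illustration).
config = {
    "cp1_filter": {"active": True, "description": "Frequency-based contact policy"},
    "cp2_filter": {"active": True, "description": "Deduplication with blacklist check"},
}

def filter_for_strategy(strategy: str) -> dict:
    # "cp1" -> "cp1_filter", "premium" -> "premium_filter"
    key = f"{strategy}_filter"
    try:
        return config[key]
    except KeyError:
        raise ValueError(f"no filter configured for strategy {strategy!r}")

assert filter_for_strategy("cp1")["description"].startswith("Frequency")
```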

Multiple MCP Instances

You can list multiple hosts for a single filter to achieve horizontal scaling:

cp1_filter:
  hosts:
    - "http://mcp-1:1880"
    - "http://mcp-2:1880"
    - "http://mcp-3:1880"
    - "http://mcp-4:1880"
    - "http://mcp-5:1880"

All instances must share the same flow definitions and connect to the same Redis backend. The proxy distributes policy check requests across all listed hosts.
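The exact balancing policy between MCP hosts is not detailed here; assuming simple round-robin for illustration, the distribution looks like this:

```python
from itertools import cycle

# The host list from the cp1_filter example above.
hosts = ["http://mcp-1:1880", "http://mcp-2:1880", "http://mcp-3:1880"]

# Round-robin is an assumption for illustration: each policy check
# goes to the next host in the list, wrapping around at the end.
next_host = cycle(hosts).__next__

picks = [next_host() for _ in range(6)]
assert picks == hosts + hosts  # each host receives an equal share of checks
```

Because all instances share the same flow definitions and Redis backend, any host can answer any policy check, so a stateless selection scheme is sufficient.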


Statistics Exporter

The statistics exporter sends CDR (Call Detail Record) data to external S3-compatible storage for auditing and analytics.

statistic_exporter:
  enabled_export: false
  local_path: "./journal/stat"
  port: 30188
  host: "s3-host"
  hostname: "smpp-proxy-01"
  prefix: "statistics"
  tls: false
  s3Region: "us-east-1"
  access: "your-s3-access-key"
  secret: "your-s3-secret-key"
  bucket: "sms-stat"
  subpath: "sms"

| Parameter | Type | Description |
|---|---|---|
| enabled_export | boolean | Enable or disable S3 export |
| local_path | string | Local directory for temporary journal files before upload |
| host | string | S3-compatible storage host |
| port | integer | S3-compatible storage port |
| hostname | string | Identifier for this proxy instance in exported data |
| prefix | string | Prefix for S3 object keys |
| tls | boolean | Use TLS for S3 connection |
| s3Region | string | S3 region |
| access | string | S3 access key |
| secret | string | S3 secret key |
| bucket | string | S3 bucket name |
| subpath | string | Sub-path within the bucket |
⚠️

S3 credentials (access, secret) provide access to your storage. Store them securely and do not expose them in logs or version control.


Monitoring (OpenTelemetry)

monitoring:
  otel: false
  otel_samling: 100
  otel_collector_endpoint: "otel-collector:4318"

| Parameter | Type | Default | Description |
|---|---|---|---|
| otel | boolean | false | Enable OpenTelemetry tracing |
| otel_samling | integer | 100 | Sampling rate (percentage). 100 = trace all requests, 10 = trace 10% |
| otel_collector_endpoint | string | — | OpenTelemetry Collector endpoint (OTLP/HTTP) |

When enabled, the proxy sends distributed traces to the configured collector, allowing you to trace message processing across all internal components.
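Percentage-based head sampling can be sketched as follows (illustrative; the proxy's actual sampling logic is not shown here):

```python
import random

def should_sample(pct: int) -> bool:
    """Trace a request with probability pct/100 (sketch of percentage sampling)."""
    # random.random() is in [0, 1), so pct=100 always samples and pct=0 never does.
    return random.random() * 100 < pct

# otel_samling: 100 traces every request; lower values reduce collector load
# at the cost of visibility into individual messages.
assert should_sample(100)
assert not should_sample(0)
```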


SMPP Configuration — smpp_rules.toml

This file defines the physical SMPP connections and virtual pools. It is located in the directory specified by smpp.config_path in the application config.

Each link represents a physical SMPP connection to an SMSC (Short Message Service Center). Links are defined under the [links.N] section, where N is a unique numeric identifier (ESME ID).

| Parameter | Type | Required | Description |
|---|---|---|---|
| esme_addr | string | Yes | SMSC host address |
| esme_port | integer | Yes | SMSC port number |
| esme_systemid | string | Yes | SMPP system ID for authentication |
| esme_password | string | Yes | SMPP password for authentication |
| esme_disabled | boolean | No | Disable this link without removing it from config. Default: false |
| esme_rate | integer | Yes | Maximum messages per second (RPS) for this link |
| esme_rate_burst | integer | Yes | Maximum burst size for rate limiting |
| esme_enquirelink | integer | No | Enquire Link interval in seconds. Default: 15 |
| esme_enquirelink_timeout | integer | No | Enquire Link response timeout in seconds. Default: 40 |
| esme_resp_timeout | integer | No | Submit SM response timeout in seconds. Default: 20 |
| esme_routes_path | string | No | Path to routing rules file for this link |
| esme_route_filter | boolean | No | Enable route-based message filtering. Default: false |
| esme_use_payload | boolean | No | Use message_payload TLV instead of short_message field for long messages. Default: false |
| esme_ip_whitelist | array | No | List of allowed source IP addresses for this link |

[links.100]
esme_disabled = false
esme_addr = "smsc1.example.com"
esme_port = 2775
esme_systemid = "smppclient1"
esme_password = "password"
esme_enquirelink = 15
esme_enquirelink_timeout = 40
esme_rate = 200
esme_rate_burst = 200
esme_resp_timeout = 20
esme_routes_path = "smpp_rules/routes.ini"
esme_route_filter = false
esme_use_payload = false
esme_ip_whitelist = ["127.0.0.1"]

ESME IDs should be numeric values starting from 100. The ID 10 is reserved for internal dry-run testing and should not be used in production configurations.

Virtual Pools

Pools group multiple physical SMPP links into a single logical sender with weighted load balancing. Messages sent to a pool are automatically distributed across member links using a Smooth Weighted Round-Robin algorithm.

Pools are defined under the [pools.N] section, where N is a unique numeric pool identifier. Inside the section, each key is a link ESME ID and the value is its weight.

Pool Configuration

[pools.1001]
100 = 1000
101 = 1000

In this example, pool 1001 consists of links 100 and 101, each with a weight of 1000. Since weights are equal, traffic is distributed evenly (50/50). If you set different weights — for example 100 = 2000 and 101 = 1000 — link 100 will receive approximately twice as much traffic as link 101.
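The Smooth Weighted Round-Robin selection can be sketched as follows (an illustrative implementation of the well-known algorithm, not the proxy's code):

```python
class SmoothWRR:
    """Smooth weighted round-robin over {link_id: weight} (sketch)."""
    def __init__(self, weights: dict[int, int]):
        self.weights = dict(weights)
        self.current = {link: 0 for link in weights}
        self.total = sum(weights.values())

    def pick(self) -> int:
        # Each step: raise every link's current weight by its configured weight,
        # pick the highest, then subtract the total weight from the winner.
        for link, w in self.weights.items():
            self.current[link] += w
        winner = max(self.current, key=self.current.get)
        self.current[winner] -= self.total
        return winner

# Pool with weights 100 -> 2000, 101 -> 1000: link 100 receives twice the
# traffic of link 101, and selections interleave smoothly rather than bursting.
pool = SmoothWRR({100: 2000, 101: 1000})
picks = [pool.pick() for _ in range(6)]
assert picks == [100, 101, 100, 100, 101, 100]
```

The "smooth" property is visible in the output: with a 2:1 weight ratio the heavier link is never chosen more than twice in a row, instead of sending a long burst to one link before switching.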

💡

Pool IDs should start from 1000 to avoid conflicts with link IDs. A pool does not create new SMPP connections — it reuses existing links defined in the [links] section.

Multiple Pools Example

# --- Links ---

[links.100]
esme_addr = "smsc1.operator-a.com"
esme_port = 2775
esme_systemid = "client_a1"
esme_password = "pass_a1"
esme_rate = 200
esme_rate_burst = 200

[links.101]
esme_addr = "smsc2.operator-a.com"
esme_port = 2775
esme_systemid = "client_a2"
esme_password = "pass_a2"
esme_rate = 200
esme_rate_burst = 200

[links.102]
esme_addr = "smsc1.operator-b.com"
esme_port = 2775
esme_systemid = "client_b1"
esme_password = "pass_b1"
esme_rate = 100
esme_rate_burst = 100

# --- Pools ---

[pools.1001]   # Operator A — balanced across two links
100 = 1000
101 = 1000

[pools.1002]   # Operator B — single link
102 = 1000

Configuration Files Summary

| File | Format | Purpose | Hot reload |
|---|---|---|---|
| config_local.yaml | YAML | Application settings, infrastructure, security, MCP filters | Requires restart |
| smpp_rules.toml | TOML | SMPP link connections and pool definitions | Requires restart |
| routes.ini | INI | Per-link routing rules (optional) | Requires restart |

Next Steps