Rate Limiting & Throttling

Version 1.0.0 · 2026-04-10 · Updated 2026-04-28 · Admin Team
Tags: concepts, sms-gateway-proxy, rate-limiting, throttling, redis, traffic-control

Overview

The SMS Gateway Proxy implements a multi-layer rate limiting architecture to protect downstream telecom infrastructure from overload and ensure fair traffic distribution across clients and SMPP links.

Every message passes through multiple control points before reaching the SMSC:

Client Request
     ↓
Authentication
     ↓
L1 Rate Limiter (in-process)
     ↓
L2 Distributed Rate Limiter (Redis)
     ↓
Contact Policy (MCP)
     ↓
SMPP Router → SMSC

Layer 1 — In-Process Rate Limiter

The L1 rate limiter runs inside the proxy process and provides fast, low-latency per-request throttling. It acts as the first line of defense, rejecting excess traffic before it reaches the distributed layer.

This layer is transparent — no additional configuration is required.
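
The document does not describe this layer's internals, but its behavior can be illustrated with a classic token bucket. The sketch below is an assumption for illustration, not the proxy's actual implementation; the class and parameter names are invented.

```python
import time

class TokenBucket:
    """Minimal token bucket: refills at `rate` tokens/sec, holds at most `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst          # start full, so an initial burst is allowed
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=200, burst=200)
print(bucket.allow())  # True until the burst is exhausted
```

Because the bucket lives in process memory, a check is a few arithmetic operations with no network round trip, which is what makes this layer suitable as a first line of defense.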


Layer 2 — Distributed Rate Limiter (Redis)

The L2 rate limiter is the primary traffic control mechanism. It uses Redis to enforce rate limits across all proxy instances in a cluster, ensuring consistent behavior even in multi-node deployments.

How It Works

Each SMPP link and pool has its own rate limit, configured in smpp_rules.toml:

[links.100]
esme_rate = 200        # Maximum 200 messages per second
esme_rate_burst = 200  # Maximum burst size

  • esme_rate — the sustained throughput limit in messages per second (RPS). The proxy will not exceed this rate over time.
  • esme_rate_burst — the maximum number of messages that can be sent in a short burst. This allows temporary spikes above the sustained rate.

Individual links enforce the limits configured for them in smpp_rules.toml.

Pools automatically calculate their rate limit as the sum of all healthy member links. For example, if a pool contains two links with esme_rate = 100 each, the pool's effective rate is 200 RPS.
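
The pool arithmetic above can be sketched in a few lines. The Link structure and helper are hypothetical; the real proxy derives these values from smpp_rules.toml and its health checks.

```python
from dataclasses import dataclass

@dataclass
class Link:
    esme_rate: int
    healthy: bool = True

def pool_rate(links):
    # A pool's effective limit is the sum of its healthy members' rates;
    # unhealthy links contribute zero until they recover.
    return sum(link.esme_rate for link in links if link.healthy)

pool = [Link(100), Link(100)]
print(pool_rate(pool))   # 200
pool[0].healthy = False
print(pool_rate(pool))   # 100
```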

Dynamic Adjustment

Rate limits are not static — they change automatically based on SMPP link health:

  • When a link goes down, its rate is set to zero and the parent pool's rate is reduced.
  • When a link recovers, its configured rate is restored and the pool's rate increases.

For details on this mechanism, see Link Health Monitoring.

What Happens When the Limit Is Exceeded

If a client sends messages faster than the configured rate, the excess requests receive an HTTP 429 (Too Many Requests) or a gRPC error response. The message is not queued — it is rejected immediately, and the client is expected to retry after a brief delay.
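
The usual client-side response to a 429 is retry with exponential backoff and jitter. A minimal sketch, assuming a caller-supplied send function; the payload shape is illustrative, not the gateway's actual API.

```python
import random
import time

def send_with_retry(send, payload, max_attempts=5, base_delay=0.1):
    """Call send(payload); on a 429-style status, back off and retry.

    Delay grows exponentially with full jitter so that throttled clients
    spread out instead of retrying in lockstep.
    """
    for attempt in range(max_attempts):
        status = send(payload)
        if status != 429:
            return status
        time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return 429

# Simulated gateway that throttles the first two attempts, then accepts:
responses = iter([429, 429, 200])
result = send_with_retry(lambda p: next(responses), {"to": "+15550100", "text": "hi"})
print(result)  # 200
```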


Contact Policy Layer (MCP)

In addition to transport-level rate limiting, the platform supports business-level message frequency controls through the Message Control Platform (MCP).

MCP operates at a higher level than L1/L2 — it enforces contact policies per destination phone number, preventing excessive messaging to individual recipients.

Available Strategies

  • CP1 — Frequency-based contact policy: limits how many messages a single phone number can receive per day and per week
  • CP1 On-Build — Same as CP1, but counters are managed during the campaign build phase
  • CP2 — Deduplication: prevents the same message from being sent to the same number within a time window
  • Blacklist — Rejects messages to phone numbers loaded from S3/MinIO blacklist files
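
To illustrate how a CP1-style frequency policy decides, here is a sketch with in-memory counters. The caps shown are placeholders, and the real MCP keeps these counters in Redis rather than process memory.

```python
from collections import defaultdict

# Placeholder caps; actual CP1 limits are configured in Node-RED flows.
DAILY_LIMIT = 3
WEEKLY_LIMIT = 10

daily = defaultdict(int)    # in the real MCP these counters live in Redis,
weekly = defaultdict(int)   # keyed per destination number and time window

def cp1_check(msisdn):
    """Return the CP1 decision for one outgoing message to `msisdn`."""
    if daily[msisdn] >= DAILY_LIMIT:
        return "daily_cp"
    if weekly[msisdn] >= WEEKLY_LIMIT:
        return "weekly_cp"
    daily[msisdn] += 1
    weekly[msisdn] += 1
    return "accepted"

for _ in range(4):
    print(cp1_check("+15550100"))  # accepted, accepted, accepted, daily_cp
```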

How MCP Differs from L2

Aspect              L2 Rate Limiter               MCP Contact Policy
Scope               Per-link / per-pool           Per-destination number
Purpose             Protect SMSC from overload    Protect end users from spam
Configured via      smpp_rules.toml               Node-RED flows
Storage             Redis                         Redis + S3
Response on reject  HTTP 429                      Rejected with status daily_cp, weekly_cp, duplicate, or blacklist

For detailed MCP documentation, see Message Control Platform.


Configuration Recommendations

Matching SMSC Limits

Set esme_rate and esme_rate_burst to values agreed with your SMSC provider. If the provider allows 100 RPS per connection, set:

esme_rate = 100
esme_rate_burst = 100

Setting values higher than the provider allows will result in SMSC-side rejections and wasted resources.

Burst vs. Sustained Rate

For most production deployments, set esme_rate and esme_rate_burst to the same value. Use a higher burst only if your traffic pattern includes short spikes (for example, campaign launches) and your SMSC provider can absorb them.

Pool Sizing

When building a pool, consider the total throughput you need. A pool with 3 links at 100 RPS each provides 300 RPS total — but if one link fails, capacity drops to 200 RPS. Size your pools with enough headroom to tolerate at least one link failure.

Multi-Node Deployments

In a cluster with multiple proxy instances, the L2 limiter (Redis) ensures that the total rate across all instances does not exceed the configured limit. Each proxy instance coordinates through Redis — no additional configuration is needed.
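
One common way to coordinate a limit through Redis is a per-window counter that every node increments atomically. The sketch below emulates that idea with an in-memory stand-in; the proxy's actual Redis key schema and algorithm are not documented here, so all names are illustrative.

```python
import time

class FakeRedis:
    """In-memory stand-in for the one atomic operation the sketch needs."""

    def __init__(self):
        self.store = {}  # key -> (count, window_expiry)

    def incr_with_ttl(self, key, ttl, now):
        # Emulates INCR plus EXPIRE-on-first-increment, done atomically.
        count, expires = self.store.get(key, (0, now + ttl))
        if now >= expires:
            count, expires = 0, now + ttl
        count += 1
        self.store[key] = (count, expires)
        return count

def allow(redis, link_id, limit, now=None):
    # Every proxy node increments the same per-link, per-second counter,
    # so the cluster-wide total for a window can never exceed `limit`.
    now = time.time() if now is None else now
    key = f"rl:{link_id}:{int(now)}"
    return redis.incr_with_ttl(key, ttl=1, now=now) <= limit

r = FakeRedis()
allowed = sum(allow(r, 100, limit=200, now=1000.0) for _ in range(250))
print(allowed)  # 200 of 250 requests pass in the same one-second window
```

Because the counter is shared, it does not matter which node a request lands on; the trade-off is one Redis round trip per check, which is why the cheap L1 limiter runs first.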


Monitoring Rate Limits

Current Throughput

curl http://localhost:8080/api/v1/dashboard/stats

Returns real-time RPS data for the entire system.

curl http://localhost:8080/api/v1/dashboard/smpp-links

Shows each link's current state and effective rate limit.

Prometheus Metrics

The /metrics endpoint exposes rate limiting counters compatible with Prometheus, enabling alerts on sustained throttling or link degradation.