Description

📦 v1.0.0 · 📅 2026-03-10 · 🔄 Updated 2026-04-28 · 👤 Admin Team
Tags: concepts, message-control-protocol, mcp, contact-policy, node-red, filtering

MCP — Message Control Platform

Overview

MCP (Message Control Platform) is a flexible, visual message filtering and contact policy enforcement engine built on top of Node-RED.

It acts as the control plane of the messaging infrastructure — every outbound message is evaluated against a configurable set of policy workflows before it is allowed to reach the transmission layer (SMPP Proxy).

MCP provides a no-code/low-code approach to building sophisticated contact policy rules. Using the Node-RED visual flow editor, operators can design, modify, and deploy filtering strategies in real time — without redeploying backend services or writing application code.

Key value: MCP gives your messaging operations a powerful, real-time policy engine with a visual interface — enabling rapid iteration on business rules, compliance requirements, and contact frequency controls.

(Screenshot: main_page_mcp.png — MCP main page)


Architecture at a Glance

Component | Role
Node-RED | Visual flow engine — all policy logic is defined as workflows (flows)
Redis | High-performance state store for counters, blacklists, and deduplication keys
S3 / MinIO | Object storage for blacklist source files (CSV)
Prometheus | Metrics exporter for monitoring request throughput per strategy

MCP is fully containerized and horizontally scalable. Multiple Node-RED instances share the same flow definitions and connect to a shared Redis and S3 backend.


How It Works

Every incoming check request hits a single HTTP API endpoint. The request is authenticated, then routed to the appropriate strategy workflow based on the strategy field in the request body. Each strategy implements its own filtering logic.
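
The dispatch step can be sketched as a lookup table keyed on the strategy field. This is an illustrative Python sketch, not the actual Node-RED flow; the placeholder handlers stand in for the real strategy workflows, and the unknown_strategy fallback mirrors the state value listed in the API reference.

```python
# Hypothetical sketch of strategy routing. Each handler here is a stub;
# in MCP the handlers are Node-RED flows.
STRATEGIES = {
    "cp1": lambda req: {"state": "ok", "pass": True},
    "onbuild_cp1": lambda req: {"state": "ok", "pass": True},
    "cp2": lambda req: {"state": "ok", "pass": True},
    "blacklist": lambda req: {"state": "ok", "pass": True},
}

def route(request: dict) -> dict:
    # Route on the "strategy" field of the request body;
    # unrecognized strategy names are rejected.
    handler = STRATEGIES.get(request.get("strategy"))
    if handler is None:
        return {"state": "unknown_strategy", "pass": False}
    return handler(request)
```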

Request Lifecycle

(Diagram: MessageCP_StrategyRouting_Flow.svg — request lifecycle and strategy routing)

The response is a JSON object with two fields:

  • state — the reason code (e.g. ok, daily_cp, weekly_cp, blacklist, duplicate)
  • pass — boolean indicating whether the message is allowed to proceed

Available Strategies (Workflows)

1. CP1 — Frequency-Based Contact Policy

Purpose: Limits the number of messages a single recipient (daddr) can receive within daily and weekly windows.

How it works:

  1. A Redis key daddr:<phone_number> stores a JSON counter object with fields d (daily count) and w (weekly count).
  2. On the first message to a new recipient, a counter is created with d=1, w=1 and a configurable TTL.
  3. On subsequent messages, counters are incremented and checked against limits:
    • If the daily limit is exceeded → the message is denied with state daily_cp
    • If the weekly limit is exceeded → the message is denied with state weekly_cp
  4. If both limits are within bounds, the counter is updated (with preserved TTL) and the message is allowed.
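
The steps above can be sketched in a few lines of Python. An in-memory dict stands in for Redis, and the limits are illustrative assumptions, not the real configuration; the real flow also preserves the key's TTL when updating the counter.

```python
# Minimal sketch of CP1 counter logic (dict in place of Redis; limits
# are assumed values for illustration).
import json

DAILY_LIMIT = 3    # assumed; configurable in the real deployment
WEEKLY_LIMIT = 10  # assumed; configurable in the real deployment
store = {}         # stands in for Redis: key -> JSON counter string

def check_cp1(daddr: str) -> dict:
    key = f"daddr:{daddr}"
    raw = store.get(key)
    if raw is None:
        # First message to this recipient: create the counter
        # (the real flow also sets a TTL on the key).
        store[key] = json.dumps({"d": 1, "w": 1})
        return {"state": "ok", "pass": True}
    counter = json.loads(raw)
    if counter["d"] + 1 > DAILY_LIMIT:
        return {"state": "daily_cp", "pass": False}
    if counter["w"] + 1 > WEEKLY_LIMIT:
        return {"state": "weekly_cp", "pass": False}
    counter["d"] += 1
    counter["w"] += 1
    store[key] = json.dumps(counter)  # real flow preserves the TTL here
    return {"state": "ok", "pass": True}
```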

Counter Reset:

  • Daily counters are reset to zero every day at midnight via a scheduled cron job.
  • Weekly counters are reset to zero every Monday at midnight.
  • Resets are performed atomically using Lua scripts executed directly in Redis, processing keys in batches for efficiency.
  • Manual reset is also available via HTTP endpoints (/api/v1/dcp for daily, /api/v1/wcp for weekly).
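
The batched reset can be sketched as follows. In MCP this runs as a Lua script inside Redis; here a dict stands in for Redis and the batch size is an assumed value, so the sketch shows only the shape of the operation, not the atomicity guarantees Lua provides.

```python
# Sketch of the daily counter reset, mirroring the batched approach:
# walk the daddr:* keys in batches and zero the "d" field.
import json

def reset_daily(store: dict, batch_size: int = 100) -> int:
    """Zero the daily counter on every daddr:* key; returns keys touched."""
    keys = [k for k in store if k.startswith("daddr:")]
    touched = 0
    for i in range(0, len(keys), batch_size):  # process in batches
        for key in keys[i:i + batch_size]:
            counter = json.loads(store[key])
            counter["d"] = 0
            store[key] = json.dumps(counter)
            touched += 1
    return touched
```

A weekly reset would be identical with `counter["w"] = 0`; the manual /api/v1/dcp and /api/v1/wcp endpoints trigger the same operation on demand.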

2. CP1 On-Build — Pre-flight Contact Policy Check

Purpose: A read-only variant of the CP1 strategy used for pre-validation during message preparation.

How it works:

  1. Looks up the same daddr:<phone_number> key in Redis.
  2. If no counter exists → the message is allowed (pass: true).
  3. If a counter exists → checks whether daily or weekly limits have already been reached, and returns the appropriate denial reason.
  4. Does not modify any counters — purely a read operation.

This strategy enables upstream services to check whether a message would be blocked before actually submitting it.
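
A sketch of the read-only variant, under the same assumptions as before (dict in place of Redis, illustrative limits). The key property is that the store is never written:

```python
# Read-only pre-flight check: same key and limits as CP1, no writes.
import json

DAILY_LIMIT = 3    # assumed value for illustration
WEEKLY_LIMIT = 10  # assumed value for illustration

def check_onbuild_cp1(store: dict, daddr: str) -> dict:
    raw = store.get(f"daddr:{daddr}")
    if raw is None:
        # No counter yet: the message would be allowed.
        return {"state": "ok", "pass": True}
    counter = json.loads(raw)
    if counter["d"] >= DAILY_LIMIT:
        return {"state": "daily_cp", "pass": False}
    if counter["w"] >= WEEKLY_LIMIT:
        return {"state": "weekly_cp", "pass": False}
    return {"state": "ok", "pass": True}
```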


3. CP2 — Deduplication Policy

Purpose: Prevents duplicate messages to the same recipient within a configurable time window.

How it works:

  1. A Redis key dedup:<phone_number> is checked.
  2. If the key does not exist → the message is new:
    • A deduplication marker is created with a TTL (time-to-live window).
    • The message is allowed (pass: true).
  3. If the key exists → the message is a duplicate:
    • The message is denied with state duplicate.
    • An HTTP 429 (Too Many Requests) response is returned.

This effectively creates a cooldown window per recipient — no two messages to the same number are allowed within the configured interval.
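
The cooldown logic can be sketched like this. In the real flow the TTL is enforced by Redis key expiry; this sketch stores an explicit expiry timestamp in a dict instead, and the window length is an assumed value.

```python
# Sketch of CP2 deduplication: first message in the window passes and
# sets a marker; any repeat before the marker expires is denied.
import time

DEDUP_TTL = 60.0  # cooldown window in seconds (assumed value)

def check_cp2(store: dict, daddr: str, now=None) -> dict:
    now = time.monotonic() if now is None else now
    key = f"dedup:{daddr}"
    expires = store.get(key)
    if expires is not None and expires > now:
        # Duplicate within the window (HTTP 429 in the real flow).
        return {"state": "duplicate", "pass": False}
    store[key] = now + DEDUP_TTL  # (re)create the marker with a TTL
    return {"state": "ok", "pass": True}
```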


4. Blacklist — Contact Blacklist Check

Purpose: Blocks messages to phone numbers that appear on a centrally managed blacklist.

How it works:

  1. A Redis key blacklist:<phone_number> is checked.
  2. If the key does not exist → the number is not blacklisted, and the message is allowed.
  3. If the key exists → the number is blacklisted, and the message is denied with state blacklist.

Blacklist Management:

The blacklist is maintained as CSV files stored in an S3-compatible object storage (MinIO). The blacklist lifecycle is fully automated:

  1. Load — CSV files are listed and read from the S3 bucket contact-policy/black-list/.
  2. Transform — CSV rows are split, parsed, and formatted into Redis SET commands with a transaction key and TTL.
  3. Store — All entries are written to Redis in a single atomic MULTI transaction.
  4. Clean — After loading the new blacklist, old entries (from previous loads) are identified and deleted based on the transaction key, ensuring the blacklist is always current.

Auto-Renewal: A configurable cron schedule triggers automatic blacklist refresh from S3, ensuring the blacklist stays synchronized with the source of truth.
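
The load/transform/store/clean cycle can be sketched as below. A dict stands in for Redis, the CSV is passed in as text rather than fetched from S3, and the single-column CSV layout is an assumption; the real flow additionally wraps the writes in a MULTI transaction and sets TTLs.

```python
# Sketch of a blacklist refresh: parse CSV rows, tag each entry with a
# transaction key, load them, then drop entries from earlier loads.
import csv
import io
import uuid

def refresh_blacklist(store: dict, csv_text: str) -> str:
    txn = uuid.uuid4().hex  # transaction key identifying this load
    # Transform + Store: each CSV row becomes a blacklist entry.
    for row in csv.reader(io.StringIO(csv_text)):
        if row:
            store[f"blacklist:{row[0].strip()}"] = txn
    # Clean: delete entries written by previous transactions.
    stale = [k for k, v in store.items()
             if k.startswith("blacklist:") and v != txn]
    for k in stale:
        del store[k]
    return txn
```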


5. Health Check

Purpose: A simple liveness probe endpoint.

  • Endpoint: GET /api/v1/health
  • Response: 200 OK

Used for container orchestration health monitoring (Docker, Kubernetes).


API Reference

Method | Endpoint | Description
POST | /api/v1/check | Main policy check endpoint — evaluates a message against the specified strategy
GET | /api/v1/health | Health check / liveness probe
GET | /api/v1/dcp | Trigger manual daily counter reset
GET | /api/v1/wcp | Trigger manual weekly counter reset

Policy Check Request

POST /api/v1/check
Header: x-api-auth: <api_key>

Request Body:

{
  "task_id": "49",
  "transaction_id": "1C439986-E1E9-4697-A127-4079EF1180D9",
  "app_id": 1,
  "esme": 100,
  "saddr": "SENDER_NAME",
  "daddr": "998901234567",
  "message": "Hello",
  "strategy": "cp1"
}

Supported strategies: cp1, onbuild_cp1, cp2, blacklist
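
A check request can be issued from any HTTP client. The sketch below builds the request with the Python standard library; the base URL and API key are placeholders, while the endpoint path, x-api-auth header, and body fields come from the reference above.

```python
# Hypothetical client helper for POST /api/v1/check. Builds the request
# object only; sending it requires a running MCP instance.
import json
import urllib.request

def build_check_request(base_url: str, api_key: str, payload: dict):
    return urllib.request.Request(
        url=f"{base_url}/api/v1/check",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-auth": api_key},
        method="POST",
    )

# Usage (placeholder host/port):
#   req = build_check_request("http://mcp.local:1880", "<api_key>",
#                             {"daddr": "998901234567", "strategy": "cp1"})
#   urllib.request.urlopen(req)  # returns {"state": ..., "pass": ...}
```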

Response (Allowed)

{
  "state": "ok",
  "pass": true
}

Response (Denied)

{
  "state": "daily_cp",
  "pass": false
}

Possible state values: ok, daily_cp, weekly_cp, duplicate, blacklist, unknown_strategy


Observability

MCP exports Prometheus-compatible metrics for monitoring:

Metric | Type | Description
requests_total | Counter | Total number of policy check requests
req_totl | Counter | Requests broken down by strategy label

Metrics are accessible via the standard Node-RED Prometheus exporter endpoint.


Scalability

MCP is designed for horizontal scaling:

  • Multiple Node-RED instances (contact-policy-0 through contact-policy-4) run the same flow definitions from a shared volume.
  • All instances connect to a shared Redis backend for state consistency.
  • An external load balancer distributes traffic across instances.
  • Stateless request processing ensures any instance can handle any request.

Technology Stack

Technology | Version | Purpose
Node-RED | 4.1.1 (Debian) | Visual flow engine
Redis | latest | State storage (counters, blacklists, dedup keys)
MinIO (S3-compatible) | — | Blacklist file storage
Prometheus Exporter | 1.0.5 | Metrics export
AWS S3 SDK | 3.x | S3 bucket integration
Aerospike | 6.4.0 | Alternative storage backend (experimental, disabled)

gRPC Authentication