Overview
Use the HTTP Sink Destination to push streamed records from your topics to any reachable HTTP(S) endpoint. This is useful for integrating with ingestion APIs, webhooks, or custom services that accept POST requests.

Prerequisites
- An HTTPS endpoint that accepts POST requests (e.g. ingestion API)
- Decision on authentication: None, Static Authorization header (Bearer/API key, Basic Auth), or OAuth2 Client Credentials
- Capacity planning for batch size vs endpoint rate limits
Endpoint Preparation
Before adding the Destination:

- Confirm the endpoint supports idempotency (recommended). If possible, design it to ignore duplicate records using a unique key.
- Record the base URL you will send data to. Example: https://api.example.com/ingest.
- If using a static token/key, create it and store it securely.
- If using OAuth2 Client Credentials, collect:
- Token URL
- Client ID
- Client Secret
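The idempotency recommendation above can be sketched as a server-side dedup check. This is an illustrative example, not Streamkap code; the "id" field is an assumed unique key, and your records may carry a different one.

```python
# Minimal sketch of server-side idempotency: ignore records whose unique
# key has already been seen. The "id" field name is an assumption; use
# whatever unique key your records actually carry.
import json

seen_keys = set()  # illustrative only; production would use a durable store

def ingest(record: dict) -> bool:
    """Accept the record unless its key was already processed.
    Returns True if accepted, False if it was a duplicate delivery."""
    key = record["id"]
    if key in seen_keys:
        return False  # duplicate (e.g. re-sent after a batch retry): skip
    seen_keys.add(key)
    # ... persist the record here ...
    return True

def handle_batch(body: str) -> int:
    """Parse a newline-delimited JSON batch and count accepted records."""
    return sum(ingest(json.loads(line)) for line in body.splitlines() if line)
```

Because retries re-send whole batches, an endpoint built this way simply skips the records it already stored.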
Streamkap Setup (UI)
- Navigate to Destinations and choose HTTP Sink.
- Fill in the fields:
- Name – A memorable identifier.
- URL – The full HTTPS endpoint to POST each record or batch to.
- Authentication Type – Select None, Static, or OAuth2.
  - Static: Paste the Authorization header value. Do not include the word Authorization: – just the header value. Examples:
    - Bearer Token: Bearer eyJ...
    - API Key (standard format): ApiKey sk-1234567890abcdef
    - API Key (provider-specific): Check your API documentation – enter <key> directly or in the format specified (e.g., Bearer, Token, etc.).
    - Basic Auth: Basic dXNlcjpwYXNzMTIz (base64-encoded username:password)
  - Custom Headers (non-Authorization): If your API uses custom headers like X-API-Key, use the Additional Headers field with the format X-API-Key:sk-1234567890abcdef.
  - OAuth2: Enter Token URL, Client ID, Client Secret, and an optional Scope. Leave the grant type at its default unless your provider differs.
- Content-Type – The media type expected by your endpoint (commonly application/json). Must be non-blank.
- Additional Headers – Comma-separated Header:Value pairs (case-insensitive). Omit duplicates. Example: X-Env:prod,X-Source:streamkap.
- Batching – Enable if your endpoint supports multiple records per request.
- Batch Max Size – Maximum number of records per batch (default 500). Start conservatively (50–200). Increase after observing latency.
- Enable Batch Buffering – When enabled, records are held and only sent when either batch max size is reached OR batch max time expires. When disabled, records are sent immediately as they arrive (but still grouped up to batch max size).
- Batch Max Time (ms) – Maximum time (in milliseconds) to hold records before flushing, even if batch size is not reached (default 1000 ms). Useful to ensure latency-sensitive endpoints receive data within a predictable window.
- Separator – Typically \n for newline-delimited JSON. Leave the default unless your endpoint requires another delimiter.
- Prefix/Suffix – Optional text inserted before the first and after the last record. Leave blank unless wrapping records in a JSON array (e.g. Prefix [, Separator ,\n, Suffix ]).
- Timeout (Seconds) – How long to wait for an HTTP response. Raise if your endpoint is slow; default works for most.
- Retries – Max attempts for transient HTTP failures. Increase only if backend occasionally returns 5xx.
- Backoff (ms) – Delay between retries. Adjust to respect rate limits.
- Decimal Format – Choose numeric if the endpoint expects plain numbers; otherwise keep base64.
- Save the Destination.
- Attach Pipelines / Topics to the Destination as needed.
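The Timeout, Retries, and Backoff settings above can be illustrated with a rough sketch of a whole-batch retry loop. The function names and structure here are illustrative assumptions, not Streamkap internals.

```python
# Sketch of the retry behavior the settings above control: re-send the
# whole batch on transient failures, waiting backoff_ms between attempts.
import time
import urllib.request

def post_batch(url: str, body: bytes, headers: dict, timeout_s: float = 30.0) -> int:
    """POST one serialized batch and return the HTTP status code."""
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        return resp.status

def send_with_retries(send, retries: int = 3, backoff_ms: int = 1000):
    """Call send() up to `retries` times, sleeping between attempts.
    Retries apply to the whole batch, mirroring the sink's behavior."""
    last_exc = None
    for attempt in range(retries):
        try:
            return send()
        except Exception as exc:  # e.g. urllib.error.HTTPError for a 5xx
            last_exc = exc
            if attempt < retries - 1:
                time.sleep(backoff_ms / 1000)
    raise last_exc  # retries exhausted: surfaces as a task error
```

For example, send_with_retries(lambda: post_batch(url, body, headers)) re-sends the entire batch on each failure, which is why backoff should be tuned to your endpoint's rate limits.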
How It Works
Each record (or batch) is serialized (JSON by default) and POSTed to the configured URL. Batching writes multiple serialized records, separated by the chosen delimiter, within one request. Retries apply to whole batches; failures after max retries raise task errors.

Authentication Modes (UI Perspective)
- None – No Authorization header sent.
- Static – Streamkap sends Authorization: <value> exactly once per request. Supports multiple formats:
  - Bearer Tokens: Authorization: Bearer <token> for API tokens and JWTs.
  - API Keys: Depending on your provider, enter one of these formats:
    - Authorization: ApiKey <key> – Some APIs use this standardized format.
    - Authorization: <key> – Others accept the key directly without a prefix.
    - If your API uses a custom header like X-API-Key: <key>, use the Additional Headers field instead.
  - Basic Auth: Authorization: Basic <base64-credentials> for username/password authentication. You must manually convert username:password to Base64 before entering it. Online tools or command-line utilities (e.g., echo -n "user:pass" | base64) can help with encoding.
- OAuth2 – Streamkap fetches and refreshes tokens automatically using the Client Credentials flow. Tokens are inserted as Authorization: Bearer <token>.
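The Basic Auth encoding step can also be done with a few lines of Python instead of an online tool; this sketch reproduces the dXNlcjpwYXNzMTIz example above (which is "user:pass123" base64-encoded).

```python
# Build the Static Authorization header value for Basic Auth by
# base64-encoding "username:password".
import base64

def basic_auth_value(username: str, password: str) -> str:
    creds = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(creds).decode("ascii")
```

For example, basic_auth_value("user", "pass123") produces "Basic dXNlcjpwYXNzMTIz", which is exactly what you would paste into the Static field.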
Batching Guidance
- Start small to measure endpoint latency (e.g. 100 records).
- Increase gradually while monitoring HTTP 429 or 5xx responses.
- Use newline separation for simple line-based ingestion; use array wrapping (Prefix [, Separator ,\n, Suffix ]) if the endpoint expects a JSON array.
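The two payload shapes above can be sketched by joining serialized records with the configured Prefix, Separator, and Suffix. The helper name is illustrative, not part of the product.

```python
# Sketch of how Prefix, Separator, and Suffix combine serialized records
# into one request body, for the two common configurations.
import json

def build_payload(records, prefix="", separator="\n", suffix=""):
    """Serialize records and join them into a single batch body."""
    return prefix + separator.join(json.dumps(r) for r in records) + suffix

records = [{"id": 1}, {"id": 2}]

ndjson = build_payload(records)                    # newline-delimited JSON
wrapped = build_payload(records, "[", ",\n", "]")  # JSON array wrapping
```

With array wrapping, the whole body parses as one JSON document (json.loads(wrapped) returns the list of records); with newline separation, each line parses independently.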
Batch Buffering & Flushing Behavior
When batching is enabled, you control when batches are sent using the Enable Batch Buffering setting:

Without Batch Buffering (Default)
- Records are immediately POSTed in batches as they arrive, up to the configured Batch Max Size.
- If fewer records than the max are available, they are sent without waiting.
- Use this mode for low-latency, record-by-record processing with minimal delay.
With Batch Buffering
- Records are held in memory and only sent when either condition is met:
- Batch reaches max size – The batch hits the configured Batch Max Size limit.
- Time window expires – The Batch Max Time (ms) interval elapses since the first record arrived.
- This mode ensures predictable latency and can improve endpoint efficiency by batching sparse traffic.
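The size-or-time flush rule above can be modeled with a small buffer class. This is a toy illustration of the two conditions, not the sink's actual implementation.

```python
# Toy model of buffered flushing: a batch is sent when it reaches max_size,
# or when max_time_ms has elapsed since its first record arrived.
import time

class BatchBuffer:
    def __init__(self, max_size=500, max_time_ms=1000):
        self.max_size = max_size
        self.max_time_ms = max_time_ms
        self.records = []
        self.first_at = None  # when the oldest buffered record arrived

    def add(self, record):
        if not self.records:
            self.first_at = time.monotonic()
        self.records.append(record)

    def should_flush(self) -> bool:
        if not self.records:
            return False
        if len(self.records) >= self.max_size:
            return True  # size condition: Batch Max Size reached
        elapsed_ms = (time.monotonic() - self.first_at) * 1000
        return elapsed_ms >= self.max_time_ms  # time condition: window expired

    def flush(self):
        batch, self.records, self.first_at = self.records, [], None
        return batch
```

A sparse stream therefore never waits longer than max_time_ms, while a busy stream flushes as soon as max_size records accumulate.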
Configuration Dependencies
| Setting | Requires | Effect |
|---|---|---|
| Enable Batching | – | When enabled, all batch-related options become available. |
| Batch Max Size | Batching enabled | Maximum records per request. Default 500; affects payload size. |
| Enable Batch Buffering | Batching enabled | When enabled, Batch Max Time becomes available. |
| Batch Max Time (ms) | Batching + Buffering enabled | Default 1000 ms. Only active when buffering is on. |
| Batch Prefix/Suffix/Separator | Batching enabled | Control JSON or text formatting of the batch payload. |
Example Scenarios
Scenario 1: Real-time ingestion, low latency

- Enable Batching: ✓
- Enable Batch Buffering: ✗
- Batch Max Size: 100
- Result: Records are sent in groups of up to 100 as they arrive, without waiting, minimizing delay.
Scenario 2: High-throughput ingestion

- Enable Batching: ✓
- Enable Batch Buffering: ✓
- Batch Max Size: 1000
- Batch Max Time: 5000 ms
- Result: Records are held up to 5 seconds or until 1000 records accumulate, maximizing throughput.
Scenario 3: Balanced latency and throughput

- Enable Batching: ✓
- Enable Batch Buffering: ✓
- Batch Max Size: 500
- Batch Max Time: 2000 ms
- Result: Balances latency and efficiency; a batch is sent within ~2 seconds of its first record, or sooner once 500 records accumulate.
Security Notes
- Secrets (client secret, static token) are stored encrypted.
- Prefer HTTPS only; do not use plain HTTP for production.
- Validate payload size limits on the receiving side to prevent denial-of-service from oversized batches.
Limitations
- No automatic schema negotiation with the endpoint – ensure the receiving service tolerates new fields.
- Retries are whole-batch; a single problematic record causes re-send of the entire batch.
- No built-in rate limiting; configure backoff and batch size to respect endpoint limits.