| Object | Change | Behavior |
|---|---|---|
| Column | Add | Adds the new column to the end of the destination table. |
| | Remove | Keeps the column in the destination table intact, but it is left empty (NULL values) from the point of change onwards. |
| | Data type | Adds a new column. For example, if a column named AGE of type NUMBER is altered to VARCHAR, an additional column named AGE_CHR of type VARCHAR is added. The AGE column is kept intact. |
| | Rename | Adds a new column. For example, if a column named FIRST_NAME is renamed to FULL_NAME, an additional column named FULL_NAME is added and populated from the point of change onwards. The FIRST_NAME column is kept intact but left empty (NULL values). |
| Table | Add | Add a table by modifying your Source Connector and Pipelines. Streamkap will automatically backfill/snapshot the newly added table (by default) while streaming data simultaneously. |
| | Remove | Keeps the destination table intact but stops capturing change event data for it. |
| | Rename | Renaming a table will report an error; you will need to add the renamed table to the affected Source Connector and Pipeline(s) to capture it. |
## How to Configure

Schema evolution is configured per destination connector. The setting is found in the SMT (Single Message Transform) tab of your destination configuration.

### Steps
- Navigate to Destinations and select your destination connector
- Open the Settings panel
- Select the SMT tab
- Locate the `schema.evolution` setting
- Choose one of the available options:
  - `basic` (default) — Automatically evolves the destination schema when source schema changes are detected
  - `none` — Source schema changes are ignored at the destination; no automatic schema modifications are made
- Click Save
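For reference, the same setting maps to a single connector property. A hedged sketch of how it would appear in a raw Kafka Connect-style sink configuration (the property name matches the UI setting above; any surrounding properties are omitted):

```json
{
  "schema.evolution": "basic"
}
```

Set the value to `"none"` instead to disable automatic schema changes.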
## Option Details
### `basic` (default)
When set to `basic`, Streamkap automatically adapts the destination schema in response to source changes:

- New columns are added to the destination table automatically
- Data type changes are handled by adding a new column with a type suffix (e.g., an `AGE` column changed from `NUMBER` to `VARCHAR` results in a new `AGE_CHR` column)
- Renamed columns result in a new column being added; the original column is retained but receives `NULL` values going forward
- Removed columns are kept in the destination but receive `NULL` values from the point of removal
### `none`
When set to `none`, Streamkap does not modify the destination schema in response to source changes:

- New columns in the source are not added to the destination
- Data type changes, renames, and removals in the source are ignored
- You must manage the destination schema manually
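The two modes can be illustrated with a small simulation (a sketch, not Streamkap code; the `_CHR` suffix mirrors the `AGE` example above, and the actual suffix may vary by target type):

```python
def evolve(dest_schema, change, mode="basic"):
    """Apply a source schema change to a destination schema (name -> type).

    Illustrative model of the rules described above, not Streamkap's
    implementation. `change` is a dict like {"kind": "add", ...}.
    """
    if mode == "none":
        return dest_schema  # destination schema is never modified
    dest = dict(dest_schema)
    kind = change["kind"]
    if kind == "add":
        dest[change["column"]] = change["type"]  # new column appended
    elif kind == "remove":
        pass  # column kept in place, filled with NULLs from here on
    elif kind == "retype":
        # original column kept; a suffixed column carries the new type
        dest[change["column"] + "_CHR"] = change["type"]
    elif kind == "rename":
        # new column added; old column kept but receives NULLs going forward
        dest[change["new_name"]] = dest[change["column"]]
    return dest

schema = {"AGE": "NUMBER", "FIRST_NAME": "VARCHAR"}
schema = evolve(schema, {"kind": "retype", "column": "AGE", "type": "VARCHAR"})
schema = evolve(schema, {"kind": "rename", "column": "FIRST_NAME",
                         "new_name": "FULL_NAME"})
print(schema)
# {'AGE': 'NUMBER', 'FIRST_NAME': 'VARCHAR', 'AGE_CHR': 'VARCHAR', 'FULL_NAME': 'VARCHAR'}
```

With `mode="none"` the same calls return the destination schema unchanged, which is why manual schema management is required in that mode.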
## Destination-Specific Notes
**Other destinations** — For all other supported destinations, schema evolution is available regardless of the ingestion mode. No additional privileges beyond the standard connector permissions are required.
## Verifying Schema Evolution
After configuring schema evolution, verify that changes propagate correctly from source to destination.

### Steps to Test
- Add a new column at the source database (e.g., `ALTER TABLE orders ADD COLUMN notes VARCHAR(255);`)
- Insert or update a row in that table so the change event includes the new column
- Wait for propagation — the new column should appear at the destination after the pipeline processes the change event
- Check the destination — confirm the new column exists and has the expected data type
Schema evolution only applies to new change events. Existing rows that were written before the schema change will not be retroactively updated with new columns. To backfill existing rows with values for the new column, trigger an incremental snapshot of the affected table.
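The final verification step can be scripted. Below is a minimal sketch using SQLite as a stand-in for the destination (an assumption for illustration; real destinations expose column metadata through `information_schema` or an equivalent catalog):

```python
import sqlite3

# In-memory SQLite database standing in for the destination table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

# Simulate the schema change having propagated to the destination
conn.execute("ALTER TABLE orders ADD COLUMN notes VARCHAR(255)")

# Check the destination: confirm the new column exists with the expected type
columns = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(orders)")}
assert "notes" in columns, "new column has not propagated yet"
print(columns["notes"])  # VARCHAR(255)
```

Against a real warehouse you would query its column catalog instead of `PRAGMA table_info`, but the check itself, "does the column exist and carry the expected type", is the same.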
## Column Name Case Sensitivity
When streaming data between systems, column name casing can differ between the source and destination. A column created as `firstName` in one database may arrive as `FIRSTNAME` or `firstname` in another. Understanding each system's default behavior helps prevent schema mismatches and failed writes.
### How Streamkap Handles Case
Streamkap preserves the original column name casing from the source database through the pipeline. The column name that appears in the source's change event is carried as-is through Kafka and delivered to the destination connector. However, the destination database may apply its own case-folding rules when creating or matching columns. This means the final column name at the destination depends on how that particular database handles identifiers.

### Sources
| Database | Default Behavior | Notes |
|---|---|---|
| PostgreSQL | Lowercase | Folds unquoted identifiers to lowercase. Double-quoted names preserve original casing. |
| MySQL | Varies | Depends on the operating system and the `lower_case_table_names` server setting. |
| Oracle | UPPERCASE | Folds unquoted identifiers to uppercase. Double-quoted names preserve original casing. |
| SQL Server | Varies | Case behavior depends on the database or column collation setting. |
| MongoDB | Case-sensitive | Field names are preserved exactly as-is. |
### Destinations
| Database | Default Behavior | Notes |
|---|---|---|
| Snowflake | UPPERCASE | Folds unquoted identifiers to uppercase. Double-quoted names preserve original casing. |
| ClickHouse | Case-sensitive | Column names are stored and matched exactly as provided. |
| BigQuery | Case-sensitive | Column names are stored and matched exactly as provided. |
| PostgreSQL | Lowercase | Folds unquoted identifiers to lowercase. Double-quoted names preserve original casing. |
| Redshift | Lowercase | Folds all identifiers to lowercase regardless of quoting. |
| S3 / Iceberg | Case-preserving | Column names are preserved as received. Final behavior may depend on the file format (Parquet, Avro, etc.). |
### Case Handling Scenarios

The following examples illustrate how column names flow from specific sources to specific destinations.

#### PostgreSQL (lowercase) to Snowflake (uppercase)
PostgreSQL folds unquoted identifiers to lowercase. A column created as `firstName` (unquoted) is stored as `firstname`. When Streamkap streams this to Snowflake, the change event carries the name `firstname`. Snowflake then folds unquoted identifiers to uppercase, so the column is created as `FIRSTNAME` in the destination table.

- Source column: `firstname` (PostgreSQL lowercase)
- Destination column: `FIRSTNAME` (Snowflake uppercase)
#### MySQL (case-insensitive) to PostgreSQL (lowercase)
MySQL's case sensitivity depends on the operating system and the `lower_case_table_names` setting. If MySQL stores a column as `CustomerName`, the change event carries that exact casing. When Streamkap delivers this to PostgreSQL, PostgreSQL folds unquoted identifiers to lowercase, so the column is created as `customername`.

- Source column: `CustomerName` (MySQL mixed-case)
- Destination column: `customername` (PostgreSQL lowercase)
#### MongoDB (case-sensitive) to BigQuery (case-sensitive)
MongoDB preserves field names exactly as-is, and BigQuery also stores column names exactly as provided. A field named `userEmail` in MongoDB arrives as `userEmail` in BigQuery with no case transformation.

- Source field: `userEmail` (MongoDB exact)
- Destination column: `userEmail` (BigQuery exact)
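The scenarios above can be modeled with a small case-folding function. This is an illustrative sketch of the default, unquoted-identifier rules from the tables above; quoted identifiers, collations, and server settings all change the outcome:

```python
# Default identifier folding per system, per the tables above.
FOLDING = {
    "postgresql": str.lower,   # unquoted identifiers fold to lowercase
    "oracle": str.upper,       # unquoted identifiers fold to uppercase
    "snowflake": str.upper,
    "redshift": str.lower,     # folds regardless of quoting
    "mysql": lambda s: s,      # varies by OS/settings; assumed case-preserving here
    "mongodb": lambda s: s,    # case-sensitive: preserved exactly
    "bigquery": lambda s: s,
    "clickhouse": lambda s: s,
}

def destination_column(name, source, destination):
    """Name as stored at the source, then as created at the destination."""
    stored = FOLDING[source](name)        # source folds when the column is created
    return FOLDING[destination](stored)   # destination folds when it creates the column

print(destination_column("firstName", "postgresql", "snowflake"))   # FIRSTNAME
print(destination_column("CustomerName", "mysql", "postgresql"))    # customername
print(destination_column("userEmail", "mongodb", "bigquery"))       # userEmail
```

Running a planned pipeline's column names through a model like this before deployment is a cheap way to spot case-folding surprises in downstream queries.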