Schema evolution enables a destination to adapt the schema of its objects (e.g. columns, fields, tables, files, collections) over time without disrupting operations or data integrity. Streamkap Connectors support, where applicable, the following types of schema changes:
| Object | Change | Behavior |
| --- | --- | --- |
| Column | Add | Adds the new column to the end of the destination table. |
| Column | Remove | The column is kept in the destination table but is left empty (NULL values) from the point of change onwards. |
| Column | Data type | Adds a new column. For example, if a column named AGE of type NUMBER is altered to VARCHAR, an additional column named AGE_CHR of type VARCHAR is added. The AGE column is kept intact. |
| Column | Rename | Adds a new column. For example, if a column named FIRST_NAME is renamed to FULL_NAME, an additional column named FULL_NAME is added and populated from the point of change onwards. The FIRST_NAME column is kept intact but is left empty (NULL values). |
| Table | Add | Adding a table is done by modifying your Source Connector and Pipelines. Streamkap automatically backfills/snapshots the newly added table (by default) while streaming data simultaneously. |
| Table | Remove | Keeps the destination table intact but stops capturing change event data for it. |
| Table | Rename | Renaming a table reports an error; add the renamed table to the affected Source Connector and Pipeline(s) to capture it. |
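The column-level behaviors above can be sketched as a small simulation. The class, method names, and the `_CHR` suffix mapping below are illustrative assumptions drawn from the AGE → AGE_CHR example, not Streamkap internals:

```python
# Illustrative simulation of the column-level behaviors described above.
# The class, method names, and the suffix mapping are assumptions for
# demonstration; they are not Streamkap internals.

TYPE_SUFFIX = {"VARCHAR": "_CHR"}  # assumed suffix, based on the AGE -> AGE_CHR example

class DestinationTable:
    def __init__(self, columns):
        self.columns = dict(columns)  # name -> type, in column order

    def add_column(self, name, col_type):
        # "Add": the new column is appended to the end of the table
        self.columns[name] = col_type

    def change_type(self, name, new_type):
        # "Data type": the original column is kept; a suffixed sibling is added
        suffix = TYPE_SUFFIX.get(new_type, "_NEW")
        self.add_column(name + suffix, new_type)

    def rename(self, old, new):
        # "Rename": the original column is kept (NULL onwards); a new column is added
        self.add_column(new, self.columns[old])

t = DestinationTable({"AGE": "NUMBER", "FIRST_NAME": "VARCHAR"})
t.change_type("AGE", "VARCHAR")       # adds AGE_CHR, keeps AGE
t.rename("FIRST_NAME", "FULL_NAME")   # adds FULL_NAME, keeps FIRST_NAME
print(list(t.columns))  # ['AGE', 'FIRST_NAME', 'AGE_CHR', 'FULL_NAME']
```

Note that no behavior ever drops or rewrites an existing destination column; every change is additive, which is what keeps historical data intact.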

How to Configure

Schema evolution is configured per destination connector. The setting is found in the SMT (Single Message Transform) tab of your destination configuration.

Steps

  1. Navigate to Destinations and select your destination connector
  2. Open the Settings panel
  3. Select the SMT tab
  4. Locate the schema.evolution setting
  5. Choose one of the available options:
    • basic (default) — Automatically evolves the destination schema when source schema changes are detected
    • none — Source schema changes are ignored at the destination; no automatic schema modifications are made
  6. Click Save

Option Details

When set to basic, Streamkap automatically adapts the destination schema in response to source changes:
  • New columns are added to the destination table automatically
  • Data type changes are handled by adding a new column with a type suffix (e.g., an AGE column changed from NUMBER to VARCHAR results in a new AGE_CHR column)
  • Renamed columns result in a new column being added; the original column is retained but receives NULL values going forward
  • Removed columns are kept in the destination but receive NULL values from the point of removal
This is the recommended setting for most use cases.
When set to none, Streamkap does not modify the destination schema in response to source changes:
  • New columns in the source are not added to the destination
  • Data type changes, renames, and removals in the source are ignored
  • You must manage the destination schema manually
Use this option when you need strict control over the destination schema, such as when a DBA manages table structures or when you have custom table layouts.
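The difference between the two modes can be sketched as a toy event handler. The function signature and event shape are illustrative assumptions, not Streamkap code:

```python
# Sketch of how the two schema.evolution modes treat a source column the
# destination has not seen before. The function and event shape are
# illustrative assumptions, not Streamkap code.

def apply_event(destination_columns, event_fields, mode):
    """Return the destination column list after processing one change event."""
    columns = list(destination_columns)
    if mode == "basic":
        # basic: any new source field is added to the destination automatically
        for field in event_fields:
            if field not in columns:
                columns.append(field)
    elif mode == "none":
        # none: the destination schema is never modified; new fields are ignored
        pass
    else:
        raise ValueError(f"unknown schema.evolution mode: {mode}")
    return columns

event = ["id", "amount", "notes"]  # "notes" was just added at the source
print(apply_event(["id", "amount"], event, "basic"))  # ['id', 'amount', 'notes']
print(apply_event(["id", "amount"], event, "none"))   # ['id', 'amount']
```

Under `none`, the event data for `notes` still arrives, but there is no destination column to land it in until you add one manually.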

Destination-Specific Notes

Snowflake: Upsert mode required
For Snowflake destinations, schema evolution is only available when the ingestion mode is set to upsert. If you are using append mode, the schema.evolution setting is not applicable. Additionally, the Snowflake role used by the connector must have the OWNERSHIP privilege on the target tables for schema evolution to work. See the Snowflake Schema Evolution Permissions section for details and required SQL grants.
Other destinations
For all other supported destinations, schema evolution is available regardless of the ingestion mode. No additional privileges beyond the standard connector permissions are required.

Verifying Schema Evolution

After configuring schema evolution, verify that changes propagate correctly from source to destination.

Steps to Test

  1. Add a new column at the source database (e.g., ALTER TABLE orders ADD COLUMN notes VARCHAR(255);)
  2. Insert or update a row in that table so the change event includes the new column
  3. Wait for propagation — the new column should appear at the destination after the pipeline processes the change event
  4. Check the destination — confirm the new column exists and has the expected data type
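The test steps above can be sketched end to end with in-memory SQLite playing both source and destination. SQLite is only a stand-in here, and the propagation step is simulated by applying the same ALTER to the destination, which is what basic schema evolution does for you:

```python
import sqlite3

# Stand-in demo of the verification steps: SQLite plays both the source and
# the destination, and "propagation" is simulated by copying the schema change.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
dst.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

# Step 1: add a new column at the source
src.execute("ALTER TABLE orders ADD COLUMN notes VARCHAR(255)")
# Step 2: write a row so the change event carries the new column
src.execute("INSERT INTO orders VALUES (1, 9.99, 'rush order')")

# Step 3 (simulated propagation): basic schema evolution adds the same column
dst.execute("ALTER TABLE orders ADD COLUMN notes VARCHAR(255)")

# Step 4: check the destination for the new column
dst_columns = [row[1] for row in dst.execute("PRAGMA table_info(orders)")]
print(dst_columns)  # ['id', 'amount', 'notes']
```

In a real pipeline, step 3 happens inside the destination connector; you only perform steps 1, 2, and 4.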
The propagation time depends on pipeline lag. Under normal operating conditions with low lag, schema changes typically propagate within seconds to minutes. During high-throughput periods or active snapshots, propagation may take longer.
Schema evolution only applies to new change events. Existing rows that were written before the schema change will not be retroactively updated with new columns. To backfill existing rows with values for the new column, trigger an incremental snapshot of the affected table.
Switching schema evolution modes mid-pipeline
Changing the schema.evolution setting from basic to none (or vice versa) on an active pipeline can cause inconsistencies. For example, switching from basic to none means any future source schema changes will no longer propagate, but previously auto-evolved columns remain in the destination. Switching from none to basic will only evolve the schema for changes detected after the switch — it will not retroactively apply changes that occurred while the mode was none. Test mode changes in a staging environment first. If you need to switch modes on a production pipeline, review the current destination schema to ensure it is consistent before making the change.

Column Name Case Sensitivity

When streaming data between systems, column name casing can differ between the source and destination. A column created as firstName in one database may arrive as FIRSTNAME or firstname in another. Understanding each system’s default behavior helps prevent schema mismatches and failed writes.

How Streamkap Handles Case

Streamkap preserves the original column name casing from the source database through the pipeline. The column name that appears in the source’s change event is carried as-is through Kafka and delivered to the destination connector. However, the destination database may apply its own case-folding rules when creating or matching columns. This means the final column name at the destination depends on how that particular database handles identifiers.

Sources

| Database | Default Behavior | Notes |
| --- | --- | --- |
| PostgreSQL | Lowercase | Folds unquoted identifiers to lowercase. Double-quoted names preserve original casing. |
| MySQL | Varies | Depends on the operating system and the lower_case_table_names server setting. |
| Oracle | UPPERCASE | Folds unquoted identifiers to uppercase. Double-quoted names preserve original casing. |
| SQL Server | Varies | Case behavior depends on the database or column collation setting. |
| MongoDB | Case-sensitive | Field names are preserved exactly as-is. |

Destinations

| Database | Default Behavior | Notes |
| --- | --- | --- |
| Snowflake | UPPERCASE | Folds unquoted identifiers to uppercase. Double-quoted names preserve original casing. |
| ClickHouse | Case-sensitive | Column names are stored and matched exactly as provided. |
| BigQuery | Case-sensitive | Column names are stored and matched exactly as provided. |
| PostgreSQL | Lowercase | Folds unquoted identifiers to lowercase. Double-quoted names preserve original casing. |
| Redshift | Lowercase | Folds all identifiers to lowercase regardless of quoting. |
| S3 / Iceberg | Case-preserving | Column names are preserved as received. Final behavior may depend on the file format (Parquet, Avro, etc.). |
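The default folding rules in the two tables can be modeled as a small lookup. This is a toy model of the unquoted defaults only; real behavior also depends on quoting, collation, and server settings, and the function name is ours:

```python
# Toy model of the default identifier folding described in the tables above.
# It covers only the unquoted defaults; quoting, collation, and server
# settings (e.g. MySQL's lower_case_table_names) are ignored.

FOLDING = {
    "postgresql": str.lower,    # unquoted identifiers fold to lowercase
    "oracle": str.upper,        # unquoted identifiers fold to uppercase
    "snowflake": str.upper,     # unquoted identifiers fold to uppercase
    "redshift": str.lower,      # all identifiers fold to lowercase
    "clickhouse": lambda s: s,  # case-sensitive: stored as provided
    "bigquery": lambda s: s,    # case-sensitive: stored as provided
    "mongodb": lambda s: s,     # field names preserved exactly
}

def folded_name(name, system):
    """Return the identifier as the given system would store it by default."""
    return FOLDING[system](name)

# A name folds once at the source, then again at the destination:
print(folded_name(folded_name("firstName", "postgresql"), "snowflake"))  # FIRSTNAME
print(folded_name(folded_name("userEmail", "mongodb"), "bigquery"))      # userEmail
```

Applying the source fold and then the destination fold reproduces the end-to-end outcomes in the scenarios that follow.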

Case Handling Scenarios

The following examples illustrate how column names flow from specific sources to specific destinations.
PostgreSQL folds unquoted identifiers to lowercase. A column created as firstName (unquoted) is stored as firstname. When Streamkap streams this to Snowflake, the change event carries the name firstname. Snowflake then folds unquoted identifiers to uppercase, so the column is created as FIRSTNAME in the destination table.
  • Source column: firstname (PostgreSQL lowercase)
  • Destination column: FIRSTNAME (Snowflake uppercase)
If you need to preserve the original lowercase name in Snowflake, use a RenameFields transform or configure Snowflake to use quoted identifiers.
MySQL’s case sensitivity depends on the operating system and the lower_case_table_names setting. If MySQL stores a column as CustomerName, the change event carries that exact casing. When Streamkap delivers this to PostgreSQL, PostgreSQL folds unquoted identifiers to lowercase, so the column is created as customername.
  • Source column: CustomerName (MySQL mixed-case)
  • Destination column: customername (PostgreSQL lowercase)
The data flows correctly, but the casing changes at the destination. Queries against the PostgreSQL destination must use the lowercase name.
MongoDB preserves field names exactly as-is, and BigQuery also stores column names exactly as provided. A field named userEmail in MongoDB arrives as userEmail in BigQuery with no case transformation.
  • Source field: userEmail (MongoDB exact)
  • Destination column: userEmail (BigQuery exact)
This is the simplest scenario — both systems are case-sensitive and case-preserving, so names pass through unchanged.
If you need to normalize column names before they reach the destination (for example, converting camelCase to snake_case, or ensuring all columns are lowercase), you can use the RenameFields destination-side transform. See Transform Examples for a concrete before-and-after example.
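As an illustration of the kind of normalization such a transform can perform, a generic camelCase-to-snake_case conversion might look like the following. This is a sketch of the naming rule, not the RenameFields implementation itself:

```python
import re

# Generic camelCase -> snake_case conversion, illustrating the kind of
# renaming a destination-side transform could apply. This is not the
# RenameFields implementation itself.
def to_snake_case(name):
    # Insert an underscore before each uppercase letter that follows a
    # lowercase letter or digit, then lowercase the whole name.
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

print(to_snake_case("userEmail"))     # user_email
print(to_snake_case("CustomerName"))  # customer_name
```

Normalizing to lowercase snake_case at the transform stage sidesteps the per-database folding rules entirely, since a name like user_email survives lowercase folding, uppercase folding aside, and case-preserving systems identically apart from case.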
To avoid case sensitivity issues in your CDC pipelines, keep column naming conventions consistent across your source and destination. If your source uses mixed-case or uppercase identifiers (e.g., Oracle), verify that your destination handles them as expected. When in doubt, use lowercase column names without special quoting, as this is the most widely compatible convention.