
Prerequisites

  • MySQL version ≥ 5.7
  • MySQL binlog enabled
  • Streamkap user and role
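
To confirm the binlog prerequisite before proceeding, you can check the relevant server variables (a quick sanity check using standard MySQL statements; on Cloud SQL, the binlog is enabled when point-in-time recovery is on):

-- Should return ON; if OFF, binary logging is not enabled
SHOW VARIABLES LIKE 'log_bin';

-- CDC requires row-based logging (the default from MySQL 5.7.7 onward)
SHOW VARIABLES LIKE 'binlog_format';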

Granting Privileges

It’s recommended to create a separate user and role for Streamkap to access your MySQL database. Below is an example script that does that.
-- Replace { ... } placeholders as required

-- Identify version
SHOW VARIABLES LIKE 'version';

-- On MySQL version 5.7 to 8.0
CREATE USER 'streamkap_user'@'%' IDENTIFIED BY '{password}';

-- On MySQL version 8.0+
CREATE USER 'streamkap_user'@'%' IDENTIFIED WITH mysql_native_password BY '{password}';

-- Grant Permissions
GRANT REPLICATION CLIENT, RELOAD, SHOW DATABASES, REPLICATION SLAVE ON *.* TO 'streamkap_user'@'%';

-- Grant Select on all schemas needed
GRANT SELECT ON {schema}.* TO 'streamkap_user'@'%';
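
After running the script, you can verify that the privileges took effect (a quick check using the standard SHOW GRANTS statement):

-- List the privileges granted to the Streamkap user
SHOW GRANTS FOR 'streamkap_user'@'%';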

Enable Snapshots

You can perform ad-hoc snapshots of all or some of your tables in the Streamkap app. See Snapshots & Backfilling for more information. This feature is available without any additional configuration because ‘GTID-based replication’ is enabled by default - and cannot be disabled - for MySQL Cloud SQL instances. See About replication in Cloud SQL for more information.
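
If you want to confirm GTID-based replication on your instance, the mode can be inspected with a standard MySQL variable (on Cloud SQL it should report ON):

-- Should return ON for Cloud SQL instances
SHOW VARIABLES LIKE 'gtid_mode';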

Heartbeats

Heartbeats ensure the Connector stays active and continues capturing changes, which is especially important for low-traffic or intermittent databases. Connectors use “offsets”, like bookmarks, to track their position in the database’s log or change stream. When no changes occur for long periods, these offsets may become outdated, and the Connector might lose its place or stop capturing changes. There are two layers of heartbeat protection:

Layer 1: Connector heartbeats (enabled by default)

The Connector periodically emits heartbeat messages to an internal topic, even when no actual data changes are detected. This keeps offsets fresh and prevents staleness. No configuration is necessary for this layer; it is automatically enabled.

Layer 2: Source database heartbeats (requires configuration)

For read-write connections (when Read only is No during Streamkap Setup, if applicable), you can configure the Connector to update a dedicated heartbeat table in the source database at regular intervals. This simulates activity, ensuring change events are generated consistently, maintaining log progress and preventing staleness. It’s especially useful for databases like PostgreSQL, where lack of regular changes can cause the write-ahead log (WAL) to grow rapidly during inactivity. This layer requires you to set up a heartbeat table in your source database.

Configure source database heartbeats

-- Create the streamkap schema
CREATE SCHEMA IF NOT EXISTS streamkap;

-- Switch to the streamkap schema
USE streamkap;

-- Create the heartbeat table with id, text, and last_update fields
CREATE TABLE streamkap_heartbeat (
    id INT AUTO_INCREMENT PRIMARY KEY,
    text TEXT,
    last_update TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Grant permission to the Streamkap user
GRANT SELECT, INSERT, UPDATE ON streamkap.streamkap_heartbeat TO 'streamkap_user'@'%';

-- Insert the first row into the heartbeat table
INSERT INTO streamkap_heartbeat (text) VALUES ('test_heartbeat');
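
To verify the grants work end to end, you can connect as streamkap_user and issue the same kind of update the Connector performs (illustrative only; the Connector manages the real heartbeat cadence itself):

-- Run as streamkap_user: simulate one heartbeat update
UPDATE streamkap.streamkap_heartbeat
SET text = 'test_heartbeat', last_update = CURRENT_TIMESTAMP
WHERE id = 1;

-- Confirm the timestamp advanced
SELECT id, text, last_update FROM streamkap.streamkap_heartbeat;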

Configure binary logging

If you are using a read replica, you must enable binary logging on the read replica:
  1. In the Google Cloud console, go to the Cloud SQL Instances page.
  2. Open the more actions menu for the instance on which you want to enable point-in-time recovery and click Edit.
  3. Under Customize your instance, expand the Data Protection section.
  4. Select the Enable point-in-time recovery checkbox.
  5. Expand Advanced options.
  6. Enter the number of days to retain logs, from 3 to 7. We recommend 7 days.
  7. Click Save.
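
Once saved, you can confirm binary logging is active on the replica (standard MySQL statements; the log file names shown will differ per instance):

-- Should return ON once point-in-time recovery is enabled
SHOW VARIABLES LIKE 'log_bin';

-- Lists the binlog files currently retained
SHOW BINARY LOGS;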

Consider Access Restrictions

Setup MySQL Connector in Streamkap

  • Go to Sources and click Create New
  • Input
    • Name for your Connector
    • Hostname
    • Port (Default 3306)
    • Username (Username you chose earlier, our scripts use streamkap_user)
    • Password
    • Heartbeat - Crucial for low and intermittent traffic databases. Enabled by default.
    • Connection Timezone - The timezone of your database
    • Timezone conversion - MySQL converts TIMESTAMP values from the current time zone to UTC for storage, and back from UTC to the current time zone for retrieval. By default, the current time zone for each connection is the database server’s time zone, but this option allows you to override that. As long as the time zones remain the same, you get back the same value you store. We recommend using the default SERVER option, which attempts to detect the session time zone from the MySQL server session variables ‘time_zone’ or ‘system_time_zone’. It also reduces the chance of problems with daylight savings adjustments (‘fall back’ and ‘spring forward’). If either time zone changes, an ad-hoc snapshot is recommended so your source and destination timestamps are consistent.
    • Connect via SSH Tunnel. See SSH Tunnel
    • Advanced Parameters
      • Represent Binary Data As (Default bytes)
    • Add Schemas/Tables. Can also bulk upload here. The format is a simple list of each schema or table per row saved in csv format without a header.
CDC only captures base tables, not Views. Change Data Capture reads the MySQL binary log (binlog), which only records changes to physical tables. Database Views are query-time computations with no physical storage, so they don’t generate binlog entries.
What you cannot capture: Views, temporary tables, MEMORY tables, BLACKHOLE tables, or system tables (information_schema, performance_schema).
Solution: Specify only the underlying base tables that feed your views. You can recreate the view logic in your destination or transformation layer.
  • Click Save. The connector will take approximately 1 minute to start processing data.
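
For the bulk upload mentioned above, the file is a simple list with one schema or table per row, saved as CSV with no header. A hypothetical example (the schema and table names below are placeholders):

inventory
sales.orders
sales.customers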