Prerequisites
- MySQL version ≥ 5.7
- MySQL binlog enabled
- Streamkap user and role
Granting Privileges
It’s recommended to create a separate user and role for Streamkap to access your MySQL database. Below is an example script that does that.
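As a hedged sketch of such a script: the username `streamkap_user` matches the one referenced later in this guide, the password is a placeholder you should replace, and the privilege list follows the usual requirements for log-based change data capture from MySQL — confirm the exact list against Streamkap's official script.

```sql
-- Illustrative sketch, not Streamkap's official script.
-- Replace {password} with a strong password of your own.
CREATE USER 'streamkap_user'@'%' IDENTIFIED BY '{password}';

-- Read access plus the replication privileges typically needed
-- to read the binary log for change data capture
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
  ON *.* TO 'streamkap_user'@'%';

FLUSH PRIVILEGES;
```

Note that MySQL roles require version 8.0 or later; on 5.7, granting privileges directly to the user, as above, is the equivalent approach.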
Enable Snapshots
You can perform ad-hoc snapshots of all or some of your tables in the Streamkap app. See Snapshots & Backfilling for more information. This feature is available without any additional configuration because GTID-based replication is enabled by default, and cannot be disabled, for MySQL Cloud SQL instances. See About replication in Cloud SQL for more information.
Heartbeats
Heartbeats ensure the Connector stays active and continues capturing changes, which is especially important for low-traffic or intermittent databases. Connectors use "offsets", like bookmarks, to track their position in the database's log or change stream. When no changes occur for long periods, these offsets may become outdated, and the Connector might lose its place or stop capturing changes.

There are two layers of heartbeat protection:

Layer 1: Connector heartbeats (enabled by default)
The Connector periodically emits heartbeat messages to an internal topic, even when no actual data changes are detected. This keeps offsets fresh and prevents staleness. No configuration is necessary for this layer; it is enabled automatically.

Layer 2: Source database heartbeats (requires configuration)
For read-write connections (when Read only is No during Streamkap Setup, if applicable), you can configure the Connector to update a dedicated heartbeat table in the source database at regular intervals. This simulates activity, ensuring change events are generated consistently, maintaining log progress and preventing staleness. It's especially useful for databases like PostgreSQL, where a lack of regular changes can cause the write-ahead log (WAL) to grow rapidly during inactivity. This layer requires you to set up a heartbeat table in your source database.

Configure source database heartbeats
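As a hedged sketch of what that setup might look like — the `streamkap` schema, table name, and columns below are illustrative assumptions, not a schema Streamkap mandates, so match whatever your Connector's heartbeat configuration expects:

```sql
-- Illustrative heartbeat table the Connector can update periodically
CREATE DATABASE IF NOT EXISTS streamkap;

CREATE TABLE IF NOT EXISTS streamkap.heartbeat (
  id INT PRIMARY KEY,
  last_heartbeat TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP
);

-- Seed a single row for the Connector to update
INSERT INTO streamkap.heartbeat (id) VALUES (1);

-- Allow the Streamkap user to write heartbeats
GRANT INSERT, UPDATE ON streamkap.heartbeat TO 'streamkap_user'@'%';
```

Each periodic update to this row produces a change event in the binary log, which is what keeps the Connector's offsets moving during quiet periods.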
Configure binary logging
If you are using a read replica, you must enable binary logging on the read replica itself.
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance you want to enable point-in-time recovery on and click Edit.
- Under Customize your instance, expand the Data Protection section.
- Select the Enable point-in-time recovery checkbox.
- Expand Advanced options.
- Enter the number of days to retain logs, from 3 to 7. We recommend 7 days.
- Click Save.
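After the instance applies the change, you can optionally confirm from a MySQL client that logging is active. A hedged check — the expected values noted in the comments reflect typical Cloud SQL defaults and may vary by version:

```sql
-- Verify binary logging and replication settings
SHOW VARIABLES LIKE 'log_bin';        -- expect ON once point-in-time recovery is enabled
SHOW VARIABLES LIKE 'binlog_format';  -- ROW is required for change data capture
SHOW VARIABLES LIKE 'gtid_mode';      -- ON by default on Cloud SQL
```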
Consider Access Restrictions
- Visit Connection Options to ensure Streamkap can reach your database
Set up the MySQL Connector in Streamkap
- Go to Sources and click Create New
- Input
  - Name for your Connector
  - Hostname
  - Port (default 3306)
  - Username (the username you chose earlier; our scripts use streamkap_user)
  - Password
  - Heartbeat - Crucial for low and intermittent traffic databases. Enabled by default. See Heartbeats for setup instructions.
  - Connection Timezone - The timezone of your database
  Timezone conversion: MySQL converts TIMESTAMP values from the current time zone to UTC for storage, and back from UTC to the current time zone for retrieval. By default, the current time zone for each connection is the database server's time zone, but this option allows you to override that. As long as the time zones remain the same, you get back the same value you store.
  We recommend the default SERVER option, which attempts to detect the session time zone from the MySQL server session variables time_zone and system_time_zone. It also reduces the chance of problems with daylight saving adjustments ('fall back' and 'spring forward').
  If either time zone changes, an ad-hoc snapshot is recommended so your source and destination timestamps remain consistent.
- Connect via SSH Tunnel. See SSH Tunnel
- Advanced Parameters
  - Represent Binary Data As (default bytes)
- Add Schemas/Tables. You can also bulk upload here. The format is a simple list of one schema or table per row, saved in CSV format without a header.
- Click Save. The connector will take approximately 1 minute to start processing data.
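For reference, the time zone values that the Connection Timezone SERVER option relies on can be inspected directly from a MySQL client. A small illustrative sketch — the returned values depend on your instance's configuration, and CONVERT_TZ with named zones requires the MySQL time zone tables to be loaded, so numeric offsets are used here:

```sql
-- Inspect the variables the SERVER option attempts to detect
SELECT @@global.time_zone, @@session.time_zone, @@system_time_zone;

-- Illustrate the storage/retrieval conversion described above:
-- 12:00 at UTC+05:00 converts to 07:00 UTC
SELECT CONVERT_TZ('2024-01-01 12:00:00', '+05:00', '+00:00');
```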