
Upgrade a Greengage DB cluster

Andrey Aksenov

This topic describes how to upgrade a Greengage DB cluster from one release to a newer one.

Overview

Greengage DB versions follow a three-part format: <major>.<minor>.<patch> — for example, 6.29.2 or 7.4.1:

  • major — incremented for system catalog changes, incompatible changes, or significant new features. Deprecated functionality may be removed in a major release.

  • minor — incremented when backward-compatible features are added or functionality is deprecated. Deprecated features are not removed in minor releases.

  • patch — incremented for backward-compatible bug fixes within a minor release.

Minor and patch releases never change the internal storage format, so they are always compatible with other releases of the same major version. For example, 6.29.2 is compatible with 6.28.0, and 7.4.1 is compatible with 7.2.0. To upgrade between compatible versions, replace the executables while the cluster is stopped, then restart.
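
The compatibility rule above can be sketched in shell: two versions allow a swap-and-restart upgrade only when their major parts match. The `same_major` helper below is illustrative, not a Greengage utility:

```shell
# Illustrative helper, not a Greengage utility: two versions are
# storage-compatible when their major parts (text before the first dot) match.
same_major() {
  [ "${1%%.*}" = "${2%%.*}" ]
}

if same_major "6.29.2" "6.28.0"; then
  echo "same major: swap executables and restart"
else
  echo "different major: dump and restore with gpbackup/gprestore"
fi
```

For `6.29.2` and `6.28.0` this reports the swap-and-restart path; for `7.4.1` against `6.29.2` it would report the dump-and-restore path.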

Major releases may change the internal storage format. To upgrade to a new major version, dump data from the old cluster and restore it into the new one using the gpbackup and gprestore utilities.

Before completing a major upgrade, test client applications against the new version. Running parallel installations of the old and new versions helps with validation and rollback.

When planning a major upgrade, review changes in these areas:

Administration

Monitoring and management capabilities may change or be enhanced in each major release.

SQL

Changes may include new SQL features, extensions, or modifications to existing behavior. For an example of SQL-level breaking changes, see SQL incompatibilities between Greengage DB 6 and 7.

Library API

Client libraries such as libpq may introduce new functionality or changes. Compatibility is generally maintained unless otherwise noted.

System catalogs

System catalog changes affect the internal metadata structure of the database. These changes can impact administrative tools, monitoring systems, or custom queries that access catalog tables directly.

Server C-language API

Changes to backend function interfaces written in C may impact extensions or integrations that rely on internal server APIs.

Prepare to upgrade

To upgrade between major versions, dump data from the old cluster and restore it into the new one using gpbackup and gprestore.

Before you begin:

  • Review breaking changes.

    Read the release notes for every major version between your current version and the target version. Pay attention to removed features, changed defaults, and SQL or catalog incompatibilities.

  • Prepare the new cluster.

    Install and initialize a new Greengage DB cluster of the target version.

  • Install gpbackup and gprestore.

    The utilities must be installed on the master/coordinator hosts of both the old and new clusters.

Migrate data

The following steps walk through data migration using gpbackup and gprestore. The example assumes that both clusters have the same layout: one master/coordinator and two segment hosts, each with two primary segments.

Step 1. Back up the source database

Run gpbackup on the master/coordinator of the old cluster to create a backup of the required database, for example:

$ gpbackup \
  --dbname marketplace \
  --backup-dir /home/gpadmin \
  --single-backup-dir \
  --without-globals

  • --dbname specifies the name of the database to back up.

  • --backup-dir specifies the directory for creating backup files.

  • --single-backup-dir stores all backup files for each host in a single directory, instead of creating separate directories for each segment.

  • --without-globals excludes global objects such as roles, resource groups, and tablespaces. This option can be useful if there are incompatible changes in global object definitions between releases.

When the backup completes successfully, the output includes the following message:

[INFO]:-Backup completed successfully

The output also contains the backup timestamp, which uniquely identifies the backup and must be specified in restore commands:

[INFO]:-Backup Timestamp = 20260401143034
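
If you script the upgrade, the timestamp can be captured from the gpbackup output rather than copied by hand. A minimal sketch, with the log line inlined for illustration (in practice you would read it from the utility's saved output):

```shell
# The timestamp line from the gpbackup output is inlined here for
# illustration; in practice, read it from the utility's saved log.
line='[INFO]:-Backup Timestamp = 20260401143034'
ts=$(printf '%s\n' "$line" | sed -n 's/.*Backup Timestamp = \([0-9]\{14\}\).*/\1/p')
echo "$ts"  # prints 20260401143034
```

The captured value can then be passed to gprestore via --timestamp in a later step.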

Step 2. Verify the backup files

gpbackup writes metadata and configuration files to the master/coordinator host, and data files to each segment host. Before proceeding, verify that all expected files have been created.

The following example shows the backup file layout for a cluster with one master and two segment hosts, each hosting two primary segments:

Master/coordinator host:

backups/
`-- 20260401
    `-- 20260401143034
        |-- gpbackup_20260401143034_config.yaml
        |-- gpbackup_20260401143034_metadata.sql
        |-- gpbackup_20260401143034_report
        `-- gpbackup_20260401143034_toc.yaml

First segment host (primary segments 0 and 1):

backups/
`-- 20260401
    `-- 20260401143034
        |-- gpbackup_0_20260401143034_16385.gz
        |-- gpbackup_0_20260401143034_16438.gz
        |-- gpbackup_0_20260401143034_16446.gz
        |-- gpbackup_1_20260401143034_16385.gz
        |-- gpbackup_1_20260401143034_16438.gz
        `-- gpbackup_1_20260401143034_16446.gz

Second segment host (primary segments 2 and 3):

backups/
`-- 20260401
    `-- 20260401143034
        |-- gpbackup_2_20260401143034_16385.gz
        |-- gpbackup_2_20260401143034_16438.gz
        |-- gpbackup_2_20260401143034_16446.gz
        |-- gpbackup_3_20260401143034_16385.gz
        |-- gpbackup_3_20260401143034_16438.gz
        `-- gpbackup_3_20260401143034_16446.gz
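
One way to verify the layout is to count the data files per segment: with one data file per table per primary segment, every segment should contribute the same number of .gz files (three in this example). A sketch assuming the example paths above:

```shell
# Count data files per primary segment. With three tables in the example,
# each of the four segments (content IDs 0-3) should contribute three
# gpbackup_<contentID>_<timestamp>_<oid>.gz files.
BACKUP_DIR=${BACKUP_DIR:-/home/gpadmin/backups/20260401/20260401143034}
TS=20260401143034

for seg in 0 1 2 3; do
  count=$(ls "$BACKUP_DIR"/gpbackup_"$seg"_"$TS"_*.gz 2>/dev/null | wc -l | tr -d ' ')
  echo "segment $seg: $count data files"
done
```

Run the check on each segment host; a count of 0 for a segment that host owns indicates missing files.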

Step 3. Copy backup files to the new cluster

Copy the backups directories to the corresponding hosts in the new cluster, preserving the directory structure. Each host in the new cluster must have the backup files in the same path used by gpbackup — in this example, /home/gpadmin.

If necessary, adjust file ownership so that gpadmin can access the files:

$ sudo chown -R gpadmin:gpadmin /home/gpadmin/backups
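
For example, the copy can be done with scp, run once on each old-cluster host toward its counterpart in the new cluster. The host name below is a placeholder, and echo previews the command rather than executing it:

```shell
# Placeholder: NEW_HOST is this host's counterpart in the new cluster
# (for example, sdw1 -> sdw1-new). echo previews the command; remove it
# to actually copy.
NEW_HOST=sdw1-new
echo scp -r /home/gpadmin/backups "gpadmin@$NEW_HOST:/home/gpadmin/"
```

Repeat for the master/coordinator and each segment host, so that every host's own backups directory lands on its matching new host.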

Step 4. Create the target database

On the master/coordinator of the target cluster, create a database with the same name as the source database:

$ createdb marketplace

Step 5. Restore data to the new cluster

Run gprestore on the master/coordinator of the new cluster, using the timestamp from step 1:

$ gprestore \
  --backup-dir /home/gpadmin \
  --timestamp 20260401143034

--backup-dir specifies the path to the directory that contains the backup files.

After a successful restore, the output includes the following message:

[INFO]:-Restore completed successfully

Complete migration

If you skipped any tables during the restore, migrate them using other methods — for example, export them with COPY TO and load the resulting files into the new cluster with COPY FROM.
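
As an illustrative sketch, a skipped table could be round-tripped with psql's client-side \copy meta-command. The table name orders is hypothetical, and the commands are printed here (via a quoted heredoc) rather than executed:

```shell
# Hypothetical table "orders": run the export on the old cluster's
# master/coordinator, copy the file over, then run the import on the new
# one. The commands are printed, not executed, to keep the sketch
# side-effect free.
cat <<'EOF'
psql -d marketplace -c "\copy orders TO '/home/gpadmin/orders.csv' WITH (FORMAT csv)"
psql -d marketplace -c "\copy orders FROM '/home/gpadmin/orders.csv' WITH (FORMAT csv)"
EOF
```

Unlike server-side COPY, \copy reads and writes files on the client host, so no superuser file access is needed on the server.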

Recreate any objects you dropped to enable the migration, such as external tables, indexes, and user-defined functions.

After all data is migrated, update SQL scripts, administration scripts, and user-defined functions as needed to reflect any changes in the new version. For an example of SQL-level breaking changes, see SQL incompatibilities between Greengage DB 6 and 7.