Use resource groups
Overview
Resource groups in Greengage DB (based on Greenplum) define limits on system resources available to users and external components. Without such limits, a single heavy OLAP query can consume a significant share of CPU and memory, disrupting service for other clients. In typical OLTP workloads, where transactions are short, resource limits are less critical but can still improve system stability.
Resource groups manage:
- CPU usage (cores or percentage);
- memory usage (local and shared);
- number of concurrent transactions.
By enforcing these constraints, resource groups enable workload isolation, ensuring that different workloads can share system resources predictably.
By default, Greengage DB uses resource queues for resource management. To use resource groups instead, switch to them in the cluster configuration as described in the Enable resource groups section.
Resource groups for roles and external components
A resource group can apply either to roles or to external components, such as PL/Container instances.
- Resource groups for roles limit the total resources used by all sessions of the assigned roles and define how Greengage DB shares them between concurrent transactions. For example, if a resource group limits the number of concurrent transactions, that limit applies collectively to all users that have this group assigned.
- Resource groups for external components limit the total CPU and memory resources that all active instances of the assigned external components can use simultaneously. When used with external components, resource groups don’t manage resources within Greengage DB. Instead, they create isolated environments for external component execution. Resource management must be implemented in the external component itself.
Transaction queues
When a resource group limit is reached, new transactions in this group are placed in a transaction queue, waiting until sufficient resources are available. Queued transactions are processed in first-in, first-out (FIFO) order. If a resource-intensive transaction is queued and a lighter transaction arrives, the new transaction will still wait behind those already queued, even if resources are available for its execution.
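The FIFO queuing behavior can be sketched with a toy model. This is an illustration only, not Greengage DB internals; the class, slot counts, and transaction names are all hypothetical:

```python
from collections import deque

class GroupQueue:
    """Hypothetical sketch of FIFO transaction queuing in a resource group."""

    def __init__(self, concurrency):
        self.concurrency = concurrency   # CONCURRENCY-style slot limit
        self.running = set()
        self.queue = deque()             # strict FIFO: no reordering by size

    def submit(self, txn):
        if len(self.running) < self.concurrency and not self.queue:
            self.running.add(txn)        # free slot and empty queue: run now
        else:
            self.queue.append(txn)       # otherwise wait behind earlier arrivals

    def finish(self, txn):
        self.running.discard(txn)
        if self.queue:
            # The oldest queued transaction runs next, even if a lighter
            # one arrived later.
            self.running.add(self.queue.popleft())

g = GroupQueue(concurrency=1)
g.submit("heavy")   # occupies the only slot
g.submit("big")     # queued
g.submit("light")   # queued behind "big", despite being lighter
g.finish("heavy")
print(sorted(g.running))  # → ['big']
```

The key point the sketch shows: `light` cannot overtake `big` even if enough resources become available for it.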
By default, transactions wait in queues indefinitely. You can configure a timeout after which queued transactions are canceled using the `gp_resource_group_queuing_timeout` configuration parameter.
Alternatively, you can cancel queued transactions manually or move them between resource groups as described in the Manage transactions in resource groups section.
Greengage DB uses database statistics to calculate expected resource usage, so keeping statistics up to date ensures efficient queuing and processing.
Enable resource groups
By default, Greengage DB uses resource queues for resource management. To start using resource groups instead, ensure the environment is correctly prepared and update the cluster configuration.
Greengage DB resource groups rely on cgroup v1 for enforcing CPU and memory limits. You must ensure that cgroup v1 is configured and active on all cluster hosts before enabling resource groups. To check the current cgroup version, run:
$ stat -fc %T /sys/fs/cgroup/
The output can be one of the following:
- `tmpfs` — cgroup v1 is active. No further action is required.
- `cgroup2fs` — cgroup v2 is active. You must switch to cgroup v1 to use resource groups.
The following is an example procedure for configuring cgroup v1. For production environments, adapt the configuration to your specific hardware, OS, and operational requirements.
- Open the /etc/default/grub file with root privileges:

  $ sudo vi /etc/default/grub

- Locate the line starting with `GRUB_CMDLINE_LINUX_DEFAULT` and add `systemd.unified_cgroup_hierarchy=0` to the existing options in its value. This kernel boot parameter forces the system to use cgroup v1. The updated line should look similar to this:

  GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 biosdevname=0 console=ttyS0 systemd.unified_cgroup_hierarchy=0"
- Update the GRUB bootloader configuration:

  $ sudo update-grub

- Reboot the system to apply the changes.
- Ensure that the server has the `libcgroup-tools` (on Red Hat and CentOS) or `cgroup-tools` (on Ubuntu) package installed. For example, on Ubuntu:

  $ sudo apt-get install cgroup-tools
- Create the cgroup configuration file and open it for editing:

  $ sudo vi /etc/cgconfig.conf

- Insert the following configuration and save the file:

  group gpdb {
      perm {
          task {
              uid = gpadmin;
              gid = gpadmin;
          }
          admin {
              uid = gpadmin;
              gid = gpadmin;
          }
      }
      cpu {}
      cpuacct {}
      cpuset {}
      memory {}
  }

  This configuration grants the `gpadmin` user permissions to manage CPU and memory resources on the host.

- Start the cgroups service:

  $ sudo cgconfigparser -l /etc/cgconfig.conf

  In this case, you need to start the cgroups service manually after each system startup. To automate this on boot, consider adding it to a systemd service or your system’s init routines.
Once cgroup v1 is configured, enable resource groups in Greengage DB by setting the `gp_resource_manager` configuration parameter to `group`:

$ gpconfig -c gp_resource_manager -v "group"
Then restart the cluster:
$ gpstop -r
Default resource groups
Two resource groups exist in Greengage DB by default:

- `admin_group` — limits resources for superuser activities. It is automatically assigned to all superusers.
- `default_group` — used by default for regular users.

The default resource limits of `admin_group` and `default_group` are provided in the table below.
| Attribute | admin_group | default_group |
|---|---|---|
| CONCURRENCY | 10 | 20 |
| CPU_RATE_LIMIT | 10 | 30 |
| CPUSET | -1 | -1 |
| MEMORY_LIMIT | 10 | 0 |
| MEMORY_SHARED_QUOTA | 80 | 80 |
| MEMORY_SPILL_RATIO | 0 | 0 |
| MEMORY_AUDITOR | vmtracker | vmtracker |
For detailed descriptions of resource group attributes, see the Attributes and limits section.
You can modify the default limits of `admin_group` and `default_group` using the `ALTER RESOURCE GROUP` command.
Manage resource groups
Use the following SQL commands to manage resource groups in Greengage DB:

- `CREATE RESOURCE GROUP`
- `ALTER RESOURCE GROUP`
- `DROP RESOURCE GROUP`
Create a resource group
The `CREATE RESOURCE GROUP` command creates a resource group with specified attributes:
CREATE RESOURCE GROUP <group_name> WITH (<group_attribute>=<value> [, ... ])
The following attributes are available:

- `CPU_RATE_LIMIT` — the maximum percentage of the server’s CPU available to the group.
- `CPUSET` — specific CPU cores reserved exclusively for the group.

  NOTE: `CPU_RATE_LIMIT` and `CPUSET` are mutually exclusive. Each resource group must have exactly one of them defined.

- `MEMORY_LIMIT` — the maximum percentage of system memory available to the group. Mandatory for resource groups assigned to external components.
- `CONCURRENCY` — the maximum number of concurrent transactions allowed in the group. This includes both active and queued transactions. In resource groups for external components, `CONCURRENCY` must be `0`.
- `MEMORY_SHARED_QUOTA` — the percentage of the group’s memory that can be shared among transactions.
- `MEMORY_SPILL_RATIO` — the memory usage threshold for memory-intensive queries. When a transaction reaches this threshold, spill files are created.
- `MEMORY_AUDITOR` — the memory auditor to use in the group:
  - `vmtracker` (default) — used in resource groups for roles.
  - `cgroup` — used in resource groups for external components.

All attributes define limits that apply collectively to all concurrent sessions that use the group.

For resource groups for roles, either `CPU_RATE_LIMIT` or `CPUSET` is required.
Example:
CREATE RESOURCE GROUP rg_analysts WITH (
CPU_RATE_LIMIT = 20,
MEMORY_LIMIT = 20,
CONCURRENCY = 30
);
For resource groups for external components, you must specify either `CPU_RATE_LIMIT` or `CPUSET`, and also `MEMORY_LIMIT`. The `MEMORY_AUDITOR` attribute must be set to `cgroup`, and `CONCURRENCY` must be `0`:
CREATE RESOURCE GROUP rg_plcontainer WITH (
CPUSET = '1;1',
MEMORY_LIMIT = 20,
CONCURRENCY = 0,
MEMORY_AUDITOR = cgroup
);
Resource groups can’t be created in transactions.
Alter a resource group
To change a resource group’s limits, use the `ALTER RESOURCE GROUP` command:
ALTER RESOURCE GROUP <group_name> SET <group_attribute> <value>;
Each `ALTER RESOURCE GROUP` statement can modify only one attribute at a time.
Example:
ALTER RESOURCE GROUP rg_analysts SET CONCURRENCY 10;
New limits are applied differently depending on the attribute and the resource group’s target (role or external component):

- CPU limits — changes are applied immediately, including switching the CPU resource allocation mode between `CPUSET` and `CPU_RATE_LIMIT`.
- Memory limits:
  - In resource groups for roles, new memory limits apply immediately if the current usage does not exceed the new limit. Otherwise, Greengage DB waits until the usage drops below the new limit.
  - In resource groups for external components, increasing memory limits usually takes effect immediately if resources are available. When the limit is decreased, the behavior depends on the specific component. For example, PL/Container may terminate running containers with an out-of-memory error if memory is reduced significantly.
- The `MEMORY_AUDITOR` attribute cannot be changed.

The built-in `admin_group` and `default_group` can be modified in the same way as user-defined resource groups.
Drop a resource group
To delete a resource group, use the `DROP RESOURCE GROUP` command:
DROP RESOURCE GROUP <group_name>
A resource group can be dropped only when it is not in use. For resource groups assigned to roles, this means that:

- The group is not assigned to any roles.
- No running or queued transactions are using the group.

To delete a resource group for roles:

- Terminate all transactions that are using the resource group with `pg_cancel_backend()`. See Cancel a transaction in a resource group to learn how to do this.
- Unassign the resource group from its users by reassigning them to another group or setting their resource group to `NONE`.
- Drop the resource group.
Example:
ALTER ROLE bob RESOURCE GROUP NONE;
DROP RESOURCE GROUP rg_analysts;
When deleting a resource group for external components, the behavior depends on the component implementation. For PL/Container, dropping a component’s resource group immediately stops all containers running within it.
Assign resource groups
To apply limits defined by a resource group, assign it to the relevant targets: roles or external components.
Assign resource groups to roles
To assign a resource group to a role, use `ALTER ROLE` with the `RESOURCE GROUP` clause:
ALTER ROLE alice RESOURCE GROUP rg_analysts;
You can also assign a resource group at role creation using the `CREATE ROLE` command:
CREATE ROLE bob RESOURCE GROUP rg_analysts;
Each role can have one resource group at a time, while a resource group can be shared by multiple roles.
For effective cluster resource management, it is generally recommended to assign user-defined resource groups to all user roles in the database.
Unassign resource groups from roles
To remove a resource group assignment, use `ALTER ROLE` with `RESOURCE GROUP NONE`. This reverts the role to the default group: `admin_group` for superusers and `default_group` for regular users.
ALTER ROLE bob RESOURCE GROUP NONE;
Assign resource groups to external components
The way resource groups are assigned to external components depends on the specific component.
For example, for PL/Container, the resource group is specified using the `resource_group_id` parameter during runtime configuration.
Check resource groups
Greengage DB provides system catalog views for monitoring resource groups and their current utilization across the cluster.
View resource groups
Use the `gp_toolkit.gp_resgroup_config` view to display all resource groups and their configured limits:
SELECT * FROM gp_toolkit.gp_resgroup_config;
Example output:
 groupid |   groupname    | concurrency | cpu_rate_limit | memory_limit | memory_shared_quota | memory_spill_ratio | memory_auditor | cpuset
---------+----------------+-------------+----------------+--------------+---------------------+--------------------+----------------+--------
    6437 | default_group  |          20 |             30 |            0 |                  80 |                  0 | vmtracker      | -1
    6438 | admin_group    |          10 |             10 |           10 |                  80 |                  0 | vmtracker      | -1
   16411 | rg_plcontainer |           0 |             -1 |           20 |                  80 |                  0 | cgroup         | 1;1
   16403 | rg_analysts    |          10 |             20 |           20 |                  80 |                  0 | vmtracker      | -1
(4 rows)
Check the load of resource groups
The following system catalog views help monitor resource group utilization at the cluster, host, and segment levels:

- `gp_toolkit.gp_resgroup_status` — cluster-wide usage:

  SELECT rsgname, num_running, num_queued, num_executed, total_queue_duration FROM gp_toolkit.gp_resgroup_status;
Example output:
      rsgname     | num_running | num_queued | num_executed | total_queue_duration
  ----------------+-------------+------------+--------------+----------------------
   default_group  |           0 |          0 |            0 | 00:00:00
   admin_group    |           1 |          0 |           38 | 00:00:00
   rg_analysts    |           0 |          0 |            4 | 00:00:00
   rg_plcontainer |           0 |          0 |            0 | 00:00:00
  (4 rows)
  The `gp_resgroup_status` view also provides the `cpu_usage` and `memory_usage` columns with real-time utilization in JSON format:

  - `cpu_usage` — keys are segment IDs; values are the sum of CPU core percentages used by the group on each segment. Example:
{"-1":0.08, "0":0.03, "1":0.03, "2":0.04, "3":0.04}
  - `memory_usage` — keys are segment IDs; values provide detailed memory metrics, including used, available, quota, and shared memory. Example of a pair:
    "1": { "used": 0, "available": 531, "quota_used": 10, "quota_available": 90, "quota_granted": 100, "quota_proposed": 100, "shared_used": 0, "shared_available": 431, "shared_granted": 431, "shared_proposed": 431 }

    A complete `memory_usage` value looks like this:

    {
      "0": { "used": 0, "available": 531, "quota_used": 10, "quota_available": 90, "quota_granted": 100, "quota_proposed": 100, "shared_used": 0, "shared_available": 431, "shared_granted": 431, "shared_proposed": 431 },
      "1": { "used": 0, "available": 531, "quota_used": 10, "quota_available": 90, "quota_granted": 100, "quota_proposed": 100, "shared_used": 0, "shared_available": 431, "shared_granted": 431, "shared_proposed": 431 },
      "2": { "used": 0, "available": 531, "quota_used": 10, "quota_available": 90, "quota_granted": 100, "quota_proposed": 100, "shared_used": 0, "shared_available": 431, "shared_granted": 431, "shared_proposed": 431 },
      "3": { "used": 0, "available": 531, "quota_used": 10, "quota_available": 90, "quota_granted": 100, "quota_proposed": 100, "shared_used": 0, "shared_available": 431, "shared_granted": 431, "shared_proposed": 431 },
      "-1": { "used": 1, "available": 1596, "quota_used": 31, "quota_available": 279, "quota_granted": 310, "quota_proposed": 310, "shared_used": 0, "shared_available": 1287, "shared_granted": 1287, "shared_proposed": 1287 }
    }
- `gp_toolkit.gp_resgroup_status_per_host` — per-host usage. Example:

  SELECT rsgname, hostname, cpu, memory_used, memory_available FROM gp_toolkit.gp_resgroup_status_per_host;

  Example output:

      rsgname     | hostname | cpu  | memory_used | memory_available
  ----------------+----------+------+-------------+------------------
   admin_group    | mdw      | 0.01 |           2 |             1595
   default_group  | sdw1     | 0.00 |           0 |                0
   rg_plcontainer | sdw2     | 0.00 |           0 |
   rg_analysts    | sdw1     | 0.00 |           0 |             2126
   admin_group    | sdw2     | 0.03 |           0 |             1062
   default_group  | sdw2     | 0.00 |           0 |                0
   admin_group    | sdw1     | 0.04 |           0 |             1062
   default_group  | mdw      | 0.00 |           0 |                0
   rg_plcontainer | mdw      | 0.00 |           0 |
   rg_analysts    | sdw2     | 0.00 |           0 |             2126
   rg_analysts    | mdw      | 0.00 |           0 |             3195
   rg_plcontainer | sdw1     | 0.00 |           0 |
  (12 rows)
- `gp_toolkit.gp_resgroup_status_per_segment` — per-segment usage. Example:

  SELECT rsgname, hostname, segment_id, cpu, memory_used, memory_available FROM gp_toolkit.gp_resgroup_status_per_segment WHERE segment_id = 1;

  Example output:

      rsgname     | hostname | segment_id | cpu  | memory_used | memory_available
  ----------------+----------+------------+------+-------------+------------------
   admin_group    | sdw1     |          1 | 0.03 |           0 |              531
   default_group  | sdw1     |          1 | 0.00 |           0 |                0
   rg_plcontainer | sdw1     |          1 | 0.00 |           0 |
   rg_analysts    | sdw1     |          1 | 0.00 |           0 |             1063
  (4 rows)
View resource groups assigned to roles
To list roles and their assigned resource groups, use the following query:
SELECT rolname, rsgname
FROM pg_roles,
pg_resgroup
WHERE pg_roles.rolresgroup = pg_resgroup.oid;
Example output:
 rolname |    rsgname
---------+---------------
 bob     | default_group
 gpadmin | admin_group
 charlie | rg_analysts
 alice   | rg_analysts
(4 rows)
Manage transactions in resource groups
Greengage DB allows administrators to manage transactions within resource groups. This includes moving queries between groups and cancelling transactions to free up resources when needed.
Move a query to another resource group
For better control over resource utilization in the cluster, you can move a running query between resource groups without stopping its execution. This is particularly useful if a long-running query is blocking new transactions in its current group.
To move a running query to another resource group, use the `gp_toolkit.pg_resgroup_move_query()` function. Its two arguments are the query PID and the name of the target resource group.
Example:
SELECT gp_toolkit.pg_resgroup_move_query(7831, 'rg_reserve');
`pg_resgroup_move_query()` moves only active running queries. A queued or idle query cannot be moved using this function.

When a query is moved using `pg_resgroup_move_query()`, the target group’s resource limits apply immediately. This means that:
- If the target group’s concurrency or memory is fully occupied, the moved query will be queued.
- If insufficient memory is available, the function returns an error.
A successful return from `pg_resgroup_move_query()` does not guarantee immediate continued execution of the query in the target group: the move is asynchronous.
To confirm the current resource group of a query, use the `pg_stat_activity` system catalog view, for example:
SELECT pid, query, rsgid, rsgname
FROM pg_stat_activity;
Cancel a transaction in a resource group
You can cancel a running or queued transaction in a resource group to free up slots and resources for other workloads. To find the PIDs of transactions currently running or queued in resource groups, run a query like the following:
SELECT rolname, g.rsgname, pid, waiting, state, query, datname
FROM pg_roles,
gp_toolkit.gp_resgroup_status g,
pg_stat_activity
WHERE pg_roles.rolresgroup = g.groupid
AND pg_stat_activity.usename = pg_roles.rolname;
It outputs lines like this:
 rolname |   rsgname   |  pid  | waiting | state |         query         | datname
---------+-------------+-------+---------+-------+-----------------------+---------
 alice   | rg_analysts |  7831 | f       | idle  | SELECT * FROM orders; | sales
This output helps you identify the PID and state of queries you may need to cancel.
To cancel a query, run the `pg_cancel_backend()` function, passing the query PID as an argument:
SELECT pg_cancel_backend(7831);
For more details on managing and stopping client queries safely, see Stop client queries and processes.
Queries that use the global shared memory pool may be terminated automatically if they exceed global shared memory usage thresholds. To learn more, see the Global shared memory section.
Attributes and limits
This section provides detailed descriptions of the limits enforced by resource groups and the corresponding attributes used to control them in Greengage DB.
Number of concurrent transactions
NOTE: `CONCURRENCY` does not apply to resource groups for external components. It must be explicitly set to `0` for such groups.

The `CONCURRENCY` attribute defines the maximum number of concurrent transactions (active and idle) allowed in a resource group. It is required for all resource groups for roles. The default value is `20`, and the maximum value equals the `max_connections` parameter value.
When the concurrency limit is reached, new transactions are queued and executed in a first-in, first-out (FIFO) order as active transactions complete.
- The observed concurrency may be lower than the limit if currently running transactions are consuming the maximum allowed memory or CPU resources.
- The number of running transactions can temporarily exceed `CONCURRENCY`. This happens when roles run `SET` and `SHOW` commands, which are excluded from resource group checks.
Bypass concurrency check
To bypass the concurrency check in certain cases, you can use two configuration parameters:

- `gp_resource_group_bypass` — allows queries to bypass the concurrency check but limits the memory available for their execution to 10 MB. It is typically used for ad-hoc, low-memory operations.
- `gp_resource_group_bypass_catalog_query` — allows queries to bypass the concurrency limit if they only read from system catalog tables.
CPU limits
Greengage DB supports two CPU allocation modes for resource groups, defined by:

- `CPUSET` — allocate CPU by specific cores.
- `CPU_RATE_LIMIT` — allocate CPU by percentage.

Each resource group must have exactly one of these attributes defined. Different groups in the cluster may use different modes, and the mode can be changed at runtime using `ALTER RESOURCE GROUP`.
In addition to CPU limits on resource groups, the `gp_resource_group_cpu_limit` parameter limits the total CPU usage by all queries in resource groups. By default, its value is `0.9`, which leaves 10% of CPU to the OS and other applications on cluster hosts. If other resource-intensive applications run on these hosts, consider lowering this value.

Avoid increasing `gp_resource_group_cpu_limit` above `0.9`: this may leave insufficient CPU resources for the operating system and critical DBMS background processes.
CPU cores: CPUSET
The `CPUSET` attribute specifies the exact CPU cores on each cluster host for a resource group. The cores are identified by ordinal numbers starting from `0`. CPU cores allocated to resource groups using `CPUSET` must be available to Greengage DB and must not overlap across groups.

Avoid assigning core `0` to any resource group. Greengage DB uses it as a fallback in certain cases, for example, for default resource groups when all other cores are allocated to other resource groups.

A `CPUSET` value is a string of two parts separated by a semicolon (`;`). The first part defines the cores allocated on the master node, and the second part defines the cores on segment nodes. Each part can contain one or more comma-separated numbers or hyphenated ranges. The whole string must be enclosed in single quotes. For example:
CPUSET = '1;1,3-5'
This value allocates core `1` on the master host and cores `1`, `3`, `4`, and `5` on each segment host.
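The value format can be illustrated with a small parser. `parse_cpuset` is a hypothetical helper written for this example only; it is not a Greengage DB function:

```python
def parse_cpuset(value):
    """Expand a CPUSET string such as '1;1,3-5' into explicit core lists."""
    def expand(part):
        cores = []
        for item in part.split(","):
            if "-" in item:
                lo, hi = map(int, item.split("-"))
                cores.extend(range(lo, hi + 1))   # hyphenated range, inclusive
            else:
                cores.append(int(item))
        return cores

    master_part, segment_part = value.split(";")  # master ; segments
    return expand(master_part), expand(segment_part)

master, segments = parse_cpuset("1;1,3-5")
print(master, segments)  # → [1] [1, 3, 4, 5]
```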
`CPUSET` usage recommendations:

- Use lower core numbers to improve OS scheduling efficiency.
- Leave unreserved cores for Greengage DB system operations and groups using `CPU_RATE_LIMIT`.
- For portability, limit `CPUSET` to cores that are common across target systems. For example, on a 16-core segment host, assign cores `1` to `7` and leave the others unreserved. This makes it possible to restore the segment on an 8-core host as is.
CPU percentage: CPU_RATE_LIMIT
The `CPU_RATE_LIMIT` attribute specifies the percentage of a host’s CPU resources available to the resource group. It applies to all cluster hosts, both master and segment. The percentage allocated on a segment host is divided equally between all primary segments that run on it.

`CPU_RATE_LIMIT` accepts integer values from `1` to `100`. The total of all `CPU_RATE_LIMIT` values across resource groups must not exceed `100`.
The effective `CPU_RATE_LIMIT` allocation is constrained by:

- the `gp_resource_group_cpu_limit` value;
- the percentage of unreserved cores (those not allocated via `CPUSET`).

For example, on an 8-core host where 4 cores are assigned via `CPUSET`, only the remaining 4 cores are shared among groups that use `CPU_RATE_LIMIT`.
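A rough worked example of how these constraints combine. The numbers are illustrative, and the multiplicative combination shown here is a simplifying assumption for the sketch, not documented scheduler arithmetic:

```python
# Illustrative arithmetic only; assumed values, not measured Greengage DB behavior.
total_cores = 8
cpuset_reserved_cores = 4                 # cores pinned via CPUSET
gp_resource_group_cpu_limit = 0.9         # default cluster-wide cap

# Fraction of host CPU left for CPU_RATE_LIMIT groups:
unreserved_fraction = (total_cores - cpuset_reserved_cores) / total_cores
available_for_rate_groups = gp_resource_group_cpu_limit * unreserved_fraction

# A group with CPU_RATE_LIMIT = 20 gets 20% of that remainder:
group_share = available_for_rate_groups * 20 / 100
print(round(available_for_rate_groups, 3), round(group_share, 3))  # → 0.45 0.09
```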
By default, Greengage DB automatically reallocates CPU resources between groups with `CPU_RATE_LIMIT` configured depending on their activity. This is called the elastic mode. In this mode, unused CPU capacity from idle groups is temporarily redistributed to active groups, maintaining the relative `CPU_RATE_LIMIT` ratios. For example, an active group with `CPU_RATE_LIMIT = 20` gets twice as many resources as another active group with `CPU_RATE_LIMIT = 10`. When the previously idle group becomes active, the CPU resources are returned to it.
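The ratio-keeping redistribution can be sketched as follows. `elastic_shares` is a hypothetical illustration of the proportional behavior, not the actual scheduler:

```python
def elastic_shares(rate_limits, active):
    """Split all redistributable CPU among active groups in proportion
    to their CPU_RATE_LIMIT values (idle groups get nothing for now)."""
    total_active = sum(rate_limits[g] for g in active)
    return {g: rate_limits[g] / total_active for g in active}

limits = {"rg_a": 20, "rg_b": 10, "rg_idle": 30}
shares = elastic_shares(limits, active={"rg_a", "rg_b"})
for name in sorted(shares):
    print(name, round(shares[name], 3))  # → rg_a 0.667 / rg_b 0.333
```

With `rg_idle` inactive, `rg_a` still receives twice the share of `rg_b`, preserving the 20:10 ratio.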
To disable this automatic reallocation, switch to the ceiling enforcement mode by setting the `gp_resource_group_cpu_ceiling_enforcement` parameter to `true`. In this mode, each group receives its allocated CPU share regardless of other groups' activity.

A `-1` value of `CPUSET` or `CPU_RATE_LIMIT` means that this CPU allocation mode is not used in the group. This value is assigned to the unused mode automatically.
Memory limits
Greengage DB manages memory resources at multiple levels:

- host;
- segment;
- resource group;
- transaction (in resource groups for roles).
The total memory available for allocation to resource groups on a host is defined by the system memory resources and the `gp_resource_group_memory_limit` configuration parameter. By default, `gp_resource_group_memory_limit` is `0.7`, which means that 70% of system memory is available for Greengage DB resource groups:

total_gg_memory = system_memory * gp_resource_group_memory_limit
This memory is equally divided among all primary segments that run on the host. Thus, each primary segment receives the following amount of memory:
segment_memory = total_gg_memory / <host_primary_segments>
where `<host_primary_segments>` is the number of active primary segments on the host.
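For example, with assumed values (a 64 GB host, the default limit, and 8 primary segments), the formulas above give:

```python
# Worked example with assumed numbers, not values measured from a real cluster.
system_memory_mb = 65536                  # 64 GB host
gp_resource_group_memory_limit = 0.7      # default
host_primary_segments = 8                 # active primary segments on the host

total_gg_memory = system_memory_mb * gp_resource_group_memory_limit
segment_memory = total_gg_memory / host_primary_segments
print(round(total_gg_memory, 1), round(segment_memory, 1))  # → 45875.2 5734.4
```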
Segment memory is further divided between:

- memory reserved for resource groups (according to their `MEMORY_LIMIT` values);
- global shared memory (the unallocated remainder).
Group memory: MEMORY_LIMIT
The `MEMORY_LIMIT` attribute defines the percentage of the segment memory reserved for a resource group. It can have values from `0` to `100`. The sum of `MEMORY_LIMIT` values across all resource groups must not exceed `100`.
group_memory = segment_memory * MEMORY_LIMIT / 100
Reserved group memory is split into:

- fixed memory slots for transactions;
- group shared memory (defined by `MEMORY_SHARED_QUOTA`).

The number of transaction memory slots equals the group’s `CONCURRENCY` value, and each slot size is calculated as:

transaction_memory = group_memory * (100 - MEMORY_SHARED_QUOTA) / 100 / CONCURRENCY
Each concurrent transaction in the group is guaranteed a fixed memory allocation. When this memory is exhausted, it may use additional memory from the group’s shared pool.
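A worked example with assumed values (a segment memory of 5734.4 MB and the default `MEMORY_SHARED_QUOTA`; the group settings are illustrative):

```python
# Assumed inputs for illustration.
segment_memory_mb = 5734.4        # per-segment memory from the host-level math
MEMORY_LIMIT = 20                 # percent of segment memory for the group
MEMORY_SHARED_QUOTA = 80          # percent of group memory that is shared
CONCURRENCY = 10

group_memory = segment_memory_mb * MEMORY_LIMIT / 100
# Fixed (non-shared) part of group memory, split into per-transaction slots.
transaction_memory = group_memory * (100 - MEMORY_SHARED_QUOTA) / 100 / CONCURRENCY
print(round(group_memory, 2), round(transaction_memory, 2))  # → 1146.88 22.94
```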
Group shared memory: MEMORY_SHARED_QUOTA
Each resource group for roles can have its own memory pool shared across the group’s transactions: the group shared memory. Its percentage of the group’s `MEMORY_LIMIT` is defined by the `MEMORY_SHARED_QUOTA` attribute. By default, a group’s shared memory takes up 80% of its memory limit (`MEMORY_SHARED_QUOTA = 80`).
Shared memory becomes available to transactions when their fixed memory slots are exhausted, allowing transactions to consume any or all of the group’s shared memory if needed.
If all the group memory, including shared, is exhausted, transactions can additionally use the global shared memory to complete.
Global shared memory
Resource groups with `MEMORY_LIMIT = 0` do not reserve memory and instead use the global shared memory pool. Global shared memory is the unallocated portion of segment memory, that is, the difference between `100` and the sum of the `MEMORY_LIMIT` values of all resource groups:
global_shared_memory = 100 - SUM(MEMORY_LIMIT of all groups)
The global shared memory is available only to resource groups with the `vmtracker` memory auditor (resource groups for roles). It is used when transactions require additional memory beyond their group’s reserved and shared allocations. This helps prevent out-of-memory errors during memory-intensive operations.
Leave 10–20% of segment memory unallocated for the global shared pool to handle unexpected memory-intensive queries effectively.
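For example, with an assumed set of groups (the limits below are illustrative):

```python
# Global shared memory is the unallocated share of segment memory.
memory_limits = {"admin_group": 10, "default_group": 0, "rg_analysts": 20}
global_shared_percent = 100 - sum(memory_limits.values())
print(global_shared_percent)  # → 70
```

Here 70% of segment memory stays unallocated, well above the recommended 10-20% floor.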
Greengage DB enforces global shared memory limits using the `runaway_detector_activation_percent` parameter, which triggers query termination when usage exceeds the threshold. By default, its value is `90`, which means that after 90% of the global shared memory is used, Greengage DB starts terminating queries in resource groups for roles. Termination starts from the queries with the highest memory usage and stops when global shared memory consumption falls below the threshold. Setting `runaway_detector_activation_percent` to `100` disables automatic query termination based on global shared memory usage.
Spill files generation: MEMORY_SPILL_RATIO
The `MEMORY_SPILL_RATIO` attribute defines the memory usage threshold for memory-intensive query operators, such as sorts or joins. It is measured as a percentage of the group’s available memory. When a query operator reaches this threshold, spill files are created on disk to prevent excessive memory usage and potential errors.

The default threshold is `0`. In this case, the spill threshold is defined by the `statement_mem` parameter. If `MEMORY_SPILL_RATIO` is greater than `0`, spill files are generated after a transaction consumes the following amount of memory:
transaction_memory_spill = group_memory * (MEMORY_SPILL_RATIO / 100) / CONCURRENCY
When a resource group’s `MEMORY_LIMIT` is set to `0`, its `MEMORY_SPILL_RATIO` must also be `0`.
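A worked example of the spill formula with assumed values (a group memory of 1146.88 MB and an illustrative, non-default `MEMORY_SPILL_RATIO`):

```python
# Assumed numbers; shows at what point a transaction starts spilling to disk.
group_memory_mb = 1146.88
MEMORY_SPILL_RATIO = 30
CONCURRENCY = 10

transaction_memory_spill = group_memory_mb * (MEMORY_SPILL_RATIO / 100) / CONCURRENCY
print(round(transaction_memory_spill, 2))  # → 34.41
```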
Additionally, you can use the `memory_spill_ratio` configuration parameter to redefine the spill threshold at the session level or for specific queries. To improve performance of queries with low memory requirements, it is recommended to execute them with a low `statement_mem` limit and `memory_spill_ratio` set to `0`:
SET memory_spill_ratio = 0;
SET statement_mem = '10 MB';
Memory auditor
The `MEMORY_AUDITOR` attribute defines the way Greengage DB tracks memory consumption within a resource group. This attribute distinguishes resource groups for roles from those for external components. See Resource groups for roles and external components for descriptions of the two types.

Possible `MEMORY_AUDITOR` values are the following:
- `vmtracker` (default) — uses Greengage DB’s internal virtual memory tracker. It is used in resource groups for roles.
- `cgroup` — uses Linux control groups (cgroup v1). It is used in resource groups for external components. To learn how to configure cgroups on cluster hosts, see the Enable resource groups section.

The `MEMORY_AUDITOR` of a resource group cannot be changed after creation.