
COPY

Copies data between a file and a table. See COPY command for more details and usage examples.

Synopsis

COPY <table_name> [ ( <column_name> [, ...] ) ]
    FROM { '<filename>' | PROGRAM '<command>' | STDIN }
    [ [ WITH ] ( <option> [, ...] ) ]
    [ ON SEGMENT ]

COPY { <table_name> [ ( <column_name> [, ...] ) ] | ( <query> ) }
    TO { '<filename>' | PROGRAM '<command>' | STDOUT }
    [ [ WITH ] ( <option> [, ...] ) ]
    [ ON SEGMENT ]

where option can be one of:

    FORMAT 'text' | 'csv' | 'binary'
    OIDS [ <boolean> ]
    FREEZE [ <boolean> ]
    DELIMITER '<delimiter_character>'
    NULL '<null_string>'
    HEADER [ <boolean> ]
    QUOTE '<quote_character>'
    NEWLINE '<newline_character>'
    ESCAPE '<escape_character>'
    FORCE_QUOTE { ( <column_name> [, ...] ) | * }
    FORCE_NOT_NULL ( <column_name> [, ...] )
    FORCE_NULL ( <column_name> [, ...] )
    ENCODING '<encoding_name>'
    FILL MISSING FIELDS
    [LOG ERRORS] SEGMENT REJECT LIMIT <count> [ ROWS | PERCENT ]
    IGNORE EXTERNAL PARTITIONS

Description

COPY moves data between Greengage DB tables and standard file-system files. COPY TO copies the contents of a table to a file (or multiple files based on the segment ID if copying ON SEGMENT), while COPY FROM copies data from a file to a table (appending the data to whatever is in the table already). COPY TO can also copy the results of a SELECT query.
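For illustration, a minimal unload/reload round trip might look like the following sketch (the table sales, its columns, and the file paths are hypothetical):

```sql
-- Unload a whole table to a CSV file on the master host
COPY sales TO '/data/unload/sales.csv' WITH (FORMAT 'csv', HEADER true);

-- Unload the result of a query instead of a table
COPY (SELECT id, amount FROM sales WHERE amount > 100)
    TO '/data/unload/big_sales.csv' WITH (FORMAT 'csv');

-- Load the file back, appending to the table's current contents
COPY sales FROM '/data/unload/sales.csv' WITH (FORMAT 'csv', HEADER true);
```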

If a list of columns is specified, COPY will only copy the data in the specified columns to or from the file. If there are any columns in the table that are not in the column list, COPY FROM will insert the default values for those columns.

COPY with a file name instructs the Greengage DB master host to directly read from or write to a file. The file must be accessible to the master host, and the path is interpreted relative to the master host’s file system.

When COPY is used with the ON SEGMENT clause, COPY TO causes segments to create individual segment-oriented files, which remain on the segment hosts. The filename argument for ON SEGMENT must include the string literal <SEGID> and can use either an absolute path or the <SEG_DATA_DIR> string literal. When the COPY operation is run, the segment IDs and the paths of the segment data directories are substituted for these string literal values.

Using COPY TO with a replicated table (DISTRIBUTED REPLICATED) as a source creates a file with rows from a single segment so that the target file contains no duplicate rows. Using COPY TO with the ON SEGMENT clause with a replicated table as a source creates target files on segment hosts containing all table rows.

The ON SEGMENT clause allows you to copy table data to files on segment hosts for use in operations such as migrating data between clusters or performing a backup. Segment data created by the ON SEGMENT clause can be restored by tools such as gpfdist, which is useful for high speed data loading.

CAUTION

Use of the ON SEGMENT clause is recommended for expert users only.
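As a sketch, unloading and reloading per-segment files for a hypothetical table sales might look like this:

```sql
-- Each segment writes its own file; <SEGID> is replaced with the segment ID
COPY sales TO '/data/backup/sales_<SEGID>.dat' ON SEGMENT;

-- Or write into each segment's own data directory
COPY sales TO '<SEG_DATA_DIR>/sales_<SEGID>.dat' ON SEGMENT;

-- Restore later from the per-segment files
COPY sales FROM '/data/backup/sales_<SEGID>.dat' ON SEGMENT;
```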

When PROGRAM is specified, the server runs the given command and reads from the standard output of the program, or writes to the standard input of the program. The command must be specified from the viewpoint of the server and be executable by the gpadmin user.
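For example, a command run on the master host can decompress or compress data on the fly (the table name, paths, and use of gzip are illustrative assumptions):

```sql
-- Read input from a command's standard output (decompress on the fly)
COPY sales FROM PROGRAM 'gzip -dc /data/load/sales.csv.gz' WITH (FORMAT 'csv');

-- Write output to a command's standard input (compress on the fly)
COPY sales TO PROGRAM 'gzip > /data/unload/sales.csv.gz' WITH (FORMAT 'csv');
```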

When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the master. STDIN and STDOUT cannot be used with the ON SEGMENT clause.

If SEGMENT REJECT LIMIT is used, then a COPY FROM operation will operate in single row error isolation mode. Single row error isolation mode only applies to rows in the input file with format errors — for example, extra or missing attributes, attributes of a wrong data type, or invalid client encoding sequences. Constraint errors such as violation of a NOT NULL, CHECK, or UNIQUE constraint will still be handled in "all-or-nothing" input mode. The user can specify the number of error rows acceptable (on a per-segment basis), after which the entire COPY FROM operation will be cancelled and no rows will be loaded. The count of error rows is per-segment, not per entire load operation. If the per-segment reject limit is not reached, then all rows not containing an error will be loaded and any error rows discarded. To keep error rows for further examination, specify the LOG ERRORS clause to capture error log information. The error information and the row are stored internally in Greengage DB.
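Following the synopsis above, a load in single-row error isolation mode might be sketched as follows (the table sales and the file path are hypothetical):

```sql
-- Tolerate up to 10 badly formatted rows per segment
-- and capture them in the internal error log
COPY sales FROM '/data/load/sales.txt'
    WITH (FORMAT 'text', LOG ERRORS SEGMENT REJECT LIMIT 10 ROWS);

-- Review the captured error rows afterwards
SELECT * FROM gp_read_error_log('sales');
```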

Outputs

On successful completion, a COPY command returns a command tag of the following form, where count is the number of rows copied:

COPY <count>

If running a COPY FROM command in single row error isolation mode, the following notice message will be returned if any rows were not loaded due to format errors, where count is the number of rows rejected:

NOTICE: Rejected <count> badly formatted rows.

Parameters

Parameter Description

BINARY

Causes the data to be read or written in binary format rather than text.

You can also specify binary format as part of the FORMAT clause as FORMAT 'binary'.

Learn more in the Use binary format section

table_name

The name of an existing table

column_name

An optional list of columns to be copied.

If a list of columns is specified, COPY only copies the data in the specified columns to or from the file. COPY FROM inserts the default values for columns in the table that are not in the column list.

If no column list is specified, all columns of the table are copied

query

A SELECT or VALUES command whose results are to be copied. Note that parentheses are required around the query

filename

The path to the input or output file.

An input file name can be an absolute or relative path; an output file name must be an absolute path. The file must be accessible to the master host. The specified path must refer to a location on the master host’s file system.

PROGRAM '<command>'

A command to run.

In COPY FROM, the input is read from the standard output of the command, and in COPY TO, the output is written to the standard input of the command. The command must be specified as if entered directly on the Greengage DB master host system and must be executable by the gpadmin user.

STDIN | STDOUT

STDIN specifies that input comes from the client application. STDOUT specifies that output goes to the client application.

The ON SEGMENT clause is not supported with STDIN or STDOUT.

ON SEGMENT

Specifies that data loading or unloading is performed via individual data files located on the segment hosts.

Learn more in Use ON SEGMENT clause

WITH <option>

Specifies additional options, which are described below. The WITH keyword is optional

Possible option values:

Parameter Description

FORMAT 'text' | 'csv' | 'binary'

Defines the data format, which can be text, csv, or binary.

You can specify the binary format by using the dedicated BINARY keyword as follows:

COPY BINARY <table_name> [(<column_name> [, ...])]
...

To learn more about formatting source data, see Format external data

OIDS [ <boolean> ]

Specifies copying the OID for each row.

An error is raised if OIDS is specified for a table that does not have OIDs, or in the case of copying a query

FREEZE [ <boolean> ]

Requests copying the data with rows already frozen, just as they would be after running the VACUUM FREEZE command. This is intended as a performance option for initial data loading. Rows are frozen only if the table being loaded is created or truncated in the current subtransaction, there are no cursors open, and there are no older snapshots held by this transaction.

Note that all other sessions can immediately see the data once it is successfully loaded. This violates the normal rules of MVCC visibility, so you should be aware of the potential problems this might cause.

To learn more about the VACUUM operation, see Remove expired table rows via VACUUM

DELIMITER '<delimiter_character>'

Designates a single ASCII character to act as a column delimiter. The delimiter can appear only between two data value fields, never at the beginning or end of a row.

You can also specify a non-printable ASCII character or a non-printable Unicode character, such as \x1B or \u001B. The escape string syntax, E'<character-code>', is also supported for non-printable characters. The ASCII or Unicode character must be enclosed in single quotes, for example, E'\x1B' or E'\u001B'.

For text files, the default column delimiter is the horizontal TAB character (0x09).

For CSV files, the default column delimiter is the comma character (,).

See Format columns for details

NULL '<null_string>'

Designates a string representing a null value, which indicates an unknown piece of data in a column or field.

For text files, the default string is \N.

For CSV files, the default string is an empty value with no quotation marks.

See Represent NULL values for details

HEADER [ <boolean> ]

Designates whether the data file contains a header row.

If using multiple data source files, all of them must have a header row. The default is to assume that the input files do not have a header row

QUOTE '<quote_character>'

Specifies the quotation character, double quote (") by default

NEWLINE '<newline_character>'

Designates a character used as a newline character, which can be LF (Line feed, 0x0A), CR (Carriage return, 0x0D), or CR followed by LF (CRLF, 0x0D 0x0A).

If not specified, the newline character detected at the end of the first line of data is used.

See Format rows for details

ESCAPE '<escape_character>'

Designates a single character used as an escape character.

For text files, the default escape character is a backslash (\). You can deactivate escaping by providing the OFF value.

For CSV files, the default escape character is a double quote (").

See Escape characters for details

FORCE_QUOTE { ( <column_name> [, ...] ) | * }

Enforces quoting for all non-NULL values in each specified column.

If an asterisk (*) is specified, non-NULL values are quoted in all columns. NULL output is never quoted

FORCE_NOT_NULL ( <column_name> [, ...] )

Treats values in each specified column as if they were quoted. Since the default null string is an empty unquoted string, this causes missing values to be evaluated as zero-length strings

FORCE_NULL ( <column_name> [, ...] )

Matches the specified columns' values against the null string. If a match is found, sets the value to NULL even if the value is quoted. In the default case where the null string is empty, this converts a quoted empty string to NULL

ENCODING '<encoding_name>'

Defines the character set encoding of the source data.

If not specified, the default client encoding is used.

See Character encoding for details

FILL MISSING FIELDS

Sets missing trailing field values at the end of a line or row to NULL. If not set, an error is reported in such cases. Blank rows, fields with a NOT NULL constraint, and trailing delimiters on a line will still report an error

LOG ERRORS

Enables capturing error log information about rows with formatting errors.

Error log information is stored internally and can be accessed by using Greengage DB's gp_read_error_log() built-in SQL function.

SEGMENT REJECT LIMIT <count> [ ROWS | PERCENT ]

Runs COPY FROM in single-row error isolation mode. Can be specified as a number of rows (the default) or percentage of total rows (from 1 to 100).

IGNORE EXTERNAL PARTITIONS

If specified, when copying data from partitioned tables, data is not copied from leaf child partitions that are external tables. The corresponding message is added to the log file.

Otherwise, if not specified and Greengage DB attempts to copy data from a leaf child partition that is an external table, an error is returned.

To copy data from a partitioned table with a leaf child partition that is an external table, use an SQL query to select the data to copy

Notes

COPY can be used with regular tables, writable external tables, or the results of a SELECT query, not with readable external tables or views.

COPY only deals with the specific table named; it does not copy data to or from child tables. Thus for example COPY table TO shows the same data as SELECT * FROM ONLY table. But COPY (SELECT * FROM table) TO ... can be used to dump all of the data in an inheritance hierarchy.

Similarly, to copy data from a partitioned table with a leaf child partition that is an external table, use an SQL query to select the data to copy. For example, if the table my_sales contains a leaf child partition that is an external table, this command COPY my_sales TO stdout returns an error. This command sends the data to stdout:

COPY (SELECT * FROM my_sales) TO stdout

The BINARY keyword causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the normal text mode, but a binary-format file is less portable across machine architectures and Greengage DB versions. Also, you cannot run COPY FROM in single row error isolation mode if the data is in binary format.

You must have the SELECT privilege on the table whose values are read by COPY TO, and the INSERT privilege on the table into which values are inserted by COPY FROM. It is sufficient to have column privileges on the columns listed in the command.

Files named in a COPY command are read or written directly by the database server, not by the client application. Therefore, they must reside on or be accessible to the Greengage DB master host machine, not the client. They must be accessible to and readable or writable by the Greengage DB system user (the user ID the server runs as), not the client. Only database superusers are permitted to name files with COPY, because this allows reading or writing any file that the server has privileges to access.

COPY FROM will invoke any triggers and check constraints on the destination table. However, it will not invoke rewrite rules. Violations of constraints are not evaluated for single row error isolation mode.

COPY input and output is affected by DateStyle. To ensure portability to other Greengage DB installations that might use non-default DateStyle settings, DateStyle should be set to ISO before using COPY TO. It is also a good idea to avoid dumping data with IntervalStyle set to sql_standard, because negative interval values might be misinterpreted by a server that has a different setting for IntervalStyle.
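For instance, a dump intended for another installation could pin both settings first (the table events and the path are hypothetical; postgres is the default IntervalStyle):

```sql
-- Pin date/interval output styles before unloading, so an installation
-- with different defaults can read the dump correctly
SET DateStyle TO ISO;
SET IntervalStyle TO postgres;
COPY events TO '/data/unload/events.txt';
```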

Input data is interpreted according to ENCODING option or the current client encoding, and output data is encoded in ENCODING or the current client encoding, even if the data does not pass through the client but is read from or written to a file directly by the server.

When copying XML data from a file in text mode, the server configuration parameter xmloption affects the validation of the XML data that is copied. If the value is content (the default), XML data is validated as an XML content fragment. If the parameter value is document, XML data is validated as an XML document. If the XML data is not valid, COPY returns an error.

By default, COPY stops operation at the first error. This should not lead to problems in the event of a COPY TO, but the target table will already have received earlier rows in a COPY FROM. These rows will not be visible or accessible, but they still occupy disk space. This may amount to a considerable amount of wasted disk space if the failure happened well into a large COPY FROM operation. You may wish to invoke VACUUM to recover the wasted space. Another option would be to use single row error isolation mode to filter out error rows while still loading good rows.

FORCE_NULL and FORCE_NOT_NULL can be used simultaneously on the same column. This results in converting quoted null strings to null values and unquoted null strings to empty strings.
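As a sketch, applying both options to a hypothetical column note of a table sales:

```sql
-- With both options on the same column:
--   a quoted empty string ("")  is converted to NULL  (FORCE_NULL)
--   an unquoted empty field     is loaded as ''       (FORCE_NOT_NULL)
COPY sales FROM '/data/load/sales.csv'
    WITH (FORMAT 'csv', FORCE_NULL (note), FORCE_NOT_NULL (note));
```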

When a COPY FROM ... ON SEGMENT command is run, the server configuration parameter gp_enable_segment_copy_checking controls whether the table distribution policy (from the table DISTRIBUTED clause) is checked when data is copied into the table. The default is to check the distribution policy. An error is returned if the row of data violates the distribution policy for the segment instance.

Data from a table that is generated by a COPY TO ... ON SEGMENT command can be used to restore table data with COPY FROM ... ON SEGMENT. However, data restored to the segments is distributed according to the table distribution policy at the time the files were generated with the COPY TO command. The COPY command might return table distribution policy errors if you attempt to restore table data and the table distribution policy was changed after the COPY TO ... ON SEGMENT was run.

NOTE

If you run COPY FROM ... ON SEGMENT and the server configuration parameter gp_enable_segment_copy_checking is false, manual redistribution of table data might be required. See the ALTER TABLE clause WITH REORGANIZE.

When you specify the LOG ERRORS clause, Greengage DB captures errors that occur while reading the external table data. You can view and manage the captured error log data.

  • Use the built-in SQL function gp_read_error_log('table_name'). It requires the SELECT privilege on table_name. This example displays the error log information for data loaded into table ext_expenses with a COPY command:

    SELECT * from gp_read_error_log('ext_expenses');

    The function returns FALSE if table_name does not exist.

  • If error log data exists for the specified table, the new error log data is appended to existing error log data. The error log information is not replicated to mirror segments.

  • Use the built-in SQL function gp_truncate_error_log('table_name') to delete the error log data for table_name. It requires the table owner privilege. This example deletes the error log information captured when moving data into the table ext_expenses:

        SELECT gp_truncate_error_log('ext_expenses');

    The function returns FALSE if table_name does not exist.

    Specify the * wildcard character to delete error log information for existing tables in the current database. Specify the string *.* to delete all database error log information, including error log information that was not deleted due to previous database issues. If * is specified, database owner privilege is required. If *.* is specified, operating system superuser privilege is required.

When a Greengage DB user who is not a superuser runs a COPY command, the command can be controlled by a resource queue. The resource queue must be configured with the ACTIVE_STATEMENTS parameter that specifies a maximum limit on the number of queries that can be run by roles assigned to that queue. Greengage DB does not apply a cost value or memory value to a COPY command; resource queues with only cost or memory limits do not affect the running of COPY commands.

A non-superuser can run only these types of COPY commands:

  • COPY FROM command where the source is stdin;

  • COPY TO command where the destination is stdout.

For information about resource queues, see Use resource queues.

File formats

File formats supported by COPY.

Text format

When the text format is used, the data read or written is a text file with one line per table row. Columns in a row are separated by the delimiter_character value (TAB by default). The column values themselves are strings generated by the output function, or acceptable to the input function, of each attribute’s data type. The specified null string is used in place of columns that are null. COPY FROM will raise an error if any line of the input file contains more or fewer columns than are expected. If OIDS is specified, the OID is read or written as the first column, preceding the user data columns.

The data file has two reserved characters that have special meaning to COPY:

  • The designated delimiter character (tab by default), which is used to separate fields in the data file.

  • A UNIX-style line feed (\n or 0x0a), which is used to designate a new row in the data file. It is strongly recommended that applications generating COPY data convert data line feeds to UNIX-style line feeds rather than Microsoft Windows-style carriage return plus line feed (\r\n or 0x0d 0x0a).

If your data contains either of these characters, you must escape the character so COPY treats it as data and not as a field separator or new row.

By default, the escape character is a \ (backslash) for text-formatted files and a " (double quote) for CSV-formatted files. If you want to use a different escape character, you can do so using the ESCAPE AS clause. Make sure to choose an escape character that is not used anywhere in your data file as an actual data value. You can also deactivate escaping in text-formatted files by using ESCAPE 'OFF'.
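As a sketch, a text-format load with escaping turned off might look like this (the table weblog and the path are hypothetical):

```sql
-- Custom delimiter and escaping turned off, so backslashes
-- in the data are loaded literally rather than interpreted
COPY weblog FROM '/data/load/access.log'
    WITH (FORMAT 'text', DELIMITER '|', ESCAPE 'OFF');
```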

For example, suppose you have a table with three columns, and you want to load the following three fields using COPY:

  • percentage sign = %

  • vertical bar = |

  • backslash = \

Your designated delimiter character is | (pipe character), and your designated escape character is * (asterisk). The formatted row in your data file would look like this:

percentage sign = % | vertical bar = *| | backslash = \

Notice how the pipe character that is part of the data has been escaped using the asterisk character (*). Also, notice that you do not need to escape the backslash since you are using an alternative escape character.

The following characters must be preceded by the escape character if they appear as part of a column value: the escape character itself, newline, carriage return, and the current delimiter character. You can specify a different escape character using the ESCAPE AS clause.
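The worked example above could be loaded with a command along these lines (the table chars and the path are hypothetical):

```sql
-- Matches the worked example: '|' as the delimiter, '*' as the escape
COPY chars FROM '/data/load/chars.txt'
    WITH (FORMAT 'text', DELIMITER '|', ESCAPE '*');
```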

CSV format

This format option is used for importing and exporting the comma-separated values (CSV) file format used by many other programs, such as spreadsheets. Instead of the escaping rules used by Greengage DB’s standard text format, it produces and recognizes the common CSV escaping mechanism.

The values in each record are separated by the DELIMITER character. If the value contains the delimiter character, the QUOTE character, the NULL string, a carriage return, or line feed character, then the whole value is prefixed and suffixed by the QUOTE character, and any occurrence within the value of a QUOTE character or the ESCAPE character is preceded by the ESCAPE character. You can also use FORCE_QUOTE to force quotes when outputting non-NULL values in specific columns.

The CSV format has no standard way to distinguish a NULL value from an empty string. Greengage DB’s COPY handles this by quoting. A NULL is output as the NULL parameter string and is not quoted, while a non-NULL value matching the NULL parameter string is quoted. For example, with the default settings, a NULL is written as an unquoted empty string, while an empty string data value is written with double quotes (""). Reading values follows similar rules. You can use FORCE_NOT_NULL to prevent NULL input comparisons for specific columns. You can also use FORCE_NULL to convert quoted null string data values to NULL.

Because backslash is not a special character in the CSV format, \., the end-of-data marker, could also appear as a data value. To avoid any misinterpretation, a \. data value appearing as a lone entry on a line is automatically quoted on output, and on input, if quoted, is not interpreted as the end-of-data marker. If you are loading a file created by another application that has a single unquoted column and might have a value of \., you might need to quote that value in the input file.

NOTE

In CSV format, all characters are significant. A whitespace character surrounding a delimiter, or a character in an unquoted null string, is included in the value. Ideally, if you want to import data from a system that pads CSV lines with white space, you should trim the whitespace before importing the data.

CSV format will both recognize and produce CSV files with quoted values containing embedded carriage returns and line feeds. Thus, the files are not strictly one line per table row like text-format files.

NOTE

Many programs produce strange and occasionally perverse CSV files, so the file format is more a convention than a standard. Thus, you might encounter some files that cannot be imported using this mechanism, and COPY might produce files that other programs cannot process.

Binary format

The binary format option causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the text and CSV formats, but a binary-format file is less portable across machine architectures and Greengage DB versions. Also, the binary format is very data type specific; for example it will not work to output binary data from a SMALLINT column and read it into an INTEGER column, even though that would work fine in text format.

The binary file format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are in network byte order.

  • File Header — the file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area. The fixed fields are:

    • Signature — 11-byte sequence PGCOPY\n\377\r\n\0. Note that the zero byte is a required part of the signature. The signature is designed to allow easy identification of files that have been mangled by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped zero bytes, dropped high bits, or parity changes.

    • Flags field — 32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently, only one flag is defined, and the rest must be zero. Bit 16: if 1, OIDs are included in the data; if 0, not.

    • Header extension area length — 32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with.

  • Tuples — each tuple begins with a 16-bit integer count of the number of fields in the tuple. Presently, all tuples in a table will have the same count, but that might not always be true. Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. The length word does not include itself and can be zero. As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case.

    There is no alignment padding or any other extra data between fields.

    Presently, all data values in a binary-format file are assumed to be in binary format (format code one).

    If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal field except that it is not included in the field-count. In particular, it has a length word — this will allow handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that ever proves desirable.

  • File Trailer — the file trailer consists of a 16-bit integer word containing -1. This is easily distinguished from a tuple’s field-count word. A reader should report an error if a field-count word is not -1 and not the expected number of columns. This provides an extra check against somehow getting out of sync with the data.
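Because the format is data-type specific, a binary unload is best reloaded into a table with exactly matching column types. A minimal round trip might look like this (the table sales and the path are hypothetical):

```sql
-- Unload and reload in binary format; the reading table's
-- column types must match the writing table's exactly
COPY sales TO '/data/unload/sales.bin' WITH (FORMAT 'binary');
COPY sales FROM '/data/unload/sales.bin' WITH (FORMAT 'binary');
```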

Compatibility

There is no COPY statement in the SQL standard.

The following syntax was used before PostgreSQL version 9.0 and is still supported:

COPY <table_name> [(<column_name> [, ...])] FROM {'<filename>' | PROGRAM '<command>' | STDIN}
     [ [WITH]
       [ON SEGMENT]
       [BINARY]
       [OIDS]
       [HEADER]
       [DELIMITER [ AS ] '<delimiter_character>']
       [NULL [ AS ] '<null_string>']
       [ESCAPE [ AS ] '<escape_character>' | 'OFF']
       [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
       [CSV [QUOTE [ AS ] '<quote_character>']
            [FORCE NOT NULL <column_name> [, ...]]
       [FILL MISSING FIELDS]
       [[LOG ERRORS]
       SEGMENT REJECT LIMIT <count> [ROWS | PERCENT] ]

COPY { <table_name> [(<column_name> [, ...])] | (<query>)} TO {'<filename>' | PROGRAM '<command>' | STDOUT}
      [ [WITH]
        [ON SEGMENT]
        [BINARY]
        [OIDS]
        [HEADER]
        [DELIMITER [ AS ] '<delimiter_character>']
        [NULL [ AS ] '<null_string>']
        [ESCAPE [ AS ] '<escape_character>' | 'OFF']
        [CSV [QUOTE [ AS ] '<quote_character>']
             [FORCE QUOTE <column_name> [, ...]] | * ]
      [IGNORE EXTERNAL PARTITIONS ]

Note that in this syntax, BINARY and CSV are treated as independent keywords, not as arguments of a FORMAT option.
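For comparison, a sketch of the same load written in both the legacy and the current syntax (table and path are hypothetical):

```sql
-- Legacy (pre-9.0) syntax: CSV and HEADER as bare keywords
COPY sales FROM '/data/load/sales.csv' WITH CSV HEADER;

-- The equivalent load in the current option syntax
COPY sales FROM '/data/load/sales.csv' WITH (FORMAT 'csv', HEADER true);
```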

See also