mirror of https://github.com/postgres/postgres.git
Our documentation states that our maximum field size is 1 GB, and that our maximum row size is 1.6 TB. However, while this might be attainable in theory with enough contortions, it is not workable in practice; for starters, pg_dump fails to dump tables containing rows larger than 1 GB, even if individual columns are well below the limit; and even if one does manage to manufacture a dump file containing a row that large, the server refuses to load it anyway.

This commit enables dumping and reloading of such tuples, provided two conditions are met:

1. no single column is larger than 1 GB (in output size -- for bytea this includes the formatting overhead)

2. the whole row is not larger than 2 GB

There are three related changes to enable this:

a. StringInfo's API now has two additional functions that allow creating a string that grows beyond the typical 1 GB limit (a "long" string). ABI compatibility is maintained. We still limit these strings to 2 GB, though, for reasons explained below.

b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to emit rows longer than 1 GB.

c. heap_form_tuple now uses the MCXT_ALLOC_HUGE flag in its allocation for the input tuple, which means that large tuples are accepted on input. Note that at this point we do not apply any further limit to the input tuple size.

The main reason to limit to 2 GB is that the FE/BE protocol uses 32-bit length words to describe each row; and because the documentation is ambiguous about their signedness and libpq does treat them as signed, we cannot use the highest-order bit. Additionally, the StringInfo API uses "int" (which is 4 bytes wide on most platforms) in many places, so we would need to change that API too in order to go beyond 2 GB, which would have a lot of fallout.

Backpatch to 9.5, which is the oldest branch that has MemoryContextAllocExtended, a necessary piece of infrastructure. We could apply this to 9.4 with very minimal additional effort, but going any further back would require backpatching "huge" allocations too.

This is the largest set of changes we could find that can be back-patched without breaking compatibility with existing systems. Fixing a bigger set of problems (for example, dumping tuples bigger than 2 GB, or dumping fields bigger than 1 GB) would require changing the FE/BE protocol and/or changing the StringInfo API in an ABI-incompatible way, neither of which would be back-patchable.

Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
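To illustrate point (a), here is a minimal sketch of how output code might use the "long" StringInfo variant. initLongStringInfo() (and its companion makeLongStringInfo()) are the two entry points this commit adds; emit_wide_row() and its parameters are hypothetical names invented for illustration, and everything else is the pre-existing StringInfo API.

```c
/*
 * Sketch only: build one COPY text-format line that may exceed the usual
 * 1 GB palloc cap.  initLongStringInfo() is one of the two functions added
 * by this commit; emit_wide_row() and its parameters are hypothetical.
 */
#include "postgres.h"
#include "lib/stringinfo.h"

static void
emit_wide_row(char **field_data, int *field_len, int nfields)
{
	StringInfoData buf;
	int			i;

	/*
	 * Like initStringInfo(), but the buffer may grow past 1 GB.  It is
	 * still capped at 2 GB, since len/maxlen remain signed ints.
	 */
	initLongStringInfo(&buf);

	for (i = 0; i < nfields; i++)
	{
		if (i > 0)
			appendStringInfoChar(&buf, '\t');	/* COPY text delimiter */
		appendBinaryStringInfo(&buf, field_data[i], field_len[i]);
	}
	appendStringInfoChar(&buf, '\n');

	/* ... hand buf.data / buf.len to the output machinery here ... */

	pfree(buf.data);
}
```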
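Point (c) relies on the "huge" allocation API that first appeared in 9.5. The fragment below is a sketch modeled on, but not copied from, the heap_form_tuple change; allocate_wide_tuple() is a hypothetical stand-in, and the real code lives in src/backend/access/common/heaptuple.c.

```c
/*
 * Sketch of point (c): accept a formed tuple larger than MaxAllocSize.
 * allocate_wide_tuple() is a hypothetical stand-in for the actual change
 * inside heap_form_tuple().
 */
#include "postgres.h"
#include "access/htup_details.h"
#include "utils/memutils.h"

static HeapTuple
allocate_wide_tuple(Size data_len)
{
	HeapTuple	tuple;

	/*
	 * Plain palloc()/MemoryContextAlloc() errors out once a request
	 * reaches MaxAllocSize (1 GB); MCXT_ALLOC_HUGE lifts that cap, so
	 * tuples of up to 2 GB are accepted on input.
	 */
	tuple = (HeapTuple) MemoryContextAllocExtended(CurrentMemoryContext,
												   HEAPTUPLESIZE + data_len,
												   MCXT_ALLOC_HUGE);
	tuple->t_len = (uint32) data_len;
	tuple->t_data = (HeapTupleHeader) ((char *) tuple + HEAPTUPLESIZE);
	return tuple;
}
```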
This directory contains general-purpose data structures for use anywhere in the backend:

binaryheap.c - a binary heap
hyperloglog.c - a streaming cardinality estimator
pairingheap.c - a pairing heap
rbtree.c - a red-black tree
ilist.c - singly- and doubly-linked lists
stringinfo.c - an extensible string type

Aside from the inherent characteristics of the data structures, there are a few practical differences between the binary heap and the pairing heap. The binary heap is fully allocated at creation, and cannot be expanded beyond the allocated size. The pairing heap, on the other hand, has no inherent maximum size, but the caller needs to allocate each element being stored in the heap, whereas the binary heap works with plain Datums or pointers.

The linked lists in ilist.c can be embedded directly into other structs, as opposed to the List interface in nodes/pg_list.h.
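To make that last point concrete, here is a minimal sketch of the embedded lists from ilist.c. The dlist_* names are the lib/ilist.h API; MyTask, pending, add_task(), and run_tasks() are invented for illustration. Because the link node lives inside the caller's struct, no separate cons cell needs to be allocated, unlike with nodes/pg_list.h's List.

```c
/*
 * Sketch: a doubly-linked list whose links are embedded in the caller's
 * struct.  Only the dlist_* API is real; the rest is illustrative.
 */
#include "postgres.h"
#include "lib/ilist.h"

typedef struct MyTask
{
	int			task_id;
	dlist_node	node;			/* embedded list link */
} MyTask;

/* a statically initialized, empty list */
static dlist_head pending = DLIST_STATIC_INIT(pending);

static void
add_task(MyTask *task)
{
	dlist_push_tail(&pending, &task->node);
}

static void
run_tasks(void)
{
	dlist_iter	iter;

	dlist_foreach(iter, &pending)
	{
		/* recover the containing struct from the embedded node */
		MyTask	   *task = dlist_container(MyTask, node, iter.cur);

		elog(DEBUG1, "running task %d", task->task_id);
	}
}
```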