Commit graph

63677 commits

Michael Paquier
ded9754804 Add missing deflateEnd() for server-side gzip base backups
The gzip basebackup sink called deflateInit2() in begin_archive() but
never called deflateEnd(), leaking zlib's internal compression state
(~256KB per archive) until the memory context of the base backup is
destroyed.

The code tree already has a matching deflateEnd() call for each
deflateInit[2]() call (pgcrypto, etc.), except for the file touched in
this commit, so this brings more consistency across all the compression
methods.  The server-side LZ4 and zstd implementations require a
dedicated cleanup callback as they allocate their state outside the
context of a palloc().

As currently used, deflateInit2() is called once per tablespace in a
single backup.  Memory would bloat slightly only when dealing with many
tablespaces at once, not across multiple base backups, so this is not
worth a backpatch.  This change could matter for future uses of this
code.

zlib allows the definition of memory allocation and free callbacks in
the z_stream object given to deflateInit[2]().  The base backup
backend code relies on palloc() for the allocations, and deflateEnd()
internally only cleans up memory (no fd allocation for example).

Author: Jianghua Yang <yjhjstz@gmail.com>
Discussion: https://postgr.es/m/CAAZLFmQNJ0QNArpWEOZXwv=vbumcWKEHz-b1me5gBqRqG67EwQ@mail.gmail.com
2026-03-23 09:04:44 +09:00
Tom Lane
69c57466a7 Fix another buglet in archive_waldump.c.
While re-reading 860359ea0, I noticed another problem: when
spilling to a temp file, it did not bother to check the result
of fclose().  This is bad since write errors (like ENOSPC)
may not be reported until close time.
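
A minimal sketch of the pattern the fix enforces (the function name is
hypothetical, not the actual archive_waldump.c code):

```c
#include <assert.h>
#include <stdio.h>

/* Write data to a spill file, checking fclose() as well as the writes:
 * stdio buffers output, so errors such as ENOSPC may only surface when
 * the buffer is flushed at close time. */
static int
write_spill_file(const char *path, const void *data, size_t len)
{
	FILE	   *fp = fopen(path, "wb");

	if (fp == NULL)
		return -1;
	if (fwrite(data, 1, len, fp) != len)
	{
		fclose(fp);
		return -1;
	}
	if (fclose(fp) != 0)		/* deferred write errors are reported here */
		return -1;
	return 0;
}
```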
2026-03-22 18:48:38 -04:00
Tom Lane
860359ea02 Fix assorted bugs in archive_waldump.c.
1. archive_waldump.c never called astreamer_finalize().  This meant
that any data retained in decompression buffers at the moment we
detect archive EOF would never reach astreamer_waldump_content(),
resulting in surprising failures if we actually need the last few
bytes of the archive file.

To fix that, make read_archive_file() do the finalize once it detects
EOF.  Change its API to return a boolean "yes there's more data"
rather than the entirely-misleading raw count of bytes read.

2. init_archive_reader() relied on privateInfo->cur_file to track
which WAL segment was being read, but cur_file can become NULL if a
member trailer is processed during a read_archive_file() call.  This
could cause unreproducible "could not find WAL in archive" failures,
particularly with compressed archives where all the WAL data fits in
a small number of compressed bytes.

Fix by scanning the hash table after each read to find any cached
WAL segment with sufficient data, instead of depending on cur_file.
Also reduce the minimum data requirement from XLOG_BLCKSZ to
sizeof(XLogLongPageHeaderData), since we only need the long page
header to extract the segment size.

We likewise need to fix init_archive_reader() to scan the whole
hash table for irrelevant entries, since we might have already
loaded more than one entry when the data is compressible enough.

3. get_archive_wal_entry() relied on tracking cur_file to identify
WAL hash table entries that need to be spilled to disk.  However,
this can't work for entries that are read completely within a
single read_archive_file call: the caller will never see cur_file
pointing at such an entry.  Instead, scan the WAL hash table to
find entries we should spill.  This also fixes a buglet whereby any
hash table entries completely loaded during init_archive_reader
were never considered for spilling.

Also, simplify the logic tremendously by not attempting to spill
entries that haven't been read fully.  I am not convinced that the old
logic handled that correctly in every path, and it's really not worth
the complication and risk of bugs to try to spill entries on the fly.
We can just write them in a single go once they are no longer the
cur_file.

4. Fix a rather critical performance problem: the code assumed that
resetStringInfo() would reclaim storage, but it doesn't.  So by the
end of the run we'd have consumed storage space equal to the total
amount of WAL read, negating all the effort of the spill logic.

Also document the contract that cur_file can change (or become NULL)
during a single read_archive_file() call, since the decompression
pipeline may produce enough output to trigger multiple astreamer
callbacks.
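
The control flow established by fix #1 can be sketched with a toy
reader (illustrative only; the real code uses the astreamer API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct SketchReader
{
	const char *src;			/* the raw "archive" input */
	size_t		len;
	size_t		pos;
	char		retained[16];	/* bytes buffered but not yet delivered */
	size_t		nretained;
	char		out[64];		/* everything the consumer has received */
	size_t		nout;
} SketchReader;

/* Flush any bytes still held back; without this step the tail of the
 * stream never reaches the consumer. */
static void
sketch_finalize(SketchReader *r)
{
	memcpy(r->out + r->nout, r->retained, r->nretained);
	r->nout += r->nretained;
	r->nretained = 0;
}

/* Returns true while more data may follow; on EOF it runs the finalize
 * step itself (the step the buggy code omitted) and returns false. */
static bool
sketch_read(SketchReader *r, size_t chunk)
{
	if (r->pos >= r->len)
	{
		sketch_finalize(r);
		return false;
	}
	if (chunk > r->len - r->pos)
		chunk = r->len - r->pos;
	/* stage the new bytes behind whatever was already retained */
	memcpy(r->retained + r->nretained, r->src + r->pos, chunk);
	r->nretained += chunk;
	r->pos += chunk;
	/* deliver all but the last byte, mimicking a decompressor that must
	 * hold back data it cannot emit yet */
	memcpy(r->out + r->nout, r->retained, r->nretained - 1);
	r->nout += r->nretained - 1;
	r->retained[0] = r->retained[r->nretained - 1];
	r->nretained = 1;
	return true;
}
```

Without the finalize call in the EOF branch, the last retained byte
would be lost, which is exactly the failure mode described above.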

Author: Tom Lane <tgl@sss.pgh.pa.us>
Co-authored-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us
2026-03-22 18:24:42 -04:00
Tom Lane
5868372bbf Remove nonfunctional tar file trailer size check.
The ASTREAMER_ARCHIVE_TRAILER case in astreamer_tar_parser_content()
intended to reject tar files whose trailer exceeded 2 blocks.  However,
the check compared 'len' after astreamer_buffer_bytes() had already
consumed all the data and set len to 0, so the pg_fatal() could never
fire.

Moreover, per the POSIX specification for the ustar format, the last
physical block of a tar archive is always full-sized, and "logical
records after the two zero logical records may contain undefined data."
GNU tar, for example, zero-pads its output to a 10kB boundary by
default.  So rejecting extra data after the two zero blocks would be
wrong even if the check worked.  (But if the check had worked, it
would have alerted us to the bug just fixed in 9aa1fcc54.)

Remove the dead check and update the comment to explain why trailing
data is expected and harmless.
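
For illustration, the end-of-archive condition amounts to spotting two
all-zero 512-byte blocks; this helper (hypothetical, not PostgreSQL
code) shows the check, after which any trailing padding must simply be
tolerated:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define TAR_BLOCK_SIZE 512

/* A ustar archive ends with two all-zero 512-byte blocks; per POSIX the
 * bytes after them are undefined, so a reader detects the zero blocks
 * and then ignores whatever follows. */
static bool
is_zero_block(const unsigned char *block)
{
	for (size_t i = 0; i < TAR_BLOCK_SIZE; i++)
	{
		if (block[i] != 0)
			return false;
	}
	return true;
}
```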

Per report from Tom Lane.

Author: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us
2026-03-22 18:13:41 -04:00
Tom Lane
9aa1fcc547 Fix finalization of decompressor astreamers.
Send the correct amount of data to the next astreamer, not the
whole allocated buffer size.  This bug escaped detection because
in present uses the next astreamer is always a tar-file parser
which is insensitive to trailing garbage.  But that may not
be true in future uses.

Author: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us
Backpatch-through: 15
2026-03-22 18:06:48 -04:00
Peter Geoghegan
e5836f7b7d Add fake LSN support to hash index AM.
Use fake LSNs in all hash AM critical sections that write a WAL record.
This gives us a reliable way (a way that works during scans of both
logged and unlogged relations) to detect when an index page was
concurrently modified during the window between when the page is
initially read (by _hash_readpage) and when the page has any known-dead
items LP_DEAD-marked (by _hash_kill_items).

Preparation for an upcoming patch that makes the hash index AM use the
amgetbatch interface, enabling I/O prefetching during hash index scans.

The amgetbatch design imposes certain rules on index AMs with respect to
how they hold on to index page buffer pins (at least in the case of pins
held as an interlock against unsafe concurrent TID recycling by VACUUM).
These rules have consequences for routines that set LP_DEAD bits on
index tuples from an amgetbatch index AM: such routines have an inherent
need to reason about concurrent TID recycling by VACUUM, but can no
longer rely on their amgettuple routine holding on to a buffer pin
(during the aforementioned window) as an interlock against such
recycling.  Instead, they have to follow a new, standardized approach.

The new approach taken by amgetbatch index AMs when setting LP_DEAD bits
is heavily based on the current nbtree dropPin design, which was added
by commit 2ed5b87f.  It also works by checking if the page's LSN
advanced during the window where unsafe concurrent TID recycling might
have taken place.

This commit is similar to commit 8a879119, which taught nbtree to use
fake LSNs to improve its dropPin behavior.  However, unlike that commit,
this is not an independently useful enhancement, since hash doesn't
implement anything like nbtree's dropPin behavior (not yet).

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzkehuhxyuA8quc7rRN3EtNXpiKsjPfO8mhb+0Dr2K0Dtg@mail.gmail.com
2026-03-22 17:31:43 -04:00
Melanie Plageman
01b7e4a46d Add pruning fast path for all-visible and all-frozen pages
Because of the SKIP_PAGES_THRESHOLD optimization or a stale prune XID,
heap_page_prune_and_freeze() can be invoked for pages with no pruning or
freezing work to do. To avoid this, if a page is already all-frozen or
it is all-visible and no freezing will be attempted, exit early. We
can't exit early if vacuum passed DISABLE_PAGE_SKIPPING, though.

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/bqc4kh5midfn44gnjiqez3bjqv4zogydguvdn446riw45jcf3y%404ez66il7ebvk
2026-03-22 15:46:50 -04:00
Peter Geoghegan
f026fbf059 Make IndexScanInstrumentation a pointer in executor scan nodes.
Change the IndexScanInstrumentation fields in IndexScanState,
IndexOnlyScanState, and BitmapIndexScanState from inline structs to
pointers.  This avoids additional space overhead whenever new fields are
added to IndexScanInstrumentation in the future, at least in the common
case where the instrumentation isn't used (i.e. when the executor node
isn't being run through an EXPLAIN ANALYZE).

Preparation for an upcoming patch series that will add index
prefetching.  The new slot-based interface that will enable index
prefetching necessitates that we add at least one more field to
IndexScanInstrumentation (to count heap fetches during index-only
scans).

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wz=g=JTSyDB4UtB5su2ZcvsS7VbP+ZMvvaG6ABoCb+s8Lw@mail.gmail.com
2026-03-22 13:20:29 -04:00
Melanie Plageman
4f7ecca84d Detect and fix visibility map corruption in more cases
Move VM corruption detection and repair into heap page pruning. This
allows VM repair during on-access pruning, not only during vacuum.

Also, expand corruption detection to cover pages marked all-visible that
contain dead tuples and tuples inserted or deleted by in-progress
transactions, rather than only all-visible pages with LP_DEAD items.

Pinning the correct VM page before on-access pruning is cheap when
compared to the cost of actually pruning. The vmbuffer is saved in the
scan descriptor, so a query should only need to pin each VM page once,
and a single VM page covers a large number of heap pages.
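
Back-of-envelope math behind that last claim, assuming the default
8192-byte block size and 2 visibility-map bits per heap page (and
ignoring the small page header):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed defaults: 8192-byte blocks, 2 VM bits per heap page.  These
 * match stock PostgreSQL but are hardcoded here for illustration. */
#define SKETCH_BLCKSZ 8192
#define VM_BITS_PER_HEAP_PAGE 2

static uint64_t
heap_bytes_covered_by_one_vm_page(void)
{
	/* ignoring the page header: 8192 bytes * 8 bits / 2 bits per page */
	uint64_t	heap_pages = (uint64_t) SKETCH_BLCKSZ * 8 / VM_BITS_PER_HEAP_PAGE;

	return heap_pages * SKETCH_BLCKSZ;	/* 32768 heap pages, i.e. 256MB */
}
```

So one pinned VM page covers roughly a quarter gigabyte of heap, which
is why the once-per-scan pin is cheap relative to pruning.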

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/bqc4kh5midfn44gnjiqez3bjqv4zogydguvdn446riw45jcf3y%404ez66il7ebvk
2026-03-22 11:52:40 -04:00
Heikki Linnakangas
516310ed4d Don't reset 'latest_page_number' when replaying multixid truncation
'latest_page_number' is set to the correct value, according to
nextOffset, early at system startup.  Contrary to the comment, it
should hence already be set up correctly by the time we get to WAL
replay.

This was committed to back-branches earlier already (commit
817f74600d), to fix a bug in a backwards-compatibility codepath. We
don't have that bug on 'master', but the change nevertheless makes
sense on 'master' too.

Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/20260214090150.GC2297@p46.dedyn.io;lightning.p46.dedyn.io
Discussion: https://www.postgresql.org/message-id/e1787b17-dc93-4621-a5a1-c713d1ac6a1b@iki.fi
2026-03-22 14:23:54 +02:00
Michael Paquier
1f7947a48d Add test for single-page VACUUM of hash index on INSERT
_hash_vacuum_one_page() in hashinsert.c is a routine related to hash
indexes that can perform a single-page VACUUM when dead tuples are
detected during index insertion.  This routine previously had no test
coverage, and this commit adds a test case for that purpose.

To safely create dead tuples in a way that works with parallel tests,
this uses a technique based on a rolled-back INSERT, following a
suggestion by Heikki Linnakangas.

Author: Alexander Kuzmenkov <akuzmenkov@tigerdata.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CALzhyqxrc1ZHYmf5V8NE+yMboqVg7xZrQM7K2c7VS0p1v8z42w@mail.gmail.com
2026-03-22 15:24:33 +09:00
Michael Paquier
322bab7974 Move declarations related to locktags from lock.h to new locktag.h
This commit moves all the declarations related to locktags from lock.h
to a new header called locktag.h.  This header is useful so that code
paths that care about locktags but not the lock hashtable can use them
without having to include lock.h and its full set of dependencies.

This move includes the basic locktag structures and the set of macros to
fill in the locktag fields before attempting to acquire a lock.

Based on a suggestion from me, made while discussing a different
feature.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/abufUya2oK-_PJ3E@paquier.xyz
2026-03-21 14:34:47 +09:00
Tom Lane
ce8d5fe0e2 plpgsql: optimize "SELECT simple-expression INTO var".
Previously, we always fed SELECT ... INTO to the SPI machinery.
While that works for all cases, it's a great deal slower than
the otherwise-equivalent "var := expression" if the expression
is "simple" and the INTO target is a single variable.  Users
coming from MSSQL or T-SQL are likely to be surprised by this;
they are used to writing SELECT ... INTO since there is no
"var := expression" syntax in those dialects.  Hence, check for
a simple expression and use the faster code path if possible.

(Here, "simple" means whatever exec_is_simple_query accepts,
which basically means "SELECT scalar-expression" without any
input tables, aggregates, qual clauses, etc.)

This optimization is not entirely transparent.  Notably, one of
the reasons it's faster is that the hooks that pg_stat_statements
uses aren't called in this path, so that the evaluated expression
no longer appears in pg_stat_statements output as it did before.
There may be some other minor behavioral changes too, although
I tried hard to make error reporting look the same.  Hopefully,
none of them are significant enough to be unacceptable as
routine changes in a PG major version.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/CAFj8pRDieSQOPDHD_svvR75875uRejS9cN87FoAC3iXMXS1saQ@mail.gmail.com
2026-03-20 18:23:45 -04:00
Jeff Davis
4a0b46b6e1 Fix dependency on FDW's connection function.
Missed in commit 8185bb5347.

Catalog version bump.

Discussion: https://postgr.es/m/fd49b44dc65da8e71ab20c1cf1ec7e65921c20f5.camel@j-davis.com
2026-03-20 12:42:59 -07:00
Andrew Dunstan
b3cf461b3c pg_verifybackup: Enable WAL parsing for tar-format backups
Now that pg_waldump supports reading WAL from tar archives, remove the
restriction that forced --no-parse-wal for tar-format backups.

pg_verifybackup now automatically locates the WAL archive: it looks for
a separate pg_wal.tar first, then falls back to the main base.tar.  A
new --wal-path option (replacing the old --wal-directory, which is kept
as a silent alias) accepts either a directory or a tar archive path.

The default WAL directory preparation is deferred until the backup
format is known, since tar-format backups resolve the WAL path
differently from plain-format ones.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
b15c151398 pg_waldump: Add support for reading WAL from tar archives
pg_waldump can now accept the path to a tar archive (optionally
compressed with gzip, lz4, or zstd) containing WAL files and decode
them.  This was added primarily for pg_verifybackup, which previously
had to skip WAL parsing for tar-format backups.

The implementation uses the existing archive streamer infrastructure
with a hash table to track WAL segments read from the archive.  If WAL
files within the archive are not in sequential order, out-of-order
segments are written to a temporary directory (created via mkdtemp under
$TMPDIR or the archive's directory) and read back when needed.  An
atexit callback ensures the temporary directory is cleaned up.

The --follow option is not supported when reading from a tar archive.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
Discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
f8a0cd2671 pg_waldump: Preparatory refactoring for tar archive WAL decoding.
Several refactoring steps in preparation for adding tar archive WAL
decoding support to pg_waldump:

- Move XLogDumpPrivate and related declarations into a new pg_waldump.h
  header, allowing a second source file to share them.

- Factor out required_read_len() so the read-size calculation can be
  reused for both regular WAL files and tar-archived WAL.

- Move the WAL segment size variable into XLogDumpPrivate and rename it
  to segsize, making it accessible to the archive streamer code.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Andrew Dunstan
c8a350a439 Move tar detection and compression logic to common.
Consolidate tar archive identification and compression-type detection
logic into a shared location. Currently used by pg_basebackup and
pg_verifybackup, this functionality is also required for upcoming
pg_waldump enhancements.

This change promotes code reuse and simplifies maintenance across
frontend tools.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
Discussion: https://postgr.es/m/CAAJ_b94bqdWN3h2J-PzzzQ2Npbwct5ZQHggn_QoYGhC2rn-=WQ@mail.gmail.com
2026-03-20 15:31:35 -04:00
Nathan Bossart
48f11bfa06 Bump transaction/multixact ID warning limits to 100M.
These warning limits were last changed to 40M by commit cd5e82256d.
For the benefit of workloads that rapidly consume transactions or
multixacts, this commit bumps the limits to 100M.  This will
hopefully give users enough time to react.

Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Discussion: https://postgr.es/m/aRdhSSFb9zZH_0zc%40nathan
2026-03-20 14:15:33 -05:00
Nathan Bossart
e646450e60 Add percentage of available IDs to wraparound warnings.
This commit adds DETAIL messages to the existing wraparound
WARNINGs that include the percentage of transaction/multixact IDs
that remain available for use.  The hope is that this more clearly
expresses the urgency of the situation.
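
The arithmetic behind such a DETAIL line might look like this
(illustrative only, assuming roughly 2^31 usable IDs in total; the real
warning code differs in its details):

```c
#include <assert.h>
#include <stdint.h>

/* Percentage of transaction/multixact IDs still available, assuming a
 * theoretical total of about 2^31 usable IDs before wraparound. */
static int
percent_ids_remaining(uint32_t ids_remaining)
{
	const uint64_t total_usable = UINT64_C(1) << 31;

	return (int) ((uint64_t) ids_remaining * 100 / total_usable);
}
```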

Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Discussion: https://postgr.es/m/aRdhSSFb9zZH_0zc%40nathan
2026-03-20 14:15:33 -05:00
Tom Lane
733f20df53 Discount the metapage when estimating number of index pages visited.
genericcostestimate() estimates the number of index leaf pages to
be visited as a pro-rata fraction of the total number of leaf pages.
Or at least that was the intention.  What it actually used in the
calculation was the total number of index pages, so that non-leaf
pages were also counted.  In a decent-sized index the error is
probably small, since we expect upper page fanout to be high.
But in a small index that's not true; in the worst case with one
data-bearing page plus a metapage, we had 100% relative error.
This led to surprising planning choices such as not using a small
partial index.

To fix, ask genericcostestimate's caller to supply an estimate of
the number of non-leaf pages, and subtract that.  For the built-in
index AMs, it seems sufficient to count the index metapage (if the
AM uses one) as non-leaf.  Per the above argument, counting upper
index pages shouldn't change the estimate much, and in most cases
we don't have any easy way of estimating the number of upper pages.
This might be an area for further research in future.
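
The corrected pro-rata calculation, in illustrative form (the real
genericcostestimate() applies rounding and further adjustments):

```c
#include <assert.h>

/* Visit a pro-rata fraction of the *leaf* pages, found by subtracting
 * the caller-supplied non-leaf page count (for built-in AMs, just the
 * metapage) from the index's total page count. */
static double
leaf_pages_visited(double selectivity, double num_pages,
				   double num_nonleaf_pages)
{
	double		leaf_pages = num_pages - num_nonleaf_pages;

	if (leaf_pages < 1.0)
		leaf_pages = 1.0;
	return selectivity * leaf_pages;
}
```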

Any external genericcostestimate callers that do not set the new field
GenericCosts.numNonLeafPages will see the same behavior as before,
assuming they followed the advice to zero out that whole struct.

Unsurprisingly, this change affects a number of plans seen in the
core regression tests.  I hacked up the existing tests to keep the
tests' plans the same, since in each case it appeared that the
test's intent was to test exactly that plan.  Also add one new
test case demonstrating that a better index choice is now made.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Henson Choi <assam258@gmail.com>
Discussion: https://postgr.es/m/870521.1745860752@sss.pgh.pa.us
2026-03-20 14:50:53 -04:00
Alexander Korotkov
07b7a964d3 Fix self-join removal to update bare Var references in join clauses
Self-join removal failed to update Var nodes when the join clause was a
bare Var (e.g., ON t1.bool_col) rather than an expression containing
Vars.  ChangeVarNodesWalkExpression() used expression_tree_walker(),
which descends into child nodes but does not process the top-level node
itself.  When a bare Var referencing the removed relation appeared as
the clause, its varno was left unchanged, leading to "no relation entry
for relid N" errors.

Fix by calling ChangeVarNodes_walker() directly instead of
expression_tree_walker(), so the top-level node is also processed.

Bug: #19435
Reported-by: Hang Ammmkilo <ammmkilo@163.com>
Author: Andrei Lepikhov <lepihov@gmail.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/19435-3cc1a87f291129f1%40postgresql.org
Backpatch-through: 18
2026-03-20 15:46:30 +02:00
Álvaro Herrera
e7975f1c06 SET NOT NULL: Call object-alter hook only after the catalog change
... otherwise, the function invoked by the hook might consult the
catalog and not see that the new constraint exists.

This relies on set_attnotnull doing CommandCounterIncrement()
after successfully modifying the catalog.

Oversight in commit 14e87ffa5c.

Author: Artur Zakirov <zaartur@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAKNkYnxUPCJk-3Xe0A3rmCC8B8V8kqVJbYMVN6ySGpjs_qd7dQ@mail.gmail.com
2026-03-20 14:38:50 +01:00
Robert Haas
12444183e4 test_plan_advice: Set TAP test priority 50 in meson.build.
Since this runs the main regression tests, it can take some time to
complete. Therefore, it's better to start it earlier, as we also do
for the main regression test suite.

Author: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: http://postgr.es/m/1095d3fe-a6eb-4d83-866e-649d6f369908@gmail.com
2026-03-20 08:41:38 -04:00
Andrew Dunstan
4c0390ac53 Add option force_array for COPY JSON FORMAT
This adds the force_array option, which is available exclusively
when using COPY TO with the JSON format.

When enabled, this option wraps the output in a top-level JSON array
(enclosed in square brackets with comma-separated elements), making the
entire result a valid single JSON value.  Without this option, the
default behavior is to output a stream of independent JSON objects.

Attempting to use this option with COPY FROM or with formats other than
JSON will raise an error.

Author: Joe Conway <mail@joeconway.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com
Discussion: https://postgr.es/m/6a04628d-0d53-41d9-9e35-5a8dc302c34c@joeconway.com
2026-03-20 08:40:17 -04:00
Andrew Dunstan
7dadd38cda json format for COPY TO
This introduces the JSON format option for the COPY TO command, allowing
users to export query results or table data directly as a stream of JSON
objects (one per line, NDJSON style).

The JSON format is currently supported only for COPY TO operations; it
is not available for COPY FROM.

JSON format is incompatible with some standard text/CSV formatting
options, including HEADER, DEFAULT, NULL, DELIMITER, FORCE QUOTE,
FORCE NOT NULL, and FORCE NULL.

Column list support is included: when a column list is specified, only
the named columns are emitted in each JSON object.

Regression tests covering valid JSON exports and error handling for
incompatible options have been added to src/test/regress/sql/copy.sql.

Author: Joe Conway <mail@joeconway.com>
Author: jian he <jian.universality@gmail.com>
Co-Authored-By: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Andrey M. Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Davin Shearer <davin@apache.org>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com
Discussion: https://postgr.es/m/6a04628d-0d53-41d9-9e35-5a8dc302c34c@joeconway.com
2026-03-20 08:40:04 -04:00
Andrew Dunstan
a2145605ee introduce CopyFormat, refactor CopyFormatOptions
Currently, the COPY command format is determined by two boolean fields
(binary, csv_mode) in CopyFormatOptions.  This approach, while
functional, isn't ideal for implementing other formats in the future.

To simplify adding new formats, introduce a CopyFormat enum.  This makes
the code cleaner and more maintainable, allowing for easier integration
of additional formats down the line.

Author: Joel Jacobson <joel@compiler.org>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com
Discussion: https://postgr.es/m/6a04628d-0d53-41d9-9e35-5a8dc302c34c@joeconway.com
2026-03-20 08:21:57 -04:00
Peter Eisentraut
040a56be4b Cleanup users and roles in graph_table_rls test
This test leaves behind the roles and users it creates.  The
002_pg_upgrade test dumps and restores the regression database when
PG_TEST_EXTRA contains regress_dump_restore.  Global objects such
as users and roles are not dumped by pg_dump, but it still dumps the
policies associated with users, and commands to set the ownership.
Restoring these policies and ownerships fails since the users and
roles do not exist.  To fix this failure we could use --no-owner, but
that does not exclude the policy objects associated with users.  Hence,
drop the users and roles, and the policies that depend upon them, at
the end of the test.

Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/a855795d-e697-4fa5-8698-d20122126567@eisentraut.org
2026-03-20 10:55:51 +01:00
Peter Eisentraut
57ee397953 Update Unicode data to Unicode 17.0.0
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Alexander Borisov <lex.borisov@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/2a668979-ed92-49a3-abf9-a3ec2d460ec2%40eisentraut.org
2026-03-20 08:42:50 +01:00
Amit Kapila
493f8c6439 Add support for EXCEPT TABLE in ALTER PUBLICATION.
Following commit fd366065e0, which added EXCEPT TABLE support to
CREATE PUBLICATION, this commit extends ALTER PUBLICATION to allow
modifying the exclusion list.

New Syntax:
ALTER PUBLICATION name SET publication_all_object [, ... ]

where publication_all_object is one of:
ALL TABLES [ EXCEPT TABLE ( except_table_object [, ... ] ) ]
ALL SEQUENCES

If the EXCEPT clause is provided, the existing exclusion list in
pg_publication_rel is replaced with the specified relations.  If the
EXCEPT clause is omitted, any existing exclusions for the publication
are cleared.  Similarly, SET ALL SEQUENCES updates the corresponding
sequence state.

Note that because this is a SET command, specifying only one object
type (e.g., SET ALL SEQUENCES) will reset the other unspecified flags
(e.g., setting puballtables to false).

Consistent with CREATE PUBLICATION, only root partitioned tables or
standard tables can be specified in the EXCEPT list. Specifying a
partition child will result in an error.

Author: vignesh C <vignesh21@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/CALDaNm3=JrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh=tamA@mail.gmail.com
2026-03-20 11:36:09 +05:30
David Rowley
07d5bffe75 Fix new tuple deforming code so it can support cstrings again
In c456e3911, I mistakenly thought that the deformer code would never
see cstrings and that I could use pg_assume() to have the compiler omit
producing code for attlen == -2 attributes.  That saves bloating the
deforming code a bit with the extra check and strlen() call.  While this
is ok to do for tuples from the heap, it's not ok to do for
MinimalTuples as those *can* contain cstrings and
tts_minimal_getsomeattrs() implements deforming by inlining the
(slightly misleadingly named) slot_deform_heap_tuple() code.

To fix, add a new parameter to the slot_deform_heap_tuple() and have the
callers define which code to inline.  Because this new parameter is
passed as a const, the compiler can choose to emit or not emit the
cstring-related code based on the parameter's value.
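
The technique can be sketched as follows (names hypothetical, not the
actual deforming code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* The shared inline helper takes a const flag, and each caller passes a
 * compile-time constant, so the compiler can drop the attlen == -2
 * (cstring) branch, and its strlen() call, in the heap-tuple path while
 * keeping it for MinimalTuples. */
static inline size_t
attr_size(const bool may_have_cstring, int attlen, const char *datum)
{
	if (may_have_cstring && attlen == -2)
		return strlen(datum) + 1;	/* cstring: length comes from the data */
	return (size_t) attlen;			/* fixed-length attribute */
}

static size_t
heap_attr_size(int attlen, const char *datum)
{
	return attr_size(false, attlen, datum);		/* cstring code elided */
}

static size_t
minimal_attr_size(int attlen, const char *datum)
{
	return attr_size(true, attlen, datum);		/* cstring code retained */
}
```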

Author: David Rowley <dgrowleyml@gmail.com>
Reported-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/CAHewXNmSK+gKziAt_WvQoMVWt3_LRVMmRYY9dAbMPMcpPV0QmA@mail.gmail.com
2026-03-20 14:16:06 +13:00
Jeff Davis
703fee3b25 Fix dependency on FDW handler.
ALTER FOREIGN DATA WRAPPER could drop the dependency on the handler
function if it wasn't explicitly specified.

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/35c44a4b7fb76d35418c4d66b775a88f4ce60c86.camel@j-davis.com
Backpatch-through: 14
2026-03-19 15:07:43 -07:00
Masahiko Sawada
adcdbe9386 Add parallel vacuum worker usage to VACUUM (VERBOSE) and autovacuum logs.
This commit adds both the number of parallel workers planned and the
number of parallel workers actually launched to the output of
VACUUM (VERBOSE) and autovacuum logs.

Previously, this information was only reported as an INFO message
during VACUUM (VERBOSE), which meant it was not included in autovacuum
logs in practice. Although autovacuum does not yet support parallel
vacuum, a subsequent patch will enable it and utilize these logs in
its regression tests. This change also improves observability by
making it easier to verify whether parallel vacuum is utilizing the
expected number of workers.

Author: Daniil Davydov <3danissimo@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CACG=ezZOrNsuLoETLD1gAswZMuH2nGGq7Ogcc0QOE5hhWaw=cw@mail.gmail.com
2026-03-19 15:01:47 -07:00
Masahiko Sawada
ba21f5bf8a Allow explicit casting between bytea and uuid.
This enables the use of functions such as encode() and decode() with
UUID values, allowing them to be converted to and from alternative
formats like base64 or hex.

The cast maps the 16-byte internal representation of a UUID directly
to a bytea datum. This is more efficient than going through a text
representation.

Bump catalog version.

Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Co-authored-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAJ7c6TOramr1UTLcyB128LWMqita1Y7%3Darq3KHaU%3Dqikf5yKOQ%40mail.gmail.com
2026-03-19 13:51:50 -07:00
Tom Lane
1811f1af98 Improve hash join's handling of tuples with null join keys.
In a plain join, we can just summarily discard an input tuple
with null join key(s), since it cannot match anything from
the other side of the join (assuming a strict join operator).
However, if the tuple comes from the outer side of an outer join
then we have to emit it with null-extension of the other side.

Up to now, hash joins did that by inserting the tuple into the hash
table as though it were a normal tuple.  This is unnecessarily
inefficient though, since the required processing is far simpler than
for a potentially-matchable tuple.  Worse, if there are a lot of such
tuples they will bloat the hash bucket they go into, possibly causing
useless repeated attempts to split that bucket or increase the number
of batches.  We have a report of a large join vainly creating many
thousands of batches when faced with such input.

This patch improves the situation by keeping such tuples out of the
hash table altogether, instead pushing them into a separate tuplestore
from which we return them later.  (One might consider trying to return
them immediately; but that would require substantial refactoring, and
it doesn't work anyway for cases where we rescan an unmodified hash
table.)  This works even in parallel hash joins, because whichever
worker reads a null-keyed tuple can just return it; there's no need
for consultation with other workers.  Thus the tuplestores are local
storage even in a parallel join.

A pre-existing buglet that I noticed while analyzing the code's
behavior is that ExecHashRemoveNextSkewBucket fails to decrement
hashtable->skewTuples for tuples moved into the main hash table
from the skew hash table.  This invalidates ExecHashTableInsert's
calculation of the number of main-hash-table tuples, though probably
not by a lot since we expect the skew table to be small relative
to the main one.  Nonetheless, let's fix that too while we're here.

Bug: #18909
Reported-by: Sergey Koposov <Sergey.Koposov@ed.ac.uk>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/3061845.1746486714@sss.pgh.pa.us
2026-03-19 15:21:36 -04:00
Tom Lane
8b02c22bb4 Avoid leaking duplicated file descriptors in corner cases.
pg_dump's compression modules had variations on the theme of

	fp = fdopen(dup(fd), mode);
	if (fp == NULL)
	    // fail, reporting errno

which is problematic for two reasons.  First, if dup() succeeds but
fdopen() fails, we'd leak the duplicated FD.  That's not important
at present since the program will just exit immediately after failure
anyway; but perhaps someday we'll try to continue, making the resource
leak potentially significant.  Second, if dup() fails then fdopen()
will overwrite the useful errno (perhaps EMFILE) with a misleading
value EBADF, making it difficult to understand what went wrong.
Fix both issues by testing for dup() failure before proceeding to
the next call.

These failures are sufficiently unlikely, and the consequences minor
enough, that this doesn't seem worth the effort to back-patch.
But let's fix it in HEAD.

Author: Jianghua Yang <yjhjstz@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/62bbe34d-2315-4b42-b768-56d901aa83e1@gmail.com
2026-03-19 14:25:26 -04:00
Nathan Bossart
dd1398f137 Allow choosing specific grantors via GRANT/REVOKE ... GRANTED BY.
Except for GRANT and REVOKE on roles, the GRANTED BY clause
currently only accepts the current role to match the SQL standard.
And even if an acceptable grantor (i.e., the current role) is
specified, Postgres ignores it and chooses the "best" grantor for
the command.  Letting the user select a specific grantor allows
better control over the precise behavior of GRANT/REVOKE
statements.  This commit adds that ability.  For consistency with
select_best_grantor(), we only permit choosing grantor roles for
which the current role inherits privileges.

Author: Nathan Bossart <nathandbossart@gmail.com>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aRYLkTpazxKhnS_w%40nathan
2026-03-19 11:41:39 -05:00
Robert Haas
6f0738ddec dshash: Make it possible to suppress out of memory errors
Introduce dshash_find_or_insert_extended, which is just like
dshash_find_or_insert except that it takes a flags argument.
Currently, the only supported flag is DSHASH_INSERT_NO_OOM, but
I have chosen to use an integer rather than a boolean in case we
end up with more flags in the future.

Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: http://postgr.es/m/CA+TgmoaJwUukUZGu7_yL74oMTQQz2=zqucMhF9+9xBmSC5us1w@mail.gmail.com
2026-03-19 11:51:17 -04:00
Tom Lane
5a2043bf71 Fix transient memory leakage in jsonpath evaluation.
This patch reimplements JsonValueList to be more space-efficient
and arranges for temporary JsonValueLists created during jsonpath
evaluation to be freed when no longer needed, rather than being
leaked till the end of the function evaluation cycle as before.

The motivation is to prevent indefinite memory bloat while
evaluating jsonpath expressions that traverse a lot of data.
As an example, this query
  SELECT
    jsonb_path_query((SELECT jsonb_agg(i) FROM generate_series(1,10000) i),
                     '$[*] ? (@ < $)');
formerly required about 6GB to execute, with the space required
growing quadratically with the length of the input array.
With this patch the memory consumption stays static.  (The time
required is still quadratic, but we can't do much about that: this
path expression asks to compare each array element to each other one.)

The bloat happens because we construct a JsonValueList containing all
the array elements to represent the second occurrence of "$", and then
just leak it after evaluating the filter expression for any one value
generated from "$[*]".  If I were implementing this functionality from
scratch I'd probably try to avoid materializing that representation at
all, but changing that now looks like more trouble than it's worth.
This patch takes the more conservative approach of just making sure
we free the list after we're done with it.

The existing representation of JsonValueList is neither especially
compact nor especially easy to free: it's a List containing pointers
to separately-palloc'd JsonbValue structs.  We could theoretically
use list_free_deep, but it's not 100% clear that all the JsonbValues
are always safe for us to free.  In any case we are talking about a
lot of palloc/pfree traffic if we keep it like this.  This patch
replaces that with what's essentially an expansible array of
JsonbValues, so that even a long list requires relatively few
palloc requests.  Also, for the very common case that only one or
two elements appear in the list, this representation uses *zero*
pallocs: the elements can be kept in the on-the-stack base struct.

Note that we are only interested in freeing the JsonbValue structs
themselves.  While many types of JsonbValue include pointers to
external data such as strings or numerics, we expect that that data
is part of the original jsonb input Datum(s) and need not (indeed
cannot) be freed here.

In this reimplementation, JsonValueListAppend() always copies the
supplied JsonbValue struct into the JsonValueList data.  This allows
simplifying and regularizing many call sites that sometimes palloc'd
JsonbValues and sometimes passed a local-variable JsonbValue.  Always
doing the latter is simpler, faster, and less bug-prone.

I also removed JsonValueListLength() in favor of constant-time tests
for whether the list has zero, one, or more than one member, which is
what the callers really need to know.  JsonValueListLength() was not
a hot code path, so this aspect of the patch won't move the needle in
the least performance-wise.  But it seems neater.

I've not done any wide-ranging performance testing, but this should
be faster than the old code thanks to reduction of palloc overhead.
On the specific example shown above, it's about twice as fast as
before on not-very-large inputs; and of course it wins big if you
consider an input large enough to drive the old code into swapping.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/569394.1773783211@sss.pgh.pa.us
2026-03-19 11:37:14 -04:00
Peter Eisentraut
7724cb9935 Add some const qualifiers enabled by typeof_unqual change on copyObject
The recent commit to change copyObject() to use typeof_unqual allows
cleaning up some APIs to take advantage of this improved qualifier
handling.  EventTriggerCollectSimpleCommand() is a good example: It
takes a node tree and makes a copy that it keeps around for its
internal purposes, but it can't communicate via its function signature
that it promises not scribble on the passed node tree.  That is now
fixed.

Reviewed-by: David Geier <geidav.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/92f9750f-c7f6-42d8-9a4a-85a3cbe808f3%40eisentraut.org
2026-03-19 06:35:54 +01:00
Michael Paquier
46fb08aff6 Add commit 015d32016d to .git-blame-ignore-revs. 2026-03-19 13:45:07 +09:00
Michael Paquier
015d32016d test_saslprep: Apply proper indentation
Noticed before koel had the chance to complain.  Rebase thinko from commit
aa73838a5c.
2026-03-19 13:42:24 +09:00
David Rowley
c95cd2991f Short-circuit row estimation in NOT IN containing NULL consts
ScalarArrayOpExpr used for either NOT IN or <>/= ALL, when the array
contains a NULL constant, will never evaluate to true.  Here we add an
explicit short-circuit in scalararraysel() to account for this and return
0.0 rows when we see that a NULL exists.  When the array is a constant,
we can very quickly see if there are any NULL values and return early
before going to much effort in scalararraysel().  For non-const arrays,
we short-circuit after finding the first NULL and forego selectivity
estimations of any remaining elements.

In the future, it might be better to do something for this case in
constant folding.  We would need to be careful to only do this for
strict operators on expressions located in places that don't care about
distinguishing false from NULL returns, i.e., EXPRKIND_QUAL expressions.
Doing that requires a bit more thought and effort, so here we just fix
some needlessly slow selectivity estimations for ScalarArrayOpExpr
containing many array elements and at least one NULL.

Author: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Reviewed-by: David Geier <geidav.pg@gmail.com>
Reviewed-by: Zsolt Parragi <zsolt.parragi@percona.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/eaa2598c-5356-4e1e-9ec3-5fd6eb1cd704@tantorlabs.com
2026-03-19 17:16:36 +13:00
Michael Paquier
aa73838a5c test_saslprep: Test module for SASLprep()
This module includes two functions:
- test_saslprep(), that performs pg_saslprep on a bytea.
- test_saslprep_ranges(), able to check, for all the valid ranges of UTF-8
codepoints, how pg_saslprep() handles each one of them.

This provides a detailed coverage of our implementation of SASLprep()
used for SCRAM, with:
- ASCII characters.
- Incomplete UTF-8 sequences, for 390b3cbbb2 (later backpatched).
- A more advanced check for all the valid UTF-8 ranges of codepoints, to
check for cases where these generate an empty password, based on an
original suggestion from Heikki Linnakangas.  This part consumes
resources and time, so it is implemented as a TAP test under a
new PG_TEST_EXTRA value.

A different patch is still under discussion to tweak our internal
SASLprep() implementation, and this module can be used to track any
changes in behavior.

Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/aaEJ-El2seZHeFcG@paquier.xyz
2026-03-19 13:03:30 +09:00
Michael Paquier
79a5911fe6 Add more debugging information for bgworker termination tests of worker_spi
widowbird has failed again after af8837a10b, with the same symptoms of
a backend still lying around when attempting a database rename with a
bgworker connected to the database being renamed.

We are still not sure yet how the failure can be reached, if this is a
timing issue in the test or an actual bug in the logic used for
interruptible bgworkers.  This commit adds more debugging information in
the backend to help with the analysis as a temporary measure.

Another thing I have noticed is that the queries launching the dynamic
bgworkers or checking pg_stat_activity would connect to the database
being renamed.  These are switched to use 'postgres'.  That will hopefully
remove some of the friction of the test, but I doubt that this is the
end of the story.

Discussion: https://postgr.es/m/abtJLEAsf1HZXWdR@paquier.xyz
2026-03-19 11:39:31 +09:00
Fujii Masao
645c6d05cc doc: Clarify BUFFERS behavior without ANALYZE in EXPLAIN
This commit clarifies the documentation for the BUFFERS option of EXPLAIN
by explicitly describing its behavior when ANALYZE is not specified.

Author: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/TYRPR01MB13457D31085CB5B246DBBA1AFE845A@TYRPR01MB13457.jpnprd01.prod.outlook.com
2026-03-19 08:30:50 +09:00
Robert Haas
b335fe56f3 pg_plan_advice: Fix multiple copy-and-paste-errors in test case.
The second half of this file is meant to test feedback, not
generated advice, and is meant to use the statements that it
prepares, not leftover prepared statements from earlier in the
file.

These mistakes resulted in failures under debug_discard_caches = 1,
because re-executing pt2 instead of executing pt4 for the first
time resulted in different output depending on whether the query
was replanned.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us> (per BF member avocet)
2026-03-18 18:24:39 -04:00
Daniel Gustafsson
e87ab5049d ssl: Skip passphrase reload tests in EXEC_BACKEND builds
SSL password command reloading must be enabled on Windows and in
EXEC_BACKEND builds, since those always reload the context.  The
new tests in commit 4f433025 were skipped under Windows but missed the
EXEC_BACKEND check.  Reported by buildfarm member culicidae.

Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+kXmCCgBWffzmSjaNhME5rD=gjyc_OP1FeWQTw2MmSNjg@mail.gmail.com
2026-03-18 22:59:57 +01:00
Tom Lane
f91b8ff6af Fix -Wstrict-prototypes warning in ecpg_init_sqlca() declaration.
When headerscheck compiles ecpglib_extern.h, POSTGRES_ECPG_INTERNAL is
not defined, causing sqlca.h to expand "sqlca" as a macro
(*ECPGget_sqlca()).  This causes the ecpg_init_sqlca() declaration to
trigger a -Wstrict-prototypes warning.

Fix by renaming the parameter from "sqlca" to "sqlca_p" in both the
declaration and definition, avoiding the macro expansion.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reported-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAN55FZ1VDwJ-ZD092ChYf%2B%2BhuP%2B-S3Cg45tJ8jNH5wx2c4BHAg%40mail.gmail.com
2026-03-18 15:27:49 -04:00
Nathan Bossart
ec80215c03 pg_restore: Remove unnecessary strlen() calls in options parsing.
Unlike pg_dump and pg_dumpall, pg_restore first checks whether the
argument passed to --format, --host, and --port is empty before
setting the corresponding variable.  Consequently, pg_restore does
not error if given an empty format name, whereas pg_dump and
pg_dumpall do.  Empty arguments for --host and --port are ignored
by all three applications, so this commit produces no functionality
changes there.  This behavior should perhaps be reconsidered, but
that is left as a future exercise.  As with other recent changes to
option handling for these applications (commits b2898baaf7,
7c8280eeb5, and be0d0b457c), no back-patch.

Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Discussion: https://postgr.es/m/CAKYtNApkh%3DVy2DpNRCnEJmPpxNuksbAh_QBav%3D2fLmVjBhGwFw%40mail.gmail.com
2026-03-18 14:22:15 -05:00