While EUC_CN supports only 1- and 2-byte sequences (CS0, CS1), the
mb<->wchar conversion functions allow 3-byte sequences beginning with
SS2 or SS3.
Change pg_encoding_max_length() to return 3, not 2, to close a
hypothesized buffer overrun if a corrupted string is converted to wchar
and back again in a newly allocated buffer. We might reconsider that in
master (i.e., harmonizing in a different direction), but this change seems
better for the back-branches.
Also change pg_euccn_mblen() to report SS2 and SS3 characters as having
length 3 (following the example of EUC_KR). Even though such characters
would not pass verification, it's remotely possible that invalid bytes
could be used to compute a buffer size for use in wchar conversion.
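For illustration, a minimal sketch of the adjusted length logic
(assuming the usual EUC SS2/SS3 lead bytes 0x8e/0x8f; not the exact
source):

    static int
    pg_euccn_mblen_sketch(const unsigned char *s)
    {
        if (*s == 0x8e || *s == 0x8f)   /* SS2 or SS3: report 3 */
            return 3;
        if (*s & 0x80)                  /* CS1: 2-byte sequence */
            return 2;
        return 1;                       /* CS0: plain ASCII */
    }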
Security: CVE-2026-2006
Backpatch-through: 14
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
The code made a subtle assumption that the lower-cased version of a
string never has more characters than the original. That is not always
true. For example, in a database with the latin9 encoding:
latin9db=# select lower(U&'\00CC' COLLATE "lt-x-icu");
lower
-----------
i\x1A\x1A
(1 row)
In this example, lower-casing expands the single input character into
three characters.
The generate_trgm_only() function relied on that assumption in two
ways:
- It used "slen * pg_database_encoding_max_length() + 4" to allocate
the buffer to hold the lowercased and blank-padded string. That
formula accounts for expansion if the lower-case characters are
longer (in bytes) than the originals, but it's still not enough if
the lower-cased string contains more *characters* than the original.
- Its callers sized the output array to hold the trigrams extracted
from the input string with the formula "(slen / 2 + 1) * 3", where
'slen' is the input string length in bytes. (The formula was
generous to account for the possibility that RPADDING was set to 2.)
That's also not enough if one input byte can turn into multiple
characters.
To fix, introduce a growable trigram array and give up on trying to
choose the correct max buffer sizes ahead of time.
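As a hedged sketch of the growable-array pattern (variable names
hypothetical, not the actual pg_trgm code):

    /* Double the trigram array whenever it fills up, instead of
     * trusting a precomputed maximum that lower-casing can exceed. */
    if (ntrg >= nalloc)
    {
        nalloc *= 2;
        trg = repalloc(trg, nalloc * sizeof(trgm));
    }
    trg[ntrg++] = newtrg;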
Backpatch to v18, but no further. In previous versions lower-casing was
done character by character, and thus the assumption that lower-casing
doesn't change the character length was valid. That was changed in v18,
commit fb1a18810f.
Security: CVE-2026-2007
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
The function assumed that if charlen == bytelen, there are no
multibyte characters in the string. That's sensible, but the callers
were a little careless in how they calculated the lengths. The callers
converted the string to lowercase before calling make_trigram(), and
the 'charlen' value was calculated *before* the conversion to
lowercase while 'bytelen' was calculated after the conversion. If the
lowercased string had a different number of characters than the
original, make_trigram() might incorrectly apply the fastpath and
treat all the bytes as single-byte characters, or fail to apply the
fastpath (which is harmless), or it might hit the "Assert(bytelen ==
charlen)" assertion. I'm not aware of any locale / character
combinations where you could hit that assertion in practice,
i.e. where a string converted to lowercase would have fewer characters
than the original, but it seems best to avoid making that assumption.
To fix, remove the 'charlen' argument. To keep the performance when
there are no multibyte characters, always try the fast path first, but
check the input for multibyte characters as we go. The check on each
byte adds some overhead, but it's close enough. And to compensate, the
find_word() function no longer needs to count the characters.
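As a rough sketch of the shape of that check (the real code performs
it inline while building trigrams; this helper is hypothetical):

    static bool
    has_multibyte(const unsigned char *s, int bytelen)
    {
        for (int i = 0; i < bytelen; i++)
        {
            if (s[i] & 0x80)    /* high bit set: part of a multibyte
                                 * character in any server encoding */
                return true;
        }
        return false;
    }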
This fixes one small bug in make_trigrams(): in the multibyte
codepath, it peeked at the byte just after the end of the input
string. When compiled with IGNORECASE, that was harmless because there
is always a NUL byte or blank after the input string. But with
!IGNORECASE, the call from generate_wildcard_trgm() doesn't guarantee
that.
Backpatch to v18, but no further. In previous versions lower-casing was
done character by character, and thus the assumption that lower-casing
doesn't change the character length was valid. That was changed in v18,
commit fb1a18810f.
Security: CVE-2026-2007
Reviewed-by: Noah Misch <noah@leadboat.com>
pgp_pub_decrypt_bytea() was missing a safeguard for the session key
length read from the message data, which can be supplied as input to
pgp_pub_decrypt_bytea(). This could result in a buffer overflow of the
session key data when the specified length is larger than PGP_MAX_KEY,
the maximum size of the buffer that the session key data is copied
into.
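As a hedged sketch of the kind of guard that was missing (names
assumed from the description above, not the actual pgcrypto code):

    /* Reject an implausible key length before copying into the
     * fixed-size buffer. */
    if (klen <= 0 || klen > PGP_MAX_KEY)
        return PXE_PGP_CORRUPT_DATA;
    memcpy(sess_key, data, klen);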
A script able to rebuild the message and key data that can trigger the
overflow is included in this commit, based on some contents provided by
the reporter, heavily edited by me. A SQL test is added, based on the
data generated by the script.
Reported-by: Team Xint Code as part of zeroday.cloud
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2005
Backpatch-through: 14
Looking again at commit 7cdb633c8, I wondered why we have hard-wired
"1034" for the OID of type aclitem[]. Some other entries in the same
array have numeric type OIDs as well. This seems to be a hangover
from years ago when not every built-in pg_type entry had an OID macro.
But since we made genbki.pl responsible for generating these macros,
there are macros available for all these array types, so there's no
reason not to follow the project policy of never writing numeric OID
constants in C code.
This thinko caused us to not substitute our own getopt() code,
which resulted in failing to parse long options for the postmaster,
since Solaris' getopt() doesn't do what we expect. This can be seen
in the results of buildfarm member icarus, which is the only one
trying to build via meson on Solaris.
Per consultation with pgsql-release, it seems okay to fix this
now even though we're in release freeze. The fix visibly won't
affect any other platforms, and it can't break Solaris/meson
builds any worse than they're already broken.
Discussion: https://postgr.es/m/2471229.1770499291@sss.pgh.pa.us
Backpatch-through: 16
Commit 176dffdf7 added a NULL array pointer check before performing
a qsort in order to prevent undefined behavior when passing NULL
pointer and zero length. To head off future degenerate cases, check
that there are at least two elements to sort before proceeding with
insertion sort. This has the added advantage of allowing us to remove
four equivalent checks that guarded against recursion/iteration.
There might be a tiny performance penalty from unproductive
recursions, but we can buy that back by increasing the insertion sort
threshold. That is left for future work.
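For illustration, a self-contained insertion sort with the guard
described above (a sketch, not the template code itself):

    #include <stddef.h>

    static void
    insertion_sort(int *a, size_t n)
    {
        /* Fewer than two elements: already sorted.  This also covers
         * the degenerate NULL/zero-length case without touching the
         * pointer. */
        if (n < 2)
            return;

        for (size_t i = 1; i < n; i++)
        {
            int     key = a[i];
            size_t  j = i;

            while (j > 0 && a[j - 1] > key)
            {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = key;
        }
    }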
Discussion: https://postgr.es/m/CANWCAZZWvds_35nXc4vXD-eBQa_=mxVtqZf-PM_ps=SD7ghhJg@mail.gmail.com
Commit 7b378237a widened AclMode to 64 bits, which implies that
the alignment of AclItem is now determined by an int64 field.
That commit correctly set the typalign for SQL type aclitem to
'd', but it missed the hard-wired knowledge about _aclitem in
bootstrap.c. This doesn't seem to have caused any ill effects,
probably because we never try to fill a non-null value into
an aclitem[] column during bootstrap. Nonetheless, it's clearly
a gotcha waiting to happen, so fix it up.
In passing, also fix a couple of typanalyze functions that were
using hard-coded typalign constants when they could just as
easily use greppable TYPALIGN_xxx macros.
Noticed these while working on a patch to expand the set of
typalign values. I doubt we are going to pursue that path,
but these fixes still seem worth a quick commit.
Discussion: https://postgr.es/m/1127261.1769649624@sss.pgh.pa.us
This commit adjusts a few debugging macros to match the style of
those in pg_config_manual.h. Like commits 123661427b and
b4cbc106a6, these were discovered while reviewing Aleksander
Alekseev's proposed changes to pgindent.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aP-H6kSsGOxaB21k%40nathan
The main reason that libpq doesn't request protocol version 3.2 by
default is because other proxy/server implementations don't implement
the negotiation. This is a bit of a chicken-and-egg problem: We don't
bump the default version that libpq requests, but other implementations
may not be incentivized to implement version negotiation if their users
never run into issues.
One established practice to combat this is to flip Postel's Law on its
head, by sending parameters that the server cannot possibly support. If
the server fails the handshake instead of correctly negotiating, then
the problem is surfaced naturally. If the server instead claims to
support the bogus parameters, then we fail the connection to make the
lie obvious. This is called "grease" (or "greasing"), after the GREASE
mechanism in TLS that popularized the concept:
https://www.rfc-editor.org/rfc/rfc8701.html
This patch reserves 3.9999 as an explicitly unsupported protocol version
number and `_pq_.test_protocol_negotiation` as an explicitly unsupported
protocol extension. A later commit will send these by default in order
to stress-test the ecosystem during the beta period; that commit will
then be reverted before 19 RC1, so that we can decide what to do with
whatever data has been gathered.
The _pq_.test_protocol_negotiation change here is intentionally docs-
only: after its implementation is reverted, the parameter should remain
reserved.
Extracted/adapted from a patch by Jelte Fennema-Nio.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/DDPR5BPWH1RJ.1LWAK6QAURVAY%40jeltef.nl
First, split the Protocol Versions table in two, and lead with the list
of versions that are supported today. Reserved and unsupported version
numbers go into the second table.
Second, in anticipation of a new (reserved) protocol extension, document
the extension negotiation process alongside version negotiation, and add
the corresponding tables for future extension parameter registrations.
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/DDPR5BPWH1RJ.1LWAK6QAURVAY%40jeltef.nl
This routine's internals directly used MyProcNumber to choose which
object ID to assign for the hash key of a backend's stats entry, while
the value to use is given as input argument of the function.
The original intention was to pass MyProcNumber as an argument to
pgstat_create_backend() when called in pgstat_bestart_final(), with
pgstat_beinit() ensuring that MyProcNumber has been set, rather than
using it directly in the function. This commit addresses the
inconsistency by using the procnum given by the caller of
pgstat_create_backend(), not MyProcNumber.
This issue does not cause any bugs currently. However, let's keep the
code in sync across all the branches where this code exists, as it could
matter in a future backpatch.
Oversight in 4feba03d8b.
Reported-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Discussion: https://postgr.es/m/TYCPR01MB11316AD8150C8F470319ACCAEE866A@TYCPR01MB11316.jpnprd01.prod.outlook.com
Backpatch-through: 18
These errors are very unlikely to show up, but in the event that they
happen, some incorrect information would have been provided:
- In pg_rewind, a stat() failure was reported as an open() failure.
- In pg_combinebackup, a check on the new directory of a tablespace
mapping referred to it as the old directory.
- In pg_combinebackup, a failure in reading a source file when copying
blocks referred to the destination file.
The changes for pg_combinebackup affect v17 and newer versions. For
pg_rewind, all the stable branches are affected.
Author: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_1EE1430B1E6C18A663B8990F@qq.com
Backpatch-through: 14
Provide a way to disable the use of posix_fallocate() for relation
files. It was introduced by commit 4d330a61bb. The new setting
file_extend_method=write_zeros can be used as a workaround for problems
reported from the field:
* BTRFS compression is disabled by the use of posix_fallocate()
* XFS could produce spurious ENOSPC errors in some Linux kernel
versions, though that problem is reported to have been fixed
The default is file_extend_method=posix_fallocate if available, as
before. The write_zeros option is similar to PostgreSQL < 16, except
that now it's multi-block.
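For example, the workaround can be enabled in postgresql.conf:

    file_extend_method = write_zeros   # default: posix_fallocate, if available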
Backpatch-through: 16
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reported-by: Dimitrios Apostolou <jimis@gmx.net>
Discussion: https://postgr.es/m/b1843124-fd22-e279-a31f-252dffb6fbf2%40gmx.net
synchronized_standby_slots is defined in guc_parameter.dat as part of
the REPLICATION_PRIMARY group and is listed under the "Primary Server"
section in postgresql.conf.sample. However, in the documentation
its description was previously placed under the "Sending Servers" section.
Since synchronized_standby_slots only takes effect on the primary server,
this commit moves its documentation to the "Primary Server" section to
match its behavior and other references.
Backpatch to v17 where synchronized_standby_slots was added.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwE_LwgXgCrqd08OFteJqdERiF3noqOKu2vt7Kjk4vMiGg@mail.gmail.com
Backpatch-through: 17
Commit 29d0a77fa6 improved pg_upgrade to allow migrating logical slots
provided that all logical slots have caught up (i.e., they have no
pending decodable WAL records). Previously, this verification was done
by checking each slot individually, which could be time-consuming if
there were many logical slots to migrate.
This commit optimizes the check to avoid reading the same WAL stream
multiple times. It performs the check only for the slot with the
minimum confirmed_flush_lsn and applies the result to all other slots
in the same database. This limits the check to at most one logical
slot per database.
During the check, we identify the last decodable WAL record's LSN to
report any slots with unconsumed records, consistent with the existing
error reporting behavior. Additionally, the maximum
confirmed_flush_lsn among all logical slots on the database is used as
an early scan cutoff; finding a decodable WAL record beyond this point
implies that no slot has caught up.
Performance testing demonstrated that the execution time remains
stable regardless of the number of slots in the database.
Note that we do not distinguish slots based on their output plugins. A
hypothetical plugin might use a replication origin filter that filters
out changes from a specific origin. In such cases, we might get a
false positive (erroneously considering a slot caught up). However,
this is safe from a data integrity standpoint, such scenarios are
rare, and the impact of a false positive is minimal.
This optimization is applied only when the old cluster is version 19
or later.
Bump catalog version.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAD21AoBZ0LAcw1OHGEKdW7S5TRJaURdhEk3CLAW69_siqfqyAg@mail.gmail.com
This affects two command patterns, showing information about relations:
* oid2name -x -d DBNAME, applying to all relations on a database.
* oid2name -x -d DBNAME -t TABNAME [-t ..], applying to a subset of
defined relations on a database.
The relative path of a relation is added to the information provided,
using pg_relation_filepath().
Author: David Bidoc <dcbidoc@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Mark Wong <markwkm@gmail.com>
Discussion: https://postgr.es/m/CABour1v2CU1wjjoM86wAFyezJQ3-+ncH43zY1f1uXeVojVN8Ow@mail.gmail.com
Instead of assigning the backend type in the Main function of each
postmaster child, do it right after fork(), by which time it is already
known by postmaster_child_launch(). This reduces the time frame during
which MyBackendType is incorrect.
Before this commit, ProcessStartupPacket would overwrite MyBackendType
to B_BACKEND for dead-end backends, which is quite dubious. Stop that.
We may now see MyBackendType == B_BG_WORKER before setting up
MyBgworkerEntry. As far as I can see this is only a problem if we try
to log a message and %b is in log_line_prefix, so we now have a constant
string to cover that case. Previously, it would print "unrecognized",
which seems strictly worse.
Author: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/e85c6671-1600-4112-8887-f97a8a5d07b2@app.fastmail.com
Commit 5f13999aa1 added a TAP test for GUC settings passed via the
CONNECTION string in logical replication, but the buildfarm member
sungazer reported test failures.
The test incorrectly used the subscriber's log file position as the
starting offset when reading the publisher's log. As a result, the test
failed to find the expected log message in the publisher's log and
erroneously reported a failure.
This commit fixes the test to use the publisher's own log file position
when reading the publisher's log.
Also, to avoid similar confusion in the future, this commit splits the single
$log_location variable into $log_location_pub and $log_location_sub,
clearly distinguishing publisher and subscriber log positions.
Backpatched to v15, where commit 5f13999aa1 introduced the test.
Per buildfarm member sungazer.
This issue was reported and diagnosed by Alexander Lakhin.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/966ec3d8-1b6f-4f57-ae59-fc7d55bc9a5a@gmail.com
Backpatch-through: 15
Mostly this involves checking for a NULL pointer before doing operations
that add a non-zero offset.
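A minimal illustration of the undefined behavior being avoided (not
taken from the patched code):

    char   *base = NULL;
    size_t  off = 16;

    /* base + off is undefined when base is NULL and off is non-zero,
     * even if the result is never dereferenced; guard first. */
    char   *p = (base != NULL) ? base + off : NULL;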
The exception is an overflow warning in heap_fetch_toast_slice(). This
was caused by unneeded parentheses forcing an expression to be
evaluated to a negative integer, which then got cast to size_t.
Per clang 21 undefined behavior sanitizer.
Backpatch to all supported versions.
Co-authored-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/777bd201-6e3a-4da0-a922-4ea9de46a3ee@gmail.com
Backpatch-through: 14
We can immediately make use of it in pg_signal_backend(), which
previously fetched the process type from the backend status array with
pgstat_get_backend_type_by_proc_number(). That was correct but felt a
little questionable to me: backend status should be for observability
purposes only, not for permission checks.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/b77e4962-a64a-43db-81a1-580444b3e8f5@iki.fi
Currently, when the argument of copyObject() is const-qualified, the
return type is too, because the use of typeof carries over all the
qualifiers. This is incorrect, since the point of copyObject() is to
make a copy to mutate. But apparently no code ran into it.
The new implementation uses typeof_unqual, which drops the qualifiers,
making this work correctly.
typeof_unqual is standardized in C23, but all recent versions of all
the usual compilers support it even in non-C23 mode, at least as
__typeof_unqual__. We add a configure/meson test for typeof_unqual
and __typeof_unqual__ and use it if it's available, else we use the
existing fallback of just returning void *.
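As a hedged sketch of the macro shape (the configure symbol name here
is assumed for illustration):

    #ifdef HAVE_TYPEOF_UNQUAL
    /* typeof_unqual drops const/volatile from the argument's type,
     * so the copy may legitimately be mutated. */
    #define copyObject(obj) ((typeof_unqual(obj)) copyObjectImpl(obj))
    #else
    /* Fallback: callers get a plain void *. */
    #define copyObject(obj) copyObjectImpl(obj)
    #endif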
Reviewed-by: David Geier <geidav.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/92f9750f-c7f6-42d8-9a4a-85a3cbe808f3%40eisentraut.org
A failure while closing pg_wal/summaries/ incorrectly generated a report
about pg_wal/archive_status/.
While at it, this commit adds #undefs for the macros used in
KillExistingWALSummaries() and KillExistingArchiveStatus() to prevent
those values from being misused in an incorrect function context.
Oversight in dc21234005.
Author: Tianchen Zhang <zhang_tian_chen@163.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/SE2P216MB2390C84C23F428A7864EE07FA19BA@SE2P216MB2390.KORP216.PROD.OUTLOOK.COM
Backpatch-through: 17
The pg_dump documentation had repetitive notes for the --schema,
--table, and --extension switches, noting that dependent database
objects are not automatically included in the dump. This commit removes
these notes and replaces them with a consolidated paragraph in the
"Notes" section.
pg_restore had a similar note for -t but lacked one for -n; do likewise.
Also, add a note to --extension in pg_dump to note that ancillary files
(such as shared libraries and control files) are not included in the
dump and must be present on the destination system.
Author: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/284C4D55-4F90-4AA0-84C8-1E6A28DDF271@gmail.com
I don't understand how to reach errdetail_abort() with
MyProc->recoveryConflictPending set. If a recovery conflict signal is
received, ProcessRecoveryConflictInterrupt() raises an ERROR or FATAL
error to cancel the query or connection, and abort processing clears
the flag. The error message from ProcessRecoveryConflictInterrupt() is
very clear that the query or connection was terminated because of
recovery conflict.
The only way to reach it AFAICS is with a race condition, if the
startup process sends a recovery conflict signal when the transaction
has just entered aborted state for some other reason. And in that case
the detail would be misleading, as the transaction was already aborted
for some other reason, not because of the recovery conflict.
errdetail_abort() was the only user of the recoveryConflictPending
flag in PGPROC, so we can remove that and all the related code too.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/4cc13ba1-4248-4884-b6ba-4805349e7f39@iki.fi
When using ALTER TABLE ... ADD CONSTRAINT to add a not-null constraint
with an explicit name, we have to ensure that if the column is already
marked NOT NULL, the provided name matches the existing constraint name.
Failing to do so could lead to confusion regarding which constraint
object actually enforces the rule.
This patch adds a check to throw an error if the user tries to add a
named not-null constraint to a column that already has one with a
different name.
Reported-by: yanliang lei <msdnchina@163.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/19351-8f1c523ead498545%40postgresql.org
Two wait events are added to the COPY FROM/TO code:
* COPY_FROM_READ: reading data from a copy_file.
* COPY_TO_WRITE: writing data to a copy_file.
In the COPY code, copy_file can be set when processing a command in
pipe mode (for the non-DestRemote case), program mode, or file mode;
the wait events are reported when fread() or fwrite() is performed on
it.
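Once reported, the new wait events should be visible in
pg_stat_activity during long file-based COPY operations, e.g.
(display names assumed to follow the usual CamelCase convention):

    SELECT pid, wait_event_type, wait_event
      FROM pg_stat_activity
     WHERE wait_event IN ('CopyFromRead', 'CopyToWrite');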
Author: Nikolay Samokhvalov <nik@postgres.ai>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAM527d_iDzz0Kqyi7HOfqa-Xzuq29jkR6AGXqfXLqA5PR5qsng@mail.gmail.com
This routine has an option to bypass an error if a WAL summary file is
opened for read but is missing (missing_ok=true). However, the code
incorrectly checked for EEXIST, which only matters when using O_CREAT
and O_EXCL, rather than for ENOENT.
There are currently only two callers of OpenWalSummaryFile() in the
tree, and both use missing_ok=false, meaning that the check based on the
errno is currently dead code. This issue could matter for out-of-core
code or future backpatches that would like to use missing_ok set to
true.
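As a sketch of the corrected pattern (shape assumed, not the actual
function body):

    fd = OpenTransientFile(path, O_RDONLY | PG_BINARY);
    if (fd < 0)
    {
        /* A missing file sets ENOENT; EEXIST only matters with
         * O_CREAT | O_EXCL. */
        if (missing_ok && errno == ENOENT)
            return NULL;
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\": %m", path)));
    }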
Issue spotted while monitoring this area of the code, after
a9afa021e9.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aYAf8qDHbpBZ3Rml@paquier.xyz
Backpatch-through: 17
Previously, when synchronous_standby_names was changed (for example,
by reducing the number of required synchronous standbys or modifying
the standby list), backends waiting for synchronous replication were not
released immediately, even if the new configuration no longer required them
to wait. They could remain blocked until additional messages arrived from
standbys and triggered their release.
This commit improves walsender so that backends waiting for synchronous
replication are released as soon as the updated configuration takes effect and
the new settings no longer require them to wait, by calling
SyncRepReleaseWaiters() when configuration changes are processed.
As part of this change, the duplicated code that handles configuration changes
in walsender has been refactored into a new helper function, which is now used
at the three existing call places.
Since this is an improvement rather than a bug fix, it is applied only to
the master branch.
Author: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Discussion: https://postgr.es/m/CAOzEurSRii0tEYhu5cePmRcvS=ZrxTLEvxm3Kj0d7_uKGdM23g@mail.gmail.com
This commit introduces a new prompt escape %i for psql, which shows
whether the connected server is operating in hot standby mode. It
expands to "standby" if the server reports in_hot_standby = on, and
"primary" otherwise.
This is useful for distinguishing standby servers from primary ones
at a glance, especially when working with multiple connections in
replicated environments where libpq's multi-host connection strings
are used.
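A possible use, embedding the new escape in the prompt (rendering
assumed; it depends on the connected server):

    \set PROMPT1 '%i %/%R%# '

which would display as e.g. "standby postgres=# " when connected to a
server in hot standby mode.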
Author: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/016f6738-f9a9-4e98-bb5a-e1e4b9591d46@uni-muenster.de
The test relies on VACUUM being able to mark a page all-visible, but
this can fail when autovacuum in other sessions prevents the visibility
horizon from advancing. Making the test table temporary isolates its
horizon from other sessions, including catalog table vacuums, ensuring
reliable test behavior.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/2b09fba6-6b71-497a-96ef-a6947fcc39f6%40gmail.com
Separate att_align_nominal() into two macros, similarly to what
was already done with att_align_datum() and att_align_pointer().
The inner macro att_nominal_alignby() is really just TYPEALIGN(),
while att_align_nominal() retains its previous API by mapping
TYPALIGN_xxx values to numbers of bytes to align to and then
calling att_nominal_alignby(). In support of this, split out
tupdesc.c's logic to do that mapping into a publicly visible
function typalign_to_alignby().
Having done that, we can replace performance-critical uses of
att_align_nominal() with att_nominal_alignby(), where the
typalign_to_alignby() mapping is done just once outside the loop.
In most places I settled for doing typalign_to_alignby() once
per function. We could in many places pass the alignby value
in from the caller if we wanted to change function APIs for this
purpose; but I'm a bit loath to do that, especially for exported
APIs that extensions might call. Replacing a char typalign
argument by a uint8 typalignby argument would be an API change
that compilers would fail to warn about, thus silently breaking
code in hard-to-debug ways. I did revise the APIs of array_iter_setup
and array_iter_next, moving the element type attribute arguments to
the former; if any external code uses those, the argument-count
change will cause visible compile failures.
Performance testing shows that ExecEvalScalarArrayOp is sped up by
about 10% by this change, when using a simple per-element function
such as int8eq. I did not check any of the other loops optimized
here, but it's reasonable to expect similar gains.
Although the motivation for creating this patch was to avoid a
performance loss if we add some more typalign values, it evidently
is worth doing whether that patch lands or not.
Discussion: https://postgr.es/m/1127261.1769649624@sss.pgh.pa.us
Up to now we've used GNU-style local labels for branch targets
in s_lock.h's assembly blocks. But there's an alternative style,
which I for one didn't know about till recently: use regular
assembler labels, and insert a per-asm-block number in them
using %= to ensure they are distinct across multiple TAS calls
within one source file. gcc has had %= since gcc 2.0, and
I've verified that clang knows it too.
While the immediate motivation for changing this is that AIX's
assembler doesn't do local labels, it seems to me that this is a
superior solution anyway. There is nothing mnemonic about "1:",
while a regular label can convey something useful, and at least
to me it feels less error-prone. Therefore let's standardize on
this approach, also converting the one other usage in s_lock.h.
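A minimal illustration of the %= technique (x86 AT&T syntax; a
hypothetical spin sketch, not the actual s_lock.h code):

    static inline void
    spin_sketch(volatile char *lock)
    {
        char    ret;

        __asm__ __volatile__(
            "retry_%=:              \n"
            "   movb    $1, %0      \n"
            "   xchgb   %0, %1      \n"
            "   testb   %0, %0      \n"
            "   jnz     retry_%=    \n"
            : "=q"(ret), "+m"(*lock)
            :
            : "memory", "cc");
    }

Each expansion of the block gets a distinct retry_NN label, so the
macro can be used more than once per translation unit.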
Discussion: https://postgr.es/m/399291.1769998688@sss.pgh.pa.us
A failing unlink() was reporting an incorrect error message, referring
to stat().
Author: Man Zeng <zengman@halodbtech.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/tencent_3BBE865C5F49D452360FF190@qq.com
Backpatch-through: 17
The build generates four files based on the wait event contents stored
in wait_event_names.txt:
- wait_event_types.h
- pgstat_wait_event.c
- wait_event_funcs_data.c
- wait_event_types.sgml
The SGML file is generated as part of a documentation build, with its
data stored in doc/src/sgml/ for meson and configure. The three others
are handled differently for meson and configure:
- In configure, all the files are created in src/backend/utils/activity/.
A link to wait_event_types.h is created in src/include/utils/.
- In meson, all the files are created in src/include/utils/.
The two C files, pgstat_wait_event.c and wait_event_funcs_data.c, are
then included in respectively wait_event.c and wait_event_funcs.c,
without the "utils/" path.
For configure, this does not present a problem. For meson, it has to
be combined with a trick in src/backend/utils/activity/meson.build:
include_directories needs to point to include/utils/ to make the
inclusion of the C files work properly. That causes builds to pull in
PostgreSQL headers rather than system headers in some build paths,
because src/include/utils/ takes priority.
In order to fix this issue, this commit reworks the way the C/H files
are generated, becoming consistent with guc_tables.inc.c:
- For meson, basically nothing changes. The files are still generated
in src/include/utils/. The trick with include_directories is removed.
- For configure, the files are now generated in src/backend/utils/, with
links in src/include/utils/ pointing to the ones in src/backend/. This
requires extra rules in src/backend/utils/activity/Makefile so that a
make command in this sub-directory works.
- The three files now fall under header-stamp, which is actually simpler
as guc_tables.inc.c does the same.
- wait_event_funcs_data.c and pgstat_wait_event.c are now included with
"utils/" in their path.
This problem has not been an issue in the buildfarm; it was noticed on
AIX as a conflict with float.h. The issue could, however, create
conflicts in the buildfarm depending on the environment, with
unexpected headers pulled in, so this fix is backpatched down to where
the generation of the wait-event files was introduced.
While on it, this commit simplifies wait_event_names.txt to mention
just the names of the generated files rather than their paths; the
paths listed there had become incorrect, and the path given for the
SGML file was wrong.
This change has been tested in the CI, down to v17. Locally, I have run
tests with configure (with and without VPATH), as well as meson, on the
three branches.
Combo oversight in fa88928470 and 1e68e43d3f.
Reported-by: Aditya Kamath <aditya.kamath1@ibm.com>
Discussion: https://postgr.es/m/LV8PR15MB64888765A43D229EA5D1CFE6D691A@LV8PR15MB6488.namprd15.prod.outlook.com
Backpatch-through: 17
Similarly to the preceding commit, 030_pager.pl was assuming
that patterns it looks for in interactive psql output would
appear by themselves on a line, but that assumption tends to
fall over in builds made --without-readline: the output we
get might have a psql prompt immediately followed by the
expected line of output.
For several of these tests, just checking for the pattern
followed by newline seems sufficient, because we could not
get a false match against the command echo, nor against the
unreplaced command output if the pager fails to be invoked
when expected. However, that's fairly scary for the test
that was relying on information_schema.referential_constraints:
"\d+" could easily appear at the end of a line in that view.
Let's get rid of that hazard by making a custom test view
instead of using information_schema.referential_constraints.
This test script is new in v19, so no need for back-patch.
Reported-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>
Discussion: https://postgr.es/m/db6fdb35a8665ad3c18be01181d44b31@postgrespro.ru
BackgroundPsql needs to wait for all the output from an interactive
psql command to come back. To make sure that's happened, it issues
the command, then issues \echo and \warn psql commands that echo
a "banner" string (which we assume won't appear in the command's
output), then waits for the banner strings to appear. The hazard
in this approach is that the banner will also appear in the echoed
psql commands themselves, so we need to distinguish those echoes from
the desired output. Commit 8b886a4e3 tried to do that by positing
that the desired output would be directly preceded and followed by
newlines, but it turns out that that assumption is timing-sensitive.
In particular, it tends to fail in builds made --without-readline,
wherein the command echoes will be made by the pty driver and may
be interspersed with prompts issued by psql proper.
It does seem safe to assume that the banner output we want will be
followed by a newline, since that should be the last output before
things quiesce. Therefore, we can improve matters by putting quotes
around the banner strings in the \echo and \warn psql commands, so
that their echoes cannot include banner directly followed by newline,
and then checking for just banner-and-newline in the match pattern.
While at it, spruce up the pump() call in sub query() to look like
the neater version in wait_connect(), and don't die on timeout
until after printing whatever we got.
Reported-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Diagnosed-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>
Discussion: https://postgr.es/m/db6fdb35a8665ad3c18be01181d44b31@postgrespro.ru
Backpatch-through: 14
For readability. It was a slight modularity violation to have fields
in PGShmemHeader that were only used by the allocator code in
shmem.c. And it was inconsistent that ShmemLock was nevertheless not
stored there. Moving all the allocator-related fields to a separate
struct makes it more consistent and modular, and removes the need to
allocate and pass ShmemLock separately via BackendParameters.
Merge InitShmemAccess() and InitShmemAllocation() into a single
function that initializes the struct when called from postmaster, and
when called from backends in EXEC_BACKEND mode, re-establishes the
global variables. That's similar to all the *ShmemInit() functions
that we have.
Co-authored-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAExHW5uNRB9oT4pdo54qAo025MXFX4MfYrD9K15OCqe-ExnNvg@mail.gmail.com
Previously, the CheckXidAlive check was performed within the table_scan*next*
functions. This caused the check to be executed for every fetched tuple,
incurring unnecessary overhead.
To fix, move the check to table_beginscan* so it is performed once per scan
rather than once per row.
Note: table_tuple_fetch_row_version() does not use a scan descriptor;
therefore, the CheckXidAlive check is retained in that function. The overhead
is unlikely to be relevant for the existing callers.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Dilip Kumar <dilipbalaut@gmail.com>
Suggested-by: Andres Freund <andres@anarazel.de>
Suggested-by: Amit Kapila <akapila@postgresql.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/tlpltqm5jjwj7mp66dtebwwhppe4ri36vdypux2zoczrc2i3mp%40dhv4v4nikyfg
In fcb9c977aa I included an assertion in BufferLockConditional() to detect if
a conditional lock acquisition is done on a buffer that we already have
locked. The assertion was added in the course of adding other assertions.
Unfortunately I failed to realize that some of our code relies on such lock
acquisitions to silently fail. E.g. spgist and nbtree may try to conditionally
lock an already locked buffer when acquiring an empty buffer.
LWLockAcquireConditional(), which was previously used to implement
ConditionalLockBuffer(), does not have such an assert.
Instead of just removing the assert and relying on the lock acquisition to
fail because the buffer is already locked, this commit changes the behaviour of
conditional content lock acquisition to fail if the current backend has any
pre-existing lock on the buffer, even if the lock modes would not
conflict. The reason for that is that we currently do not have space to track
multiple lock acquisitions on a single buffer. Allowing multiple locks on the
same buffer by a backend also seems likely to lead to bugs.
There is only one non-self-exclusive conditional content lock acquisition, in
GetVictimBuffer(), but it is only used if the target buffer is not pinned and
thus can't already be locked by the current backend.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/90bd2cbb-49ce-4092-9f61-5ac2ab782c94@gmail.com