If an out-of-memory error was thrown at an unfortunate time,
ensure_record_cache_typmod_slot_exists() could leak memory and leave
behind global state that produced an infinite loop on the next call.
Fix by merging RecordCacheArray and RecordIdentifierArray into a single
array. With only one allocation or re-allocation, there is no
intermediate state.
Back-patch to all supported releases.
Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
Previously, the test relied on a shell trick to retrieve the PID of the
psql session to be stopped with SIGINT, which was skipped on Windows.
This commit changes the test to use IPC::Run::signal() instead, which
still does not work on Windows, but for a different reason: SIGINT
would stop the test itself before it finishes.
This should allow the test to run on non-Windows platforms where PPID is
not supported (like NetBSD), spreading it a bit more across the
buildfarm. The logic of the test is also simpler.
This is the first time IPC::Run::signal() is used in the tree, so, as a
matter of safety (or call it cold feet), no backpatch is done, at least
for now.
Author: Yugo NAGATA
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/20230810125935.22c2922ea5250ba79358965b@sraoss.co.jp
This changes 020_cancel.pl so that the test is entirely skipped on
Windows. This test was already doing nothing under WIN32, except
initializing and starting a node without using it, so this shaves a few
test cycles.
Author: Yugo NAGATA
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/20230810125935.22c2922ea5250ba79358965b@sraoss.co.jp
Backpatch-through: 15
The new test added by commit 68a59f9e9 disables the subscription and
manually drops the associated replication slot. However, since
disabling the subscription doesn't wait for a walsender to release
the replication slot and exit, pg_drop_replication_slot() could
fail. Avoid failure by adding a wait for the replication slot to
become inactive.
Reported-by: Hou Zhijie, as per buildfarm
Reviewed-by: Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB571682316378379AA34854F694E9A%40OS0PR01MB5716.jpnprd01.prod.outlook.com
Backpatch-through: 15
generation_counter includes time spent on both JIT:ing expressions
and tuple deforming, which are configured independently via the options
jit_expressions and jit_tuple_deforming. As they are combined in
the same counter, it's not apparent what fraction of the time tuple
deforming takes.
This adds a deform_counter dedicated to tuple deforming, which allows
seeing more directly the influence jit_tuple_deforming is having on
the query. The counter is exposed in EXPLAIN and in pg_stat_statements,
bumping pg_stat_statements to version 1.11.
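For illustration, a hypothetical EXPLAIN (ANALYZE) fragment with JIT
enabled could now break the generation time down along these lines
(numbers made up):
JIT:
  Functions: 4
  Timing: Generation 1.2 ms (Deform 0.6 ms), Inlining 0.0 ms, Optimization 0.3 ms, Emission 3.9 ms, Total 5.4 ms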
Author: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20220612091253.eegstkufdsu4kfls@erthalion.local
The WAIT_USE_WIN32 implementation of WaitEventSetWait() previously
reported at most one event per call, because that's what the underlying
WaitForMultipleObjects() call does.
We can make the behavior match the three Unix implementations by looping
until our output buffer is full, or there are no more events available
now. This makes no difference to most callers including the regular
FEBE socket code, since they ask for at most one event anyway. A
difference in socket accept priority might be perceived by end users
after commit 7389aad6 started using WaitEventSet in the postmaster.
With this commit, the accept order now matches Unix systems, servicing
listening sockets in round-robin order.
We decided it wasn't really a bug or worth back-patching, but it seems
good to align the behavior across platforms.
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Tested-by: "Wei Wang (Fujitsu)" <wangw.fnst@fujitsu.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BA2dk29hr5zRP3HVJQ-_PncNJM6HVQ7aaYLXLRBZU-xw%40mail.gmail.com
Now that ATExecDropConstraint doesn't recurse anymore, it's wrong to
test privileges "during recursion" there. Move the check to
dropconstraint_internal, which is the place where recursion occurs.
In passing, remove the now-useless 'recursing' argument from
ATExecDropConstraint.
Discussion: https://postgr.es/m/202309051744.y4mndw5gwzhh@alvherre.pgsql
Now that we have catalogued not-null constraints, our information_schema
definition can be updated to grab those rather than fabricate synthetic
definitions.
Note that we still don't have catalog rows for not-null constraints on
domains, but we've never had not-null constraints listed in
information_schema, so that's a problem to be solved separately.
Co-authored-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/81b461c4-edab-5d8c-2f88-203108425340@enterprisedb.com
Discussion: https://postgr.es/m/202309041710.psytrxlsiqex@alvherre.pgsql
pgbouncer can cause PQbackendPID() to return negative values because it
fills be_pid with random bytes (even these days, pid_max can only be
set up to 2^22 on 64-bit Linux machines, for example, so this cannot
happen with normal PID numbers). When this happens, pg_basebackup may
generate a temporary slot name that may not be accepted by the parser,
leading to spurious failures, like:
pg_basebackup: error: could not send replication command
ERROR: replication slot name "pg_basebackup_-1201966863" contains
invalid character
This commit fixes that problem by formatting the result from
PQbackendPID() as an unsigned integer when creating the temporary
replication slot name, so that the invalid character is gone and the
command can be parsed.
Author: Jelte Fennema
Reviewed-by: Daniel Gustafsson, Nishant Sharma
Discussion: https://postgr.es/m/CAGECzQQOGvYfp8ziF4fWQ_o8s2K7ppaoWBQnTmdakn3s-4Z=5g@mail.gmail.com
Backpatch-through: 11
Where the test wants a VACUUM command to generate WAL that would
conflict with a session on the standby, it could transiently fail to do
so if it couldn't acquire a cleanup lock conditionally at that moment on
the primary. VACUUM FREEZE will wait, so use that instead.
No back-patch for now, but that will be needed if/when the test is
re-enabled in back-branches.
Suggested-by: Andres Freund <andres@anarazel.de>
Reported-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/20230812210006.ei7tutzwcr5svyt6%40awork3.anarazel.de
We shouldn't be doing non-trivial work in signal handlers in general,
and in this case the handler could reach unsafe code and corrupt state.
It also clobbered its own "reason" code.
Move all recovery conflict decision logic into the next
CHECK_FOR_INTERRUPTS(), and have the signal handler just set flags and
the latch, following the standard pattern. Since there are several
different "reasons", use a separate flag for each.
With this refactoring, the recovery conflict system no longer
piggy-backs on top of the regular query cancelation mechanism, but
instead raises an error directly if it decides that is necessary. It
still needs to respect QueryCancelHoldoffCount, because otherwise the
FEBE protocol might get out of sync (see commit 2b3a8b20c2).
This fixes one class of intermittent failure in the new
031_recovery_conflict.pl test added by commit 9f8a050f, though the buggy
coding is much older. Failures outside contrived testing seem to be
very rare (or perhaps incorrectly attributed) in the field, based on
lack of reports.
No back-patch for now due to complexity and release schedule. We have
the option to back-patch into 16 later, as 16 has prerequisite commit
bea3d7e.
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Reviewed-by: Michael Paquier <michael@paquier.xyz> (earlier version)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Tested-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/CA%2BhUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO%3DnEYKSg%40mail.gmail.com
Discussion: https://postgr.es/m/CALj2ACVr8au2J_9D88UfRCi0JdWhyQDDxAcSVav0B0irx9nXEg%40mail.gmail.com
This commit allows specifying a --sync-method in several frontend
utilities that must synchronize many files to disk (initdb,
pg_basebackup, pg_checksums, pg_dump, pg_rewind, and pg_upgrade).
On Linux, users can specify "syncfs" to synchronize the relevant
file systems instead of calling fsync() for every single file. In
many cases, using syncfs() is much faster.
As with recovery_init_sync_method, this new option comes with some
caveats. The descriptions of these caveats have been moved to a
new appendix section in the documentation.
Co-authored-by: Justin Pryzby
Reviewed-by: Michael Paquier, Thomas Munro, Robert Haas, Justin Pryzby
Discussion: https://postgr.es/m/20210930004340.GM831%40telsasoft.com
This commit adds support for using syncfs() in fsync_pgdata() and
fsync_dir_recurse() (which have been renamed to sync_pgdata() and
sync_dir_recurse()). Like recovery_init_sync_method,
sync_pgdata() calls syncfs() for the data directory, each
tablespace, and pg_wal (if it is a symlink). For now, all of the
frontend utilities that use these support functions are hard-coded
to use fsync(), but a follow-up commit will allow specifying
syncfs().
Co-authored-by: Justin Pryzby
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20210930004340.GM831%40telsasoft.com
This commit renames RecoveryInitSyncMethod to DataDirSyncMethod and
moves it to common/file_utils.h. This is preparatory work for a
follow-up commit that will allow specifying the synchronization
method in frontend utilities such as pg_upgrade and pg_basebackup.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/ZN2ZB4afQ2JbR9TA%40paquier.xyz
This file is now made of two columns, removing the column listing the
user-visible strings used in the system views and the documentation:
- Enum definitions for each class without the prefix "WAIT_EVENT_", so
that this information can be grepped in the code and in
wait_event_names.txt at the same time.
- Description in the documentation.
The wait event names are now generated from the enum objects in
CamelCase, with the underscores removed. The data generated for wait
events is consistent with what was produced by 414f6c0fb7.
This has the advantage of removing WAIT_EVENT_DOCONLY, which was a
placeholder for the wait event types Lock and LWLock, as these two only
require the generation of documentation.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZOxVHQwEC/9X/p/z@paquier.xyz
The event names keep the same characters case-insensitively, so applying
lower() or upper() to the monitoring queries detects the same events as
before this change. It is possible to cross-check the data with the
system view pg_wait_events, for instance with a query like the
following, which shows no differences:
SELECT lower(type), lower(name), description
FROM pg_wait_events ORDER BY 1, 2;
This will help in introducing more simplifications in the format of
wait_event_names.txt. Some of the enum values in the code had to be
renamed a bit to follow the same naming convention across the board.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZOxVHQwEC/9X/p/z@paquier.xyz
Presently, frontend code that needs to use these macros must either
include storage/fd.h, which declares several frontend-unsafe
functions, or duplicate the macros. This commit moves these macros
to common/file_utils.h, which is safe for both frontend and backend
code. Consequently, we can also remove the duplicated macros in
pg_checksums and stop including storage/fd.h in pg_rewind.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/ZOP5qoUualu5xl2Z%40paquier.xyz
This lock was introduced before memory barrier support was added,
and it is only used to guarantee proper memory ordering when
KnownAssignedXidsAdd() appends to the array without a lock. Now
that such memory barrier support exists, we can remove the lock and
use barriers instead.
Suggested-by: Tom Lane
Author: Michail Nikolaev
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/CANtu0oh0si%3DjG5z_fLeFtmYcETssQ08kLEa8b6TQqDm_cinroA%40mail.gmail.com
If all the bits of a key in a tsvector are true (marked with ALLISTRUE),
gtsvectorout() would show the following description:
"0 true bits, 0 false bits"
This is confusing: all the bits are true, yet the output is the same as
what would be shown if siglen were 0.
This commit improves the output so that "all true bits" is shown instead
in this case. Alexander has proposed a regression test for pageinspect,
not included here as it is rather expensive compared to its coverage
value.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, deciding which entries should remain on the
old page and which entries should move to a new page.
This has been wrong since tsearch2 was moved into core with commit
140d4ebcb4, so backpatch all the way down. This error was spotted by
valgrind.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
Dropping a database while a connection was being attempted on it could
lead to valid-looking database entries remaining in shared statistics.
The issue is that MyDatabaseId was getting set earlier than it should
have been: if the connection attempt failed because the database had
been renamed or dropped, the shutdown callback of the shared statistics
would finish by re-inserting an entry for the database already dropped.
As analyzed by the bug reporters, this issue could lead to phantom
entries in the database list maintained by the autovacuum launcher
(in rebuild_database_list()) if the dropped database was part of the
database list while it was still valid. After the database was dropped,
it would remain the highest-priority item on the list of databases for
the autovacuum workers to process, preventing autovacuum jobs from
happening on all the other databases still present.
The commit fixes this issue by delaying setting MyDatabaseId until the
database existence has been re-checked with the second scan on
pg_database after getting a shared lock on it, and by switching
pgstat_update_dbstats() so that nothing happens if MyDatabaseId is not
valid.
Issue introduced by 5891c7a8ed, so backpatch down to 15.
Reported-by: Will Mortensen, Jacob Speidel
Analyzed-by: Will Mortensen, Jacob Speidel
Author: Andres Freund
Discussion: https://postgr.es/m/17973-bca1f7d5c14f601e@postgresql.org
Backpatch-through: 15
When a partitioned table has a primary key, trying to find the
corresponding not-null constraint for that column would come up empty,
causing code that's trying to check said not-null constraint to crash.
Fix by only running the check when the not-null constraint exists.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/d57b4a69-7394-3146-5976-9a1ef27e7972@gmail.com
The comment in heapgettup_advance_block() says that it reports the
scan position before checking for end of scan, but that didn't match
the code. The code was refactored in commit 7ae0ab0ad9, which
inadvertently changed the order of the check and reporting. Change it
back.
This caused a few regression test failures with a small shared_buffers
setting like 10 MB. The 'portals' and 'cluster' tests perform seqscans
that are large enough that sync seqscans kick in. When the sync scan
position is not updated at end of scan, the next seq scan doesn't
start at the beginning of the table, and the test queries are
sensitive to that.
Reviewed-by: Melanie Plageman, David Rowley
Discussion: https://www.postgresql.org/message-id/6f991389-ae22-d844-a9d8-9aceb7c01a9a@iki.fi
Backpatch-through: 16
There was some confusion in the ObjectProperty entry for transforms.
Some fields had values that were apparently meant for a different
field. Also, some fields were not assigned, which is okay for most
fields, but not for all. In particular, for .oid_catcache_id,
.name_catcache_id, and .objtype, zero is a valid value, so we need to
use -1 if not applicable. It has apparently been like that from the
very beginning (commit cac7658205). The faulty values were not
actually reachable, so it's not a big problem in practice, but we
should make it correct.
Discussion: https://www.postgresql.org/message-id/flat/75ae5875-3abc-dafc-8aec-73247ed41cde@eisentraut.org
The code already compiles without warnings under
-Wshadow=compatible-local, which is itself enabled in the tree; the
instances fixed here showed up with the more restrictive -Wshadow.
There are more of these that we may want to look at, and the ones fixed
here made the code confusing.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PuR0y4ofNOxi691VTVWmBfScHV9AaBMGSpeh8+DKp81Nw@mail.gmail.com
Unlike the other pg_stat_get_backend* functions,
pg_stat_get_backend_subxact() looks up the backend entry by using
its integer argument as a 1-based index in an internal array. The
other functions look for the entry with the matching session
backend ID. These numbers often match, but that isn't reliably
true.
This commit resolves this discrepancy by introducing
pgstat_get_local_beentry_by_backend_id() and using it in
pg_stat_get_backend_subxact(). We cannot use
pgstat_get_beentry_by_backend_id() because it returns a
PgBackendStatus, which lacks the locally computed additions
available in LocalPgBackendStatus that are required by
pg_stat_get_backend_subxact().
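A hedged usage sketch, pairing it with the backend ID set (result
column names assumed):
SELECT id AS backend_id, s.subxact_count, s.subxact_overflowed
FROM pg_stat_get_backend_idset() AS id,
     pg_stat_get_backend_subxact(id) AS s;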
Author: Ian Barwick
Reviewed-by: Sami Imseih, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/CAB8KJ%3Dj-ACb3H4L9a_b3ZG3iCYDW5aEu3WsPAzkm2S7JzS1Few%40mail.gmail.com
Backpatch-through: 16
Presently, pgstat_fetch_stat_beentry() accepts a session's backend
ID as its argument, and pgstat_fetch_stat_local_beentry() accepts a
1-based index in an internal array as its argument. The former is
typically used wherever a user must provide a backend ID, and the
latter is usually used internally when looping over all entries in
the array. This difference was first introduced by d7e39d72ca.
Before that commit, both functions accepted a 1-based index to the
internal array.
This commit renames these two functions to make it clear whether
they use the backend ID or the 1-based index to look up the entry.
This is preparatory work for a follow-up change that will introduce
a function for looking up a LocalPgBackendStatus using a backend
ID.
Reviewed-by: Ian Barwick, Sami Imseih, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/CAB8KJ%3Dj-ACb3H4L9a_b3ZG3iCYDW5aEu3WsPAzkm2S7JzS1Few%40mail.gmail.com
Backpatch-through: 16
Previously, when a client was aborted due to some error during
benchmarking, the other clients continued their runs until the number
of transactions specified by -t was reached or the time specified by -T
expired. At the end, the results were printed with the caution: "Run
was aborted; the above results are incomplete".
The new option "--exit-on-abort" allows pgbench to exit immediately in
this case, so that users can quickly fix the cause of the failure and
try another round of benchmarking.
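An illustrative invocation (database name hypothetical):
pgbench --exit-on-abort -c 8 -T 60 test_db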
Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Tatsuo Ishii
Discussion: https://postgr.es/m/flat/20230804130325.df32e60879c38c92bca64207%40sraoss.co.jp
nFreeBlocks, defined as a long, stores the number of free blocks in a
logical tape. ltsGetFreeBlock() has been using an int to store the
value of nFreeBlocks, which could lead to overflows on platforms where
long and int are not the same size (in short everything except Windows
where long is 4 bytes).
The problematic intermediate variable is switched to be a long instead
of an int.
Issue introduced by c02fdc9223, so backpatch down to 13.
Author: Ranier Vilela
Reviewed-by: Peter Geoghegan, David Rowley
Discussion: https://postgr.es/m/CAEudQApLDWCBR_xmwNjGBrDo+f+S4E87x3s7-+hoaKqYdtC4JQ@mail.gmail.com
Backpatch-through: 13
It makes no sense to add a NO INHERIT not-null constraint to a child
table that already has one in that column inherited from its parent.
Disallow that, and add tests for the relevant cases.
Per complaint from Kyotaro Horiguchi. I also used part of his proposed
patch.
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20230828.161658.1184657435220765047.horikyota.ntt@gmail.com
The logical_replication_mode GUC is intended for testing and debugging
purposes, but its current name may be misleading and encourage users to make
unnecessary changes.
To avoid confusion, rename the GUC to the less misleading name
debug_logical_replication_streaming, which casual users are less likely
to mistakenly assume needs to be modified in a regular logical
replication setup.
Author: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/d672d774-c44b-6fec-f993-793e744f169a%40eisentraut.org
When running a repeat query with \watch in psql, it can be
helpful to be able to stop the watch process when the query
no longer returns the expected number of rows. An example
would be to watch for the presence of a certain event in
pg_stat_activity and stopping when the event is no longer
present, or to watch an index creation and stop when the
index is created.
This adds a min_rows=MIN parameter to \watch which can be
set to a non-negative integer; the watch query will stop
executing when it returns fewer than MIN rows.
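A hedged psql sketch (the interval= syntax is assumed from the existing
\watch options; the query is illustrative):
SELECT pid FROM pg_stat_activity WHERE state = 'active';
\watch interval=1 min_rows=1
The loop ends as soon as the query returns no rows.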
Author: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAKAnmmKStATuddYxP71L+p0DHtp9Rvjze3XRoy0Dyw67VQ45UA@mail.gmail.com
While there are numerous instances of using "power of 2" in the code,
translated user-facing messages use "power of two". Fix two instances
which used "power of 2" instead.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20230829.175615.682972421946735863.horikyota.ntt@gmail.com
After commit b0bea38705, syslogger prints 63 warnings about failing to
close a listen socket at postmaster startup. That's because the
syslogger process forks before the ListenSockets array is initialized,
so ClosePostmasterPorts() calls "close(0)" 64 times. The first call
succeeds, because fd 0 is stdin.
This has been like this since commit 9a86f03b4e in version 13, which
moved the SysLogger_Start() call to before initializing ListenSockets.
We just didn't notice until commit b0bea38705 added the LOG message.
Reported by Michael Paquier and Jeff Janes.
Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/ZOvvuQe0rdj2slA9%40paquier.xyz
Discussion: https://www.postgresql.org/message-id/ZO0fgDwVw2SUJiZx@paquier.xyz#482670177eb4eaf4c9f03c1eed963e5f
Backpatch-through: 13
Since its introduction in 10074651e3, pg_promote() has been returning
a false status in three cases:
- SIGUSR1 not sent to the postmaster process.
- Postmaster death during standby promotion.
- Standby not promoted within the specified wait time.
An application calling this function will have a hard time understanding
what a returned false status actually means.
Per discussion, this switches the first two cases to fail rather than
return a "false" status, making the second case more consistent with the
existing CHECK_FOR_INTERRUPTS in the wait loop. False is only returned
when the promotion is not completed within the specified time (60s by
default).
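For reference, a call spelling out the defaults (this signature predates
the change):
SELECT pg_promote(wait => true, wait_seconds => 60);
Such a call now raises an error if SIGUSR1 cannot be sent or if the
postmaster dies, and returns false only if the promotion does not
complete within 60 seconds.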
Author: Ashutosh Sharma
Reviewed-by: Fujii Masao, Laurenz Albe, Michael Paquier
Discussion: https://postgr.es/m/CAE9k0P=QTrwptL0t4J0fuBRDDjgsT-0PVKd-ikd96i1hyL7Bcg@mail.gmail.com
Make the primary messages more compact and make the detail messages
uniform. In initdb.c and pg_resetwal.c, use the newish
option_parse_int() to simplify some of the option parsing. For the
backend GUC wal_segment_size, add a GUC check hook to do the
verification instead of coding it in bootstrap.c. This might be
overkill, but that way the check is in the right place and it becomes
more self-documenting.
In passing, make pg_controldata use the logging API for warning
messages.
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/9939aa8a-d7be-da2c-7715-0a0b5535a1f7@eisentraut.org
OpenSSL in FIPS mode rejects several encrypted private keys used in
the test suites ssl and ssl_passphrase_callback. This is because they
are in a "traditional" OpenSSL format that uses MD5 for key
generation. The fix is to convert them to the more standard PKCS#8
format that uses SHA1 for key derivation.
This commit contains the converted keys, with the conversion done like
this:
openssl pkcs8 -topk8 -in src/test/modules/ssl_passphrase_callback/server.key -passin pass:FooBaR1 -out src/test/modules/ssl_passphrase_callback/server.key.new -passout pass:FooBaR1
mv src/test/modules/ssl_passphrase_callback/server.key.new src/test/modules/ssl_passphrase_callback/server.key
etc., as well as updated build rules to generate the keys in the new
format if they need to be regenerated.
Reviewed-by: Jacob Champion <jchampion@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/64de784b-8833-e055-3bd4-7420e6675351%40eisentraut.org
Interval values now generate an error when the user specifies multiple
consecutive units or a unit without a value. Previously, it was
possible to specify multiple units consecutively, which is contrary to
what the documentation allows, so it was possible to end up with
confusing interval values.
This is a follow-up of the work done in 165d581f14.
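Hedged examples of the new behavior:
SELECT INTERVAL '1 day 2 hours';  -- still valid
SELECT INTERVAL '1 day hours';    -- now an error (unit without a value)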
Author: Joseph Koshakow
Reviewed-by: Jacob Champion, Gurjeet Singh, Reid Thompson
Discussion: https://postgr.es/m/CAAvxfHd-yNO+XYnUxL=GaNZ1n+eE0V-oE0+-cC1jdjdU0KS3iw@mail.gmail.com
This commit Restrict the unit "ago" to only appear at the end of the
interval. According to the documentation, a direction can only be
defined at the end of an interval, but it was possible to define it in
the middle of the string or define it multiple times.
In spirit, this is similar to the error handling improvements done in
5b3c595355 or bcc704b524.
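Hedged examples of the new behavior:
SELECT INTERVAL '1 day ago';  -- still valid
SELECT INTERVAL 'ago 1 day';  -- now an error ("ago" not at the end)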
Author: Joseph Koshakow
Reviewed-by: Jacob Champion, Gurjeet Singh, Reid Thompson
Discussion: https://postgr.es/m/CAAvxfHd-yNO+XYnUxL=GaNZ1n+eE0V-oE0+-cC1jdjdU0KS3iw@mail.gmail.com
This commit removes some dead code related to the unit type RESERV,
whose last use has been removed from the unit lookup table used for
intervals ("deltatktbl" in datetime.c) in 666cbae16d. Before that,
RESERV was used as an equivalent of "invalid", but that's now
unreachable.
Author: Joseph Koshakow
Reviewed-by: Jacob Champion, Gurjeet Singh, Reid Thompson
Discussion: https://postgr.es/m/CAAvxfHd-yNO+XYnUxL=GaNZ1n+eE0V-oE0+-cC1jdjdU0KS3iw@mail.gmail.com
This commit switches query jumbling so that prepared statement names are
treated as constants in DeallocateStmt. A boolean field is added to
DeallocateStmt to make a distinction between ALL and named prepared
statements, as "name" was used to make this distinction before, NULL
meaning DEALLOCATE ALL.
Prior to this commit, DEALLOCATE was not tracked in pg_stat_statements
because it was not possible to treat its name parameter as a constant.
Now that query jumbling applies to all the utility nodes, this reason
does not apply anymore.
Like 638d42a3c5, this can be a huge advantage for monitoring workloads
where prepared statement names are randomly generated, preventing bloat
in
pg_stat_statements. A couple of tests are added to track the new
behavior.
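As a hedged illustration, both statements below would now collapse into
a single normalized pg_stat_statements entry:
DEALLOCATE stmt1;
DEALLOCATE stmt2;
-- both reported as: DEALLOCATE $1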
Author: Dagfinn Ilmari Mannsåker, Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/ZMhT9kNtJJsHw6jK@paquier.xyz
Adding an extra LOG for connections that have not set an authn ID, like
when the "trust" authentication method is used, is useful for audit
purposes.
A couple of TAP tests for SSL and authentication need to be tweaked to
adapt to this new LOG entry, as some scenarios that expected no log
entries now get one.
Reported-by: Shaun Thomas
Author: Jacob Champion
Reviewed-by: Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/CAFdbL1N7-GF-ZXKaB3XuGA+CkSmnjFvqb8hgjMnDfd+uhL2u-A@mail.gmail.com
We now create contype='n' pg_constraint rows for not-null constraints.
We propagate these constraints to other tables during operations such as
adding inheritance relationships, creating and attaching partitions and
creating tables LIKE other tables. We also spawn not-null constraints
for inheritance child tables when their parents have primary keys.
These related constraints mostly follow the well-known rules of
conislocal and coninhcount that we have for CHECK constraints, with some
adaptations: for example, as opposed to CHECK constraints, we don't
match not-null ones by name when descending a hierarchy to alter it,
instead matching by the name of the column they apply to. This means we
don't require the constraint names to be identical across a hierarchy.
For now, we omit them for system catalogs. Maybe this is worth
reconsidering. We don't support NOT VALID nor DEFERRABLE clauses
either; these can be added as separate features later (this patch is
already large and complicated enough.)
psql shows these constraints in \d+.
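A minimal sketch of the catalog representation (output shape assumed):
CREATE TABLE t (a int NOT NULL);
SELECT conname, contype FROM pg_constraint WHERE conrelid = 't'::regclass;
-- expected: one row with contype = 'n'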
pg_dump requires some ad-hoc hacks, particularly when dumping a primary
key. We now create one "throwaway" not-null constraint for each column
in the PK together with the CREATE TABLE command, and once the PK is
created, all those throwaway constraints are removed. This avoids
having to check each tuple for nullness when the dump restores the
primary key creation.
pg_upgrading from an older release requires a somewhat brittle procedure
to create a constraint state that matches what would be created if the
database were being created fresh in Postgres 17. I have tested all the
scenarios I could think of, and it works correctly as far as I can tell,
but I could have neglected weird cases.
This patch has been very long in the making. The first patch was
written by Bernd Helmle in 2010 to add a new pg_constraint.contype value
('n'), which I (Álvaro) then hijacked in 2011 and 2012, until that one
was killed by the realization that we ought to use contype='c' instead:
manufactured CHECK constraints. However, later SQL standard
development, as well as nonobvious emergent properties of that design
(mostly, failure to distinguish them from "normal" CHECK constraints as
well as the performance implication of having to test the CHECK
expression) led us to reconsider this choice, so now the current
implementation uses contype='n' again. During Postgres 16 this had
already been introduced by commit e056c557ae, but there were some
problems mainly with the pg_upgrade procedure that couldn't be fixed in
reasonable time, so it was reverted.
In 2016 Vitaly Burovoy also worked on this feature[1] but found no
consensus for his proposed approach, which was claimed to be closer to
the letter of the standard, requiring an additional pg_attribute column
to track the OID of the not-null constraint for that column.
[1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Commit 2a8b40e36 introduced the worker type field for logical
replication workers, but forgot to reset the type when a worker exits.
This can lead to recognizing a stopped worker as a valid logical
replication worker.
Fix it by resetting the worker type, and additionally add a safeguard
to not use a LogicalRepWorker until ->in_use is verified.
Reported-by: Thomas Munro based on cfbot reports.
Author: Hou Zhijie, Alvaro Herrera
Reviewed-by: Amit Kapila
Discussion: http://postgr.es/m/CA+hUKGK2RQh4LifVgBmkHsCYChP-65UwGXOmnCzYVa5aAt4GWg@mail.gmail.com
When running all (or just many) of our tests, a significant portion of both
CPU time and IO is spent running initdb. Most of those initdb runs don't
specify any options influencing properties of the created data directory.
Avoid most of that overhead by creating a "template" data directory, alongside
the temporary installation. Instead of running initdb, pg_regress and
TAP tests can copy that data directory. When a TAP test specifies options to
initdb, the template data directory is not used. That could be relaxed for
some options, but it's not clear it's worth the effort.
There unfortunately is some duplication between pg_regress.c and Cluster.pm,
but there are no easy ways of sharing that code without introducing additional
complexity.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20220120021859.3zpsfqn4z7ob7afz@alap3.anarazel.de
Commit 7b378237aa added a status message to pg_upgrade that is 60
characters wide. Since the MESSAGE_WIDTH macro is currently set to
60, there is no space between this new status message and the "ok"
or "failed" indicator appended when the step completes. To fix
this problem, this commit increases the value of MESSAGE_WIDTH to
62.
Suggested-by: Bharath Rupireddy
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/CALj2ACVVvk1cYLtWVxHv%3DZ1Ubq%3DUES9fhKbUU4c9k4W%2BfEDnbw%40mail.gmail.com
Backpatch-through: 16
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot. Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot. We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot. This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL. We can fix it in the same way, by excluding those
command types from revalidation.
However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands. To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree. If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.
stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile. It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.
In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.
Per bug #18059 from Pavel Kulakov. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
This code is insufficiently covered by tests, so add a few small test
cases to immortalize its behavior before it gets rewritten completely by
the project to catalog NOT NULL constraints.
It's good hygiene if e.g. an extension launches a subprogram when
being loaded. We went through some effort to close them in the child
process in EXEC_BACKEND mode, but it's better to not hand them down to
the child process in the first place. We still need to close them
after fork when !EXEC_BACKEND, but it's a little simpler.
In passing, LOG a message if closing the client connection or the
listen socket fails. That shouldn't happen, but if it does, it would be
nice to know.
Reviewed-by: Tristan Partin, Andres Freund, Thomas Munro
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
Having argument names makes it easier to understand how to use the
aggregate functions when inspecting them with \dfa or similar.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Discussion: https://postgr.es/m/877cw3jl8y.fsf@wibble.ilmari.org
SnapBuildRestoreContents() used a constant value in the error message to
indicate the size in bytes it was expecting to read from the serialized
snapshot file. Fix it by reporting the size that was actually passed.
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 16
Discussion: http://postgr.es/m/OS0PR01MB5716D408364F7DF32221C08D941FA@OS0PR01MB5716.jpnprd01.prod.outlook.com
The meson build scripts attempt to find files left over from a
configure build and fail if any are present, mentioning that "make
maintainer-clean" should be run to remove them. This seems to have been
done only for files generated by configure itself.
pg_config_paths.h is generated during the actual make build, so it seems
to have been missed. This would result in compilation using the wrong
pg_config_paths.h file.
Here we just add this file to generated_sources_ac so that meson errors
out if pg_config_paths.h exists.
Likely this wasn't noticed before because "make maintainer-clean"
removes pg_config_paths.h. However, people using the MSVC build scripts
are more likely to run into issues, since they have to remove these
files manually, and pg_config_paths.h wasn't listed as a conflicting
file to remove in the meson log.
Backpatch-through: 16, where meson support was added
Discussion: https://postgr.es/m/CAApHDvqjYOxZfmLKAOWKFEE7LOr9_E6UA6YNmx9r8nxStcS3gg@mail.gmail.com
See prior commit for an explanation for the goal of the change and why it had
to be split into two commits.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de
Backpatch: 15-, where CI support was added
getprotobyname returns undefined on some CI machines. It's not clear
why. The code overall still works, but it raises a warning.
In PostgreSQL C code, we always call socket() with 0 for the protocol
argument, so we should be able to do the same in Perl (since the Perl
documentation says that the arguments of the socket function are the
same as in C). So do that, to avoid the issue.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
This commit introduces functions for converting numbers to their
equivalent binary and octal representations. Also, the base
conversion code for these functions and to_hex() has been moved to
a common helper function.
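Assuming the new functions follow the naming of the existing to_hex()
as to_bin() and to_oct() (names not stated above), usage would look
like:
SELECT to_bin(42), to_oct(42), to_hex(42);
-- 101010 | 52 | 2a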
Co-authored-by: Eric Radman
Reviewed-by: Ian Barwick, Dag Lem, Vignesh C, Tom Lane, Peter Eisentraut, Kirk Wolak, Vik Fearing, John Naylor, Dean Rasheed
Discussion: https://postgr.es/m/Y6IyTQQ/TsD5wnsH%40vm3.eradman.com
Some of the ambuildempty functions used smgrwrite() directly, followed
by smgrimmedsync(). A few small problems with that:
Firstly, one is supposed to use smgrextend() when extending a
relation, not smgrwrite(). It doesn't make much difference in
production builds. smgrextend() updates the relation size cache, so
you miss that, but that's harmless because we never use the cached
relation size of an init fork. But if you compile with
CHECK_WRITE_VS_EXTEND, you get an assertion failure.
Secondly, the smgrwrite() calls were performed before WAL-logging, so
the page image written to disk had 0/0 as the LSN, not the LSN of the
WAL record. That's also harmless in practice, but seems sloppy.
Thirdly, it's better to use the buffer cache, because then you don't
need to smgrimmedsync() the relation to disk, which adds latency.
Bypassing the cache makes sense for bulk operations like index
creation, but not when you're just initializing an empty index.
Creation of unlogged tables is hardly a performance bottleneck in any
real-world application, but nevertheless.
Backpatch to v16, but no further. These issues should be harmless in
practice, so better to not rock the boat in older branches.
Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/6e5bbc08-cdfc-b2b3-9e23-1a914b9850a9@iki.fi
The new_cluster parameter in check_for_new_tablespace_dir was
shadowing the globally defined new_cluster variable, causing
compiler warnings when running with -Wshadow. The function is
only applicable to the new cluster, so remove the parameter
rather than rename to match check_new_cluster_is_empty which
also only applies to the new cluster.
Author: Peter Smith <peter.b.smith@fujitsu.com>
Discussion: https://postgr.es/m/CAHut+PvS_PHLntWy1yTgXv0O1tWm4iVcKBQFzpoQRDsm2Ce_Fg@mail.gmail.com
This commit introduces descriptively-named macros for the
identifiers used in wire protocol messages. These new macros are
placed in a new header file so that they can be easily used by
third-party code.
Author: Dave Cramer
Reviewed-by: Alvaro Herrera, Tatsuo Ishii, Peter Smith, Robert Haas, Tom Lane, Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/CADK3HHKbBmK-PKf1bPNFoMC%2BoBt%2BpD9PH8h5nvmBQskEHm-Ehw%40mail.gmail.com
Commit 31966b15 invented a way for functions dealing with relation
extension to accept a Relation in online code and an SMgrRelation in
recovery code. It seems highly likely that future bufmgr.c interfaces
will face the same problem, and need to do something similar.
Generalize the names so that each interface doesn't have to re-invent
the wheel.
Back-patch to 16. Since extension AM authors might start using the
constructor macros once 16 ships, we agreed to do the rename in 16
rather than waiting for 17.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2B6tLD2BhpRWycEoti6LVLyQq457UL4ticP5xd8LqHySA%40mail.gmail.com
Attribute missing values might be needed past the lifetime of the tuple
descriptors from which they are extracted. To avoid possibly using
pointers for by-reference values which might thus be left dangling, we
cache a datumCopy'd version of the datum in the TopMemoryContext. Since
we first search for the value, this only needs to be done once per
session for any such value.
Original complaint from Tom Lane, idea for mitigation by Andrew Dunstan,
tweaked by Tom Lane.
Backpatch to version 11 where missing values were introduced.
Discussion: https://postgr.es/m/1306569.1687978174@sss.pgh.pa.us
This commit fixes the function of $subject for shared relations. This
feature has been added by e042678. Unfortunately, this new behavior got
removed by 5891c7a when moving statistics to shared memory.
Reported-by: Mitsuru Hinata
Author: Masahiro Ikeda
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada
Discussion: https://postgr.es/m/7cc69f863d9b1bc677544e3accd0e4b4@oss.nttdata.com
Backpatch-through: 15
This new view, wrapped around a SRF, shows some information known about
wait events, namely:
- Name.
- Type (Activity, I/O, Extension, etc.).
- Description.
All the information retrieved comes from wait_event_names.txt, and the
description is the same as in the documentation, with filters applied to
remove any XML markup. This view is useful when joined with
pg_stat_activity to get the description of a reported wait event.
Custom wait events for extensions are included in the view.
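For instance, a sketch of such a join, based on the column list above:
SELECT a.pid, a.wait_event, w.description
FROM pg_stat_activity a
JOIN pg_wait_events w
ON (a.wait_event_type = w.type AND a.wait_event = w.name)
WHERE a.wait_event IS NOT NULL;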
Original idea by Yves Colin.
Author: Bertrand Drouvot
Reviewed-by: Kyotaro Horiguchi, Masahiro Ikeda, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/0e2ae164-dc89-03c3-cf7f-de86378053ac@gmail.com
A significant chunk of the time on the macOS CI task is spent installing
packages using Homebrew. The downloads of the packages are cached, but
the installation needs to happen every time. We can't cache the whole
Homebrew installation, because it is too large due to pre-installed
packages.
Speed this up by installing packages using MacPorts and caching the
installation as a .dmg. That's a lot faster than unpacking a tarball.
In addition, don't install llvm - it wasn't enabled when building, so
it's just a waste of time/space.
This substantially speeds up the macOS CI time, both in the cold cache
and in the warm cache case (the latter from ~1m20s to ~5s).
It doesn't seem great to have diverging sources of packages for CI between
branches, so backpatch to 15 (where CI was added).
Discussion: https://postgr.es/m/20230805202539.r3umyamsnctysdc7@awork3.anarazel.de
Backpatch: 15-, where CI was added
The OAT hooks are added in ALTER TABLE for the following subcommands:
- { ENABLE | DISABLE | [NO] FORCE } ROW LEVEL SECURITY
- { ENABLE | DISABLE } TRIGGER
- { ENABLE | DISABLE } RULE. Note that there was a hook for pg_rewrite,
but not for the relation ALTER'ed in pg_class.
Tests are added to test_oat_hook for all the subcommand patterns gaining
hooks here. Based on an ask from Legs Mansion.
Discussion: https://postgr.es/m/tencent_083B3850655AC6EE04FA0A400766D3FE8309@qq.com
The source code comment already said that the presence of the field
element_types.domain_default might be a bug in the standard, since it
never made sense there. Indeed, the field is gone in newer versions
of the standard. So just remove it.
Previously, if a specialized comparator found equal datum1 keys,
the "comparetup" function would repeat the comparison on the
datum before proceeding with the unabbreviated first key
and/or additional sort keys.
Move comparing additional sort keys into "tiebreak" functions so
that specialized comparators can call these directly if needed,
avoiding duplicate work.
Reviewed by David Rowley
Discussion: https://postgr.es/m/CAFBsxsGaVfUrjTghpf%3DkDBYY%3DjWx1PN-fuusVe7Vw5s0XqGdGw%40mail.gmail.com
This was disabled in commit 6f80a8d9c due to the lack of support for
handling of pseudoconstant quals assigned to replaced joins in
createplan.c. To re-allow it, this patch adds the support by 1)
modifying the ForeignPath and CustomPath structs so that if they
represent foreign and custom scans replacing a join with a scan, they
store the list of RestrictInfo nodes to apply to the join, as in
JoinPaths, and by 2) modifying create_scan_plan() in createplan.c so
that it uses that list in that case, instead of the baserestrictinfo
list, to get pseudoconstant quals assigned to the join, as mentioned in
the commit message for that commit.
Important item for the release notes: this is non-backwards-compatible
since it modifies the ForeignPath and CustomPath structs, as mentioned
above, and changes the argument lists for FDW helper functions
create_foreignscan_path(), create_foreign_join_path(), and
create_foreign_upper_path().
Richard Guo, with some additional changes by me, reviewed by Nishant
Sharma, Suraj Kharage, and Richard Guo.
Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
Commit b91dd9de was concerned with a theoretical problem with our
non-atomic condition variable operations. If you stop sleeping, and
then cancel the sleep in a separate step, you might be signaled in
between, and that could be lost. That doesn't matter for callers of
ConditionVariableBroadcast(), but callers of ConditionVariableSignal()
might be upset if a signal went missing like this.
Commit bc971f4025 interacted badly with that logic, because it doesn't
use ConditionVariableSleep(), which would normally put us back in the
wait list. ConditionVariableCancelSleep() would be confused and think
we'd received an extra signal, and try to forward it to another backend,
resulting in wakeup storms.
New idea: ConditionVariableCancelSleep() can just return true if we've
been signaled. Hypothetical users of ConditionVariableSignal() would
then still have a way to deal with rare lost signals if they are
concerned about that problem.
Back-patch to 16, where bc971f4025 arrived.
Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/2840876b-4cfe-240f-0a7e-29ffd66711e7%40enterprisedb.com
The new relation extension logic, introduced in 00d1e02be2, could lead to
slowdowns in some scenarios. E.g., when loading narrow rows into a table using
COPY, the caller of RelationGetBufferForTuple() will only request a small
number of pages. Without concurrency, we just extended using pwritev() in that
case. However, if there is *some* concurrency, we switched between extending
by a small number of pages and a larger number of pages, depending on the
number of waiters for the relation extension logic. However, some
filesystems, XFS in particular, do not perform well when switching between
extending files using fallocate() and pwritev().
To avoid that issue, remember the number of prior relation extensions in
BulkInsertState and extend more aggressively if there were prior relation
extensions. That not only avoids the aforementioned slowdown, but also leads
to noticeable performance gains in other situations, primarily due to
extending more aggressively when there is no concurrency. I should have done
it this way from the get go.
Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940=Kp6mszNGK3bq9yRN6g@mail.gmail.com
Backpatch: 16-, where the new relation extension code was added
Currently, the names of custom wait events must be registered in each
backend, requiring all of them to link to the shared memory area of an
extension, even if the extension is not loaded with
shared_preload_libraries.
This patch relaxes the constraints related to this infrastructure by
storing the wait events and their names in two dynamic hash tables in
shared memory. This has the advantage of simplifying the registration
of custom wait events down to a single routine call that returns an
event ID ready for consumption:
uint32 WaitEventExtensionNew(const char *wait_event_name);
The caller of this routine can then cache locally the ID returned, to be
used for pgstat_report_wait_start(), WaitLatch() or a similar routine.
The implementation uses two hash tables: one with a key based on the
event name to avoid duplicates and a second using the event ID as key
for event lookups, as in pg_stat_activity. These tables can hold a
minimum of 16 entries and a maximum of 128 entries, which should be
plenty.
The code changes done in worker_spi show how things are simplified (most
of the code removed in this commit comes from there):
- worker_spi_init() is gone.
- No more shared memory hooks required (size requested and
initialization).
- The custom wait event ID is cached in the process that needs to set
it, with one single call to WaitEventExtensionNew() to retrieve it.
Per suggestion from Andres Freund.
Author: Masahiro Ikeda, with a few tweaks from me.
Discussion: https://postgr.es/m/20230801032349.aaiuvhtrcvvcwzcx@awork3.anarazel.de
We deduce a LogicalRepWorker's type from the values of several different
fields ('relid' and 'leader_pid') whenever logic needs to know it.
In fact, the logical replication worker type is already known at the time
of launching the LogicalRepWorker and it never changes for the lifetime of
that process. Instead of deducing the type, it is simpler to just store it
one time, and access it directly thereafter.
Author: Peter Smith
Reviewed-by: Amit Kapila, Bharath Rupireddy
Discussion: http://postgr.es/m/CAHut+PttPSuP0yoZ=9zLDXKqTJ=d0bhxwKaEaNcaym1XqcvDEg@mail.gmail.com
pg_logical_emit_message(false, '_', repeat('x', 1069547465)) failed with
self-contradictory message "WAL record would be 1069547520 bytes (of
maximum 1069547520 bytes)". There's no particular benefit from allowing
or denying one byte in either direction; XLogRecordMaxSize could rise a
few megabytes without trouble. Hence, this is just for cleanliness.
Back-patch to v16, where this check first appeared.
This relies on the "location" field added to TransactionStmt in 31de7e6,
now applied to the "gid" field used by 2PC commands. These commands are
now reported like:
COMMIT PREPARED $1
PREPARE TRANSACTION $1
ROLLBACK PREPARED $1
Applying constants for these commands is a huge advantage for workloads
that rely a lot on 2PC commands with different GIDs. Some tests are
added to track the new behavior.
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/ZMhT9kNtJJsHw6jK@paquier.xyz
Store function config settings in lists to avoid the need to parse and
allocate for each function execution.
Speedup is modest but significant. Additionally, this change also
seems cleaner and supports some other performance improvements under
discussion.
Discussion: https://postgr.es/m/04c8592dbd694e4114a3ed87139a7a04e4363030.camel@j-davis.com
Reviewed-by: Nathan Bossart
The fix itself is fine, but the test revealed other problems related
to parallel query that are not easily fixable. Remove the test for
now to fix the buildfarm.
Discussion: https://postgr.es/m/88825.1691665432@sss.pgh.pa.us
Backpatch-through: 11
Commit 19d8e2308b changed the list of set-of-columns that can be
returned by RelationGetIndexAttrBitmap, but didn't update its
"documentation". That was pretty hard to read already, so rewrite to
make it more comprehensible, adding the missing values while at it.
Backpatch to 16, like that commit.
Discussion: https://postgr.es/m/20230809091155.7c7f3gttjk3dj4ze@alvherre.pgsql
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
As with the Intel and Arm CRC instructions, compiler intrinsics for
them must be supported by the compiler. In contrast, no runtime check
is needed. Aligned memory access is faster, so use the Arm coding as
a model.
YANG Xudong
Discussion: https://postgr.es/m/b522a0c5-e3b2-99cc-6387-58134fb88cbe%40ymatrix.cn
In-place tablespaces would be dumped with the path produced by
pg_tablespace_location(), which is in this case a relative path built as
pg_tblspc/OID, but this would fail to restore as such tablespaces need
to use an empty string as location. In order to detect if an in-place
tablespace is used, this commit checks if the path returned is relative
and adapts the dump contents accordingly.
Like the other changes related to in-place tablespaces, no backpatch is
done as these are only intended for development purposes. Rui Zhao has
fixed the code, while the test is from me.
Author: Rui Zhao, Michael Paquier
Discussion: https://postgr.es/m/80c80b4a-b87b-456f-bd46-1ae326601d79.xiyuan.zr@alibaba-inc.com
fa88928 has introduced wait_event_names.txt, and some of its entries
had documentation fields with more information than necessary.
This commit brings back the description of all the wait events to be
consistent with the older stable branches. Five descriptions were
incorrect.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/e378989e-1899-643a-dec1-10f691a0a105@gmail.com
Substituting such values in extension scripts facilitated SQL injection
when @extowner@, @extschema@, or @extschema:...@ appeared inside a
quoting construct (dollar quoting, '', or ""). No bundled extension was
vulnerable. Vulnerable uses do appear in a documentation example and in
non-bundled extensions. Hence, the attack prerequisite was an
administrator having installed files of a vulnerable, trusted,
non-bundled extension. Subject to that prerequisite, this enabled an
attacker having database-level CREATE privilege to execute arbitrary
code as the bootstrap superuser. By blocking this attack in the core
server, there's no need to modify individual extensions. Back-patch to
v11 (all supported versions).
Reported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph
Berg.
Security: CVE-2023-39417
The use of Memoize was already disabled in normal joins when the join
conditions had volatile functions per the code in
match_opclause_to_indexcol(). Ordinarily, the parameterization for the
inner side of a nested loop will be an Index Scan or at least eventually
lead to an index scan (perhaps nested several joins deep). However, for
lateral joins, that's not the case and seq scans can be parameterized
too, so we can't rely on match_opclause_to_indexcol().
Here we explicitly check the parameterization for volatile functions and
don't consider the generation of a Memoize path when such functions
are present.
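A hedged sketch of the affected shape, with hypothetical tables, where
the lateral reference puts a volatile function into the inner scan's
parameterization:
SELECT *
FROM t1,
     LATERAL (SELECT * FROM t2 WHERE t2.a = t1.a + random()) AS s;
With this change, no Memoize node is considered above the parameterized
scan of t2.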
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49nHFnHbpepLsv_yF3qkpCS4BdB-v8HoJVv8_=Oat0u_w@mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
If MERGE executes an UPDATE action on a table with row-level security,
the code incorrectly applied the WITH CHECK clauses from the target
table's INSERT policies to new rows, instead of the clauses from the
table's UPDATE policies. In addition, it failed to check new rows
against the target table's SELECT policies, if SELECT permissions were
required (likely to always be the case).
In addition, if MERGE executes a DO NOTHING action for matched rows,
the code incorrectly applied the USING clauses from the target table's
DELETE policies to existing target tuples. These policies were applied
as checks that would throw an error, if they did not pass.
Fix this, so that a MERGE UPDATE action applies the same RLS policies
as a plain UPDATE query with a WHERE clause, and a DO NOTHING action
does not apply any RLS checks (other than adding clauses from SELECT
policies to the join).
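For illustration, a minimal sketch (hypothetical table, source and policy
names): after this fix, rows written by the UPDATE action are checked
against the UPDATE policy's WITH CHECK clause rather than any INSERT
policy:
    CREATE POLICY target_upd ON target FOR UPDATE
      USING (true) WITH CHECK (val >= 0);
    MERGE INTO target t
      USING src s ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET val = s.val;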
Back-patch to v15, where MERGE was introduced.
Dean Rasheed, reviewed by Stephen Frost.
Security: CVE-2023-39418
If we define ZLIB_CONST before including zlib.h, zlib augments some
interfaces with const decorations. By doing that we can keep our own
interfaces cleaner and can remove some unconstify calls.
ZLIB_CONST was introduced in zlib 1.2.5.2 (17 Dec 2011). When
compiling with older zlib releases, you might now get compiler
warnings about discarding qualifiers.
CentOS 6 has zlib 1.2.3, but in 8e278b6576, we removed support for the
OpenSSL release in CentOS 6, so it seems ok to de-support the zlib
release in CentOS 6 as well.
Reviewed-by: Tristan Partin <tristan@neon.tech>
Discussion: https://www.postgresql.org/message-id/flat/33462926-bb1e-7cc9-8d92-d86318e8ed1d%40eisentraut.org
The code in join_search_one_level was a bit convoluted. With a casual
glance, you might think that other_rels_list was being set to something
other than joinrels[1] when level == 2; however, joinrels[level - 1] is
joinrels[1] when level == 2, so nothing special needs to happen to set
other_rels_list. Let's clean that up to avoid confusing anyone.
In passing, we may as well modernize the loop in
make_rels_by_clause_joins(): instead of passing in the ListCell to start
looping from, just pass in the index to start from and make use of
for_each_from(). Ever since 1cff1b95a, Lists are arrays under the hood,
and lnext() and list_head() both seem a little too linked-list-like.
Author: Alex Hsieh, David Rowley, Richard Guo
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CANWNU8x9P9aCXGF%3DaT-A_8mLTAT0LkcZ_ySYrGbcuHzMQw2-1g%40mail.gmail.com
This test was recently added in 3900a02c9. It appears to be unstable
with regard to the join order, presumably because the relations on
either side of the join are equal in size. Here we add a qual to make
one of them smaller, so the planner is more likely to choose to hash the
smaller of the two.
Reported-by: Nathan Bossart, Tom Lane
Discussion: https://www.postgr.es/m/20230803235403.GC1238296@nathanxps13
Here we adjust the costs for WindowAggs so that they properly take into
account how much of their subnode they must read before outputting the
first row. Without this, we always assumed that the startup cost for the
WindowAgg was not much more expensive than the startup cost of its
subnode; however, that's going to be completely wrong in many cases. The
WindowAgg may have to read *all* of its subnode to output a single row
with certain window bound options.
Here we estimate how many rows we'll need to read from the WindowAgg's
subnode and proportionally add more of the subnode's run costs onto the
WindowAgg's startup costs according to how much of it we expect to have to
read in order to produce the first WindowAgg row.
The reason this is more important than we might have initially thought is
that we may end up making use of a path from the lower planner that works
well as a cheap startup plan when the query has a LIMIT clause, however,
the WindowAgg might mean we need to read far more rows than what the LIMIT
specifies.
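As an illustration (hypothetical table and column names), the WindowAgg
below must read its entire subnode to compute the count for the first
row, and its startup cost now reflects that despite the LIMIT:
    SELECT id,
           count(*) OVER (ORDER BY id
                          ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
    FROM tab
    ORDER BY id
    LIMIT 1;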
No backpatch on this so as not to cause plan changes in released
versions.
Bug: #17862
Reported-by: Tim Palmer
Author: David Rowley
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/17862-1ab8f74b0f7b0611@postgresql.org
Discussion: https://postgr.es/m/CAApHDvrB0S5BMv+0-wTTqWFE-BJ0noWqTnDu9QQfjZ2VSpLv_g@mail.gmail.com
Both apply and tablesync workers were using ApplyWorkerMain() as entry
point. As the name implies, ApplyWorkerMain() should be considered the
main function for apply workers. The tablesync worker's path was hidden
and did not have enough in common to share the same main function with
the apply worker.
Also, most of the code shared by both worker types is already combined
in LogicalRepApplyLoop(). There is no need to combine the rest in
ApplyWorkerMain() anymore.
This patch introduces TablesyncWorkerMain() as a new entry point for
tablesync workers. This aims to increase code readability and would help
with future improvements like the reuse of tablesync workers in the
initial synchronization.
Author: Melih Mutlu based on suggestions by Melanie Plageman
Reviewed-by: Peter Smith, Kuroda Hayato, Amit Kapila
Discussion: http://postgr.es/m/CAGPVpCTq=rUDd4JUdaRc1XUWf4BrH2gdSNf3rtOMUGj9rPpfzQ@mail.gmail.com
Between 6fcda9aba and 1b6f632a3, the pg_strtoint functions became quite
a bit slower in v16, despite efforts in 6b423ec67 to speed these up.
Since the majority of cases for these functions will only contain
base-10 digits, perhaps prefixed by a '-', it makes sense to have a
special case for this and just fall back on the more complex version
which processes hex, octal, binary and underscores if the fast path
version fails to parse the string.
While we're here, update the header comments for these functions to
mention that hex, octal and binary formats along with underscore
separators are now supported.
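For reference, a sketch of both paths (assuming a v16 or later server,
where these input formats are accepted):
    SELECT '42'::int;                       -- decimal digits only: fast path
    SELECT '0x2A'::int, '0b101010'::int,
           '1_000_000'::int;                -- falls back to the full parser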
Author: Andres Freund, David Rowley
Reported-by: Masahiko Sawada
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940%3DKp6mszNGK3bq9yRN6g%40mail.gmail.com
Backpatch-through: 16, where 6fcda9aba and 1b6f632a3 were added
The stats regression test attempts to ensure that Buffer Access Strategy
"reuses" are being counted in pg_stat_io by vacuuming a table which is larger
than the size of the strategy ring. However, when shared buffers are in
sufficiently high demand, another backend could evict one of the blocks in the
strategy ring before the first backend has a chance to reuse the buffer. The
backend using the strategy would then evict another shared buffer and add that
buffer to the strategy ring. This counts as an eviction and not a reuse in
pg_stat_io. Count both evictions and reuses in the test to ensure it does not
fail incorrectly.
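A sketch of the adjusted check (column and context names as in v16's
pg_stat_io; the actual test query differs):
    SELECT evictions + reuses FROM pg_stat_io
      WHERE context = 'vacuum' AND object = 'relation';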
Reported-by: Jeff Davis <pgsql@j-davis.com>,
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAAKRu_bNG27AxG9TdPtwsL6wg8AWbVckjmTL2t1HF=miDQuNtw@mail.gmail.com
This was failing for queries which try to get the .type() of a
jpiLikeRegex. For example:
select jsonb_path_query('["string", "string"]',
'($[0] like_regex ".{7}").type()');
Reported-by: Alexander Kozhemyakin
Bug: #18035
Discussion: https://postgr.es/m/18035-64af5cdcb5adf2a9@postgresql.org
Backpatch-through: 12, where SQL/JSON path was added.
The previous commit removed the "override" APIs. The surviving APIs
allow plancache.c to snapshot search_path and to test whether the
current value equals a remembered snapshot.
Aleksander Alekseev. Reported by Alexander Lakhin and Noah Misch.
Discussion: https://postgr.es/m/8ffb4650-52c4-6a81-38fc-8f99be981130@gmail.com
Two backend routines are added to allow extensions to allocate and
define custom wait events, all of them allocated in the type
"Extension":
* WaitEventExtensionNew(), which allocates a wait event ID computed from
a counter in shared memory.
* WaitEventExtensionRegisterName(), to associate a custom string with
the allocated wait event ID.
Note that this includes an example of how to use this new facility in
worker_spi with tests in TAP for various scenarios, and some
documentation about how to use them.
Any code in the tree that currently uses WAIT_EVENT_EXTENSION could
switch to this new facility to define custom wait events. This is left
as work for future patches.
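For example, once a module has allocated an event ID and registered a
name for it, a backend waiting on that event shows up as follows (the
event name here is hypothetical):
    SELECT wait_event_type, wait_event FROM pg_stat_activity
      WHERE wait_event_type = 'Extension';
    --  wait_event_type | wait_event
    --  Extension       | worker_spi_main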
Author: Masahiro Ikeda
Reviewed-by: Andres Freund, Michael Paquier, Tristan Partin, Bharath
Rupireddy
Discussion: https://postgr.es/m/b9f5411acda0cf15c8fbb767702ff43e@oss.nttdata.com
MSVC's _BitScan* functions return a boolean indicating whether any
bits were set in the input, and we were previously asserting that
they returned true, per our API. This is correct. However, other
platforms simply assert that the input is non-zero, so do that to be
more consistent.
Noted while investigating a hypothesis from Ranier Vilela about
undefined behavior, but this is not his proposed patch.
Discussion: https://www.postgresql.org/message-id/CAEudQAoDhUZyKGJ1vbMGcgVUOcsixe-%3DjcVaDWarqkUg163D2w%40mail.gmail.com
These are useful to extract the class ID and the event ID associated
with a single uint32 wait_event_info. Only two code paths use them now,
but an upcoming patch will extend their use.
Discussion: https://postgr.es/m/ZMcJ7F7nkGkIs8zP@paquier.xyz
libpq_source.c would consider any result returned by
pg_tablespace_location() as a symlink, resulting in run-time errors like
this:
pg_rewind: error: file "pg_tblspc/NN" is of different type in source and target
In-place tablespaces are directories located in pg_tblspc/, returned as
relative paths instead of absolute paths, so rely on that to tell a
normal tablespace apart from an in-place one. If the path is relative,
the tablespace is handled as a directory; if the path is absolute,
consider it a symlink.
In-place tablespaces are only intended for development purposes, so like
363e8f9 no backpatch is done. A test is added in pg_rewind with an
in-place tablespace and some data in it.
Author: Rui Zhao, Michael Paquier
Discussion: https://postgr.es/m/2b79d2a8-b2d5-4bd7-a15b-31e485100980.xiyuan.zr@alibaba-inc.com
The second portion of the tests had a race condition where the startup
of the dynamic workers could fail, in the event that the static workers,
started before them when the library is loaded through
shared_preload_libraries, had not yet finished creating their respective
schemas. The conflict is caused by the fact that the dynamic and static
workers used the same IDs, overlapping each other, so, for now, switch
the dynamic workers to use different IDs, leading to different schemas
being created.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20230728022332.egqzobhskmlf6ntr@awork3.anarazel.de
Commits 83dec5a712 and ff402ae11b taught vacuumdb to reuse
passwords instead of prompting repeatedly. However, the docs still
warn about repeated prompts, and this improvement was not applied
to clusterdb and reindexdb. This commit allows clusterdb and
reindexdb to reuse passwords just like vacuumdb does, and it
expunges the aforementioned warnings from the docs.
Reviewed-by: Gurjeet Singh, Zhang Mingli
Discussion: https://postgr.es/m/20230628045741.GA1813397%40nathanxps13
Commit e7cb7ee14, which introduced the infrastructure for FDWs and
custom scan providers to replace joins with scans, failed to add
handling of pseudoconstant quals assigned to replaced joins in
createplan.c, leading to an incorrect plan without a gating Result node
when postgres_fdw replaced a join with such a qual.
To fix, we could add the support by 1) modifying the ForeignPath and
CustomPath structs to store the list of RestrictInfo nodes to apply to
the join, as in JoinPaths, if they represent foreign and custom scans
replacing a join with a scan, and by 2) modifying create_scan_plan() in
createplan.c to use that list in that case, instead of the
baserestrictinfo list, to get pseudoconstant quals assigned to the join;
but #1 would cause an ABI break. So fix by modifying the infrastructure
to just disallow replacing joins with such quals.
Back-patch to all supported branches.
Reported by Nishant Sharma. Patch by me, reviewed by Nishant Sharma and
Richard Guo.
Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
Historically, hba.c limited tokens in the authentication configuration
files (pg_hba.conf and pg_ident.conf) to less than 256 bytes. We have
seen a few reports of this limit causing problems; notably, for
moderately-complex LDAP configurations. Let's get rid of the fixed
limit by using a StringInfo instead of a fixed-size buffer.
This actually takes less code than before, since we can get rid of
a nontrivial error recovery stanza. It's doubtless a hair slower,
but parsing the content of the HBA files should in no way be
performance-critical.
Although this is a pretty straightforward patch, it doesn't seem
worth the risk to back-patch given the small number of complaints
to date. In released branches, we'll just raise MAX_TOKEN to
ameliorate the problem.
Discussion: https://postgr.es/m/1588937.1690221208@sss.pgh.pa.us
This commit moves worker_spi to use TAP tests. sql/worker_spi.sql is
gone, replaced by an equivalent set of queries in a TAP script, without
worker_spi loaded in shared_preload_libraries:
- One query to launch a worker dynamically, relying now on "postgres" as
the default database to connect to.
- Two wait queries with poll_query_until(), one to wait for the worker
schema to be initialized and a second to wait for a tuple processed by
the worker.
- Server reload to accelerate the main loop of the spawned worker.
More coverage is added for workers registered when the library is loaded
with shared_preload_libraries, checking while on it that these connect
to the database set in the GUC worker_spi.database.
A local run of these tests shows that TAP is slightly faster than the
original while providing more coverage (3.7s vs 4.4s). There were also
some discussions about keeping the SQL tests, but this would require
initializing a cluster twice, increasing the runtime of the tests up to
5.6s here.
These tests will be expanded more in an upcoming patch aimed at adding
support for custom wait events for the Extension class, still under
discussion, to check the new in-core APIs with and without a library set
in shared_preload_libraries.
Bharath has written the part where shared_preload_libraries is used,
while I have migrated the existing SQL tests to TAP.
Author: Bharath Rupireddy, Michael Paquier
Reviewed-by: Masahiro Ikeda
Discussion: https://postgr.es/m/CALj2ACWR9ncAiDF73unqdJF1dmsA2R0efGXX2624X+YVxcAVWg@mail.gmail.com
9f8377f7a added code to allow COPY FROM to insert a column's default
value when the input matches the DEFAULT string specified in the COPY
command. Here we fix some inefficient code which needlessly palloc0'd an
array to store whether we should use the default value or the input
value for a given column. This array was being palloc0'd and pfree'd
once per row. It's much more efficient to allocate it once and just
reset the values once per row.
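For reference, a sketch of the feature whose code path is optimized here
(hypothetical table; '\D' is the user-chosen DEFAULT marker):
    COPY tab FROM stdin WITH (DEFAULT '\D');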
Reported-by: Masahiko Sawada
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940%3DKp6mszNGK3bq9yRN6g%40mail.gmail.com
Backpatch-through: 16, where 9f8377f7a was introduced.
There was already a check on the relation OID, but not on its index OID
and the attributes that can be used during syscache lookups. The two
assertions added by this commit are cheap, and actually useful for
developers to speed up the detection of incorrect data in a new entry
added to the syscache list, as these assertions are triggered during the
initial cache loading (initdb, or just backend startup), without
requiring a syscache lookup that uses the new entry.
While on it, the relation OID check is switched to use OidIsValid().
Author: Aleksander Alekseev
Reviewed-by: Dagfinn Ilmari Mannsåker, Zhang Mingli, Michael Paquier
Discussion: https://postgr.es/m/CAJ7c6TOjUTJ0jxvWY6oJeP2-840OF8ch7qscZQsuVuotXTOS_g@mail.gmail.com
In pg_stat_statements, savepoint names now show up as a constant with a
parameter symbol, using as the base query string the one added as a new
entry to the PGSS hash table, leading to:
RELEASE $1
ROLLBACK TO $1
SAVEPOINT $1
Applying constants to these query parts is a huge advantage for
workloads that generate randomly-named savepoints, like ORMs (Django is
at the origin of this patch). The ODBC driver is a second layer that
makes heavy use of savepoints, though it does not use a random naming
pattern.
A "location" field is added to TransactionStmt, now set only for
savepoints. The savepoint name is ignored by the query jumbling. The
location can be extended to other query patterns, if required, like 2PC
commands. Some tests are added to pg_stat_statements for all the query
patterns supported by the parser.
ROLLBACK, ROLLBACK TO SAVEPOINT and ROLLBACK TRANSACTION TO SAVEPOINT
have the same Node representation, so all of these are equivalent. The
same happens for RELEASE and RELEASE SAVEPOINT.
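For instance, all of the following are accounted under the normalized
entries shown above, whatever the savepoint names used:
    SAVEPOINT a1;
    RELEASE SAVEPOINT a1;
    SAVEPOINT b2;
    ROLLBACK TO SAVEPOINT b2;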
Author: Greg Sabino Mullane
Discussion: https://postgr.es/m/CAKAnmm+2s9PA4OaumwMJReWHk8qvJ_-g1WqxDRDAN1BSUfxyTw@mail.gmail.com
psql's --echo-hidden, --log-file, and --single-step options
generate extra lines to clearly separate queries from other output.
Presently, these extra lines are not valid SQL comments, which
makes them a hazard for anyone trying to copy/paste the decorated
queries into a client or query editor. This commit replaces the
starting and ending asterisks in these extra lines with forward
slashes so that they are valid SQL comments that can be copy/pasted
without incident.
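With --echo-hidden, for example, the decoration now reads as a valid SQL
comment (the asterisk counts here are illustrative):
    /********* QUERY **********/
    (query issued internally by the backslash command)
    /***************************/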
Author: Kirk Wolak
Reviewed-by: Pavel Stehule, Laurenz Albe, Tom Lane, Alvaro Herrera, Andrey Borodin
Discussion: https://postgr.es/m/CACLU5mTFJRJYtbvmZ26txGgmXWQo0hkGhH2o3hEquUPmSbGtBw%40mail.gmail.com
This patch introduces three SQL standard JSON functions:
JSON()
JSON_SCALAR()
JSON_SERIALIZE()
JSON() produces json values from text, bytea, json or jsonb values,
and has facilities for handling duplicate keys.
JSON_SCALAR() produces a json value from any scalar SQL value,
including json and jsonb.
JSON_SERIALIZE() produces text or bytea from input which contains
or represents json or jsonb.
For the most part these functions don't add any significant new
capabilities, but they will be of use to users wanting standard
compliant JSON handling.
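A few quick examples of the new functions:
    SELECT JSON('{"a": 1}');          -- json from text
    SELECT JSON_SCALAR(42);           -- a json scalar: 42
    SELECT JSON_SERIALIZE('{"a": 1}' RETURNING bytea);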
Catversion bumped as this changes ruleutils.c.
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera,
Peter Eisentraut
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
This is to export datum_to_json(), datum_to_jsonb(), and
jsonb_from_cstring(), though the last one is exported as
jsonb_from_text().
A subsequent commit to add new SQL/JSON constructor functions will
need them for calling from the executor.
Discussion: https://postgr.es/m/20230720160252.ldk7jy6jqclxfxkq%40alvherre.pgsql
Commit 5764f611e used dclist_delete_from() to remove the proc from the
wait queue. However, since that function doesn't reset the dlist_node's
next/prev to NULL, RemoveFromWaitQueue() could be called twice: once
when the process detects a deadlock, and again when cleaning up locks on
aborting the transaction. The waiting lock information is cleared in the
first call, so this led to a crash in the second call.
Backpatch to v16, where the change was introduced.
Bug: #18031
Reported-by: Justin Pryzby, Alexander Lakhin
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/ZKy4AdrLEfbqrxGJ%40telsasoft.com
Discussion: https://postgr.es/m/18031-ebe2d08cb405f6cc@postgresql.org
Backpatch-through: 16
This gives a way to distinguish, in ps output or pg_stat_activity,
between workers registered when the library is loaded with
shared_preload_libraries and workers launched dynamically.
Extracted from a larger patch by the same author.
Author: Bharath Rupireddy
Reviewed-by: Masahiro Ikeda
Discussion: https://postgr.es/m/CALj2ACWR9ncAiDF73unqdJF1dmsA2R0efGXX2624X+YVxcAVWg@mail.gmail.com
This commit adds a few comments about what LWLockWaitForVar() relies on
when a backend waits for a variable update on its LWLocks for WAL
insertions up to an expected LSN.
First, LWLockWaitForVar() does not include a memory barrier, relying on
a spinlock taken at the beginning of WaitXLogInsertionsToFinish(). This
was hidden behind two layers of routines in lwlock.c. This assumption
is now documented at the top of LWLockWaitForVar(), and detailed a bit
more within LWLockConflictsWithVar().
Second, document why WaitXLogInsertionsToFinish() does not include
memory barriers, relying on a spinlock at its top, which is, per Andres'
input, fine for two different reasons, both depending on the fact that
the caller of WaitXLogInsertionsToFinish() is waiting for an LSN up to a
certain value.
This area's documentation and assumptions could be improved more in the
future, but at least that's a beginning.
Author: Bharath Rupireddy, Andres Freund
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVF+6jLvqKe6xhDzCCkr=rfd6upaGc3477Pji1Ke9G7Bg@mail.gmail.com
Previously, when selecting a usable index for update/delete for a
REPLICA IDENTITY FULL table, IsIndexOnlyExpression() was used to check
that no index fields were expressions. However, this was not necessary,
because it is enough to check that only the leftmost index field is not
an expression (and references the remote table column), and that check
has already been done by RemoteRelContainsLeftMostColumnOnIdx().
This commit removes IsIndexOnlyExpression() and
RemoteRelContainsLeftMostColumnOnIdx(); all checks for usable indexes
for REPLICA IDENTITY FULL tables are now performed by
IsIndexUsableForReplicaIdentityFull().
Backpatch this to keep the code consistent.
Reported-by: Peter Smith
Reviewed-by: Amit Kapila, Önder Kalacı
Discussion: https://postgr.es/m/CAHut%2BPsGRE5WSsY0jcLHJEoA17MrbP9yy8FxdjC_ZOAACxbt%2BQ%40mail.gmail.com
Backpatch-through: 16
The WAL insertion lock variable insertingAt is currently being read
and written with the help of the LWLock wait list lock to avoid any read
of torn values. This wait list lock can become a point of contention on
highly concurrent write workloads.
This commit switches insertingAt to a 64b atomic variable that provides
torn-free reads/writes. On platforms without 64b atomic support, the
fallback implementation uses spinlocks to provide the same guarantees
for the values read. LWLockWaitForVar(), through
LWLockConflictsWithVar(), reads the new value to check if it still needs
to wait with a u64 atomic operation. LWLockUpdateVar() updates the
variable before waking up the waiters with an exchange_u64 (full memory
barrier). LWLockReleaseClearVar() now also uses an exchange_u64 to
reset the variable. Before this commit, all these steps relied on
LWLockWaitListLock() and LWLockWaitListUnlock().
This reduces contention on LWLock wait list lock and improves
performance of highly-concurrent write workloads. Here are some
numbers using pg_logical_emit_message() (HEAD at d6677b93) with various
arbitrary record lengths and client counts up to 4k on a rather large
machine (64 vCPUs, 512GB of RAM, 16 cores per socket, 2 sockets), in
terms of
TPS numbers coming from pgbench:
message_size_b | 16 | 64 | 256 | 1024
--------------------+--------+--------+--------+-------
patch_4_clients | 83830 | 82929 | 80478 | 73131
patch_16_clients | 267655 | 264973 | 250566 | 213985
patch_64_clients | 380423 | 378318 | 356907 | 294248
patch_256_clients | 360915 | 354436 | 326209 | 263664
patch_512_clients | 332654 | 321199 | 287521 | 240128
patch_1024_clients | 288263 | 276614 | 258220 | 217063
patch_2048_clients | 252280 | 243558 | 230062 | 192429
patch_4096_clients | 212566 | 213654 | 205951 | 166955
head_4_clients | 83686 | 83766 | 81233 | 73749
head_16_clients | 266503 | 265546 | 249261 | 213645
head_64_clients | 366122 | 363462 | 341078 | 261707
head_256_clients | 132600 | 132573 | 134392 | 165799
head_512_clients | 118937 | 114332 | 116860 | 150672
head_1024_clients | 133546 | 115256 | 125236 | 151390
head_2048_clients | 137877 | 117802 | 120909 | 138165
head_4096_clients | 113440 | 115611 | 120635 | 114361
Bharath has been measuring similar improvements, where the limit of the
WAL insertion lock begins to be felt when more than 256 concurrent
clients are involved in this specific workload.
An extra patch has been discussed to introduce a fast-exit path in
LWLockUpdateVar() when there are no waiters, but this does not influence
the write-heavy workload cases discussed here, as there are always
waiters. This will be considered separately.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVF+6jLvqKe6xhDzCCkr=rfd6upaGc3477Pji1Ke9G7Bg@mail.gmail.com
We include the message type while displaying an error context in the
apply worker. Previously, when retrieving the message type string for an
unknown message type, we would throw an error that hid the original
error. Instead, simply return a string indicating that the message type
is unknown.
Reported-by: Ashutosh Bapat
Author: Euler Taveira, Amit Kapila
Reviewed-by: Ashutosh Bapat
Backpatch-through: 15
Discussion: https://postgr.es/m/CAExHW5suAEDW-mBZt_qu4RVxWZ1vL54-L+ci2zreYWebpzxYsA@mail.gmail.com
Due to the bug, LimitAdditionalPins() could return 0, violating
LimitAdditionalPins()'s API ("One additional pin is always allowed"). This
could be hit when setting shared_buffers very low and using a fair amount of
concurrency.
This bug was introduced in 31966b151e.
Author: "Anton A. Melnikov" <aamelnikov@inbox.ru>
Reported-by: "Anton A. Melnikov" <aamelnikov@inbox.ru>
Reported-by: Victoria Shepard
Discussion: https://postgr.es/m/ae46f2fb-5586-3de0-b54b-1bb0f6410ebd@inbox.ru
Backpatch: 16-
Because PostgreSQL::Version is very nuanced about development version
numbers, the comparison to 16beta2 makes it think that that release is
older than 16, therefore applying a database tweak that doesn't work
there (the comparison is only supposed to match when run on version 15).
As suggested by Andrew Dunstan, fix by having AdjustUpgrade.pm public
methods create a separate PostgreSQL::Version object to use for these
comparisons, that only carries the major version number.
While at it, have the same methods ensure that the objects given are of
the expected type.
Backpatch to 16. This module goes all the way back to 9.2, but there's
probably no need for this fix except where betas still live.
Co-authored-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/20230719110504.zbu74o54bqqlsufb@alvherre.pgsql
This commit switches the client-side data generation from INSERT queries
to COPY for the two tables pgbench_branches and pgbench_tellers.
pgbench_accounts was already using COPY.
COPY is a better interface for bulk loading and high-latency connections
(this point can be countered with the option for server-side data
generation, but client-side remains the default), and measurements have
shown that using it for these two other tables can lead to improvements
during initialization. I did not notice slowdowns at large scale numbers
on a local setup either, as most of the work happens for the accounts
table.
Previously COPY was only used for the pgbench_accounts table because its
amount of data was much larger than that of the two other tables. The
code is refactored so that all three tables use the same code path to
execute the COPY queries, with a callback to build data rows.
Author: Tristan Partin
Discussion: https://postgr.es/m/CSTU5P82ONZ1.19XFUGHMXHBRY@c3po
The tables created by pgbench rely on a few assumptions for TPC-B, where
the "filler" attribute is used to comply with this benchmark's rules as
well as pgbench's historical behavior. The data generated for each table
uses this filler in a different way:
- pgbench_accounts uses it as a blank-padded empty string.
- pgbench_tellers and pgbench_branches use it as a NULL value.
There were no checks on the consistency of the data initialized, and
this showed up while discussing a patch that changes the logic in charge
of the client-side data generation (pgbench documents all that already
in its comments). This commit adds some checks on the data generated for
both the server-side and client-side logic.
Reviewed-by: Tristan Partin
Discussion: https://postgr.es/m/ZLik4oKnqRmVCM3t@paquier.xyz
After 3c90dcd03, try_partitionwise_join's child_joinrelids
variable is read only in an Assert, provoking a compiler
warning in non-assert builds. Rearrange code to avoid the
warning and eliminate unnecessary work in the non-assert case.
Per CI testing (via Jeff Davis and Bharath Rupireddy)
Discussion: https://postgr.es/m/ef0de9713e605451f1b60b30648c5ee900b2394c.camel@j-davis.com
Applying add_outer_joins_to_relids() to a child join doesn't actually
work, even if we've built a SpecialJoinInfo specialized to the child,
because that function will also compare the join's relids to elements
of the main join_info_list, which only deal in regular relids not
child relids. This mistake escaped detection by the existing
partitionwise join tests because they didn't test any cases where
add_outer_joins_to_relids() needs to add additional OJ relids (that
is, any cases where join reordering per identity 3 is possible).
Instead, let's apply adjust_child_relids() to the relids of the parent
join. This requires minor code reordering to collect the relevant
AppendRelInfo structures first, but that's work we'd do shortly anyway.
Report and fix by Richard Guo; cosmetic changes by me
Discussion: https://postgr.es/m/CAMbWs49NCNbyubZWgci3o=_OTY=snCfAPtMnM-32f3mm-K-Ckw@mail.gmail.com
b6e1157e7d made some changes to enforce that
JsonValueExpr.formatted_expr is always set and is the expression that
gives a JsonValueExpr its runtime value, but that was not really
apparent from the comments about formatted_expr and the code
manipulating it. This commit fixes that.
Per suggestion from Álvaro Herrera.
Discussion: https://postgr.es/m/20230718155313.3wqg6encgt32adqb%40alvherre.pgsql
Currently, the database name to connect to is initialized only when the
module is loaded with shared_preload_libraries, causing any call of
worker_spi_launch() to fail if the library is not loaded for a dynamic
bgworker launch. Rather than making the GUC defining the database to
connect to PGC_POSTMASTER, this commit switches worker_spi.database to
PGC_SIGHUP, loaded even if the module's library is loaded dynamically
for a worker.
We have been discussing the integration of more advanced tests in this
module, with and without shared_preload_libraries set, so this eases a
bit the work planned in this area.
No backpatch is done as, while this is a bug, it changes the definition
of worker_spi.database.
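With this change, a sequence like the following works even when the
library is only loaded dynamically (a sketch; "mydb" is an arbitrary
database name):
    -- postgresql.conf: worker_spi.database = 'mydb'
    SELECT pg_reload_conf();
    SELECT worker_spi_launch(1);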
Author: Masahiro Ikeda
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/d30d3ea7d21cb7c9e1e3cc47e301f1b6@oss.nttdata.com
If both the passed-in plan pointer and plansource->gplan are
NULL, CachedPlanIsSimplyValid would think that the plan pointer
is possibly-valid and try to dereference it. For the one extant
call site in plpgsql, this situation doesn't normally happen, which is
why we've not noticed. However, it appears to be possible
if the previous use of the cached plan failed, as per report from
Justin Pryzby. Add an extra check to prevent crashing.
Back-patch to v13 where this code was added.
Discussion: https://postgr.es/m/ZLlV+STFz1l/WhAQ@telsasoft.com
This adds the X509 attributes notBefore and notAfter to sslinfo
as well as pg_stat_ssl to allow verifying and identifying the
validity period of the current client certificate.
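A usage sketch; the function and column names below are assumptions, not
confirmed by this commit message:
    SELECT ssl_client_get_notbefore(), ssl_client_get_notafter();
    SELECT not_before, not_after FROM pg_stat_ssl;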
Author: Cary Huang <cary.huang@highgo.ca>
Discussion: https://postgr.es/m/182b8565486.10af1a86f158715.2387262617218380588@highgo.ca
Rather than specifying a validity of 10 000 days into the future
during test certificate generation, this hardcodes the notBefore
and notAfter attributes to known values. This will allow writing
tests on the validity of the certificates without knowing when a
specific certificate was regenerated.
This is done as a prerequisite for an upcoming patch which adds
notBefore and notAfter to pg_stat_ssl and sslinfo.
Discussion: https://postgr.es/m/EE288A58-947E-479A-9D99-C46C273D7A23@yesql.se
When the cluster failed the pg_controldata check for a clean shutdown,
we only reported that it did so, not why. The state of the cluster can
be important information when diagnosing a failed upgrade attempt, so
instead include it in the error message.
Discussion: https://postgr.es/m/E0D5EA16-A085-4753-8DDC-C7055048B439@yesql.se
This essentially removes the JsonbTypeCategory enum and
jsonb_categorize_type() and integrates any jsonb-specific logic that
was in jsonb_categorize_type() into json_categorize_type(), now
moved to jsonfuncs.c. The remaining JsonTypeCategory enum and
json_categorize_type() cover the needs of the callers in both json.c
and jsonb.c. json_categorize_type() has grown a new parameter named
is_jsonb for callers to engage the jsonb-specific behavior of
json_categorize_type().
One notable change in the now-exported API of json_categorize_type()
is that it now always returns *outfuncoid, even though a caller may
currently have no need for it.
This is in preparation of later commits to implement additional
SQL/JSON functions.
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
Based on how postgres.h does the Oid <-> Datum conversion, there are no
existing bugs, but let's be consistent. 17 spots have been noticed as
incorrectly passing down Oids rather than Datums. Aleksander got one,
Zhang two, and I the rest.
Author: Michael Paquier, Aleksander Alekseev, Zhang Mingli
Discussion: https://postgr.es/m/ZLUhqsqQN1MOaxdw@paquier.xyz
When pg_recvlogical needs to abort on a signal like SIGINT/SIGTERM, it
is expected to exit cleanly, as the code documents. However, the code
forgot to clean up the state of the connection before leaving. This
would cause the tool to emit errors like "unexpected termination of
replication stream", which are meant for a really unexpected termination
or a crash.
The code is refactored to apply the same termination operations for the
signal, end-LSN and keepalive cases, registering a "reason" for the
termination, with the message printed under --verbose adapted to that
reason.
This is arguably a bug, but it has been this way since the tool was
introduced, and signal termination can now be slower depending on the
change being decoded when the signal is received.
Reported-by: Andres Freund
Author: Bharath Rupireddy
Reviewed-by: Andres Freund, Kyotaro Horiguchi, Cary Huang, Michael
Paquier
Discussion: https://postgr.es/m/20221019213953.htdtzikf4f45ywil@awork3.anarazel.de
b5913f6120 added a parenthesized syntax for CLUSTER, but it
requires specifying a table name. This is unlike commands such as
VACUUM and ANALYZE, which do not require specifying a table in the
parenthesized syntax. This change resolves this inconsistency.
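For example, the following is now accepted, mirroring VACUUM (VERBOSE):
    CLUSTER (VERBOSE);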
This is preparatory work for a follow-up commit that will move the
unparenthesized syntax to the "Compatibility" section of the
CLUSTER documentation.
Reviewed-by: Melanie Plageman, Michael Paquier
Discussion: https://postgr.es/m/CAAKRu_bc5uHieG1976kGqJKxyWtyQt9yvktjsVX%2Bi7NOigDjOA%40mail.gmail.com
This change moves the unparenthesized syntax for CLUSTER to the end
of the ClusterStmt rules in preparation for a follow-up commit that
will move this syntax to the "Compatibility" section of the CLUSTER
documentation. The documentation for the CLUSTER syntaxes has also
been consolidated.
Suggested-by: Melanie Plageman
Discussion: https://postgr.es/m/CAAKRu_bc5uHieG1976kGqJKxyWtyQt9yvktjsVX%2Bi7NOigDjOA%40mail.gmail.com
With the addition of INHERIT and SET options for role grants,
the historical display of role memberships in \du/\dg is woefully
inadequate. Besides those options, there are pre-existing
shortcomings: you can't see the ADMIN option nor the grantor.
To fix this, remove the "Member of" column from \du/\dg altogether
(making that output usefully narrower), and invent a new meta-command
"\drg" that is specifically for displaying role memberships. It
shows one row for each role granted to the selected role(s), with
the grant options and grantor.
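A usage sketch in psql (the pattern argument is optional):
    \drg
    \drg alice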
We would not normally back-patch such a feature addition post
feature freeze, but in this case the change is mainly driven by
v16 changes in the server, so it seems appropriate to include it
in v16.
Pavel Luzanov, with bikeshedding and review from a lot of people,
but particularly David Johnston
Discussion: https://postgr.es/m/b9be2d0e-a9bc-0a30-492f-a4f68e4f7740@postgrespro.ru
By default, pg_archivecleanup does not remove backup history files.
These are just a few bytes, useful for debugging purposes, but keeping
them around can bloat an archive path with history files mixed with the
WAL segments if the path has a long history.
This patch adds a new option to control whether backup history files are
removed, depending on the oldest segment name to keep around.
While on it, the TAP tests are refactored so that they are now able to
handle lists of files. Each file has a flag to track whether it should
still exist or not, depending on the oldest segment defined with the
command run.
Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/d660ef741ce3d82f3b4283f1cafd576c@oss.nttdata.com
This commit refactors a bit the main loop of pg_archivecleanup that
handles the removal of old segments, reducing its level of indentation
by one. This will help an incoming patch that adds a new option
related to the segment filtering logic.
Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/d660ef741ce3d82f3b4283f1cafd576c@oss.nttdata.com