fill_seq_fork_with_data(), used to initialize a new sequence relation, only
locked the buffer after calling PageInit(), even though PageInit() modifies
page contents.
This is unlikely to cause real-world issues, as the relation is exclusively
locked at that point and the buffer is not yet marked dirty, so other
processes should not access the buffer.
This issue looks to have been present since the introduction of sequences in
e8647c45d6.
Given the low risk, it does not seem worth backpatching the fix.
Discussion: https://postgr.es/m/20230404185501.wdkmo3k7bedlx6qk@awork3.anarazel.de
Convert to BCP47 language tags before storing in the catalog, except
during binary upgrade or when the locale comes from an existing
collation or template database.
The resulting language tags can vary slightly between ICU
versions. For instance, "@colBackwards=yes" is converted to
"und-u-kb-true" in older versions of ICU, and to the simpler (but
equivalent) "und-u-kb" in newer versions.
The process of canonicalizing to a language tag also understands more
input locale string formats than ucol_open(). For instance,
"fr_CA.UTF-8" is misinterpreted by ucol_open() and the region is
ignored; effectively treating it the same as the locale "fr" and
opening the wrong collator. Canonicalization properly interprets the
language and region, resulting in the language tag "fr-CA", which can
then be understood by ucol_open().
This commit fixes a problem in prior versions due to ucol_open()
misinterpreting locale strings as described above. For instance,
creating an ICU collation with locale "fr_CA.UTF-8" would store that
string directly in the catalog, which would later be passed to (and
misinterpreted by) ucol_open(). After this commit, the locale string
will be canonicalized to language tag "fr-CA" in the catalog, which
will be properly understood by ucol_open(). Because this fix affects
the resulting collator, we cannot change the locale string stored in
the catalog for existing databases or collations; otherwise we'd risk
corrupting indexes. Therefore, only canonicalize locales for
newly-created (not upgraded) collations/databases. For similar
reasons, do not backport.
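As a hedged illustration (collation name hypothetical):
    CREATE COLLATION french_ca (provider = icu, locale = 'fr_CA.UTF-8');
    -- the catalog now stores the canonical language tag 'fr-CA',
    -- not the raw string 'fr_CA.UTF-8'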
Discussion: https://postgr.es/m/8c7af6820aed94dc7bc259d2aa7f9663518e6137.camel@j-davis.com
Reviewed-by: Peter Eisentraut
Invent "GET DIAGNOSTICS oid_variable = PG_ROUTINE_OID".
This is useful for avoiding the maintenance nuisances that come
with embedding a function's name in its body, as one might do
for logging purposes, for example. Typically users would cast the
result to regproc or regprocedure to get something human-readable,
but we won't pre-judge whether that's appropriate.
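A minimal PL/pgSQL sketch (function name hypothetical):
    CREATE FUNCTION report_self() RETURNS void AS $$
    DECLARE
        fn oid;
    BEGIN
        GET DIAGNOSTICS fn = PG_ROUTINE_OID;
        RAISE NOTICE 'in function %', fn::regprocedure;
    END;
    $$ LANGUAGE plpgsql;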
Pavel Stehule, reviewed by Kirk Wolak and myself
Discussion: https://postgr.es/m/CAFj8pRA4zMd5pY-B89Gm64bDLRt-L+akOd34aD1j4PEstHHSVQ@mail.gmail.com
This option is normally false, but can be set to true to obtain
the legacy behavior where the subscription runs with the permissions
of the subscription owner rather than the permissions of the
table owner. The advantages of this mode are (1) it doesn't require
that the subscription owner have permission to SET ROLE to each
table owner and (2) since no role switching occurs, the
SECURITY_RESTRICTED_OPERATION restrictions do not apply.
On the downside, it allows any table owner to easily usurp
the privileges of the subscription owner - basically, to take
over their account. Because that's generally quite undesirable,
we don't make this mode the default, but we do make it available,
just in case the new behavior causes too many problems for someone.
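For example, assuming the option is spelled run_as_owner (subscription
name hypothetical):
    ALTER SUBSCRIPTION mysub SET (run_as_owner = true);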
Discussion: http://postgr.es/m/CA+TgmoZ-WEeG6Z14AfH7KhmpX2eFh+tZ0z+vf0=eMDdbda269g@mail.gmail.com
Up until now, logical replication actions have been performed as the
subscription owner, who will generally be a superuser. Commit
cec57b1a0f documented hazards
associated with that situation, namely, that any user who owns a
table on the subscriber side could assume the privileges of the
subscription owner by attaching a trigger, expression index, or
some other kind of executable code to it. As a remedy, it suggested
not creating configurations where users who are not fully trusted
own tables on the subscriber.
Although that will work, it basically precludes using logical
replication in the way that people typically want to use it,
namely, to replicate a database from one node to another
without necessarily having any restrictions on which database
users can own tables. So, instead, change logical replication to
execute INSERT, UPDATE, DELETE, and TRUNCATE operations as the
table owner when they are replicated.
Since this involves switching the active user frequently within
a session that is authenticated as the subscription user, also
impose SECURITY_RESTRICTED_OPERATION restrictions on logical
replication code. As an exception, if the table owner can SET
ROLE to the subscription owner, these restrictions have no
security value, so don't impose them in that case.
Subscription owners are now required to have the ability to
SET ROLE to every role that owns a table that the subscription
is replicating. If they don't, replication will fail. Superusers,
who normally own subscriptions, satisfy this property by default.
Non-superusers who own subscriptions will need to be granted the
roles that own relevant tables.
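A sketch with hypothetical role names:
    -- allow the subscription owner to SET ROLE to the table owner
    GRANT table_owner TO subscription_owner;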
Patch by me, reviewed (but not necessarily in its entirety) by
Jelte Fennema, Jeff Davis, and Noah Misch.
Discussion: http://postgr.es/m/CA+TgmoaSCkg9ww9oppPqqs+9RVqCexYCE6Aq=UsYPfnOoDeFkw@mail.gmail.com
- At the last minute and for no particularly good reason, I changed the
WITHOUT token to be marked especially for lookahead, from the one in
WITHOUT TIME to the one in WITHOUT UNIQUE. Study of upcoming patches
(where a new WITHOUT ARRAY WRAPPER clause is added) showed me that the
former was better, so put it back the way the original patch had it.
- update exprTypmod() for JsonConstructorExpr to return the typmod of
the RETURNING clause, as a comment there suggested. Perhaps it's
possible for this to make a difference with datetime types, but I
didn't try to build a test case.
- The nodeFuncs.c support code for new nodes was calling walker()
directly instead of the WALK() macro as introduced by commit 1c27d16e6e.
Modernize that. Also add exprLocation() support for a couple of nodes
that missed it. Lastly, reorder the code more sensibly.
The WITHOUT_LA -> WITHOUT change means that stored rules containing
either WITHOUT TIME ZONE or WITHOUT UNIQUE KEYS would change
representation. Therefore, bump catversion.
Discussion: https://postgr.es/m/20230329181708.e64g2tpy7jyufqkr@alvherre.pgsql
The trace point was using the relfileno / fork / block for the to-be-read-in
buffer. Some upcoming work would make that more expensive to provide. We still
have buffer-flush-start/done, which can serve most tracing needs that
buffer-write-dirty could serve.
Discussion: https://postgr.es/m/f5164e7a-eef6-8972-75a3-8ac622ed0c6e@iki.fi
Commit 61b313e4 made VACUUM pass down a heaprel to index AM bulkdelete
and vacuumcleanup routines. Although this was primarily intended as
preparation for logical decoding on standbys, it also made it easy to
correct an old deficiency in how we determine how to clean up SP-GiST
redirect and placeholder tuples.
Pass the heaprel to GlobalVisTestFor() during cleanup of redirect and
placeholder tuples, rather than pessimistically passing NULL.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/02392033-f030-a3c8-c7d0-5c27eb529fec@gmail.com
Commit 61b313e4 made nbtree consistently pass down a heaprel to low
level routines like _bt_getbuf(). Although this was primarily intended
as preparation for logical decoding on standbys, it also made it easy to
correct an old deficiency in how nbtree VACUUM determines whether or not
it's now safe to recycle deleted pages.
Pass the heaprel to GlobalVisTestFor() in nbtree routines that deal with
recycle safety. nbtree now makes less pessimistic assumptions about
recycle safety within non-catalog relations. This enhancement
complements the recycling enhancement added by commit 9dd963ae25.
nbtree remains just as pessimistic as ever when it comes to recycle
safety within indexes on catalog relations. There is no fundamental
reason why we need to treat catalog relations differently, though. The
behavioral inconsistency is a consequence of the way that nbtree uses
nextXID values to implement what Lanin and Shasha call "the drain
technique". Note in particular that it has nothing to do with whether
or not index tuples might still be required for an older MVCC snapshot.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkaiDxCje0yPuH=3Uh2p1V_2pFGY==xfbZoZu7Ax_NB8g@mail.gmail.com
In USE_VALGRIND builds, add code to print the text of the current query
to the valgrind log whenever the valgrind error count has increased
during the query. Valgrind will already have printed its report,
if the error is distinct from ones already seen, so that this works
out fairly well as a way of annotating the log.
Onur Tirtir and Tom Lane
Discussion: https://postgr.es/m/AM9PR83MB0498531E804DC8DF8CFF0E8FE9D09@AM9PR83MB0498.EURPRD83.prod.outlook.com
The new name better reflects what the field is - the size of the buffers[]
array. ring_size sounded more like it is in units of bytes.
An upcoming commit allows BufferAccessStrategies of custom sizes, so it
seems worthwhile to improve this beforehand.
Author: Melanie Plageman
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAAKRu_YefVbhg4rAxU2frYFdTap61UftH-kUNQZBvAs%2BYK81Zg%40mail.gmail.com
Traditionally, vacuum always makes use of a buffer access strategy 32
buffers in size. This means that running vacuums tend not to cause too
many shared buffers to become dirty; however, it can cause vacuums to
run much more slowly than they otherwise could, as WAL flushes will occur
more frequently due to having to flush WAL out to the LSN of the dirty
page before that page can be written to disk.
When we are performing failsafe VACUUMs (as added in 1e55e7d17), we really
want to make the vacuum work go as quickly as possible, so here we disable
the buffer access strategy when entering failsafe mode while vacuuming a
relation.
Per idea and analysis from Andres Freund.
In passing, also include some changes I had intended for 32fbe0239.
Author: Melanie Plageman
Reviewed-by: Justin Pryzby, David Rowley
Discussion: https://postgr.es/m/20230111182720.ejifsclfwymw2reb%40awork3.anarazel.de
VACUUM FULL and VACUUM ONLY_DATABASE_STATS will not use the vacuum
strategy ring created in vacuum(), so don't waste effort making it in
those cases.
There are other conceivable cases where the buffer strategy also won't be
used, but those are probably less common and not worth troubling over too
much. For example VACUUM (PROCESS_MAIN false, PROCESS_TOAST false).
There are other cases too, but many of these are only discovered once
inside vacuum_rel().
Author: Melanie Plageman
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAAKRu_ZLRuzkM3zKogiZAz2hUony37yLY4aaLb8fPf8fgqs5Og@mail.gmail.com
Using global variables because we don't want to pass these values around
as parameters does not really seem like a great idea, so let's remove
these two global variables and adjust a few functions to accept these
values as parameters instead.
This is part of a wider patch which intends to allow the size of the
buffer access strategy that vacuum uses to be adjusted.
Author: Melanie Plageman
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/CAAKRu_b1q_07uquUtAvLqTM%3DW9nzee7QbtzHwA4XdUo7KX_Cnw%40mail.gmail.com
This commit only implements one prerequisite part for allowing logical
decoding. The commit message contains an explanation of the overall design,
which later commits will refer back to.
Overall design:
1. We want to enable logical decoding on standbys, but replay of WAL
from the primary might remove data that is needed by logical decoding,
causing error(s) on the standby. To prevent those errors, a new replication
conflict scenario needs to be addressed (as much as hot standby does).
2. Our chosen strategy for dealing with this type of replication slot
is to invalidate logical slots for which needed data has been removed.
3. To do this we need the latestRemovedXid for each change, just as we
do for physical replication conflicts, but we also need to know
whether any particular change was to data that logical replication
might access. That way, during WAL replay, we know when there is a risk of
conflict and, if so, if there is a conflict.
4. We can't rely on the standby's relcache entries for this purpose in
any way, because the startup process can't access catalog contents.
5. Therefore every WAL record that potentially removes data from the
index or heap must carry a flag indicating whether or not it is one
that might be accessed during logical decoding.
Why do we need this for logical decoding on standby?
First, let's forget about logical decoding on standby and recall that
on a primary database, any catalog rows that may be needed by a logical
decoding replication slot are not removed.
This is done thanks to the catalog_xmin associated with the logical
replication slot.
But, with logical decoding on standby, in the following cases:
- hot_standby_feedback is off
- hot_standby_feedback is on but there is no physical slot between
the primary and the standby. Then, hot_standby_feedback will work,
but only while the connection is alive (for example a node restart
would break it)
Then, the primary may delete system catalog rows that could be needed
by the logical decoding on the standby (as it does not know about the
catalog_xmin on the standby).
So, it's mandatory to identify those rows and invalidate the slots
that may need them, if any. Identifying those rows is the purpose of
this commit.
Implementation:
When a WAL replay on standby indicates that a catalog table tuple is
to be deleted by an xid that is greater than a logical slot's
catalog_xmin, then that means the slot's catalog_xmin conflicts with
the xid, and we need to handle the conflict. While subsequent commits
will do the actual conflict handling, this commit adds a new field
isCatalogRel in such WAL records (and a new bit set in the
xl_heap_visible flags field), that is true for catalog tables, so as to
arrange for conflict handling.
The affected WAL records are the ones that already contain the
snapshotConflictHorizon field, namely:
- gistxlogDelete
- gistxlogPageReuse
- xl_hash_vacuum_one_page
- xl_heap_prune
- xl_heap_freeze_page
- xl_heap_visible
- xl_btree_reuse_page
- xl_btree_delete
- spgxlogVacuumRedirect
Due to this new field being added, xl_hash_vacuum_one_page and
gistxlogDelete do now contain the offsets to be deleted as a
FLEXIBLE_ARRAY_MEMBER. This is needed to ensure correct alignment.
It's not needed for the other structs where isCatalogRel has
been added.
This commit just introduces the WAL format changes mentioned above. Handling
the actual conflicts will follow in future commits.
Bumps XLOG_PAGE_MAGIC, as several WAL records are changed.
Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Author: Andres Freund <andres@anarazel.de> (in an older version)
Author: Amit Khandekar <amitdkhan.pg@gmail.com> (in an older version)
Reviewed-by: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Oversight in f2698ea02c, which introduced
the variable. This lowers some 1000s timeouts to the configurable
default of 180s, due to a lack of evidence for needing the longer
timeout. The alternative was 6*PG_TEST_TIMEOUT_DEFAULT, which we can
adopt if the need arises. Given the lack of observed trouble with these
timeouts, no back-patch.
This is done in preparation for logical decoding on standby, which needs
visibility-affecting WAL records to include whether they are about a (user)
catalog table; that is only known for the table, not the indexes.
It's also nice to be able to pass the heap relation to GlobalVisTestFor() in
vacuumRedirectAndPlaceholder().
Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/21b700c3-eecf-2e05-a699-f8c78dd31ec7@gmail.com
If visibilitymap_set() is called with flags containing a higher bit than
VISIBILITYMAP_ALL_FROZEN, the state of neighboring pages is affected. While
there was an assertion that *some* valid bits were set, it did not check
that *only* valid bits were. Change that.
Discussion: https://postgr.es/m/20230331043300.gux3s5wzrapqi4oe@awork3.anarazel.de
Calling PageInit() while holding the extension lock is unnecessary after 0d1fe9f74e
started to use RBM_ZERO_AND_LOCK - nobody can look at the new page before we
release the page lock. PageInit() zeroes the page, which isn't that cheap, so
deferring it until after the extension lock is released seems like a good idea.
Doing visibilitymap_pin() while holding the extension lock, introduced in
7db0cd2145, looks like an accident. Due to the restrictions on
HEAP_INSERT_FROZEN it's unlikely to be a performance issue, but it still seems
better to move it out. We also are doing the visibilitymap_pin() while
holding the buffer lock, which will be fixed in a separate commit.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: http://postgr.es/m/419312fd-9255-078c-c3e3-f0525f911d7f@iki.fi
After 0da243fed0 got committed, it was reported that in some cases the
compression ratio is rather poor - particularly for custom format with
narrow tables - due to writing the LZ4 header/footer for each row.
This commit switches to LZ4F (LZ4 frame format), eliminating most of the
overhead and greatly improving the compression ratio. This makes the
compressed size about the same for plain and custom formats (just like
for gzip, for example).
LZ4F is now used by both compression APIs, which allowed refactoring and
reusing more of the code. For consistency this also renames the LZ4File
struct to LZ4State, and a number of functions are now prefixed with
LZ4Stream_ (instead of LZ4File_).
Patch by Georgios Kokolatos, based on report and initial patch by Justin
Pryzby. Review and minor cleanups by me.
Author: Georgios Kokolatos, Justin Pryzby
Reported-by: Justin Pryzby
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20230227044910.GO1653%40telsasoft.com
This patch introduces the SQL standard IS JSON predicate. It operates
on text and bytea values representing JSON, as well as on the json and
jsonb types. Each test has IS and IS NOT variants and supports a WITH
UNIQUE KEYS flag. The tests are:
IS JSON [VALUE]
IS JSON ARRAY
IS JSON OBJECT
IS JSON SCALAR
These should be self-explanatory.
The WITH UNIQUE KEYS flag makes these return false when duplicate keys
exist in any object within the value, not necessarily directly contained
in the outermost object.
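For example (values chosen for illustration):
    SELECT '{"a": 1}' IS JSON OBJECT;                           -- true
    SELECT '[1, 2]' IS JSON ARRAY;                              -- true
    SELECT '123' IS JSON SCALAR;                                -- true
    SELECT '{"a": {"b": 1, "b": 2}}' IS JSON WITH UNIQUE KEYS;  -- false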
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/CAF4Au4w2x-5LTnN_bxky-mq4=WOqsGsxSpENCzHRAzSnEd8+WQ@mail.gmail.com
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
I realized that the third overflow case I posited in commit b0e9e4d76
actually should be handled in a different way: rather than tolerating
the idea that the quotient could round to 1, we should clamp so that
the output cannot be more than "count" when we know that the operand is
less than bound2. That being the case, we don't need an overflow-aware
increment in that code path, which leads me to revert the movement of
the pg_add_s32_overflow() call. (The diff in width_bucket_float8
might be easier to read by comparing against b0e9e4d76^.)
What's more, width_bucket_numeric also has this problem of the quotient
potentially rounding to 1, so add a clamp there too.
As before, I'm not quite convinced that a back-patch is warranted.
Discussion: https://postgr.es/m/391415.1680268470@sss.pgh.pa.us
Up through v11 it was sensible to use the "oid" system column as
a foreign key column, but since that was removed there's no visible
usefulness in making any of the remaining system columns a foreign
key. Moreover, since the TupleTableSlot rewrites in v12, such cases
actively fail because of implicit assumptions that only user columns
appear in foreign keys. The lack of complaints about that seems
like good evidence that no one is trying to do it. Hence, rather
than trying to repair those assumptions (of which there are at least
two, maybe more), let's just forbid the case up front.
Per this patch, a system column in either the referenced or
referencing side of a foreign key will draw this error; however,
putting one in the referenced side would have failed later anyway,
since we don't allow unique indexes to be made on system columns.
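A sketch of the now-forbidden cases (table names hypothetical):
    CREATE TABLE pk (a int UNIQUE);
    -- system column on the referencing side: now rejected up front
    CREATE TABLE fk1 (b int, FOREIGN KEY (tableoid) REFERENCES pk (a));
    -- system column on the referenced side: likewise rejected
    CREATE TABLE fk2 (c oid REFERENCES pk (tableoid));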
Per bug #17877 from Alexander Lakhin. Back-patch to v12; the
case still appears to work in v11, so we shouldn't break it there.
Discussion: https://postgr.es/m/17877-4bcc658e33df6de1@postgresql.org
The totalrows/totaldeadrows outputs were left uninitialized in cases
where we find no analyzable child tables of a partitioned table. This
could lead to setting the partitioned table's pg_class.reltuples value
to garbage. It's not clear that that would have any very bad effects
in practice, but fix it anyway because it's making valgrind unhappy.
Reported and diagnosed by Alexander Lakhin (bug #17880).
Discussion: https://postgr.es/m/17880-9282037c923d856e@postgresql.org
This converts pg_regress output format to emit TAP compliant output
while keeping it as human readable as possible for use without TAP
test harnesses. As verbose harness related information isn't really
supported by TAP this also reduces the verbosity of pg_regress runs
which makes scrolling through log output in buildfarm/CI runs a bit
easier as well.
As the meson TAP parser consumes whitespace, the leading indentation
for differentiating parallel tests from sequential tests has been
changed to a single character prefix.
This patch has been around for an extended period of time, reviewers
listed below may have been involved in reviewing a version quite
different from the version in this commit. The original idea for
this patch was a hacking session with Jinbao Chen.
TAP format testing is also enabled in meson as of this commit.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nikolay Shaplov <dhyan@nataraj.su>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/BD4B107D-7E53-4794-ACBA-275BEB4327C9@yesql.se
Discussion: https://postgr.es/m/20220221164736.rq3ornzjdkmwk2wo@alap3.anarazel.de
I forgot to do so in the referenced commit. While the consequences of omitting
the version change are likely to be harmless (besides discarding stats, as a
PGSTAT_FILE_FORMAT_ID bump also does), it still seems worth doing.
Among other things, this should make it easier to calculate a useful cache hit
ratio by excluding buffer reads via buffer access strategies. As buffer access
strategies reuse buffers (and thus evict the prior buffer contents), it is
normal to see reads on repeated scans of the same data.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAAKRu_beMa9Hzih40%3DXPYqhDVz6tsgUGTrhZXRo%3Dunp%2Bszb%3DUA%40mail.gmail.com
When calling generateSerialExtraStmts(), we would pass in the
constraint->options. In some cases, generateSerialExtraStmts() would
modify the referenced List to remove elements from it, but doing so is
invalid without assigning the list back to all variables that point to it.
In the particular reported problem case, the List became empty, in which
case it became NIL, but the passed-in constraint->options didn't get to
find out about that and was left pointing to freed memory.
To fix this, just perform a list_copy() inside generateSerialExtraStmts().
We could do a list_copy() just before we perform the delete from the
list; however, that seems less robust. Let's make sure the generated
CreateSeqStmt gets a completely different copy of the list, to be safe.
Bug: #17879
Reported-by: Fei Changhong
Diagnosed-by: Fei Changhong
Discussion: https://postgr.es/m/17879-b7dfb5debee58ff5@postgresql.org
Backpatch-through: 11, all supported versions
Full and right outer joins were not supported in the initial
implementation of Parallel Hash Join because of deadlock hazards (see
discussion). Therefore FULL JOIN inhibited parallelism, as the other
join strategies can't do that in parallel either.
Add a new PHJ phase PHJ_BATCH_SCAN that scans for unmatched tuples on
the inner side of one batch's hash table. For now, sidestep the
deadlock problem by terminating parallelism there. The last process to
arrive at that phase emits the unmatched tuples, while others detach and
are free to go and work on other batches, if there are any, but
otherwise they finish the join early.
That unfairness is considered acceptable for now, because it's better
than no parallelism at all. The build and probe phases are run in
parallel, and the new scan-for-unmatched phase, while serial, is usually
applied to the smaller of the two relations and is either limited by
some multiple of work_mem, or it's too big and is partitioned into
batches and then the situation is improved by batch-level parallelism.
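For example, a query of this shape is no longer forced to run serially
(table names hypothetical; an actual parallel plan still depends on
table sizes and settings):
    EXPLAIN (COSTS OFF)
    SELECT * FROM big1 FULL JOIN big2 USING (id);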
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BA6ftXPz4oe92%2Bx8Er%2BxpGZqto70-Q_ERwRaSyA%3DafNg%40mail.gmail.com
In instr_time.h it is stated that:
* When summing multiple measurements, it's recommended to leave the
* running sum in instr_time form (ie, use INSTR_TIME_ADD or
* INSTR_TIME_ACCUM_DIFF) and convert to a result format only at the end.
The reason for that is that converting to microseconds is not cheap, and can
lose precision. Therefore this commit changes 'PendingWalStats' to use
'instr_time' instead of 'PgStat_Counter' while accumulating 'wal_write_time'
and 'wal_sync_time'.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/1feedb83-7aa9-cb4b-5086-598349d3f555@gmail.com
There's no need for callers to pass aggregate names so that the function
can resolve them to OIDs, when callers can just pass aggregate OIDs
directly to begin with.
We were transferring partPruneInfos from PlannerInfo into PlannerGlobal
incorrectly, essentially relying on all of them being transferred, and
adjusting their list indexes based on that. But apparently it's
possible that some of them are skipped, so that strategy leads to a
corrupted execution tree. Instead, adjust each Append/MergeAppend's
partpruneinfo index as we copy from one list to the other, which seems
safer anyway. This requires adjusting the RT offset of the RTE
referenced in each partPruneInfo ahead of actually adjusting the RTE
itself, which seems a bit too ad-hoc.
This problem was introduced by commit ec38694894. However, it may be
that we no longer require the change introduced there, so perhaps we
should revert both the present commit and that one.
Problem noticed by sqlsmith.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqG6tbc2oadsbyyy24b2AL295XHQgyLRWghmA7u_SL1K8A@mail.gmail.com
Quoting Melanie:
> Since if buffer is INT_MAX, then the -(buffer + 1) version invokes
> undefined behavior while the -buffer - 1 version doesn't.
All other places were already using the correct version. I (Andres) copied
the code into more places in a patch. Melanie caught it in review, but to
prevent more people from copying the bad code, fix it, even if it is a
theoretical issue.
We really ought to wrap these accesses in a helper function...
As this is a theoretical issue, don't backpatch.
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAAKRu_aW2SX_LWtwHgfnqYpBrunMLfE9PD6-ioPpkh92XH0qpg@mail.gmail.com
This role can be granted to non-superusers to allow them to issue
CREATE SUBSCRIPTION. The non-superuser must additionally have CREATE
permissions on the database in which the subscription is to be
created.
Most forms of ALTER SUBSCRIPTION, including ALTER SUBSCRIPTION .. SKIP,
now require only that the role performing the operation own the
subscription, or inherit the privileges of the owner. However, to
use ALTER SUBSCRIPTION ... RENAME or ALTER SUBSCRIPTION ... OWNER TO,
you also need CREATE permission on the database. This is similar to
what we do for schemas. To change the owner of a schema, you must also
have permission to SET ROLE to the new owner, similar to what we do
for other object types.
Non-superusers are required to specify a password for authentication
and the remote side must use the password, similar to what is required
for postgres_fdw and dblink. A superuser who wants a non-superuser to
own a subscription that does not rely on password authentication may
set the new password_required=false property on that subscription. A
non-superuser may not set password_required=false and may not modify a
subscription that already has password_required=false.
This new password_required subscription property works much like the
eponymous postgres_fdw property. In both cases, the actual semantics
are that a password is not required if either (1) the property is set
to false or (2) the relevant user is the superuser.
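A usage sketch (role, database, and subscription names hypothetical):
    GRANT pg_create_subscription TO repl_admin;
    GRANT CREATE ON DATABASE mydb TO repl_admin;
    -- repl_admin may now run CREATE SUBSCRIPTION in mydb
    ALTER SUBSCRIPTION mysub SET (password_required = false);  -- superusers only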
Patch by me, reviewed by Andres Freund, Jeff Davis, Mark Dilger,
and Stephen Frost (but some of those people did not fully endorse
all of the decisions that the patch makes).
Discussion: http://postgr.es/m/CA+TgmoaDH=0Xj7OBiQnsHTKcF2c4L+=gzPBUKSJLh8zed2_+Dg@mail.gmail.com
The original coding of this function paid little attention to the
possibility of overflow. There were actually three different hazards:
1. The range from bound1 to bound2 could exceed DBL_MAX, which on
IEEE-compliant machines produces +Infinity in the subtraction.
At best we'd lose all precision in the result, and at worst
produce NaN due to dividing Inf/Inf. The range can't exceed
twice DBL_MAX though, so we can fix this case by scaling all the
inputs by 0.5.
2. We computed count * (operand - bound1), which is also at risk of
float overflow, before dividing. Safer is to do the division first,
producing a quotient that should be in [0,1), and even after allowing
for roundoff error can't be outside [0,1]; then multiplying by count
can't produce a result overflowing an int. (width_bucket_numeric does
the multiplication first on the grounds that that improves accuracy of
its result, but I don't think that a similar argument can be made in
float arithmetic.)
3. If the division result does round to 1, and count is INT_MAX,
the final addition of 1 would overflow an int. We took care
of that in the operand >= bound2 case but did not consider that
it could be possible in the main path. Fix that by moving the
overflow-aware addition of 1 so it is done that way in all cases.
The fix for point 2 creates a possibility that values very close to
a bucket boundary will be rounded differently than they were before.
I'm not troubled by that for HEAD, but it is an argument against
putting this into the stable branches. Given that the cases being
fixed here are fairly extreme and unlikely to be hit in normal use,
it seems best not to back-patch.
Mats Kindahl and Tom Lane
Discussion: https://postgr.es/m/17876-61f280d1601f978d@postgresql.org
The fallback seed for when pg_strong_random cannot generate a high
quality seed mixes in the address of the conn object, but the cast
failed to take the word size into consideration. Fix by casting to
a uintptr_t instead. The seed calculation was added in 7f5b19817e.
The code as it stood generated the following warning on mamba and
lapwing in the buildfarm:
fe-connect.c: In function 'libpq_prng_init':
fe-connect.c:1048:11: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
 1048 |         rseed = ((uint64) conn) ^
      |                 ^
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/TYAPR01MB58665250EDCD551CCA9AD117F58E9@TYAPR01MB5866.jpnprd01.prod.outlook.com
The tts_flag is named TTS_FLAG_SHOULDFREE, so use that instead of
TTS_SHOULDFREE, which is the name of the macro that checks for that flag.
Additionally, 4da597edf got rid of the TupleTableSlot.tts_tuple field but
forgot to update a comment which referenced that field. Fix that.
Reported-by: Zhen Mingyang <zhenmingyang@yeah.net>
Reported-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/1a96696c.9d3.187193989c3.Coremail.zhenmingyang@yeah.net
This adds support for load balancing connections with libpq using a
connection parameter: load_balance_hosts=<string>. When setting the
param to random, hosts and addresses will be connected to in random
order. This then results in load balancing across these addresses and
hosts when multiple clients or frequent connection setups are used.
The randomization employed performs two levels of shuffling:
1. The given hosts are randomly shuffled, before resolving them
one-by-one.
2. Once a host's addresses get resolved, the returned addresses
are shuffled, before trying to connect to them one-by-one.
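For example, a hypothetical multi-host connection string:
    psql "host=node1,node2,node3 dbname=app load_balance_hosts=random"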
Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by libpq, such that future improvements can alter, for
example, the order of entries.
As a nice side effect of this refactor the mechanism for iteration
over addresses in PQconnectPoll is now identical to its iteration
over hosts.
Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
gistBuildCallback tried to fetch the size of an index tuple that
might have already been freed by gistProcessEmptyingQueue.
While this seems to usually be harmless in production builds,
in principle it could result in a SIGSEGV, or more likely a bogus
value for indtuplesSize leading to poor page-split decisions later
in the build.
The memory management here is confusing and could stand to be
refactored, but for the moment it seems to be enough to fetch
the tuple size sooner. AFAICT the indtuples[Size] totals aren't
used in between these places; even if they were, the updated
values shouldn't be any worse to use. So just move the
incrementing of the totals up.
It's not very clear why our valgrind-using buildfarm animals
haven't noticed this problem, because the relevant code path
does seem to be exercised according to the code coverage report.
I think the reason that we didn't fix this bug after the first
report is that I'd wanted to try to understand that better.
However, now that it's been re-discovered let's just be pragmatic
and fix it already.
Original report by Alexander Lakhin (bug #16329),
later rediscovered by Egor Chindyaskin (bug #17874).
Patch by Alexander Lakhin (commentary by Pavel Borisov and me).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/16329-7a6aa9b6fa1118a1@postgresql.org
Discussion: https://postgr.es/m/17874-63ca6c7ce42d2103@postgresql.org
This function has been a no-op for over a decade. Even if bufmgr
regains a need to be called during commit, it seems unlikely that
the most appropriate call points would be precisely here, so it's not
doing us much good as a placeholder either. Now, removing it probably
doesn't save any noticeable number of cycles --- but the main call is
inside the commit critical section, and the less work done there the
better.
Matthias van de Meent
Discussion: https://postgr.es/m/CAEze2Wi1=tLKbxZnXzcD+8fYKyKqBtivVakLQC_mYBsP4Y8qVA@mail.gmail.com
This commit introduces the SQL/JSON standard-conforming constructors for
JSON types:
JSON_ARRAY()
JSON_ARRAYAGG()
JSON_OBJECT()
JSON_OBJECTAGG()
Most of the functionality was already present in PostgreSQL-specific
functions, but these include some new functionality such as the ability
to skip or include NULL values, and to allow duplicate keys or throw
error when they are found, as well as the standard specified syntax to
specify output type and format.
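Some illustrative examples:
    SELECT JSON_OBJECT('a': 1, 'b': NULL ABSENT ON NULL);  -- omits key "b"
    SELECT JSON_ARRAY(1, NULL, 2 NULL ON NULL);            -- keeps the null element
    SELECT JSON_OBJECTAGG(k: v RETURNING jsonb)
    FROM (VALUES ('x', 1), ('y', 2)) AS t(k, v);           -- {"x": 1, "y": 2}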
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/CAF4Au4w2x-5LTnN_bxky-mq4=WOqsGsxSpENCzHRAzSnEd8+WQ@mail.gmail.com
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
The standard collations "ucs_basic" and "unicode" were defined in
initdb, even though pg_collation.dat seems like the correct place for
them. It seems this was just forgotten during various reorganizations
of initdb and pg_collation.dat/.h over time.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/08b58ecd-0d50-9395-ed51-dc8294e3fd2b%40enterprisedb.com
In ancient times, these belonged to arguments or fields that were
actually of type long, but now they are not anymore, so this "L"
decoration is just confusing. (Some other 0L and other "L" constants
remain, where they are actually associated with a long type.)
When there are multiple publications for a subscription and one of those
publishes via the parent table by using publish_via_partition_root and the
other one directly publishes the child table, we end up copying the same
data twice during initial synchronization. The reason for this was that we
get both the parent and child tables from the publisher and try to copy
the data for both of them.
This patch extends the function pg_get_publication_tables() to take a
publication list as its input parameter. This allows us to exclude a
partition table whose ancestor is published by the same publication list.
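For example (publication names hypothetical):
    -- lists each table once, filtering out partitions whose ancestor is
    -- already published via publish_via_partition_root by the same list
    SELECT * FROM pg_get_publication_tables('pub_root', 'pub_leaf');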
This problem does exist in back-branches, but we have decided to fix it
there in a separate commit if required. The fix for back-branches requires quite
complicated changes to fetch the required table information from the
publisher as we can't update the function pg_get_publication_tables() in
back-branches. We are not sure whether we want to deviate and complicate
the code in back-branches for this problem as there are no field reports
yet.
Author: Wang wei
Reviewed-by: Peter Smith, Jacob Champion, Kuroda Hayato, Vignesh C, Osumi Takamichi, Amit Kapila
Discussion: https://postgr.es/m/OS0PR01MB57167F45D481F78CDC5986F794B99@OS0PR01MB5716.jpnprd01.prod.outlook.com
The pg_dump Compressor API has three basic callbacks - Allocate, Write
and End. The gzip implementation (since e9960732a) wrongly assumed the
Write function would always be called, and deferred the initialization
of the internal compression system until the first such call. But when
there's no data to compress (e.g. for an empty LO), this would result in
not finalizing the compression state (because it was never actually
initialized), producing an invalid dump.
Fixed by initializing the internal compression system in the Allocate
call whenever the caller provides the Write callback. For decompression the
state is not needed, so we leave the private_data member unpopulated.
Introduces a pg_dump TAP test compressing an empty large object.
This also rearranges the functions to their original order, to make
diffs against older code simpler to understand. Finally, replace an
unreachable pg_fatal() with a simple assert check.
Reported-by: Justin Pryzby
Author: Justin Pryzby, Georgios Kokolatos
Reviewed-by: Georgios Kokolatos, Tomas Vondra
Discussion: https://postgr.es/m/20230228235834.GC30529%40telsasoft.com
For ICU collations, ensure that the locale's language exists in ICU,
and that the locale can be opened.
Basic validation helps avoid minor mistakes and misspellings, which
often fall back to the root locale instead of the intended
locale. It's even more important to avoid such mistakes in ICU
versions 54 and earlier, where the same (misspelled) locale string
could fall back to different locales depending on the environment.
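For example, a sketch with a deliberately misspelled (hypothetical) language:
    -- previously fell back silently toward the root locale; now errors
    CREATE COLLATION misspelled (provider = icu, locale = 'xq-US');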
Discussion: https://postgr.es/m/11b1eeb7e7667fdd4178497aeb796c48d26e69b9.camel@j-davis.com
Discussion: https://postgr.es/m/df2efad0cae7c65180df8e5ebb709e5eb4f2a82b.camel@j-davis.com
Reviewed-by: Peter Eisentraut
MERGE planning could fail with "variable not found in subplan target
list" if the target table is partitioned and all its partitions are
excluded at plan time, or in the case where it has no partitions but
used to have some. This happened because distribute_row_identity_vars
thought it didn't need to make the target table's reltarget list
fully valid; but if we generate a join plan then that is required
because the dummy Result node's tlist will be made from the reltarget.
The same logic appears in distribute_row_identity_vars in v14,
but AFAICS the problem is unreachable in that branch for lack of
MERGE. In other updating statements, the target table is always
inner-joined to any other tables, so if the target is known dummy
then the whole plan reduces to dummy, so no join nodes are created.
So I'll refrain from back-patching this code change to v14 for now.
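A sketch of the failing shape (names hypothetical; the target is a
partitioned table whose partitions were all pruned or dropped):
    MERGE INTO pruned_parted p USING src s ON p.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val;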
Per report from Alvaro Herrera.
Discussion: https://postgr.es/m/20230328112248.6as34mlx5sr4kltg@alvherre.pgsql
The extra checks done in check_icu_locale() are not necessary. An
existing comment already pointed out that the checks would be done
during post-bootstrap initialization, when the locale is opened by the
backend. This was a mistake in commit 27b62377b4.
This commit creates a simpler function default_icu_locale() to just
return the locale of the default collator.
Discussion: https://postgr.es/m/04182066-7655-344a-b8b7-040b1b2490fb%40enterprisedb.com
Reviewed-by: Peter Eisentraut
To support older ICU versions, we rely on
icu_set_collation_attributes() to do error checking that is handled
directly by ucol_open() in newer ICU versions. Commit 3b50275b12
introduced a slight inconsistency, where the error report includes the
fixed-up locale string, rather than the locale string passed to
pg_ucol_open().
Refactor slightly so that pg_ucol_open() handles the errors from both
ucol_open() and icu_set_collation_attributes(), making it easier to
see any differences between the error reports. It also makes
pg_ucol_open() responsible for closing the UCollator on error, which
seems like the right place.
Discussion: https://postgr.es/m/04182066-7655-344a-b8b7-040b1b2490fb%40enterprisedb.com
Reviewed-by: Peter Eisentraut
Change the columns attndims, attstattarget, and attinhcount from int32
to int16, and reorder a bit. This saves some space (currently 4
bytes) in pg_attribute and tuple descriptors, which translates into
small performance benefits and/or room for new columns in pg_attribute
needed by future features.
attndims and attinhcount are never realistically used with values
larger than int16. Just to be sure, add some overflow checks.
attstattarget is currently limited explicitly to 10000.
For consistency, pg_constraint.coninhcount is also changed like
attinhcount.
Discussion: https://www.postgresql.org/message-id/flat/d07ffc2b-e0e8-77f7-38fb-be921dff71af%40enterprisedb.com
Two new macros are added with their respective functions switched to
use them. These are for functions with millisecond stats, with and
without "xact" in their names (for the stats that can be tracked within
a transaction).
While on it, prefix the macro for float8 on database entries with "_MS",
as it does a us->ms conversion, based on a suggestion from Andres
Freund.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/6e2efb4f-6fd0-807e-f6bf-94207db8183a@gmail.com
find_composite_type_dependencies() ignored indexes, which is a poor
decision because an expression index could have a stored column of
a composite (or other container) type even when the underlying table
does not. Teach it to detect such cases and error out. We have to
work a bit harder than for other relations because the pg_depend entry
won't identify the specific index column of concern, but it's not much
new code.
This does not address bug #17872's original complaint that dropping
a column in such a type might lead to violations of the uniqueness
property that a unique index is supposed to ensure. That seems of
much less concern to me because it won't lead to crashes.
Per bug #17872 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17872-d0fbb799dc3fd85d@postgresql.org
Commit bbc1376b39 checked that if
a redirected line pointer pointed to a tuple, the tuple should be
marked both HEAP_ONLY_TUPLE and HEAP_UPDATED. But Andres Freund
pointed out that *any* tuple that is marked HEAP_ONLY_TUPLE should
be marked HEAP_UPDATED, not just one that is the target of a
redirected line pointer. Do that instead.
To see why this is better, consider a redirect line pointer A
which points to a heap-only tuple B which points (via CTID)
to another heap-only tuple C. With the old code, we'd complain
if B was not marked HEAP_UPDATED, but with this change, we'll
complain if either B or C is not marked HEAP_UPDATED.
(Note that, with or without this commit, if either B or C were
not marked HEAP_ONLY_TUPLE, we would also complain about that.)
Discussion: http://postgr.es/m/CA%2BTgmobLypZx%3DcOH%2ByY1GZmCruaoucHm77A6y_-Bo%3Dh-_3H28g%40mail.gmail.com
Replace the hardcoded value with a GUC such that the iteration
count can be raised in order to increase protection against
brute-force attacks. The hardcoded value for SCRAM iteration
count was defined to be 4096, which is taken from RFC 7677, so
set the default for the GUC to 4096 to match. In RFC 7677 the
recommendation is at least 15000 iterations but 4096 is listed
as a SHOULD requirement given that it's estimated to yield a
0.5s processing time on a mobile handset of the era when the RFC
was written (late 2015).
Raising the iteration count of SCRAM will make stored passwords
more resilient to brute-force attacks at a higher computational
cost during connection establishment. Lowering the count will
reduce computational overhead during connections at the tradeoff
of reducing strength against brute-force attacks.
There are however platforms where even a modest iteration count
yields a too high computational overhead, with weaker password
encryption schemes chosen as a result. In these situations,
SCRAM with a very low iteration count still gives benefits over
weaker schemes like md5, so we allow the iteration count to be
set to one at the low end.
The new GUC is intentionally generically named such that it can
be made to support future SCRAM standards should they emerge.
At that point the value can be made into key:value pairs with
an undefined key as a default which will be backwards compatible
with this.
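A usage sketch (role name hypothetical):
    SET scram_iterations = 15000;  -- RFC 7677's recommended minimum
    -- with password_encryption = 'scram-sha-256', the stored secret
    -- for this role is now derived with 15000 iterations
    CREATE ROLE app_user LOGIN PASSWORD 'secret';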
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
Discussion: https://postgr.es/m/F72E7BC7-189F-4B17-BF47-9735EB72C364@yesql.se
The nested-arrays code path in ExecEvalArrayExpr() used palloc to
allocate the result array, whereas every other array-creating function
has used palloc0 since 18c0b4ecc. This mostly works, but unused bits
past the end of the nulls bitmap may end up undefined. That causes
valgrind complaints with -DWRITE_READ_PARSE_PLAN_TREES, and could
cause planner misbehavior as cited in 18c0b4ecc. There seems no very
good reason why we should strive to avoid palloc0 in just this one case,
so fix it the easy way with s/palloc/palloc0/.
While looking at that I noted that we also failed to check for overflow
of "nbytes" and "nitems" while summing the sizes of the sub-arrays,
potentially allowing a crash due to undersized output allocation.
For "nbytes", follow the policy used by other array-munging code of
checking for overflow after each addition. (As elsewhere, the last
addition of the array's overhead space doesn't need an extra check,
since palloc itself will catch a value between 1Gb and 2Gb.)
For "nitems", there's no very good reason to sum the inputs at all,
since we can perfectly well use ArrayGetNItems' result instead of
ignoring it.
Per discussion of this bug, also remove redundant zeroing of the
nulls bitmap in array_set_element and array_set_slice.
Patch by Alexander Lakhin and myself, per bug #17858 from Alexander
Lakhin; thanks also to Richard Guo. These bugs are a dozen years old,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17858-8fd287fd3663d051@postgresql.org
When extracting an attr from a cached tuple in the syscache with
SysCacheGetAttr, the isnull parameter must be checked in case the
attr cannot be NULL. For cases when this is known beforehand, a
wrapper is introduced which performs the error handling internally
on behalf of the caller, invoking an elog in case of a NULL attr.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/AD76405E-DB45-46B6-941F-17B1EB3A9076@yesql.se
The "partitions_total" and "partitions_done" fields were updated
as though the current level of partitioning was the only one.
In multi-level cases, not only could partitions_total change
over the course of the command, but partitions_done could go
backwards or exceed the currently-reported partitions_total.
Fix by setting partitions_total to the total number of direct
and indirect children once at command start, and then just
incrementing partitions_done at appropriate points. Invent
a new progress monitoring function "pgstat_progress_incr_param"
to simplify doing the latter. We can avoid adding cost for the
former when doing CREATE INDEX, because ProcessUtility already
enumerates the children and it's pretty easy to pass the count
down to DefineIndex. In principle the same could be done in
ALTER TABLE, but that's structurally difficult; for now, just
eat the cost of an extra find_all_inheritors scan in that case.
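For example, progress of a long CREATE INDEX on a partitioned table can
be watched with:
    SELECT relid::regclass, command, partitions_total, partitions_done
    FROM pg_stat_progress_create_index;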
Ilya Gladyshev and Justin Pryzby
Discussion: https://postgr.es/m/a15f904a70924ffa4ca25c3c744cff31e0e6e143.camel@gmail.com
This provides a very simple way to see the generic plan for a
parameterized query. Without this, it's necessary to define
a prepared statement and temporarily change plan_cache_mode,
which is a bit tedious.
One thing that's a bit of a hack perhaps is that we disable
execution-time partition pruning when the GENERIC_PLAN option
is given. That's because the pruning code may attempt to
fetch the value of one of the parameters, which would fail.
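For example (table and parameter hypothetical):
    EXPLAIN (GENERIC_PLAN)
    SELECT * FROM orders WHERE order_id = $1;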
Laurenz Albe, reviewed by Julien Rouhaud, Christoph Berg,
Michel Pelletier, Jim Jones, and myself
Discussion: https://postgr.es/m/0a29b954b10b57f0d135fe12aa0909bd41883eb0.camel@cybertec.at
ICU versions 53 and earlier rely on icu_set_collation_attributes() to
process the attributes in the locale string. Avoid leaking the
already-opened UCollator object if an error is encountered.
Discussion: https://postgr.es/m/04182066-7655-344a-b8b7-040b1b2490fb%40enterprisedb.com
Reviewed-by: Peter Eisentraut
The fields of NLSVERSIONINFOEX are of type DWORD, which is unsigned
long, so the results of the computations being printed are also of
type unsigned long.
The sslcertmode option controls whether the server is allowed and/or
required to request a certificate from the client. There are three
modes:
- "allow" is the default and follows the current behavior, where a
configured client certificate is sent if the server requests one
(via one of its default locations or sslcert). With the current
implementation, this will happen whenever TLS is negotiated.
- "disable" causes the client to refuse to send a client certificate
even if sslcert is configured or if a client certificate is available in
one of its default locations.
- "require" causes the client to fail if a client certificate is never
sent and the server opens a connection anyway. This doesn't add any
additional security, since there is no guarantee that the server is
validating the certificate correctly, but it may helpful to troubleshoot
more complicated TLS setups.
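For example, a hypothetical connection string:
    psql "host=db.example.com sslmode=verify-full sslcert=client.crt sslkey=client.key sslcertmode=require"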
sslcertmode=require requires SSL_CTX_set_cert_cb(), available since
OpenSSL 1.0.2. Note that LibreSSL does not include it.
Using a connection parameter different than require_auth has come up as
the simplest design because certificate authentication does not rely
directly on any of the AUTH_REQ_* codes, and one may want to require a
certificate to be sent in combination of a given authentication method,
like SCRAM-SHA-256.
TAP tests are added in src/test/ssl/, some of them relying on sslinfo to
check if a certificate has been set. These are compatible across all
the versions of OpenSSL supported on HEAD (currently down to 1.0.1).
Author: Jacob Champion
Reviewed-by: Aleksander Alekseev, Peter Eisentraut, David G. Johnston,
Michael Paquier
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
The same error message will be used for a different option, to be
introduced in a separate patch. Reshaping the error message as done
here saves in translation.
Extracted from a larger patch by the same author.
Author: Jacob Champion
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
This commit renames the members of a few pgstat structures related to
functions and relations, by respectively removing their prefix "f_" and
"t_". The statistics for functions and relations and handled in their
own file, and pgstatfuncs.c associates each field in a structure
variable named based on the object type handled, so no information is
lost with this rename.
This will help with some of the refactoring aimed for pgstatfuncs.c, as
this makes more consistent the field names with the SQL functions
retrieving them.
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier, Melanie Plageman
Discussion: https://postgr.es/m/9142f62a-a422-145c-bde0-b5bc498a4ada@gmail.com
Replace the symlink-chasing logic in find_my_exec with realpath(3),
which has been required by POSIX since SUSv2. (Windows lacks
realpath(), but there we can use _fullpath() which is functionally
equivalent.) The main benefit of this is that -- on all modern
platforms at least -- realpath() avoids the chdir() shenanigans
we used to perform while interpreting symlinks. That had various
corner-case failure modes so it's good to get rid of it.
There is still ongoing discussion about whether we could skip the
replacement of symlinks in some cases, but that's really a matter
for a separate patch. Meanwhile I want to push this before we get
too close to feature freeze, so that we can find out if there are
showstopper portability issues.
Discussion: https://postgr.es/m/797232.1662075573@sss.pgh.pa.us
Add pgstat counter to track row updates that result in the successor
version going to a new heap page, leaving behind an original version
whose t_ctid points to the new version. The current count is shown by
the n_tup_newpage_upd column of each of the pg_stat_*_tables views.
The new n_tup_newpage_upd column complements the existing n_tup_hot_upd
and n_tup_upd columns. Tables that have high n_tup_newpage_upd values
(relative to n_tup_upd) are good candidates for tuning heap fillfactor.
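For example, a sketch to spot such candidates:
    SELECT relname, n_tup_upd, n_tup_hot_upd, n_tup_newpage_upd
    FROM pg_stat_user_tables
    ORDER BY n_tup_newpage_upd DESC
    LIMIT 10;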
Corey Huinker, with small tweaks by me.
Author: Corey Huinker <corey.huinker@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CADkLM=ded21M9iZ36hHm-vj2rE2d=zcKpUQMds__Xm2pxLfHKA@mail.gmail.com
The "und" locale is an alternative spelling of the root locale, but it
was not recognized until ICU 55. To maintain common behavior across
all supported ICU versions, check for "und" and replace with "root"
before opening.
Previously, the lack of support for "und" was dangerous, because
versions 54 and older fall back to the environment when a locale is
not found. If the user specified "und" for the language (which is
expected and documented), it could not only resolve to the wrong
collator, but it could unexpectedly change (which could lead to
corrupt indexes).
This effectively reverts commit d72900bded, which worked around the
problem for the built-in "unicode" collation, and is no longer
necessary.
Discussion: https://postgr.es/m/60da0cecfb512a78b8666b31631a636215d8ce73.camel@j-davis.com
Discussion: https://postgr.es/m/0c6fa66f2753217d2a40480a96bd2ccf023536a1.camel@j-davis.com
Reviewed-by: Peter Eisentraut
This is evidently not required by most compilers, but buildfarm
member fairywren is unhappy without it. It looks like the meson
infrastructure has this right already.
Prior to the introduction of the compression API in e9960732a9, pg_dump
used ZLIB_IN_SIZE/ZLIB_OUT_SIZE to size its input and output buffers.
Commit 0da243fed0 introduced similar constants for LZ4, but while gzip
defined both buffers to be 4kB, LZ4 used 4kB and 16kB without any clear
reasoning why that's desirable.
Furthermore, parts of the code unaware of which compression is used
(e.g. pg_backup_directory.c) continued to use ZLIB_OUT_SIZE directly.
Simplify by replacing the various constants with DEFAULT_IO_BUFFER_SIZE,
set to 4kB. The compression implementations still have the option to use
a custom value, but considering that 4kB was fine for 20+ years, that
seems unlikely to be needed (more likely we'd just increase the default
buffer size).
Author: Georgios Kokolatos
Reviewed-by: Tomas Vondra, Justin Pryzby
Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com
After 0da243fed0 got committed, we received a report about a compiler
warning related to the new LZ4File_gets() function:
compress_lz4.c: In function 'LZ4File_gets':
compress_lz4.c:492:19: warning: comparison of unsigned expression in
'< 0' is always false [-Wtype-limits]
492 | if (dsize < 0)
The reason is very simple - dsize is declared as size_t, which is an
unsigned integer, and thus the check is pointless and we might fail to
notice an error in some cases (or fail in a strange way a bit later).
The warning could have been silenced by simply changing the type, but we
realized the API mostly assumes all the libraries use the same types and
report errors the same way (e.g. by returning 0 and/or negative value).
But we can't make this assumption - the gzip/lz4 libraries already
disagree on some of this, and even if they did a library added in the
future might not.
The right solution is to define what the API does, and translate the
library-specific behavior in a consistent way (so that internal errors
are not exposed to users of our compression API). So this adjusts the
data types in a couple of places so that we don't miss library errors,
and simplifies and unifies the error reporting to simply return
true/false (instead of e.g. size_t).
While at it, make sure LZ4File_open_write() does not clobber errno in
case open_func() fails.
Author: Georgios Kokolatos
Reported-by: Alexander Lakhin
Reviewed-by: Tomas Vondra, Justin Pryzby
Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com
Hide details of supporting older ICU versions in a wrapper
function. The current code only needs to handle
icu_set_collation_attributes(), but a subsequent commit will add
additional version-specific code.
Discussion: https://postgr.es/m/7ee414ad-deb5-1144-8a0e-b34ae3b71cd5@enterprisedb.com
Reviewed-by: Peter Eisentraut
We failed to apply updates and deletes when REPLICA IDENTITY FULL was
used for a table having generated columns, because generated columns
were not ignored while doing tuple comparison between the tuples from
the publisher and subscriber during apply of updates and deletes.
Author: Onder Kalaci
Reviewed-by: Shi yu, Amit Kapila
Backpatch-through: 12
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
This patch allows copying tables in the binary format during table
synchronization when the binary option for a subscription is enabled.
Previously, tables were copied in text format even if the subscription
was created with the binary option enabled. Copying tables in binary
format may reduce the time spent, depending on column types.
A binary copy for initial table synchronization is supported only when
both publisher and subscriber are v16 or later.
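For example (connection string and names hypothetical):

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (binary = true);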
Author: Melih Mutlu
Reviewed-by: Peter Smith, Shi yu, Euler Taveira, Vignesh C, Kuroda Hayato, Osumi Takamichi, Bharath Rupireddy, Hou Zhijie
Discussion: https://postgr.es/m/CAGPVpCQvAziCLknEnygY0v1-KBtg%2BOm-9JHJYZOnNPKFJPompw%40mail.gmail.com
* Commit 3048898e dropped -ING from PHJ wait event names. Update the
corresponding barrier phase names to match.
* Rename the "DONE" phases to "FREE". That's symmetrical with
"ALLOCATE", and names the activity that actually happens in that phase
(as we do for the other phases) rather than a state. The bug fixed by
commit 8d578b9b might have been more obvious with this name.
* Rename the batch/bucket growth barriers' "ALLOCATE" phases to
"REALLOCATE", a better description of what they do.
* Update the high-level comments about phases to mark phases that are
executed by a single process with an asterisk (mostly memory
management phases).
No behavior change, as this is just improving internal identifiers. The
only user-visible sign of this is that a couple of wait events' display
names change from "...Allocate" to "...Reallocate" in pg_stat_activity,
to stay in sync with the internal names.
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BMDpwF2Eo2LAvzd%3DpOh81wUTsrwU1uAwR-v6OGBB6%2B7g%40mail.gmail.com
Currently, in read committed transaction isolation mode (the default), we
have the following sequence of actions when tuple_update()/tuple_delete()
finds the tuple updated by a concurrent transaction.
1. Attempt to update/delete tuple with tuple_update()/tuple_delete(), which
returns TM_Updated.
2. Lock tuple with tuple_lock().
3. Re-evaluate plan qual (recheck if we still need to update/delete and
calculate the new tuple for update).
4. Second attempt to update/delete tuple with tuple_update()/tuple_delete().
This attempt should be successful, since the tuple was previously locked.
This patch eliminates step 2 by taking the lock during the first
tuple_update()/tuple_delete() call. The heap table access method saves
some effort by checking the updated tuple once instead of twice. Future
undo-based table access methods, which will start from the latest row
version, can immediately place a lock there.
The code in nodeModifyTable.c is simplified by removing the nested switch/case.
Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
Reviewed-by: Andres Freund, Chris Travers
When we lock a tuple using table_tuple_lock(), we also fetch the locked
tuple into the slot. In this case we can skip the extra
table_tuple_fetch_row_version() call, since we have already fetched the
'old' tuple and nobody can change it concurrently while it's locked.
Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
Reviewed-by: Andres Freund, Chris Travers
Use of pack("L") gets around the basic endian problem, but it doesn't
deal with the fact that the order of the bitfields within the struct
may differ. This patch fixes it to work with gcc on NetBSD/macppc,
but I wonder whether that will be enough --- in principle, there
could be four different combinations of bitpatterns needed here.
Discussion: https://postgr.es/m/1650745.1679513221@sss.pgh.pa.us
In commit 3e51b278d, I misinterpreted the coding in setup_config()
as setting min_wal_size and max_wal_size to compile-time-constant
values. But it's not: there's a hidden dependency on --wal-segsize.
Therefore leaving these variables commented out is the wrong thing.
Per report from Andres Freund.
Discussion: https://postgr.es/m/20230322200751.jvfvsuuhd3hgm6vv@awork3.anarazel.de
While testing commit 3e51b278d, I noted that initdb leaks about a
megabyte worth of data due to the sloppy bookkeeping in its
string-manipulating code. That's not a huge amount on modern machines,
but it's still kind of annoying, and it's easy to fix by recognizing
that we might as well treat these arrays of strings as
modifiable-in-place. There's no caller that cares about preserving
the old state of the array after replace_token or replace_guc_value.
With this fix, valgrind sees only a few hundred bytes leaked during
an initdb run.
Discussion: https://postgr.es/m/2844176.1674681919@sss.pgh.pa.us
This option, or its long form --set, sets the GUC "name" to "value".
The setting applies in the bootstrap and standalone servers run by
initdb, and is also written into the generated postgresql.conf.
This can save an extra editing step when creating a new cluster,
but the real use-case is for coping with situations where the
bootstrap server fails to start due to environmental issues;
for example, if it's necessary to force huge_pages to off.
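For example, to force huge_pages off while initializing a cluster
(assuming the short form is -c; data directory path hypothetical):

    initdb -D /path/to/data -c huge_pages=off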
Discussion: https://postgr.es/m/2844176.1674681919@sss.pgh.pa.us
RelationCopyStorageUsingBuffer() did not free the strategies used to access
the source / target relation. The memory was released at the end of the
transaction, but when using a template database with a lot of relations, the
temporary leak can become prohibitively big.
RelationCopyStorageUsingBuffer() acquired the buffer for the target relation
with RBM_NORMAL, therefore requiring a read of a block guaranteed to be
zero. Use RBM_ZERO_AND_LOCK instead.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20230321070113.o2vqqxogjykwgfrr@awork3.anarazel.de
Backpatch: 15-, where STRATEGY WAL_LOG was introduced
Prior to this commit, we only considered each tuple or line pointer
on the page in isolation, but now we can do some validation of a line
pointer against its successor. For example, a redirect line pointer
shouldn't point to another redirect line pointer, and if a tuple
is HOT-updated, the result should be a heap-only tuple.
Himanshu Upadhyaya and Robert Haas, reviewed by Aleksander Alekseev,
Andres Freund, and Peter Geoghegan.
The files generated by gen_node_support.pl for query jumbling
(queryjumblefuncs.funcs.c and queryjumblefuncs.switch.c) were not being
removed on make maintainer-clean (they need to remain around after a
simple "clean"). This commit makes the operation consistent with the
copy, equal, out and read files.
While at it, update a comment in the nodes' README where a reference to
queryjumblefuncs.funcs.c was missing.
Reported-by: Nathan Bossart
Reviewed-by: Richard Guo, Daniel Gustafsson
Discussion: https://postgr.es/m/ZBgAfTHcL6W7zGdW@paquier.xyz
As demonstrated by David Johnston, the Memoize cache hit ratio calculation
wasn't quite correct.
This change only affects the estimated hit ratio when the estimated number
of entries to cache is not expected to fit inside the cache. For example,
suppose we expect 2000 distinct cache key values and 10000 calls, but due
to memory constraints only expect to be able to cache 1000 of those
entries at once. If we could store all entries, the hit ratio would be
80%, accounting for the first occurrence of each of the 2000 key values
among the 10000 calls being a cache miss. Since we can store only 1000 of
the 2000 distinct values at once, that 80% must be reduced by half, making
the final estimate 40%.
Previously, the calculation would have produced an estimated hit ratio of
30%, which wasn't correct.
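Restating the example above numerically (a sketch; the variable names are
illustrative):

    hit_ratio = ((calls - ndistinct) / calls) * (est_cache_entries / ndistinct)
              = ((10000 - 2000) / 10000) * (1000 / 2000)
              = 0.8 * 0.5
              = 40%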
Apply to master only so as not to destabilize plans in the back branches.
Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwZEmcNk3YQo2Xj4EDUOdY6qakad31rOD1Vc4q1_s68-Ew@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrV44LwiF4W_qf_RpbGYWSgp1kF=cZr+kTRRaALUfmXqw@mail.gmail.com
These are set after a \! command or a backtick substitution.
SHELL_ERROR is just "true" for error (nonzero exit status) or "false"
for success, while SHELL_EXIT_CODE records the actual exit status
following standard shell/system(3) conventions.
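A hypothetical psql session illustrating both variables:

    => \! false
    => \echo :SHELL_ERROR :SHELL_EXIT_CODE
    true 1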
Corey Huinker, reviewed by Maxim Orlov and myself
Discussion: https://postgr.es/m/CADkLM=cWao2x2f+UDw15W1JkVFr_bsxfstw=NGea7r9m4j-7rQ@mail.gmail.com
The check for the number of roles in the target cluster for an upgrade
selects the existing roles and performs a COUNT(*) over the result. A
value of one is the expected query result value indicating that only
the install user is present in the new cluster. The result was converted
with the function for converting a string containing an Oid into a numeric,
which avoids potential overflow but makes the code less readable since
it's not actually an Oid at all.
Discussion: https://postgr.es/m/41AB5F1F-4389-4B25-9668-5C430375836C@yesql.se
This makes it easier to specify values taken directly from WAL file
names.
The option parsing is arranged in the style of option_parse_int() (but
we need to parse unsigned int), to allow future refactoring in the
same manner.
Reviewed-by: Sébastien Lardière <sebastien@lardiere.net>
Discussion: https://www.postgresql.org/message-id/flat/8fef346e-2541-76c3-d768-6536ae052993@lardiere.net
We failed to apply updates and deletes when REPLICA IDENTITY FULL was
used for a table having dropped columns, because dropped columns were
not ignored while doing tuple comparison between the tuples from the
publisher and subscriber during apply of updates and deletes.
Author: Onder Kalaci, Shi yu
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
With unlucky timing and parallel_leader_participation=off (not the
default), PHJ could attempt to access per-batch shared state just as it
was being freed. There was code intended to prevent that by checking
for a cleared pointer, but it was racy. Fix, by introducing an extra
barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to
access the per-batch state to find a batch to help with, and
PHJ_BUILD_DONE means that it is too late. The last to detach will free
the array of per-batch state as before, but now it will also atomically
advance the phase, so that late attachers can avoid the hazard. This
mirrors the way per-batch hash tables are freed (see phases
PHJ_BATCH_PROBING and PHJ_BATCH_DONE).
An earlier attempt to fix this (commit 3b8981b6, later reverted) missed
one special case. When the inner side is empty (the "empty inner"
optimization), the build barrier would only make it to the
PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from
the hashtable. In that case, fast-forward the build barrier to
PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold
and we can still negotiate who is cleaning up.
Revealed by build farm failures, where BarrierAttach() failed a sanity
check assertion, because the memory had been clobbered by dsa_free().
In non-assert builds, the result could be a segmentation fault.
Back-patch to all supported releases.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: David Geier <geidav.pg@gmail.com>
Tested-by: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
Counting writes only for io_context = 'normal' is unreliable, as backends
using a buffer access strategy could flush all of the dirty buffers out from
under the other backends and checkpointer. Change the test to count writes in
any context. This achieves roughly the same coverage anyway.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/ZAnWU8WbXEDjrfUE%40telsasoft.com
@extschema:name@ extends the existing @extschema@ feature so that
we can also insert the schema name of some required extension,
thus making cross-extension references robust even if they are in
different schemas.
However, this has the same hazard as @extschema@: if the schema
name is embedded literally in an installed object, rather than being
looked up once during extension script execution, then it's no longer
safe to relocate the other extension to another schema. To deal with
that without restricting things unnecessarily, add a "no_relocate"
option to extension control files. This allows an extension to
specify that it cannot handle relocation of some of its required
extensions, even if in themselves those extensions are relocatable.
We detect "no_relocate" requests of dependent extensions during
ALTER EXTENSION SET SCHEMA.
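A sketch of the intended usage (extension and object names hypothetical):
an extension requiring hstore could write, in its script file,

    CREATE FUNCTION myfunc() RETURNS text LANGUAGE sql
        AS $$ SELECT @extschema:hstore@.hstore(ARRAY['a'], ARRAY['1'])::text $$;

Since the substituted schema name is stored literally in the function
body, the extension would also declare in its control file

    no_relocate = 'hstore'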
Regina Obe, reviewed by Sandro Santilli and myself
Discussion: https://postgr.es/m/003001d8f4ae$402282c0$c0678840$@pcorp.us
When determining whether an index update may be skipped by using HOT, we
can ignore attributes indexed by block summarizing indexes, which hold no
references to individual tuples that would need to be cleaned up.
A new type TU_UpdateIndexes provides a signal to the executor to
determine which indexes to update - no indexes, all indexes, or only the
summarizing indexes.
This also removes the rd_indexattr list, replacing it with an rd_attrsvalid
flag. The list was not used anywhere, and a simple flag is sufficient.
This was originally committed as 5753d4ee32, but then got reverted by
e3fcca0d0d because of correctness issues.
Original patch by Josef Simanek, various fixes and improvements by Tomas
Vondra and me.
Authors: Matthias van de Meent, Josef Simanek, Tomas Vondra
Reviewed-by: Tomas Vondra, Alvaro Herrera
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
Discussion: https://postgr.es/m/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com
When calculating distance in brin_minmax_multi_distance_inet(), the
netmask was applied incorrectly. This resulted in (seemingly) incorrect
ordering of values, triggering an assert.
For builds without asserts this is mostly harmless - we may merge other
ranges, possibly resulting in a slightly less efficient index. But it's
still correct, and the greedy algorithm doesn't guarantee optimality
anyway.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported by Dmitry Dolgov, investigation and fix by me.
Reported-by: Dmitry Dolgov
Backpatch-through: 14
Discussion: https://postgr.es/m/17774-c6f3e36dd4471e67@postgresql.org
The Memoize executor node stores the cache key values along with the
tuple(s) found in the outer node that match each key value. However,
when the planner tried to estimate how many entries could be stored in
the cache, it didn't take into account that the cache key must also be
stored. In many cases this won't make a large difference, as the key is
likely small in comparison to the tuple(s) being stored, but it's not
impossible to craft cases where the key could take more memory than the
tuple(s) stored for it.
Here we adjust the planner so it takes into account the estimated amount
of memory to store the cache key. Effectively, this change will reduce
the estimated cache hit ratio when it's thought that not all items will
fit in the cache, thus Memoize will become more expensive in such cases.
The executor already takes into account the memory consumed by the cache
key, so here we only need to adjust the planner.
Discussion: https://postgr.es/m/CAApHDvqGErGuyBfQvBQrTCHDbzLTqoiW=_G9sOzeFxWEc_7auA@mail.gmail.com
When probing the Memoize cache to check if the current cache key values
exist in the cache, we perform an evaluation of the expressions making up
the cache key before probing the hash table for those values. This
operation could leak memory as it is possible that the cache key is an
expression which requires allocation of memory, as was the case in bug
17844.
Here we fix this by correctly switching to the per-tuple context before
evaluating the cache expressions so that the memory is freed the next time
the per-tuple context is reset.
Bug: 17844
Reported-by: Alexey Ermakov
Discussion: https://postgr.es/m/17844-d2f6f9e75a622bed@postgresql.org
Backpatch-through: 14, where Memoize was introduced
nodeRead() will have created a Node struct that's only allocated big
enough for the specific node type, so copying sizeof(union ValUnion)
can be copying too much. This provokes valgrind complaints, and with
very bad luck could perhaps result in SIGSEGV.
While at it, tidy up _equalA_Const to avoid duplicate checks of isnull.
Per report from Alexander Lakhin. This code is new as of a6bc33019,
so no need to back-patch.
Discussion: https://postgr.es/m/4995256b-cc65-170e-0b22-60ad2cd535f1@gmail.com
Add versions of timestamptz + interval, timestamptz - interval, and
generate_series(timestamptz, ...) in which a timezone can be specified
explicitly instead of defaulting to the TimeZone GUC setting.
The new functions for the first two are named date_add and
date_subtract. This might seem too generic, but we could use
overloading to add additional variants if that seems useful.
Along the way, improve the docs' pretty inadequate explanation
of how timestamptz +- interval works.
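For example, a call specifying the zone explicitly might look like this
(assuming date_add takes an optional time zone name as third argument):

    SELECT date_add('2021-10-31 00:00:00+02'::timestamptz,
                    '1 day'::interval, 'Europe/Warsaw');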
Przemysław Sztoch and Gurjeet Singh; cosmetic changes and most of
the docs work by me
Discussion: https://postgr.es/m/01a84551-48dd-1359-bf7e-f6b0203a6bd0@sztoch.pl
We already had five copies of essentially the same logic, and an
upcoming patch introduces yet another use-case. That's past my
threshold of pain, so introduce a common subroutine. There's not
that much net code savings, but the chance of typos should go down.
Inspired by a patch from Przemysław Sztoch, but different in detail.
Discussion: https://postgr.es/m/01a84551-48dd-1359-bf7e-f6b0203a6bd0@sztoch.pl
Check whether the datctype is C to determine whether t_isspace() and
related functions use isspace() or iswspace().
Previously, t_isspace() checked whether the database default collation
was C; which is incorrect when the default collation uses the ICU
provider.
Discussion: https://postgr.es/m/79e4354d9eccfdb00483146a6b9f6295202e7890.camel@j-davis.com
Reviewed-by: Peter Eisentraut
Backpatch-through: 15
Instead of trying to optimize this by skipping creation of the
links for tables we don't plan to dump, just create them all in
bulk with a single scan over the pg_inherits data. The previous
approach was more or less O(N^2) in the number of pg_inherits
entries, not to mention being way too complicated.
Also, don't create useless TableAttachInfo objects.
It's silly to create a TableAttachInfo object that we're not
going to dump, when we know perfectly well at creation time
that it won't be dumped.
Patch by me; thanks to Julien Rouhaud for review.
Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us
Hash partitioning on an enum is problematic because the hash codes are
derived from the OIDs assigned to the enum values, which will almost
certainly be different after a dump-and-reload than they were before.
This means that some rows probably end up in different partitions than
before, causing restore to fail because of partition constraint
violations. (pg_upgrade dodges this problem by using hacks to force
the enum values to keep the same OIDs, but that's not possible nor
desirable for pg_dump.)
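For example, a schema exhibiting the hazard (names hypothetical):

    CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
    CREATE TABLE t (m mood) PARTITION BY HASH (m);
    CREATE TABLE t0 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE TABLE t1 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 1);

After dump and reload, the enum values receive new OIDs, so rows can hash
to the other partition and a direct restore of t0/t1 can fail its
partition constraint.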
Users can work around that by specifying --load-via-partition-root,
but since that's a dump-time not restore-time decision, one might
find out the need for it far too late. Instead, teach pg_dump to
apply that option automatically when dealing with a partitioned
table that has hash-on-enum partitioning.
Also deal with a pre-existing issue for --load-via-partition-root
mode: in a parallel restore, we try to TRUNCATE target tables just
before loading them, in order to enable some backend optimizations.
This is bad when using --load-via-partition-root because (a) we're
likely to suffer deadlocks from restore jobs trying to restore rows
into other partitions than they came from, and (b) if we miss getting
a deadlock we might still lose data due to a TRUNCATE removing rows
from some already-completed restore job.
The fix for this is conceptually simple: just don't TRUNCATE if we're
dealing with a --load-via-partition-root case. The tricky bit is for
pg_restore to identify those cases. In dumps using COPY commands we
can inspect each COPY command to see if it targets the nominal target
table or some ancestor. However, in dumps using INSERT commands it's
pretty impractical to examine the INSERTs in advance. To provide a
solution for that going forward, modify pg_dump to mark TABLE DATA
items that are using --load-via-partition-root with a comment.
(This change also responds to a complaint from Robert Haas that
the dump output for --load-via-partition-root is pretty confusing.)
pg_restore checks for the special comment as well as checking the
COPY command if present. This will fail to identify the combination
of --load-via-partition-root and --inserts in pre-existing dump files,
but that should be a pretty rare case in the field. If it does
happen you will probably get a deadlock failure that you can work
around by not using parallel restore, which is the same as before
this bug fix.
Having done this, there seems no remaining reason for the alarmism
in the pg_dump man page about combining --load-via-partition-root
with parallel restore, so remove that warning.
Patch by me; thanks to Julien Rouhaud for review. Back-patch to
v11 where hash partitioning was introduced.
Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us
Mainly move some detail from errmsg to errdetail, remove explicit
mention of superuser where appropriate, since that is implied in most
permission checks, and make messages more uniform.
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/20230316234701.GA903298@nathanxps13
Since commit 7627b91cd5, libpq has used FD_CLOEXEC so that sockets
wouldn't be leaked to subprograms. With enough bad luck, a
multi-threaded program might fork in between the socket() and fcntl()
calls. We can close that tiny gap by using SOCK_CLOEXEC instead of a
separate call. While here, we might as well do the same for
SOCK_NONBLOCK, to save another syscall.
These flags are expected to appear in the next revision of the POSIX
standard, specifically to address this problem. Our Unixoid targets
except macOS and AIX have had them for a long time, and macOS would
hopefully use guarded availability to roll them out, so it seems enough
to use a simple ifdef test for availability until we hear otherwise.
Windows doesn't have them, but has non-inheritable sockets by default.
Discussion: https://postgr.es/m/CA%2BhUKGKb6FsAdQWcRL35KJsftv%2B9zXqQbzwkfRf1i0J2e57%2BhQ%40mail.gmail.com
Unfortunately it turns out that the logfile-only option added in b9f8d1cbad
is only available in openldap starting in 2.6.
Luckily the option to control the log level (loglevel/-s) has been around
for much longer. As it turns out, loglevel/-s only controls what goes into
syslog, not what ends up in the file specified with 'logfile' and stderr.
While we currently are specifying 'logfile', nothing ends up in it, as the
option only controls debug messages, and we didn't set a debug level. The
debug level can only be configured on the commandline and also prevents
forking. That'd require larger changes, so this commit doesn't tackle that
issue.
Specify the syslog level when starting slapd using -s, as that allows
preventing all syslog messages if one uses '0' instead of 'none', while
loglevel doesn't prevent the first message.
Discussion: https://postgr.es/m/20230311233708.3yjdbjkly2q4gq2j@awork3.anarazel.de
Backpatch: 11-
Currently, there are quite a few places in reorderbuffer.c that try to
access the top-transaction for a subtransaction. This commit makes the
code that accesses the top-transaction consistent and easier to follow.
Author: Peter Smith
Reviewed-by: Vignesh C, Sawada Masahiko
Discussion: https://postgr.es/m/CAHut+PuCznOyTqBQwjRUu-ibG-=KHyCv-0FTcWQtZUdR88umfg@mail.gmail.com
The logic added in 9d9c02ccd to determine when a qual can be used as a
WindowClause run condition failed to correctly check for subqueries in the
qual. This was being done correctly for normal subquery qual pushdowns;
it's just that 9d9c02ccd failed to follow that lead.
This also fixes various other cases where transforming the qual into a
WindowClause run condition in the subquery should have been disallowed.
Bug: #17826
Reported-by: Anban Company
Discussion: https://postgr.es/m/17826-7d8750952f19a5f5@postgresql.org
Backpatch-through: 15, where 9d9c02ccd was introduced.
Until now the tests using slapd spammed syslog for every connection /
query. Use logfile-only to prevent syslog activity. Unfortunately that only
takes effect after logging the first message, but that's still much better
than the prior situation.
Discussion: https://postgr.es/m/20230311233708.3yjdbjkly2q4gq2j@awork3.anarazel.de
Backpatch: 11-
Support for SCM credential authentication has been removed in the
backend in 9.1, and libpq has kept some code to handle it for
compatibility.
Commit be4585b, which did the cleanup of the backend code, did so
because the code was not really portable originally. And, as there is
little chance that this is still used these days, this removes the
remaining code from libpq. An error will now be raised by libpq if it
attempts to connect to a server that returns AUTH_REQ_SCM_CREDS.
References to SCM credential authentication are removed from the
protocol documentation. This removes some meson and configure checks.
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/ZBLH8a4otfqgd6Kn@paquier.xyz
Further to commit 6a9229da, checking for NULL is now redundant. An "out
of memory" error would have been thrown already by palloc() and treated
as FATAL, so we can delete a few more lines.
Back-patch to all releases, like those other commits.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/4040668.1679013388%40sss.pgh.pa.us
create_append_path() would only apply get_baserel_parampathinfo
when the path is for a partitioned table, but it's also potentially
useful for paths for UNION ALL appendrels. Specifically, that
supports building a Memoize path atop this one.
While we're in the vicinity, delete some dead code in
create_merge_append_plan(): there's no need for it to support
parameterized MergeAppend paths, and it doesn't look like that
is going to change anytime soon. It'll be easy enough to undo
this when/if it becomes useful.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_ABSu4PWG2rE1q10tJugEXHWgru3U8dAgkoFvgrb6aEA@mail.gmail.com
gcc 12+ has complaints like the following:
../../../../../pgsql/src/backend/utils/adt/network.c: In function 'inetnot':
../../../../../pgsql/src/backend/utils/adt/network.c:1893:34: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
1893 | pdst[nb] = ~pip[nb];
| ~~~~~~~~~^~~~~~~~~~
../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16
27 | unsigned char ipaddr[16]; /* up to 128 bits of address */
| ^~~~~~
../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16
This is due to a compiler bug:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104986
It has been a year since the bug was reported without getting fixed. As
the warnings are verbose and use of gcc 12 is becoming more common, it
seems worth working around the bug, particularly because a simple
reformulation of the loop condition fixes the issue and isn't any less
readable.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/144536.1648326206@sss.pgh.pa.us
Backpatch: 11-
A comment was left behind claiming that we needed to use malloc() rather
than palloc() because the corresponding free would run in another
thread, but that's not true anymore. Remove that comment. And, with
the reason being gone, we might as well actually use palloc().
Back-patch to supported releases, like d41a178b.
Discussion: https://postgr.es/m/CA%2BhUKG%2BpdM9v3Jv4tc2BFx2jh_daY3uzUyAGBhtDkotEQDNPYw%40mail.gmail.com
DecodeDateTime and DecodeTimeOnly had support for date input in the
style "Y2023M03D16", which the comments claimed to be an "ISO" format.
However, so far as I can find there is no such format in ISO 8601;
they write units before numbers in intervals, but not in datetimes.
Furthermore, the lesser-known ISO 8601-2 spec actually defines an
incompatible format "2023Y03M16D". None of our documentation mentions
such a format either. So let's just drop it.
That leaves us with only two cases for a prefix unit specifier in
datetimes: Julian dates written as Jnnnn, and the "T" separator
defined by ISO 8601. Add checks to catch misuse of these specifiers,
that is consecutive specifiers or a dangling specifier at the end of
the string. We do not however disallow a specifier that is separated
from the field that it disambiguates (by noise words or unrelated
fields). That being the case, remove some overly-aggressive error
checks from the ISOTIME cases.
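For example, Julian-date input remains valid:

    SELECT date 'J2451187';    -- January 8, 1999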
Joseph Koshakow, editorialized a bit by me; thanks also to
Peter Eisentraut for some standards-reading.
Discussion: https://postgr.es/m/CAAvxfHf2Q1gKLiHGnuPOiyf0ASvKUM4BnMfsXuwgtYEb_Gx0Zw@mail.gmail.com
.../src/common/file_utils.c: In function ‘pg_pwrite_zeros’:
.../src/common/file_utils.c:543:9: warning: ‘static’ is not at beginning of declaration [-Wold-style-declaration]
543 | const static PGAlignedBlock zbuffer = {{0}}; /* worth BLCKSZ */
The user receiving the message might not understand where the
server's "current directory" is. "Data directory" seems clearer.
(This would not be good for frontend code, but both of these
messages are only issued in the backend.)
Kyotaro Horiguchi
Discussion: https://postgr.es/m/20230316.111646.1564684434328830712.horikyota.ntt@gmail.com
In the .pc (pkg-config) files generated by the make and meson builds,
the Requires.private entries use different delimiters. The make build
uses spaces, the meson build uses commas. The pkg-config documentation
says that it should be comma-separated, but apparently about half the
.pc files in the wild use just spaces. The pkg-config source code
acknowledges that both commas and spaces work.
This changes the make build to use commas, for consistency.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/1fb52d61-0964-2d8e-87d9-e8be830e2b24%40enterprisedb.com
This structure included only PgStat_FunctionCounts, and removing it
facilitates some upcoming refactoring for pgstatfuncs.c to use more
macros rather than mostly-duplicated functions.
Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/11d531fe-52fc-c6ea-7e8e-62f1b6ec626e@gmail.com
A failure in parsing the interval value given to the \watch command
silently fell back to an interval of 1s between two queries, which can
be confusing. This commit improves the error handling, and adds a couple
of tests to check for:
- An incorrect value.
- An out-of-range value.
- A negative value.
A value of zero now works, meaning that there is no interval of time
between two queries in a \watch loop. No backpatch is done, as it
could break existing applications.
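For example, in a psql session (output elided):

    => SELECT count(*) FROM pg_stat_activity;
    => \watch 0.5
    => \watch 0

The first \watch reruns the query every half second; the second reruns it
with no delay at all, which previously was not possible.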
Author: Andrey Borodin
Reviewed-by: Kyotaro Horiguchi, Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com
This adds the ability to pretty-print XML documents ... according to
libxml's somewhat idiosyncratic notions of what's pretty, anyway.
One notable divergence from a strict reading of the spec is that
libxml is willing to collapse empty nodes "<node></node>" to just
"<node/>", whereas SQL and the underlying XML spec say that this
option should only result in whitespace tweaks. Nonetheless,
it seems close enough to justify using the SQL-standard syntax.
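For example (indentation as chosen by libxml):

    SELECT xmlserialize(DOCUMENT '<a><b>42</b></a>'::xml AS text INDENT);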
Jim Jones, reviewed by Peter Smith and myself
Discussion: https://postgr.es/m/2f5df461-dad8-6d7d-4568-08e10608a69b@uni-muenster.de
The hook can be installed by a shared_preload library.
A similar mechanism could be used for RADIUS passwords, for example, and
the type name auth_password_hook_typ has been chosen with that in mind.
John Naylor and Andrew Dunstan
Discussion: https://postgr.es/m/469b06ed-69de-ba59-c13a-91d2372e52a9@dunslane.net
preprocess_targetlist thought PHVs couldn't appear here.
It was mistaken, as per report from Önder Kalacı.
Surveying other pull_var_clause calls, I noted no similar errors,
but I did notice that qual_is_pushdown_safe's assertion about
!contain_window_function was pointless, because the following
pull_var_clause call would complain about them anyway. In HEAD
only, remove the redundant Assert and improve the commentary.
Discussion: https://postgr.es/m/CACawEhUuum-gC_2S3sXLTcsk7bUSPSHOD+g1ZpfKaDK-KKPPWA@mail.gmail.com
The previous coding based on select() had commentary about historical
portability concerns. Use POSIX nanosleep() instead.
This has independently been suggested a couple of times before, but
never managed to stick. Since recent and proposed work removes other
uses of select(), and associated code and comments relating to its
non-portable interaction with signals, it seems like a good time to tidy
up this case, too.
Also modernize the explanation of why WaitLatch() is a better way to
wait.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Suggested-by: Paul Guo <paulguo@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com
Discussion: https://postgr.es/m/CABQrizfxpBLZT5mZeE0js5oCh1tqEWvcGF3vMRCv5P-RwUY5dQ@mail.gmail.com
Discussion: https://postgr.es/m/4902.1552349020@sss.pgh.pa.us
There are still some systems that use traditional tick-based sleep
timing, but many including Linux, FreeBSD and macOS started using high
resolution timer hardware more directly a decade or two ago. Update our
comment about that. Also highlight that Windows is like the older
Unixen in that respect.
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BogAon8_V223Ldv6taPR2uKH3X_UJ_A7LJAf3-VRARPA%40mail.gmail.com
The current implementation of _pgfstat64() is ineffective in detecting a
terminal handle or an anonymous named pipe. This commit improves our
port of fstat() to detect such cases more efficiently by relying on
GetFileType(), and to return more correct data when the type found is
either a FILE_TYPE_PIPE (_S_IFIFO) or a FILE_TYPE_CHAR (_S_IFCHR).
This is part of a more global fix to address failures when feeding the
output generated by pg_dump to pg_restore through a pipe, for example,
but not all of it. We are also going to need to do something about
fseek() and ftello() which are not reliable on WIN32 for the same cases
where fstat() was incorrect. Fixing fstat() is independent of the rest,
though, which is why both fixes are handled separately, and this is the
first part of it.
Reported-by: Daniel Watzinger
Author: Daniel Watzinger, Juan José Santamaría Flecha
Discussion: https://postgr.es/m/b1448cd7-871e-20e3-8398-895e2d1d3bf9@gmail.com
Backpatch-through: 14
Using REPLICA IDENTITY FULL on the publisher can lead to a full table scan
per tuple change on the subscription when REPLICA IDENTITY or PK index is
not available. This makes REPLICA IDENTITY FULL impractical to use apart
from some small number of use cases.
This patch allows using indexes other than PRIMARY KEY or REPLICA
IDENTITY on the subscriber during apply of update/delete. The index that
can be used must be a btree index, not a partial index, and it must have
at least one column reference (i.e. cannot consist of only expressions).
We can lift these restrictions in the future. There is no smart
mechanism to pick the index. If there is more than one index that
satisfies these requirements, we just pick the first one. We discussed
using some of the optimizer's low-level APIs for this but ruled it out
as that can be a maintenance burden in the long run.
This patch improves the performance in the vast majority of cases and the
improvement is proportional to the amount of data in the table. However,
there could be some regression in a small number of cases where the indexes
have a lot of duplicate and dead rows. It was discussed that those are
mostly impractical cases but we can provide a table or subscription level
option to disable this feature if required.
Author: Onder Kalaci, Amit Kapila
Reviewed-by: Peter Smith, Shi yu, Hou Zhijie, Vignesh C, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/CACawEhVLqmAAyPXdHEPv1ssU2c=dqOniiGz7G73HfyS7+nGV4w@mail.gmail.com
Commit 4753ef37 changed vacuum_delay_point() to use the WaitLatch() API,
to fix the problem that vacuum could keep running for a very long time
after the postmaster died.
Unfortunately, that broke commit caf626b2's support for fractional
vacuum_cost_delay, which shipped in PostgreSQL 12. WaitLatch() works in
whole milliseconds.
For now, revert the change from commit 4753ef37, but add an explicit
check for postmaster death. That's an extra system call on systems
other than Linux and FreeBSD, but that overhead doesn't matter much
considering that we willingly went to sleep and woke up again. (In
later work, we might add higher resolution timeouts to the latch API so
that we could do this with our standard programming pattern, but that
wouldn't be back-patched.)
Back-patch to 14, where commit 4753ef37 arrived.
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com
Our waitpid() emulation didn't prevent a PID from being recycled by the
OS before the call to waitpid(). The postmaster could finish up
tracking more than one child process with the same PID, and confuse
them.
Fix, by moving the guts of pgwin32_deadchild_callback() into waitpid(),
so that resources are released synchronously. The process and PID
continue to exist until we close the process handle, which only happens
once we're ready to adjust our book-keeping of running children.
This seems to explain a couple of failures on CI. It had never been
reported before, despite the code being as old as the Windows port.
Perhaps Windows started recycling PIDs more rapidly, or perhaps timing
changes due to commit 7389aad6 made it more likely to break.
Thanks to Alexander Lakhin for analysis and Andres Freund for tracking
down the root cause.
Back-patch to all supported branches.
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230208012852.bvkn2am4h4iqjogq%40awork3.anarazel.de
The band-aid applied in commit f0bedf3e4 turns out to still need
some work: it made sure we didn't set Np->last_relevant too small
(to the left of the decimal point), but it didn't prevent setting
it too large (off the end of the partially-converted string).
This could result in fetching data beyond the end of the allocated
space, which with very bad luck could cause a SIGSEGV, though
I don't see any hazard of interesting memory disclosure.
Per bug #17839 from Thiago Nunes. The bug's pretty ancient,
so back-patch to all supported versions.
Discussion: https://postgr.es/m/17839-aada50db24d7b0da@postgresql.org
This patch adds new pg_dump switches
--table-and-children=pattern
--exclude-table-and-children=pattern
--exclude-table-data-and-children=pattern
which act the same as the existing --table, --exclude-table, and
--exclude-table-data switches, except that any partitions or
inheritance child tables of the table(s) matching the pattern
are also included or excluded.
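For example (database and pattern hypothetical):

    pg_dump --table-and-children='measurement*' mydb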
Gilles Darold, reviewed by Stéphane Tachoires
Discussion: https://postgr.es/m/5aa393b5-5f67-8447-b83e-544516990ee2@migops.com
Scanning the expression for compatible Vars isn't really necessary,
because the subsequent match against StatisticExtInfo entries will
eliminate expressions containing other Vars just fine. Moreover,
this code hadn't stopped to think about what to do with
PlaceHolderVars or Aggrefs in the clause; and at least for the PHV
case, that demonstrably leads to failures. Rather than work out
whether it's reasonable to ignore those, let's just remove the
whole stanza.
Per report from Richard Guo. Back-patch to v14 where this code
was added.
Discussion: https://postgr.es/m/CAMbWs48Mmvm-acGevXuwpB=g5JMqVSL6i9z5UaJyLGJqa-XPAA@mail.gmail.com
Expose the standard error functions as SQL-callable functions. These
are expected to be useful to people working with normal distributions,
and we use them here to test the distribution from random_normal().
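Assuming the functions exposed are erf() and erfc() (the C99 error
functions), usage might look like:

    SELECT erf(1.0), erfc(1.0);
    -- approximately 0.8427007929497149 and 0.15729920705028513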
Since these functions are defined in the POSIX and C99 standards, they
should in theory be available on all supported platforms. If that
turns out not to be the case, more work will be needed.
On all platforms tested so far, using extra_float_digits = -1 in the
regression tests is sufficient to allow for variations between
implementations. However, past experience has shown that there are
almost certainly going to be additional unexpected portability issues,
so these tests may well need further adjustments, based on the
buildfarm results.
Dean Rasheed, reviewed by Nathan Bossart and Thomas Munro.
Discussion: https://postgr.es/m/CAEZATCXv5fi7+Vu-POiyai+ucF95+YMcCMafxV+eZuN1B-=MkQ@mail.gmail.com
The new connection parameter require_auth allows a libpq client to
define a list of comma-separated acceptable authentication types for use
with the server. There is no negotiation: if the server does not
present one of the allowed authentication requests, the connection
attempt done by the client fails.
The following keywords can be defined in the list:
- password, for AUTH_REQ_PASSWORD.
- md5, for AUTH_REQ_MD5.
- gss, for AUTH_REQ_GSS[_CONT].
- sspi, for AUTH_REQ_SSPI and AUTH_REQ_GSS_CONT.
- scram-sha-256, for AUTH_REQ_SASL[_CONT|_FIN].
- creds, for AUTH_REQ_SCM_CREDS (perhaps this should be removed entirely
now).
- none, to control unauthenticated connections.
All the methods that can be defined in the list can be negated, like
"!password", in which case the server must NOT use the listed
authentication type. The special method "none" allows/disallows the use
of unauthenticated connections (but it does not govern transport-level
authentication via TLS or GSSAPI).
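For example (values hypothetical; a list contains either only negated or
only non-negated methods):

    require_auth=scram-sha-256     accept only SCRAM exchanges
    require_auth=!password,!md5    accept anything except plain or MD5 passwords
    require_auth=none              accept only unauthenticated connections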
Internally, the patch logic is tied to check_expected_areq(), which was
already used for channel_binding, ensuring that an incoming request is
compatible with conn->require_auth. It also introduces a new flag,
conn->client_finished_auth, which is set by various authentication
routines when the client side of the handshake is finished. This
signals to check_expected_areq() that an AUTH_REQ_OK from the server is
expected, and allows the client to complain if the server bypasses
authentication entirely, with for example the reception of a too-early
AUTH_REQ_OK message.
Regression tests are added in authentication TAP tests for all the
keywords supported (except "creds", because it is around only for
compatibility reasons). A new TAP script has been added for SSPI, as
there was no script dedicated to it yet. It relies on SSPI being the
default authentication method on Windows, as set by pg_regress.
Author: Jacob Champion
Reviewed-by: Peter Eisentraut, David G. Johnston, Michael Paquier
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
The majority of error exit cases in json_lex_string() failed to
set lex->token_terminator, causing problems for the error context
reporting code: it would see token_terminator less than token_start
and do something more or less nuts. In v14 and up the end result
could be as bad as a crash in report_json_context(). Older
versions accidentally avoided that fate; but all versions produce
error context lines that are far less useful than intended,
because they'd stop at the end of the prior token instead of
continuing to where the actually-bad input is.
To fix, invent some macros that make it less notationally painful
to do the right thing. Also add documentation about what the
function is actually required to do; and in >= v14, add an assertion
in report_json_context about token_terminator being sufficiently
far advanced.
Per report from Nikolay Shaplov. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/7332649.x5DLKWyVIX@thinkpad-pgpro
check_agg_arguments_walker() supposed that it needn't descend into
the arguments of a lower-level aggregate function, but this is
just wrong in the presence of multiple levels of sub-select. The
oversight would lead to executor failures on queries that should
be rejected. (Prior to v11, they actually were rejected, thanks
to a "redundant" execution-time check.)
Per bug #17835 from Anban Company. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17835-4f29f3098b2d0ba4@postgresql.org
This allows for a string which, if an input field matches it, causes the
column's default value to be inserted. The advantage of this is that
the default can be inserted in some rows and not others, where
non-default data is available.
The file_fdw extension is also modified to allow use of this
option.
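For example (marker string hypothetical):

    CREATE TABLE t (a int, b int DEFAULT 42);
    COPY t FROM stdin WITH (DEFAULT '\D');

An input field containing \D then receives the column default (42 for b),
while other rows can still supply explicit values.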
Israel Barth Rubio
Discussion: https://postgr.es/m/CAO_rXXAcqesk6DsvioOZ5zmeEmpUN5ktZf-9=9yu+DTr0Xr8Uw@mail.gmail.com
This ensures that the row count in the command tag for a MERGE is
correctly computed in the case where UPDATEs or DELETEs are skipped
due to a BEFORE ROW trigger returning NULL (the INSERT case was
already handled correctly by ExecMergeNotMatched() calling
ExecInsert()).
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCU8XEmR0JWKDtyb7iZ%3DqCffxS9uyJt0iOZ4TV4RT%2Bow1w%40mail.gmail.com
If MERGE attempts an UPDATE or DELETE on a table with BEFORE ROW
triggers, or a cross-partition UPDATE (with or without triggers), and
a concurrent UPDATE or DELETE happens, the merge code would fail.
In some cases this would lead to a crash, while in others it would
cause the wrong merge action to be executed, or no action at all. The
immediate cause of the crash was the trigger code calling
ExecGetUpdateNewTuple() as part of the EPQ mechanism, which fails
because during a merge ri_projectNew is NULL, since merge has its own
per-action projection information, which ExecGetUpdateNewTuple() knows
nothing about.
Fix by arranging for the trigger code to exit early, returning the
TM_Result and TM_FailureData information, if a concurrent modification
is detected, allowing the merge code to do the necessary EPQ handling
in its own way. Similarly, prevent the cross-partition update code
from doing any EPQ processing for a merge, allowing the merge code to
work out what it needs to do.
This leads to a number of simplifications in nodeModifyTable.c. Most
notably, the ModifyTableContext->GetUpdateNewTuple() callback is no
longer needed, and mergeGetUpdateNewTuple() can be deleted, since
there is no longer any requirement for get-update-new-tuple during a
merge. Similarly, ModifyTableContext->cpUpdateRetrySlot is no longer
needed. Thus ExecGetUpdateNewTuple() and the retry_slot handling of
ExecCrossPartitionUpdate() can be restored to how they were in v14,
before the merge code was added, and ExecMergeMatched() no longer
needs any special-case handling for cross-partition updates.
While at it, tidy up ExecUpdateEpilogue() a bit, making it handle
recheckIndexes locally, rather than passing it in as a parameter,
ensuring that it is freed properly. This dates back to when it was
split off from ExecUpdate() to support merge.
Per bug #17809 from Alexander Lakhin, and follow-up investigation of
bug #17792, also from Alexander Lakhin.
Back-patch to v15, where MERGE was introduced, taking care to preserve
backwards-compatibility of the trigger API in v15 for any extensions
that might use it.
Discussion: https://postgr.es/m/17809-9e6650bef133f0fe%40postgresql.org
Discussion: https://postgr.es/m/17792-0f89452029662c36%40postgresql.org
Most of these calls were to generate some random data. These can be
replaced by appropriately adapted sha256() calls. To keep the diff
smaller, we wrap this into a helper function that produces the same
output format and length as the md5() call.
This will eventually allow these tests to pass in OpenSSL FIPS mode
(which does not allow MD5 use).
Similar work for other test suites will follow later.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/dbbd927f-ef1f-c9a1-4ec6-c759778ac852@enterprisedb.com
The recently added standard collation UNICODE (0d21d4b9bc) doesn't
give consistent results on some build farm members with old ICU
versions. Apparently, the ICU locale specification 'und' (language
tag style) misbehaves on some older ICU versions. Replacing it with
'' (ICU locale ID style) fixes it at least on some OS versions. Let's
see what the build farm says.
The error cases for TLS and GSS encryption were inconsistent. After TLS
fails, the connection is marked as dead and follow-up calls of
PQconnectPoll() would return immediately, but GSS encryption was not
doing that, so the connection would still have been allowed to enter the
GSS handling code. This was handled incorrectly when gssencmode was set
to "require". "prefer" was working correctly, and this could not happen
under "disable" as GSS encryption would not be attempted.
This commit makes the error handling of GSS encryption on par with the
TLS portion, fixing the case of gssencmode=require.
Reported-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion, Stephen Frost
Discussion: https://postgr.es/m/23787477-5fe1-a161-6d2a-e459f74c4713@timescale.com
Backpatch-through: 12
The 'ssl' option is of type 'combo', but we add a choice 'auto' that
simulates the behavior of a feature option. This way, openssl is used
automatically by default if present, but we retain the ability to
potentially select another ssl library.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ad65ffd1-a9a7-fda1-59c6-f7dc763c3051%40enterprisedb.com
adjust_appendrel_attrs can't transfer nullingrel labeling to a non-Var
translation expression (mainly because it's too late to wrap such an
expression in a PlaceHolderVar). I'd supposed in commit 2489d76c4
that that restriction was unreachable because we'd not attempt to push
problematic clauses down to an appendrel child relation. I forgot that
set_append_rel_size blindly converts all the parent rel's joininfo
clauses to child clauses, and that list could well contain clauses
from above a nulling outer join.
We might eventually have to devise a direct fix for this implementation
restriction, but for now it seems enough to filter out troublesome
clauses while constructing the child's joininfo list. Such clauses
are certainly not useful while constructing paths for the child rel;
they'll have to be applied later when we join the completed appendrel
to something else. So we don't need them here, and omitting them from
the list should save a few cycles while processing the child rel.
Per bug #17832 from Marko Tiikkaja.
Discussion: https://postgr.es/m/17832-d0a8106cdf1b722e@postgresql.org
This was an omission in the original creation of the module.
Also slightly adjust some wording to avoid a double "is".
Backpatch the non-meson piece of this to release 12, where the module
was introduced.
Discussion: https://postgr.es/m/be869e1c-8e3f-4cde-8609-212c899cccf9@dunslane.net
Freezing the relation N times and fetching the tuples one-by-one isn't that
cheap. On my machine this reduces test times by a bit less than one second, on
windows CI it's a few seconds.
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230309001558.b7shzvio645ebdta@awork3.anarazel.de
64bit xids can't represent xids before epoch 0 (see also be504a3e97). When
FullTransactionIdFromXidAndCtx() was passed such an xid, it'd create a 64bit
xid far into the future. Noticed while adding assertions in the course of
investigating be504a3e97, as amcheck's tests create such xids.
To fix the issue, just return FirstNormalFullTransactionId in this case. A
freshly initdb'd cluster already has a newer horizon. The most minimal version
of this fix would make the messages for some detected corruptions inaccurate
in a different way. To make those cases accurate, switch
FullTransactionIdFromXidAndCtx() to use the 32bit modulo difference between
xid and nextxid to compute the 64bit xid, yielding sensible "in the future" /
"in the past" answers.
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230108002923.cyoser3ttmt63bfn@awork3.anarazel.de
Backpatch: 14-, where heapam verification was introduced
The COPY documentation is quite clear that "COPY relation TO" copies
rows from only the named table, not any inheritance children it may
have. However, if you enabled row-level security on the table then
this stopped being true, because the code forgot to apply the ONLY
modifier in the "SELECT ... FROM relation" query that it constructs
in order to allow RLS predicates to be attached. Fix that.
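For illustration, a minimal sketch of the now-restored behavior (table
names and session details here are hypothetical):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ENABLE ROW LEVEL SECURITY;
    -- for a user subject to the policies, this now reads rows from
    -- parent only, as if "SELECT ... FROM ONLY parent" were used
    COPY parent TO STDOUT;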
Report and patch by Antonin Houska (comment adjustments and test case
by me). Back-patch to all supported branches.
Discussion: https://postgr.es/m/3472.1675251957@antos
Previously, the default encoding was derived from the locale when
using libc; while the default was always UTF-8 when using ICU. That
would throw an error when the locale was not compatible with UTF-8.
This commit causes initdb to derive the default encoding from the
locale for both providers. If --no-locale is specified (or if the
locale is C or POSIX), the default encoding will be UTF-8 for ICU
(because ICU does not support SQL_ASCII) and SQL_ASCII for libc.
Per buildfarm failure on system "hoverfly" related to commit
27b62377b4.
Discussion: https://postgr.es/m/d191d5841347301a8f1238f609471ddd957fc47e.camel%40j-davis.com
Datetime input formerly accepted combinations such as
'1995-08-06 infinity', but this seems like a clear error.
Reject any combination of regular y/m/d/h/m/s fields with
these special tokens.
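For example (a sketch; the exact error wording may differ):

    -- previously accepted, now rejected
    SELECT timestamp '1995-08-06 infinity';
    -- ERROR:  invalid input syntax for type timestamp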
Joseph Koshakow, reviewed by Keisuke Kuroda and myself
Discussion: https://postgr.es/m/CAAvxfHdm8wwXwG_FFRaJ1nTHiMWb7YXS2YKCzCt8Q0a2ZoMcHg@mail.gmail.com
Previously, pg_upgrade checked that the old and new clusters were
compatible, including the locale and encoding. But the new cluster was
just created, and only template0 from the new cluster will be
preserved (template1 and postgres are both recreated during the
upgrade process).
Because template0 is not sensitive to locale or encoding, just update
the pg_database entry to be the same as template0 from the original
cluster.
This commit makes it easier to change the default initdb locale or
encoding settings without causing needless incompatibilities.
Discussion: https://postgr.es/m/d62b2874-729b-d26a-2d0a-0d64f509eca4@enterprisedb.com
Reviewed-by: Peter Eisentraut
In our Kerberos test suite, there isn't much need to worry about the
normal canonicalization that Kerberos provides by looking up the reverse
DNS for the IP address connected to, and in some cases it can actively
cause problems (e.g., a captive-portal wifi where the normally
unresolvable localhost address we use ends up being resolved anyway, and
not to the domain we are using for testing, causing the entire
regression test to fail with errors about not being able to get a TGT
for the remote realm for cross-realm trust).
Therefore, disable it by adding rdns = false into the krb5.conf that's
generated for the test.
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/Y/QD2zDkDYQA1GQt@tamriel.snowman.net
Change comments for pg_cryptohash_init(), pg_cryptohash_update(),
pg_cryptohash_final() in cryptohash.c to match cryptohash_openssl.c.
In particular, the claim that these functions were "designed" to never
fail was incorrect, since by design callers need to be prepared to
handle failures, for compatibility with the cryptohash_openssl.c
versions.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/301F4EDD-27B9-460F-B462-B9DB2BDE4ACF@yesql.se
A patch sent by Jacob Champion has been touching this area of the code,
and the set of changes done in a9e9a9f has made a run of pgindent on
these files a bit annoying to handle. So let's clean up the area a bit
first, to ease the work on follow-up patches.
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
Commit bdaabb9b started skipping doomed transactions when building the
list of possible conflicts for SERIALIZABLE READ ONLY. That makes
sense, because doomed transactions won't commit, but a couple of subtle
things broke:
1. If all uncommitted r/w transactions are doomed, a READ ONLY
transaction would arbitrarily not benefit from the safe snapshot
optimization. It would not be taken immediately, and yet no other
transaction would set SXACT_FLAG_RO_SAFE later.
2. In the same circumstances but with DEFERRABLE, GetSafeSnapshot()
would correctly exit its wait loop without sleeping and then take the
optimization in non-assert builds, but assert builds would fail a sanity
check that SXACT_FLAG_RO_SAFE had been set by another transaction.
This is similar to the case for PredXact->WritableSxactCount == 0. We
should opt out immediately if our possibleUnsafeConflicts list is empty
after filtering.
The code to maintain the serializable global xmin is moved down below
the new opt out site, because otherwise we'd have to reverse its effects
before returning.
Back-patch to all supported releases. Bug #17368.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17116-d6ca217acc180e30%40postgresql.org
Discussion: https://postgr.es/m/20110707212159.GF76634%40csail.mit.edu
396d348b0 omitted adding with_icu to the pg_dump tests under
meson. Conversely, e6927270c exported ZSTD for pg_basebackup's tests, despite
pg_basebackup's ZSTD support not having any tests.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20230226225239.GL1653@telsasoft.com
The changes in b6a0d469ca prevented installation of the test files during a
normal install. However, the buildfarm intentionally tries to run the tests
against a "real" installation. The new install-test-files target provides that
ability.
Because we want to install into a normal directory, I removed the necessary
munging of the target paths from meson.build and moved it into
install-test-files. I also added DESTDIR support, so that installing can
redirect the directory if desired. That's used for the tmp_install/
installation now.
I didn't like the number of arguments necessary for install_test_files, so I
changed it to use
--install target list of files
which makes it easier to use for further directories, if/when we need them.
Discussion: https://postgr.es/m/20230308012940.edexipb3vqylcu6r@awork3.anarazel.de
This exposes the ICU facility to add custom collation rules to a
standard collation.
New options are added to CREATE COLLATION, CREATE DATABASE, createdb,
and initdb to set the rules.
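A minimal sketch of the new syntax (the rule string here is just an
illustration):

    -- sort 'g' immediately after 'a', overriding the standard order
    CREATE COLLATION custom_rules
        (provider = icu, locale = 'und', rules = '&a < g');

createdb gains a matching --icu-rules switch, and CREATE DATABASE an
ICU_RULES option.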
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Discussion: https://www.postgresql.org/message-id/flat/821c71a4-6ef0-d366-9acf-bb8e367f739f@enterprisedb.com
Reformat some of the comments in MergeAttributes(). A lot of code has
been added here over time, and the comments could use a bit of editing
to make the code flow read better.
One file per line seems best. We already did this in some cases.
This adopts the same format everywhere (except in some cases where the
list reasonably fits on one line).
There was apparently an attempt here to list all the object types that
ACL_USAGE applies to, but it wasn't complete. So instead of trying to
keep up, put in a more timeless comment.
When vacuum_defer_cleanup_age is bigger than the current xid, including the
epoch, the subtraction of vacuum_defer_cleanup_age would lead to a wrapped
around xid. While that normally is not a problem, the subsequent conversion to
a 64bit xid results in a 64bit-xid very far into the future. As that xid is
used as a horizon to detect whether row versions are old enough to be
removed, that allows removal of rows that are still visible (i.e. corruption).
If vacuum_defer_cleanup_age was never changed from the default, there is no
chance of this bug occurring.
This bug was introduced in dc7420c2c9. A lesser version of it exists in
12-13, introduced by fb5344c969, affecting only GiST.
The 12-13 version of the issue can, in rare cases, lead to pages in a gist
index getting recycled too early, potentially causing index entries to be
found multiple times.
The fix is fairly simple - don't allow vacuum_defer_cleanup_age to retreat
further than FirstNormalTransactionId.
Patches to make similar bugs easier to find, by adding asserts to the 64bit
xid infrastructure, have been proposed, but are not suitable for backpatching.
Currently there are no tests for vacuum_defer_cleanup_age. A patch introducing
infrastructure to make writing a test easier has been posted to the list.
Reported-by: Michail Nikolaev <michail.nikolaev@gmail.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230108002923.cyoser3ttmt63bfn@awork3.anarazel.de
Backpatch: 12-, but impact/fix is smaller for 12-13
Previously, all the nodes of CallStmt were included in the jumbling,
causing a duplicate in the computation as the transformed state of the
CALL query was included as well as the parsed state (transformed
FuncCall with all the input arguments and potential output arguments).
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Y+MRdEq9W9XVa2AB@paquier.xyz
We already didn't use touch (some earlier version of the meson build did),
and cp is only used for updating unicode files. The latter already depends on
the optional availability of 'wget', so doing the same for 'cp' makes sense.
Eventually we probably want a portable command for updating source code as
part of a target, but for now...
Reported-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/70e96c34-64ee-e549-8c4a-f91a7a668804@dunslane.net
IntoClause.viewQuery is a copy of the parsed-but-not-rewritten SELECT
clause copied to IntoClause when transforming CreateTableAsStmt for a
materialized view. Including a second copy of the SELECT Query into the
query jumbling was leading to an incorrect numbering of the Const node
locations, as these would be counted twice instead of once.
This becomes visible once the query normalization is applied to CREATE
MATERIALIZED VIEW in pg_stat_statements in the shape of a query string
using only odd numbers for the normalized constants (regression tests
added in pg_stat_statements as of de2aca2 would show the difference).
Including the original Query from CreateTableAsStmt is enough for the
query jumbling.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Y+MRdEq9W9XVa2AB@paquier.xyz
4211fbd has been handling PROCESS_MAIN in vacuum_rel() with an "if/else
if" structure to avoid an extra level of indentation, but this has been
found as being rather parse to read. This commit updates the code so as
we check for PROCESS_MAIN in a single place and then handle its
subpaths, FULL or non-FULL vacuums. Some comments are added to make
that clearer for the reader.
Reported-by: Melanie Plageman
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Melanie Plageman
Discussion: https://postgr.es/m/20230306194009.5cn6sp3wjotd36nu@liskov
If a view is defined atop another view, and then CREATE OR REPLACE
VIEW is used to add columns to the lower view, then when the upper
view's referencing RTE is expanded by ApplyRetrieveRule we will have
a subquery RTE with fewer eref->colnames than output columns. This
confuses various code that assumes those lists are always in sync,
as they are in plain parser output.
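A sketch of the problem case (view names hypothetical):

    CREATE VIEW lower_v AS SELECT 1 AS a;
    CREATE VIEW upper_v AS SELECT * FROM lower_v;
    -- adding a column to the lower view leaves upper_v's stored
    -- subquery RTE with fewer eref->colnames than output columns
    CREATE OR REPLACE VIEW lower_v AS SELECT 1 AS a, 2 AS b;
    SELECT * FROM upper_v;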
We have seen such problems before (cf commit d5b760ecb), and now
I think the time has come to do what was speculated about in that
commit: let's make ApplyRetrieveRule synthesize some column names to
preserve the invariant that holds in parser output. Otherwise we'll
be chasing this class of bugs indefinitely. Moreover, it appears from
testing that this actually gives us better results in the test case
d5b760ecb added, and likely in other corner cases that we lack
coverage for.
In HEAD, I replaced d5b760ecb's hack to make expandRTE exit early with
an elog(ERROR) call, since the case is now presumably unreachable.
But it seems like changing that in back branches would bring more risk
than benefit, so there I just updated the comment.
Per bug #17811 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17811-d31686b78f0dffc9@postgresql.org
This routine is able to retrieve the OID of the schema used with an
extension (pg_extension.extnamespace), or InvalidOid if this information
is not available. plpgsql_check embeds a copy of this code when
performing checks on functions, as one out-of-core example.
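At the SQL level the same information can be fetched from the catalog; a
rough equivalent of what the routine computes (extension name
hypothetical):

    SELECT extnamespace::regnamespace
      FROM pg_extension WHERE extname = 'plpgsql';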
Author: Pavel Stehule
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAFj8pRD+9x55hjDoi285jCcjPc8uuY_D+FLn5RpXggdz+4O2sQ@mail.gmail.com
If UPDATE is forced to retry after an EvalPlanQual check, it neglected
to repeat GENERATED-column computations, even though those might well
have changed since we're dealing with a different tuple than before.
Fixing this is mostly a matter of looping back a bit further when
we retry. In v15 and HEAD that's most easily done by altering the API
of ExecUpdateAct so that it includes computing GENERATED expressions.
Also, if an UPDATE in a partitioned table turns into a cross-partition
INSERT operation, we failed to recompute GENERATED columns. That's a
bug since 8bf6ec3ba allowed partitions to have different generation
expressions; although it seems to have no ill effects before that.
Fixing this is messier because we can now have situations where the same
query needs both the UPDATE-aligned set of GENERATED columns and the
INSERT-aligned set, and it's unclear which set will be generated first
(else we could hack things by forcing the INSERT-aligned set to be
generated, which is indeed how fe9e658f4 made it work for MERGE).
The best fix seems to be to build and store separate sets of expressions
for the INSERT and UPDATE cases. That would create ABI issues in the
back branches, but so far it seems we can leave this alone in the back
branches.
Per bug #17823 from Hisahiro Kauchi. The first part of this affects all
branches back to v12 where GENERATED columns were added.
Discussion: https://postgr.es/m/17823-b64909cf7d63de84@postgresql.org
While testing a fix for bug #17823, I discovered that EvalPlanQualStart
failed to copy es_rteperminfos from the parent EState, resulting in
failure if anything in EPQ execution wanted to consult that information.
This led me to conclude that commit a61b1f748 had been too haphazard
about where to fill es_rteperminfos, and that we need to be sure that
that happens exactly where es_range_table gets filled. So I changed the
signature of ExecInitRangeTable to help ensure that this new requirement
doesn't get missed. (Indeed, pgoutput.c was also failing to fill it.
Maybe we don't ever need it there, but I wouldn't bet on that.)
No test case yet; one will arrive with the fix for #17823.
But that needs to be back-patched, while this fix is HEAD-only.
Discussion: https://postgr.es/m/17823-b64909cf7d63de84@postgresql.org
The functions that follow are concerned with various things, of
which the tar format is only one, so this comment doesn't really
seem helpful. The file isn't really divided into sections in the
way that this comment seems to contemplate -- or at least, not
any more.
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoZ_fFAoU6mrHt9QBs+dcYhN6yXenGTTMRebZNhtwPwHyg@mail.gmail.com
We read blocks of data from files that we're backing up in chunks,
some multiple of BLCKSZ for each read. If checksum verification fails,
we then try rereading just the one block for which validation failed.
If that block happened to be the first block of the chunk, and if
the file was concurrently truncated to remove that block, then we'd
reach a call to bbsink_archive_contents() with a buffer length of 0.
That causes an assertion failure.
As far as I can see, there are no particularly bad consequences if
this happens in a non-assert build, and it's pretty unlikely to happen
in the first place because it requires a series of somewhat unlikely
things to happen in very quick succession. However, assertion failures
are bad, so rearrange the code to avoid that possibility.
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoZ_fFAoU6mrHt9QBs+dcYhN6yXenGTTMRebZNhtwPwHyg@mail.gmail.com
Add a description of which one is the default between the two
complementary options --bypassrls and --replication in the help text and
docs. Correspondingly, make the command always include the tokens
corresponding to every option of that kind in the SQL command sent to the
server. Tests are updated accordingly.
Also fix the checks of some trivalue vars which were using literal zero
to check for the default value instead of the enum label TRI_DEFAULT.
While not a bug, since TRI_DEFAULT is defined as zero, fixing this
improves readability (and avoids bugs if the enum is changed).
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220810.151243.1073197628358749087.horikyota.ntt@gmail.com
Disabling this option is useful to run VACUUM (with or without FULL) on
only the toast table of a relation, bypassing the main relation. This
option is enabled by default.
Running VACUUM directly on a toast table was already possible without
this feature, by using the non-deterministic name of a toast relation
(of the form pg_toast.pg_toast_N, where N is the OID of the parent
relation) in the VACUUM command, and it required a scan of pg_class to
know the name of the toast table. So this feature is basically a
shortcut to be able to run VACUUM or VACUUM FULL on a toast relation,
using only the name of the parent relation.
A new switch called --no-process-main is added to vacuumdb, to work as
an equivalent of PROCESS_MAIN.
Regression tests are added to cover VACUUM and VACUUM FULL, looking at
pg_stat_all_tables.vacuum_count to see how many vacuums have run on
each table, main or toast.
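For example (relation name hypothetical):

    -- process only the toast table, skipping the main relation
    VACUUM (PROCESS_MAIN FALSE) vactst;
    VACUUM (PROCESS_MAIN FALSE, FULL) vactst;
    -- shell equivalent: vacuumdb --no-process-main mydb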
Author: Nathan Bossart
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/20221230000028.GA435655@nathanxps13
All the regression tests of VACUUM (PROCESS_TOAST) were only checking if
the commands were able to run, without checking if VACUUM was really
running on what it should. This expands the set of tests so that we now
look at pg_stat_all_tables.vacuum_count to see how many vacuums have
been run on a given table and its toast relation.
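A sketch of the kind of check now used (relation name hypothetical):

    SELECT vacuum_count FROM pg_stat_all_tables
      WHERE relname = 'vactst';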
Extracted from a larger patch by the same author, as this is useful on
its own.
Special thanks to Álvaro Herrera for the idea of using
pg_stat_all_tables to check the state of the toast relation.
Author: Nathan Bossart
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/20221230000028.GA435655@nathanxps13
The following changes are made to pg_write_zeros(), the API able to
write a series of zeros using vectored I/O:
- Add an "offset" parameter, to write from the given position
(the 'p' of "pwrite" seems to mean position, though POSIX does not
state that directly); the name of the routine would be misleading if
it were not able to handle offsets.
- Avoid memset() of "zbuffer" on every call.
- Avoid initialization of the whole IOV array if not needed.
- Group the trailing write() call with the main write() call,
simplifying the function logic.
Author: Andres Freund
Reviewed-by: Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/20230215005525.mrrlmqrxzjzhaipl@awork3.anarazel.de
1. Make sure that we don't decrement SxactGlobalXminCount twice when
the SXACT_FLAG_RO_SAFE optimization is reached in a parallel query.
This could trigger a sanity check failure in assert builds. Non-assert
builds recompute the count in SetNewSxactGlobalXmin(), so the problem
was hidden, explaining the lack of field reports. Add a new isolation
test to exercise that case.
2. Remove an assertion that the DOOMED flag can't be set on a partially
released SERIALIZABLEXACT. Instead, ignore the flag (our transaction
was already determined to be read-only safe, and DOOMED is in fact set
during partial release, and there was already an assertion that it
wasn't set sooner). Improve an existing isolation test so that it
reaches that case (previously it wasn't quite testing what it was
supposed to be testing; see discussion).
Back-patch to 12. Bug #17116. Defects in commit 47a338cf.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17116-d6ca217acc180e30%40postgresql.org
Add support for non-decimal integer literals and underscores in
numeric literals to SQL JSON path language. This follows the rules of
ECMAScript, as referred to by the SQL standard.
Internally, all the numeric literal parsing of jsonpath goes through
numeric_in, which already supports all this, so this patch is just a
bit of lexer work and some tests and documentation.
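Examples of the newly accepted literals (a sketch; all four forms follow
the ECMAScript rules):

    SELECT '0x2A'::jsonpath;       -- hexadecimal
    SELECT '0o52'::jsonpath;       -- octal
    SELECT '0b101010'::jsonpath;   -- binary
    SELECT '1_000_000'::jsonpath;  -- underscore digit separators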
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b11b25bb-6ec1-d42f-cedd-311eae59e1fb@enterprisedb.com
Beginning in v15, if you apply ALTER TABLE ENABLE/DISABLE TRIGGER to
a partitioned table, it also affects the partitions' cloned versions
of the affected trigger(s). The initial implementation of this
located the clones by name, but that fails on foreign-key triggers
which have names incorporating their own OIDs. We can fix that, and
also make the behavior more bulletproof in the face of user-initiated
trigger renames, by identifying the cloned triggers by tgparentid.
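A sketch of the affected command (names hypothetical):

    -- disables some_trig on parted_tab and its clones on all
    -- partitions, now located by tgparentid rather than by name
    ALTER TABLE parted_tab DISABLE TRIGGER some_trig;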
Following the lead of earlier commits in this area, I took care not
to break ABI in the v15 branch, even though I rather doubt there
are any external callers of EnableDisableTrigger.
While here, update the documentation, which was not touched when
the semantics were changed.
Per bug #17817 from Alan Hodgson. Back-patch to v15; older versions
do not have this behavior.
Discussion: https://postgr.es/m/17817-31dfb7c2100d9f3d@postgresql.org
We allow our header files to depend on the appropriate one of
postgres.h, postgres_fe.h, or c.h having already been included.
However, there are a few headers such as libpq-fe.h that are
meant to be used by client applications and therefore must
compile without any assumptions about previous inclusions.
These test scripts failed to consider that, which seems quite
hazardous since we might not immediately notice such a problem
otherwise. Hence, adjust these scripts to test relevant libpq
and ecpg headers with no prior inclusion.
While at it, we can also make an effort to actually use the
relevant one of postgres.h, postgres_fe.h, or c.h. I added
some rules that guess which one to use based on the first-level
src subdirectory, e.g. use postgres_fe.h under src/bin/.
These rules are hardly water-tight but they seem to work today,
and we can always refine them in the future.
These changes don't reveal any live problems today, which is good,
but they should make these scripts more able to catch future bugs.
Discussion: https://postgr.es/m/2488193.1677863247@sss.pgh.pa.us
The comments claim that certain pieces of data are part of the main
WAL record data when in reality they are part of the data for
block 0. Repair.
Bertrand Drouvot, reviewed by Amit Kapila. Originally reported by me.
Discussion: http://postgr.es/m/80db7836-4415-d54a-64c3-66b88b1430e7@gmail.com
Previously, meson installed modules under src/test/modules/ as part of
a normal installation, even though these files are only meant for use
by tests. This is because there is no way to set up the build
system to install extra things only when told.
This patch fixes that with a workaround: We don't install these
modules as part of meson install, but we create a new "test" that runs
before the real tests whose action it is to install these files. The
installation is done by manual copies using a small helper script.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/2a039e8e-f31f-31e8-afe7-bab3130ad2de%40enterprisedb.com
Coverage of the query jumbling code has always relied on the queries
included in the regression tests of pg_stat_statements. This has its
limitations, as a lot of query patterns have never really stressed the
query jumbling code. The situation got a bit worse once the query
jumbling was added to the backend core code (5fd9dfa), as new
nodes that should be included in the jumbling could easily be missed,
resulting in failures in pg_stat_statements or any modules that require
query ID computations. Forcing a load of pg_stat_statements in
027_stream_regress.pl ensures that nodes are never missed in the
computations, without having to rely on a buildfarm member for this
check.
Before this commit, the line coverage of queryjumblefuncs.funcs.c was
around 48.5%, now up to 94.6% just by running 027_stream_regress.pl.
A basic check is added to show that pg_stat_statements reports are
generated after the main regression test suite is finished.
Discussion: https://postgr.es/m/Y+nD9LN70w+8eaG9@paquier.xyz
Our previous habit of showing the full function body is really
pretty unfriendly for tabular viewing of functions, and now that
we have \sf and \ef commands there seems no good reason why \df+
has to do it. It still seems to make sense to show prosrc for
internal and C-language functions, since in those cases prosrc
is just the C function name; but then let's rename the column to
"Internal name" which is a more accurate descriptor.
Isaac Morland
Discussion: https://postgr.es/m/CAMsGm5eqKc6J1=Lwn=ZONG=6ZDYWRQ4cgZQLqMuZGB1aVt_JBg@mail.gmail.com
Open long-lived data and WAL file descriptors with O_CLOEXEC. This flag
was introduced by SUSv4 (POSIX.1-2008), and by now all of our target
Unix systems have it. Our open() implementation for Windows already had
that behavior, so provide a dummy O_CLOEXEC flag on that platform.
For now, callers of open() and the "thin" wrappers in fd.c that deal in
raw descriptors need to pass in O_CLOEXEC explicitly if desired. This
commit does that for WAL files, and automatically for everything
accessed via VFDs including SMgrRelation and BufFile. (With more
discussion we might decide to turn it on automatically for the thin
open()-wrappers too to avoid risk of missing places that need it, but
these are typically used for short-lived descriptors where we don't
expect to fork/exec, and it's remotely possible that extensions could be
using these APIs and passing descriptors to subprograms deliberately, so
that hasn't been done here.)
Do the same for sockets and the postmaster pipe with FD_CLOEXEC. (Later
commits might use modern interfaces to remove these extra fcntl() calls
and more where possible, but we'll need them as a fallback for a couple
of systems, so do it that way in this initial commit.)
With this change, subprograms executed for archiving, copying etc will
no longer have access to the server's descriptors, other than the ones
that we decide to pass down.
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Discussion: https://postgr.es/m/CA%2BhUKGKb6FsAdQWcRL35KJsftv%2B9zXqQbzwkfRf1i0J2e57%2BhQ%40mail.gmail.com
When I designed the Bitmapset module, I set things up so that an empty
Bitmapset could be represented either by a NULL pointer, or by an
allocated object all of whose bits are zero. I've recently come to
the conclusion that that was a bad idea and we should instead have a
convention like the longstanding invariant for Lists, whereby an empty
list is represented by NIL and nothing else.
To do this, we need to fix bms_intersect, bms_difference, and a couple
of other functions to check for having produced an empty result; but
then we can replace bms_is_empty(a) by a simple "a == NULL" test.
This is very likely a (marginal) win performance-wise, because we
call bms_is_empty many more times than those other functions put
together. However, the real reason to do it is that we have various
places that have hand-implemented a rule about "this Bitmapset
variable must be exactly NULL if empty", so that they can use
checks-for-null in place of bms_is_empty calls in particularly hot
code paths. That is a really fragile, mistake-prone way to do things,
and I'm surprised that we've seldom been bitten by it. It's not well
documented at all which variables have this property, so you can't
readily tell which code might be violating those conventions. By
making the convention universal, we can eliminate a subtle source of
bugs.
Patch by me; thanks to Nathan Bossart and Richard Guo for review.
Discussion: https://postgr.es/m/1159933.1677621588@sss.pgh.pa.us
nodeAppend.c used non-nullness of appendstate->as_valid_subplans as
a state flag to indicate whether it'd done ExecFindMatchingSubPlans
(or some sufficient approximation to that). This was pretty
questionable even in the beginning, since it wouldn't really work
right if there are no valid subplans. It got more questionable
after commit 27e1f1456 added logic that could reduce as_valid_subplans
to an empty set: at that point we were depending on unspecified
behavior of bms_del_members, namely that it'd not return an empty
set as NULL. It's about to start doing that, which breaks this
logic entirely. Hence, add a separate boolean flag to signal
whether as_valid_subplans has been computed.
Also fix a previously-cosmetic bug in nodeAgg.c, wherein it ignored
the return value of bms_del_member instead of updating its pointer.
Patch by me; thanks to Nathan Bossart and Richard Guo for review.
Discussion: https://postgr.es/m/1159933.1677621588@sss.pgh.pa.us
This function has been semi-deprecated ever since we invented
bms_next_member(). Its habit of scribbling on the input bitmapset
isn't great, plus for sufficiently large bitmapsets it would take
O(N^2) time to complete a loop. Now we have the additional problem
that reducing the input to empty while leaving it still accessible
would violate a planned invariant. So let's just get rid of it,
after updating the few extant callers to use bms_next_member().
Patch by me; thanks to Nathan Bossart and Richard Guo for review.
Discussion: https://postgr.es/m/1159933.1677621588@sss.pgh.pa.us
Some deprecated options were not marked as such in usage output. This
does so across the installed binaries in an attempt to provide consistent
markup for this.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/062C6A8A-A4E8-4F52-9E31-45F0C9E9915E@yesql.se
Commit 0a20ff54f split out the GUC variables from guc.c into a new file
guc_tables.c. This updates comments referencing guc.c regarding variables
which are now in guc_tables.c.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6B50C70C-8C1F-4F9A-A7C0-EEAFCC032406@yesql.se
When you have some invalid WAL, you often get a message like "wanted
24, got 0". This is a bit incorrect, since it really wanted *at
least* 24, not exactly 24. This updates the messages to that effect,
and also adds that detail to one message where it was available but
not printed.
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Jeevan Ladhe <jeevanladhe.os@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/726d782b-5e45-0c3e-d775-6686afe9aa83%40enterprisedb.com
This is usually harmless, but if you were very unlucky it could
provoke a segfault due to the "to" string being right up against
the end of memory. Found via valgrind testing (so we might've
found it earlier, except that our regression tests lacked any
exercise of translate()'s deletion feature).
Fix by switching the order of the test-for-end-of-string and
advance-pointer steps. While here, compute "to_ptr + tolen"
just once. (Smarter compilers might figure that out for
themselves, but let's just make sure.)
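For reference, the deletion feature in question (now exercised by the
regression tests):

    -- 'c' and 'e' have no counterpart in the "to" string, so they
    -- are deleted; the result is 'xbd'
    SELECT translate('abcde', 'ace', 'x');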
Report and fix by Daniil Anisimov, in bug #17816.
Discussion: https://postgr.es/m/17816-70f3d2764e88a108@postgresql.org
As it stands, flagInhAttrs() can make changes in table properties that
change decisions made at other tables during other iterations of its
loop. This is a pretty bad idea, since we visit the tables in OID
order which is not necessarily related to inheritance relationships.
So far as I can tell, the consequences are just cosmetic: we might
dump DEFAULT or GENERATED expressions that we don't really need to
because they match properties of the parent. Nonetheless, it's buggy,
and somebody might someday add functionality here that fails less
benignly when the traversal order varies.
One issue is that when we decide we needn't dump a particular
GENERATED expression, we physically unlink the struct for it,
so that it will now look like the table has no such expression,
causing the wrong choice to be made at any child visited later.
We can improve that by instead clearing the dobj.dump flag,
and taking care to check that flag when it comes time to dump
the expression or not.
The other problem is that if we decide we need to fake up a DEFAULT
NULL clause to override a default that would otherwise get inherited,
we modify the data structure in the reverse fashion, creating an
attrdefs entry where there hadn't been one. It's harder to avoid
doing that, but since the backend won't report a plain "DEFAULT NULL"
property we can modify the code to recognize ones we just added.
Add some commentary to perhaps forestall future mistakes of the
same ilk.
Since the effects of this seem only cosmetic, no back-patch.
Discussion: https://postgr.es/m/1506298.1676323579@sss.pgh.pa.us
Per buildfarm members snakefly, parula and prion, which reflect the
results coming from the latest versions of libxml2.
Oversight in b8da37b in the shape of an incorrect copy-paste. The CI
was green, but it does not stress this expected output.
pg_input_error_info() is now a SQL function able to return a row with
more than just the error message generated for incorrect data type
inputs when these are able to handle soft failures, exposing more
contents of ErrorData, namely:
- The error message (same as before).
- The error detail, if set.
- The error hint, if set.
- SQL error code.
All the regression tests that relied on pg_input_error_message() are
updated to reflect the effects of the rename.
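For example (a sketch; the exact field values depend on the input):

    SELECT * FROM pg_input_error_info('42000000000', 'integer');
    -- returns the message, detail, hint and sql_error_code columns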
Per discussion with Tom Lane and Andrew Dunstan.
Author: Nathan Bossart
Discussion: https://postgr.es/m/139a68e1-bd1f-a9a7-b5fe-0be9845c6311@dunslane.net
Some clang versions whine about comparing an enum variable to
a value outside the range of the enum, on the grounds that the
result must be constant. In the cases we fix here, the loops
will terminate only if the enum variable can in fact hold a
value one beyond its declared range. While that's very likely
to always be true for these enum types, it still seems like a
poor coding practice to assume it; so use "int" loop variables
instead to silence the warnings. (This matches what we've done
in other places, for example loops over the range of ForkNumber.)
While at it, let's drop the XXX_FIRST macros for these enums and just
write zeroes for the loop start values. The apparent flexibility
seems rather illusory given that iterating up to one-less-than-
the-number-of-values is only correct for a zero-based range.
Melanie Plageman
Discussion: https://postgr.es/m/20520.1677435600@sss.pgh.pa.us
This will create two bytes of padding space in xl_hash_vacuum_one_page which
can be used for future patches. This makes the datatype of
xl_hash_vacuum_one_page.ntuples the same as gistxlogDelete.ntodelete, which is
advisable as both are used for the same purpose.
Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/b0e20c40-cb7a-fc1c-c607-2a78dac5021e@gmail.com
It's been this way for a very long time, but it appears to have been
masking an issue that only manifests with different settings. Therefore,
run the tests in the installation's default encoding/locale.
Backpatch to all live branches.
We already tried to fix this in commits 3f7323cbb et al (and follow-on
fixes), but now it emerges that there are still unfixed cases;
moreover, these cases affect all branches not only pre-v14. I thought
we had eliminated all cases of making multiple clones of an UPDATE's
target list when we nuked inheritance_planner. But it turns out we
still do that in some partitioned-UPDATE cases, notably including
INSERT ... ON CONFLICT UPDATE, because ExecInitPartitionInfo thinks
it's okay to clone and modify the parent's targetlist.
This fix is based on a suggestion from Andres Freund: let's stop
abusing the ParamExecData.execPlan mechanism, which was only ever
meant to handle initplans, and instead solve the execution timing
problem by having the expression compiler move MULTIEXPR_SUBLINK steps
to the front of their expression step lists. This is feasible because
(a) all branches still in support compile the entire targetlist of
an UPDATE into a single ExprState, and (b) we know that all
MULTIEXPR_SUBLINKs do need to be evaluated --- none could be buried
inside a CASE, for example. There is a minor semantics change
concerning the order of execution of the MULTIEXPR's subquery versus
other parts of the parent targetlist, but that seems like something
we can get away with. By doing that, we no longer need to worry
about whether different clones of a MULTIEXPR_SUBLINK share output
Params; their usage of that data structure won't overlap.
Per bug #17800 from Alexander Lakhin. Back-patch to all supported
branches. In v13 and earlier, we can revert 3f7323cbb and follow-on
fixes; however, I chose to keep the SubPlan.subLinkId field added
in ccbb54c72. We don't need that anymore in the core code, but it's
cheap enough to fill, and removing a plan node field in a minor
release seems like it'd be asking for trouble.
Andres Freund and Tom Lane
Discussion: https://postgr.es/m/17800-ff90866b3906c964@postgresql.org
If a rule action contains a subquery that refers to columns from OLD
or NEW, then those are really lateral references, and the planner will
complain if it sees such things in a subquery that isn't marked as
lateral. However, at rule-definition time, the user isn't required to
mark the subquery with LATERAL, and so it can fail when the rule is
used.
Fix this by marking such subqueries as lateral in the rewriter, at the
point where they're used.
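A sketch of the kind of rule now handled (names hypothetical):

    CREATE TABLE t (x int);
    CREATE TABLE log (x int);
    -- the subquery's reference to NEW is really a lateral reference;
    -- the rewriter now marks the subquery LATERAL on the user's behalf
    CREATE RULE r AS ON UPDATE TO t DO ALSO
        INSERT INTO log SELECT * FROM (SELECT new.x) AS s;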
Dean Rasheed and Tom Lane, per report from Alexander Lakhin.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/5e09da43-aaba-7ea7-0a51-a2eb981b058b%40gmail.com
A unique index which is created with non-distinct NULLS cannot be
used for backing a primary key constraint. Make sure to disallow
such table alterations and teach pg_dump to drop the non-distinct
NULLS clause on indexes where this has been set.
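A sketch of what is now disallowed (names hypothetical):

    CREATE TABLE t (a int NOT NULL);
    CREATE UNIQUE INDEX t_a_idx ON t (a) NULLS NOT DISTINCT;
    -- now rejected: such an index cannot back a primary key
    ALTER TABLE t ADD PRIMARY KEY USING INDEX t_a_idx;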
Bug: 17720
Reported-by: Reiner Peterke <zedaardv@drizzle.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/17720-dab8ee0fa85d316d@postgresql.org
Multiple cycles of starting up and shutting down the plugin within a
single session would eventually lead to "out of relcache_callback_list
slots", because pgoutput_startup blindly re-registered its cache
callbacks each time. Fix it to register them only once, as all other
users of cache callbacks already take care to do.
This has been broken all along, so back-patch to all supported branches.
Shi Yu
Discussion: https://postgr.es/m/OSZPR01MB631004A78D743D68921FFAD3FDA79@OSZPR01MB6310.jpnprd01.prod.outlook.com
Expand pg_dump's compression streaming and file APIs to support the lz4
algorithm. The newly added compress_lz4.{c,h} files cover all the
functionality of the aforementioned APIs. Minor changes were necessary
in various pg_backup_* files, where code for the 'lz4' file suffix has
been added, as well as pg_dump's compression option parsing.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Shi Yu, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
Offers a generally better separation of responsibilities for collation
code. Also, a step towards multi-lib ICU, which should be based on a
clean separation of the routines required for collation providers.
Callers with NUL-terminated strings should call pg_strcoll() or
pg_strxfrm(); callers with strings and their length should call the
variants pg_strncoll() or pg_strnxfrm().
Reviewed-by: Peter Eisentraut, Peter Geoghegan
Discussion: https://postgr.es/m/a581136455c940d7bd0ff482d3a2bd51af25a94f.camel%40j-davis.com
Switch pg_dump to use the Compression API, implemented by bf9aa490db.
The CompressFileHandle replaces the cfp* family of functions with a
struct of callbacks for accessing (compressed) files. This allows adding
new compression methods simply by introducing a new struct instance with
appropriate implementation of the callbacks.
Archives compressed using custom compression methods store an identifier
of the compression algorithm in their header instead of the compression
level. The header version is bumped.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
It's possible, in admittedly-rather-contrived cases, for an eclass
to generate a derived "join" qual that constrains the post-outer-join
value(s) of some RHS variable(s) without mentioning the LHS at all.
While the mechanisms were set up to work for this, we fell foul of
the "get_common_eclass_indexes" filter installed by commit 3373c7155:
it could decide that such an eclass wasn't relevant to the join, so
that the required qual clause wouldn't get emitted there or anywhere
else.
To fix, apply get_common_eclass_indexes only at inner joins, where
its rule is still valid. At an outer join, fall back to examining all
eclasses that mention either input (or the OJ relid, though it should
be impossible for an eclass to mention that without mentioning either
input). Perhaps we can improve on that later, but the cost/benefit of
adding more complexity to skip some irrelevant eclasses is dubious.
To allow cheaply distinguishing outer from inner joins, pass the
ojrelid to generate_join_implied_equalities as a separate argument.
This also allows cleaning up some sloppiness that had crept into
the definition of its join_relids argument, and it allows accurate
calculation of nominal_join_relids for a child outer join. (The
latter oversight seems not to have been a live bug, but it certainly
could have caused problems in future.)
Also fix what might be a live bug in check_index_predicates: it was
being sloppy about what it passed to generate_join_implied_equalities.
Per report from Richard Guo.
Discussion: https://postgr.es/m/CAMbWs4-DsTBfOvXuw64GdFss2=M5cwtEhY=0DCS7t2gT7P6hSA@mail.gmail.com
Commit bf9aa490db introduced a compression API in compress_io.{c,h} to
make reuse easier, and allow adding more compression algorithms.
However, pg_backup_archiver.c was not switched to this API and continued
to call the compression directly.
This commit teaches pg_backup_archiver.c about the compression API, so
that it can benefit from bf9aa490db (simpler code, easier addition of
new compression methods).
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
If the source server was just promoted, and it hasn't written the
checkpoint record yet, pg_rewind considered the server to be still on
the old timeline. Because of that, it would claim incorrectly that no
rewind is required. Fix that by looking at minRecoveryPointTLI in the
control file in addition to the ThisTimeLineID on the checkpoint.
This has been a known issue since forever, and we had worked around it
in the regression tests by issuing a checkpoint after each promotion,
before running pg_rewind. But that was always quite hacky, so better
to fix this properly. This doesn't add any new tests for this, but
removes the previously-added workarounds from the existing tests, so
that they should occasionally hit this codepath again.
This is arguably a bug fix, but don't backpatch because we haven't
really treated it as a bug so far. Also, the patch didn't apply
cleanly to v13 and below. I'm sure it could be made to work on
v13, but doesn't seem worth the risk and effort.
Reviewed-by: Kyotaro Horiguchi, Ibrar Ahmed, Aleksander Alekseev
Discussion: https://www.postgresql.org/message-id/9f568c97-87fe-a716-bd39-65299b8a60f4%40iki.fi
Given an updatable view with a DO ALSO INSERT ... SELECT rule, a
multi-row INSERT ... VALUES query on the view fails if the VALUES list
contains any DEFAULTs that are not replaced by view defaults. This
manifests as an "unrecognized node type" error, or an Assert failure,
in an assert-enabled build.
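A sketch of the failing pattern (names hypothetical):

    CREATE TABLE t (a int, b int DEFAULT 0);
    CREATE VIEW v AS SELECT * FROM t;
    CREATE TABLE log (a int, b int);
    CREATE RULE r AS ON INSERT TO v DO ALSO
        INSERT INTO log SELECT new.a, new.b;
    -- multi-row VALUES with DEFAULTs that the view does not replace
    INSERT INTO v VALUES (1, DEFAULT), (2, DEFAULT);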
The reason is that when RewriteQuery() attempts to replace the
remaining DEFAULT items with NULLs in any product queries, using
rewriteValuesRTEToNulls(), it assumes that the VALUES RTE is located
at the same rangetable index in each product query. However, if the
product query is an INSERT ... SELECT, then the VALUES RTE is actually
in the SELECT part of that query (at the same index), rather than the
top-level product query itself.
Fix, by descending to the SELECT in such cases. Note that we can't
simply use getInsertSelectQuery() for this, since that expects to be
given a raw rule action with OLD and NEW placeholder entries, so we
duplicate its logic instead.
While at it, beef up the checks in getInsertSelectQuery() by checking
that the jointree->fromlist node is indeed a RangeTblRef, and that the
RTE it points to has rtekind == RTE_SUBQUERY.
Per bug #17803, from Alexander Lakhin. Back-patch to all supported
branches.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/17803-53c63ed4ecb4eac6%40postgresql.org
This was previously only documented in a comment. Given the size of the
struct, it's not hard to miss that comment. As evidenced by the commits
leading up to fe3caa1439, 67b26703b4.
It's possible, but not likely, that we might have to weaken these assertions
on a less commonly used architecture.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/295606.1677101684@sss.pgh.pa.us
A couple of code paths in CONNECTION_AWAITING_RESPONSE will eagerly read
bytes off a connection that should be closed. Don't let a misbehaving
server chew up client resources here; a v2 error can't be infinitely
long, and a v3 error should be bounded by its original message length.
For the existing error_return cases, I added some additional error
messages for symmetry with the new ones, and cleaned up some message
rot.
Author: Jacob Champion
Discussion: https://www.postgresql.org/message-id/8e729daf-7d71-6965-9687-8bc0630599b3%40timescale.com
initsplan.c figured that it could push Var-free qual clauses to
the top of the current JoinDomain, which is okay in the abstract.
But if the current domain is inside some outer join, and we later
commute an inside-the-domain outer join with one outside it,
we end up placing the pushed-up qual clause incorrectly.
In distribute_qual_to_rels, avoid this by using the syntactic scope
of the qual clause; with the exception that if we're in the top-level
join domain we can still use the full query relid set, ensuring the
resulting gating Result node goes to the top of the plan. (This is
approximately as smart as the pre-v16 code was. Perhaps we can do
better later, but it's not clear that such cases are worth a lot of
sweat.)
In process_implied_equality, we don't have a clear notion of syntactic
scope, but we do have the results of SpecialJoinInfo construction.
Thumb through those and remove any lower outer joins that might get
commuted to above the join domain. Again, we can make an exception
for the top-level join domain. It'd be possible to work harder here
(for example, by keeping outer joins that aren't shown as potentially
commutable), but I'm going to stop here for the moment. This issue
has convinced me that the current representation of join domains
probably needs further refinement, so I'm disinclined to write
inessential dependent logic just yet.
In passing, tighten the qualscope passed to process_implied_equality
by generate_base_implied_equalities_no_const; there's no need for
it to be larger than the rel we are currently considering.
Tom Lane and Richard Guo, per report from Tender Wang.
Discussion: https://postgr.es/m/CAHewXNk9eJ35ru5xATWioTV4+xZPHptjy9etdcNPjUfY9RQ+uQ@mail.gmail.com
When decoding a transactional logical message, logicalmsg_decode called
SnapBuildGetOrBuildSnapshot. But we may not have a consistent snapshot
yet at that point. We don't actually need the snapshot in this case
(during replay we'll have the snapshot from the transaction), so in
practice this is harmless. But in assert-enabled builds this crashes.
Fixed by requesting the snapshot only in non-transactional case, where
we are guaranteed to have SNAPBUILD_CONSISTENT.
Backpatch to 11. The issue has existed since 9.6.
Backpatch-through: 11
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/84d60912-6eab-9b84-5de3-41765a5449e8@enterprisedb.com
In ExecInitPartitionInfo(), the Assert when building the WITH CHECK
OPTION list for the new partition assumed that the command would be an
INSERT or UPDATE, but it can also be a MERGE. This can be triggered by
a MERGE into a partitioned table with RLS checks to enforce.
Fix, and back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWWFtQmW67F3XTyMU5Am10Oxa_b8oe0x%2BNu5Mo%2BCdRErg%40mail.gmail.com
This ensures that the row count in the command tag for a MERGE is
correctly computed. Previously, if MERGE updated a partitioned table,
the row count would be incorrect if any row was moved to a different
partition, since such updates were counted twice.
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWRMG7XX2QEsVL1LswmNo2d_YG8tKTLkpD3=Lp644S7rg@mail.gmail.com
SQL:2023 defines an ANY_VALUE aggregate whose purpose is to emit an
implementation-dependent (i.e. non-deterministic) value from the
aggregated rows.
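For example (table and column names hypothetical):

    -- returns one arbitrary, non-deterministic x per group
    SELECT g, any_value(x) FROM tbl GROUP BY g;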
Author: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5cff866c-10a8-d2df-32cb-e9072e6b04a2@postgresfriends.org
WAL_LOG does a scan of the template's pg_class to determine the set of
relations that need to be copied from a template database to the new
one. However, as coded in 9c08aea, this copy strategy would load the
pages of pg_class without considering it as a permanent relation,
causing the loaded pages to never be flushed when they should. Any
modification of the template's pg_class, mostly through DDLs, would then
be missed, causing corruptions.
STRATEGY = WAL_LOG is the default over FILE_COPY since it has been
introduced, so any changes done to pg_class on a database template could
be lost. Updates of database templates should be a rare thing, so the
impact of this bug should be hopefully limited. The pre-14 default
strategy FILE_COPY is safe, and can be used as a workaround.
Ryo Matsumura has found and analyzed the issue, and Nathan has written a
test able to reproduce the failure (with a few tweaks from me).
Backpatch down to 15, where STRATEGY = WAL_LOG has been introduced.
Author: Nathan Bossart, Ryo Matsumura
Reviewed-by: Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/TYCPR01MB6868677E499C9AD5123084B5E8A39@TYCPR01MB6868.jpnprd01.prod.outlook.com
Backpatch-through: 15
If asked to decrease the size of a large (>8K) palloc chunk,
AllocSetRealloc could improperly change the Valgrind state of memory
beyond the new end of the chunk: it would mark data UNDEFINED as far
as the old end of the chunk after having done the realloc(3) call,
thus tromping on the state of memory that no longer belongs to it.
One would normally expect that memory to now be marked NOACCESS,
so that this mislabeling might prevent detection of later errors.
If realloc() had chosen to move the chunk someplace else (unlikely,
but well within its rights) we could also mismark perfectly-valid
DEFINED data as UNDEFINED, causing false-positive valgrind reports
later. Also, any malloc bookkeeping data placed within this area
might now be wrongly marked, causing additional problems.
Fix by replacing relevant uses of "oldsize" with "Min(size, oldsize)".
It's sufficient to mark as far as "size" when that's smaller, because
whatever remains in the new chunk size will be marked NOACCESS below,
and we expect realloc() to have taken care of marking the memory
beyond the new official end of the chunk.
While we're here, also rename the function's "oldsize" variable
to "oldchksize" to more clearly explain what it actually holds,
namely the distance to the end of the chunk (that is, requested size
plus trailing padding). This is more consistent with the use of
"size" and "chksize" to hold the new requested size and chunk size.
Add a new variable "oldsize" in the one stanza where we're actually
talking about the old requested size.
Oversight in commit c477f3e44. Back-patch to all supported branches,
as that was, just in case anybody wants to do valgrind testing on back
branches.
Karina Litskevich
Discussion: https://postgr.es/m/CACiT8iaAET-fmzjjZLjaJC4zwSJmrFyL7LAdHwaYyjjQOQ4hcg@mail.gmail.com
Commits 04cad8f7 and 0c088568 supported old macOS systems that didn't
define O_CLOEXEC or O_DSYNC yet, but those arrived in macOS releases
10.7 and 10.6 (respectively), which themselves reached EOL around a
decade ago. We've already made use of other POSIX features that early
macOS vintages can't compile (for example commits 623cc673, d2e15083).
A later commit will use O_CLOEXEC on POSIX systems so it would be
strange to pretend here that it's optional, and we might as well give
O_DSYNC the same treatment since the reference is also guarded by a test
for a macOS-specific macro, and we know that current Macs have it.
Discussion: https://postgr.es/m/CA%2BhUKGKb6FsAdQWcRL35KJsftv%2B9zXqQbzwkfRf1i0J2e57%2BhQ%40mail.gmail.com
Failing to do so results in an error when a pgbench script tries to
start a serializable transaction inside a pipeline, because by the time
BEGIN ISOLATION LEVEL SERIALIZABLE is executed, we're already in a
transaction that has acquired a snapshot, so the server rightfully
complains.
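A sketch of the kind of script affected, when run under -M prepared
(statements hypothetical):

    \startpipeline
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT 1;
    END;
    \endpipeline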
We can work around that by preparing all commands in the pipeline before
actually starting the pipeline. This changes the existing code in two
aspects: first, we now prepare each command individually at the point
where that command is about to be executed; previously, we would prepare
all commands in a script as soon as the first command of that script
would be executed. It's hard to see that this would make much of a
difference (particularly since it only affects the first time to execute
each script in a client), but I didn't actually try to measure it.
Secondly, we no longer use PQsendPrepare() in pipeline mode, but only
PQprepare. There's no specific reason for this change other than no
longer needing to do things differently in pipeline mode. (Previously we had
no choice, because in pipeline mode PQprepare could not be used.)
Backpatch to 14, where pgbench got support for pipeline mode.
Reported-by: Yugo NAGATA <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/20210716153013.fc53b1c780b06fccc07a7f0d@sraoss.co.jp
Historically we've accepted interval input like 'P.1e10D'. This
is probably an accident of having used strtod() to do the parsing,
rather than something anyone intended, but it's been that way for
a long time. Commit e39f99046 broke this by trying to parse the
integer and fractional parts separately, without accounting for
the possibility of an exponent. In principle that coding allowed
for precise conversions of field values wider than 15 decimal
digits, but that does not seem like a goal worth sweating bullets
for. So, rather than trying to manage an exponent on top of the
existing complexity, let's just revert to the previous coding that
used strtod() by itself. We can still improve on the old code to
the extent of allowing the value to range up to 1.0e15 rather than
only INT_MAX. (Allowing more than that risks creating problems
due to precision loss: the converted fractional part might have
absolute value more than 1. Perhaps that could be dealt with in
some way, but it really does not seem worth additional effort.)
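For example (a hedged illustration), the historically-accepted exponent
form is accepted again after this revert:
    SELECT interval 'P.1e10D';    -- 0.1e10 days, i.e. 1e9 days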
Per bug #17795 from Alexander Lakhin. Back-patch to v15 where
the faulty code came in.
Discussion: https://postgr.es/m/17795-748d6db3ed95d313@postgresql.org
This was not something that required consideration before MERGE
was invented; but MERGE builds a join tree that left-joins to the
result relation, meaning that remove_useless_joins will consider
removing it. That should generally be stopped by the query's use
of output variables from the result relation. However, if the
result relation is inherited (e.g. a partitioned table) then
we don't add any row identity variables to the query until
expand_inherited_rtentry, which happens after join removal.
This was exposed as of commit 3c569049b, which made it possible
to deduce that a partitioned table could contain at most one row
matching a join key, enabling removal of the not-yet-expanded
result relation. Ooops.
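A hedged sketch of the problematic shape (table names hypothetical):
a MERGE whose target is a partitioned table, joined on a key that the
planner can prove unique:
    MERGE INTO part_tab t             -- inherited result relation
      USING src s ON t.id = s.id      -- join that looks removable
      WHEN MATCHED THEN UPDATE SET val = s.val;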
To fix, let's just teach join_is_removable that the query result
rel is never removable. It's a cheap enough test in any case,
and it'll save some cycles that we'd otherwise expend in proving
that it's not removable, even in the cases we got right.
Back-patch to v15 where MERGE was added. Although I think the
case cannot be reached in v15, this seems like cheap insurance.
Per investigation of a report from Alexander Lakhin.
Discussion: https://postgr.es/m/36bee393-b351-16ac-93b2-d46d83637e45@gmail.com
For no clearly good reason, make_modifytable assumed that it
could not reach its get-the-FDW-info-the-hard-way path in MERGE.
It's currently possible to demonstrate that assertion failing,
which seems to be due to an upstream planner bug; but there's no
good reason to do it like this at all. Let's apply the principle
of separation of concerns and make the MERGE check separately,
after getting or not getting the fdwroutine pointer.
Per report from Alexander Lakhin. No test case, since I think
the potential test condition will go away soon.
Discussion: https://postgr.es/m/36bee393-b351-16ac-93b2-d46d83637e45@gmail.com
The RelOptInfo->userid field (the user ID to check permissions as) of an
"otherrel" relation was being copied from its parent relation, which is
correct in most cases but wrong when the parent is a subquery. In that
case, using the value from the RTEPermissionInfo of the child itself is
the appropriate thing to do.
Coming up with a test case where user-visible behavior changes has
proven hard, so we don't add one here.
Bug introduced by a61b1f7482, discovered by Amit while reviewing
nearby code.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqE0WY_AhLnGtTsY7eYebG212XWbM-D8gr2A_ToOHyCywQ@mail.gmail.com
In generate_orderedappend_paths(), when match_partition_order_desc was
true, we would lcons() items to various lists in a loop over each live
partition. When the number of live partitions was large, the lcons()
could show up in profiles due to it having to perform memmove() to make
way for the new list item.
Here we adjust things so that we just perform the loop over the live
partitions backwards when match_partition_order_desc is true. This allows
us to simplify the logic in the loop. Now, as far as the guts of the loop
know, there's no difference between match_partition_order and
match_partition_order_desc. We can just set match_partition_order to true
so that we build the correct list of paths for the asc and desc case. Per
idea from Andres Freund.
Discussion: https://postgr.es/m/20230217002351.nyt4y5tdzg6hugdt@awork3.anarazel.de
To allow testing for general support for fast bitscan intrinsics,
add symbols HAVE_BITSCAN_REVERSE and HAVE_BITSCAN_FORWARD.
Also do related cleanup in AllocSetFreeIndex(): Previously, we
tested for HAVE__BUILTIN_CLZ and copied the relevant internals of
pg_leftmost_one_pos32(), with a special fallback that does less
work than the general fallback for that function. Now that we have
a more general test, we just call pg_leftmost_one_pos32() directly
for platforms with intrinsic support. On gcc at least, there is no
difference in the binary for non-assert builds.
Discussion: https://www.postgresql.org/message-id/CAFBsxsEPc%2BFnX_0vmmQ5DHv60sk4rL_RZJ%2BMD6ei%3D76L0kFMvA%40mail.gmail.com
The setting of the process title could be seen in profiles of very
fast-to-execute queries. In many locations where we call
set_ps_display() we pass along a string constant, the length of which is
known during compilation. Here we effectively rename set_ps_display() to
set_ps_display_with_len() and then add a static inline function named
set_ps_display() which calls strlen() on the given string. This allows
the compiler to optimize away the strlen() call when dealing with
call sites passing a string constant. We can then also use memcpy()
instead of strlcpy() to copy the string into the destination buffer.
That's significantly faster than strlcpy's byte-at-a-time way of
copying.
Here we also take measures to improve some code which was adjusting the
process title to add a " waiting" suffix to it. Call sites which require
this can now just call set_ps_display_suffix() to add or adjust the suffix
and call set_ps_display_remove_suffix() to remove it again.
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAApHDvocBvvk-0gWNA2Gohe+sv9fMcv+fK_G+siBKJrgDG4O7g@mail.gmail.com
When evaluating clauses on multiple scan keys of a multi-column BRIN
index, we can stop processing as soon as we find a scan key eliminating
the range, and the range should not be added to the bitmap.
That's how it worked before 14, but since a681e3c107 the code treated
the range as matching if it matched at least the last scan key.
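For illustration (hypothetical table and index), a query with two scan
keys on a multi-column BRIN index; a range should be added to the
bitmap only if neither key eliminates it:
    CREATE INDEX ON t USING brin (a, b);
    SELECT * FROM t WHERE a = 1 AND b = 2;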
Backpatch to 14, where this code was introduced.
Backpatch-through: 14
Discussion: https://postgr.es/m/ebc18613-125e-60df-7520-fcbe0f9274fc%40enterprisedb.com
ruleutils.c blindly printed the user-given alias (or nothing if there
hadn't been one) for the target table of INSERT/UPDATE/DELETE queries.
That works a large percentage of the time, but not always: for queries
appearing in WITH, it's possible that we chose a different alias to
avoid conflict with outer-scope names. Since the chosen alias would
be used in any Var references to the target table, this'd lead to an
inconsistent printout with consequences such as dump/restore failures.
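A hedged sketch of the shape involved (names hypothetical): if the
outer query also uses the name "t", the data-modifying CTE's target
table may be deparsed with a substitute alias such as "t_1":
    WITH w AS (
      UPDATE t SET x = 0 RETURNING *
    )
    SELECT * FROM w, t;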
The correct logic for printing (or not) a relation alias was embedded
in get_from_clause_item. Factor it out to a separate function so that
we don't need a jointree node to use it. (Only a limited part of that
function can be reached from these new call sites, but this seems like
the cleanest non-duplicative factorization.)
In passing, I got rid of a redundant "\d+ rules_src" step in rules.sql.
Initial report from Jonathan Katz; thanks to Vignesh C for analysis.
This has been broken for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/e947fa21-24b2-f922-375a-d4f763ef3e4b@postgresql.org
Discussion: https://postgr.es/m/CALDaNm1MMntjmT_NJGp-Z=xbF02qHGAyuSHfYHias3TqQbPF2w@mail.gmail.com
A new callback named startup_cb, called shortly after a module is
loaded, is added. This makes it possible to initialize any additional
state data required by a module. This state can be saved in an
ArchiveModuleState, which is now passed down to all the callbacks that
can be defined in a module. With this design, each module can have its
own state, opening the door to the support of more than one archive
module.
The initialization of the callbacks is changed so that
_PG_archive_module_init() no longer receives an ArchiveModuleCallbacks
that a module has to fill in with callback definitions. Instead, a
module now needs to return a const ArchiveModuleCallbacks.
All the structure and callback definitions of archive modules are moved
into their own header, named archive_module.h, from pgarch.h.
Command-based archiving follows the same line, with a new set of files
named shell_archive.{c,h}.
There are a few more items that are under discussion to improve the
design of archive modules, like the fact that basic_archive calls
sigsetjmp() by itself to define its own error handling flow. These will
be adjusted later; the changes done here already cover a good portion
of what has been discussed.
Any modules created for v15 will need to be adjusted to this new
design.
Author: Nathan Bossart
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20230130194810.6fztfgbn32e7qarj@awork3.anarazel.de
In commit 8bf6ec3ba, I mistakenly supposed that MergeAttributes'
loop over saved_schema was reprocessing column definitions that
had already been checked earlier: there is a variant syntax for
creating a child partition in which that's not true. So we need
to duplicate the full check appearing further up.
(Actually, I believe that the "if (restdef->identity)" part is
not reachable, because we reject identity on partitions earlier.
But it seems wise to keep the check, in case that's ever relaxed,
and to keep this code in sync with the other instance.)
Per report from Alexander Lakhin.
Discussion: https://postgr.es/m/4a8200ca-8378-653e-38ed-b2e1f1611aa6@gmail.com
d9d7fe68d3 made use of an existing wait event when sending data from the
apply worker, but we should have invented a new wait event since this is a
new place to wait.
This patch corrects the mistake by using a new wait event
"LogicalApplySendData".
Author: Hou Zhijie
Reviewed-by: Peter Smith
Discussion: https://postgr.es/m/CA+TgmobWzbr9H3yN3dLVckviEZKemPwd+XyCFKEgyZQZhgP66Q@mail.gmail.com
Support for regexps in database and role entries of pg_hba.conf was
added in 8fea8683, and efb6f4a extended regexp support to pg-user in
pg_ident.conf, but both of them missed a short description of the new
patterns supported in their respective sample files.
This commit closes the gap, by providing a short description of all the
new features supported for each entry type.
Reported-by: Pavel Luzanov
Reviewed-by: Jelte Fennema, Pavel Luzanov
Discussion: https://postgr.es/m/e495112d-8741-e651-64a2-ecb5728f1a56@postgrespro.ru
On MERGE / WHEN MATCHED DELETE it's not possible to get cross-partition
updates, so we don't initialize cpUpdateRetrySlot; however, the code was
not careful to ignore the value in that case. Make it do so.
Backpatch to 15.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/17792-0f89452029662c36@postgresql.org
force_parallel_mode is meant to be used to allow us to exercise the
parallel query infrastructure to ensure that it's working as we expect.
It seems some users think this GUC is for forcing the query planner into
picking a parallel plan regardless of the costs. A quick look at the
documentation would have made them realize that they were wrong, but the
GUC is likely too conveniently named, which evidently often results in
users expecting that it forces the planner into usefully parallelizing
queries.
Here we rename the GUC to something which casual users are less likely to
mistakenly think is what they need to make their query run more quickly.
For now, the old name can still be used. We'll revisit if the old name
mapping can be removed once the buildfarm configs are all updated.
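With the new name (the GUC is now called debug_parallel_query), usage
looks like this; the old spelling keeps working for now:
    SET debug_parallel_query = regress;   -- new name
    SET force_parallel_mode = regress;    -- legacy name, still mapped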
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CAApHDvrsOi92_uA7PEaHZMH-S4Xv+MGhQWA+GrP8b1kjpS1HjQ@mail.gmail.com
OpenSSL 1.1.1 and newer versions have added support for RSA-PSS
certificates, which requires the use of a specific OpenSSL routine to
determine which hash function to use when computing the certificate
hash for channel binding in SCRAM-SHA-256. X509_get_signature_nid(),
the routine the channel binding code originally relied on, is not able
to determine which hash algorithm to use for such certificates.
However, X509_get_signature_info(), new as of OpenSSL 1.1.1, is able to
do it. This commit switches the channel binding logic to rely on
X509_get_signature_info() over X509_get_signature_nid(), which is the
choice when building with 1.1.1 or newer.
The error could have been triggered on the client or the server, hence
libpq and the backend need to have their related code paths patched.
Note that attempting to load an RSA-PSS certificate with OpenSSL 1.1.0
or older leads to a failure due to an unsupported algorithm.
The discovery of X509_get_signature_info() comes from Jacob, the tests
were written by Heikki (with a few tweaks from me), and I bundled the
whole together while adding the bits needed for MSVC and meson.
This issue exists since channel binding exists, so backpatch all the way
down. Some tests are added in 15~, triggered if compiling with OpenSSL
1.1.1 or newer, where the certificate and key files can easily be
generated for RSA-PSS.
Reported-by: Gunnar "Nick" Bluth
Author: Jacob Champion, Heikki Linnakangas
Discussion: https://postgr.es/m/17760-b6c61e752ec07060@postgresql.org
Backpatch-through: 11
Previously make_etags always ran "make_ctags -e". However, because
non-Exuberant ctags on Mac (and on some other platforms, including old
Linux) does not support the -e option, ctags failed. To avoid the
failure, change make_ctags so that if non-Exuberant ctags is in use and
the -e option is requested, the etags command is run instead. If no
etags command exists, make_ctags will fail.
Also refactor make_ctags and tweak make_etags to emit a proper usage
message.
Author: Fujii Masao
Reviewed-by: Tatsuo Ishii
Discussion: https://www.postgresql.org/message-id/369c13b9-8b0f-d6f9-58fc-61258ec8f713%40oss.nttdata.com
In commit b78f6264e I opined that it was "too risky" to delete a
relation's RelOptInfo from the planner's data structures when we have
realized that we don't need to join to it; so instead we just marked
it as a dead relation. In hindsight that judgment seems flawed: any
subsequent access to such a dead relation is arguably a bug in
itself, so leaving the RelOptInfo present just helps to mask bugs.
Let's delete it instead, allowing removal of the whole notion of a
"dead relation". So far as the regression tests can find, this
requires no other code changes, except for one Assert in equivclass.c
that was very dubiously not complaining about access to a dead rel.
Discussion: https://postgr.es/m/229905.1676062220@sss.pgh.pa.us
Late in the development of commit 2489d76c4, I (tgl) incorrectly
concluded that the new function have_unsafe_outer_join_ref couldn't
ever reach its inner loop. That should be the case if the inner
rel's parameterization is based on just one Var, but it could be
based on Vars from several relations, and then not only is the
inner loop reachable but it's wrongly coded.
Despite those errors, it still appears that the whole thing is
redundant given previous join_is_legal checks, so let's arrange
to only run it in assert-enabled builds.
Diagnosis and patch by Richard Guo, per fuzz testing by Justin Pryzby.
Discussion: https://postgr.es/m/20230212235823.GW1653@telsasoft.com
Now that we have the sources for pg_bsd_indent in our code base, these
are redundant.
It is now required to provide a list of files or directories to pgindent,
either by using --commit or on the command line. The equivalent of
previously running pgindent with no parameters is now `pgindent .`
Some extra checks are also added: duplicate files in the file list are
skipped, and there is a warning if no files are specified.
If the --commit option is used, the script now chdir's to the source
root, as git always reports files relative to that. (Fixes a gripe from
Justin Pryzby)
Reviewed by Tom Lane
Discussion: https://postgr.es/m/842819.1676219054@sss.pgh.pa.us
In commit ad89a5d115, we added an unhelpful 'ON' that doesn't match
the input syntax. This was discovered while adding code to support
DDL in logical replication.
No backpatch because of the change of behavior, however improbable it
may be that somebody is depending on this.
Author: Zheng Li <zhengli10@gmail.com>
Discussion: https://postgr.es/m/CAAD30UKg8rXeGM8Oy_MAmxKBL_K5DiHXdeNF=hUefcu1C_6VfQ@mail.gmail.com
The logic in this area was recently changed in 7da51590e, however, in that
commit, I neglected to consider that the conditions in which we should
pfree the old Datum needed to be updated after that change. This could
result in trying to pfree a NULL value, as was demonstrated by Alexander
Lakhin.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/4103db46-d888-6d1d-e88d-87c21ed99472@gmail.com
Here we fix a faulty "if" condition which failed to correctly handle two
or more consecutive NULL transition values when checking if the new value
is DISTINCT from the old value for presorted aggregates. Given a suitably
non-strict aggregate transition function, a byref aggregate could cause a
crash due to calling the type's equality function and passing along a
(Datum) 0 value to test for equality; the equality function would then try
to dereference that 0 Datum and segfault. For byval types, there'd have
been no crash and the equality function would have seen that the two 0
Datums matched, which (only by chance) meant the calling code would have
worked correctly.
Here we ensure that we only call the equality function when neither of
the input values are NULL.
This code is all new as of 1349d2790, so no backpatch needed.
Reported-by: Fujii Masao
Discussion: https://postgr.es/m/860c6d6f-a3c5-3ae9-9da2-827177bede06@oss.nttdata.com
When an aggregate function is used as a WindowFunc and a tuple transitions
out of the window frame, we ordinarily try to make use of the aggregate
function's inverse transition function to "unaggregate" the exiting tuple.
This optimization is disabled for various cases, including when the
aggregate contains a volatile function. In such a case we'd be unable to
ensure that the transition value was calculated to the same value during
transitions and inverse transitions. Unfortunately, we did this check by
calling contain_volatile_functions() which does not recursively search
SubPlans for volatile functions. If the aggregate function's arguments or
its FILTER clause contained a subplan with volatile functions then we'd
fail to notice this.
Here we fix this by just disabling the optimization when the WindowFunc
contains any subplans. Volatile functions are not the only reason that a
subplan may have nonrepeatable results.
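A hedged example of the shape that was missed (table and columns
hypothetical): the volatile random() is buried in a subplan within the
aggregate's arguments, where contain_volatile_functions() did not look:
    SELECT sum((SELECT a + random()))
             OVER (ORDER BY b ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)
    FROM t;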
Bug: #17777
Reported-by: Anban Company
Discussion: https://postgr.es/m/17777-860b739b6efde977%40postgresql.org
Reviewed-by: Tom Lane
Backpatch-through: 11
This commit removes most of the Plan and Path nodes, which should never
be included in the query jumbling because we ignore these in Query
nodes. This is facilitated by making no_query_jumble an inherited
attribute, like no_copy, no_equal and no_read: the attribute is
inherited by a node whose supertype is marked with it.
RawStmt is not used in parsed queries, so it can be removed from the
query jumbling. A couple of nodes defined in pathnodes.h, plannodes.h
and primnodes.h with NodeTag as supertype need to be marked
individually.
Forcing the execution of the query jumbling code with compute_query_id =
auto while pg_stat_statements is loaded brings the code coverage of
queryjumblefuncs.funcs.c to 95.6%.
The core code does not yet include a way to enforce the execution of
query jumbling except via pg_stat_statements, so the numbers mentioned
above will not be reflected in the default coverage report with just
what is done in this commit.
Reported-by: Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/3344827.1675809127@sss.pgh.pa.us
Commit e39f99046 moved some code up closer to the start of
DecodeInterval(), without noticing that it had been implicitly
relying on previous checks to reject the case of empty input.
Given empty input, we'd now dereference a pointer that hadn't been
set, possibly leading to a core dump. (But if we fail to provoke
a SIGSEGV, nothing bad happens, and the expected syntax error is
thrown a bit later.)
Per bug #17788 from Alexander Lakhin. Back-patch to v15 where
the fault was introduced.
Discussion: https://postgr.es/m/17788-dabac9f98f7eafd5@postgresql.org
Update the Makefile and build directions for in-tree build,
and add Meson build infrastructure. Also convert the ad-hoc
test target into a TAP test.
Currently, the Make build system will not build pg_bsd_indent
by default, while the Meson system will. Both will test it
during "make check-world" or "ninja test". Neither will install
it automatically. (We might change some of these decisions later.)
Also fix a few portability nits noted during early testing.
Also, exclude pg_bsd_indent from pgindent's purview; at least for
now, we'll leave it formatted similarly to the FreeBSD original.
Tom Lane and Andres Freund
Discussion: https://postgr.es/m/3935719.1675967430@sss.pgh.pa.us
Discussion: https://postgr.es/m/20200812223409.6di3y2qsnvynao7a@alap3.anarazel.de
An upcoming test needs to use a tablespace as part of its test. Historically,
we wanted tablespace creation to be done in a dedicated file, so it's easy to
disable when testing replication. But that is not necessary anymore, due to
allow_in_place_tablespaces.
Create regress_tblspace tablespace in test_setup. Move the tablespace test to
the end of the parallel schedule, so other tests can use it.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
Builds on 28e626bde0 and f30d62c2fc. See the former for motivation.
Rows of the view show IO operations for a particular backend type, IO target
object, IO context combination (e.g. a client backend's operations on
permanent relations in shared buffers) and each column in the view is the
total number of IO Operations done (e.g. writes). So a cell in the view would
be, for example, the number of blocks of relation data written from shared
buffers by client backends since the last stats reset.
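A hedged sketch of such a query (column names assumed; they may differ
in the final shape of the view):
    SELECT writes
      FROM pg_stat_io
     WHERE backend_type = 'client backend'
       AND io_object = 'relation'
       AND io_context = 'normal';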
In anticipation of tracking WAL IO and non-block-oriented IO (such as
temporary file IO), the "op_bytes" column specifies the unit of the "reads",
"writes", and "extends" columns for a given row.
Rows for combinations of IO operation, backend type, target object and
context that never occur are omitted entirely. For example, checkpointer
will never operate on temporary relations.
Similarly, if an IO operation never occurs for a given combination, the
IO operation's cell will be null, to distinguish it from 0 observed IO
operations. For example, bgwriter should not perform reads.
Note that some of the cells in the view are redundant with fields in
pg_stat_bgwriter (e.g. buffers_backend). For now, these have been kept for
backwards compatibility.
Bumps catversion.
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Samay Sharma <smilingsamay@gmail.com>
Reviewed-by: Maciek Sakrejda <m.sakrejda@gmail.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
analyzejoins.c took care to clean out removed relids from the
clause_relids and required_relids of RestrictInfos associated with
the doomed rel ... but it paid no attention to the fact that if such a
RestrictInfo contains an OR clause, there will be sub-RestrictInfos
containing similar fields.
I'm more than a bit surprised that this oversight hasn't caused
visible problems before. In any case, it's certainly broken now,
so add logic to clean out the sub-RestrictInfos recursively.
We might need to back-patch this someday.
Per bug #17786 from Robins Tharakan.
Discussion: https://postgr.es/m/17786-f1ea7fbdab97daec@postgresql.org
One of the add_nulling_relids calls in deconstruct_distribute_oj_quals
added an OJ relid to too few Vars, while the other added it to too
many. We should consider the syntactic structure not
min_left/righthand while deciding which Vars to decorate, and when
considering pushing up a lower outer join pursuant to transforming the
second form of OJ identity 3 to the first form, we only want to
decorate Vars coming from its LHS.
In a related bug, I realized that make_outerjoininfo was failing to
check a very basic property that's needed to apply OJ identity 3:
the syntactically-upper outer join clause can't refer to the lower
join's LHS. This didn't break the join order restriction logic,
but it led to setting bogus commute_xxx bits, possibly resulting
in bogus nullingrel markings in modified quals.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/CAMbWs497CmBruMx1SOjepWEz+T5NWa4scqbdE9v7ZzSXqH_gQw@mail.gmail.com
Discussion: https://postgr.es/m/CAEP4nAx9C5gXNBfEA0JBfz7B+5f1Bawt-RWQWyhev-wdps8BZA@mail.gmail.com
Commit 28e626bde0 introduced the infrastructure for tracking more detailed IO
statistics. This commit adds the actual collection of the new IO statistics
for relations and temporary relations. See aforementioned commit for goals and
high-level design.
The changes in this commit are fairly straightforward. The bulk of the
change is in passing sufficient information to the callsites of
pgstat_count_io_op().
A somewhat unsightly detail is that it currently is hard to find a better
place to count fsyncs than in md.c, whereas the other pgstat_count_io_op()
calls are in bufmgr.c/localbuf.c. As the number of fsyncs is tied to md.c
implementation details, it's not obvious there is a better answer.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
This commit adds the infrastructure for more detailed IO statistics. The calls
to actually count IOs, a system view to access the new statistics,
documentation and tests will be added in subsequent commits, to make review
easier.
While we already had some IO statistics, e.g. in pg_stat_bgwriter and
pg_stat_database, they did not provide sufficient detail to understand what
the main sources of IO are, or whether configuration changes could avoid
IO. E.g., pg_stat_bgwriter.buffers_backend does contain the number of buffers
written out by a backend, but as that includes extending relations (always
done by backends) and writes triggered by the use of buffer access strategies,
it cannot easily be used to tune background writer or checkpointer. Similarly,
pg_stat_database.blks_read cannot easily be used to tune shared_buffers /
compute a cache hit ratio, as the use of buffer access strategies will often
prevent a large fraction of the read blocks to end up in shared_buffers.
The new IO statistics count IO operations (evict, extend, fsync, read, reuse,
and write), and are aggregated for each combination of backend type (backend,
autovacuum worker, bgwriter, etc), target object of the IO (relations, temp
relations) and context of the IO (normal, vacuum, bulkread, bulkwrite).
What is tracked in this series of patches, is sufficient to perform the
aforementioned analyses. Further details, e.g. tracking the number of buffer
hits, would make that even easier, but was left out for now, to keep the scope
of the already large patchset manageable.
Bumps PGSTAT_FILE_FORMAT_ID.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
While removing the use of SHM_QUEUE from predicate.c, in 9600371764, I made
two mistakes in GetSafeSnapshotBlockingPids():
- Removed the check for output_size
- Previously, when the first loop didn't find a matching proc, sxact would be
NULL. But with naive use of dlist_foreach() it ends up as the value of the
last iteration.
The second issue is the cause of occasional failures in the deadlock-hard and
deadlock-soft isolation tests that we have been observing on CI. The issue was
very hard to reproduce, as it requires the transactions.sql regression test to
run at the same time as the deadlock-{hard,soft} isolation test.
I did not find other similar mistakes in 9600371764.
Discussion: https://postgr.es/m/20230208221145.bwzhancellclrgia@awork3.anarazel.de
A new --commit option will add all the files in a commit to the file
list. The option can be specified more than once.
Also, if a directory is given on the command line, all the files in that
directory tree will be added to the file list.
Per suggestions from Robert Haas
Reviewed by Jelte Fennema
Discussion: https://postgr.es/m/CA+TgmoY59Ksso81RNLArNxj0a7xaqV_F_u7gSMHbgdc2kG5Vpw@mail.gmail.com
The code I added in fee7b77b9 could misbehave if commute_above_r
contains multiple relids. While adding too many relids here is
probably harmless (pre-fee7b77b9, we did it all the time), it's
not very expensive to be accurate: we just have to intersect
commute_above_r with the join's relids.
Discussion: https://postgr.es/m/17781-c0405c8b3cd5e072@postgresql.org
No GUCs that use NO_SHOW_ALL are reported in pg_show_all_settings(),
hence trying to check combinations of flags related to it is pointless.
These queries have been introduced by d10e41d, so backpatch down to 15
to keep all the branches consistent. Equivalent checks based on
NO_SHOW_ALL could be added in check_GUC_init() when a GUC is initially
loaded, but this can be done only on HEAD.
Author: Nitin Jadhav
Discussion: https://postgr.es/m/CAMm1aWaYe0muu3ABo7iSAgK+OWDS9yNe8GGRYnCyeEpScYKa+g@mail.gmail.com
Backpatch-through: 15
DDLs like REFRESH MATERIALIZED VIEW that generate lots of temporary
data due to rewrite rules may not be processed by output plugins (for
example pgoutput). So, we won't send keep-alive messages for a long time
while processing such commands, and that can lead the subscriber side to
time out. We previously fixed a similar case for large transactions in
commit f95d53eded, where the output plugin filters all or most of the
changes, but missed handling the DDL case.
We decided not to backpatch this, as it adds a new callback to the
existing exposed structure; moreover, users can increase
wal_sender_timeout and wal_receiver_timeout to avoid this problem.
Author: Wang wei, Hou Zhijie
Reviewed-by: Peter Smith, Ashutosh Bapat, Shi yu, Amit Kapila
Discussion: https://postgr.es/m/OS3PR01MB6275478E5D29E4A563302D3D9E2B9@OS3PR01MB6275.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
The logic for when to add the current outer join's own relid
to the nullingrels sets of output Vars and PHVs was overly
complicated and underly correct. Not sure why I didn't think
of this before, but since what we want is marking per the
syntactic structure, we can just consult our records about
the syntactic structure, ie syn_righthand/syn_lefthand.
Also, tighten the rule about when to add the commute_above_r
bits, in hopes of eliminating some squishy reasoning. I do not
know of a reason to think that that's broken as-is, but this way
seems better.
Per bug #17781 from Robins Tharakan.
Discussion: https://postgr.es/m/17781-c0405c8b3cd5e072@postgresql.org
Commit c3382a3c3, which moved the implementation of PG_TEST_EXTRA
from src/test/Makefile into individual test scripts, broke the
directions given in the subdirectory README files about how to run
these tests by hand. Update. Also mention wal_consistency_checking
in recovery/README --- that omission isn't the fault of c3382a3c3,
but it's still an omission.
Currently, we reuse WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE in the
apply worker while sending data to the parallel apply worker via a shared
memory queue. This is not appropriate, as one won't be able to distinguish
whether the worker is waiting to send data or for a state change.
The patch instead uses the wait event WAIT_EVENT_MQ_SEND, which is
already used in blocking mode while sending data via a shared memory
queue.
Author: Hou Zhijie
Reviewed-by: Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/OS0PR01MB57161C680B22E4C591628EE994DA9@OS0PR01MB5716.jpnprd01.prod.outlook.com
Here we further simplify the code in heapgettup() and
heapgettup_pagemode() to make better use of the helper functions added in
the previous recent refactors in this area.
In passing, remove an unneeded cast added in 8ca6d49f6.
Author: Melanie Plageman
Reviewed-by: Andres Freund, David Rowley
Discussion: https://postgr.es/m/CAAKRu_YSOnhKsDyFcqJsKtBSrd32DP-jjXmv7hL0BPD-z0TGXQ@mail.gmail.com
Like the implementation for node copy, write and read, this node
requires a custom implementation so that the query jumbling is able to
consider the correct value assigned to it, depending on its type (int,
float, bool, string, bitstring).
Based on a dump of pg_stat_statements from the regression database, this
would confuse the query jumbling of the following queries:
- SET.
- COPY TO with SELECT queries.
- START TRANSACTION with different isolation levels.
- ALTER TABLE with default expressions.
- CREATE TABLE with partition bounds.
Note that there may be a long-term argument for tracking the location of
such nodes so that query strings holding such nodes could be normalized,
but this is left as a separate discussion.
Oversight in 3db72eb.
Discussion: https://postgr.es/m/Y9+HuYslMAP6yyPb@paquier.xyz
The existing comments in load_relcache_init_file() were not flexible
when new entries were added at the end, so they ended up a bit wrong.
Simplify the comments to avoid this issue.
The portion of join_is_removable() that checks PlaceHolderVars
can be made a little more accurate and intelligible than it was.
The key point is that we can allow join removal even if a PHV
mentions the target rel in ph_eval_at, if that mention was only
added as a consequence of forcing the PHV up to a join level
that's at/above the outer join we're trying to get rid of.
We can check that by testing for the OJ's relid appearing in
ph_eval_at, indicating that it's supposed to be evaluated after
the outer join, plus the existing test that the contained
expression doesn't actually mention the target rel.
While here, add an explicit check that there'll be something left
in ph_eval_at after we remove the target rel and OJ relid. There
is an Assert later on about that, and I'm not too sure that the
case could happen for a PHV satisfying the other constraints,
but let's just check. (There was previously a bms_is_subset test
that meant to cover this risk, but it's broken now because it
doesn't account for the fact that we'll also remove the OJ relid.)
The real reason for revisiting this code though is that the
Assert I left behind in 8538519db turns out to be easily
reachable, because if a PHV of this sort appears in an upper-level
qual clause then that clause's clause_relids will include the
PHV's ph_eval_at relids. This is a mirage though: we have or soon
will remove these relids from the PHV's ph_eval_at, and therefore
they no longer belong in qual clauses' clause_relids either.
Remove that Assert in join_is_removable, and replace the similar
one in remove_rel_from_query with code to remove the deleted relids
from clause_relids.
Per bug #17773 from Robins Tharakan.
Discussion: https://postgr.es/m/17773-a592e6cedbc7bac5@postgresql.org
In standby mode, we don't actually report progress of recovery,
but up until now, startup_progress_timeout_handler() nevertheless
got called every log_startup_progress_interval seconds. That's
an unnecessary expense, so avoid it.
Report by Thomas Munro. Patch by Bharath Rupireddy, reviewed by
Simon Riggs, Thomas Munro, and me. Back-patch to v15, where
the problem was introduced.
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKCHSffAj8zZJKJvNX7ygnQFxVD6wm1d-2j3fVw%2BMafPQ%40mail.gmail.com
This extends the work begun by a73952b, with the addition of a GUC check
for flag combinations in check_GUC_init(), making sure that anything
defined with GUC_NO_SHOW_ALL also includes GUC_NOT_IN_SAMPLE, as a first
step. There have never been any GUCs of this kind in the core code, and
this combination makes little sense, as a parameter marked as not fit for
SHOW ALL should not be listed in postgresql.conf.sample.
Note that GUCs marked with GUC_NO_SHOW_ALL are not listed under
pg_settings or SHOW ALL (still they can be queried individually), making
them unfit for checks via SQL queries in the regression tests that do a
full scan of the parameters available. The SQL tests are still a bit
incorrect about that, and will be cleaned up in a separate commit. We
have also discussed the possibility of extending the SQL functions for
GUCs so that they could show more information about parameters defined
with GUC_NO_SHOW_ALL, though it has been concluded that this is not
worth the extra complication in the long run; an enforced policy at
initialization time is enough to do the same job.
Per discussion with Nitin Jadhav and Tom Lane.
Discussion: https://postgr.es/m/CAMm1aWaYe0muu3ABo7iSAgK+OWDS9yNe8GGRYnCyeEpScYKa+g@mail.gmail.com
This adds a new option to pg_verifybackup called -P/--progress, showing
every second some information about the progress of the checksum
verification based on the data of a backup manifest.
Similarly to what is done for pg_rewind and pg_basebackup, the
information printed in the progress report consists of the current
amount of data computed and the total amount of data that will be
computed. Note that files found with an incorrect size do not have
their checksum verified, hence their size is not appended to the total
amount of data estimated during the first scan of the manifest data
(such incorrect sizes could be overly high, for one, falsifying the
progress report).
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoC5+JOgMd4o3z_oxw0f8JDSsCYY7zSbhe-O9x7f33rw_A@mail.gmail.com
pqsecure_open_gss() includes a code path handling error messages with
v2-style protocol messages coming from the server. The client-side
buffer holding the error message does not force a NULL-termination, with
the data of the server getting copied to the errorMessage of the
connection. Hence, it would be possible for a server to send an
unterminated string and copy arbitrary bytes in the buffer receiving the
error message in the client, opening the door to a crash or even data
exposure.
As at this stage of the authentication process the exchange has not been
completed yet, this could be abused by an attacker without Kerberos
credentials. Clients that have a valid Kerberos cache are vulnerable,
as libpq opportunistically requests GSS encryption unless gssencmode is
disabled.
Author: Jacob Champion
Backpatch-through: 12
Security: CVE-2022-41862
This reverts commits 24c35ec and 57169ad. PreRestoreCommand() and
PostRestoreCommand() need to be put closer to the system() call calling
a restore_command, as they enable in_restore_command for the startup
process which would in turn trigger an immediate proc_exit() in the
SIGTERM handler. Perhaps we could get rid of this behavior entirely,
but 24c35ec has made the window where the flag is enabled much larger
than it was, and any Postgres-like actions (palloc, etc.) taken by code
paths while the flag is enabled could lead to more severe issues in the
shutdown processing.
Note that curculio has shown that there are many more problems in this
area, unrelated to this change, actually, hence the issues related to
that had better be addressed first. Keeping the code of HEAD in line
with the stable branches should make that a bit easier.
Per discussion with Andres Freund and Nathan Bossart.
Discussion: https://postgr.es/m/Y979NR3U5VnWrTwB@paquier.xyz
make_outerjoininfo was set up to update SpecialJoinInfo's
commute_below, commute_above_l, commute_above_r fields as soon as
it found a pair of outer joins that look like they can commute.
However, this decision could be negated later in the same loop due
to finding an intermediate outer join that prevents commutation.
That left us with commute_xxx fields that were contradictory to the
join order restrictions expressed in min_lefthand/min_righthand.
The latter fields would keep us from actually choosing a bad join
order; but the inconsistent commute_xxx fields could bollix details
such as the varnullingrels values created for intermediate join
relation targetlists, ending in an assertion failure in setrefs.c.
To fix, wait till the end of make_outerjoininfo where we have
accurate values for min_lefthand/min_righthand, and then insert
only relids not present in those sets into the commute_xxx fields.
Per SQLSmith testing by Robins Tharakan. Note that while Robins
bisected the failure to commit b448f1c8d, it's really the fault of
2489d76c4. The outerjoin_delayed logic removed in the later commit
was keeping us from deciding that troublesome join pairs commute,
at least in the specific example seen here.
Discussion: https://postgr.es/m/CAEP4nAyAORgE8K_RHSmvWbE9UaChhjbEL1RrDU3neePwwRUB=A@mail.gmail.com
deconstruct_distribute tweaks the outer join scope (ojscope)
it passes to distribute_qual_to_rels when considering an outer
join qual that's above potentially-commutable outer joins.
However, if the current join is *not* potentially commutable,
we shouldn't do that. The argument that distribute_qual_to_rels
will not do something wrong with the bogus ojscope falls flat
if we don't pass it non-null postponed_oj_qual_list. Moreover,
there's no need to play games in this case since we aren't going
to commute anything.
Per SQLSmith testing by Robins Tharakan.
Discussion: https://postgr.es/m/CAEP4nAw74k4b-=93gmfCNX3MOY3y4uPxqbk_MnCVEpdsqHJVsg@mail.gmail.com
If we have a RestrictInfo that mentions both the removal-candidate
relation and the outer join's relid, then that is a pushed-down
condition not a join condition, so it should be grounds for deciding
that we can't remove the outer join. In commit 2489d76c4, I'd blindly
included the OJ's relid into "joinrelids" as per the new standard
convention, but the checks of attr_needed and ph_needed should only
allow the join's input rels to be mentioned.
Having done that, the check for references in pushed-down quals
a few lines further down should be redundant. I left it in place
as an Assert, though.
While researching this I happened across a couple of comments that
worried about the effects of update_placeholder_eval_levels.
That's gone as of b448f1c8d, so we can remove some worry.
Per bug #17769 from Robins Tharakan. The submitted test case
triggers this more or less accidentally because we flatten out
a LATERAL sub-select after we've done join strength reduction;
if we did that in the other order, this problem would be masked
because the outer join would get simplified to an inner join.
To ensure that the committed test case will continue to test
what it's meant to even if we make that happen someday, use a
test clause involving COALESCE(), which will prevent us from
using it to do join strength reduction.
Patch by me, but thanks to Richard Guo for initial investigation.
Discussion: https://postgr.es/m/17769-e4f7a5c9d84a80a7@postgresql.org
After pulling up LATERAL subqueries, we may have qual clauses that
refer to relations outside their syntactic scope. Before doing any
such pullup, prepjointree.c checks to make sure that it wouldn't
create a semantically-invalid situation; but we leave it to
deconstruct_jointree() to actually move these quals up the join
tree to a place where they can be evaluated. In commit 2489d76c4,
I (tgl) refactored deconstruct_jointree() in a way that caused
assertion failures while moving such quals, because the new logic
failed to distinguish "this jointree node is a parent of the source
one" from "this jointree node is processed after the source
one in depth-first order".
Fix this, and at the same time reduce the overhead a bit, by
getting rid of the common PostponedQual list and instead making each
JoinTreeItem contain a list of quals that needed to be postponed to
its level. We can help distribute_qual_to_rels find the appropriate
JoinTreeItem efficiently by adding parent-item links to the
JoinTreeItem data structure. This ends up being the same number
of relid subset checks as the original (pre-bug) logic, but less
list manipulation is required during multi-level postponements.
Richard Guo and Tom Lane, per bug #17768 from Robins Tharakan.
Discussion: https://postgr.es/m/17768-5ac8730ece54478f@postgresql.org
This allows underscores to be used in integer and numeric literals,
and their corresponding type input functions, for visual grouping.
For example:
1_500_000_000
3.14159_26535_89793
0xffff_ffff
0b_1001_0001
A single underscore is allowed between any 2 digits, or immediately
after the base prefix indicator of non-decimal integers, per SQL:202x
draft.
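A brief usage sketch:
    SELECT 1_500_000_000 AS big_int,
           0x_FFFF       AS hex_mask,
           3.14_15       AS approx_pi;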
Peter Eisentraut and Dean Rasheed
Discussion: https://postgr.es/m/84aae844-dc55-a4be-86d9-4f0fa405cc97%40enterprisedb.com
The prior coding of int64_div_fast_to_numeric() had a number of bugs
that would cause it to fail under different circumstances, such as
with log10val2 <= 0, or log10val2 a multiple of 4, or in the "slow"
numeric path with log10val2 >= 10.
None of those could be triggered by any of our current code, which
only uses log10val2 = 3 or 6. However, they made it a hazard for any
future code that might use it. Also, since this is exported by
numeric.c, users writing their own C code might choose to use it.
Therefore fix, and back-patch to v14, where it was introduced.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCW8gXgW0tgPxPgHDPhVX71%2BSWFRkhnXy%2BTfGDsKLepu2g%40mail.gmail.com
To interlock against concurrent drops, we used to hold ExclusiveLock on
pg_replication_origin till xact commit. This blocks even concurrent drops
of different origins by tablesync workers. So, instead, lock the specific
origin to interlock against concurrent drops.
This reduces the test time variability in src/test/subscription where
multiple tables are being synced.
Author: Vignesh C
Reviewed-by: Hou Zhijie, Amit Kapila
Discussion: https://postgr.es/m/1412708.1674417574@sss.pgh.pa.us
Backward and forward scans share much of the same page acquisition code.
Here we consolidate that code to reduce some duplication.
Additionally, add a new rs_coffset field to HeapScanDescData to track the
offset of the current tuple. The new field fits nicely into the padding
between a bool and BlockNumber field and saves having to look at the last
returned tuple to figure out which offset we should be looking at for the
current tuple.
Author: Melanie Plageman
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com
In the 90s we needed to deal with computers that still had the
pre-standard signal masking APIs. That hasn't been relevant for a very
long time on Unix systems, and c94ae9d8 got rid of a remaining
dependency in our Windows porting code. PG_SETMASK didn't expose
save/restore functionality, so we'd already started using sigprocmask()
directly in places, creating the visual distraction of having two ways
to spell it. It's not part of the API that extensions are expected to
be using (but if they are, the change will be trivial). It seems like a
good time to drop the old macro and just call the standard POSIX
function.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BKfQgrhHP2DLTohX1WwubaCBHmTzGnAEDPZ-Gug-Xskg%40mail.gmail.com
Installing with plain cpan failed for me recently, as the archive it
searched has been purged of old releases. However, you can give cpanm a
complete URL to the exact version you want to install, so document using
that.
Improve the comment explaining the choice of rscale in numeric_sqrt(),
and ensure that the code works consistently when other values of
NBASE/DEC_DIGITS are used.
Note that, in practice, we always expect DEC_DIGITS == 4, and this
does not change the computation in that case.
Joel Jacobson and Dean Rasheed
Discussion: https://postgr.es/m/06712c29-98e9-43b3-98da-f234d81c6e49%40app.fastmail.com
Extend the existing developer option 'logical_replication_mode' to help
test the parallel apply of large transactions on the subscriber.
When set to 'buffered', the leader sends changes to parallel apply workers
via a shared memory queue. When set to 'immediate', the leader serializes
all changes to files and notifies the parallel apply workers to read and
apply them at the end of the transaction.
This helps in adding tests to cover the serialization code path in
parallel streaming mode.
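For example (a hedged sketch, to be run on the subscriber):
    ALTER SYSTEM SET logical_replication_mode = 'immediate';
    SELECT pg_reload_conf();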
Author: Hou Zhijie
Reviewed-by: Peter Smith, Kuroda Hayato, Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
As coded, the timeout given to WaitLatch() was always equal to
PGARCH_AUTOWAKE_INTERVAL, as time() was called twice in quick
succession. This simplification could have been done in d75288f.
While at it, this adjusts a comment in pgarch.c to describe the archiver
in a more neutral way.
Author: Sravan Kumar, Nathan Bossart
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CA+=NbjjqYE9-Lnw7H7DAiS5jebmoMikwZQb_sBP7kgBCn9q6Hg@mail.gmail.com
DST law changes in Greenland and Mexico. Notably, a new timezone
America/Ciudad_Juarez has been split off from America/Ojinaga.
Historical corrections for northern Canada, Colombia, and Singapore.
Here we remove some dead code from heapgettup() and heapgettup_pagemode()
which was trying to support NoMovementScanDirection scans. This code can
never be reached as standard_ExecutorRun() never calls ExecutePlan with
NoMovementScanDirection.
Additionally, plans which were scanning an unordered index would use
NoMovementScanDirection rather than ForwardScanDirection. There was no
real need for this, so here we adjust this so we use ForwardScanDirection
for unordered index scans. A comment in pathnodes.h claimed that
NoMovementScanDirection was used for PathKey reasons, but if that was
true, it no longer is, per code in build_index_paths().
This does change the non-text format of the EXPLAIN output so that
unordered index scans now have a "Forward" scan direction rather than
"NoMovement". The text format of EXPLAIN has not changed.
Author: Melanie Plageman
Reviewed-by: Tom Lane, David Rowley
Discussion: https://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com
In commit 2489d76c4, I'd thought it'd be safe to assert that a
PlaceHolderVar appearing in a scan-level expression has empty
nullingrels. However this is not so, as when we determine that a
join relation is certainly empty we'll put its targetlist into a
Result-with-constant-false-qual node, and nothing is done to adjust
the nullingrels of the Vars or PHVs therein. (Arguably, a Result
used in this way isn't really a scan-level node, but it certainly
isn't an upper node either ...)
It's not clear this is worth any close analysis, so let's just
take out the faulty Assert.
Per report from Robins Tharakan. I added a test case based on
his example, just in case somebody tries to tighten this up.
Discussion: https://postgr.es/m/CAEP4nAz7Enq3+DEthGG7j27DpuwSRZnW0Nh6jtNh75yErQ_nbA@mail.gmail.com
This commit changes the query jumbling code in queryjumblefuncs.c to be
generated automatically based on the information of the nodes in the
headers of src/include/nodes/ by using gen_node_support.pl. This
approach offers many advantages:
- Support for query jumbling for all the utility statements, based on the
state of their parsed Nodes and not only their query string. This will
greatly ease the switch to normalize the information of some DDLs, like
SET or CALL for example (this is left unchanged and should be part of a
separate discussion). With this feature, the number of entries stored
for utilities in pg_stat_statements is reduced (for example now
"CHECKPOINT" and "checkpoint" mean the same thing with the same query
ID).
- Documentation of query jumbling directly in the structure definition
of the nodes. Since this code was introduced in pg_stat_statements and
then moved to core, the reasons behind the choices of what should be
included in the jumble are rather sparsely documented. Note that some
explanation is added for the most relevant parts, as a start.
- Overall code reduction and more consistency with the other parts
generating read, write and copy depending on the nodes.
The query jumbling is controlled by a couple of new node attributes,
documented in nodes/nodes.h:
- custom_query_jumble, to mark a Node as having a custom
implementation.
- no_query_jumble, to ignore entirely a Node.
- query_jumble_ignore, to ignore a field in a Node.
- query_jumble_location, to mark a location in a Node, for
normalization. This can apply only to int fields, with "location" in
their name (only Const as of this commit).
There should be no compatibility impact on pg_stat_statements, as the
new code applies the jumbling to the same fields for each node (its
regression tests have no modification, for one).
Some benchmarking of the query jumbling between HEAD and this commit
for SELECT and DML queries has shown that the new code does not cause a
performance regression, with computation times close for both methods.
For utility queries, the new method is slower than the previous method
of calculating a hash of the query string, though we are talking about
ns-level differences based on what I measured, which is unnoticeable
even for OLTP workloads as a query ID is calculated once per query,
post-parse analysis.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/Y5BHOUhX3zTH/ig6@paquier.xyz
This test was added in 857ee8e, which introduced the SQL function
txid_status(), with the purpose of checking that a transaction ID still
in progress during a crash is correctly marked as aborted after recovery
finishes.
This test is unstable, and some configuration scenarios make failures
easier to reproduce (wal_level=minimal, wal_compression=on), because the
WAL holding the information about the in-progress transaction ID may not
have made it to disk yet; hence a post-crash recovery may cause the same
XID to be reused, triggering a test failure.
We have discussed a few approaches, like making this function force a
WAL flush to make it reliable across crashes, but we don't want to pay
such a performance penalty in some scenarios. The test could have
been tweaked to enforce a checkpoint but that actually breaks the
promise of the test to rely on a stable result of txid_status() after
a crash.
This issue has been reported a few times across the past years, with an
original report from Kyotaro Horiguchi. The buildfarm machines tanager,
hachi and gokiburi enable wal_compression, and fail on this test
periodically.
Discussion: https://postgr.es/m/3163112.1674762209@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210305.115011.558061052471425531.horikyota.ntt@gmail.com
Backpatch-through: 11
Switch to get_dirent_type() instead of lstat() while traversing a
directory tree, to see if that fixes the intermittent ENOTEMPTY failures
seen in recent pg_upgrade tests, on Windows CI. While refactoring, also
use AllocateDir() instead of opendir() in the backend, which knows how
to handle descriptor pressure.
Our CI system currently uses Windows Server 2019, a version known not to
have POSIX unlink semantics enabled by default yet, unlike typical
Windows 10 and 11 systems. That might explain why we see this flapping
on CI but (apparently) not in the build farm, though the frequency is
quite low.
The theory is that some directory entry must be in state
STATUS_DELETE_PENDING, which lstat() would report as ENOENT, though
unfortunately we don't know exactly why yet. With this change, rmtree()
will not skip them, and try to unlink (again). Our unlink() wrapper
should either wait a short time for them to go away when some other
process closes the handle, or log a message to tell us the path of the
problem file if not, so we can dig further.
Discussion: https://postgr.es/m/20220919213217.ptqfdlcc5idk5xup%40awork3.anarazel.de
EquivalenceClasses are now understood as applying within a "join
domain", which is a set of inner-joined relations (possibly underneath
an outer join). We no longer need to treat an EC from below an outer
join as a second-class citizen.
I have hopes of eventually being able to treat outer-join clauses via
EquivalenceClasses, by means of only applying deductions within the
EC's join domain. There are still problems in the way of that, though,
so for now the reconsider_outer_join_clause logic is still here.
I haven't been able to get rid of RestrictInfo.is_pushed_down either,
but I wonder if that could be recast using JoinDomains.
I had to hack one test case in postgres_fdw.sql to make it still test
what it was meant to, because postgres_fdw is inconsistent about
how it deals with quals containing non-shippable expressions; see
https://postgr.es/m/1691374.1671659838@sss.pgh.pa.us. That should
be improved, but I don't think it's within the scope of this patch
series.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
Remove RestrictInfo.nullable_relids, along with a good deal of
infrastructure that calculated it. One use-case for it was in
join_clause_is_movable_to, but we can now replace that usage with
a check to see if the clause's relids include any outer join
that can null the target relation. The other use-case was in
join_clause_is_movable_into, but that test can just be dropped
entirely now that the clause's relids include outer joins.
Furthermore, join_clause_is_movable_into should now be
accurate enough that it will accept anything returned by
generate_join_implied_equalities, so we can restore the Assert
that was diked out in commit 95f4e59c3.
Remove the outerjoin_delayed mechanism. We needed this before to
prevent quals from getting evaluated below outer joins that should
null some of their vars. Now that we consider varnullingrels while
placing quals, that's taken care of automatically, so throw the
whole thing away.
Teach remove_useless_result_rtes to also remove useless FromExprs.
Having done that, the delay_upper_joins flag serves no purpose any
more and we can remove it, largely reverting 11086f2f2.
Use constant TRUE for "dummy" clauses when throwing back outer joins.
This improves on a hack I introduced in commit 6a6522529. If we
have a left-join clause l.x = r.y, and a WHERE clause l.x = constant,
we generate r.y = constant and then don't really have a need for the
join clause. But we must throw the join clause back anyway after
marking it redundant, so that the join search heuristics won't think
this is a clauseless join and avoid it. That was a kluge introduced
under time pressure, and after looking at it I thought of a better
way: let's just introduce constant-TRUE "join clauses" instead,
and get rid of them at the end. This improves the generated plans for
such cases by not having to test a redundant join clause. We can also
get rid of the ugly hack used to mark such clauses as redundant for
selectivity estimation.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
Traditionally we used the same Var struct to represent the value
of a table column everywhere in parse and plan trees. This choice
predates our support for SQL outer joins, and it's really a pretty
bad idea with outer joins, because the Var's value can depend on
where it is in the tree: it might go to NULL above an outer join.
So expression nodes that are equal() per equalfuncs.c might not
represent the same value, which is a huge correctness hazard for
the planner.
To improve this, decorate Var nodes with a bitmapset showing
which outer joins (identified by RTE indexes) may have nulled
them at the point in the parse tree where the Var appears.
This allows us to trust that equal() Vars represent the same value.
A certain amount of klugery is still needed to cope with cases
where we re-order two outer joins, but it's possible to make it
work without sacrificing that core principle. PlaceHolderVars
receive similar decoration for the same reason.
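As a hedged illustration of the core principle (varnullingrels being
the new Var field; rti, attno and ojrti stand in for real rangetable
indexes):

    #include "postgres.h"
    #include "catalog/pg_type.h"
    #include "nodes/makefuncs.h"

    static void
    illustrate_nulling(Index rti, AttrNumber attno, Index ojrti)
    {
        Var        *below = makeVar(rti, attno, INT4OID, -1, InvalidOid, 0);
        Var        *above = copyObject(below);

        /* mark the upper reference as possibly nulled by outer join ojrti */
        above->varnullingrels = bms_add_member(above->varnullingrels, ojrti);

        Assert(!equal(above, below));   /* equal() Vars now mean equal values */
    }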
In the planner, we include these outer join bitmapsets into the relids
that an expression is considered to depend on, and in consequence also
add outer-join relids to the relids of join RelOptInfos. This allows
us to correctly perceive whether an expression can be calculated above
or below a particular outer join.
This change affects FDWs that want to plan foreign joins. They *must*
follow suit when labeling foreign joins in order to match with the
core planner, but for many purposes (if postgres_fdw is any guide)
they'd prefer to consider only base relations within the join.
To support both requirements, redefine ForeignScan.fs_relids as
base+OJ relids, and add a new field fs_base_relids that's set up by
the core planner.
Large though it is, this commit just does the minimum necessary to
install the new mechanisms and get check-world passing again.
Follow-up patches will perform some cleanup. (The README additions
and comments mention some stuff that will appear in the follow-up.)
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
This fixes a bug that, under some circumstances, would cause MERGE to
fail to properly recompute expressions for GENERATED STORED columns.
Formerly, ExecInitModifyTable() did not call ExecInitStoredGenerated()
for a MERGE command, which meant that the generated expressions
information was not computed until later, when the first merge action
was executed. However, if the first merge action to execute was an
UPDATE, then ExecInitStoredGenerated() could decide to skip some
generated columns, if the columns on which they depended were not
updated, which was a problem if the MERGE also contained an INSERT
action, for which no generated columns should be skipped.
So fix by having ExecInitModifyTable() call ExecInitStoredGenerated()
for MERGE, and assume that it isn't safe to skip any generated columns
in a MERGE. Possibly that could be relaxed, by allowing some generated
columns to be skipped for a MERGE without an INSERT action, but it's
not clear that it's worth the effort.
Noticed while investigating bug #17759. Back-patch to v15, where MERGE
was added.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/17759-e76d9bece1b5421c%40postgresql.org
Discussion: https://postgr.es/m/CAEZATCXb_ezoMCcL0tzKwRGA1x0oeE%3DawTaysRfTPq%2B3wNJn8g%40mail.gmail.com
Rename the developer option 'logical_decoding_mode' to the more flexible
name 'logical_replication_mode' because doing so will make it easier to
extend this option in the future to help test other areas of logical
replication.
Currently, it is used on the publisher side to allow streaming or
serializing each change in logical decoding. In an upcoming patch, we
are planning to use it on the subscriber side as well, where it will
allow serializing the changes to a file and notifying the parallel
apply workers to read and apply them at the end of the transaction.
We discussed exposing this parameter as a subscription option but
it did not seem advisable since it is primarily used for testing/debugging
and there is no other such parameter. We also discussed having separate
GUCs for publisher and subscriber but for current testing/debugging
requirements, one GUC is sufficient.
Author: Hou Zhijie
Reviewed-by: Peter Smith, Kuroda Hayato, Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoAy2c=Mx=FTCs+EwUsf2kQL5MmU3N18X84k0EmCXntK4g@mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
Split out "ConfigOptionIsVisible" to perform the privilege
check for GUC_SUPERUSER_ONLY GUCs (which these days can also
be read by pg_read_all_settings role members), and move the
should-we-show-it checks from GetConfigOptionValues to its
sole caller.
This commit also removes get_explain_guc_options's check of
GUC_NO_SHOW_ALL, which seems to have got cargo-culted in there.
While there's no obvious use-case for marking a GUC both
GUC_EXPLAIN and GUC_NO_SHOW_ALL, if it were set up that way
one would expect EXPLAIN to show it --- if that's not what
you want, then don't set GUC_EXPLAIN.
In passing, simplify the loop logic in show_all_settings.
Nitin Jadhav, Bharath Rupireddy, Tom Lane
Discussion: https://postgr.es/m/CAMm1aWYgfekpRK-Jz5=pM_bV+Om=ktGq1vxTZ_dr1Z6MV-qokA@mail.gmail.com
This includes a unification of the logic used to find the excludes file
and the typedefs file.
Also, remove the dangerous and deprecated feature where the first
non-option argument was taken as a typedefs file if it wasn't a .c or .h
file, remove some extraneous blank lines, and improve the documentation
somewhat.
9d9c02ccd introduced runConditions for window functions, which make
monotonic window function evaluation more efficient by stopping once
the window function's value passes some threshold that, due to its
monotonic nature, it can never return from. That commit added
prosupport functions
to inform the planner that row_number(), rank(), dense_rank() and some
forms of count(*) were monotonic. Here we add support for ntile(),
cume_dist() and percent_rank().
Reviewed-by: Melanie Plageman
Discussion: https://postgr.es/m/CAApHDvqR+VqB8s+xR-24bzJbU8xyFrBszJ17qKgECf7cWxLCaA@mail.gmail.com
pg_restore -l has always been able to read the TOC data of a dump even
if its binary has no support for compression, for both compressed and
uncompressed dumps. 5e73a60 introduced a backward-incompatible
behavior by switching a warning to a hard error in the code path reading
the header data of a dump, preventing the TOC items from being listed
when pg_restore -l, built without support for compression, is used on a
compressed dump. Most modern systems have support for zlib, but it is
also possible that somebody relies on the past behavior when copying
over a dump where the binaries are not built with zlib support (most
likely some WIN32 flavors these days, though most environments should
provide it).
There is no easy way to have a regression test for this pattern, as it
requires a mix of dump/restore commands with different compilation
options, with and without compression. One possibility I see here would
be to have a command-line option that enforces a non-compression check
for a build that supports compression, but that does not seem worth the
cost, either.
Reported-by: Justin Pryzby
Author: Georgios Kokolatos
Discussion: https://postgr.es/m/20230125180020.GF22427@telsasoft.com
We'd like to use TimestampDifferenceMilliseconds with the stop_time
possibly being TIMESTAMP_INFINITY, but up to now it's disclaimed
responsibility for overflow cases. Define it to clamp its output to
the range [0, INT_MAX], handling overflow correctly. (INT_MAX rather
than LONG_MAX seems appropriate, because the function is already
described as being intended for calculating wait times for WaitLatch
et al, and that infrastructure only handles waits up to INT_MAX.
Also, this choice gets rid of cross-platform behavioral differences.)
Having done that, we can replace some ad-hoc code in walreceiver.c
with a simple call to TimestampDifferenceMilliseconds.
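For instance, a wait-loop helper can now be as simple as this sketch
(wait_time_ms() is a hypothetical name, not from the commit):

    #include "postgres.h"
    #include "utils/timestamp.h"

    /* Hypothetical helper: milliseconds to wait before stop_time. */
    static long
    wait_time_ms(TimestampTz stop_time)
    {
        /*
         * The clamp guarantees a result in [0, INT_MAX]: 0 if stop_time has
         * already passed, INT_MAX if it is infinitely far away or the
         * difference overflows; safe to hand straight to WaitLatch().
         */
        return TimestampDifferenceMilliseconds(GetCurrentTimestamp(), stop_time);
    }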
While at it, fix some buglets in existing callers of
TimestampDifferenceMilliseconds: basebackup_copy.c had not read the
memo about TimestampDifferenceMilliseconds never returning a negative
value, and postmaster.c had not read the memo about Min() and Max()
being macros with multiple-evaluation hazards. Neither of these
quite seem worth back-patching.
Patch by me; thanks to Nathan Bossart for review.
Discussion: https://postgr.es/m/3126727.1674759248@sss.pgh.pa.us
Avoid having walreceiver code know explicitly about the precision
and underlying datatype of TimestampTz. (There is still one
calculation that knows that, which should be replaced with use of
TimestampDifferenceMilliseconds; but we need to figure out what to do
about overflow cases first.)
In support of this, provide a TimestampTzPlusSeconds macro, as well
as TIMESTAMP_INFINITY and TIMESTAMP_MINUS_INFINITY macros. (We could
have used the existing DT_NOEND and DT_NOBEGIN symbols, but I judged
those too opaque and confusing.)
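A sketch of the intended usage, assuming wal_receiver_timeout (in
milliseconds) as the deadline source; the next_wakeup() helper is
illustrative:

    #include "postgres.h"
    #include "utils/timestamp.h"

    extern int  wal_receiver_timeout;   /* GUC, in milliseconds */

    /* Hypothetical helper: when should the walreceiver wake up next? */
    static TimestampTz
    next_wakeup(TimestampTz last_recv)
    {
        /* e.g. a coarse 10-second ping deadline, via the new macro */
        TimestampTz ping_at = TimestampTzPlusSeconds(last_recv, 10);
        TimestampTz by_timeout;

        if (wal_receiver_timeout <= 0)
            return TIMESTAMP_INFINITY;  /* "never", without peeking at the
                                         * datatype's representation */
        by_timeout = TimestampTzPlusMilliseconds(last_recv,
                                                 wal_receiver_timeout);
        /* both are plain variables: no multiple-evaluation hazard for Min() */
        return Min(ping_at, by_timeout);
    }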
Move GetCurrentTimestamp calls so that it's more obvious that we
are not using stale values of "now" anyplace. This doesn't result
in net more calls, and might indeed make for net fewer.
Avoid having a dummy value in the WalRcvWakeupReason enum, so that
we can hope for the compiler to catch overlooked switch cases.
Nathan Bossart and Tom Lane
Discussion: https://postgr.es/m/20230125235004.GA1327755@nathanxps13
When auto_explain.log_verbose is on, auto_explain should log plans
equivalent to those produced by EXPLAIN (VERBOSE). However, when
compute_query_id is on, query identifiers were not showing up, as they
were only handled by EXPLAIN (VERBOSE). This brings auto_explain on par
with EXPLAIN in that regard. Note that, like EXPLAIN, auto_explain does
not show the query identifier when compute_query_id=regress.
The change is made so that the choice of printing the query identifier
happens in ExplainPrintPlan() rather than in ExplainOnePlan(), to avoid
duplicating the logic dealing with the query ID. auto_explain is the
only in-core caller of ExplainPrintPlan().
While looking at the area, I have noticed that more consolidation
between EXPLAIN and auto_explain would be in order for the logging of
the plan duration and the buffer usage. This refactoring is left as a
future change.
Author: Atsushi Torikoshi
Reviewed-by: Justin Pryzby, Julien Rouhaud
Discussion: https://postgr.es/m/1ea21936981f161bccfce05765c03bee@oss.nttdata.com
If the final chunk of an oversized tuple being written out to disk was
exactly 32760 bytes, it would be corrupted due to a fencepost bug.
Bug #17619. Back-patch to 11 where the code arrived.
While testing that (see test module in archives), I (tmunro) noticed
that the per-participant page counter was not initialized to zero as it
should have been. That wasn't a live bug when the code was written,
since DSM memory was originally always zeroed; but since 14,
min_dynamic_shared_memory might be configured, and it supplies
non-zeroed memory, so that is also fixed here.
Author: Dmitry Astapov <dastapov@gmail.com>
Discussion: https://postgr.es/m/17619-0de62ceda812b8b5%40postgresql.org
Eager freezing strategy avoids large build-ups of all-visible pages. It
makes VACUUM trigger page-level freezing whenever doing so will enable
the page to become all-frozen in the visibility map. This is useful for
tables that experience continual growth, particularly strict append-only
tables such as pgbench's history table. Eager freezing significantly
improves performance stability by spreading out the cost of freezing
over time, rather than doing most freezing during aggressive VACUUMs.
It complements the insert autovacuum mechanism added by commit b07642db.
VACUUM determines its freezing strategy based on the value of the new
vacuum_freeze_strategy_threshold GUC (or reloption) with logged tables.
Tables that exceed the size threshold use the eager freezing strategy.
Unlogged tables and temp tables always use eager freezing strategy,
since the added cost is negligible there. Non-permanent relations won't
incur any extra overhead in WAL written (for the obvious reason), nor in
pages dirtied (since any extra freezing will only take place on pages
whose PD_ALL_VISIBLE bit needed to be set either way).
VACUUM uses lazy freezing strategy for logged tables that fall under the
GUC size threshold. Page-level freezing triggers based on the criteria
established in commit 1de58df4, which added basic page-level freezing.
Eager freezing is strictly more aggressive than lazy freezing. Settings
like vacuum_freeze_min_age still get applied in just the same way in
every VACUUM, independent of the strategy in use. The only mechanical
difference between eager and lazy freezing strategies is that only the
former applies its own additional criteria to trigger freezing pages.
Note that even lazy freezing strategy will trigger freezing whenever a
page happens to have required that an FPI be written during pruning,
provided that the page will thereby become all-frozen in the visibility
map afterwards (due to the FPI optimization from commit 1de58df4).
The vacuum_freeze_strategy_threshold default setting is 4GB. This is a
relatively low setting that prioritizes performance stability. It will
be reviewed at the end of the Postgres 16 beta period.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com
We undefined them to avoid warnings about macro redefinitions. But we haven't
fully followed the necessary include order, since at least 147c248254, in
2011. Recently the combination of the include order rules not being followed
and undefining _POSIX_C_SOURCE started to cause a compile failure, starting
with 03023a2664. Undefining _POSIX_C_SOURCE hides clock_gettime(), which is
referenced in an inline function as of 03023a2664, whereas it was a macro
before.
After seeing some evidence that undefining _POSIX_C_SOURCE et al isn't
required, I tried to build postgres with plpython on most of our supported
platforms (except DragonFlyBSD and Illumos, but similar systems were tested),
with/without the #undefines. No compiler warning / behavioral difference.
The oldest supported python version, 3.2, defines _POSIX_C_SOURCE to 200112L
and _XOPEN_SOURCE to 600, whereas newer versions of python use 200809L/700
respectively. As _POSIX_C_SOURCE/_XOPEN_SOURCE will default to the newer
values on most modern platforms, it's possible that new warnings would be
emitted when using python 3.2 - but that seems acceptable.
It's possible that this approach won't work on some older platforms. But
getting rid of most of the include-order complexity seems promising, and it's
an easily revertible patch if we end up having to go another way.
Discussion: https://postgr.es/m/20230124165814.2njc7gnvubn2amh6@awork3.anarazel.de
Until now we undefined and then redefined a lot of *printf macros due to
worries about conflicts with Python.h macro definitions. Current Python.h
doesn't define any *printf macros, and older versions just defined snprintf,
vsnprintf, guarded by #if defined(MS_WIN32) && !defined(HAVE_SNPRINTF).
Thus we can replace the undefine/define section with a single
#define HAVE_SNPRINTF 1
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20230124165814.2njc7gnvubn2amh6@awork3.anarazel.de
Invent separate macros for "invalid" values of these types, so that
we needn't embed knowledge of their representations into calling code.
These are all zeroes anyway ATM, so this is not fixing any live bug,
but it makes the code cleaner and more future-proof.
I (tgl) also chose to move DSM_HANDLE_INVALID into dsm_impl.h,
since it seems like it should live beside the typedef for dsm_handle.
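A trivial sketch of the intent (the have_segment() helper is
illustrative):

    #include "postgres.h"
    #include "storage/dsm_impl.h"

    static bool
    have_segment(dsm_handle h)
    {
        /* compare against the macro, not against a bare zero */
        return h != DSM_HANDLE_INVALID;
    }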
Hou Zhijie, Nathan Bossart, Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/OS0PR01MB5716860B1454C34E5B179B6694C99@OS0PR01MB5716.jpnprd01.prod.outlook.com
Modify the new event loop code from commit 7389aad6 so that it checks
for work requested by signal handlers even if it doesn't see a latch
event yet.
This gives priority to shutdown and reload requests where the latch will
be reported later in the event array, or in a later call to
WaitEventSetWait(), due to scheduling details. In particular, this
guarantees that a SIGHUP-then-connect sequence (as seen in
authentication tests) causes the postmaster to process the reload before
accepting the connection. If the WaitEventSetWait() call saw the socket
as ready, and the reload signal was generated before the connection,
then the latest time the signal handler should be able to run is after
poll/epoll_wait/kevent returns but before we check the
pending_pm_reload_request flag.
While here, also shift the handling of child exit below reload requests,
per Tom Lane's observation that that might start new processes, so we
should make sure we pick up new settings first.
This probably explains the one-off failure of build farm animal
malleefowl.
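In sketch form (simplified; pending_pm_reload_request is the real flag
mentioned above, while the other flag, handler, and helper names here
are placeholders, not the actual postmaster.c identifiers):

    #include "postgres.h"
    #include <signal.h>
    #include "storage/latch.h"

    extern volatile sig_atomic_t pending_pm_reload_request;    /* real flag */
    extern volatile sig_atomic_t pending_child_exit;           /* placeholder */
    extern void handle_reload(void);                           /* placeholder */
    extern void handle_child_exit(void);                       /* placeholder */
    extern void accept_new_connection(WaitEvent *ev);          /* placeholder */

    static void
    server_loop_iteration(WaitEventSet *wait_set, long timeout)
    {
        WaitEvent   events[64];
        int         nevents;

        nevents = WaitEventSetWait(wait_set, timeout, events,
                                   lengthof(events), 0);

        /* signal-requested work first, even if no latch event was reported */
        if (pending_pm_reload_request)
            handle_reload();        /* pick up new settings... */
        if (pending_child_exit)
            handle_child_exit();    /* ...before possibly starting children */

        for (int i = 0; i < nevents; i++)
        {
            if (events[i].events & WL_SOCKET_ACCEPT)
                accept_new_connection(&events[i]);
        }
    }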
Reported-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/OS0PR01MB57163D3BF2AB42ECAA94E5C394C29%40OS0PR01MB5716.jpnprd01.prod.outlook.com
This makes two small changes that will improve pgindent's usefulness in
a git hook. First, it looks for the exclude file relative to the current
directory. And second, it applies the filters to filenames given on the
command line as well as those found in a directory sweep.
It might prove necessary to make further efforts to find the exclude
file, and even to allow multiple exclude files, but for now this should
be enough for most purposes.
Reviewed by Jelte Fennema
Previously, a CREATEROLE user without SUPERUSER could not alter
REPLICATION users in any way, and could not set the BYPASSRLS
attribute. However, they could manipulate the CREATEDB property
even if they themselves did not possess it.
With this change, a CREATEROLE user without SUPERUSER can set or
clear the REPLICATION, BYPASSRLS, or CREATEDB property on a new
role or a role that they have rights to manage if and only if
that property is set for their own role.
This implements the standard idea that you can't give permissions
you don't have (but you can give the ones you do have). We might
in the future want to provide more powerful ways to constrain
what a CREATEROLE user can do - for example, to limit whether
CONNECTION LIMIT can be set or the values to which it can be set -
but that is left as future work.
Patch by me, reviewed by Nathan Bossart, Tushar Ahuja, and Neha
Sharma.
Discussion: http://postgr.es/m/CA+TgmobX=LHg_J5aT=0pi9gJy=JdtrUVGAu0zhr-i5v5nNbJDg@mail.gmail.com
The drop database command waits for the logical replication sync worker
to accept ProcSignalBarrier, and the worker's slot creation waits for
the drop database to finish, which leads to a deadlock. This happens
because the tablesync worker holds interrupts while creating a slot.
We prevent cancel/die interrupts while creating a slot in the table sync
worker because it is possible that, before the server finishes this
command, a concurrent drop subscription happens, which would complete
without removing this slot, leaving the slot in place until the end of
the walsender. However, since the slot will eventually get dropped at
walsender exit, there is no danger of a dangling slot.
This patch reallows cancel/die interrupts while creating a slot and
modifies the test to wait for slots to become zero to prevent finding an
ephemeral slot.
The reported hang doesn't happen in PG14 as the drop database starts to
wait for ProcSignalBarrier with PG15 (commits 4eb2176318 and e2f65f4255)
but it is good to backpatch this till PG14 as it is not a good idea to
prevent interrupts during a network call that could block indefinitely.
Reported-by: Lakshmi Narayanan Sreethar
Diagnosed-by: Andres Freund
Author: Hou Zhijie
Reviewed-by: Vignesh C, Amit Kapila
Backpatch-through: 14, where it was introduced in commit 6b67d72b60
Discussion: https://postgr.es/m/CA+kvmZELXQ4ZD3U=XCXuG3KvFgkuPoN1QrEj8c-rMRodrLOnsg@mail.gmail.com
In contrast to the changes to dblink and postgres_fdw, this does not fix a
bug, as libpqwalreceiver already processed interrupts.
Besides reducing code duplication, the conversion means that libpqwalreceiver
now reserves file descriptors for libpq connections. While not strictly
required for the use in walreceiver, we also use libpqwalreceiver for
logical replication, where that does seem more important.
Even if we eventually decide to backpatch the prior commits, there'd be no
need to backpatch this commit, due to not fixing an active bug.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2@awork3.anarazel.de
Currently dblink and postgres_fdw don't process interrupts during connection
establishment. Besides preventing query cancellations etc, this can lead to
undetected deadlocks, as global barriers are not processed.
Libpqwalreceiver in contrast, processes interrupts during connection
establishment. The required code is not trivial, so duplicating it into
additional places does not seem like a good option.
These aforementioned undetected deadlocks are the reason for the spate of CI
test failures in the FreeBSD 'test_running' step.
For now the helper library is just a header, as it needs to be linked into
each extension using libpq, and it seems too small to be worth adding a
dedicated static library for.
The conversions to the helper are done in subsequent commits.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2@awork3.anarazel.de
When libpqrcv_connect (also known as walrcv_connect()) failed, it leaked the
libpq connection. In most paths that's fairly harmless, as the calling process
will exit soon after. But e.g. CREATE SUBSCRIPTION could lead to a somewhat
longer lived leak.
Fix by releasing resources, including the libpq connection, on error.
Add a test exercising the error code path. To make it reliable and safe, the
test tries to connect to port=-1, which happens to fail during connection
establishment, rather than during connection string parsing.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20230121011237.q52apbvlarfv6jm6@awork3.anarazel.de
Backpatch: 11-
b762fed64 recently changed this test to prevent subquery pullup to allow
us to test Memoize with lateral_vars. As pointed out by Tom Lane, OFFSET
0 is our standard way of preventing subquery pullups, so do it that way
instead.
Discussion: https://postgr.es/m/2144818.1674517061@sss.pgh.pa.us
Backpatch-through: 14, same as b762fed64
The test in question was meant to be testing Memoize to ensure it worked
correctly when the inner side of the join contained lateral vars.
However, nothing in the lateral subquery stopped it from being pulled up
into the main query, so the planner did that, and that meant no more
lateral vars.
Here we add a simple ORDER BY to stop the planner from being able to
pull up the lateral subquery.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_LHJaN4L-tXpKMiPFnsCJWU1P8Xh59o0W7AA6UN99=cQ@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
7fcbf6a and 2ff6555 changed the function signature of XLogPageRead()
but did not update the comment.
XLogReaderRoutine contains up-to-date information about the API, so there
is no need to repeat all that at XLogPageRead(); just fix the mentions of
the no-longer-existing function arguments.
This enhances the numeric type input function, adding support for
hexadecimal, octal, and binary integers of any size, up to the limits
of the numeric type.
Since 6fcda9aba8, such non-decimal integers have been accepted by the
parser as integer literals and passed through to numeric_in(). This
commit gives numeric_in() the ability to handle them.
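As a hypothetical smoke test (not part of the commit), the new behavior
can be exercised through the fmgr interface:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/fmgrprotos.h"
    #include "utils/numeric.h"

    /* "0b101010" is binary for 42; numeric_in now parses it directly. */
    static Numeric
    forty_two(void)
    {
        Datum       d = DirectFunctionCall3(numeric_in,
                                            CStringGetDatum("0b101010"),
                                            ObjectIdGetDatum(InvalidOid),
                                            Int32GetDatum(-1));

        return DatumGetNumeric(d);
    }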
While at it, simplify the handling of NaN and infinities, reducing the
number of calls to pg_strncasecmp(), and arrange for pg_strncasecmp()
to not be called at all for regular numbers. This gives a significant
performance improvement for decimal inputs, more than offsetting the
small performance hit of checking for non-decimal input.
Discussion: https://postgr.es/m/CAEZATCV8XShnmT9HZy25C%2Bo78CVOFmUN5EM9FRAZ5xvYTggPMg%40mail.gmail.com
At least on my machine, the initial coding of this didn't actually
work, because interpolation of "$post_fh->filename" doesn't act
as intended.
I threw in some double quotes too, just in case anybody tries
to run this in a path containing spaces.
Historically we skipped writing/reading this field, but that no
longer works under WRITE_READ_PARSE_PLAN_TREES since we expanded
the coverage of that option to include utility commands (787102b56).
Remove the special case and just treat this field normally.
Bump catversion out of an abundance of caution --- I do not think
we currently ever store RangeVar nodes in the catalogs, but
perhaps I'm wrong.
Per report from Pavel Stehule.
Discussion: https://postgr.es/m/CAFj8pRAYvYu-qU7-NieqRRyaQZk-yr3UjtHQ2LR62PS9M1dZMA@mail.gmail.com
This adds two modes of running pgindent, neither of which results in
any changes being made to the source code. The --show-diff option shows
what changes would have been made, and the --silent-diff option just
exits with a status of 2 if any changes would be made. The second of
these is intended for scripting use in places such as git hooks.
Along the way some code cleanup is done, and a --help option is also
added.
Reviewed by Tom Lane
Discussion: https://postgr.es/m/c9c9fa6d-6de6-48c2-4f8b-0fbeef026439@dunslane.net
An upcoming patch by Melanie Plageman does some refactoring work in this
area. Run pgindent on that file now before making any changes so that
it's easier to maintain/evolve each of the individual patches doing the
refactor work. Additionally, add a few new required typedefs to the list
to make it easier to do future pgindent runs on this file during the
refactor work.
Discussion: https://postgr.es/m/CAAKRu_YSOnhKsDyFcqJsKtBSrd32DP-jjXmv7hL0BPD-z0TGXQ@mail.gmail.com
Commit c4649cce39 removed the "shared" and "ntapes" arguments, but the
comment still talked about "shared". It also talked about "a shared
file handle", which was technically correct because even before commit
c4649cce39, the "shared file handle" referred to the "fileset"
argument, not "shared". But it was very confusing. Improve the
comment.
Also add a comment on what the "preallocate" argument does.
Backpatch to v15, just to make backpatching other patches easier in
the future.
Discussion: https://www.postgresql.org/message-id/af989685-91d5-aad4-8f60-1d066b5ec309@enterprisedb.com
Reviewed-by: Peter Eisentraut
Per buildfarm member mandrill, it seems that
max_parallel_workers_per_gather may not always be set to the default value
of 2 when the new test added in 16fd03e95 is executed. Here let's just
explicitly set that to 2 so that the planner never opts to use more than
that many parallel workers.
This adds combine, serial and deserial functions for the array_agg() and
string_agg() aggregate functions, thus allowing these aggregates to
partake in partial aggregations. This allows both parallel aggregation to
take place when these aggregates are present and also allows additional
partition-wise aggregation plan shapes to include plans that require
additional aggregation once the partially aggregated results from the
partitions have been combined.
Author: David Rowley
Reviewed-by: Andres Freund, Tomas Vondra, Stephen Frost, Tom Lane
Discussion: https://postgr.es/m/CAKJS1f9sx_6GTcvd6TMuZnNtCh0VhBzhX6FZqw17TgVFH-ga_A@mail.gmail.com
Enforce wal_retrieve_retry_interval on a per-subscription basis,
rather than globally, and arrange to skip that delay in case of
an intentional worker exit. This probably makes little difference
in the field, where apply workers wouldn't be restarted often;
but it has a significant impact on the runtime of our logical
replication regression tests (even though those tests use
artificially-small wal_retrieve_retry_interval settings already).
Nathan Bossart, with mostly-cosmetic editorialization by me
Discussion: https://postgr.es/m/20221122004119.GA132961@nathanxps13
The motivation for this change is that when pg_dump dumps a
partitioned index that's marked REPLICA IDENTITY, it generates a
command sequence that applies REPLICA IDENTITY before the partitioned
index has been marked valid, causing restore to fail. We could
perhaps change pg_dump to not do it like that, but that would be
difficult and would not fix existing dump files with the problem.
There seems to be very little reason for the backend to disallow
this anyway --- the code ignores indisreplident when the index
isn't valid --- so instead let's fix it by allowing the case.
Commit 9511fb37a previously expressed a concern that allowing
indisreplident to be set on invalid indexes might allow us to
wind up in a situation where a table could have indisreplident
set on multiple indexes. I'm not sure I follow that concern
exactly, but in any case the only way that could happen is because
relation_mark_replica_identity is too trusting about the existing set
of markings being valid. Let's just rip out its early-exit code path
(which sure looks like premature optimization anyway; what are we
doing expending code to make redundant ALTER TABLE ... REPLICA
IDENTITY commands marginally faster and not-redundant ones marginally
slower?) and fix it to positively guarantee that no more than one
index is marked indisreplident.
The pg_dump failure can be demonstrated in all supported branches,
so back-patch all the way. I chose to back-patch 9511fb37a as well,
just to keep indisreplident handling the same in all branches.
Per bug #17756 from Sergey Belyashov.
Discussion: https://postgr.es/m/17756-dd50e8e0c8dd4a40@postgresql.org
When the length was too short, the server read outside the allocation.
That yielded the same log noise as sending the correct length with
(backendPID,cancelAuthCode) matching nothing. Change to a message about
the unexpected length. Given the attacker's lack of control over the
memory layout and the general lack of diversity in memory layouts at the
code in question, we doubt a would-be attacker could cause a segfault.
Hence, while the report arrived via security@postgresql.org, this is not
a vulnerability. Back-patch to v11 (all supported versions).
Andrey Borodin, reviewed by Tom Lane. Reported by Andrey Borodin.
Until now we used struct timespec for instr_time on all platforms but
windows. Using struct timespec incurs a fair bit of memory overhead (struct
timespec is 16 bytes) and runtime overhead (much more complicated additions).
Instead we can convert the time to nanoseconds in INSTR_TIME_SET_CURRENT(),
making the remaining operations cheaper.
Representing time as int64 nanoseconds provides sufficient range, ~292 years
relative to a starting point (depending on clock source, relative to the unix
epoch or the system's boot time). That'd not be sufficient for calendar time
stored on disk, but is plenty for runtime interval time measurement.
On windows instr_time already is represented as cycles. It might make sense to
represent time as cycles on other platforms as well, as using cycle
acquisition instructions like rdtsc directly can reduce the overhead of time
acquisition substantially. This could be done in a fairly localized manner as
the code stands after this commit.
Because the windows and non-windows paths are now more similar, use a common
set of macros. To make that possible, most of the use of LARGE_INTEGER had to
be removed, which looks nicer anyway.
To avoid users of the API relying on the integer representation, we wrap the
64-bit integer inside struct instr_time.
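Usage through the macros is unchanged; a sketch (do_work() is a
stand-in for whatever is being timed):

    #include "postgres.h"
    #include "portability/instr_time.h"

    extern void do_work(void);      /* stand-in for the code being timed */

    static void
    timed_work(void)
    {
        instr_time  start,
                    duration;

        INSTR_TIME_SET_CURRENT(start);
        do_work();
        INSTR_TIME_SET_CURRENT(duration);
        INSTR_TIME_SUBTRACT(duration, start);   /* now a plain int64 subtraction */

        elog(LOG, "work took %.3f ms", INSTR_TIME_GET_MILLISEC(duration));
    }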
Author: Andres Freund <andres@anarazel.de>
Author: Lukas Fittl <lukas@fittl.com>
Author: David Geier <geidav.pg@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20230113195547.k4nlrmawpijqwlsa@awork3.anarazel.de
These are all not necessary from a correctness POV. However, in the near
future instr_time will be simplified to an int64, at which point gcc would
otherwise start to warn about the changed places.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20230116023639.rn36vf6ajqmfciua@awork3.anarazel.de
This is similar to 835d476, except that this one adds node attributes
related to query jumbling and avoids long lines in the headers and in
the node structures changed by this commit.
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/Y5BHOUhX3zTH/ig6@paquier.xyz
This will ease a follow-up move that will generate this code
automatically. The C file is renamed, for consistency with the
node-related files whose code is generated by gen_node_support.pl:
- queryjumble.c -> queryjumblefuncs.c
- utils/queryjumble.h -> nodes/queryjumble.h
Per a suggestion from Peter Eisentraut.
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/Y5BHOUhX3zTH/ig6@paquier.xyz
This provides a way to reserve connection slots for non-superusers.
The slots reserved via the new GUC are available only to users who
have the new predefined role pg_use_reserved_connections.
superuser_reserved_connections remains as a final reserve in case
reserved_connections has been exhausted.
Patch by Nathan Bossart. Reviewed by Tushar Ahuja and by me.
Discussion: http://postgr.es/m/20230119194601.GA4105788@nathanxps13
This is in preparation for adding a new reserved_connections GUC,
but aligning the GUC name with the variable name is also a good
idea on general principle.
Patch by Nathan Bossart. Reviewed by Tushar Ahuja and by me.
Discussion: http://postgr.es/m/20230119194601.GA4105788@nathanxps13
Commit ea92368cd1 made max_wal_senders
a separate pool of backends from max_connections, but the documentation
and error message for superuser_reserved_connections weren't updated
at the time, and as a result are somewhat misleading. Update.
This is arguably a back-patchable bug fix, but because it seems quite
minor, no back-patch.
Patch by Nathan Bossart. Reviewed by Tushar Ahuja and by me.
Discussion: http://postgr.es/m/20230119194601.GA4105788@nathanxps13
pg_hba.conf has long had support for non-literal username matches, and
this commit extends the capabilities that are supported for the
PostgreSQL user listed in an ident entry of pg_ident.conf, with
support for:
1. The "all" keyword, where all the requested users are allowed.
2. Membership checks using the + prefix.
3. Using a regex to match against multiple roles.
1. is a feature that has been requested by Jelte Fennema, 2. is
something that has been mentioned independently by Andrew Dunstan, and
3. is something I came up with while discussing how to extend the first
one, whose implementation is facilitated by 8fea868.
This allows matching certain system users against many different
postgres users with a single line in pg_ident.conf. Without this, one
would need one line for each of the postgres users that a system user
can log in as, which can be cumbersome to maintain.
Tests are added to the TAP test of peer authentication to provide
coverage for all that.
Note that this introduces a set of backward-incompatible changes to be
able to detect the new patterns, for the following cases:
- A role named "all".
- A role prefixed with '+' characters, which is something that would not
have worked in HBA entries anyway.
- A role prefixed by a slash character, similarly to 8fea868.
Any of these can still be handled by using quotes in the Postgres
role defined in an ident entry.
A huge advantage of this change is that the code applies the same checks
for the Postgres roles in HBA and ident entries, via the common routine
check_role().
**This compatibility change should be mentioned in the release notes.**
Author: Jelte Fennema
Discussion: https://postgr.es/m/DBBPR83MB0507FEC2E8965012990A80D0F7FC9@DBBPR83MB0507.EURPRD83.prod.outlook.com
If the public schema has a non-default owner (perhaps due to
dropping and recreating it) then use of pg_dump's "--if-exists"
option results in a warning message:
warning: could not find where to insert IF EXISTS in statement "-- *not* dropping schema, since initdb creates it"
This is harmless since the dump output is the same either way,
but nonetheless it's undesirable. It's the fault of commit
a7a7be1f2, which created situations where a TOC entry's "defn"
or "dropStmt" fields could be just comments. Although that
commit fixed up the kluges in pg_backup_archiver.c that munge defn
strings, it missed doing so for the one that munges dropStmts.
Per bug #17753 from Justin Zhang.
Discussion: https://postgr.es/m/17753-9c8773631747ee1c@postgresql.org
This adjusts a few places which were appending a string constant
containing spaces onto a StringInfo. We have appendStringInfoSpaces for
that job, so let's use that instead.
For the change to jsonb.c's add_indent() function, appendStringInfoString
was being called inside a loop to append 4 spaces on each loop. This
meant that enlargeStringInfo would get called once per loop. Here it
should be much more efficient to get rid of the loop and just calculate
the number of spaces with "level * 4" and just append all the spaces in
one go.
Here we additionally adjust the appendStringInfoSpaces function so it
makes use of memset rather than a while loop to apply the required spaces
to the StringInfo. One of the problems with the while loop was that it
was incrementing one variable and decrementing another variable once per
loop. That's more work than what's required to get the job done. We may
as well use memset for this rather than trying to optimize the existing
loop. Some testing has shown memset is faster even for very small sizes.
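The resulting function is roughly of this shape (a sketch close to, but
not necessarily identical with, the committed code):

    #include "postgres.h"
    #include "lib/stringinfo.h"

    void
    appendStringInfoSpaces(StringInfo str, int count)
    {
        if (count > 0)
        {
            /* enlarge once, rather than once per appended chunk */
            enlargeStringInfo(str, count);
            /* memset wins over a per-character loop even for small counts */
            memset(&str->data[str->len], ' ', count);
            str->len += count;
            str->data[str->len] = '\0';     /* keep the string terminated */
        }
    }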
Discussion: https://postgr.es/m/CAApHDvp_rKkvwudBKgBHniNRg67bzXVjyvVKfX0G2zS967K43A@mail.gmail.com
This patch largely reverts what I did in commits c9b0c678d and
78e73e875. The maximum cover length limit that I added in 78e73e875
(to band-aid over c9b0c678d's performance issues) creates too many
user-visible behavior discrepancies, as complained of for example in
bug #17691. The real problem with hlCover() is not what I thought
at the time, but more that it seems to have been designed with only
AND tsquery semantics in mind. It doesn't work quite right for OR,
and even less so for NOT or phrase queries. However, we can improve
that situation by building a variant of TS_execute() that returns a
list of match locations. We already get an ExecPhraseData struct
representing match locations for the primitive case of a simple match,
as well as one for a phrase match; we just need to add some logic to
combine these for AND and OR operators. The result is a list of
ExecPhraseDatas, which hlCover can regard as having simple AND
semantics, so that its old algorithm works correctly.
There's still a lot not to like about ts_headline's behavior, but
I think the remaining issues have to do with the heuristics used
in mark_hl_words and mark_hl_fragments (which, likewise, were not
revisited when phrase search was added). Improving those is a task
for another day.
Patch by me; thanks to Alvaro Herrera for review.
Discussion: https://postgr.es/m/840.1669405935@sss.pgh.pa.us
When ending recovery based on recovery_target_xid matching with
recovery_target_inclusive = off, we printed an incorrect timestamp
(always 2000-01-01) in the "recovery stopping before ... transaction"
log message. This is a consequence of sloppy refactoring in
c945af80c: the code to fetch recordXtime out of the commit/abort
record used to be executed unconditionally, but it was changed
to get called only in the RECOVERY_TARGET_TIME case. We need only
flip the order of operations to restore the intended behavior.
Per report from Torsten Förtsch. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CAKkG4_kUevPqbmyOfLajx7opAQk6Cvwkvx0HRcFjSPfRPTXanA@mail.gmail.com
RelOptInfo.userid is the same for all relations in a given inheritance
tree, so the code in examine_variable() and examine_simple_variable()
that repeats the ACL checks on the root parent rel instead of a given
leaf child relation need not recompute userid either.
Author: Amit Langote <amitlangote09@gmail.com>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20221210201753.GA27893@telsasoft.com
This has been wrong since 88e9823, which switched the WAL sizing
configuration from checkpoint_segments to min_wal_size and
max_wal_size. That change missed recalculating, on reload for example,
the internal value CheckPointSegments, which works as a mapping of the
old GUC checkpoint_segments and controls the timing of checkpoints
depending on the volume of WAL generated.
Most users tend to leave checkpoint_completion_target at 0.9 to smooth
the I/O workload, which is probably why this has gone unnoticed for so
long; still, it can be useful to tweak and reload the value dynamically
in some cases to control the timing of checkpoints.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACXgPPAm28mruojSBno+F_=9cTOOxHAywu_dfZPeBdybQw@mail.gmail.com
Backpatch-through: 11
No buildfarm members have reported that yet, but a recently-refreshed
Debian host did.
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/Y8ey5z4Nav62g4/K@paquier.xyz
Backpatch-through: 11
Turns out the compression.sql test creates a view that needs
to be adjusted in the wake of 47bb9db75 --- except that without
--with-lz4, it fails to create the view at all, so I'd not
noticed this in testing.
Per buildfarm member crake.
In c8ad4d8166 dlist_member_check()'s arguments were made const. Unfortunately
the implementation of dlist_member_check() used dlist_foreach(), which
currently doesn't work for const lists.
As a workaround, open-code the list iteration. The other check functions
already do so.
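The open-coded form is roughly (a sketch of the workaround, not
necessarily the exact committed code):

    #include "postgres.h"
    #include "lib/ilist.h"

    static void
    dlist_member_check(const dlist_head *head, const dlist_node *node)
    {
        const dlist_node *cur;

        /* open-coded dlist_foreach(), which a const list permits */
        for (cur = head->head.next; cur != &head->head; cur = cur->next)
        {
            if (cur == node)
                return;
        }
        elog(ERROR, "list member check failed");
    }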
Discussion: https://postgr.es/m/20230118182214.co7dp4oahiunwg57@awork3.anarazel.de
The rule system needs "old" and/or "new" pseudo-RTEs in rule actions
that are ON INSERT/UPDATE/DELETE. Historically it's put such entries
into the ON SELECT rules of views as well, but those are really quite
vestigial. The only thing we've used them for is to carry the
view's relid forward to AcquireExecutorLocks (so that we can
re-lock the view to verify it hasn't changed before re-using a plan)
and to carry its relid and permissions data forward to execution-time
permissions checks. What we can do instead of that is to retain
these fields of the RTE_RELATION RTE for the view even after we
convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of
extra complication in the planner and AcquireExecutorLocks, but on
the other hand we can get rid of the logic that moves that data from
one place to another.
The principal immediate benefit of doing this, aside from a small
saving in the pg_rewrite data for views, is that these pseudo-RTEs
no longer trigger ruleutils.c's heuristic about qualifying variable
names when the rangetable's length is more than 1. That results
in quite a number of small simplifications in regression test outputs,
which are all to the good IMO.
Bump catversion because we need to dump a few more fields of
RTE_SUBQUERY RTEs. While those will always be zeroes anyway in
stored rules (because we'd never populate them until query rewrite)
they are useful for debugging, and it seems like we'd better make
sure to transmit such RTEs accurately in plans sent to parallel
workers. I don't think the executor actually examines these fields
after startup, but someday it might.
This is a second attempt at committing 1b4d280ea. The difference
from the first time is that now we can add some filtering rules to
AdjustUpgrade.pm to allow cross-version upgrade testing to pass
despite all the cosmetic changes in CREATE VIEW outputs.
Amit Langote (filtering rules by me)
Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
Avoid explicitly grouping by columns that we know are redundant
for sorting, for example we need group by only one of x and y in
SELECT ... WHERE x = y GROUP BY x, y
This comes up more often than you might think, as shown by the
changes in the regression tests. It's nearly free to detect too,
since we are just piggybacking on the existing logic that detects
redundant pathkeys. (In some of the existing plans that change,
it's visible that a sort step preceding the grouping step already
didn't bother to sort by the redundant column, making the old plan
a bit silly-looking.)
To do this, build processed_groupClause and processed_distinctClause
lists that omit any provably-redundant sort items, and consult those,
not the originals, where relevant. This means that within the
planner, one should usually consult root->processed_groupClause or
root->processed_distinctClause if one wants to know which columns
are to be grouped on; but to check whether grouping or distinct-ing
is happening at all, check non-NIL-ness of parse->groupClause or
parse->distinctClause. This is comparable to longstanding rules
about handling the HAVING clause, so I don't think it'll be a huge
maintenance problem.
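In sketch form, the planner-side rule looks like this (the helper names
are illustrative):

    #include "postgres.h"
    #include "nodes/pathnodes.h"

    static bool
    grouping_requested(PlannerInfo *root)
    {
        /* is grouping/distinct happening at all?  Check the originals. */
        return root->parse->groupClause != NIL ||
               root->parse->distinctClause != NIL;
    }

    static List *
    columns_to_group_on(PlannerInfo *root)
    {
        /* which columns to group on?  Consult the processed list. */
        return root->processed_groupClause;
    }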
nodeAgg.c also needs minor mods, because it's now possible to generate
AGG_PLAIN and AGG_SORTED Agg nodes with zero grouping columns.
Patch by me; thanks to Richard Guo and David Rowley for review.
Discussion: https://postgr.es/m/185315.1672179489@sss.pgh.pa.us
Add leader_pid to pg_stat_subscription. leader_pid is the process ID of
the leader apply worker if this process is a parallel apply worker. If
this field is NULL, it indicates that the process is a leader apply
worker or a synchronization worker. The new column makes it easier to
distinguish parallel apply workers from other kinds of workers and helps
to identify the leader for the parallel workers corresponding to a
particular subscription.
Additionally, update the leader_pid column in pg_stat_activity as well to
display the PID of the leader apply worker for parallel apply workers.
Author: Hou Zhijie
Reviewed-by: Peter Smith, Sawada Masahiko, Amit Kapila, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
Presently, restore_command uses a different code path than
archive_cleanup_command and recovery_end_command. These code paths
are similar and can be easily combined, as long as it is possible to
identify if a command should:
- Issue a FATAL on signal.
- Exit immediately on SIGTERM.
While on it, this removes src/common/archive.c and its associated
header. Since the introduction of c96de2c, BuildRestoreCommand() has
become a simple wrapper of replace_percent_placeholders() able to call
make_native_path(). This simplifies shell_restore.c as long as
RestoreArchivedFile() includes a call to make_native_path().
Author: Nathan Bossart
Reviewed-by: Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/20221227192449.GA3672473@nathanxps13
This makes sure that the internal logic of these functions does not
attempt to change the values of the constified arguments, and it removes
one unconstify() in basic_archive.c.
Author: Nathan Bossart
Reviewed-by: Andrew Dunstan, Peter Eisentraut
Discussion: https://postgr.es/m/20230114231126.GA2580330@nathanxps13
It's likely worth adding some automated way of preventing further
omissions. We're discussing how to best do that.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20230117173509.GV9837@telsasoft.com
test_extensions' test_ext_cine extension has the same upgrade hazard
as test_ext7: the regression test leaves it in an updated state
from which no downgrade path to default is provided. This causes
the update_extensions.sql script helpfully provided by pg_upgrade
to fail. So drop it in cross-version-upgrade testing.
Not entirely sure how come I didn't hit this in testing yesterday;
possibly I'd built the upgrade reference databases with
testmodules-install-check disabled.
Backpatch to v10 where this module was introduced.
Instead of half a dozen mostly-duplicate conditional branches,
write one common one that can handle most catalogs. We already have
all the information we need, such as which system catalog corresponds
to which catalog table and which column is the ACL column.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/504bc485-6bd6-dd1b-fe10-e7351aeb310d@enterprisedb.com
Remove some code guarded by IS_MINUS() or IS_PLUS(), where the entire
stanza is inside an else-block where both of these are false. This
should slightly improve test coverage.
While at it, remove coding that apparently assumes that unsetting a
bit is so expensive that we have to check whether it's already set
first.
Per Coverity report from Ranier Vilela
Analysis and review by Justin Pryzby
Discussion: https://www.postgresql.org/message-id/20221223010818.GP1153%40telsasoft.com
The code that decides the apply action failed to handle non-transactional
messages, and we didn't catch it in our testing, as such messages are
currently simply ignored by the apply worker. This was introduced by
changes in commit 216a784829.
While testing this, I noticed that we forgot to reset stream_xid after
processing the stream stop message which could also result in the wrong
apply action after the fix for non-transactional messages.
In passing, change assert to elog for unexpected apply action in some of
the routines so as to catch the problems in the production environment, if
any.
Reported-by: Tomas Vondra
Author: Amit Kapila
Reviewed-by: Tomas Vondra, Sawada Masahiko, Hou Zhijie
Discussion: https://postgr.es/m/984ff689-adde-9977-affe-cd6029e850be@enterprisedb.com
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
Entries of pg-user in pg_ident.conf that are quoted and include '\1'
allow a replacement from a subexpression in a system user regexp. This
commit adds a test to track this behavior and a note in the
documentation, as it could be affected by the use of an AuthToken for
the pg-user in the IdentLines parsed.
This subject has come up in the discussion aimed at extending the
support of pg-user in ident entries for more patterns.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQRNow4MwkBjgPxywXdJU_K3a9+Pm78JB7De3yQwwkTDew@mail.gmail.com
In 1349d2790, we gave the planner the ability to provide ORDER BY/DISTINCT
Aggrefs with presorted input so that nodeAgg would not have to perform
sorts during execution. That commit failed to properly consider the
implications of the Aggref having a volatile function in its ORDER
BY/DISTINCT clause. As it happened, this resulted in an ERROR about the
volatile function being missing from the targetlist.
Here, instead of adding the volatile function to the targetlist, we just
never consider an Aggref with a volatile function in its ORDER BY/DISTINCT
clause when choosing which Aggrefs to sort by. The reasoning: if we chose
a plan that provided such aggregates with presorted input, and there were
many aggregates that could all share the same sort order, it could be
surprising if they all shared the same sort sometimes but not at other
times, depending on which other set of aggregates were given presorted
results. We avoid this inconsistency by just never providing these
volatile-function aggregates with presorted input.
Reported-by: Dean Rasheed
Discussion: https://postgr.es/m/CAEZATCWETioXs5kY8vT6BVguY41_wD962VDk=u_Nvd7S1UXzuQ@mail.gmail.com
To test pg_upgrade across major PG versions, we have to be able to
modify or drop any old objects with no-longer-supported properties,
and we have to be able to deal with cosmetic changes in pg_dump output.
Up to now, the buildfarm and pg_upgrade's own test infrastructure had
separate implementations of the former, and we had nothing but very
ad-hoc rules for the latter (including an arbitrary threshold on how
many lines of unchecked diff were okay!). This patch creates a Perl
module that can be shared by both those use-cases, and adds logic
that deals with pg_dump output diffs in a much more tightly defined
fashion.
This largely supersedes previous efforts in commits 0df9641d3,
9814ff550, and 62be9e4cd, which developed a SQL-script-based solution
for the task of dropping old objects. There was nothing fundamentally
wrong with that work in itself, but it had no basis for solving the
output-formatting problem. The most plausible way to deal with
formatting is to build a Perl module that can perform editing on the
dump files; and once we commit to that, it makes more sense for the
same module to also embed the knowledge of what has to be done for
dropping old objects.
Back-patch versions of the helper module as far as 9.2, to
support buildfarm animals that still test that far back.
It's also necessary to back-patch PostgreSQL/Version.pm,
because the new code depends on that. I fixed up pg_upgrade's
002_pg_upgrade.pl in v15, but did not look into back-patching
it further than that.
Tom Lane and Andrew Dunstan
Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
Tighten up the way that visibilitymap_set() is called: request that both
the all-visible and all-frozen bits get set whenever the all-frozen bit
is set, regardless of what we think we know about the present state of
the all-visible bit. Also make sure that the page level PD_ALL_VISIBLE
flag is set in the same code path.
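Schematically, the tightened call site looks like this (a hedged sketch;
the function wrapper is illustrative, and the callers are assumed to set
things up as in vacuumlazy.c):

    #include "postgres.h"
    #include "access/visibilitymap.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "utils/rel.h"

    static void
    mark_page_all_frozen(Relation rel, BlockNumber blkno, Buffer buf,
                         Page page, Buffer vmbuffer, TransactionId cutoff_xid)
    {
        /* request both bits together, and keep the page-level flag in sync */
        PageSetAllVisible(page);
        MarkBufferDirty(buf);
        visibilitymap_set(rel, blkno, buf, InvalidXLogRecPtr, vmbuffer,
                          cutoff_xid,
                          VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN);
    }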
In practice there doesn't seem to be a concrete scenario in which the
previous approach could lead to inconsistencies. It was almost possible
in scenarios involving concurrent HOT updates from transactions that
abort, but (unlike pruning) freezing can never remove XIDs > VACUUM's
OldestXmin, even those from transactions that are known to have aborted.
That was protective here.
These issues have been around since commit a892234f83, which added the
all-frozen bit to the VM fork. There is no known live bug here, so no
backpatch.
In passing, add some defensive assertions to catch the issue, and stop
reading the existing state of the VM when setting the VM in VACUUM's
final heap pass. We already know that affected pages must have had at
least one LP_DEAD item before we set it LP_UNUSED, so there is no
point in reading the VM when setting it in this code path.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WznuNGSzF8v6OsgjaC5aYsb3cZ6HW6MLm30X0d65cmSH6A@mail.gmail.com
Most callers of BufFileRead() want to check whether they read the full
specified length. Checking this at every call site is very tedious.
This patch provides additional variants BufFileReadExact() and
BufFileReadMaybeEOF() that include the length checks.
I considered changing BufFileRead() itself, but this function is also
used in extensions, and so changing the behavior like this would
create a lot of problems there. The new names are analogous to the
existing LogicalTapeReadExact().
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f3501945-c591-8cc3-5ef0-b72a2e0eaa9c@enterprisedb.com
The code specific to the execution of archive_cleanup_command,
recovery_end_command and restore_command is moved to a new file named
shell_restore.c. The code is split into three functions:
- shell_restore(), that attempts the execution of a shell-based
restore_command.
- shell_archive_cleanup(), for archive_cleanup_command.
- shell_recovery_end(), for recovery_end_command.
This introduces no functional changes; the failure patterns and the
logs generated as a consequence are the same as before (one case
actually generates one fewer DEBUG2 message "could not restore" when a
restore command succeeds but the follow-up stat() to check the size
fails, but that only matters with an elevel high enough).
This is preparatory work for allowing recovery modules, a facility
similar to archive modules, with callbacks shaped similarly to the
functions introduced here.
Author: Nathan Bossart
Reviewed-by: Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/20221227192449.GA3672473@nathanxps13
While system_user was stored as an AuthToken in IdentLine, pg_user was
stored as a plain string. This commit changes the code so that we
start storing pg_user as an AuthToken too.
This does not have any functional changes, as all the operations on
pg_user only use the string from the AuthToken. There is no regexp
compiled and no check based on its quoting, yet. This is in
preparation for more features that intend to extend its capabilities,
like support for regexps and group membership.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQRNow4MwkBjgPxywXdJU_K3a9+Pm78JB7De3yQwwkTDew@mail.gmail.com
int2vectorin limited the number of array elements it'd take to
FUNC_MAX_ARGS, which is probably fine for the traditional use-cases.
But now that pg_publication_rel.prattrs is an int2vector, it's not
fine at all: it's easy to construct cases where that can have up to
about MaxTupleAttributeNumber entries. Trying to replicate such
tables leads to logical-replication failures.
As long as we have to touch this code anyway, let's just remove
the a-priori limit altogether, and let it accept any size that'll
be allowed by repalloc. (Note that since int2vector isn't toastable,
we cannot store arrays longer than about BLCKSZ/2; but there is no
good excuse for letting int2vectorin depend on that. Perhaps we
will lift the no-toast restriction someday.)
While at it, also improve the equivalent logic in oidvectorin.
I don't know of any practical use-case for long oidvectors right
now, but doing it right actually makes the code shorter.
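For reference, the input format whose length limit is being lifted
here is a simple space-separated list of integers, for example:
SELECT '123 456 789'::int2vector;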
Per report from Erik Rijkers. Back-patch to v15 where
pg_publication_rel.prattrs was added.
Discussion: https://postgr.es/m/668ba539-33c5-8190-ca11-def2913cb94b@xs4all.nl
In commit 8bf6ec3ba I assumed that no code path could reach
ExecGetExtraUpdatedCols without having gone through
ExecInitStoredGenerated. That turns out not to be the case in
logical replication: if there's an ON UPDATE trigger on the target
table, trigger.c will call this code before anybody has set up its
generated columns. Having seen that, I don't have a lot of faith in
there not being other such paths. ExecGetExtraUpdatedCols can call
ExecInitStoredGenerated for itself, as long as we are willing to
assume that it is only called in CMD_UPDATE operations, which on
the whole seems like a safer leap of faith.
Per report from Vitaly Davydov.
Discussion: https://postgr.es/m/d259d69652b8c2ff50e14cda3c236c7f@postgrespro.ru
Commit 60684dd8 left loose ends when it came to maintaining toast
tables or partitions.
For toast tables, simply skip the privilege check if the toast table
is an indirect target of the maintenance command, because the main
table privileges have already been checked.
For partitions, allow the maintenance command if the user has the
MAINTAIN privilege on the partition or any parent.
Also make CLUSTER emit "skipping" messages when the user doesn't have
privileges, similar to VACUUM.
Author: Nathan Bossart
Reported-by: Pavel Luzanov
Reviewed-by: Pavel Luzanov, Ted Yu
Discussion: https://postgr.es/m/20230113231339.GA2422750@nathanxps13
When VACUUM/ANALYZE are run on an entire database, they warn about
skipping relations for which the user doesn't have sufficient
privileges. That only makes sense for tables, so skip such messages
for indexes, etc.
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/c0a85c2e83158560314b576b6241c8ed0aea1745.camel%40j-davis.com
The prior behavior was confusing and hard to document. For instance,
if you had UPDATE privileges, you could lock a table in any lock mode
except ACCESS SHARE mode.
Now, if granted a privilege to lock at a given mode, one also has
privileges to lock at a less-conflicting mode. MAINTAIN, UPDATE,
DELETE, and TRUNCATE privileges allow any lock mode. INSERT privileges
allow ROW EXCLUSIVE (or below). SELECT privileges allow ACCESS SHARE.
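For example, a user holding only SELECT privilege on a table "t" (an
invented name) can now do:
LOCK TABLE t IN ACCESS SHARE MODE;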
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/9550c76535404a83156252b25a11babb4792ea1e.camel%40j-davis.com
We don't allow different column lists for the same table in different
publications of a single subscription. A publication whose column list
includes all columns except dropped and generated ones should be
considered the same as a publication with no column list (which
implicitly includes all columns). However, since we were not excluding
the dropped and generated columns from the column list, combining such
publications led to the error "cannot use different column lists for
table ...".
We decided not to backpatch this fix as there is a risk of users seeing
this as a behavior change and also we didn't see any field report of this
case.
Author: Shi yu
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/OSZPR01MB631091CCBC56F195B1B9ACB0FDFE9@OSZPR01MB6310.jpnprd01.prod.outlook.com
This hash table is used to cache the state of streaming transactions being
applied by the parallel apply workers. So, this should be created only
when we are successful in launching at least one worker. This avoids
a rare memory leak in the case where we are never able to launch any
worker.
Author: Ted Yu
Discussion: https://postgr.es/m/CALte62wg0rBR3Vj2beV=HiWo2qG9L0hzKcX=yULNER0wmf4aEw@mail.gmail.com
Regexp replacement with \1 in pg_ident.conf is tested in one check of
the kerberos test suite, but triggering that test requires building
with --with-gssapi. This commit adds to the test suite of
peer authentication two tests to check the replacement of \1 in a
pg-username, coupled with a system-username regexp:
- With a subexpression in system-username, similarly to the kerberos
test suite.
- Without a subexpression in system-username, checking for a failure.
This had no coverage until now, and the error pattern is checked in the
server logs.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQRNow4MwkBjgPxywXdJU_K3a9+Pm78JB7De3yQwwkTDew@mail.gmail.com
The WAIT_USE_EPOLL and WAIT_USE_KQUEUE implementations of
WaitEventSetWaitBlock() confused the size of their internal buffer with
the size of the caller's output buffer, and could ask the kernel for too
many events. In fact the set of events retrieved from the kernel needs
to be able to fit in both buffers, so take the smaller of the two.
The WAIT_USE_POLL and WAIT_USE_WIN32 implementations didn't have this
confusion.
This probably didn't come up before because we always used the same
number in both places, but commit 7389aad6 calculates a dynamic size at
construction time, while using MAXLISTEN for its output event buffer on
the stack. That seems like a reasonable thing to want to do, so
consider this to be a pre-existing bug worth fixing.
As discovered by valgrind on skink.
Back-patch to all supported releases for epoll, and to release 13 for
the kqueue part, which copied the incorrect epoll code.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/901504.1673504836%40sss.pgh.pa.us
The current jsonpath code assumes that the referenced variable always
exists: it could only throw an error at value evaluation time. At the
same time, existence checking assumed the variable was present without
evaluating it, and error suppression didn't work for missing variables.
This commit makes existence checking trigger an error for missing
variables. This makes the overall behavior consistent.
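For instance (a sketch; the values are made up), an existence check
that references an undefined variable now raises an error instead of
silently returning a result:
SELECT jsonb_path_exists('{}', '$x');  -- variable "x" is not defined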
Backpatch to 12 where jsonpath was introduced.
Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwbeytffJkVnEqDyLZ%3DrQsznoTh1OgDoOF3VmOMkxcTMjA%40mail.gmail.com
Author: Alexander Korotkov, David G. Johnston
Backpatch-through: 12
Const qualifiers ensure that we don't do something stupid in the
function implementation. Additionally they clarify the interface. As
an example:
void
slist_delete(slist_head *head, const slist_node *node)
Here one can instantly tell that node->next is not going to be set to
NULL. Finally, const qualifiers potentially allow the compiler to do
more optimizations. This being said, no benchmarking was done for
this patch.
The functions that return non-const pointers like slist_next_node(),
dclist_next_node() etc. are not affected by the patch intentionally.
Author: Aleksander Alekseev
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAJ7c6TM2%3D08mNKD9aJg8vEY9hd%2BG4L7%2BNvh30UiNT3kShgRgNg%40mail.gmail.com
The code that handles authentication for user maps was pretty confusing
with its choice of variable names. It involves two types of users: a
system user and a Postgres user (well, role), and these were not named
consistently throughout the code that processes the user maps loaded
from pg_ident.conf at authentication.
This commit changes the following things to improve the situation:
- Rename "pg_role" to "pg_user" and "token" to "system_user" in
IndetLine. These choices are more consistent with the pg_ident.conf
example in the docs, as well. "token" has been introduced recently in
fc579e1, and it is way worse than the choice before that, "ident_user".
- Switch the order of the fields in IdentLine to map with the order of
the items in the ident files, as of map name, system user and PG user.
- In check_ident_usermap(), rename "regexp_pgrole" to "expanded_pg_user"
when processing a regexp for the system user entry in a user map. This
variable does not store a regular expression at all: it holds either a
plain string, or the result of substituting \1 when the Postgres role
is specified that way.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQTkwELHUOAKhvdA+m3tWbUQySHHkExJV8GAZ1pwgbEgXg@mail.gmail.com
A comment in hba.h mentioned that AuthTokens are used when building the
IdentLines from pg_ident.conf, but since 8fea868, which added support
for regexps for databases and roles in pg_hba.conf, that is also the
case for HBA files. This refreshes the comment to refer to both HBA
and ident files.
Issue spotted while going through a different patch.
The creation of a logical decoding context in CreateDecodingContext()
updates some data of its slot for two-phase transactions if enabled by
the caller, but the code forgot to acquire a spinlock when updating
these fields, as all the other code paths do. This could lead to
reading inconsistent data.
Oversight in a8fd13c.
Author: Sawada Masahiko
Discussion: https://postgr.es/m/CAD21AoAD8_fp47191LKuecjDd3DYhoQ4TaucFco1_TEr_jQ-Zw@mail.gmail.com
Backpatch-through: 15
This reverts commit 1b4d280ea1.
It's broken the buildfarm members that run cross-version-upgrade tests,
because they're not prepared to deal with cosmetic differences between
CREATE VIEW commands emitted by older servers and HEAD. Even if we had
a solution to that, which we don't, it'd take some time to roll it out
to the affected animals. This improvement isn't valuable enough to
justify addressing that problem on an emergency basis, so revert it
for now.
Switch to a design similar to regular backends, instead of the previous
arrangement where signal handlers did non-trivial state management and
called fork(). The main changes are:
* The postmaster now has its own local latch to wait on. (For now, we
don't want other backends setting its latch directly, but that could
probably be made to work with more research on robustness.)
* The existing signal handlers are cut in two: a handle_pm_XXX() part
that just sets pending_pm_XXX flags and the latch, and a
process_pm_XXX() part that runs later when the latch is seen.
* Signal handlers are now installed with the regular pqsignal()
function rather than the special pqsignal_pm() function; historical
portability concerns about the effect of SA_RESTART on select() are no
longer relevant, and we don't need to block signals anymore.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2BZ-HpOj1JsO9eWUP%2Bar7npSVinsC_npxSy%2BjdOMsx%3DGg%40mail.gmail.com
Rename the heapam.c freeze plan deduplication routines added by commit
9e540599 to names that follow conventions for functions in heapam.c.
Also relocate the functions so that they're next to their caller, which
runs during original execution, when FREEZE_PAGE WAL records are built.
The routines were initially placed next to (and followed the naming
conventions of) conceptually related REDO routine code, but that scheme
turned out to be kind of jarring when considered in a wider context.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230109214308.icz26oqvt3k2274c@awork3.anarazel.de
The rule system needs "old" and/or "new" pseudo-RTEs in rule actions
that are ON INSERT/UPDATE/DELETE. Historically it's put such entries
into the ON SELECT rules of views as well, but those are really quite
vestigial. The only thing we've used them for is to carry the
view's relid forward to AcquireExecutorLocks (so that we can
re-lock the view to verify it hasn't changed before re-using a plan)
and to carry its relid and permissions data forward to execution-time
permissions checks. What we can do instead of that is to retain
these fields of the RTE_RELATION RTE for the view even after we
convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of
extra complication in the planner and AcquireExecutorLocks, but on
the other hand we can get rid of the logic that moves that data from
one place to another.
The principal immediate benefit of doing this, aside from a small
saving in the pg_rewrite data for views, is that these pseudo-RTEs
no longer trigger ruleutils.c's heuristic about qualifying variable
names when the rangetable's length is more than 1. That results
in quite a number of small simplifications in regression test outputs,
which are all to the good IMO.
Bump catversion because we need to dump a few more fields of
RTE_SUBQUERY RTEs. While those will always be zeroes anyway in
stored rules (because we'd never populate them until query rewrite)
they are useful for debugging, and it seems like we'd better make
sure to transmit such RTEs accurately in plans sent to parallel
workers. I don't think the executor actually examines these fields
after startup, but someday it might.
Amit Langote
Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
This appends the set of object types supported by these commands, and
the objects defined in the cluster are completed after that. Note that
these may not be in the extension being worked on when using DROP, to
keep the code simple, but this is much more useful than the previous
behavior of not knowing which objects can be touched.
Author: Vignesh C
Discussion: https://postgr.es/m/CALDaNm3LVM2QcUWqgOonKZH80TveT-tUthbw4ZhuE_6pD3yi-A@mail.gmail.com
Document that TransactionIdDidAbort() won't indicate that transactions
that were in-progress during a crash have aborted. Tie this to existing
discussion of the TransactionIdDidCommit() and TransactionIdDidAbort()
protocol that code in heapam_visibility.c (and a few other places) must
observe.
Follow-up to bugfix commit eb5ad4ff.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wzn4bEEqgmaUQL3aJ73yM9gAeK-wE4ngi7kjRjLztb+P0w@mail.gmail.com
In both partitioning and traditional inheritance, require child
columns to be GENERATED if and only if their parent(s) are.
Formerly we allowed the case of an inherited column being
GENERATED when its parent isn't, but that results in inconsistent
behavior: the column can be directly updated through an UPDATE
on the parent table, leading to it containing a user-supplied
value that might not match the generation expression. This also
fixes an oversight that we enforced partition-key-columns-can't-
be-GENERATED against parent tables, but not against child tables
that were dynamically attached to them.
Also, remove the restriction that the child's generation expression
be equivalent to the parent's. In the wake of commit 3f7836ff6,
there doesn't seem to be any reason that we need that restriction,
since generation expressions are always computed per-table anyway.
By removing this, we can also allow a child to merge multiple
inheritance parents with inconsistent generation expressions, by
overriding them with its own expression, much as we've long allowed
for DEFAULT expressions.
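For example (a hypothetical pair of tables), a child may now declare
its own generation expression that differs from its parent's:
CREATE TABLE parent (a int, b int GENERATED ALWAYS AS (a + 1) STORED);
CREATE TABLE child (b int GENERATED ALWAYS AS (a * 2) STORED)
    INHERITS (parent);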
Since we're rejecting a case that we used to accept, this doesn't
seem like a back-patchable change. Given the lack of field
complaints about the inconsistent behavior, it's likely that no
one is doing this anyway, but we won't change it in minor releases.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
Commits cf5eb37c5 and e5b8a4c09 each created a new role that they
forgot to remove again. This breaks the use-case of running "make
installcheck" more than once, and it's also against project policy
because it'd be quite unfriendly behavior if one were running
"make installcheck" against a non-throwaway installation.
There are a number of places where a shell command is constructed with
percent-placeholders (like %x). It's cumbersome to have to open-code
this several times. This factors out this logic into a separate
function. This also allows us to ensure consistency for and document
some subtle behaviors, such as what to do with unrecognized
placeholders.
The unified handling is now that incorrect and unknown placeholders
are an error, where previously in most cases they were skipped or
ignored. This affects the following settings:
- archive_cleanup_command
- archive_command
- recovery_end_command
- restore_command
- ssl_passphrase_command
The following settings are part of this refactoring but already had
stricter error handling and should be unchanged in their behavior:
- basebackup_to_shell.command
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5238bbed-0b01-83a6-d4b2-7eb0562a054e%40enterprisedb.com
Prior to this, we only considered a full sort on the cheapest input path
and uniquifying any path which was already sorted in the required sort
order. Here we adjust create_final_distinct_paths() so that it also
adds an Incremental Sort path on any path which has presorted keys.
Additionally, this adjusts the parallel distinct code so that we now
consider sorting the cheapest partial path and incrementally sorting any
partial paths with presorted keys. Previously we didn't consider any
sorting for parallel distinct and only added a unique path atop any path
which had the required pathkeys already.
Author: David Rowley
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAApHDvo8Lz2H=42urBbfP65LTcEUOh288MT7DsG2_EWtW1AXHQ@mail.gmail.com
Can be set to the empty string, or to either or both of "set" or
"inherit". If set to a non-empty value, a non-superuser who creates
a role (necessarily by relying on the CREATEROLE privilege) will
grant that role back to themselves with the specified options.
This isn't a security feature, because the grant that this feature
triggers can also be performed explicitly. Instead, it's a user experience
feature. A superuser would necessarily inherit the privileges of any
created role and be able to access all such roles via SET ROLE;
with this patch, you can configure createrole_self_grant = 'set, inherit'
to provide a similar experience for a user who has CREATEROLE but not
SUPERUSER.
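As a minimal sketch (the role name is invented):
SET createrole_self_grant = 'set, inherit';
CREATE ROLE alice LOGIN;
-- the creating role is granted "alice" with the SET and INHERIT options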
Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com
Previously, CREATEROLE users were permitted to make nearly arbitrary
changes to roles that they didn't create, with certain exceptions,
particularly superuser roles. Instead, allow CREATEROLE users to make such
changes to roles for which they possess ADMIN OPTION, and to
grant membership only in roles for which they possess ADMIN OPTION.
When a CREATEROLE user who is not a superuser creates a role, grant
ADMIN OPTION on the newly-created role to the creator, so that they
can administer roles they create or for which they have been given
privileges.
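For instance (role names invented for illustration), a CREATEROLE user
who is not a superuser can do the following:
CREATE ROLE carol;   -- the creator receives ADMIN OPTION on carol
CREATE ROLE dave;
GRANT carol TO dave; -- allowed: the creator has ADMIN OPTION on carol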
With these changes, CREATEROLE users still have very significant
powers that unprivileged users do not receive: they can alter, rename,
drop, comment on, change the password for, and change security labels
on roles. However, they can now do these things only for roles for
which they possess appropriate privileges, rather than all
non-superuser roles; moreover, they cannot grant a role such as
pg_execute_server_program unless they themselves possess it.
Patch by me, reviewed by Mark Dilger.
Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com
As I suspected, some machines have even more low-order-bit
inaccuracy than the ones I tested. Tweak new test so that
(hopefully) it will pass everywhere. Per buildfarm.
Discussion: https://postgr.es/m/4173840.1673290336@sss.pgh.pa.us
We aren't using this anymore in the wake of commit 09d517773,
so delete it. We can always revert this if some future use
emerges, but I think our standards for test quality are now
high enough that that will never happen.
Discussion: https://postgr.es/m/4173840.1673290336@sss.pgh.pa.us
We had some pretty ad-hoc and inefficient code here. To make
matters worse, it didn't test the properties of the random()
function very thoroughly, and it had a test failure rate of
one in every few tens of thousands of runs. Replace the
script altogether with new test cases that prove much more
about random()'s output, run faster, and can be calculated
to have test failure rates on the order of 1e-9.
Having done that, the failure rate of this script should be
negligible in comparison to other causes of test failures,
so remove the "ignore" marker for it in parallel_schedule.
(If it does fail, we'd like to know about that, so "ignore"
was always pretty counterproductive.)
Tom Lane and Dean Rasheed
Discussion: https://postgr.es/m/4173840.1673290336@sss.pgh.pa.us
This allows left join removals and unique joins to work with partitioned
tables. The planner just lacked sufficient proofs that a given join
would not cause any row duplication. Unique indexes currently serve as
that proof, so have get_relation_info() populate the indexlist for
partitioned tables too.
Author: Arne Roland
Reviewed-by: Alvaro Herrera, Zhihong Yu, Amit Langote, David Rowley
Discussion: https://postgr.es/m/c3b2408b7a39433b8230bbcd02e9f302@index.de
Currently, for large transactions, the publisher sends the data in
multiple streams (changes divided into chunks depending upon
logical_decoding_work_mem), and then on the subscriber-side, the apply
worker writes the changes into temporary files and once it receives the
commit, it reads from those files and applies the entire transaction. To
improve the performance of such transactions, we can instead allow them to
be applied via parallel workers.
In this approach, we assign a new parallel apply worker (if available) as
soon as the xact's first stream is received and the leader apply worker
will send changes to this new worker via shared memory. The parallel apply
worker will directly apply the change instead of writing it to temporary
files. However, if the leader apply worker times out while attempting to
send a message to the parallel apply worker, it will switch to
"partial serialize" mode - in this mode, the leader serializes all
remaining changes to a file and notifies the parallel apply workers to
read and apply them at the end of the transaction. We use a non-blocking
way to send the messages from the leader apply worker to the parallel
apply worker to avoid deadlocks. We keep this parallel apply worker
assigned till the transaction commit is received and also wait for the
worker to finish at commit. This preserves commit ordering and avoids
writing to and reading from files in most cases. We still need to
spill if there is no worker available.
This patch also extends the SUBSCRIPTION 'streaming' parameter so that the
user can control whether to apply the streaming transaction in a parallel
apply worker or spill the change to disk. The user can set the streaming
parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means
the streaming will be applied via a parallel apply worker, if available.
The parameter value 'on' means the streaming transaction will be spilled
to disk. The default value is 'off' (same as current behaviour).
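For example (with an invented subscription name):
ALTER SUBSCRIPTION mysub SET (streaming = parallel);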
In addition, the patch extends the logical replication STREAM_ABORT
message so that abort_lsn and abort_time can also be sent which can be
used to update the replication origin in parallel apply worker when the
streaming transaction is aborted. Because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16.
Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
GIN index scans were not taking any descent CPU-based cost into account. That
made them look cheaper than other types of indexes when they shouldn't be.
We use the same heuristic as for btree indexes, but multiply it by the number
of searched entries.
Additionally, the CPU cost for the tree was based largely on a
genericcostestimate. For a GIN index, we should not charge index quals per
tuple, but per entry. On top of this, charge cpu_index_tuple_cost per actual
tuple.
This should fix the cases where a GIN index is preferred over a btree and
the ones where a memoize node is not added on top of the GIN index scan
because it seemed too cheap.
We don't backpatch this, to avoid unexpected plan changes in stable versions.
Discussion: https://postgr.es/m/CABs3KGQnOkyQ42-zKQqiE7M0Ks9oWDSee%3D%2BJx3-TGq%3D68xqWYw%40mail.gmail.com
Discussion: https://postgr.es/m/3188617.44csPzL39Z%40aivenronan
Author: Ronan Dunklau
Reported-By: Hung Nguyen
Reviewed-by: Tom Lane, Alexander Korotkov
This allows an optional "S" modifier to be added to \dp and \z, to
have them include system objects in the list.
Note that this also changes the behaviour of a bare \dp or \z without
the "S" modifier to include temp objects in the list, and exclude
information_schema objects, making them consistent with other psql
meta-commands.
Nathan Bossart, reviewed by Maxim Orlov.
Discussion: https://postgr.es/m/20221206193606.GB3078082@nathanxps13
After restart, we try to stream the changes for large transactions that
were not sent before server crash and restart. However, we forget to send
the abort message for such transactions. This leads to spurious streaming
files on the subscriber which won't be cleaned till the apply worker or
the subscriber server restarts.
Reported-by: Dilip Kumar
Author: Hou Zhijie
Reviewed-by: Dilip Kumar and Amit Kapila
Backpatch-through: 14
Discussion: https://postgr.es/m/OS0PR01MB5716A773F46768A1B75BE24394FB9@OS0PR01MB5716.jpnprd01.prod.outlook.com
During the development of 728202b63, which was aimed at reducing the
number of sorts required to evaluate multiple window functions with
different WindowClause definitions, the code written sorted the
WindowClauses in reverse tleSortGroupRef order. There appears to be no
discussion of this choice in the development thread, and no comments
mentioning the fact that having the WindowClauses in reverse
tleSortGroupRef order makes it more likely that the final WindowClause
to be evaluated will provide presorted input to the query's DISTINCT
or ORDER BY clause. The reason for this is that the tleSortGroupRef
indexes are assigned for the DISTINCT and ORDER BY clauses before they
are for the WindowClauses' PARTITION BY and ORDER BY clauses.
Putting the WindowClause with the lowest tleSortGroupRef last means that
it's more likely that no additional sorting is required for the query's
DISTINCT or ORDER BY clause.
All we're doing here is adding some tests and a comment to help ensure
that remains true and that we don't accidentally forget to consider this
again should we ever rewrite that code.
Author: Ankit Kumar Pandey, David Rowley
Discussion: https://postgr.es/m/CAApHDvq=g2=ny59f1bvwRVvupsgPHK-KjLPBsSL25fVuGZ4idQ@mail.gmail.com
Waken related worker processes immediately at commit of a transaction
that has performed ALTER SUBSCRIPTION (including the RENAME and
OWNER variants). This reduces the response time for such operations.
In the real world that might not be worth much, but it shaves several
seconds off the runtime for the subscription test suite.
In the case of PREPARE, we just throw away this notification state;
it doesn't seem worth the work to preserve it. The workers will
still react after the eventual COMMIT PREPARED, but not as quickly.
Nathan Bossart
Discussion: https://postgr.es/m/20221122004119.GA132961@nathanxps13
Previously this function checked to see if we were ready to switch
to two_phase mode at its start, but that's silly: we should check
at the end, after we've done the work that might make us ready.
This simple change removes one sleep cycle from the time needed to
switch to two_phase mode. In the real world that might not be
worth much, but it shaves a few seconds off the runtime for the
subscription test suite.
Nathan Bossart
Discussion: https://postgr.es/m/20221122004119.GA132961@nathanxps13
VACUUM normally ends by running vac_update_datfrozenxid(), which
requires a scan of pg_class. Therefore, if one attempts to vacuum a
database one table at a time --- as vacuumdb has done since v12 ---
we will spend O(N^2) time in vac_update_datfrozenxid(). That causes
serious performance problems in databases with tens of thousands of
tables, and indeed the effect is measurable with only a few hundred.
To add insult to injury, only one process can run
vac_update_datfrozenxid at the same time per DB, so this behavior
largely defeats vacuumdb's -j option.
Hence, invent options SKIP_DATABASE_STATS and ONLY_DATABASE_STATS
to allow applications to postpone vac_update_datfrozenxid() until the
end of a series of VACUUM requests, and teach vacuumdb to use them.
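For example, an application vacuuming tables one at a time (table
names invented) can now postpone the datfrozenxid update until a final
step:
VACUUM (SKIP_DATABASE_STATS) t1;
VACUUM (SKIP_DATABASE_STATS) t2;
VACUUM (ONLY_DATABASE_STATS);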
Per bug #17717 from Gunnar L. Sadly, this answer doesn't seem
like something we'd consider back-patching, so the performance
problem will remain in v12-v15.
Tom Lane and Nathan Bossart
Discussion: https://postgr.es/m/17717-6c50eb1c7d23a886@postgresql.org
A schema rename should cause reporting the new qualified names of
tables to logical replication subscribers, but that wasn't happening.
Flush the RelationSyncCache to make it happen.
(If you ask me, the new test case shows that the behavior in this area
is still pretty dubious, but apparently it's operating as designed.)
Vignesh C
Discussion: https://postgr.es/m/CALDaNm32vLRv5KdrDFeVC-CU+4Wg1daA55hMqOxDGJBzvd76-w@mail.gmail.com
The ALTER DATABASE|FUNCTION|PROCEDURE|ROLE|ROUTINE|USER ... SET <name>
case in psql tab completion failed to exclude <name> = "SCHEMA", which
caused ALTER FUNCTION|PROCEDURE|ROUTINE ... SET SCHEMA to complete
with "FROM CURRENT" and "TO", which won't work.
Fix that, so that those cases now complete with the list of schemas,
like other ALTER ... SET SCHEMA commands.
Noticed while testing the recent patch to improve tab completion for
ALTER FUNCTION/PROCEDURE/ROUTINE, but this is not directly related to
that patch. Rather, this is a long-standing bug, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/CALDaNm0s7GQmkLP_mx5Cvk=UzYMnjhPmXBxU8DsHEunFbC5sTg@mail.gmail.com
Commit 57faaf376 added pg_truncate(const char *path, off_t length), but
"length" was ignored under WIN32 and the file was unconditionally
truncated to 0.
There was no live bug, since the only caller passes 0.
Fix, and back-patch to 14 where the function arrived.
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20230106031652.GR3109%40telsasoft.com
Testing on CI showed that Cygwin's semctl() can fail with EAGAIN
(possibly due to resource limits in cygserver that could be tuned, not
examined). Switch to so-called POSIX semaphores instead, which don't
seem to fail in that way (possibly due to a more direct implementation
using Windows semaphore primitives instead of talking to cygserver,
based on a cursory glance at the source).
Other known problems still prevent PostgreSQL from running on Cygwin
without random crashes, but this rarer problem was noticed while
testing.
Discussion: https://postgr.es/m/CA%2BhUKG%2BQ6DU4Ov9LrvUyDcF3oHS4KMRVSKmVGaeePq-kOyG9gA%40mail.gmail.com
This is just refactoring; there should be no functional change. It
might have the effect of slightly reducing the number of calls to
GetUserId(), but the real point is to facilitate future work in
this area.
Patch by me, reviewed by Mark Dilger.
Discussion: http://postgr.es/m/CA+TgmobFzTLkLwOquFrAcdsWBsOWDr-_H-jw+qBvfx-wSzMwDA@mail.gmail.com
Instead of having checks in AddRoleMems() and DelRoleMems(), have
the callers perform checks where it's required. In some cases it
isn't, either because the caller has already performed a check for
the same condition, or because the check couldn't possibly fail.
The "Skip permission check if nothing to do" check in each of
AddRoleMems() and DelRoleMems() is pointless. Some call sites
can't pass an empty list. Others can, but in those cases, the role
being modified is one that the current user has just created.
Therefore, they must have permission to modify it, and so no
permission check is required at all.
This patch is intended to have no user-visible consequences. It is
intended to simplify future work in this area.
Patch by me, reviewed by Mark Dilger.
Discussion: http://postgr.es/m/CA+TgmobFzTLkLwOquFrAcdsWBsOWDr-_H-jw+qBvfx-wSzMwDA@mail.gmail.com
We were identifying the updatable generated columns of inheritance
children by transposing the calculation made for their parent.
However, there's nothing that says a traditional-inheritance child
can't have generated columns that aren't there in its parent, or that
have different dependencies than are in the parent's expression.
(At present it seems that we don't enforce that for partitioning
either, which is likely wrong to some degree or other; but the case
clearly needs to be handled with traditional inheritance.)
Hence, drop the very-klugy-anyway "extraUpdatedCols" RTE field
in favor of identifying which generated columns depend on updated
columns during executor startup. In HEAD we can remove
extraUpdatedCols altogether; in back branches, it's still there but
always empty. Another difference between the HEAD and back-branch
versions of this patch is that in HEAD we can add the new bitmap field
to ResultRelInfo, but that would cause an ABI break in back branches.
Like 4b3e37993, add a List field at the end of struct EState instead.
Back-patch to v13. The bogus calculation is also being made in v12,
but it doesn't have the same visible effect because we don't use it
to decide which generated columns to recalculate; as a consequence of
which the patch doesn't apply easily. I think that there might still
be a demonstrable bug associated with trigger firing conditions, but
that's such a weird corner-case usage that I'm content to leave it
unfixed in v12.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/CA+HiwqFshLKNvQUd1DgwJ-7tsTp=dwv7KZqXC4j2wYBV1aCDUA@mail.gmail.com
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
This fixes a couple of unused variable warnings that could be seen when
compiling with MEMORY_CONTEXT_CHECKING but not USE_ASSERT_CHECKING.
Defining MEMORY_CONTEXT_CHECKING without asserts is a little unusual;
however, we shouldn't be producing any warnings from such a build.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_D-vgLEh7eO47p=73u1jWO78NWf6Qfv1FndY1kG-Q-jA@mail.gmail.com
We cannot rely on TransactionIdDidAbort here, since in general it may
report transactions that were in-progress at the time of an earlier hard
crash as not aborted, effectively behaving as if they were still in
progress even after crash recovery completes. Go back to defensively
verifying that xmax didn't commit instead.
Oversight in commit 79d4bf4e.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230104035636.hy5djyr2as4gbc4q@awork3.anarazel.de
Commit 4f627f89 switched SLRU truncation for multixacts back to being a
task performed during VACUUM, but missed some comments that continued to
reference truncation happening as part of checkpointing. Update those
comments now.
Also update comments that became obsolete when commit c3ffa731 changed
the way that vacuum_multixact_freeze_min_age is applied by VACUUM as it
computes its MultiXactCutoff cutoff (which is used by VACUUM to decide
what to freeze). Explain the same issues by referencing how OldestMxact
is the latest valid value that relminmxid can ever be advanced to at the
end of a VACUUM (following the work in commit 0b018fab).
Combine multiple LOCK TABLE commands to reduce the number of
round trips to the server. This is particularly helpful when
dumping from a remote server, but it seems useful even without
that. In particular, shortening the time from seeing a table
in pg_class to acquiring lock on it reduces the window for
trouble from concurrent DDL.
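For example, rather than one statement per table, pg_dump can now
issue something like (table names invented):
LOCK TABLE public.t1, public.t2, public.t3 IN ACCESS SHARE MODE;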
Aleksander Alekseev, reviewed by Fabrízio de Royes Mello,
Gilles Darold, and Andres Freund
Discussion: https://postgr.es/m/CAJ7c6TO4z1+OBa-R+fC8FnaUgbEWJUf2Kq=nRngTW5EXtKru2g@mail.gmail.com
pg_xact lookups are relatively expensive. Move the xmin/xmax commit
status checks from the point that freeze plans are prepared to the point
that they're actually executed. Otherwise we'll repeat many commit
status checks whenever multiple successive VACUUM operations scan the
same pages and decide against freezing each time, which is a waste of
cycles.
Oversight in commit 1de58df4, which added page-level freezing.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkZpe4K6qMfEt8H4qYJCKc2R7TPvKsBva7jc9w7iGXQSw@mail.gmail.com
Improve comments added by commit 1de58df4 which describe the
lazy_scan_prune "freeze the page" path. These newly revised comments
are based on suggestions from Jeff Davis.
In passing, remove nearby visibility_cutoff_xid comments left over from
commit 6daeeb1f.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/ebc857107fe3edd422ef8a65191ca4a8da568b9b.camel@j-davis.com
Windows can enumerate the locales that are either installed or
supported by calling EnumSystemLocalesEx(), similar to what is already
done in the READ_LOCALE_A_OUTPUT switch. We can refactor some of the
logic already used in that switch into a new function
create_collation_from_locale().
The enumerated locales have BCP 47 shape, that is with a hyphen
between language and territory, instead of POSIX's underscore. The
created collations will retain the BCP 47 shape, but we will also
create a POSIX alias, so xx-YY will have an xx_YY alias.
A new test collate.windows.win1252 is added that is like
collate.linux.utf8.
Author: Juan Jose Santamaria Flecha <juanjo.santamaria@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/0050ec23-34d9-2765-9015-98c04f0e18ac@postgrespro.ru
While on it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes. One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
When considering an empty grouping set, we fetched
phasedata->eqfunctions[-1]. Because the eqfunctions array is
palloc'd, that would always be an aset pointer in released versions,
and thus the code accidentally failed to malfunction (since it would
do nothing unless it found a null pointer). Nonetheless this seems
like trouble waiting to happen, so add a check for length == 0.
It's depressing that our valgrind testing did not catch this.
Maybe we should reconsider the choice to not mark that word NOACCESS?
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-vZuuPOZsKOYnSAaPYGKhmacxhki+vpOKk0O7rymccXQ@mail.gmail.com
The term "truncation" has been ambiguous since commit 10a8d13823 added
line pointer array truncation during heap pruning. Clear things up by
specifying that we're talking about rel truncation here, to match nearby
comments that apply to tuples with storage.
Don't allow VACUUM to WAL-log the value FrozenTransactionId as the
snapshotConflictHorizon of freezing or visibility map related WAL
records.
The only special XID value that's an allowable snapshotConflictHorizon
is InvalidTransactionId, which is interpreted as "record definitely
doesn't require a recovery conflict".
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WznuNGSzF8v6OsgjaC5aYsb3cZ6HW6MLm30X0d65cmSH6A@mail.gmail.com
A refcursor variable that is bound to a specific query (by declaring
it with "CURSOR FOR") now chooses a portal name in the same way as an
unbound, plain refcursor variable. Its string value starts out as
NULL, and unless that's overridden by manual assignment, it will be
replaced by a unique-within-session portal name during OPEN.
The previous behavior was to initialize such variables to contain
their own name, resulting in that also being the portal name unless
the user overwrote it before OPEN. The trouble with this is that
it causes failures due to conflicting portal names if the same
cursor variable name is used in different functions. It is pretty
non-orthogonal to have bound and unbound refcursor variables behave
differently on this point, too, so let's change it.
This change can cause compatibility problems for applications that
open a bound cursor in a plpgsql function and then use it in the
calling code without explicitly passing back the refcursor value
(portal name). If the calling code simply assumes that the portal
name matches the called function's variable name, it will now fail.
That can be fixed by explicitly assigning a string value to the
refcursor variable before OPEN, e.g.
DECLARE myc CURSOR FOR SELECT ...;
BEGIN
myc := 'myc'; -- add this
OPEN myc;
We have no documentation examples showing the troublesome usage
pattern, so we can hope it's rare in practice.
Patch by me; thanks to Pavel Stehule and Jan Wieck for review.
Discussion: https://postgr.es/m/1465101.1667345983@sss.pgh.pa.us
Cirrus is about to shut down its macOS-on-Intel support, so it's time to
move our CI testing over to ARM instances. The Homebrew package manager
changed its default installation prefix for the new architecture, so a
couple of tests need tweaks to find binaries.
Back-patch to 15, where in-tree CI began.
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20221122225744.GF11463%40telsasoft.com
When brin_minmax_multi_union merges summaries, we may end up with just a
single range after merge_overlapping_ranges. The summaries may contain
just one range each, and they may overlap (or be exactly the same).
With a single range there's no distance to calculate, but we happen to
call build_distances anyway - which is fine, as we don't calculate the
distance in this case, except that with asserts enabled this failed
due to a check that there are at least two ranges.
The assert is unnecessarily strict, so relax it a bit and bail out if
there's just a single range. The relaxed assert would be enough, but
this way we also don't allocate unnecessary memory for the distances.
Backpatch to 14, where minmax-multi opclasses were introduced.
Reported-by: Jaime Casanova
Backpatch-through: 14
Discussion: https://postgr.es/m/YzVA55qS0hgz8P3r@ahch-to
f193883 has been incorrectly setting up the precision used in the
timestamp computations returned by the following functions:
- LOCALTIME
- LOCALTIMESTAMP
- CURRENT_TIME
- CURRENT_TIMESTAMP
Specifying an out-of-range precision for CURRENT_TIMESTAMP and
LOCALTIMESTAMP was raising a WARNING without adjusting the precision,
leading to a subsequent error. LOCALTIME and CURRENT_TIME raised a
WARNING without an error, but the precision given to the internal
routines was still not correct, so let's be clean.
Ian has reported the problems in timestamp.c, while I have noticed the
ones in date.c. Regression tests are added for all of them with
precisions high enough to provide coverage for the warnings, something
that went missing up to this commit.
Author: Ian Lawrence Barwick, Michael Paquier
Discussion: https://postgr.es/m/CAB8KJ=jQEnn9sYG+N752spt68wMrhmT-ocHCh4oeNmHF82QMWA@mail.gmail.com
New versions of perl trigger warnings within perl.h with our compiler
flags. At least -Wdeclaration-after-statement, -Wshadow=compatible-local are
known to be problematic.
To avoid these warnings, conditionally use #pragma GCC system_header before
including plperl.h.
Alternatively, we could add the include paths for problematic headers with
-isystem, but that is a larger hammer and is harder to search for.
A more granular alternative would be to use #pragma GCC diagnostic
push/ignored/pop, but gcc warns about unknown warnings being ignored, so every
to-be-ignored-temporarily compiler warning would require its own pg_config.h
symbol and #ifdef.
As the warnings are voluminous, it makes sense to backpatch this change. But
don't do so yet; we first want to gather buildfarm coverage - it's e.g. possible
that some compiler claiming to be gcc compatible has issues with the pragma.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221228182455.hfdwd22zztvkojy2@awork3.anarazel.de
Teach VACUUM to decide on whether or not to trigger freezing at the
level of whole heap pages. Individual XID and MXID fields from tuple
headers now trigger freezing of whole pages, rather than independently
triggering freezing of each individual tuple header field.
Managing the cost of freezing over time now significantly influences
when and how VACUUM freezes. The overall amount of WAL written is the
single most important freezing related cost, in general. Freezing each
page's tuples together in batch allows VACUUM to take full advantage of
the freeze plan WAL deduplication optimization added by commit 9e540599.
Also teach VACUUM to trigger page-level freezing whenever it detects
that heap pruning generated an FPI. We'll have already written a large
amount of WAL just to do that much, so it's very likely a good idea to
get freezing out of the way for the page early. This only happens in
cases where it will directly lead to marking the page all-frozen in the
visibility map.
In most cases "freezing a page" removes all XIDs < OldestXmin, and all
MXIDs < OldestMxact. It doesn't quite work that way in certain rare
cases involving MultiXacts, though. It is convenient to define "freeze
the page" in a way that gives FreezeMultiXactId the leeway to put off
the work of processing an individual tuple's xmax whenever it happens to
be a MultiXactId that would require an expensive second pass to process
aggressively (allocating a new multi is especially worth avoiding here).
FreezeMultiXactId is eager when processing is cheap (as it usually is),
and lazy in the event of an individual multi that happens to require
expensive second pass processing. This avoids regressions related to
processing of multis that page-level freezing might otherwise cause.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com
Some compilers complain about sub_rteperminfos not being
initialized, evidently because they don't detect that it
is only used and set if isGeneralSelect is true.
Make it follow the long-established pattern for its
sibling variable sub_rtable.
Per reports from Pavel Stehule and the buildfarm.
Discussion: https://postgr.es/m/CAFj8pRDOvGOi-n616kM0Cc7qSbg_nGoS=-haB+D785sUXADqSg@mail.gmail.com
The modified error message for regcollationin failure includes
the database encoding, which it should've occurred to me is a
portability hazard for the regression tests. Adjust the test
so the expected output doesn't include that.
In passing, fix a comment typo introduced in b8c0ffbd2.
Per buildfarm.
Given the soft-input-error feature, we can reduce these functions
to be just thin wrappers around a soft-error call of the
corresponding datatype input function. This means less code and
more certainty that the to_reg* functions match the normal input
behavior.
Notably, it also means that they will accept numeric OID input,
which they didn't before. It's not clear to me if that omission
had more than laziness behind it, but it doesn't seem like
something we need to work hard to preserve.
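For instance, numeric OID input along these lines is now accepted
(1259 being pg_class's OID):
SELECT to_regclass('1259');  -- now resolves to pg_class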
Discussion: https://postgr.es/m/3910031.1672095600@sss.pgh.pa.us
This is not really complete, but it catches most cases of practical
interest. The main omissions are:
* regtype, regprocedure, and regoperator parse type names by
calling the main grammar, so any grammar-detected syntax error
will still be a hard error. Also, if one includes a type
modifier in such a type specification, errors detected by the
typmodin function will be hard errors.
* Lookup errors are handled just by passing missing_ok = true
to the relevant catalog lookup function. Because we've used
quite a restrictive definition of "missing_ok", this means that
edge cases such as "the named schema exists, but you lack
USAGE permission on it" are still hard errors.
It would make sense to me to replace most/all missing_ok
parameters with an escontext parameter and then allow these
additional lookup failure cases to be trapped too. But that's
a job for some other day.
Discussion: https://postgr.es/m/3342239.1671988406@sss.pgh.pa.us
This is slightly tedious because the adjustments cascade through
a couple of levels of subroutines, but it's not very hard.
I chose to avoid changing function signatures more than absolutely
necessary, by passing the escontext pointer in existing structs
where possible.
tsquery's nuisance NOTICEs about empty queries are suppressed in
soft-error mode, since they're not errors and we surely don't want
them to be shown to the user anyway. Maybe that whole behavior
should be reconsidered.
Discussion: https://postgr.es/m/3824377.1672076822@sss.pgh.pa.us
Historically these input functions just called strtoul or strtoull
and returned the result, with no error detection whatever. Upgrade
them to reject garbage input and out-of-range values, similarly to
our other numeric input routines.
To share the code for this with type oid, adjust the existing
"oidin_subr" to be agnostic about the SQL name of the type it is
handling, and move it to numutils.c; then clone it for 64-bit types.
Because the xid types previously accepted hex and octal input by
reason of calling strtoul[l] with third argument zero, I made the
common subroutine do that too, with the consequence that type oid
now also accepts hex and octal input. In view of 6fcda9aba, that
seems like a good thing.
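For instance, something like the following is now accepted for type
oid (the value is invented):
SELECT '0x10'::oid;  -- hexadecimal input, yielding 16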
While at it, simplify the existing over-complicated handling of
syntax errors from strtoul: we only need one ereturn not three.
Discussion: https://postgr.es/m/3526121.1672000729@sss.pgh.pa.us
When VACUUM determines that an existing MultiXact should use a freeze
plan that sets xmax to InvalidTransactionId, the original Multi may or
may not be before OldestMxact. Remove an incorrect assertion that
expected it to always be from before OldestMxact.
Oversight in commit 4ce3af.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/TYAPR01MB5866B24104FD80B5D7E65C3EF5ED9@TYAPR01MB5866.jpnprd01.prod.outlook.com
002_pg_upgrade.pl gains support for a new environment variable called
"filter_rules", which can be used to point to a file containing a set
of custom regular expressions to apply to the dumps of the origin and
target clusters when doing a cross-version test (i.e. when olddump and
oldinstall are defined). This makes it possible to reshape the dumps
dynamically, in the same way as the internals of the buildfarm code,
so that the tests are able to pass in scenarios where one expects them
to, even if pg_dump generates slightly different output depending on
the versions involved.
This option is not used when pg_upgrade runs with the same version for
the origin and target clusters, and it is the last piece I see as
required to plug the TAP tests of pg_upgrade into the buildfarm, or
just a CI, more efficiently.
Author: Anton A. Melnikov
Discussion: https://postgr.es/m/49f389ba-95ce-8a9b-09ae-f60650c0e7c7@inbox.ru
This option extracts (potentially decompressing) full-page images
included in WAL records into a given target directory. These images are
subject to the same filtering rules as the normal display of the WAL
records, hence with --relation one can for example extract only the
FPIs issued for the relation specified. By default, the records are
printed or their stats computed (--stats); using --quiet only saves
the images without generating any other output.
This is a tool aimed mostly at very experienced users, useful for
fixing page-level corruption or just analyzing the past state of a
page, and there was no easy way to do that with the in-core tools up
to now when looking at WAL.
Each block is saved in a separate file, to ease its manipulation, with
file names of the form <lsn>.<ts>.<db>.<rel>.<blk>_<fork>.
For instance, 00000000-010000C0.1663.1.6117.123_main refers to:
- WAL record LSN in hexadecimal format (00000000-010000C0).
- Tablespace OID (1663).
- Database OID (1).
- Relfilenode (6117).
- Block number (123).
- Fork name of the file this block came from (_main).
Author: David Christensen
Reviewed-by: Sho Kato, Justin Pryzby, Bharath Rupireddy, Matthias van de
Meent
Discussion: https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com
This enables streaming or serializing changes immediately in logical
decoding. This parameter is intended to be used to test logical
decoding and replication of large transactions, for which we would
otherwise need to generate changes until logical_decoding_work_mem is
reached.
This helps in reducing the runtime of existing tests related to
logical replication of in-progress transactions, and will help in
writing tests for the upcoming feature for applying large in-progress
transactions in parallel.
Author: Shi yu
Reviewed-by: Sawada Masahiko, Shveta Mallik, Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi
Discussion: https://postgr.es/m/OSZPR01MB63104E7449DBE41932DB19F1FD1B9@OSZPR01MB6310.jpnprd01.prod.outlook.com
f4f2f2b was doing a sequential scan of pg_class before checking if a
relation had attributes dependent on aclitem as data type when building
the set of ALTER TABLE queries, but this would be costly on a
regression database.
While at it, make the query style more consistent with the rest.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20221223032724.GQ1153@telsasoft.com
While at it, use a common subroutine already.
This doesn't move the needle very far in terms of making these
functions non-throwing; the only case we're now able to trap is
numeric-OID-is-out-of-range. Still, it seems like a pretty
non-controversial step in that direction.
ed1a88dda added support functions for the ntile(), percent_rank() and
cume_dist() window functions but neglected to actually add these support
functions to the pg_proc entries for the corresponding window functions.
Also, take this opportunity to add these window functions to one of the
regression tests added in ed1a88dda to give the support functions a little
bit of exercise. If I'd done that in the first place then the omission
would have been more obvious.
Bump the catversion, again.
The test added in commit e44dae07f9 has a thinko: it wants to read
info about a few WAL records, but it obtains the LSN of the final record
to read by asking for the WAL insert position; however,
pg_get_wal_records_info only allows reading up to the flush position
(cf. IsFutureLSN()). In normal conditions there is no difference, since
the last record written by the preceding loop is known flushed and it's
the one the test wants; but it's possible to have some other process
insert another WAL record that isn't flushed, and that causes the whole
test to explode.
Fix by having pg_get_wal_records_info() read only up to the flushed
position. Backpatch to 15, which is where pg_walinspect appeared.
Author: Karina Litskevich <litskevichkarina@gmail.com>
Discussion: https://postgr.es/m/a5559c95-52c3-5eea-cd63-9b4f1c70ff96@gmail.com
Fix incorrect code which was trying to take a Bitmapset of column
attnums according to a parent table and transform it into the
equivalent Bitmapset with the attnums according to the given child
table.
This code is new as of a61b1f748 and was failing to do the correct
translation when there was an intermediate parent table between 'rel' and
'top_parent_rel'.
Reported-by: Ranier Vilela
Author: Richard Guo, Amit Langote
Discussion: https://postgr.es/m/CAEudQArohfB_Gy%2BhcH2-bANUkxgjJiP%3DABq01_LgTNTbcNijag%40mail.gmail.com
An epoll fd belonging to the parent should be closed in the child. A
kqueue fd is automatically closed by fork(), but we should still adjust
our counter. For poll and Windows systems, nothing special is required.
On all systems we free the memory.
No caller yet, but we'll need this if we start using WaitEventSet in the
postmaster as planned.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2BZ-HpOj1JsO9eWUP%2Bar7npSVinsC_npxSy%2BjdOMsx%3DGg%40mail.gmail.com
At the time of commit 6a2a70a02 we didn't use latch infrastructure in
the postmaster. We're planning to start doing that, so we'd better make
sure that the signalfd inherited from a postmaster is not duplicated and
then leaked in the child.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BZ-HpOj1JsO9eWUP%2Bar7npSVinsC_npxSy%2BjdOMsx%3DGg%40mail.gmail.com
To be able to handle incoming connections on a server socket with
the WaitEventSet API, we'll need a new kind of event to indicate that
the socket is ready to accept a connection.
On Unix, it's just the same as WL_SOCKET_READABLE, but on Windows there
is a different underlying kernel event that we need to map our
abstraction to.
No user yet, but a proposed patch would use this.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2BZ-HpOj1JsO9eWUP%2Bar7npSVinsC_npxSy%2BjdOMsx%3DGg%40mail.gmail.com
The regression test suite includes a table called "tab_core_types" that
has one attribute based on the type "aclitem". Keeping this attribute
as-is causes hard failures when running pg_upgrade with an origin
cluster on ~15. This commit updates upgrade_adapt.sql to automatically
detect the tables with such attributes and switch them to text so that
pg_upgrade is able to go through its run.
This does not provide the same detection coverage as pg_upgrade, which
is able to find aclitems used in arrays, domains or even composite
types, but this is (I guess) enough for most cases, like an instance
that had installcheck run on it before the upgrade, with a dump
generated from it.
Note that the buildfarm code has taken the simplest approach of just
dropping "tab_core_types", so what we have here is more modular.
Author: Anton A. Melnikov
Discussion: https://postgr.es/m/49f389ba-95ce-8a9b-09ae-f60650c0e7c7@inbox.ru
The query used to disable WITH OIDS in all the relations making use of
it was checking for materialized views, but this is not a supported
operation there; it is foreign tables that need this treatment.
While at it, use quote_ident() on the relation name in the ALTER TABLE
strings being built.
Author: Anton A. Melnikov, Michael Paquier
Discussion: https://postgr.es/m/49f389ba-95ce-8a9b-09ae-f60650c0e7c7@inbox.ru
Backpatch-through: 12
Three error strings used with cache lookup failures were referring to
incorrect object types for ACL checks:
- Schemas
- Types
- Foreign Servers
These errors should never be triggered, but if they were, incorrect
information would be reported.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20221222153041.GN1153@telsasoft.com
Backpatch-through: 11
The former name was discussed as being confusing, so use "split", as per
a suggestion from Magnus Hagander.
While at it, one of the output arguments is renamed from "segno" to
"segment_number", as per a suggestion from Kyotaro Horiguchi.
The documentation is updated to reflect all these changes.
Bump catalog version.
Author: Bharath Rupireddy, Michael Paquier
Discussion: https://postgr.es/m/CABUevEytQVaOOhGdoh0D7hGwe3fuKcRF6NthsSW7ww04EmtFgQ@mail.gmail.com
WindowFuncs such as row_number() don't care if it's called with ROWS
UNBOUNDED PRECEDING AND CURRENT ROW or with RANGE UNBOUNDED PRECEDING AND
CURRENT ROW. The latter is less efficient as the RANGE option requires
that the executor check for peer rows, so using the ROW option instead
would cause less overhead. Because RANGE is part of the default frame
options for WindowClauses, it means WindowAgg is, by default, working much
harder than it needs to for window functions where the ROWS / RANGE option
has no effect on the window function's result.
On a test query from the discussion thread, a performance improvement of
344% was seen by using ROWS instead of RANGE.
Here we add a new support function node type to allow support functions to
be called for window functions so that the most optimal version of the
frame options can be set. The planner has been adjusted so that the frame
options are changed only if all window functions sharing the same window
clause agree on what the optimized frame options are.
Here we give the ability for row_number(), rank(), dense_rank(),
percent_rank(), cume_dist() and ntile() to alter their WindowClause's
frameOptions.
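For example, in a query like the following (table and column names are
illustrative), the default RANGE-based frame can now be switched to the
equivalent but cheaper ROWS version:

SELECT empno,
       row_number() OVER (ORDER BY salary) AS rn
FROM emp;
-- default frame:  RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
-- optimized to:   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW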
Reviewed-by: Vik Fearing, Erwin Brandstetter, Zhihong Yu
Discussion: https://postgr.es/m/CAGHENJ7LBBszxS+SkWWFVnBmOT2oVsBhDMB1DFrgerCeYa_DyA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvohAKEtTXxq7Pc-ic2dKT8oZfbRKeEJP64M0B6+S88z+A@mail.gmail.com
Use C99 designated initializer syntax for the array elements, instead of
writing the enumerator name and position in a comment. Replace nkeys
and key with a local variadic macro, for a shorter notation.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/CA%2BhUKGKdpDjKL2jgC-GpoL4DGZU1YPqnOFHbDqFkfRQcPaR5DQ%40mail.gmail.com
Perform a failsafe check every time VACUUM's first heap scan scans a
further FAILSAFE_EVERY_PAGES pages, rather than using an approach based
on the number of physical blocks that our current blkno is from the
blkno at the time of the previous failsafe check. That way VACUUM will
perform a failsafe check every time it has scanned a uniform number of
pages, without it mattering when or how VACUUM skipped pages using the
visibility map.
Sami Imseih, with changes to FAILSAFE_EVERY_PAGES comments added by me.
Author: Sami Imseih <simseih@amazon.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/401CE010-4049-4B94-9961-0B610A5D254D%40amazon.com
Use a dedicated struct for the XID/MXID cutoffs used by VACUUM, such as
FreezeLimit and OldestXmin. This state is initialized in vacuum.c, and
then passed around by code from vacuumlazy.c to heapam.c freezing
related routines. The new convention is that everybody works off of the
same cutoff state, which is passed around via pointers to const.
Also simplify some of the logic for dealing with frozen xmin in
heap_prepare_freeze_tuple: add dedicated "xmin_already_frozen" state to
clearly distinguish xmin XIDs that we're going to freeze from those that
were already frozen from before. That way the routine's xmin handling
code is symmetrical with the existing xmax handling code. This is
preparation for an upcoming commit that will add page level freezing.
Also refactor the control flow within FreezeMultiXactId(), while adding
stricter sanity checks. We now test OldestXmin directly, instead of
using FreezeLimit as an inexact proxy for OldestXmin. This is further
preparation for the page level freezing work, which will make the
function's caller cede control of page level freezing to the function
where appropriate (where heap_prepare_freeze_tuple sees a tuple that
happens to contain a MultiXactId in its xmax).
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/CAH2-WznS9TxXmz2_=SY+SyJyDFbiOftKofM9=aDo68BbXNBUMA@mail.gmail.com
perform_pullup_replace_vars() knows how to scan the whole parent
query tree when we are replacing Vars during a subquery flattening
operation. However, for the specific case of flattening a
UNION ALL leaf query, that's mostly wasted work: the only place
where relevant Vars could exist is in the AppendRelInfo that we
just made for this leaf. Teaching perform_pullup_replace_vars()
to just deal with that and exit is worthwhile because, if we have
N such subqueries to pull up, we were spending O(N^2) work uselessly
mutating the AppendRelInfos for all the other subqueries.
While we're at it, avoid calling substitute_phv_relids if there are no
PlaceHolderVars, and remove an obsolete check of parse->hasSubLinks.
Andrey Lepikhov and Tom Lane
Discussion: https://postgr.es/m/703c09a2-08f3-d2ec-b33d-dbecd62428b8@postgrespro.ru
Andrey Lepikhov demonstrated a case where we spend an unreasonable
amount of time in pull_up_subqueries(). Not only is that recursing
with no explicit check for stack overrun, but the code seems not
interruptable by control-C. Let's stick a CHECK_FOR_INTERRUPTS
there, along with sprinkling some stack depth checks.
An actual fix for the excessive time consumption seems a bit
risky to back-patch; but this isn't, so let's do so.
Discussion: https://postgr.es/m/703c09a2-08f3-d2ec-b33d-dbecd62428b8@postgrespro.ru
The previous coding of VA_ARGS_NARGS() always returned 1 on Visual
Studio, because it treats __VA_ARGS__ as a single token unless you jump
through extra hoops. Newer compilers have an option to fix that. Add a
comment about that so that we can remember to clean this up in the
future when our minimum MSVC version advances.
Author: Victor Spirin <v.spirin@postgrespro.ru>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f450fc57-a147-19d0-e50c-33571c52cc13%40postgrespro.ru
A bitwise operator was getting used on two bools in
ATAddCheckConstraint() to track if constraints should be merged or not
with the existing ones of a relation, though obviously this should use
a boolean OR operator. This led to the same result, but let's be
clean.
Oversight in 074c5cf.
Author: Ranier Vilela
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAEudQAp2R2fbbi0OHHhv_n4=Ch0t1VtjObR9YMqtGKHJ+faUFQ@mail.gmail.com
This introduces palloc_aligned() and MemoryContextAllocAligned() which
allow callers to obtain memory which is allocated to the given size and
also aligned to the specified alignment boundary. The alignment
boundaries may be any power-of-2 value. Currently, the alignment is
capped at 2^26; however, we don't expect values anywhere near that large.
The primary expected use case is to align allocations to CPU cache
line size or perhaps I/O page size. Certain use cases can benefit
from having aligned memory by either having better performance or more
predictable performance.
The alignment is achieved by requesting 'alignto' additional bytes from
the underlying allocator function and then aligning the address that is
returned to the requested alignment. This obviously does waste some
memory, so alignments should be kept as small as required.
It's also important to note that these alignment bytes eat into the
maximum allocation size. So something like:
palloc_aligned(MaxAllocSize, 64, 0);
will not work as we cannot request MaxAllocSize + 64 bytes.
Additionally, because we're just requesting the requested size plus the
alignment requirements from the given MemoryContext, this is not going
to work if that context is the Slab allocator, since slab can only
provide chunks of the size that's specified when the slab context is
created. Slab will generate an error to indicate that the requested
size is not supported.
The alignment that is requested in palloc_aligned() is stored along with
the allocated memory. This allows the alignment to remain intact through
repalloc() calls.
Author: Andres Freund, David Rowley
Reviewed-by: Maxim Orlov, Andres Freund, John Naylor
Discussion: https://postgr.es/m/CAApHDvpxLPUMV1mhxs6g7GNwCP6Cs6hfnYQL5ffJQTuFAuxt8A%40mail.gmail.com
This is the guts of float4in, callable as a routine to input floats,
which will be useful in an upcoming patch for allowing soft errors in
the seg module's input function.
A similar operation was performed some years ago for float8in in
commit 50861cd683.
Reviewed by Tom Lane
Discussion: https://postgr.es/m/cee4e426-d014-c0b7-aa22-a659f2cd9130@dunslane.net
d21ded75f changed the way slab.c works but introduced a bug that meant we
could end up with the slab's curBlocklistIndex pointing to the wrong list.
The condition which was checking for this was failing to account for two
things:
1. The curBlocklistIndex could be 0 as we've currently got no non-full
blocks to put chunks on. In this case, the dlist_is_empty() check cannot
be performed as there can be any number of completely full blocks at that
index.
2. The curBlocklistIndex may be greater than the index we just moved the
block onto. Since we need to ensure we fill up fuller blocks first, we
must reset curBlocklistIndex when changing any blocklist element that's
less than the curBlocklistIndex too.
Reported-by: Takamichi Osumi
Discussion: https://postgr.es/m/TYCPR01MB8373329C6329768D7E093D68EDEB9@TYCPR01MB8373.jpnprd01.prod.outlook.com
This commit changes some of the bbstreamer files and pg_dump to use the
same style as a few other places (like common/compression.c), where the
name of the compression method is not part of the string, but an
argument of it. This reduces the translation work a bit, as there are
fewer string patterns to handle.
Discussion: https://postgr.es/m/Y5/5tdK+4n3clvtU@paquier.xyz
This shaves some code by replacing the combinations of
CreateTemplateTupleDesc()/TupleDescInitEntry(), which hardcoded a
mapping of the attributes listed in pg_proc.dat, with
get_call_result_type() to build the TupleDesc needed for the rows
generated.
get_call_result_type() is more expensive than the former style, but this
removes some duplication with the lists of OUT parameters (pg_proc.dat
and the attributes hardcoded in these code paths). This is applied to
functions that are not considered performance-critical (even if they
could be called repeatedly for monitoring purposes).
Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV23HW5HP5hFjd89FNS-z5X8r2jNXdMXcpN2BgTtKd87w@mail.gmail.com
Commit 927f453a9 disallowed batching added by commit b663a4136 to be
used for the inserts performed as part of cross-partition updates of
partitioned tables, mainly because the previous code in
nodeModifyTable.c couldn't handle pending inserts into foreign-table
partitions that are also UPDATE target partitions. But we don't have
such a limitation anymore (cf. commit ffbb7e65a), so let's allow for
this by removing from execPartition.c the restriction added by commit
927f453a9 that batching is only allowed if the query command type is
CMD_INSERT.
In postgres_fdw, since commit 86dc90056 changed it to effectively
disable cross-partition updates in the case where a foreign-table
partition chosen to insert rows into is also an UPDATE target partition,
allow batching in the case where a foreign-table partition chosen to
do so is *not* also an UPDATE target partition. This is enabled by the
"batch_size" option added by commit b663a4136, which is disabled by
default.
This patch also adjusts the test case added by commit 927f453a9 to
confirm that the inserts performed as part of a cross-partition update
of a partitioned table indeed use batching.
Amit Langote, reviewed and/or tested by Georgios Kokolatos, Zhihong Yu,
Bharath Rupireddy, Hou Zhijie, Vignesh C, and me.
Discussion: http://postgr.es/m/CA%2BHiwqH1Lz1yJmPs%3DaD-pzd_HLLynLHvq5iYeT9mB0bBV7oJ6w%40mail.gmail.com
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause. Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order. Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.
While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c. However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice. The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.
Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC. This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).
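A minimal sketch of the escape hatch (the query is illustrative):

SET enable_presorted_aggregate = off;
-- the planner no longer provides presorted input for the aggregate;
-- nodeAgg.c performs the ORDER BY sort itself, as in PG15 and earlier
SELECT a, array_agg(b ORDER BY b) FROM t GROUP BY a;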
Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
Slab has traditionally been fairly slow when compared with the AllocSet or
Generation memory allocators. Part of this slowness came from having to
write out an entire block when we allocate a new block in order to
populate the free list indexes within the block's memory. Additional
slowness came from having to move a block onto another dlist each time we
palloc or pfree a chunk from it.
Here we optimize both of those cases and do a little bit extra to improve
the performance of the slab allocator.
Here, instead of writing out the free list indexes when allocating a new
block, we introduce the concept of "unused" chunks. When a block is first
allocated all chunks are unused. These chunks only make it onto the
free list when they are pfree'd. When allocating new chunks on an
existing block, we have the choice of consuming a chunk from the free list
or an unused chunk. When both exist, we opt to use one from the free
list, as these have been used already and the memory of them is more
likely to be cached by the CPU.
Here we also reduce the number of block lists from there being one for
every possible value of free chunks on a block to just having a small
fixed number of block lists. We keep the 0th block list for completely
full blocks and anything else stores blocks for some range of free chunks
with fuller blocks appearing on lower block list array elements. This
reduces how often we must move a block to another list when we allocate or
free chunks, but still allows us to prefer to put new chunks on fuller
blocks and perhaps allow blocks with fewer chunks to be free'd later
once all their remaining chunks have been pfree'd.
Additionally, we now store a list of "emptyblocks", which are blocks that
no longer contain any allocated chunks. We now keep up to 10 of these
around to avoid having to thrash malloc/free when allocation patterns
continually cause blocks to become free of any allocated chunks only to
allocate more chunks again. Only once we already have 10 of these do we
free the block. This does raise the high water mark for the total memory that
a slab context can consume. It does not seem entirely unreasonable that
we might one day want to make this a property of SlabContext rather than a
compile-time constant. Let's wait and see if there is any evidence to
support that this is required before doing it.
Author: Andres Freund, David Rowley
Tested-by: Tomas Vondra, John Naylor
Discussion: https://postgr.es/m/20210717194333.mr5io3zup3kxahfm@alap3.anarazel.de
This function takes as input a WAL segment name and returns a tuple
made of the segment sequence number (dependent on the WAL segment size
of the cluster) and its timeline, as a thin SQL wrapper around the
existing XLogFromFileName().
This function has multiple uses, like being able to compute an LSN from
a file name and an offset, or finding the timeline of a segment without
having to do some math based on the first eight characters of the
segment name.
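For instance, assuming the function is named pg_split_walfile_name()
(the name is not spelled out here) and the cluster uses the default
16MB segment size:

SELECT segment_number, timeline_id
FROM pg_split_walfile_name('000000010000000100000042');
-- segment_number = 322 (0x1 * 256 + 0x42), timeline_id = 1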
Bump catalog version.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
SCRAM_KEY_LEN was a variable used in the internal routines of SCRAM to
size a set of fixed-sized arrays used in the SHA and HMAC computations
during the SASL exchange or when building a SCRAM password. This had a
hard dependency on SHA-256, reducing the flexibility of SCRAM when it
comes to the addition of more hash methods. A second issue was that
SHA-256 is assumed as the cryptohash method to use all the time.
This commit renames SCRAM_KEY_LEN to a more generic SCRAM_KEY_MAX_LEN,
which is used as the size of the buffers used by the internal routines
of SCRAM. This is aimed at tracking centrally the maximum size
necessary for all the hash methods supported by SCRAM. A global
variable has the advantage of keeping the code in its simplest form,
reducing the need for more alloc/free logic for all the buffers used in
the hash calculations.
A second change is that the key length (SHA digest length) and hash
types are now tracked by the state data in the backend and the frontend,
the common portions being extended to handle these as arguments by the
internal routines of SCRAM. There are a few RFC proposals floating
around to extend the SCRAM protocol, including some to use stronger
cryptohash algorithms, so this lifts some of the existing restrictions
in the code.
The code in charge of parsing and building SCRAM secrets is extended to
rely on the key length and on the cryptohash type used for the exchange,
assuming for now that only SHA-256 is supported. Note
that the mock authentication simply enforces SHA-256.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Jonathan Katz
Discussion: https://postgr.es/m/Y5k3Qiweo/1g9CG6@paquier.xyz
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
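A sketch of intended usage, assuming the output columns are named
subxact_count and subxact_overflowed:

SELECT pg_stat_get_backend_pid(beid) AS pid,
       subxact_count, subxact_overflowed
  FROM pg_stat_get_backend_idset() AS beid,
       LATERAL pg_stat_get_backend_subxact(beid);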
Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.
Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
While fooling with my pet outer-join-variables patch, I discovered
that the test case I added in commit 11086f2f2 no longer demonstrates
what it's supposed to. The idea is to tempt the planner to reverse
the order of the two outer joins, which would leave noplace to
correctly evaluate the WHERE clause that's inserted between them.
Before the addition of the delay_upper_joins mechanism, it would
have taken the bait.
However, subsequent improvements broke the test in two different ways.
First, we now recognize the IS NULL coding pattern as an antijoin, and
we won't re-order antijoins; even if we did, the IS NULL test clauses
get removed so there would be no opportunity for them to misbehave.
Second, the planner now discovers that nested parameterized indexscans
are a lot cheaper than the double hash join it used back in the day,
and that approach doesn't want to re-order the joins anyway. Thus,
in HEAD the test passes even if one dikes out delay_upper_joins.
To fix, change the IS NULL tests to COALESCE clauses, which produce
the same results but the planner isn't smart enough to convert them
to antijoins. It'll still go for parameterized indexscans though,
so drop the index enabling that (don't know why I added that in the
first place), and disable nestloop joining just to be sure.
This time around, add an EXPLAIN to make the choice of plan visible.
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query. We can just move that processing
to before the recursive parse_sub_analyze call, though.
While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.
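The failing case was of this general shape (a sketch), with the
recursive term referring to the CYCLE mark column:

WITH RECURSIVE a(b) AS (
    SELECT 1
  UNION ALL
    SELECT b + 1 FROM a WHERE NOT is_cycle AND b < 10
) CYCLE b SET is_cycle USING path
SELECT * FROM a;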
Per bug #17723 from Vik Fearing. Back-patch to v14 where
the CYCLE feature was added.
Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
The environment variable PG_TEST_PG_UPGRADE_MODE can be set to
override the default transfer mode for the pg_upgrade tests.
(Automatically running the pg_upgrade tests for all supported modes
would be too slow.)
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
This option selects the default transfer mode. Having an explicit
option is handy to make scripts and tests more explicit. It also
makes it easier to talk about a "copy" mode rather than "the default
mode" or something like that, since until now the default mode didn't
have an externally visible name.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
This ancient bit of code was summarily trapping any ereport longjmp
whatsoever and assuming that it must represent an invalid-XML report.
It's not really appropriate to handle OOM-like situations that way:
maybe the input is valid or maybe not, but we couldn't find out.
And it'd be a seriously bad idea to ignore, say, a query cancel
error that way. (Perhaps that can't happen because there is no
CHECK_FOR_INTERRUPTS anywhere within xml_parse, but even if that's
true today it's obviously a very fragile assumption.)
But in the wake of the previous commit, we can drop the PG_TRY
here altogether, and use the soft error mechanism to catch only
the kinds of errors that are legitimate to treat as invalid-XML.
(This is our first use of the soft error mechanism for something
not directly related to a datatype input function. It won't be
the last.)
xml_is_document can be converted in the same way. That one is
not actively broken, because it was checking specifically for
ERRCODE_INVALID_XML_DOCUMENT rather than trapping everything;
but the code is still shorter and probably faster this way.
Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
The key idea here is that xml_parse must distinguish hard errors
from soft errors. We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.
One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8. This might be something to improve
later, but it seems like a pretty low-probability scenario.
Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
When incremental sorts were added in v13 a 1.5x pessimism factor was added
to the cost model. Seemingly this was done because the cost model only
has an estimate of the total number of input rows and the number of
presorted groups. It assumes that the input rows will be evenly
distributed throughout the presorted groups. The 1.5x pessimism factor
was added to slightly reduce the likelihood of incremental sorts being
used in the hope to avoid performance regressions where an incremental
sort plan was picked and turned out slower due to a large skew in the
number of rows in the presorted groups.
An additional quirk with the path generation code meant that we could
consider both a sort and an incremental sort on paths with presorted keys.
This meant that with the pessimism factor, it was possible that we opted
to perform a sort rather than an incremental sort when the given path had
presorted keys.
Here we remove the 1.5x pessimism factor to allow incremental sorts to
have a fairer chance at being chosen against a full sort.
Previously we would generally create a sort path on the cheapest input
path (if that wasn't sorted already) and incremental sort paths on any
path which had presorted keys. This meant that if the cheapest input path
wasn't completely sorted but happened to have presorted keys, we would
create a full sort path *and* an incremental sort path on that input path.
Here we change this logic so that if there are presorted keys, we only
create an incremental sort path, and create sort paths only when a full
sort is required.
Both the removal of the cost pessimism factor and the changes made to the
path generation make it more likely that incremental sorts will now be
chosen. That, of course, as with teaching the planner any new tricks,
means an increased likelihood that the planner will perform an incremental
sort when it's not the best method. Our standard escape hatch for these
cases is an enable_* GUC. enable_incremental_sort already exists for
this.
This came out of a report by Pavel Luzanov where he mentioned that the
master branch was choosing to perform a Seq Scan -> Sort -> Group
Aggregate for his query with an ORDER BY aggregate function. The v15 plan
for his query performed an Index Scan -> Group Aggregate; of course, the
aggregate performed the final sort internally in nodeAgg.c for the
aggregate's ORDER BY. The ideal plan would have been to use the index,
which provided partially sorted input then use an incremental sort to
provide the aggregate with the sorted input. This was not being chosen
due to the pessimism in the incremental sort cost model, so here we remove
that and rationalize the path generation so that sort and incremental sort
plans don't have to needlessly compete. We assume that it's senseless
to ever use a full sort on a given input path where an incremental sort
can be performed.
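As mentioned above, the existing escape hatch applies; a sketch:

SET enable_incremental_sort = off;
-- then re-run EXPLAIN on the regressed query to compare plan choices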
Reported-by: Pavel Luzanov
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/9f61ddbf-2989-1536-b31e-6459370a6baa%40postgrespro.ru
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing. d2d8a229b, which added incremental sorts
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan. This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that. It seems based on the name
of the PREPAREd statement that the intention here is that the query
produces a seqscan plan.
The reason this test has become broken seems to be due to how the test was
originally coded. The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering. Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going to
be flakey as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR. Here we get rid of
the casts that we're using to try to trick the planner into a seqscan and
instead toggle enable_seqscan as and when required to get the desired
plan.
Additionally, rename a few things in the test and add some additional
wording to the comments to try and make it more clear in the future what
we expect this test to be doing.
Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
The building of command completion tags could often be seen showing up in
profiles when running high tps workloads.
The query completion tags were being built with snprintf, which is slow at
the best of times when compared with more manual ways of formatting
strings. Here we introduce BuildQueryCompletionString() to do this job
for us. We also now store the completion tag's strlen in the
CommandTagBehavior struct so that we can quickly memcpy this number of
bytes into the completion tag string. Appending the rows affected is done
via pg_ulltoa_n. BuildQueryCompletionString returns the length of the
built string. This saves us having to call strlen to figure out how many
bytes to pass to pq_putmessage().
Author: David Rowley, Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAHoyFK-Xwqc-iY52shj0G+8K9FJpse+FuZ36XBKy78wDVnd=Qg@mail.gmail.com
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input. (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.) Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well. Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.
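For example, the edge case mentioned above can now surface as a soft
error rather than a thrown one (a sketch using the validity-test
functions added in this series):

SELECT pg_input_is_valid('[1,2147483647]', 'int4range');
-- false: canonicalization must increment the inclusive upper bound
-- past the maximum int4 value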
Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.
Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level. This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.
Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
35f059e9bd put the provariadic sanity
check into type_sanity.sql, even though it's not about types, and
moreover in the middle of some connected test group, which makes it
all very confusing. Move it to opr_sanity.sql, where it is in better
company.
Convert assorted internal-ish datatypes, namely aclitemin,
int2vectorin, oidin, oidvectorin, pg_lsn_in, pg_snapshot_in,
and tidin to the new style.
(Some others you might expect to find in this group, such as
cidin and xidin, need no changes because they never throw
errors at all. That seems a little cheesy ... but it is not in
the charter of this patch series to add new error conditions.)
Amul Sul, minor mods by me
Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
Convert box_in, circle_in, line_in, lseg_in, path_in, point_in,
and poly_in to the new style.
line_in still throws hard errors for overflows/underflows that can occur
when the input is specified as two points rather than in the canonical
"Ax + By + C = 0" style. I'm not too concerned about that: it won't be
reached in normal dump/restore cases, and it's fairly debatable that
such conversion should ever have been made part of a type input function
in the first place. But in any case, I don't want to extend the soft
error conventions into float.h without more discussion than this patch
has had.
Amul Sul, minor mods by me
Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
Add support for hexadecimal, octal, and binary integer literals:
0x42F
0o273
0b100101
per SQL:202x draft.
This adds support in the lexer as well as in the integer type input
functions.
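For example:

SELECT 0x42F AS hex, 0o273 AS octal, 0b100101 AS binary;
-- returns 1071, 187 and 37 respectively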
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
This referred to the size of the buffers for k_ipad and k_opad in HMAC
computations. This is unused since e6bdfd9, where SCRAM has switched to
the cryptohash routines for its HMAC calculations rather than its own
maths.
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/Y5gGMjXhyp0oK0mH@paquier.xyz
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock(). This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.
Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.
To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it. Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".
Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems. This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock. (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)
For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock. This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.
Per report from Peter Eisentraut. As before, back-patch to all
supported branches (which sadly no longer includes v10).
Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
Instead of half a dozen mostly-duplicate ExecGrant_Foo() functions,
write one common function ExecGrant_generic() that can handle most of
them. We already have all the information we need, such as which
system catalog corresponds to which catalog table and which column is
the ACL column.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/22c7e802-4e7d-8d87-8b71-cba95e6f4bcf%40enterprisedb.com
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum. That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.
setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums. push_path() had the same issue. This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.
Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files. There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is. Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array. Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do. In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.
Per report from exe-dealer@yandex.ru. Patch by me, but thanks to
David Johnston for initial investigation. Back-patch to v14 where
jsonb subscripting was introduced.
Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
If sendFileWithContent were used to send a file larger than the
bbsink buffer size, this would result in corruption. The only
files that are sent via sendFileWithContent are the backup label
file, the tablespace map file, and .done files for WAL segments
included in the backup. Of these, it seems that only the
tablespace_map file can become large enough to cause a problem,
and then only if you have a lot of tablespaces. If you do have
that situation, you might end up with a corrupted
tablespace_map file, which would be bad.
My commit bef47ff85d introduced
this problem.
Report and patch by Antonin Houska.
Discussion: http://postgr.es/m/15764.1670528645@antos
During ALTER TABLE execution, when prep-time handling of subcommands of
certain types determines that execution-time handling requires recursion,
it signals this by changing the subcommand type to a special value.
This can be done in a simpler way by using a separate flag introduced by
commit ec0925c22a, so do that.
Catversion bumped. It's not clear to me that ALTER TABLE subcommands
are stored anywhere in catalogs (CREATE FUNCTION rejects it in BEGIN
ATOMIC function bodies), but we do have both write and read support for
them, so be safe.
Discussion: https://postgr.es/m/20220929090033.zxuaezcdwh2fgfjb@alvherre.pgsql
3d14e17 has added support for this query, but psql was not able to
tab-complete it. Spotted while working on a different patch in the same
area.
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/Y3hw7yvG0VwpC1jq@paquier.xyz
This is straightforward as far as it goes. However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints. Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check. Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating. For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.
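A sketch of the resulting behavior (domain name illustrative):

CREATE DOMAIN positive_int AS int CHECK (VALUE > 0);
SELECT pg_input_is_valid('junk', 'positive_int'); -- false, bad base-type input
SELECT pg_input_is_valid('-5', 'positive_int');   -- false, constraint
                                                  -- violation reported softly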
Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.
In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase. The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded. We should improve that, but it seems like a
separable project.
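For instance, a parse-phase failure is now reported softly:

SELECT pg_input_is_valid('{"a":1', 'jsonb');       -- false
SELECT pg_input_error_message('{"a":1', 'jsonb');  -- the parser's message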
Andrew Dunstan and Tom Lane
Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
Formerly, semantic action functions for the JSON parser returned void,
so that there was no way for them to affect the parser's behavior.
That means in particular that they can't force an error exit except by
longjmp'ing. That won't do in the context of our project to make input
functions return errors softly. Hence, change them to return the same
JsonParseErrorType enum value as the parser itself uses. If an action
function returns anything besides JSON_SUCCESS, the parse is abandoned
and that error code is returned.
Action functions can thus easily return the same error conditions that
the parser already knows about. As an escape hatch for expansion, also
invent a code JSON_SEM_ACTION_FAILED that the core parser does not know
the exact meaning of. When returning this code, an action function
must use some out-of-band mechanism for reporting the error details.
This commit simply makes the API change and causes all the existing
action functions to return JSON_SUCCESS, so that there is no actual
change in behavior here. This is long enough and boring enough that
it seemed best to commit it separately from the changes that make
real use of the new mechanism.
In passing, remove a duplicate assignment of
transform_string_values_scalar.
Discussion: https://postgr.es/m/1436686.1670701118@sss.pgh.pa.us
We chose a specific wording of the not-implemented errors for
pseudotype I/O functions and other cases where there's little
value in implementing input and/or output. gtsvectorin never
got that memo though, nor did most of contrib. Make these all
fall in line, mostly because I'm a neatnik but also to remove
unnecessary translatable strings.
gbtreekey_in needs a bit of extra love since it supports
multiple SQL types. Sadly, gbtreekey_out doesn't have the
ability to do that, but I think it's unreachable anyway.
Noted while surveying datatype input functions to see what we
have left to fix.
Apparently anymultirange_in was written before we converted all
these pseudotype input functions to use a common macro, and it didn't
get fixed before committing. Sloppy merging probably explains its
unintuitive ordering, too, so rearrange.
Noted while surveying datatype input functions to see what we
have left to fix. I'm inclined to leave the pseudotypes as
throwing hard errors, because it's difficult to see a reason why
anyone would need something else. But in any case, if we want
to change that, we shouldn't have to change multiple copies of
the code.
9d9c02ccd added code to allow WindowAgg to take some shortcuts when a
monotonic WindowFunc reached some value that it could never come back
from due to the function's monotonic nature. That commit added a
runCondition field to WindowClause to store the condition; once it
becomes false we can start taking shortcuts in nodeWindowAgg.c.
Here we fix an issue where subquery pullups didn't properly update the
runCondition's Vars to reference the new query level.
Here we also add a missing call to preprocess_expression() for the
WindowClause's runCondition. The WindowFuncs in the targetlist will have
had this processing done, so we must also do it for the WindowFuncs in
the runCondition so that they can be correctly found in the targetlist
during setrefs.c processing.
Bug: #17709
Reported-by: Alexey Makhmutov
Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/17709-4f557160e3e8ee9a@postgresql.org
Backpatch-through: 15, where 9d9c02ccd was introduced
Buildfarm member wrasse has been complaining about empty declarations
as an effect of 8018ffb and 83a1a1b due to extra semicolons.
While at it, also remove the last backslash of the macro definitions,
which caused more lines to be eaten by them than necessary, per comment
from Tom Lane.
Reported-by: Tom Lane, and buildfarm member wrasse
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/1188769.1670640236@sss.pgh.pa.us
Replace the error trapping scheme introduced in 5bc450629 with our
shiny new errsave/ereturn mechanism. This doesn't have any real
functional impact (although I think that the new coding is able
to report a few more errors softly than v15 did). And I doubt
there's any measurable performance difference either. But this
gets rid of an ad-hoc, one-of-a-kind design in favor of a mechanism
that will be widely used going forward, so it should be a net win
for code readability.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.
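For example (using the validity-test functions from this patch series):

SELECT pg_input_is_valid('2023-02-30', 'date');       -- false
SELECT pg_input_error_message('25:00:00', 'timetz');  -- out-of-range error,
                                                      -- reported softly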
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Pay down some ancient technical debt (dating to commit 022fd9966):
fix a couple of places in datetime parsing that were throwing
ereport's immediately instead of returning a DTERR code that could be
interpreted by DateTimeParseError. The reason for that was that there
was no mechanism for passing any auxiliary data (such as a zone name)
to DateTimeParseError, and these errors seemed to really need it.
Up to now it didn't matter that much just where the error got thrown,
but now we'd like to have a hard policy that datetime parse errors
get thrown from just the one place.
Hence, invent a "DateTimeErrorExtra" struct that can be used to
carry any extra values needed for specific DTERR codes. Perhaps
in the future somebody will be motivated to use this to improve
the specificity of other DateTimeParseError messages, but for now
just deal with the timezone-error cases.
This is on the way to making the datetime input functions report
parse errors softly; but it's really an independent change, so
commit separately.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
More could be done in this line, but I just grabbed some low-hanging
fruit. Principal objective was to remove the need for several ugly
unconstify() usages in formatting.c.
This patch converts the input functions for bool, int2, int4, int8,
float4, float8, numeric, and contrib/cube to the new soft-error style.
array_in and record_in are also converted. There's lots more to do,
but this is enough to provide proof-of-concept that the soft-error
API is usable, as well as reference examples for how to convert
input functions.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK. The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.
Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.
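For example:

SELECT pg_input_is_valid('42', 'integer');         -- true
SELECT pg_input_is_valid('forty-two', 'integer');  -- false
SELECT pg_input_error_message('forty-two', 'integer');
-- invalid input syntax for type integer: "forty-two"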
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Postgres' standard mechanism for reporting errors (ereport() or elog())
is used for all sorts of error conditions. This means that throwing
an exception via ereport(ERROR) requires an expensive transaction or
subtransaction abort and cleanup, since the exception catcher dare not
make many assumptions about what has gone wrong. There are situations
where we would rather have a lighter-weight mechanism for dealing
with errors that are known to be safe to recover from without a full
transaction cleanup. This commit creates infrastructure to let us
adapt existing error-reporting code for that purpose. See the
included documentation changes for details. Follow-on commits will
provide test code and usage examples.
The near-term plan is to convert most if not all datatype input
functions to report invalid input "softly". This will enable
implementing some SQL/JSON features cleanly and without the cost
of subtransactions, and it will also allow creating COPY options
to deal with bad input without cancelling the whole COPY.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks also to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
The USER SET flag specifies that the variable should be set on behalf of an
ordinary role. That lets ordinary roles set placeholder variables, whose
permission requirements are not known yet. Such a value wouldn't be used if
the variable finally turns out to require superuser privileges.
The new flags are stored in the pg_db_role_setting.setuser array. Catversion
is bumped.
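A sketch of the syntax, assuming the grammar accepts USER SET trailing
the value (role and parameter names are illustrative):

-- an ordinary role stores a value for a placeholder GUC whose eventual
-- permission requirements are not known yet
ALTER ROLE regress_worker SET myext.trace = 'on' USER SET;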
This commit is inspired by the previous work by Steve Chavez.
Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
Commit 7103ebb7aa added support for MERGE, which included support for
inheritance hierarchies, but didn't document the fact that ONLY could
be specified before the source and/or target tables to exclude tables
inheriting from the tables specified.
Update merge.sgml to mention this, and while at it, add some
regression tests to cover it.
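For example (table names illustrative):

MERGE INTO ONLY target t
USING ONLY source s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
-- rows of tables inheriting from "target" or "source" are not considered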
Dean Rasheed, reviewed by Nathan Bossart.
Backpatch to 15, where MERGE was added.
Discussion: https://postgr.es/m/CAEZATCU0XM-bJCvpJuVRU3UYNRqEBS6g4-zH%3Dj9Ye0caX8F6uQ%40mail.gmail.com
Make the argument types of the File API match stdio better:
- Change the data buffer to void *, from char *.
- Change FileWrite() data buffer to const on top of that.
- Change amounts to size_t, from int.
In passing, change the FilePrefetch() amount argument from int to
off_t, to match the underlying posix_fadvise().
Discussion: https://www.postgresql.org/message-id/flat/11dda853-bb5b-59ba-a746-e168b1ce4bdb%40enterprisedb.com
In commit ffbb7e65a, I added a ModifyTableState member to ResultRelInfo
to save the owning ModifyTableState for use by nodeModifyTable.c when
performing batch inserts, but as pointed out by Tom Lane, that changed
the array stride of es_result_relations, and that would break any
previously-compiled extension code that accesses that array. Fix by
removing that member from ResultRelInfo and instead adding a List member
at the end of EState to save such ModifyTableStates.
Per report from Tom Lane. Back-patch to v14, like the previous commit;
I chose to apply the patch to HEAD as well, to make back-patching easy.
Discussion: http://postgr.es/m/4065383.1669395453%40sss.pgh.pa.us
After restart, we don't perform streaming of an in-progress transaction if
it was previously decoded and confirmed by the client. To achieve that we
were comparing the END location of the WAL record being decoded with the
WAL location we have already decoded and confirmed by the client. While
decoding the commit record, to decide whether to process and send the
complete transaction, we compare its START location with the WAL location
we have already decoded and confirmed by the client. Now, if we need to
queue some change in the transaction while decoding the commit record
(e.g. snapshot), it is possible that we decide to stream the transaction
but later commit processing decides to skip it. In such a case, we would
needlessly send the changes and then, upon deciding to skip the
transaction, send a stream abort.
We also sometimes decide to stream the changes when we actually just need
to process them locally like a change for invalidations. This will lead us
to send empty streams. To avoid this, while queuing each change for
decoding, we remember whether the transaction has any change that actually
needs to be sent downstream and use that information later to decide
whether to stream the transaction or not.
Note, we can't avoid all cases where we have to send empty streams like
the case where the plugin later decides that the change is not
publishable. However, we will no longer need to send stream_abort when we
skip sending a particular transaction.
Author: Dilip Kumar
Reviewed-by: Hou Zhijie, Ashutosh Bapat, Shi yu, Amit Kapila
Discussion: https://postgr.es/m/CAFiTN-tHK=7LzfrPs8fbT2ksrOJGQbzywcgXst2bM9-rJJAAUg@mail.gmail.com
To run all tests that support running against an existing server:
$ meson test --setup running
To run just the main pg_regress tests against an existing server:
$ meson test --setup running regress-running/regress
To ensure the 'running' setup continues to work, test it as part of the
FreeBSD CI task.
Discussion: https://postgr.es/m/CAH2-Wz=XDQcmLoo7RR_i6FKQdDmcyb9q5gStnfuuQXrOGhB2sQ@mail.gmail.com
Combine some duplicated code stanzas by creating small functions.
Most of these duplications arose at a time when I wouldn't have
trusted C compilers to auto-inline small functions intelligently,
but they're probably poor practice now. Similarly split out some
bits that aren't actually duplicative as the code stands, but would
become so after an upcoming patch to add another error-handling
code path.
Take the opportunity to add some lengthier comments about what
we're doing here, too. Re-order one function that seemed not
very well-placed.
Patch by me, per suggestions from Andres Freund.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Generate a Makefile.global that's complete enough for PGXS to work for some
extensions. It is likely that this compatibility layer will not suffice for
every extension or on every platform - we can expand it over time.
This allows extensions to use a single buildsystem across all the supported
postgres versions. Once all supported PG versions support meson, we can remove
the compatibility layer.
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
Previously export_dynamic was set in src/makefiles/Makefile.$port. For solaris
this required exporting with_gnu_ld. The determination of with_gnu_ld would be
nontrivial to copy for meson PGXS compatibility. It's also nice to delete
libtool.m4.
This uses -Wl,--export-dynamic on all platforms, previously all platforms but
FreeBSD used -Wl,-E. The likelihood of a name conflict seems lower with the
longer spelling.
Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
The same code pattern is repeated 21 times for int64 counters (0 for
missing entry) and 5 times for doubles (0 for missing entry) on database
entries. This code is switched to use macros for the basic code
instead, shaving a few hundred lines of originally-duplicated code
patterns. The function names remain the same, but some fields of
PgStat_StatDBEntry have to be renamed to cope with the new style.
This is in the same spirit as 83a1a1b.
Author: Michael Paquier
Reviewed-by: Nathan Bossart, Bertrand Drouvot
Discussion: https://postgr.es/m/Y46stlxQ2LQE20Na@paquier.xyz
There likely are further issues, but as evidenced by the CI task proposed by
Justin in the referenced thread, this suffices to build and run basic tests in
Cygwin (some fixes for the test infrastructure are needed, but that's
independent of the meson aspect).
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20221021034040.GT16921@telsasoft.com
Currently, information about the permissions to be checked on relations
mentioned in a query is stored in their range table entries. So the
executor must scan the entire range table looking for relations that
need to have permissions checked. This can make the permission checking
part of the executor initialization needlessly expensive when many
inheritance children are present in the range table. While the
permissions need not be checked on the individual child relations, the
executor still must visit every range table entry to filter them out.
This commit moves the permission checking information out of the range
table entries into a new node called RTEPermissionInfo. Every
top-level (inheritance "root") RTE_RELATION entry in the range table
gets one and a list of those is maintained alongside the range table.
This new list is initialized by the parser when initializing the range
table. The rewriter can add more entries to it as rules/views are
expanded. Finally, the planner combines the lists of the individual
subqueries into one flat list that is passed to the executor for
checking.
To make it quick to find the RTEPermissionInfo entry belonging to a
given relation, RangeTblEntry gets a new Index field 'perminfoindex'
that stores the corresponding RTEPermissionInfo's index in the query's
list of the latter.
ExecutorCheckPerms_hook has gained another List * argument; the
signature is now:
typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
List *rtePermInfos,
bool ereport_on_violation);
The first argument is no longer used by any in-core uses of the hook,
but we leave it in place because there may be other implementations that
do. Implementations should likely scan the rtePermInfos list to
determine which operations to allow or deny.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
9d9c02ccd added window "run conditions", which allows the evaluation of
monotonic window functions to be skipped when the run condition is no
longer true. Prior to this commit, once the run condition was no longer
true and we stopped evaluating the window functions, we simply left
the ecxt_aggvalues[] and ecxt_aggnulls[] arrays alone, storing whatever
value was stored there the last time the window function was evaluated.
Leaving a stale value in there isn't really a problem on 64-bit builds as
all of the window functions which we recognize as monotonic return
int8, which is passed by value on 64-bit builds. However, on 32-bit
builds, this was a problem as the value stored in the ecxt_aggvalues[]
element would be a by-ref value and it would be pointing to some memory
which would get reset once the tuple context is destroyed. Since the
WindowAgg node will output these values in the resulting tupleslot, this
could be problematic for the top-level WindowAgg node which must look at
these values to filter out the rows that don't meet its filter condition.
Here we fix this by just zeroing the ecxt_aggvalues[] and setting the
ecxt_aggnulls[] array to true when the run condition first becomes false.
This results in the WindowAgg's output having NULLs for the WindowFunc's
columns rather than stale values or pointers to possibly freed
memory. These tuples with the NULLs can only make it as far as the
top-level WindowAgg node before they're filtered out. To ensure that
these tuples *are* always filtered out, we now insist that OpExprs making
up the run condition are strict OpExprs. Currently, all the window
functions which the planner recognizes as monotonic return INT8 and the
operator which is used for the run condition must be a member of a btree
opclass. In reality, these restrictions exclude nothing that's built-in
to Postgres and are unlikely to exclude anyone's custom operators due to
the requirement that the operator is part of a btree opclass. It would be
unusual if those were not strict.
Reported-by: Sergey Shinderuk, using valgrind
Reviewed-by: Richard Guo, Sergey Shinderuk
Discussion: https://postgr.es/m/29184c50-429a-ebd7-f1fb-0589c6723a35@postgrespro.ru
Backpatch-through: 15, where 9d9c02ccd was added
The same code pattern is repeated 17 times for int64 counters (0 for
missing entry) and 5 times for timestamps (NULL for missing entry) on
table entries. This code is switched to use a macro for the basic code
instead, shaving a few hundred lines of originally-duplicated code. The
function names remain the same, but some fields of PgStat_StatTabEntry
have to be renamed to cope with the new style.
Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20221204173207.GA2669116@nathanxps13
Passing a NULL snapshot (InvalidSnapshot) is going to work but only as long
as the index can't find any matching rows. This can be confusing for
extension authors, so add an explicit check for this argument. The check
is implemented with Assert() in order to avoid overhead in release builds.
Reported-by: Sven Klemm
Discussion: https://postgr.es/m/CAJ7c6TPxitD4vbKyP-mpmC1XwyHdPPqvjLzm%2BVpB88h8LGgneQ%40mail.gmail.com
Author: Aleksander Alekseev
Reviewed-by: Pavel Borisov
By default, the contents generated by the custom and directory dump
formats are compressed. However, the existing test facility would consider
a restore successful regardless of whether the dumped output was actually
compressed, as nothing verified that compression had been applied.
This commit implements a portable way to check the contents of the
custom and directory dump formats:
- glob_patterns, which can be defined for each test as an array of
glob()-compilable strings, tracking the contents that should or should
not be compressed. While this is useful to make sure that the table
data is compressed, this also checks that blobs.toc and toc.dat are
never compressed.
- command_like, to execute a command on a dump and check its generated
output. This is used here in combination with pg_restore -l to check
whether the dumps have been compressed, depending on whether the build
supports gzip.
This hole in the tests came up while working on 5e73a60, where
compression has to be applied by default, if available, for both dump
formats.
The idea of glob_patterns comes from me, and Georgios has come up with
the design for command_like.
Author: Georgios Kokolatos, Michael Paquier
Discussion: https://postgr.es/m/DQn4czCWR1rcbGPLL7p3LfEr5-kGmlySm-H05VgroINdikvhtS5r9EdI6b8D8sjnbKdJ09k-cxs2AqijBeHAWk9Q8gvEAxPRHuLRhwONcGc=@pm.me
Keeping the SQL commands that initdb runs in string arrays before
feeding them to PG_CMD_PUTS() seems unnecessarily verbose and
inflexible. In some cases, the array only has one member. In other
cases, one might want to use PG_CMD_PRINTF() instead, to parametrize a
command, but that would require breaking up the loop or using
workarounds like replace_token(). Unwind all that; it's much simpler
that way.
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/2c50823b-f453-bb97-e38b-34751c51dcdf%40enterprisedb.com
A couple of places weren't up to speed for this. By sheer good
luck, we didn't fail but just selected a non-memoized join plan,
at least in the test case we have. Nonetheless, it's a bug,
and I'm not quite sure that it couldn't have worse consequences
in other examples. So back-patch to v14 where Memoize came in.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs48GkNom272sfp0-WeD6_0HSR19BJ4H1c9ZKSfbVnJsvRg@mail.gmail.com
For historical reasons, pg_dump refers to large objects as "BLOBs".
This term is not used anywhere else in PostgreSQL, and it also means
something different in the SQL standard and other SQL systems.
This patch renames internal functions, code comments, documentation,
etc. to use the "large object" or "LO" terminology instead. There is
no functionality change, so the archive format still uses the name
"BLOB" for the archive entry. Additional long command-line options
are added with the new naming.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com
The error messages reported on failures while reading or
validating the header of a WAL page currently include only the offset of
the page but not the LSN referring to the page, requiring an extra
step to compute it when looking at the surroundings with pg_waldump or
similar. Adding this information costs a bit in translation, but also
eases debugging.
Author: Bharath Rupireddy
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
As pointed out by Dean Rasheed, we really should be using tmp >
-(PG_INTNN_MIN / 10) rather than tmp > (PG_INTNN_MAX / 10) for checking
for overflows in the accumulation in the pg_strtointNN functions. This
does happen to be the same number when dividing by 10, but there is a
pending patch which adds other bases and this is not the same number if we
were to divide by 2 rather than 10, for example. If the base-2 parsing
were to follow this example then we could accidentally think a string
containing the value of PG_INT32_MIN was an overflow in pg_strtoint32.
Clearly that shouldn't overflow.
This does not fix any actual live bugs, only some bad examples of overflow
checks for future bases.
Reported-by: Dean Rasheed
Discussion: https://postgr.es/m/CAEZATCVEtwfhdm-K-etZYFB0=qsR0nT6qXta_W+GQx4RYph1dg@mail.gmail.com
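To make the difference concrete, worked arithmetic for int16 and base 2:
PG_INT16_MAX / 2 = 16383, whereas -(PG_INT16_MIN / 2) = 16384. Accumulating
the digits of PG_INT16_MIN (absolute value 32768, binary 1 followed by 15
zeros) reaches 16384 just before the final digit; a bound of 16383 would
wrongly report overflow there, while the bound of 16384 allows
16384 * 2 + 0 = 32768, which is valid for a negative result. For base 10
the two bounds coincide (both 3276 for int16), which is why the old check
happened to work.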
Just because I'm a neatnik, and I'm currently working on
code in this area. It annoys me to not be able to pgindent
my patches without working around unrelated changes.
It neglected to recurse to the subpath, meaning you'd get back
a path identical to the input. This could produce wrong query
results if the omission meant that the subpath fails to enforce
some join clause it should be enforcing. We don't have a test
case for this at the moment, but the code is obviously broken
and the fix is equally obvious. Back-patch to v14 where
Memoize was introduced.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_R=ORpz=Lkn2q3ebPC5EuWyfZF+tmfCPVLBVK5W39mHA@mail.gmail.com
These two functions failed to cover MaterialPath. That's not a
fatal problem, but we can generate better plans in some cases
if we support it.
Tom Lane and Richard Guo
Discussion: https://postgr.es/m/1854233.1669949723@sss.pgh.pa.us
We might fail to generate a partitionwise join, because
reparameterize_path_by_child() does not support all path types.
This should not be a hard failure condition: we should just fall back
to a non-partitioned join. However, generate_partitionwise_join_paths
did not consider this possibility and would emit the (misleading)
error "could not devise a query plan for the given query" if we'd
failed to make any paths for a child join. Fix it to give up on
partitionwise joining if so. (The accepted technique for giving up
appears to be to set rel->nparts = 0, which I find pretty bizarre,
but there you have it.)
I have not added a test case because there'd be little point:
any omissions of this sort that we identify would soon get fixed
by extending reparameterize_path_by_child(), so the test would stop
proving anything. However, right now there is a known test case based
on failure to cover MaterialPath, and with that I've found that this
is broken in all supported versions. Hence, patch all the way back.
Original report and patch by me; thanks to Richard Guo for
identifying a test case that works against committed versions.
Discussion: https://postgr.es/m/1854233.1669949723@sss.pgh.pa.us
Experiments have shown that modern versions of both gcc and clang are
unable to fully optimize the multiplication by 10 that we're doing in the
pg_strtointNN functions. Both compilers seem to be making use of "imul",
which is not the most efficient way to multiply by 10. This seems to be
due to the overflow checking that we're doing. Without the overflow
checks, both those compilers switch to a more efficient method of
multiplying by 10. In the absence of overflow concerns, integer
multiplication by 10 can be done by bit-shifting left 3 places to multiply
by 8 and then adding the original value twice.
To allow compilers this flexibility, here we adjust the code so that we
accumulate the number as an unsigned version of the type and remove the
use of pg_mul_sNN_overflow() and pg_sub_sNN_overflow(). The overflow
checking can be done simply by checking if the accumulated value has gone
beyond a 10th of the maximum *signed* value for the given type. If it has
then the accumulation of the next digit will cause an overflow. After
this is done, we do a final overflow check before converting the unsigned
version of the number back to its signed counterpart.
Testing has shown about an 8% speedup of a COPY into a table containing 2
INT columns.
Author: David Rowley, Dean Rasheed
Discussion: https://postgr.es/m/CAApHDvrL6_+wKgPqRHr7gH_6xy3hXM6a3QCsZ5ForurjDFfenA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrdYByjfj-=WbmVNFgmVZg88-dE7heukw8p55aJ+W=qxQ@mail.gmail.com
I wrote this to provide a home for a planned discussion of error
return conventions for non-error-throwing functions. But it seems
useful as documentation of existing code no matter what becomes of
that proposal, so commit separately.
When updating a relation with a rule whose action performed an INSERT
from a multi-row VALUES list, the rewriter might skip processing the
VALUES list, and therefore fail to replace any DEFAULTs in it. This
would lead to an "unrecognized node type" error.
The reason was that RewriteQuery() assumed that a query doing an
INSERT from a multi-row VALUES list would necessarily only have one
item in its fromlist, pointing to the VALUES RTE to read from. That
assumption is correct for the original query, but not for product
queries produced for rule actions. In such cases, there may be
multiple items in the fromlist, possibly including multiple VALUES
RTEs.
What is required instead is for RewriteQuery() to skip any RTEs from
the product query's originating query, which might include one or more
already-processed VALUES RTEs. What's left should then include at most
one VALUES RTE (from the rule action) to be processed.
Patch by me. Thanks to Tom Lane for reviewing.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
This is present in the declaration for ReadDataFromArchive, so we'd better
have it in the definition too in order to keep compilers from complaining
about the mismatched function signatures.
When the relkind of a relcache entry changes, because a table is converted
into a view, pgstats can get confused in 15+, leading to crashes or
assertion failures.
For HEAD, Tom fixed this in b23cd185fd, by removing support for converting
a table to a view, removing the source of the inconsistency. This commit
just adds an assertion that a relcache entry's relkind does not change,
just in case we end up with another case of that in the future. As there
are no cases of changing relkind anymore, we can't add a test verifying
that it's handled correctly.
For 15, fix the problem by not maintaining the association with the old pgstat
entry when the relkind changes during relcache invalidation processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.
No known problem exists in 11-14, so just add the test there.
Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
Ignore trailing spaces for non-deterministic collations when
hashing.
The previous behavior could lead to tuples falling into the wrong
partitions when hash partitioning is combined with the BPCHAR type and
a non-deterministic collation. Fortunately, it did not affect hash
indexes, because hash indexes do not use extended hash functions.
Decline to backpatch, per discussion.
Discussion: https://postgr.es/m/eb83d0ac7b299eb08f9b900dd08a5a0c5d90e517.camel@j-davis.com
Reviewed-by: Richard Guo, Tom Lane
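A sketch of the affected scenario (hypothetical names; 'und-u-ks-level2'
is a case-insensitive ICU locale):
CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2',
    deterministic = false);
CREATE TABLE ev (code char(8) COLLATE ci) PARTITION BY HASH (code);
CREATE TABLE ev_p0 PARTITION OF ev FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE ev_p1 PARTITION OF ev FOR VALUES WITH (MODULUS 2, REMAINDER 1);
-- bpchar pads values with trailing spaces; before this fix the padding
-- could influence the hash, so values comparing equal under the collation
-- could be routed to different partitions.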
It seems better to deal with this by explicit annotations on the
fields in question, instead of magic knowledge embedded in the
script. While that creates a risk-of-omission from failing to
annotate fields, the preceding commit should catch any such
oversights.
Discussion: https://postgr.es/m/263413.1669513145@sss.pgh.pa.us
Check that if we generate a call to copy, compare, write, or read
a specific node type, that node type does have the appropriate
support function. (This doesn't protect against trying to invoke
nonexistent code when considering generic field types such as
"Node *", but it seems like a useful check anyway.)
Check that array_size() refers to a field appearing earlier in
the struct. Aside from catching obvious errors like a misspelled
field name, this protects against a more subtle mistake: if the
size field appears later in the struct than the array field, then
compare and read functions would misbehave. There is actually
exactly that situation in PlannerInfo, but it's okay since we
do not need compare or read functionality for that (today anyway).
Discussion: https://postgr.es/m/263413.1669513145@sss.pgh.pa.us
Some options of these commands need to be able to identify the start
of the function body within the output of pg_get_functiondef().
It used to be that the body always began with "AS", but since the
introduction of new-style SQL functions, it might also start with
"BEGIN" or "RETURN". Fix that on the psql side, and add some
regression tests.
Noted by me awhile ago, but I didn't do anything about it.
Thanks to David Johnston for a nag.
Discussion: https://postgr.es/m/AM9PR01MB8268D5CDABDF044EE9F42173FE8C9@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
Up to now we have allowed manual creation of an ON SELECT rule on
a table to convert it into a view. That was never anything but a
horrid, error-prone hack though. pg_dump used to rely on that
behavior to deal with cases involving circular dependencies,
where a dependency loop could be broken by separating the creation
of a view from installation of its ON SELECT rule. However, we
changed pg_dump to use CREATE OR REPLACE VIEW for that in commit
d8c05aff5 (which was later back-patched as far as 9.4), so there's
not a good argument anymore for continuing to support the behavior.
The proximate reason for axing it now is that we found that the
new statistics code has failure modes associated with the relkind
change caused by this behavior. We'll patch around that in v15,
but going forward it seems like a better idea to get rid of the
need to support relkind changes.
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
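For reference, the now-disallowed hack looked roughly like this
(hypothetical table name); creating an ON SELECT rule named "_RETURN"
used to silently turn the table into a view:
CREATE TABLE t (a int);
CREATE RULE "_RETURN" AS ON SELECT TO t DO INSTEAD SELECT 1 AS a;
-- the second statement is now rejected instead of converting t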
When checking for roles with a pg_ prefix in <= 9.5 servers, report
all found roles in a text file, as is done for other checks, instead
of just reporting that they exist. Role names are printed with their
OIDs to match other reports.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/4BB2735C-C347-4CF7-AFB1-8573BE930066@yesql.se
ri_RootToPartitionMap is currently only initialized for tuple routing
target partitions, though a future commit will need the ability to use
it even for the non-partition child tables, so make adjustments to
decouple it from the partitioning code.
Also, make it lazily initialized via ExecGetRootToChildMap(), making
that function its preferred access path. Existing third-party code
accessing it directly should no longer do so; consequently, it's been
renamed to ri_RootToChildMap, which also makes it consistent with
ri_ChildToRootMap.
ExecGetRootToChildMap() houses the logic of setting the map appropriately
depending on whether a given child relation is a partition or not.
To support this, also add a separate entry point for TupleConversionMap
creation that receives an AttrMap. No new code here, just split an
existing function in two.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqEYUhDXSK5BTvG_xk=eaAEJCD4GS3C6uH7ybBvv+Z_Tmg@mail.gmail.com
For updates and deletes, we were not honoring the column list for old
tuple values while sending tuple data via pgoutput. This results in
pgoutput emitting more columns than expected.
This is not a problem for built-in logical replication as we simply ignore
additional columns based on the relation information sent previously which
didn't have those columns. However, some other users of pgoutput plugin
may expect the columns as per the column list. Also, sending extra columns
unnecessarily consumes network bandwidth defeating the purpose of the
column list feature.
Reported-by: Gunnar Morling
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/CADGJaX9kiRZ-OH0EpWF5Fkyh1ZZYofoNRCrhapBfdk02tj5EKg@mail.gmail.com
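For example (hypothetical names), given a column-list publication such as:
CREATE PUBLICATION pub_users FOR TABLE users (id, name);  -- id is the PK
replicated UPDATE and DELETE messages now carry only id and name in the
old tuple as well, rather than every column of users.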
The only use of $(LD) in Makefiles is for AIX, to generate the export file for
the backend. We only support the system linker on AIX and we already hardcode
the path to a number of other binaries. Removing LD substitution will simplify
the upcoming meson PGXS compatibility.
While at it, add a comment explaining why -r is used.
A subsequent commit will remove the determination of LD from configure as
well.
Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
Until now we emitted the cflags to build the CRC objects into
architecture-specific variables. That doesn't make a whole lot of sense to
me - we're never
going to target x86 and arm at the same time, so they don't need to be
separate variables.
It might be better to instead continue to have CFLAGS_SSE42 /
CFLAGS_ARMV8_CRC32C be computed by PGAC_ARMV8_CRC32C_INTRINSICS /
PGAC_SSE42_CRC32_INTRINSICS and then set CFLAGS_CRC based on those. But it
seems unlikely that we'd need other sets of CRC specific flags for those two
architectures at the same time.
This simplifies the upcoming meson PGXS compatibility.
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
Compression specifications are currently used by pg_basebackup and
pg_receivewal, and give the user extended control over the method and
level of compression used. As an effect of this commit, pg_dump's
-Z/--compress is now able to accept more than just an integer, using
the grammar "method[:detail]".
The method can be either "none" or "gzip", and can optionally take a
detail string. If the detail string is only an integer, it defines the
compression level. A comma-separated list of keywords can also be used
if a method allows for more options; the only keyword supported now is
"level".
The change is backward-compatible, hence specifying only an integer
leads to no compression for a level of 0 and gzip compression when the
level is greater than 0.
Most of the code changes are straightforward, as pg_dump was relying on
an integer tracking the compression level to check for gzip or no
compression. These are changed to use a compression specification and
the algorithm stored in it.
As of this change, note that the dump format is not bumped because there
is no need yet to track the compression algorithm in the TOC entries.
Hence, we still rely on the compression level to make the difference
when reading them. This will be mandatory once a new compression method
is added, though.
In order to keep the code simpler when parsing the compression
specification, the code is changed so that pg_dump now fails hard when
gzip is requested via -Z/--compress without gzip support compiled in,
rather than silently falling back to no compression with only a warning.
Like before this commit, archive and custom formats are compressed by
default when the code is compiled with gzip, and left uncompressed
without it.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
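A few hypothetical invocations of the extended grammar:
$ pg_dump -Fc -Z gzip:6 mydb > mydb.dump     # gzip at level 6
$ pg_dump -Fd -Z none -f mydb.dir mydb       # directory format, no compression
$ pg_dump -Fc -Z 5 mydb > mydb.dump          # backward-compatible integer form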
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it. But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that. Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.
In passing, remove a comment that was obsoleted later.
Per report from Duncan Sands. Back-patch to v14 where this code
came in. (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug just ugliness.)
Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
The planner will now add a given PartitionPruneInfo to
PlannedStmt.partPruneInfos instead of directly to the
Append/MergeAppend plan node. What gets set instead in the
latter is an index field which points to the list element
of PlannedStmt.partPruneInfos containing the PartitionPruneInfo
belonging to the plan node.
A later commit will make AcquireExecutorLocks() do the initial
partition pruning to determine a minimal set of partitions to be
locked when validating a plan tree, and it will need to consult the
PartitionPruneInfos referenced therein to do so. It is better for
the PartitionPruneInfos to be accessible directly than to require a
walk of the plan tree to find them, which is easy when they can be
reached by simply iterating over PlannedStmt.partPruneInfos.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
Matviews have been excluded from predicate locking since their
introduction in 3bf3ab8. At that point, there was no concurrent flavor
of REFRESH yet, hence there was no point in having materialized views
track read/write conflicts with concurrent transactions running at the
serializable isolation level, because a matview could only be refreshed
under an access exclusive lock. CONCURRENTLY, on the contrary, allows
reads and writes during a refresh, as it holds a share update
exclusive lock.
Some isolation tests are added to show the effect of the change, with a
combination of one table and a matview based on it, using a mix of
REFRESH CONCURRENTLY and read/write queries.
This could arguably be considered as a bug, but as it is a subtle
behavior change potentially impacting applications no backpatch is
done.
Author: Yugo Nagata
Reviewed-by: Richard Guo, Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/20220726164434.42d4e33911b4b4fcf751c4e7@sraoss.co.jp
Writing "pg_regress --dbname= ..." led to a crash, because
we weren't expecting there to be no database name supplied.
It doesn't seem like a great idea to run regression tests
in whatever is the user's default database; so rather than
supporting this case let's explicitly reject it.
Per report from Xing Guo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
A future commit will move the checkAsUser field from RangeTblEntry
to a new node that, unlike RTEs, will only be created for tables
mentioned in the query but not for the inheritance child relations
added to the query by the planner. So, the checkAsUser value for a
given child relation will have to be obtained by referring to that
of its ancestor mentioned in the query.
In preparation, it seems better to expand the use of RelOptInfo.userid
during planning in place of rte->checkAsUser so that there will be
fewer places to adjust for the above change.
Given that the child-to-ancestor mapping is not available during the
execution of a given "child" ForeignScan node, add a checkAsUser
field to ForeignScan to carry the child relation's RelOptInfo.userid.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGFCs2uq7VRKi7g+FFKbP6Ea_2_HkgZb2HPhUfaAKT3ng@mail.gmail.com
This adds coverage for a few scenarios not checked yet in psql, with
multiple queries combined across output files defined by \o, \g or both
at the same time. The results are saved in the defined output files,
then read back to check what has been written to them.
Author: Daniel Verite
Discussion: https://postgr.es/m/25c2bb5b-9012-40f8-8088-774cb764046d@manitou-mail.org
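A sketch of the kind of interaction now covered (hypothetical file
names):
\o both.out
SELECT 1;            -- result goes to both.out via \o
SELECT 2 \g one.out  -- \g overrides the \o target for this query only
\o                   -- stop redirecting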
This commit moves the code in charge of deparsing the method and detail
strings fed later to parse_compress_specification() to a common routine,
where the backward-compatible case of only an integer being found (N
= 0 => "none", N > 0 => gzip at level N) is handled.
Note that this has a side-effect for pg_basebackup, as we now attempt to
detect "server-" and "client-" before checking for the integer-only
pre-14 grammar, where values like server-N and client-N (without the
follow-up detail string) are now valid rather than failing because of an
unsupported method name. Past grammars are still handled the same way,
but these flavors are now authorized, and would now switch to consider N
= 0 as no compression and N > 0 as gzip with the compression level used
as N, with the caller still controlling if the compression method should
be done server-side, client-side or is unspecified. The documentation
of pg_basebackup is updated to reflect that.
This benefits other code paths that would like to rely on the same logic
as pg_basebackup and pg_receivewal with option values used for
compression specifications, one area discussed lately being pg_dump.
Author: Georgios Kokolatos, Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
The frontend-side routine in charge of building a SCRAM verifier
mentioned that the encoding-related restrictions SASLprep applies to the
password are described at the top of fe-auth-scram.c, but this
information is actually in auth-scram.c.
This is wrong since 8f8b9be, so backpatch all the way down as this is an
important documentation bit.
Spotted while reviewing a different patch.
Backpatch-through: 11
Previously, we'd compress only when the active range of array entries
reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids).
If max_connections is large, the first term could result in not
compressing for a long time, resulting in much wastage of cycles in
hot-standby backends scanning the array to take snapshots. Get rid
of that term, and just bound it to 2 * pArray->numKnownAssignedXids.
That however creates the opposite risk, that we might spend too much
effort compressing. Hence, consider compressing only once every 128
commit records. (This frequency was chosen by benchmarking. While
we only tried one benchmark scenario, the results seem stable over
a fairly wide range of frequencies.)
Also, force compression when processing RecoveryInfo WAL records
(which should be infrequent); the old code could perform compression
then, but would do so only after the same array-range check as for
the transaction-commit path.
Also, opportunistically run compression if the startup process is about
to wait for WAL, though not oftener than once a second. This should
prevent cases where we waste lots of time by leaving the array
not-compressed for long intervals due to low WAL traffic.
Lastly, add a simple check to keep us from uselessly compressing
when the array storage is already compact.
Back-patch, as the performance problem is worse in pre-v14 branches
than in HEAD.
Simon Riggs and Michail Nikolaev, with help from Tom Lane and
Andres Freund.
Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
This is an oversight in commit 7c337b6b5: I apparently didn't think
about the possibility of a SQL function being executed multiple
times within a query. In that case, functions.c's primitive caching
mechanism allows the same utility parse tree to be presented for
execution more than once. We have to tell ProcessUtility to make
a working copy of the parse tree, or bad things happen.
Normally I'd add a regression test, but I think the reported crasher
is dependent on some rather random implementation choices that are
nowhere near functions.c, so its usefulness as a long-lived test
feels questionable. In any case, this fix is clearly correct given
the design choices of 7c337b6b5.
Per bug #17702 from Xin Wen. Thanks to Daniel Gustafsson for
analysis. Back-patch to v14 where the faulty commit came in
(before that, the responsibility for copying scribble-able
utility parse trees lay elsewhere).
Discussion: https://postgr.es/m/17702-ad24fdcdd1e9047a@postgresql.org
The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that
need to be evaluated at the semijoin's RHS, which is wrong because
there could be some in the semijoin's qual condition. However, there
could not be any references further up than that, and within the qual
there is not any way that such a PHV could have gone to null yet, so
we don't really need the PHV and there is no need to avoid making the
RHS-removal optimization. The upshot is that there's no actual bug
in production code, and we ought to just remove this misguided Assert.
While we're here, also drop the JOIN_RIGHT case, which is dead code
because reduce_outer_joins() already got rid of JOIN_RIGHT.
Per bug #17700 from Xin Wen. Uselessness of the JOIN_RIGHT case
pointed out by Richard Guo. Back-patch to v12 where this code
was added.
Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
When it's given as true, return a 0 in the position of the missing
column rather than raising an error.
This is currently unused, but it allows us to reimplement column
permission checking in a subsequent commit. It seems worth breaking
into a separate commit because it affects unrelated code.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFfiai=qBxPDTjaio_ZcaqUKh+FC=prESrB8ogZgFNNNQ@mail.gmail.com
Previously, an idle startup (recovery) process would wake up every 5
seconds to have a chance to poll for promote_trigger_file, even if that
GUC was not configured. That promotion triggering mechanism was
effectively superseded by pg_ctl promote and pg_promote() a long time
ago. There probably aren't many users left and it's very easy to change
to the modern mechanisms, so we agreed to remove the feature.
This is part of a campaign to reduce wakeups on idle systems.
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://postgr.es/m/CANbhV-FsjnzVOQGBpQ589%3DnWuL1Ex0Ykn74Nh1hEjp2usZSR5g%40mail.gmail.com
This provides two new predefined roles: pg_vacuum_all_tables and
pg_analyze_all_tables. Roles which have been granted these roles can
perform vacuum or analyze, respectively, on any or all tables as if they
were a superuser. This removes the need to grant superuser privilege to
roles just so they can perform vacuum and/or analyze.
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
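For example (hypothetical role name):
GRANT pg_vacuum_all_tables TO maintenance_bot;
-- maintenance_bot may now run VACUUM on any table, without owning it
-- or being a superuser.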
Currently a table can only be vacuumed or analyzed by its owner or
a superuser. This can now be extended to any user by means of an
appropriate GRANT.
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
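For example (hypothetical names):
GRANT VACUUM, ANALYZE ON TABLE measurements TO app_maintainer;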
This commit adds a basic set of authentication tests covering the
new keywords added by a54b658 for the HBA and ident files, aka
"include", "include_if_exists" and "include_dir".
This includes checks for all the positive cases originally proposed,
where valid contents are generated for the HBA and ident files without
any errors happening in the server, checking as well the contents of
their respective system views. The error handling will be evaluated
separately (-DEXEC_BACKEND makes that trickier), and what we have here
covers most of the ground I would like to see covered if one manipulates
the tokenization logic of hba.c in the future.
While on it, some coverage is added for files included with '@' for
database or user name lists.
Author: Julien Rouhaud
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
When loading plperl built against Strawberry perl or the msys2 ucrt perl
that have been built with gcc, a binary mismatch has been encountered
which looks like this:
loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)
To cure this we bring the handshake keys into sync by adding
NO_THREAD_SAFE_LOCALE to the defines used to build plperl.
Discussion: https://postgr.es/m/20211005004334.tgjmro4kuachwiuc@alap3.anarazel.de
Discussion: https://postgr.es/m/c2da86a0-2906-744c-923d-16da6047875e@dunslane.net
Backpatch to all live branches.
Two booleans used for timeout tracking were accessed within some SIGALRM
signal handlers, but they were not declared as sig_atomic_t, so mark
them as such. This has no consequence on WIN32 for either tool.
Author: Ranier Vilela
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAEudQArCDQQiPiFR16=yu9k5s2tp4tgEe1U1ZbkW4ofx81AWWQ@mail.gmail.com
Strawberry uses __builtin_expect which Visual C doesn't have. For this
case define it as a noop. Solution taken from vim sources.
Backpatch to all live branches.
Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added to
nodeModifyTable.c code to flush pending inserts to the foreign-table
result relation(s) before completing processing of the ModifyTable node,
but the code failed to take into account the case where the INSERT query
has modifying CTEs, leading to incorrect results.
Also, that commit failed to flush pending inserts before firing BEFORE
ROW triggers so that rows are visible to such triggers.
In that commit we scanned through EState's
es_tuple_routing_result_relations or es_opened_result_relations list to
find the foreign-table result relations to which pending inserts are
flushed, but that would be inefficient in some cases. So to fix, 1) add
a List member to EState to record the insert-pending result relations,
and 2) modify nodeModifyTable.c so that it adds the foreign-table result
relation to the list in ExecInsert() if appropriate, and flushes pending
inserts properly using the list where needed.
While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(),
which was added by that commit.
Back-patch to v14 where that commit appeared.
Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
Peer connections require support for local connections to work, but the
test missed the same check as the other ones in this suite. The
buildfarm does not run the authentication tests on Windows, and, more
surprisingly, the CI with meson was already able to skip it.
Author: Anton A. Melnikov
Discussion: https://postgr.es/m/28b9d685-9590-45b1-fe87-358d61c6950a@inbox.ru
The code has already been assuming in a few places that the initial
recursion nesting depth is 0, and the recent changes in hba.c (mainly
783e8c6) rely on this assumption in more places. The maximum
recursion nesting level is assumed to be 10 for hba.c and GUCs.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20221124090724.n7amf5kpdhx6vb76@jrouhaud
Logical WAL senders display now as follows, gaining a database name:
postgres: walsender USER DATABASE HOST(PORT) STATE
Physical WAL senders show up the same as before:
postgres: walsender USER HOST(PORT) STATE
This information was missing, hence it was not possible to know from ps
if a WAL sender was a logical or a physical one, and on which database
it is connected when it is logical.
Author: Tatsuhiro Nakamori
Reviewed-by: Fujii Masao, Bharath Rupireddy
Discussion: https://postgr.es/m/36a3b137e82e0ea9fe7e4234f03b64a1@oss.nttdata.com
pg_hba.conf and pg_ident.conf gain support for three record keywords:
- "include", to include a file.
- "include_if_exists", to include a file, ignoring it if missing.
- "include_dir", to include a directory of files. These are classified
by name (C locale, mostly) and need to be prefixed by ".conf", hence
following the same rules as GUCs.
This commit relies on the refactoring pieces done in efc9816, ad6c528,
783e8c6 and 1b73d0b, adding a small wrapper to build a list of
TokenizedAuthLines (tokenize_include_file), and the code is shaped to
offer some symmetry with what is done for GUCs with the same options.
pg_hba_file_rules and pg_ident_file_mappings gain a new field called
file_name, to track from which file a record is located, taking
advantage of the addition of rule_number in c591300 to offer an
organized view of the HBA or ident records loaded.
Bump catalog version.
Author: Julien Rouhaud
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
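A hypothetical pg_hba.conf sketch using the new keywords:
include             /etc/postgresql/common_hba.conf
include_if_exists   optional_overrides.conf
include_dir         hba.d   # only files ending in ".conf" are read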
When building hash indexes using the spool method, tuples are added to the
index page in hashkey order. Because of this, we can safely skip
performing the binary search on the existing tuples on the page to find
the location to insert the tuple based on its hashkey value. For this
case, we can just always put the tuple at the end of the item array as the
tuples will always arrive in hashkey order.
Testing has shown that this can improve hash index build speeds by 5-15%
with a unique set of integer values.
Author: Simon Riggs
Reviewed-by: David Rowley
Tested-by: David Zhang, Tomas Vondra
Discussion: https://postgr.es/m/CANbhV-GBc5JoG0AneUGPZZW3o4OK5LjBGeKe_icpC3R1McrZWQ@mail.gmail.com
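For instance, a build along these lines (hypothetical table; assuming the
planner picks the spool-based sorted build for a table of this size)
exercises the faster path:
CREATE TABLE lookup AS SELECT generate_series(1, 1000000) AS id;
CREATE INDEX lookup_id_hash ON lookup USING hash (id);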
The memory context was created before attempting to open the first HBA
or ident file, which would cause it to leak. This had no consequences
for the system views for HBA and ident files, but this would cause
memory leaks in the postmaster on reload if the initial HBA and/or ident
files are missing, which is a valid behavior while the backend is
running.
Oversight in efc9816.
Author: Ted Yu
Discussion: https://postgr.es/m/CALte62xH6ivgiKKzPRJgfekPZC6FKLB3xbnf3=tZmc_gKj78dw@mail.gmail.com
This should have been added in efc9816, but it looks like I found a way
to mess up a patch split a bit. This should have no consequence in
practice, but let's be clean.
Discussion: https://postgr.es/m/Y324HvGKiWxW2yxe@paquier.xyz
The list of TokenizedAuthLines generated when parsing the HBA and
ident files is now stored in a static context called tokenize_context,
where all the parsed tokens are stored. This context is created
when opening the first authentication file of a HBA/ident set (hba_file
or ident_file), and is cleaned up once we are done with all the work
around it through a new routine called free_auth_file(). One call of
open_auth_file() should have one matching call of free_auth_file(); the
creation and deletion of the tokenization context are controlled by the
recursion depth of the tokenization.
Rather than having tokenize_auth_file() return a memory context that
includes all the records, the tokenization logic now creates and deletes
one memory context each time this function is called. This will
simplify recursive calls to this routine for the upcoming inclusion
record logic.
While on it, rename tokenize_inc_file() to tokenize_expand_file() as
this would conflict with the upcoming patch that will add inclusion
records for HBA/ident files. An '@' file has its tokens added to an
existing list.
Reloading HBA/ident configuration in a tight loop shows no leaks, as of
one type of test done (with and without -DEXEC_BACKEND).
Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/Y324HvGKiWxW2yxe@paquier.xyz
Some custom table access methods may have their own tuple format and use
custom executor nodes for their custom scan types. The ability to set a
custom slot
would save them from tuple format conversion. Other users of custom executor
nodes may also benefit.
Discussion: https://postgr.es/m/CAPpHfduJUU6ToecvTyRE_yjxTS80FyPpct4OHaLFk3OEheMTNA@mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Pavel Borisov
We're running out of bits for new permissions. This change doubles the
number of permissions we can accommodate from 16 to 32, so the
forthcoming new ones for vacuum/analyze don't exhaust the pool.
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
Pass VACUUM parameters (VacuumParams state) to vacuum_set_xid_limits()
directly, rather than passing most individual VacuumParams fields as
separate arguments.
Also make vacuum_set_xid_limits() output parameter symbol names match
those used by its vacuumlazy.c caller.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=TE7gW5DgSahDkf0UEZigFGAoHNNN6EvSrdzC=Kn+hrA@mail.gmail.com
We shouldn't ever need to rely on whether HEAP_XMAX_INVALID is set in
t_infomask when considering whether or not an xmax should be deemed
already frozen, since that status flag is just a hint. The only
acceptable representation for an "xmax_already_frozen" raw xmax field is
the transaction ID value zero (also known as InvalidTransactionId).
Adjust code that superficially appeared to rely on HEAP_XMAX_INVALID to
make the rule about xmax_already_frozen clear. Also avoid needlessly
rereading the tuple's raw xmax.
Oversight in bugfix commit d2599ecf. There is no evidence that this
ever led to incorrect behavior, so no backpatch. The worst consequence
of this bug was that VACUUM could hypothetically fail to notice and
report on certain kinds of corruption, which seems fairly benign.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzkh3DMCDRPfhZxj9xCq9v3WmzvmbiCpf1dNKUBPadhCbQ@mail.gmail.com
Examine ParseNamespaceItem flags to detect whether a column name
is unreferenceable for lack of LATERAL, or could be referenced if
a qualified name were used, and give better hints for such cases.
Also, don't phrase the message to imply that there's only one
matching column when there is really more than one.
Many of the regression test output changes are not very interesting,
but just reflect reclassifying the "There is a column ... but it
cannot be referenced from this part of the query" messages as DETAIL
rather than HINT. They are details per our style guide, in the sense
of being factual rather than offering advice; and this change provides
room to offer actual HINTs about what to do.
While here, adjust the fuzzy-name-matching code to be a shade less
impenetrable. It was overloading the meanings of FuzzyAttrMatchState
fields way too much IMO, so splitting them into multiple fields seems
to make it clearer. It's not like we need to shave bytes in that
struct.
Per discussion of bug #17233 from Alexander Korolev.
Discussion: https://postgr.es/m/17233-afb9d806aaa64b17@postgresql.org
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples. While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits. It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.
Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages. Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.
Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.
Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me). Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
Once a logical slot has acquired a catalog_xmin, it doesn't let go of
it, even when invalidated by exceeding max_slot_wal_keep_size. This
means that, from the point the slot is invalidated until it is dropped,
dead catalog tuples are no longer removed by vacuum. This could be
catastrophic if catalog churn is high.
Change the computation of Xmin to ignore invalidated slots,
to prevent dead rows from accumulating.
Backpatch to 13, where slot invalidation appeared.
Author: Sirisha Chamarthi <sirichamarthi22@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAKrAKeUEDeqquN9vwzNeG-CN8wuVsfRYbeOUV9qKO_RHok=j+g@mail.gmail.com
The lwlock wait queue scalability issue fixed in a4adc31f69 was quite hard to
find because of the exponential backoff and because we adjust spins_per_delay
over time within a backend.
To make it easier to find similar issues in the future, add a wait event for
the pg_usleep() in perform_spin_delay(). Showing a wait event while spinning
without sleeping would increase the overhead of spinlocks, which we do not
want.
We may at some later point want to have more granular wait events, but that'd
be a substantial amount of work. This provides at least some insights into
something currently hard to observe.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Discussion: https://postgr.es/m/20221120204310.xywrhyxyytsajuuq@awork3.anarazel.de
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL. Backpatch
the doc changes to all supported versions.
Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function. Add some commentary about that
while it's fresh in mind. Also remove some unused macros in
wparser_def.c.
While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.
Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
At least on linux, set_ps_display() breaks /proc/$pid/environ. The sanitizer's
helper library uses /proc/$pid/environ to implement getenv(), as it wants to
work independent of libc. When just using undefined and alignment sanitizers,
the sanitizer library is only initialized when the first error occurs, by
which time we've often already called set_ps_display(), preventing the
sanitizer libraries from seeing the options.
We can work around that by defining __ubsan_default_options, a weak symbol
libsanitizer uses to get defaults from the application, and return
getenv("UBSAN_OPTIONS"). But only if main already was reached, so that we
don't end up relying on a not-yet-working getenv().
As it's just a function that won't get called when not running a sanitizer, it
doesn't seem necessary to make compilation of the function conditional.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
The postmaster normally sends SIGQUIT to force-terminate its
child processes after a child crash or immediate-stop request.
If that doesn't result in child exit within a few seconds,
we follow it up with SIGKILL. This patch provides GUC flags
that allow either of these signals to be replaced with SIGABRT.
On typically-configured Unix systems, that will result in a
core dump being produced for each such child. This can be
useful for debugging problems, although it's not something you'd
want to have on in production due to the risk of disk space
bloat from lots of core files.
The old postmaster -T switch, which sent SIGSTOP in place of
SIGQUIT, is changed to be the same as send_abort_for_crash.
As far as I can tell from the code comments, the intent of
that switch was just to block things for long enough to force
core dumps manually, which seems like an unnecessary extra step.
(Maybe at the time, there was no way to get most kernels to
produce core files with per-PID names, requiring manual core
file renaming after each one. But now it's surely the hard way.)
I also took the opportunity to remove the old postmaster -n
(skip shmem reinit) switch, which hasn't actually done anything
in decades, though the documentation still claimed it did.
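A sketch of turning the new behavior on, assuming the GUCs are named
send_abort_for_crash and send_abort_for_kill:
ALTER SYSTEM SET send_abort_for_crash = on;  -- assumed GUC name
ALTER SYSTEM SET send_abort_for_kill = on;   -- assumed GUC name
SELECT pg_reload_conf();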
Discussion: https://postgr.es/m/2251016.1668797294@sss.pgh.pa.us
This switch impacts 9 patterns related to a SQL-mandated special syntax
for function calls:
- LOCALTIME [ ( typmod ) ]
- LOCALTIMESTAMP [ ( typmod ) ]
- CURRENT_TIME [ ( typmod ) ]
- CURRENT_TIMESTAMP [ ( typmod ) ]
- CURRENT_DATE
Five new entries are added to pg_proc to compensate for the removal of
SQLValueFunction, providing backward compatibility and making this
change transparent to the end user (for example for the attribute
name generated when a keyword is specified in a SELECT or in a FROM clause
without an alias, or when specifying something other than an Iconst to
the parser).
The parser included a set of checks coming from the files in charge of
holding the C functions used for the SQLValueFunction calls (as of
transformSQLValueFunction()), which are now moved within each function's
execution path, so this reduces the dependencies between the execution
and the parsing steps. As of this change, all the SQL keywords use the
same paths for their work, relying only on COERCE_SQL_SYNTAX. Like
fb32748, no performance difference has been noticed, while the perf
profiles get reduced with ExecEvalSQLValueFunction() gone.
Bump catalog version.
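The user-visible syntax is unchanged; for example, the following should
parse and deparse exactly as before:
SELECT CURRENT_TIMESTAMP(0), LOCALTIMESTAMP(3), CURRENT_TIME,
  LOCALTIME, CURRENT_DATE;
-- without aliases, the output columns keep the keyword-derived names,
-- e.g. current_timestamp, localtimestamp, current_date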
Reviewed-by: Corey Huinker, Ted Yu
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
As per one of the CI reports, there is an assertion failure which
indicates that we were trying to use an unenforced xmin horizon for
decoding snapshots. Though we couldn't figure out the reason for the
assertion failure, these checks will help us find the cause if the
problem happens again in the future.
Author: Amit Kapila based on suggestions by Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.com
Until now LWLockDequeueSelf() sequentially searched the list of waiters to see
if the current proc is still on the list of waiters, or has already been
removed. In extreme workloads, where the wait lists are very long, this leads
to quadratic behavior: #backends iterating over a list #backends
long. Additionally, the likelihood of needing to call LWLockDequeueSelf() in
the first place also increases with the increased length of the wait queue, as
it becomes more likely that a lock is released while waiting for the wait list
lock, which is held for longer during lock release.
Due to the exponential back-off in perform_spin_delay() this is surprisingly
hard to detect. We should make that easier, e.g. by adding a wait event around
the pg_usleep() - but that's a separate patch.
The fix is simple - track whether a proc is currently waiting in the wait list
or already removed but waiting to be woken up in PGPROC->lwWaiting.
In some workloads with a lot of clients contending for a small number of
lwlocks (e.g. WALWriteLock), the fix can substantially increase throughput.
As the quadratic behavior arguably is a bug, we might want to decide to
backpatch this fix in the future.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de
Discussion: https://postgr.es/m/CALj2ACXktNbG=K8Xi7PSqbofTZozavhaxjatVc14iYaLu4Maag@mail.gmail.com
As the list of shared relations is fixed, we can just dispatch based on
IsSharedRelation(), instead of first trying to look up stats for a non-shared
rel and falling back to shared stats.
Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/8c1851a2-a98e-e1bc-7729-37b0b95f66ec@gmail.com
Commit 3d14e171e dropped regress_roleoption_donor twice and
regress_roleoption_protagonist not at all. Leaving roles behind
after "make installcheck" is unfriendly in its own right, plus
it causes repeated runs of "make installcheck" to fail.
Commit c94959d41 fixed DROP OPERATOR to reset oprcom/oprnegate links
to the dropped operator; but it missed updating this old comment that
claimed we allow such links to dangle.
Currently there is a race condition where, if concurrent TAP tests both
test that they can open a port, they will assume that it is free and use
it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.
Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR, or, if that is not set, a subdirectory of the top
build directory as set by meson or Makefile.global, or its own
tmp_check directory.
The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.
Per complaint from Andres Freund
This will be backpatched in due course after some testing.
Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
This commit changes six SQL keywords to use COERCE_SQL_SYNTAX rather
than relying on SQLValueFunction:
- CURRENT_ROLE
- CURRENT_USER
- USER
- SESSION_USER
- CURRENT_CATALOG
- CURRENT_SCHEMA
Among the six, "user", "current_role" and "current_catalog" require
specific SQL functions to allow ruleutils.c to map them back to the SQL
keywords they correspond to when using COERCE_SQL_SYNTAX. Having
pg_proc.proname match with the keyword ensures that the compatibility
remains the same when projecting any of these keywords in a FROM clause
to an attribute name when an alias is not specified. This is covered by
the tests added in 2e0d80c, making sure that a correct mapping happens
with each SQL keyword. The three others (current_schema, session_user
and current_user) already have pg_proc entries for this job, so this
brings more consistency between the way such keywords are treated in the
parser, the executor and ruleutils.c.
SQLValueFunction is reduced to half its contents after this change,
simplifying its logic a bit as there is no need to enforce a C collation
anymore for the entries returning a name as a result. I have run a few
performance tests, with a million-ish calls to these keywords, without
seeing a difference in run-time or in perf profiles
(ExecEvalSQLValueFunction() is removed from the profiles). The
remaining SQLValueFunctions are now related to timestamps and dates.
Bump catalog version.
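A quick illustration of the keyword-to-attribute mapping this
preserves:
SELECT current_role, user, current_catalog;
-- output column names still match the keywords:
-- current_role, user, current_catalog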
Reviewed-by: Corey Huinker
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next,
because that does "the right thing" with SHMQueueInsertBefore(). While that
largely works, it's certainly not correct, and it's unnecessary - we can just
use a SHM_QUEUE* to point to the insertion point.
Noticed when testing a 32-bit build of postgres with the undefined behavior
sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't
sufficiently aligned (required since 46d6e5f567, ensured indirectly, via
ShmemAllocRaw() guaranteeing cacheline alignment).
For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently
we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but
that's a larger change that we don't want to backpatch.
Backpatch to all supported versions - it's useful to be able to run postgres
under UBSan.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
The test output varies when debug_discard_caches is enabled,
because that causes extra executions of recomputeNamespacePath.
Maybe putting a hook in that was a bad idea, but as a stopgap,
just turn off debug_discard_caches in this test.
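The stopgap amounts to something like this at the top of the test's SQL
script (a sketch; debug_discard_caches is an integer GUC, and 0 disables
the behavior):
SET debug_discard_caches = 0;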
Per buildfarm (now that we have debug_discard_caches coverage
again). Back-patch to v15 where this module was added.
Discussion: https://postgr.es/m/2267406.1668804934@sss.pgh.pa.us
basics.source is supposed to be pretty closely in step with
the examples in chapter 2 of the tutorial, but I forgot to
update it in commit f05a5e000. Fix that, and adjust a couple
of other discrepancies that had crept in over time.
(I notice that advanced.source is nowhere near being in sync
with chapter 3, but I lack the ambition to do something
about that right now.)
Similar to how the INHERIT option controls whether or not the
permissions of the granted role are automatically available to the
grantee, the new SET permission controls whether or not the grantee
may use the SET ROLE command to assume the privileges of the granted
role.
In addition, the new SET permission controls whether or not it
is possible to transfer ownership of objects to the target role
or to create new objects owned by the target role using commands
such as CREATE DATABASE .. OWNER. We could alternatively have made
this controlled by the INHERIT option, or allow it when either
option is given. An advantage of this approach is that if you
are granted a predefined role with INHERIT TRUE, SET FALSE, you
can't go and create objects owned by that role.
The underlying theory here is that the ability to create objects
as a target role is not a privilege per se, and thus does not
depend on whether you inherit the target role's privileges. However,
it's surely something you could do anyway if you could SET ROLE
to the target role, and thus making it contingent on whether you
have that ability is reasonable.
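A sketch of the resulting behavior (role names invented for
illustration):
CREATE ROLE alice LOGIN;
GRANT pg_read_all_settings TO alice WITH INHERIT TRUE, SET FALSE;
-- alice inherits the role's privileges automatically, but this now
-- fails when attempted by alice:
SET ROLE pg_read_all_settings;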
Design review by Nathan Bossart, Wolfgang Walther, Jeff Davis,
Peter Eisentraut, and Stephen Frost.
Discussion: http://postgr.es/m/CA+Tgmob+zDSRS6JXYrgq0NWdzCXuTNzT5eK54Dn2hhgt17nm8A@mail.gmail.com
eqjoinsel() currently makes use of MCV stats only when we have such
stats for both sides of the clause. As coded, though, it would
fetch those stats even when they're present for just one side.
This can be a bit expensive with high statistics targets, leading
to wasted effort in common cases such as joining a unique column
to a non-unique column. So it seems worth the trouble to do a quick
pre-check to confirm that both sides have MCVs before fetching either.
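As a sketch of the common case this helps, joining a unique column
(which has no MCV list) to a non-unique column with a high statistics
target no longer fetches the latter's MCV list while estimating the
join's selectivity:
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (parent_id int);
ALTER TABLE child ALTER COLUMN parent_id SET STATISTICS 10000;
ANALYZE parent, child;
EXPLAIN SELECT * FROM parent JOIN child ON parent.id = child.parent_id;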
Also, tweak the API spec for get_attstatsslot() to document the
method we're using here.
David Geier, Tomas Vondra, Tom Lane
Discussion: https://postgr.es/m/b9846ca0-5f1c-9b26-5881-aad3f42b07f0@gmail.com
The object_address test file switches to psql unaligned output for some
tests to avoid huge diffs from future changes. But this is also useful
for the other large test in that file, so apply it there as well. This also
makes verifying the null and whitespace behavior easier.
Version strings with unequal numbers of parts were being compared
incorrectly. We cure this by treating a missing part in the shorter
version as 0.
Per complaint from Jehan-Guillaume de Rorthais, but the fix is mine, not
his.
Discussion: https://postgr.es/m/20220628225325.53d97b8d@karst
Backpatch to release 14 where this code was introduced.
This commit improves the handling of the following clauses:
- Addition of "CREATE" for ALTER DEFAULT PRIVILEGES .. GRANT/REVOKE.
- Addition of GRANT|ADMIN|INHERIT OPTION FOR for REVOKE, with some
completion for roles, INHERIT being added recently by e3ce2de.
- Addition of GRANT WITH ADMIN|INHERIT.
The list of privilege options common to GRANT and REVOKE is refactored
to avoid its duplication.
Author: Shi Yu
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Peter Smith
Discussion: https://postgr.es/m/OSZPR01MB6310FCE8609185A56344EED2FD559@OSZPR01MB6310.jpnprd01.prod.outlook.com
For now the task has been set to be manually triggered, as we are already
limited by the amount of CI time available for Windows, particularly on cfbot.
Author: Melih Mutlu <m.melihmutlu@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAGPVpCSKS9E0An4=e7ZDnme+y=WOcQFJYJegKO8kE9=gh8NJKQ@mail.gmail.com
Standardize on the name snapshotConflictHorizon for all XID fields from
WAL records that generate recovery conflicts when in hot standby mode.
This supersedes the previous latestRemovedXid naming convention.
The new naming convention places emphasis on how the values are actually
used by REDO routines. How the values are generated during original
execution (details of which vary by record type) is deemphasized. Users
of tools like pg_waldump can now grep for snapshotConflictHorizon to see
all potential sources of recovery conflicts in a standardized way,
without necessarily having to consider which specific record types might
be involved.
Also bring a couple of WAL record types that didn't follow any kind of
naming convention into line. These are heapam's VISIBLE record type and
SP-GiST's VACUUM_REDIRECT record type. Now every WAL record whose REDO
routine calls ResolveRecoveryConflictWithSnapshot() passes through the
snapshotConflictHorizon field from its WAL record. This is follow-up
work to the refactoring from commit 9e540599 that made FREEZE_PAGE WAL
records use a standard snapshotConflictHorizon style XID cutoff.
No bump in XLOG_PAGE_MAGIC, since the underlying format of affected WAL
records doesn't change.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wzm2CQUmViUq7Opgk=McVREHSOorYaAjR1ZpLYkRN7_dPw@mail.gmail.com
Before, receiving a NegotiateProtocolVersion message would result in a
confusing error message like
expected authentication request from server, but received v
This adds proper handling of this protocol message and produces an
on-topic error message from it.
Reviewed-by: Jacob Champion <jchampion@timescale.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f9c7862f-b864-8ef7-a861-c4638c83e209%40enterprisedb.com
After sending a startup message, libpq expects either an error
response ('E') or an authentication request ('R'). Before processing
the message, it ensures it has read enough bytes to correspond to the
length specified in the message. However, when processing the 'R'
message, if an EOF status is returned it loops back waiting for more
input, even though we already checked that we have enough input. In
this particular case, this is probably not reachable anyway, because
other code ensures we have enough bytes for an authentication request
message, but the code is wrong and misleading. In the more general
case, processing a faulty message could result in an EOF status, which
would then result in an infinite loop waiting for the end of a message
that will never come. The correction is to make this an error.
Reported-by: Jacob Champion <jchampion@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/f9c7862f-b864-8ef7-a861-c4638c83e209@enterprisedb.com
This comment references a struct that disappeared before MERGE was
merged ... and ExecDelete is not called by the committed MERGE anyway.
Revert to the original wording.
Backpatch to 15
As introduced in 006b69f, the order of the headers was incorrect.
However, it happens that lwlock.h can just be dropped from the list, so
let's be clean and remove it, fixing the order of the listed headers.
Commit 1cc29fe7c, which taught EXPLAIN to print PARAM_EXEC Params as
the referenced expressions, included some checks to prevent matching
Params found in SubPlans or InitPlans to NestLoopParams of upper query
levels. At the time, this seemed possibly necessary to avoid false
matches because of the planner's habit of re-using the same PARAM_EXEC
slot in multiple places in a plan. Furthermore, in the absence of
LATERAL no such reference could be valid anyway. But it's possible
now that we have LATERAL, and in the wake of 46c508fbc and 1db5667ba
I believe the false-match hazard is gone. Hence, remove the
in_same_plan_level checks. As shown in the regression test changes,
this provides a useful improvement in readability for EXPLAIN of
LATERAL-using subplans.
Richard Guo, reviewed by Greg Stark and myself
Discussion: https://postgr.es/m/CAMbWs4-YSOcQXAagJetP95cAeZPqzOy5kM5yijG0PVW5ztRb4w@mail.gmail.com
Commit 05a7be93 changed the timing of the first reply sent by a
walreceiver, which caused a few TAP tests that call wait_for_catchup()
when they haven't actually streamed anything yet to wait ~10 seconds
(wal_receiver_status_interval).
Before commit 05a7be93 the initial reply was sent after 100ms, but
there's no reason not to send it immediately as a slight improvement.
Do the same for HS feedback for consistency.
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/742545.1668377284%40sss.pgh.pa.us
Implement a data structure that is a List of Bitmapsets, which is
essentially a 2-D boolean array except that the rows need not all
be the same width. Operations such as union and intersection are
meaningful for these, just as they are for Bitmapsets. Eventually
we might build many of the same operations that we have written for
Bitmapsets, but for the first use-case we just need a few.
That first use-case is for antijoin detection: reduce_outer_joins
needs to find the set of Vars that are certain to be non-null in a
successfully joined (not null-extended) left join row, and also
find the set of Vars subject to higher-level IS NULL constraints,
and intersect them. We had been doing this by making Lists of
the Var nodes and then using list_intersect, which works but is
pretty inefficient compared to a bitmapset-like intersection.
Potentially it's O(N^2) if there are a lot of Vars involved,
which fortunately there generally aren't; still it's not great.
Moreover, that method requires the Vars of interest to be exactly
equal() in the join condition and the upper IS NULL condition,
which is problematic for my WIP patch that labels Vars according
to which outer joins have possibly nulled them.
Discussion: https://postgr.es/m/892228.1668437838@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAMbWs4-mvPPCJ1W6iK6dD5HiNwoJdi6mZp=-7mE8N9Sh+cd0tQ@mail.gmail.com
We can re-use the clusters set up for this test script's first test,
instead of generating new ones. On my machine this is good for
about a 20% reduction in this script's runtime, from ~6.5 sec to
~5.2 sec.
This idea could be taken further, but it'd require a much more invasive
patch. These cases are easy because the Perl variable names were
already being re-used.
Anton A. Melnikov
Discussion: https://postgr.es/m/eb7aa992-c2d7-6ce7-4942-0c784231a362@inbox.ru
Two locations working on pg_ts_config_map are switched from
CatalogTupleInsert() to a multi-insert approach with tuple slots:
- ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING when inserting new
entries. The number of entries to insert is known in advance, as is the
number of slots needed. Note that CatalogTupleInsertWithInfo() is now
used for the entry updates.
- CREATE TEXT SEARCH CONFIGURATION, where up to ~20-ish records could be
inserted at once. The number of slots is not known in advance, hence
slot initialization is delayed until a tuple is stored in it.
Like all the changes of this kind (1ff4161, 63110c6 or e3931d01), an
insert batch is capped at 64kB.
Author: Michael Paquier, Ranier Vilela
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/Y3M5bovrkTQbAO4W@paquier.xyz
This allows inserting all the enum values defined for a given type
into pg_enum at once, reducing the WAL produced by roughly 10%. pg_enum's
indexes are now opened and closed once rather than N times. The number
of items to insert is known in advance, making this change
straightforward; this happens on a CREATE TYPE .. AS ENUM.
The amount of data inserted is capped at 64kB for each insert batch.
This is similar to commits 63110c6 and e3931d01, that worked on
different catalogs.
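For example, a statement like the following now writes its three
pg_enum rows in a single batch rather than through three separate
CatalogTupleInsert() calls:
CREATE TYPE mood AS ENUM ('sad', 'neutral', 'happy');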
Reported-by: Ranier Vilela
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Ranier Vilela
Discussion: https://postgr.es/m/Y3M5bovrkTQbAO4W@paquier.xyz
This commit improves two code paths to open and close indexes a
minimum amount of times when doing a series of catalog updates or
inserts. CatalogTupleInsert() is costly when using it for multiple
inserts or updates compared to CatalogTupleInsertWithInfo(), as it would
need to open and close the indexes of the catalog worked on each time an
operation is done.
This commit updates the following places:
- REINDEX CONCURRENTLY when copying statistics from one index relation
to the other. Multi-INSERTs are avoided here, as this would begin to
show benefits only for indexes with multiple expressions, which may not
be the most common pattern. This change is noticeable in profiles with
indexes having many expressions, and it would improve any callers of
CopyStatistics().
- Update of statistics on ANALYZE, that mixes inserts and updates.
In each case, the catalog indexes are opened only if at least one
insertion and/or update is required, to minimize the cost of the
operation. As in the previous coding, no indexes are opened unless at
least one insert or update of pg_statistic is actually needed.
Author: Ranier Vilela
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/CAEudQAqh0F9y6Di_Wc8xW4zkWm_5SDd-nRfVsCn=h0Nm1C_mrg@mail.gmail.com
This commit introduces a basic facility to test SLRUs, in terms of
initialization, page reads, writes, flushes, truncation and deletions,
using SQL wrappers around the APIs of slru.c. This should be easily
extensible at will, and it can be used as a starting point for someone
willing to implement an external module that makes use of SLRUs (LWLock
tranche registering and SLRU initialization particularly).
As this requires a loaded library, the tests use a custom configuration
file and are disabled under installcheck.
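Usage would look roughly like this (the wrapper names below are
illustrative, not necessarily the module's actual exported functions):
CREATE EXTENSION test_slru;
SELECT test_slru_page_write(1, 'payload');  -- illustrative names
SELECT test_slru_page_read(1);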
Author: Aleksander Alekseev, Michael Paquier
Reviewed-by: Pavel Borisov, Daniel Gustafsson, Noah Misch, Maxim Orlov
Discussion: https://postgr.es/m/CAJ7c6TOFoWcHOW4BVe3BG_uikCrO9B91ayx9d6rh5JZr_tPESg@mail.gmail.com
Make heapam WAL records that describe freezing performed by VACUUM more
space efficient by storing each distinct "freeze plan" once, alongside
an array of associated page offset numbers (one per freeze plan). The
freeze plans required for most heap pages tend to naturally have a great
deal of redundancy, so this technique is very effective in practice. It
often leads to freeze WAL records that are less than 20% of the size of
equivalent WAL records generated using the previous approach.
The freeze plan concept was introduced by commit 3b97e6823b, which fixed
bugs in VACUUM's handling of MultiXacts. We retain the concept of
freeze plans, but go back to using page offset number arrays. There is
no loss of generality here because deduplication is an additive process
that gets applied mechanically when FREEZE_PAGE WAL records are built.
More than anything else, freeze plan deduplication is an optimization
that reduces the marginal cost of freezing additional tuples on pages
that will need to have at least one or two tuples frozen in any case.
Ongoing work that adds page-level freezing to VACUUM will take full
advantage of the improved cost profile through batching.
Also refactor some of the details surrounding recovery conflicts needed
to REDO freeze records in passing: make original execution responsible
for generating a standard latestRemovedXid cutoff, rather than working
backwards to get the same cutoff in the REDO routine. Bugfix commit
66fbcb0d2e did it the other way around, which is equivalent but obscures
what's going on.
Also rename the cutoff field in the WAL record/struct, from cutoff_xid
to latestRemovedXid, to match similar WAL records.
Processing of conflicts by REDO routines is already completely uniform,
so tools like pg_waldump should present the information driving the
process uniformly. There are two remaining WAL record types that still
don't quite follow this convention (heapam's VISIBLE record type and
SP-GiST's VACUUM_REDIRECT record type). They can be brought into line
by later work that totally standardizes how the cutoffs are presented.
Bump XLOG_PAGE_MAGIC.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-By: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CAH2-Wz=XytErMnb8FAyFd+OQEbiipB0Q2FmFdXrggPL4VBnRYQ@mail.gmail.com
Some callers didn't check the return value of pclose() or
ClosePipeStream() correctly. Either they didn't check it at all or
they treated it like the return of fclose().
The correct way is to first check whether the return value is -1, and
then report errno, and then check the return value like a result from
system(), for which we already have wait_result_to_str() to make it
simpler. To make this more compact, expand wait_result_to_str() to
also handle -1 explicitly.
Reviewed-by: Ankit Kumar Pandey <itsankitkp@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/8cd9fb02-bc26-65f1-a809-b1cb360eef73@enterprisedb.com
This adds a new psql command \bind that sets query parameters and
causes the next query to be sent using the extended query protocol.
Example:
SELECT $1, $2 \bind 'foo' 'bar' \g
This may be useful for psql scripting, but one of the main purposes is
also to be able to test various aspects of the extended query protocol
from psql and to write tests more easily.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/e8dd1cd5-0e04-3598-0518-a605159fe314@enterprisedb.com
libpq now contains a mix of error message strings that end with
newlines and don't end with newlines, due to some newer code paths
with new ways of passing errors around. This leads to confusion and
mistakes both during development and translation.
This adds new functions libpq_append_error() and
libpq_append_conn_error() that encapsulate common code paths for
producing error message strings. Notably, these functions append the
newline, so that the string appearing in the code does not end with a
newline. This makes (almost) all error message strings in libpq
uniform in this regard (and also consistent with how we handle it
outside of libpq code). (There are a few exceptions that are
difficult to fit into this scheme, but they are only a few.)
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/7c0232ef-7b44-68db-599d-b327d0640a77@enterprisedb.com
Setting archive_library and archive_command at the same time is now an
error. Before, archive_library would take precedence over
archive_command.
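For example (a sketch; previously archive_library would silently win):
ALTER SYSTEM SET archive_command = 'cp %p /mnt/server/archivedir/%f';
ALTER SYSTEM SET archive_library = 'basic_archive';
SELECT pg_reload_conf();
-- archiving now fails with an error about both parameters being set,
-- instead of quietly preferring archive_library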
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/20220914222736.GA3042279%40nathanxps13
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a
cleanup lock on the new bucket page after acquiring an exclusive lock on
it and raising a PANIC error on failure. However, it is quite possible
that the checkpointer can acquire a pin on the same page before acquiring a
lock on it, and then the replay will lead to an error. So instead, directly
acquire the cleanup lock on the new bucket page during
XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation.
Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de
The parsing of the authentication files for HBA and ident entries
happens in two phases:
- Tokenization of the files, creating a list of TokenizedAuthLines.
- Validation of the HBA and ident entries, building a set of HbaLines or
IdentLines.
The second phase, doing the validation, already provides some error
context about the configuration file and the line where a problem
happens, but there is no such information in the first phase when
tokenizing the files. This commit adds an ErrorContextCallback in
tokenize_auth_file(), with a context made of the line number and the
configuration file name involved in a problem. This is useful for files
included in an HBA file for user and database lists, and it will become
much more handy to track problems for files included via a potential
@include[_dir,_if_exists].
The error context is registered so that the full chain of events is
reported with cascaded inclusions, for example when tokenize_auth_file()
recurses over itself on new files, displaying one context line for each
file gone through while tokenizing.
Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/Y2xUBJ+S+Z0zbxRW@paquier.xyz
This adds a check on the recursion depth when including authentication
configuration files, something that has never been done when processing
'@' files for database and user name lists in pg_hba.conf. On HEAD,
this was leading to a rather confusing error, such as:
FATAL: exceeded maxAllocatedDescs (NN) while trying to open file "/path/blah.conf"
This refactors the code so that the error reported is now the following,
which is the same as for GUCs:
FATAL: could not open file "/path/blah.conf": maximum nesting depth exceeded
This slightly reduces the verbosity of the error message used for files
included in user and database lists, reporting only the file name of
what's failing to load, without mentioning the relative or absolute path
specified after '@' in a HBA file. The absolute path is built upon what
'@' defines anyway, so there is no actual loss of information. This
makes the future inclusion logic much simpler. A follow-up patch will
add an error context to be able to track on which line of which file the
inclusion is failing, to close the loop, providing all the information
needed to know the full chain of events.
This logic has been extracted from a larger patch written by Julien,
rewritten by me to have a unique code path calling AllocateFile() on
authentication files, and is useful on its own. This new interface
will be used later for authentication files included with
@include[_dir,_if_exists], in a follow-up patch.
Author: Michael Paquier, Julien Rouhaud
Discussion: https://www.postgresql.org/message-id/Y2xUBJ+S+Z0zbxRW@paquier.xyz
Add a NodeTag field to struct Bitmapset. This is free because of
alignment considerations on 64-bit hardware. While it adds some
space on 32-bit machines, we aren't optimizing for that case anymore.
The advantage is that data structures such as Lists of Bitmapsets
are now first-class objects to the Node infrastructure, and don't
require special-case code to handle.
This patch includes removal of one such special case, in indxpath.c:
bms_equal_any() can now be replaced by list_member(). There may be
more existing code that could be simplified, but I didn't look very
hard. We also get to drop the read_write_ignore annotations on a
couple of RelOptInfo fields.
The outfuncs/readfuncs support is arranged so that nothing changes
in the string representation of a Bitmapset field; therefore, this
doesn't need a catversion bump.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/109089.1668197158@sss.pgh.pa.us
The current code looks for the sample file in the source directory, but
it seems better to test against the installed sample file.
Backpatch to release 15 where the test was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Currently this only allows for one argument, which must be present, and
always returns a single string. With this change the following now all
work:
$all_config = $node->config_data;
%config_map = ($node->config_data);
$incdir = $node->config_data('--include-dir');
($incdir, $sharedir) = $node->config_data(
qw(--include-dir --share-dir));
Backpatch to release 15 where this was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Instead of dozens of mostly-duplicate pg_foo_aclcheck() functions,
write one common function object_aclcheck() that can handle almost all
of them. We already have all the information we need, such as which
system catalog corresponds to which catalog table and which column is
the ACL column.
There are a few pg_foo_aclcheck() functions that don't work via the
generic function and have special APIs, so those stay as is.
I also changed most pg_foo_aclmask() functions to static functions,
since they are not used outside of aclchk.c.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com
Instead of dozens of mostly-duplicate pg_foo_ownercheck() functions,
write one common function object_ownercheck() that can handle almost
all of them. We already have all the information we need, such as
which system catalog corresponds to which catalog table and which
column is the owner column.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com