No GUCs that use NO_SHOW_ALL are reported in pg_show_all_settings(),
hence trying to check combinations of flags related to it is pointless.
These queries have been introduced by d10e41d, so backpatch down to 15
to keep all the branches consistent. Equivalent checks based on
NO_SHOW_ALL could be added in check_GUC_init() when a GUC is initially
loaded, but this can be done only on HEAD.
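For instance, assuming is_superuser is still flagged NO_SHOW_ALL, the
difference is visible like this:
SELECT current_setting('is_superuser');   -- reports a value
SELECT count(*) FROM pg_show_all_settings()
  WHERE name = 'is_superuser';            -- always returns 0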
Author: Nitin Jadhav
Discussion: https://postgr.es/m/CAMm1aWaYe0muu3ABo7iSAgK+OWDS9yNe8GGRYnCyeEpScYKa+g@mail.gmail.com
Backpatch-through: 15
This test has been added as of 857ee8e, which introduced the SQL
function txid_status(), with the purpose of checking that a transaction
ID still in-progress during a crash is correctly marked as aborted after
recovery finishes.
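The test's sequence was roughly the following, sketched here with a
placeholder for the XID:
BEGIN;
SELECT txid_current();      -- remember this XID, keep the xact open
-- crash and restart the server, then after recovery:
SELECT txid_status($XID);   -- expected to report 'aborted'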
This test is unstable, and some configuration scenarios make it easier
to reproduce (wal_level=minimal, wal_compression=on), because the WAL
holding the information about the in-progress transaction ID may not
have made it to disk yet; hence a post-crash recovery may cause the same
XID to be reused, triggering a test failure.
We have discussed a few approaches, like making this function force a
WAL flush to make it reliable across crashes, but we don't want to pay a
performance penalty in such scenarios either.  The test could have
been tweaked to enforce a checkpoint but that actually breaks the
promise of the test to rely on a stable result of txid_status() after
a crash.
This issue has been reported a few times across the past years, with an
original report from Kyotaro Horiguchi. The buildfarm machines tanager,
hachi and gokiburi enable wal_compression, and fail on this test
periodically.
Discussion: https://postgr.es/m/3163112.1674762209@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210305.115011.558061052471425531.horikyota.ntt@gmail.com
Backpatch-through: 11
This fixes a bug that, under some circumstances, would cause MERGE to
fail to properly recompute expressions for GENERATED STORED columns.
Formerly, ExecInitModifyTable() did not call ExecInitStoredGenerated()
for a MERGE command, which meant that the generated expressions
information was not computed until later, when the first merge action
was executed. However, if the first merge action to execute was an
UPDATE, then ExecInitStoredGenerated() could decide to skip some
generated columns, if the columns on which they depended were not
updated, which was a problem if the MERGE also contained an INSERT
action, for which no generated columns should be skipped.
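A hypothetical reproducer of that shape (names made up):
CREATE TABLE t (a int, b int,
                g int GENERATED ALWAYS AS (a * 10) STORED);
INSERT INTO t (a, b) VALUES (1, 0);
MERGE INTO t USING (VALUES (1), (2)) AS s(a) ON t.a = s.a
  WHEN MATCHED THEN UPDATE SET b = t.b + 1              -- leaves a alone
  WHEN NOT MATCHED THEN INSERT (a, b) VALUES (s.a, 0);  -- must compute g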
So fix by having ExecInitModifyTable() call ExecInitStoredGenerated()
for MERGE, and assume that it isn't safe to skip any generated columns
in a MERGE. Possibly that could be relaxed, by allowing some generated
columns to be skipped for a MERGE without an INSERT action, but it's
not clear that it's worth the effort.
Noticed while investigating bug #17759. Back-patch to v15, where MERGE
was added.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/17759-e76d9bece1b5421c%40postgresql.org
Discussion: https://postgr.es/m/CAEZATCXb_ezoMCcL0tzKwRGA1x0oeE%3DawTaysRfTPq%2B3wNJn8g%40mail.gmail.com
The drop database command waits for the logical replication sync worker to
accept ProcSignalBarrier and the worker's slot creation waits for the drop
database to finish, which leads to a deadlock. This happens because the
tablesync worker holds interrupts while creating a slot.
We prevent cancel/die interrupts while creating a slot in the table sync
worker because it is possible that before the server finishes this
command, a concurrent drop subscription happens which would complete
without removing this slot and that leads to the slot existing until the
end of walsender. However, the slot will eventually get dropped at the
walsender exit time, so there is no danger of a dangling slot.
This patch reallows cancel/die interrupts while creating a slot and
modifies the test to wait for slots to become zero to prevent finding an
ephemeral slot.
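The condition the test now polls for is essentially this sketch:
SELECT count(*) = 0 FROM pg_replication_slots;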
The reported hang doesn't happen in PG14 as the drop database starts to
wait for ProcSignalBarrier with PG15 (commits 4eb2176318 and e2f65f4255)
but it is good to backpatch this till PG14 as it is not a good idea to
prevent interrupts during a network call that could block indefinitely.
Reported-by: Lakshmi Narayanan Sreethar
Diagnosed-by: Andres Freund
Author: Hou Zhijie
Reviewed-by: Vignesh C, Amit Kapila
Backpatch-through: 14, where it was introduced in commit 6b67d72b60
Discussion: https://postgr.es/m/CA+kvmZELXQ4ZD3U=XCXuG3KvFgkuPoN1QrEj8c-rMRodrLOnsg@mail.gmail.com
When libpqrcv_connect (also known as walrcv_connect()) failed, it leaked the
libpq connection. In most paths that's fairly harmless, as the calling process
will exit soon after. But e.g. CREATE SUBSCRIPTION could lead to a somewhat
longer-lived leak.
Fix by releasing resources, including the libpq connection, on error.
Add a test exercising the error code path. To make it reliable and safe, the
test tries to connect to port=-1, which happens to fail during connection
establishment, rather than during connection string parsing.
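Something like the following exercises that path (hypothetical
subscription and publication names):
CREATE SUBSCRIPTION regress_sub
  CONNECTION 'host=localhost port=-1 dbname=postgres'
  PUBLICATION regress_pub;
-- fails at connection establishment, once a connection object exists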
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20230121011237.q52apbvlarfv6jm6@awork3.anarazel.de
Backpatch: 11-
b762fed64 recently changed this test to prevent subquery pullup to allow
us to test Memoize with lateral_vars. As pointed out by Tom Lane, OFFSET
0 is our standard way of preventing subquery pullups, so do it that way
instead.
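A sketch of the pattern, with hypothetical tables:
SELECT * FROM t1,
  LATERAL (SELECT t2.*, t1.a FROM t2 OFFSET 0) s;
-- OFFSET 0 keeps the subquery from being flattened into the main query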
Discussion: https://postgr.es/m/2144818.1674517061@sss.pgh.pa.us
Backpatch-through: 14, same as b762fed64
The test in question was meant to be testing Memoize to ensure it worked
correctly when the inner side of the join contained lateral vars; however,
nothing in the lateral subquery stopped it from being pulled up into the
main query, so the planner did that, and that meant no more lateral vars.
Here we add a simple ORDER BY to stop the planner from being able to
pull up the lateral subquery.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_LHJaN4L-tXpKMiPFnsCJWU1P8Xh59o0W7AA6UN99=cQ@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
The motivation for this change is that when pg_dump dumps a
partitioned index that's marked REPLICA IDENTITY, it generates a
command sequence that applies REPLICA IDENTITY before the partitioned
index has been marked valid, causing restore to fail. We could
perhaps change pg_dump to not do it like that, but that would be
difficult and would not fix existing dump files with the problem.
There seems to be very little reason for the backend to disallow
this anyway --- the code ignores indisreplident when the index
isn't valid --- so instead let's fix it by allowing the case.
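The restore sequence in question looks roughly like this (names made
up):
CREATE TABLE p (a int NOT NULL) PARTITION BY LIST (a);
CREATE UNIQUE INDEX p_idx ON ONLY p (a);           -- index not yet valid
ALTER TABLE p REPLICA IDENTITY USING INDEX p_idx;  -- used to be rejected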
Commit 9511fb37a previously expressed a concern that allowing
indisreplident to be set on invalid indexes might allow us to
wind up in a situation where a table could have indisreplident
set on multiple indexes. I'm not sure I follow that concern
exactly, but in any case the only way that could happen is because
relation_mark_replica_identity is too trusting about the existing set
of markings being valid. Let's just rip out its early-exit code path
(which sure looks like premature optimization anyway; what are we
doing expending code to make redundant ALTER TABLE ... REPLICA
IDENTITY commands marginally faster and not-redundant ones marginally
slower?) and fix it to positively guarantee that no more than one
index is marked indisreplident.
The pg_dump failure can be demonstrated in all supported branches,
so back-patch all the way. I chose to back-patch 9511fb37a as well,
just to keep indisreplident handling the same in all branches.
Per bug #17756 from Sergey Belyashov.
Discussion: https://postgr.es/m/17756-dd50e8e0c8dd4a40@postgresql.org
test_extensions' test_ext_cine extension has the same upgrade hazard
as test_ext7: the regression test leaves it in an updated state
from which no downgrade path to default is provided. This causes
the update_extensions.sql script helpfully provided by pg_upgrade
to fail. So drop it in cross-version-upgrade testing.
Not entirely sure how come I didn't hit this in testing yesterday;
possibly I'd built the upgrade reference databases with
testmodules-install-check disabled.
Backpatch to v10 where this module was introduced.
To test pg_upgrade across major PG versions, we have to be able to
modify or drop any old objects with no-longer-supported properties,
and we have to be able to deal with cosmetic changes in pg_dump output.
Up to now, the buildfarm and pg_upgrade's own test infrastructure had
separate implementations of the former, and we had nothing but very
ad-hoc rules for the latter (including an arbitrary threshold on how
many lines of unchecked diff were okay!). This patch creates a Perl
module that can be shared by both those use-cases, and adds logic
that deals with pg_dump output diffs in a much more tightly defined
fashion.
This largely supersedes previous efforts in commits 0df9641d3,
9814ff550, and 62be9e4cd, which developed a SQL-script-based solution
for the task of dropping old objects. There was nothing fundamentally
wrong with that work in itself, but it had no basis for solving the
output-formatting problem. The most plausible way to deal with
formatting is to build a Perl module that can perform editing on the
dump files; and once we commit to that, it makes more sense for the
same module to also embed the knowledge of what has to be done for
dropping old objects.
Back-patch versions of the helper module as far as 9.2, to
support buildfarm animals that still test that far back.
It's also necessary to back-patch PostgreSQL/Version.pm,
because the new code depends on that. I fixed up pg_upgrade's
002_pg_upgrade.pl in v15, but did not look into back-patching
it further than that.
Tom Lane and Andrew Dunstan
Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
In commit 8bf6ec3ba I assumed that no code path could reach
ExecGetExtraUpdatedCols without having gone through
ExecInitStoredGenerated. That turns out not to be the case in
logical replication: if there's an ON UPDATE trigger on the target
table, trigger.c will call this code before anybody has set up its
generated columns. Having seen that, I don't have a lot of faith in
there not being other such paths. ExecGetExtraUpdatedCols can call
ExecInitStoredGenerated for itself, as long as we are willing to
assume that it is only called in CMD_UPDATE operations, which on
the whole seems like a safer leap of faith.
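A sketch of the subscriber-side shape that failed (hypothetical names):
CREATE TABLE tgt (a int PRIMARY KEY,
                  g int GENERATED ALWAYS AS (a * 2) STORED);
CREATE FUNCTION tgt_noop() RETURNS trigger LANGUAGE plpgsql
  AS $$BEGIN RETURN NEW; END$$;
CREATE TRIGGER tgt_upd BEFORE UPDATE ON tgt
  FOR EACH ROW EXECUTE FUNCTION tgt_noop();
ALTER TABLE tgt ENABLE ALWAYS TRIGGER tgt_upd;  -- fires during apply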
Per report from Vitaly Davydov.
Discussion: https://postgr.es/m/d259d69652b8c2ff50e14cda3c236c7f@postgrespro.ru
The current jsonpath code assumes that the referenced variable always
exists. It could only throw an error at value evaluation time. At the
same time, the existence check assumed the variable was present without
evaluating it, and error suppression didn't work for missing variables.
This commit makes the existence check trigger an error for missing
variables. This makes the overall behavior consistent.
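For example, a sketch:
SELECT jsonb_path_exists('{}', '$undefined_var');
-- previously succeeded quietly; now raises an error about the missing
-- jsonpath variable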
Backpatch to 12 where jsonpath was introduced.
Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwbeytffJkVnEqDyLZ%3DrQsznoTh1OgDoOF3VmOMkxcTMjA%40mail.gmail.com
Author: Alexander Korotkov, David G. Johnston
Backpatch-through: 12
We were identifying the updatable generated columns of inheritance
children by transposing the calculation made for their parent.
However, there's nothing that says a traditional-inheritance child
can't have generated columns that aren't there in its parent, or that
have different dependencies than are in the parent's expression.
(At present it seems that we don't enforce that for partitioning
either, which is likely wrong to some degree or other; but the case
clearly needs to be handled with traditional inheritance.)
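A traditional-inheritance sketch of the problem (names made up):
CREATE TABLE parent (f1 int, f2 int);
CREATE TABLE child (f3 int GENERATED ALWAYS AS (f1 + f2) STORED)
  INHERITS (parent);
UPDATE parent SET f2 = f2 + 1;  -- child's f3 must be recomputed even
                                -- though parent has no generated column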
Hence, drop the very-klugy-anyway "extraUpdatedCols" RTE field
in favor of identifying which generated columns depend on updated
columns during executor startup. In HEAD we can remove
extraUpdatedCols altogether; in back branches, it's still there but
always empty. Another difference between the HEAD and back-branch
versions of this patch is that in HEAD we can add the new bitmap field
to ResultRelInfo, but that would cause an ABI break in back branches.
Like 4b3e37993, add a List field at the end of struct EState instead.
Back-patch to v13. The bogus calculation is also being made in v12,
but it doesn't have the same visible effect because we don't use it
to decide which generated columns to recalculate; as a consequence of
which the patch doesn't apply easily. I think that there might still
be a demonstrable bug associated with trigger firing conditions, but
that's such a weird corner-case usage that I'm content to leave it
unfixed in v12.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/CA+HiwqFshLKNvQUd1DgwJ-7tsTp=dwv7KZqXC4j2wYBV1aCDUA@mail.gmail.com
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
While on it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes. One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
Cirrus is about to shut down its macOS-on-Intel support, so it's time to
move our CI testing over to ARM instances. The Homebrew package manager
changed its default installation prefix for the new architecture, so a
couple of tests need tweaks to find binaries.
Back-patch to 15, where in-tree CI began.
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20221122225744.GF11463%40telsasoft.com
The test added in commit e44dae07f9 has a thinko: it wants to read
info about a few WAL records, but it obtains the LSN of the final record
to read by asking for the WAL insert position; however,
pg_get_wal_records_info only allows reading up to the flush position
(cf. IsFutureLSN()).  In normal conditions there is no difference, since
the last record written by the preceding loop is known to be flushed and it's
the one the test wants; but it's possible to have some other process
insert another WAL record that isn't flushed, and that causes the whole
test to explode.
Fix by having pg_get_wal_records_info() read only up to the flushed
position. Backpatch to 15, which is where pg_walinspect appeared.
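The fixed test bounds the read by the flush position, roughly:
SELECT count(*) > 0
  FROM pg_get_wal_records_info(:'start_lsn',
                               pg_current_wal_flush_lsn());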
Author: Karina Litskevich <litskevichkarina@gmail.com>
Discussion: https://postgr.es/m/a5559c95-52c3-5eea-cd63-9b4f1c70ff96@gmail.com
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query. We can just move that processing
to before the recursive parse_sub_analyze call, though.
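A sketch of the kind of reference that used to fail:
WITH RECURSIVE t(n) AS (
    SELECT 1
  UNION ALL
    SELECT n + 1 FROM t WHERE NOT is_cycle AND n < 5
) CYCLE n SET is_cycle USING path
SELECT * FROM t;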
While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.
Per bug #17723 from Vik Fearing. Back-patch to v14 where
the CYCLE feature was added.
Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing. d2d8a229b, which added incremental sorts
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan. This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that. It seems based on the name
of the PREPAREd statement that the intention here is that the query
produces a seqscan plan.
The reason this test has become broken seems to be due to how the test was
originally coded. The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering. Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going to
be flakey as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR. Here we get rid of
the casts that we're using to try to trick the planner into a seqscan and
instead toggle enable_seqscan as and when required to get the desired
plan.
Additionally, rename a few things in the test and add some additional
wording to the comments to try to make it clearer in the future what
we expect this test to be doing.
Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum. That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.
setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums. push_path() had the same issue. This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.
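A hypothetical shape that could go wrong, with the subscript text
coming from on-disk data rather than a plain constant:
SELECT j.jcol[k.keyname] FROM jtab j, keys k;
-- keyname values fetched from a table may be short-header, compressed,
-- or out-of-line, which the old code mishandled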
Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files. There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is. Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array. Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do. In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.
Per report from exe-dealer@yandex.ru. Patch by me, but thanks to
David Johnston for initial investigation. Back-patch to v14 where
jsonb subscripting was introduced.
Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
Commit 7103ebb7aa added support for MERGE, which included support for
inheritance hierarchies, but didn't document the fact that ONLY could
be specified before the source and/or target tables to exclude tables
inheriting from the tables specified.
Update merge.sgml to mention this, and while at it, add some
regression tests to cover it.
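For example (hypothetical tables):
MERGE INTO ONLY target t
USING ONLY source s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
-- children inheriting from target or source are excluded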
Dean Rasheed, reviewed by Nathan Bossart.
Backpatch to 15, where MERGE was added.
Discussion: https://postgr.es/m/CAEZATCU0XM-bJCvpJuVRU3UYNRqEBS6g4-zH%3Dj9Ye0caX8F6uQ%40mail.gmail.com
A couple of places weren't up to speed for this. By sheer good
luck, we didn't fail but just selected a non-memoized join plan,
at least in the test case we have. Nonetheless, it's a bug,
and I'm not quite sure that it couldn't have worse consequences
in other examples. So back-patch to v14 where Memoize came in.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs48GkNom272sfp0-WeD6_0HSR19BJ4H1c9ZKSfbVnJsvRg@mail.gmail.com
When updating a relation with a rule whose action performed an INSERT
from a multi-row VALUES list, the rewriter might skip processing the
VALUES list, and therefore fail to replace any DEFAULTs in it. This
would lead to an "unrecognized node type" error.
The reason was that RewriteQuery() assumed that a query doing an
INSERT from a multi-row VALUES list would necessarily only have one
item in its fromlist, pointing to the VALUES RTE to read from. That
assumption is correct for the original query, but not for product
queries produced for rule actions. In such cases, there may be
multiple items in the fromlist, possibly including multiple VALUES
RTEs.
What is required instead is for RewriteQuery() to skip any RTEs from
the product query's originating query, which might include one or more
already-processed VALUES RTEs. What's left should then include at most
one VALUES RTE (from the rule action) to be processed.
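A minimal sketch of the failing shape (hypothetical tables):
CREATE TABLE t1 (a int DEFAULT 1);
CREATE TABLE t2 (b int DEFAULT 2);
CREATE RULE r AS ON UPDATE TO t1
  DO ALSO INSERT INTO t2 VALUES (DEFAULT), (DEFAULT);
UPDATE t1 SET a = 3;   -- previously: unrecognized node type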
Patch by me. Thanks to Tom Lane for reviewing.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
When the relkind of a relcache entry changes, because a table is converted into
a view, pgstats can get confused in 15+, leading to crashes or assertion
failures.
For HEAD, Tom fixed this in b23cd185fd, by removing support for converting a
table to a view, removing the source of the inconsistency. This commit just
adds an assertion that a relcache entry's relkind does not change, just in
case we end up with another case of that in the future. As there are no
cases of changing relkind anymore, we can't add a test that it's handled
correctly.
For 15, fix the problem by not maintaining the association with the old pgstat
entry when the relkind changes during relcache invalidation processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.
No known problem exists in 11-14, so just add the test there.
Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
Some options of these commands need to be able to identify the start
of the function body within the output of pg_get_functiondef().
It used to be that that always began with "AS", but since the
introduction of new-style SQL functions, it might also start with
"BEGIN" or "RETURN". Fix that on the psql side, and add some
regression tests.
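A new-style function whose definition trips the old logic, for
example:
CREATE FUNCTION add(a int, b int) RETURNS int
  LANGUAGE SQL
  RETURN a + b;   -- body starts with RETURN, not AS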
Noted by me a while ago, but I didn't do anything about it.
Thanks to David Johnston for a nag.
Discussion: https://postgr.es/m/AM9PR01MB8268D5CDABDF044EE9F42173FE8C9@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
For Updates and Deletes, we were not honoring the column list for old
tuple values while sending tuple data via pgoutput. This results in
pgoutput emitting more columns than expected.
This is not a problem for built-in logical replication as we simply ignore
additional columns based on the relation information sent previously which
didn't have those columns. However, some other users of pgoutput plugin
may expect the columns as per the column list. Also, sending extra columns
unnecessarily consumes network bandwidth defeating the purpose of the
column list feature.
Reported-by: Gunnar Morling
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/CADGJaX9kiRZ-OH0EpWF5Fkyh1ZZYofoNRCrhapBfdk02tj5EKg@mail.gmail.com
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it. But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that. Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.
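For instance, an expression like the following needs its inner parens
preserved when deparsed, since AT TIME ZONE binds more tightly than '+'
(hypothetical view and table):
CREATE VIEW v AS
  SELECT (ts + interval '1 hour') AT TIME ZONE 'UTC' AS t FROM tbl;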
In passing, remove a comment that had been obsoleted by later changes.
Per report from Duncan Sands. Back-patch to v14 where this code
came in. (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug just ugliness.)
Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
Writing "pg_regress --dbname= ..." led to a crash, because
we weren't expecting there to be no database name supplied.
It doesn't seem like a great idea to run regression tests
in whatever is the user's default database; so rather than
supporting this case let's explicitly reject it.
Per report from Xing Guo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that
need to be evaluated at the semijoin's RHS, which is wrong because
there could be some in the semijoin's qual condition. However, there
could not be any references further up than that, and within the qual
there is not any way that such a PHV could have gone to null yet, so
we don't really need the PHV and there is no need to avoid making the
RHS-removal optimization. The upshot is that there's no actual bug
in production code, and we ought to just remove this misguided Assert.
While we're here, also drop the JOIN_RIGHT case, which is dead code
because reduce_outer_joins() already got rid of JOIN_RIGHT.
Per bug #17700 from Xin Wen. Uselessness of the JOIN_RIGHT case
pointed out by Richard Guo. Back-patch to v12 where this code
was added.
Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
Currently there is a race condition where, if concurrent TAP tests both
test that they can open a port, they will assume that it is free and use
it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.
Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR or, if that doesn't exist, a subdirectory of the top
build directory as set by Makefile.global, or its own
tmp_check directory.
The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.
Per complaint from Andres Freund
Backpatched from 9b4eafcaf4 to all live branches
Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
The test output varies when debug_discard_caches is enabled,
because that causes extra executions of recomputeNamespacePath.
Maybe putting a hook in that was a bad idea, but as a stopgap,
just turn off debug_discard_caches in this test.
Per buildfarm (now that we have debug_discard_caches coverage
again). Back-patch to v15 where this module was added.
Discussion: https://postgr.es/m/2267406.1668804934@sss.pgh.pa.us
Version strings with unequal numbers of parts were being compared
incorrectly. We cure this by treating a missing part in the shorter
version as 0 (so, for example, 9.6 compares as 9.6.0 against 9.6.3).
Per complaint from Jehan-Guillaume de Rorthais, but the fix is mine, not
his.
Discussion: https://postgr.es/m/20220628225325.53d97b8d@karst
Backpatch to release 14 where this code was introduced.
The current code looks for the sample file in the source directory, but
it seems better to test against the installed sample file.
Backpatch to release 15 where the test was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Currently this only allows for one argument, which must be present, and
always returns a single string. With this change the following now all
work:
$all_config = $node->config_data;
%config_map = ($node->config_data);
$incdir = $node->config_data('--include-dir');
($incdir, $sharedir) = $node->config_data(
qw(--include-dir --share-dir));
Backpatch to release 15 where this was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Test files should now ignore has_wal_read_bug() so long as
wait_for_catchup() is their only known way of reaching the bug. That's
at least five files today, a number expected to grow over time. This
commit removes skip logic from three. By doing so, systems having the
bug regain the ability to catch other kinds of defects via those three
tests. The other two, 002_databases.pl and 031_recovery_conflict.pl,
have been unprotected. Back-patch to v15, where done_testing() first
became our standard.
Discussion: https://postgr.es/m/20221030031639.GA3082137@rfd.leadboat.com
The stanza "SET STORAGE may need to add a TOAST table" does not
test what it's supposed to, and hasn't done so since we added
the ability to store constant column default values as metadata.
We need to use a non-constant default to get the expected table
rewrite to actually happen.
Fix that, and add the missing checks that would have exposed the
problem to begin with.
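Roughly, the fixed stanza now does this (a sketch, not the exact
test):
CREATE TABLE test_storage (a text);
ALTER TABLE test_storage ALTER a SET STORAGE plain;
ALTER TABLE test_storage ADD b int DEFAULT random()::int;
-- the non-constant default forces a rewrite, dropping the TOAST table
ALTER TABLE test_storage ALTER a SET STORAGE extended;
-- this must re-create the TOAST table
SELECT reltoastrelid <> 0 AS has_toast_table
  FROM pg_class WHERE relname = 'test_storage';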
Noted while reviewing a patch that made changes in this test case.
Back-patch to v11 where the problem came in.
"Triggers on partitioned tables cannot have transition tables." is
incorrect as we allow statement-level triggers on partitioned tables to
have transition tables.
This has been wrong since commit 86f575948; back-patch to v11 where that
commit came in.
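What is allowed, and what the docs now say, is a statement-level
trigger such as this (hypothetical names):
CREATE TRIGGER pt_audit AFTER UPDATE ON partitioned_tab
  REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
  FOR EACH STATEMENT EXECUTE FUNCTION audit_fn();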
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAPmGK17gk4vXLzz2iG%2BG4LWRWCoVyam70nZ3OuGm1hMJwDrhcg%40mail.gmail.com
Commit f56f8f8da6 added some code in CloneFkReferencing that's way too
lax about a Constraint node it manufactures, not initializing enough
struct members -- initially_valid in particular was forgotten. This
causes some FKs in partitions added by ALTER TABLE ATTACH PARTITION to
be marked as not validated. Set initially_valid true, which fixes the
bug.
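A sketch of the affected sequence (names made up):
CREATE TABLE ref (a int PRIMARY KEY);
CREATE TABLE p (a int REFERENCES ref) PARTITION BY LIST (a);
CREATE TABLE p1 (a int);
ALTER TABLE p ATTACH PARTITION p1 FOR VALUES IN (1);
-- the FK cloned onto p1 could end up marked as not validated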
While at it, make the struct initialization more complete. Very similar
code was added in two other places by the same commit; make them all
follow the same pattern for consistency, though no bugs are apparent
there.
This bug has never been reported: I only happened to notice while
working on commit 614a406b4f. The test case that was added there with
the improper result is repaired.
Backpatch to 12.
Discussion: https://postgr.es/m/20221005105523.bhuhkdx4olajboof@alvherre.pgsql
Some cases would result in "cache lookup failed for statistics object",
due to trying to fetch inherited statistics when only non-inherited
ones are available or vice versa.
Richard Guo and Justin Pryzby
Discussion: https://postgr.es/m/20221030170520.GM16921@telsasoft.com
As currently designed, with a callback registered in an ERROR_CLEANUP
block, the shutdown callback would get called twice when updating
archive_library on SIGHUP, which is something that we want to avoid to
ease the life of extension writers.
Anyway, an ERROR in the archiver process is treated as a FATAL, stopping
it immediately, hence there is no need for an ERROR_CLEANUP block.
Instead of that, the shutdown callback is now called upon
before_shmem_exit(), giving the modules the opportunity to do any
cleanup actions before the server shuts down its subsystems.
While on it, this commit adds some testing coverage for the shutdown
callback.  Neither shell_archive nor basic_archive has been using it,
and one is added to shell_archive, whose trigger is checked in a TAP
test through a shutdown sequence.
Author: Nathan Bossart, Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20221015221328.GB1821022@nathanxps13
Backpatch-through: 15
If the non-recursive term of a SEARCH BREADTH FIRST recursive
query has only constants in its target list, the planner will
fold the starting RowExpr added by rewrite into a simple Const
of type RECORD. The executor doesn't have any problem with
that --- but EXPLAIN VERBOSE will encounter the Const as the
ultimate source of truth about what the field names of the
SET column are, and it didn't know what to do with that.
Fortunately, we can pull the identifying typmod out of the
Const, in much the same way that record_out would.
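For example, a sketch:
EXPLAIN (VERBOSE, COSTS OFF)
WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n + 1 FROM t WHERE n < 3
) SEARCH BREADTH FIRST BY n SET ord
SELECT * FROM t;
-- the non-recursive term is all constants, so the added RowExpr folds
-- to a Const of type RECORD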
For reasons that remain a bit obscure to me, this only fails
with SEARCH BREADTH FIRST, not SEARCH DEPTH FIRST or CYCLE.
But I added regression test cases for both of those options
too, just to make sure we don't break it in future.
Per bug #17644 from Matthijs van der Vleuten. Back-patch
to v14 where these constructs were added.
Discussion: https://postgr.es/m/17644-3bd1f3036d6d7a16@postgresql.org
When a query whose results were requested in single-row mode is the last
in the queue by the time those results are being read, the single-row
flag was not being reset, because we were returning early from
pqPipelineProcessQueue. Move that stanza up so that the flag is always
reset at the end of sending that query's results.
Add a test for the situation.
Backpatch to 14.
Author: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/01af18c5-dacc-a8c8-07ee-aecc7650c3e8@dalibo.com
DEFAULT markers appearing in an INSERT on an updatable view
could be mis-processed if they were in a multi-row VALUES clause.
This would lead to strange errors such as "cache lookup failed
for type NNNN", or in older branches even to crashes.
The cause is that commit 41531e42d tried to re-use rewriteValuesRTE()
to remove any SetToDefault nodes (that hadn't previously been replaced
by the view's own default values) appearing in "product" queries,
that is DO ALSO queries. That's fundamentally wrong because the
DO ALSO queries might not even be INSERTs; and even if they are,
their targetlists don't necessarily match the view's column list,
so that almost all the logic in rewriteValuesRTE() is inapplicable.
What we want is a narrow focus on replacing any such nodes with NULL
constants. (That is, in this context we are interpreting the defaults
as being strictly those of the view itself; and we already replaced
any that aren't NULL.) We could add still more !force_nulls tests
to further lobotomize rewriteValuesRTE(); but it seems cleaner to
split out this case to a new function, restoring rewriteValuesRTE()
to the charter it had before.
Per bug #17633 from jiye_sw. Patch by me, but thanks to
Richard Guo and Japin Li for initial investigation.
Back-patch to all supported branches, as the previous fix was.
Discussion: https://postgr.es/m/17633-98cc85e1fa91e905@postgresql.org
There are a number of bugs in this area. Two of them are fixed here,
namely:
1. get_relation_idx_constraint_oid does not restrict the type of
constraint that's returned, so with sufficient bad luck it can
return the OID of a foreign key constraint. This has the effect that
a primary key in a partition can end up as a child of a foreign key,
which makes no sense (it needs to be the child of the equivalent
primary key.)
Change the API contract so that only index-backed constraints are
returned, mimicking get_constraint_index().
2. Both CloneFkReferenced and CloneFkReferencing clone a
self-referencing foreign key, so the partition ends up with
a duplicate foreign key. Change the former function to ignore such
constraints.
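A sketch of the self-referencing case (names made up):
CREATE TABLE t (a int PRIMARY KEY, b int REFERENCES t)
  PARTITION BY RANGE (a);
CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (0) TO (100);
-- t1 previously ended up with the self-referencing FK cloned twice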
Add some tests to verify that things are better now. (However, these
new tests show some additional misbehavior that will be fixed later --
namely that there's a constraint marked NOT VALID.)
Backpatch to 12, where these constraints are possible at all.
Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Discussion: https://postgr.es/m/20220603154232.1715b14c@karst
This reverts commit db0d67db24 and
several follow-on fixes. The idea of making a cost-based choice
of the order of the sorting columns is not fundamentally unsound,
but it requires cost information and data statistics that we don't
really have. For example, relying on procost to distinguish the
relative costs of different sort comparators is pretty pointless
so long as most such comparator functions are labeled with cost 1.0.
Moreover, estimating the number of comparisons done by Quicksort
requires more than just an estimate of the number of distinct values
in the input: you also need some idea of the sizes of the larger
groups, if you want an estimate that's good to better than a factor of
three or so. That's data that's often unknown or not very reliable.
Worse, to arrive at estimates of the number of calls made to the
lower-order-column comparison functions, the code needs to make
estimates of the numbers of distinct values of multiple columns,
which are necessarily even less trustworthy than per-column stats.
Even if all the inputs are perfectly reliable, the cost algorithm
as-implemented cannot offer useful information about how to order
sorting columns beyond the point at which the average group size
is estimated to drop to 1.
Close inspection of the code added by db0d67db2 shows that there
are also multiple small bugs. These could have been fixed, but
there's not much point if we don't trust the estimates to be
accurate in principle.
Finally, the changes in cost_sort's behavior made for very large
changes (often a factor of 2 or so) in the cost estimates for all
sorting operations, not only those for multi-column GROUP BY.
That naturally changes plan choices in many situations, and there's
precious little evidence to show that the changes are for the better.
Given the above doubts about whether the new estimates are really
trustworthy, it's hard to summon much confidence that these changes
are better on the average.
Since we're hard up against the release deadline for v15, let's
revert these changes for now. We can always try again later.
Note: in v15, I left T_PathKeyInfo in place in nodes.h even though
it's unreferenced. Removing it would be an ABI break, and it seems
a bit late in the release cycle for that.
Discussion: https://postgr.es/m/TYAPR01MB586665EB5FB2C3807E893941F5579@TYAPR01MB5866.jpnprd01.prod.outlook.com
I (Álvaro) broke tab-completion for GRANT .. ALL TABLES IN SCHEMA while
removing ALL from the publication syntax for schemas in the
aforementioned commit.  I also failed to update a bunch of
tab-completion rules for ALTER/CREATE PUBLICATION that match each
individual piece of ALL TABLES IN SCHEMA. Repair those bugs.
While fixing up that commit, update a couple of outdated comments
related to the same change.
Backpatch to 15.
Author: Shi yu <shiy.fnst@fujitsu.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/OSZPR01MB6310FCE8609185A56344EED2FD559@OSZPR01MB6310.jpnprd01.prod.outlook.com