British Columbia (America/Vancouver) has moved to permanent UTC-07
effective 2026-03-09; the change will first affect clocks on
2026-11-01, when they would otherwise have fallen back. For lack of
any clarity on the point, assume their TZ abbreviation will be MST
from that time forward.
Moldova (Europe/Chisinau) has followed EU DST transition times since
2022.
Backpatch-through: 14
We need to create top-level targets so that they can be run with the
ninja command, like `ninja <target_name>`.
Some targets (man, html, ...) use the same name for both the top-level
alias and the underlying custom target. This confuses the meson
build:
```
$ meson compile -C build html
ERROR: Can't invoke target `html`: ambiguous name. Add target type
and/or path:
- ./doc/src/sgml/html:custom
- ./doc/src/sgml/html:alias
```
Solve that problem by adding a '-custom' suffix to the custom target
names of these problematic targets. Top-level targets can now be
invoked with both meson and ninja:
$ meson compile -C build html
$ ninja -C build html
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Suggested-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/5508e572-79ae-4b20-84d0-010a66d077f2%40eisentraut.org
Expressions in the GRAPH_TABLE COLUMNS list may have lateral
references. get_rule_expr() requires lateral namespaces to deparse
such references, but get_from_clause_item() did not pass them when
processing the expressions in the COLUMNS list, causing ERROR "bogus
varlevelsup: 0 offset 0". Fix get_from_clause_item() to pass the
input deparse_context, which contains the lateral namespaces, to
get_rule_expr() instead of the dummy context.
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHg%2BQDcLVa2iBnggkHxY4itZbXtDMfsYHEjnCUYe9hNbnxDi-w%40mail.gmail.com
The GRAPH_TABLE clause is converted into a range table entry, which is
ignored by assign_query_collations(). Hence we assign collations
while transforming its parts. But expressions in the COLUMNS clause
missed that treatment, so fix that.
While at it, add comments about collation assignment to the parts of
the GRAPH_TABLE clause, and fix a small grammar issue.
Reported-by: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAHg+QDc4aaiufYSgrwMMPMMRTPtQ66SghcrPFbWJFZMqNaG+BA@mail.gmail.com
generate_queries_for_path_pattern_recurse() and
generate_setop_from_pathqueries() are recursive functions. For a
property graph with hundreds of tables, a graph pattern with a handful
of element patterns can cause a stack overflow. Fix it by calling
check_stack_depth() at the beginning of these functions.
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAHg+QDfgK0xddH8f3eAb+UVn7sBDOnv8RvM6OkP4HtHAt6aD7w@mail.gmail.com
Commit 0b096e379e changed pg_test_timing to measure timing
differences in nanoseconds instead of microseconds, but the resulting
deltas continued to be stored in int32.
That can overflow for large gaps (for example, values greater than about
2.14 seconds in nanoseconds), leading to truncation or incorrect output.
This commit fixes the issue by storing measured timing deltas in int64.
This prevents overflow for large values and better matches
nanosecond-resolution measurements.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Xiaopeng Wang <wxp_728@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/F780CEEB-A237-4302-9F55-60E9D8B6533D@gmail.com
ExecEvalHashedScalarArrayOp(), when using a strict equality function,
short-circuits lookups of NULL values. When the function is
non-strict, the code incorrectly probed the hash table for a
zero-valued Datum, which could result in an accidental true return if
the hash table contained a zero-valued Datum, or in a crash for
non-byval types.
Here we fix this by adding an extra step when we build the hash table to
check what the result of a NULL lookup would be. This requires looping
over the array and checking what the non-hashed version of the code
would do. We cache the results of that in the expression so that we can
reuse the result any time we're asked to search for a NULL value.
It's important to note that non-strict equality functions are free to
treat any NULL value as equal to any non-NULL value. For example,
someone may wish to design a type that treats an empty string and NULL
as equal.
All built-in types have strict equality functions, so this can only
affect custom / user-defined types.
Author: Chengpeng Yan <chengpeng_yan@outlook.com>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: ChangAo Chen <cca5507@qq.com>
Discussion: https://postgr.es/m/A16187AE-2359-4265-9F5E-71D015EC2B2D@outlook.com
Backpatch-through: 14
pg_test_timing reports timing differences in nanoseconds in master, and
in microseconds in v14 through v18, but previously the backward-clock
warning incorrectly labeled the value as milliseconds.
This commit fixes the warning message to use "ns" in master and
"us" in v14 through v18, matching the actual unit being reported.
Backpatch to all supported versions.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Xiaopeng Wang <wxp_728@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/F780CEEB-A237-4302-9F55-60E9D8B6533D@gmail.com
Backpatch-through: 14
If CheckAttributeType() is called with InvalidOid, it performs a bunch
of pointless syscache lookups with InvalidOid, but ultimately
tolerates it and has no effect. We were calling it with InvalidOid on
dropped columns; it seems accidental that that works, so let's stop
doing it.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi
Backpatch-through: 14
CheckAttributeType() checks that a composite type is not made a member
of itself with ALTER TABLE ADD COLUMN or ALTER TYPE ADD ATTRIBUTE,
even indirectly via a domain, array, another composite type or a range
type. But it missed checking for multiranges. That was a simple
oversight when multiranges were added.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi
Backpatch-through: 14
These tests sometimes run with wal_level=minimal, which does not allow
running REPACK (CONCURRENTLY). Move them to test_decoding, which is
guaranteed to run with a high enough wal_level.
Discussion: https://postgr.es/m/260901.1776696126@sss.pgh.pa.us
The psql describe (`\d`) footer titles were previously unintuitive when
listing publications that included or excluded specific tables. Even
though the tag for included publications was pre-existing, it is better
to update it to "Included in publications:" to match the phrasing of
the "Excluded from publications:" tag.
Footer titles for sequence and schema descriptions have been updated
similarly to maintain consistency.
Reported-by: Álvaro Herrera <alvherre@kurilemu.de>
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Yuchen Li <liyuchen_xyz@163.com>
Discussion: https://postgr.es/m/aeDs7iZUox1bbKAK%40alvherre.pgsql
There's some talk about upgrading our current -Wshadow=compatible-local
to -Wshadow. There are some pending questions as to whether the churn
and extra backpatching pain are worthwhile for fixing all of them.
That argument does not apply to warnings that are new in v19, provided
we fix them now. So let's fix those, so that the problem is no worse
if we decide to fix the remainder in v20.
Author: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Yuchen Li <liyuchen_xyz@163.com>
Discussion: https://postgr.es/m/CAApHDvp=rx5GxM=yW8QhFF3noXtYt7LkOxJ7zkaPOzpti4Gm8w@mail.gmail.com
The problem report was about setting GUCs in the startup packet for a
physical replication connection. Setting the GUC required an ACL
check, which performed a lookup on pg_parameter_acl.parname. The
catalog cache was hardwired to use DEFAULT_COLLATION_OID for
texteqfast() and texthashfast(), but the database default collation
was uninitialized because it's a physical walsender and never connects
to a database. In versions 18 and later, this resulted in a NULL
pointer dereference, while in version 17 it resulted in an ERROR.
As the comments stated, using DEFAULT_COLLATION_OID was arbitrary
anyway: if the collation actually mattered, it should have been the
column's actual collation. (In the catalogs, some text columns use
the default collation and some use "C".)
Fix by using C_COLLATION_OID, which doesn't require any initialization
and is always available. When any deterministic collation will do,
it's best to consistently use the simplest and fastest one, so this is
a good idea anyway.
Another problem was raised in the thread, which this commit doesn't
fix (see second discussion link).
Reported-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/D18AD72A-5004-4EF8-AF80-10732AF677FA@yandex-team.ru
Discussion: https://postgr.es/m/4524ed61a015d3496fc008644dcb999bb31916a7.camel%40j-davis.com
Backpatch-through: 17
Commit 7a1f0f8747 optimized the slot verification query but
overlooked cases where all logical replication slots are already
invalidated. In this scenario, the CTE returns no rows, causing the
main query (which used a cross join) to return an empty result even
when invalid slots exist.
This commit fixes this by using a LEFT JOIN with the CTE, ensuring
that slots are properly reported even if the CTE returns no rows.
Author: Lakshmi N <lakshmin.jhs@gmail.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CA+3i_M8eT6j8_cBHkYykV-SXCxbmAxpVSKptjDVq+MFtpT-Paw@mail.gmail.com
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in a few places. Most of
these inconsistencies were introduced during Postgres 19 development.
This commit was written with help from clang-tidy, by mechanically
applying the same rules as similar clean-up commits (the earliest such
commit being 035ce1fe).
to_char() allocates its output buffer with 8 bytes per formatting
code in the pattern. If the locale's currency symbol, thousands
separator, decimal point, or sign symbol is more than 8 bytes long,
in principle we could overrun the output buffer. No such locales
exist in the real world, so it seems sufficient to truncate the
symbol if we do see one that is too long.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/638232.1776790821@sss.pgh.pa.us
Backpatch-through: 14
parse_affentry() and addCompoundAffixFlagValue() each collect fields
from an affix file into working buffers of size BUFSIZ. They failed
to defend against overlength fields, so that a malicious affix file
could cause a stack smash. BUFSIZ (typically 8K) is certainly way
longer than any reasonable affix field, but let's fix this while
we're closing holes in this area.
I chose to do this by silently truncating the input before it can
overrun the buffer, using logic comparable to the existing logic in
get_nextfield(). Certainly there's at least as good an argument for
raising an error, but for now let's follow the existing precedent.
Reported-by: Igor Stepansky <igor.stepansky@orca.security>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/864123.1776810909@sss.pgh.pa.us
Backpatch-through: 14
This function writes into a caller-supplied buffer of length
2 * MAXNORMLEN, which should be plenty in real-world cases.
However, a malicious affix file could supply an affix long
enough to overrun that. Defend by just rejecting the match
if it would overrun the buffer. I also inserted a check of
the input word length against Affix->replen, just to be sure
we won't index off the buffer, though it would be caller error
for that not to be true.
Also make the actual copying steps a bit more readable, and remove
an unnecessary requirement for the whole input word to fit into the
output buffer (even though it always will with the current caller).
The lack of documentation in this code makes my head hurt, so
I also reverse-engineered a basic header comment for CheckAffix.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/641711.1776792744@sss.pgh.pa.us
Backpatch-through: 14
When using ALTER TABLE ... MERGE PARTITIONS or ALTER TABLE ... SPLIT
PARTITION, extension dependencies on partition indexes were being lost.
This happened because the new partition indexes are created fresh from
the parent partitioned table's indexes, while the old partition indexes
(with their extension dependencies) are dropped.
Fix this by collecting extension dependencies from source partition
indexes before detaching them, then applying those dependencies to the
corresponding new partition indexes after they're created. The mapping
between old and new indexes is done via their common parent partitioned
index.
For MERGE operations, all source partition indexes sharing a parent
partitioned index must have the same extension dependencies; if they
differ, an error naming both conflicting partition indexes is raised.
The check is implemented by collecting one entry per partition index,
sorting by parent index OID, and comparing adjacent entries in a single
pass. This is order-independent: the same set of partitions produces
the same decision regardless of the order they are listed in the MERGE
command, and subset mismatches are caught in both directions.
For SPLIT operations, the new partition indexes simply inherit all
extension dependencies from the source partition's index.
The regression tests exercising this feature live under
src/test/modules/test_extensions, where the test_ext3 and test_ext5
extensions are available; core regression tests cannot assume any
particular extension is installed.
Author: Matheus Alcantara <matheusssilv97@gmail.com>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Reported-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/CALdSSPjXtzGM7Uk4fWRwRMXcCczge5uNirPQcYCHKPAWPkp9iQ%40mail.gmail.com
Formerly, attempting to use WHERE CURRENT OF to update or delete from
a table with virtual generated columns would fail with the error
"WHERE CURRENT OF on a view is not implemented".
The reason was that the check preventing WHERE CURRENT OF from being
used on a view was in replace_rte_variables_mutator(), which presumed
that the only way it could get there was as part of rewriting a query
on a view. That is no longer the case, since replace_rte_variables()
is now also used to expand the virtual generated columns of a table.
Fix by doing the check for WHERE CURRENT OF on a view at parse time.
This is safe, since it is no longer possible for the relkind to change
after the query is parsed (as of b23cd185f).
Reported-by: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDc_TwzSgb=B_QgNLt3mvZdmRK23rLb+RkanSQkDF40GjA@mail.gmail.com
Backpatch-through: 18
If the SET or WHERE clause of an INSERT ... ON CONFLICT command
references EXCLUDED.col, where col is a virtual generated column, the
column was not properly expanded, leading to an "unexpected virtual
generated column reference" error, or incorrect results.
The problem was that expand_virtual_generated_columns() would expand
virtual generated columns in both the SET and WHERE clauses and in the
targetlist of the EXCLUDED pseudo-relation (exclRelTlist). Then
fix_join_expr() from set_plan_refs() would turn the expanded
expressions in the SET and WHERE clauses back into Vars, because they
would be found to match the expression entries in the indexed tlist
produced from exclRelTlist.
To fix this, arrange for expand_virtual_generated_columns() to not
expand virtual generated columns in exclRelTlist. This forces
set_plan_refs() to resolve generation expressions in the query using
non-virtual columns, as required by the executor.
In addition, exclRelTlist now always contains only Vars. That was
something already claimed in a couple of existing comments in the
planner, which relied on that fact to skip some processing, though
those did not appear to constitute active bugs.
Reported-by: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDf7wTLz_vqb1wi1EJ_4Uh+Vxm75+b4c-Ky=6P+yOAHjbQ@mail.gmail.com
Backpatch-through: 18
The ri_FetchConstraintInfo() and ri_LoadConstraintInfo() functions
were declared to return const RI_ConstraintInfo *, but callers
sometimes need to modify the struct, requiring casts to drop the
const. Remove the misapplied const qualifiers and the casts that
worked around them.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/548600ed-8bbb-4e50-8fc3-65091b122276@eisentraut.org
This commit tweaks ALTER INDEX .. ATTACH PARTITION to attempt
validation of a parent index in the case where an index is already
attached but the parent is not yet valid. This occurs when a parent
index was created invalid, such as with CREATE INDEX ONLY, and was
left invalid after an invalid child index was attached (indisvalid is
true for a partitioned index iff all its partitions are indisvalid).
This could leave a partition tree in a situation where a user could
not bring the parent index back to valid after fixing the child index,
as there is no built-in mechanism to do so. This commit relies on the
fact that repeated ATTACH PARTITION commands on the same index
silently succeed.
An invalid parent index is more than just a passive issue. It causes
failures, for example with ON CONFLICT on a partitioned table, if the
invalid parent index is used to enforce a unique constraint.
Some test cases are added to track some of the problematic patterns,
using a set of partition trees with combinations of invalid indexes
and ATTACH PARTITION.
Reported-by: Mohamed Ali <moali.pg@gmail.com>
Author: Sami Imseih <sanmimseih@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Haibo Yan <tristan.yim@gmail.com>
Discussion: http://postgr.es/m/CAGnOmWqi1D9ycBgUeOGf6mOCd2Dcf=6sKhbf4sHLs5xAcKVCMQ@mail.gmail.com
Backpatch-through: 14
This neglected to set TAP_TESTS = 1, and partially compensated
for that by writing duplicative hand-made rules for check and
installcheck. That's not really sufficient though. The way
I noticed the error was that "make distclean" didn't clean out
the tmp_check subdirectory, and there might be other consequences.
Do it the standard way instead.
FlushUnlockedBuffer() accepted io_object and io_context arguments but
hardcoded IOOBJECT_RELATION and IOCONTEXT_NORMAL when calling
FlushBuffer(). Pass them through instead. Also fix FlushBuffer() to use
its io_object parameter for I/O timing stats rather than hardcoding
IOOBJECT_RELATION.
Not actively broken since all current callers pass IOOBJECT_RELATION and
IOCONTEXT_NORMAL, so not backpatched.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/BC97546F-5C15-42F2-AD57-CFACDB9657D0@gmail.com
The btree_gist enum test expects a bitmap heap scan. Since b46e1e54d0
enabled setting the VM during on-access pruning and 378a21618 set
pd_prune_xid on INSERT, scans of enumtmp may set pages all-visible.
If autovacuum or autoanalyze then updates pg_class.relallvisible, the
planner could choose an index-only scan instead.
Make enumtmp a temp table to exclude it from autovacuum/autoanalyze.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/46733d68-aec0-4d09-8120-4c66b87047a4%40gmail.com
Since b46e1e54d0 allowed setting the VM on-access and 378a21618 set
pd_prune_xid on INSERT, the testing of generic/custom plans in
src/test/regress/sql/plancache.sql was destabilized.
One of the queries on test_mode could have set its pages all-visible,
and if autovacuum/autoanalyze ran and updated pg_class.relallvisible,
that would affect whether we got an index-only or a sequential scan.
Preclude this by disabling autovacuum and autoanalyze for test_mode and
carefully sequencing when ANALYZE is run.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/71277259-264e-4983-a201-938b404049d7%40gmail.com
GetLocalPinLimit() and GetAdditionalLocalPinLimit(), currently in use
only by the read stream, previously allowed a backend to pin all
num_temp_buffers local buffers. This meant that the read stream could
use every available local buffer for read-ahead, leaving none for other
concurrent pin-holders like other read streams and related buffers like
the visibility map buffer needed during on-access pruning.
This became more noticeable since b46e1e54d0, which allows on-access
pruning to set the visibility map, which meant that some scans also
needed to pin a page of the VM. It caused a test in
src/test/regress/sql/temp.sql to fail in some cases.
Cap the local pin limit to num_temp_buffers / 4, providing some
headroom. This doesn't guarantee that all needed pins will be
available (for example, a backend can still open more cursors than
there are buffers), but it makes it less likely that read-ahead will
exhaust the pool.
Note that these functions are not limited by definition to use in the
read stream; however, this cap should be appropriate in other contexts.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/97529f5a-ec10-46b1-ab50-4653126c6889%40gmail.com
We installed this in commit eea9fa9b2 to protect against foreseeable
mistakes that would break ABI in stable branches by renumbering
NodeTag enum entries. However, we now have much more thorough
ABI stability checks thanks to buildfarm members using libabigail
(see the .abi-compliance-history mechanism). So this incomplete,
single-purpose check seems like an anachronism. I wouldn't object
to keeping it were it not that it requires an additional manual step
when making a new stable git branch. That seems like something easy
to screw up, so let's get rid of it.
This patch just removes the logic that checks for changes in the last
auto-assigned NodeTag value. We still need eea9fa9b2's cross-check
on the supplied list of header files, to prevent divergence between
the makefile and meson build systems. We'll also sometimes need the
nodetag_number() infrastructure for hand-assigning new NodeTags in
stable branches.
Discussion: https://postgr.es/m/1458883.1776143073@sss.pgh.pa.us
We were using "select count(*) into x from generate_series(1,
1_000_000_000_000)" to waste one second waiting for a statement
timeout trap. Aside from consuming CPU to little purpose, this could
easily eat several hundred MB of temporary file space, which has been
observed to cause out-of-disk-space errors in the buildfarm.
Let's just use "pg_sleep(10)", which is far less resource-intensive.
Also update the "when others" exception handler so that if it does
ever again trap an error, it will tell us what the error was. The
cause of these intermittent buildfarm failures had been obscure for a
while.
Discussion: https://postgr.es/m/557992.1776779694@sss.pgh.pa.us
Backpatch-through: 14
This batch is similar to 462fe0ff62 and addresses a variety of code
style issues, including grammar mistakes, typos, inconsistent variable
names in function declarations, and incorrect function names in comments
and documentation. These fixes have accumulated on the community
mailing lists since the commit mentioned above.
Notably, Alexander Lakhin previously submitted a patch identifying many
of the trivial typos and grammar issues that had been reported on
pgsql-hackers. His patch covered a somewhat large portion of the issues
addressed here, though not all of them.
The documentation changes only affect HEAD.
When a rule action or rule qualification references NEW.col where col
is a generated column (stored or virtual), the rewriter produces
incorrect results.
rewriteTargetListIU removes generated columns from the query's target
list, since stored generated columns are recomputed by the executor
and virtual ones store nothing. However, ReplaceVarsFromTargetList
then cannot find these columns when resolving NEW references during
rule rewriting. For UPDATE, the REPLACEVARS_CHANGE_VARNO fallback
redirects NEW.col to the original target relation, making it read the
pre-update value (same as OLD.col). For INSERT,
REPLACEVARS_SUBSTITUTE_NULL replaces it with NULL. Both are wrong
when the generated column depends on columns being modified.
Fix by building target list entries for generated columns from their
generation expressions, pre-resolving the NEW.attribute references
within those expressions against the query's targetlist, and passing
them together with the query's targetlist to ReplaceVarsFromTargetList.
Back-patch to all supported branches. Virtual generated columns were
added in v18, so the back-patches in pre-v18 branches only handle
stored generated columns.
Reported-by: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDexGTmCZzx=73gXkY2ZADS6LRhpnU+-8Y_QmrdTS6yUhA@mail.gmail.com
Backpatch-through: 14
When the startup process exits with a FATAL error during PM_STARTUP,
the postmaster called ExitPostmaster() directly, assuming that no
other processes are running at this stage. Since 7ff23c6d27, this
assumption no longer holds, as the checkpointer, the background
writer, the IO workers and bgworkers kicking in early may be around.
This commit removes the startup-specific shortcut in
process_pm_child_exit() for a startup process failing during
PM_STARTUP, falling through to the existing exit() flow that signals
all started children with SIGQUIT, so that we have no risk of creating
orphaned processes.
This required an extra change in HandleFatalError() for v18 and newer
versions, as it contained an assertion that could be triggered for
PM_STARTUP and is now incorrect. In v17 and older versions,
HandleChildCrash() needs to be changed to handle PM_STARTUP so that
children can be waited on.
While at it, fix a comment at the top of postmaster.c. It claimed
that the checkpointer and the background writer were started after
PM_RECOVERY, which is not the case.
Author: Ayush Tiwari <ayushtiwari.slg01@gmail.com>
Discussion: https://postgr.es/m/CAJTYsWVoD3V9yhhqSae1_wqcnTdpFY-hDT7dPm5005ZFsL_bpA@mail.gmail.com
Backpatch-through: 15
The documentation previously described the io_max_workers,
io_worker_idle_timeout, and io_worker_launch_interval GUCs as
type "int". However, the documentation consistently uses "integer"
for parameters of this type.
This commit updates these parameter descriptions to use "integer"
for consistency.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwEpMDpB-K8SSUVRRHg6L6z3pLAkekd9aviOS=ns0EC=+Q@mail.gmail.com
The documentation for jit_debugging_support and jit_profiling_support
previously stated that these parameters can only be set at server start.
However, both parameters use the PGC_SU_BACKEND context, meaning they
can be set at session start by superusers or users granted the appropriate
SET privilege, but cannot be changed within an active session.
This commit updates the documentation to reflect the actual behavior.
Backpatch to all supported versions.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwEpMDpB-K8SSUVRRHg6L6z3pLAkekd9aviOS=ns0EC=+Q@mail.gmail.com
Backpatch-through: 14
Replace the outdated DatumGetCString(DirectFunctionCall1(textout, ...))
pattern with TextDatumGetCString(). The macro is the modern, more
efficient way to convert a text Datum to a C string as it avoids
unnecessary function call machinery and handles detoasting internally.
Since plsample serves as reference code for extension authors, it
should follow current idiomatic practices.
Author: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b95-xMvUN1PEqxv8y6g-A-8k+fSgyv20kSZc9eF1wZAUPg@mail.gmail.com
Commit cfcd57111 et al fell over under Valgrind testing.
(It seems to be enough to #define USE_VALGRIND; you don't actually
need to run it under Valgrind to see failures.) The cause is that
remove_rel_from_eclass updates each EquivalenceMember's em_relids,
and those can be aliases of the left_relids or right_relids of some
RestrictInfo in ec_sources. If the update made em_relids empty then
bms_del_member will have pfree'd the relid set, so that the subsequent
attempt to clean up ec_sources accesses already-freed memory.
We missed seeing ill effects before cfcd57111 because (a) if the
pfree happens then we will remove the EquivalenceMember altogether,
making the source RestrictInfo no longer of use, and (b) the
cleanup of ec_sources didn't touch left/right_relids before that.
I'm unclear though on how cfcd57111 managed to pass non-USE_VALGRIND
testing. Apparently we managed to store another Bitmapset into the
freed space before trying to access it, but you'd not think that would
happen 100% of the time. I think what USE_VALGRIND changes is that it
makes list.c much more memory-hungry, so that the freed space gets
claimed by some List node before a Bitmapset can be put there.
This failure can be seen in v16, v17, and master, but oddly enough not
v18. That's because the SJE patch replaced the simple bms_del_members
calls used here with adjust_relid_set, which is careful not to
scribble on its input. But commit 20efbdffe just recently put back
the old coding and thus resurrected the problem.
Discussion: https://postgr.es/m/458729.1776724816@sss.pgh.pa.us
Backpatch-through: 16, 17, master
The second function signature entry for pg_get_tablespace_ddl() was
missing the role="func_signature" attribute. This commit adds the
missing attribute to ensure consistent formatting with other function
entries.
Author: Tatsuya Kawata <kawatatatsuya0913@gmail.com>
Discussion: https://postgr.es/m/CAHza6qcSgwdh+f41zEm6NSaGHvs5_cwjVu22+KTic=TfnonrFA@mail.gmail.com
The original implementation of remove_rel_from_restrictinfo()
thought it could skate by with removing no-longer-valid relid
bits from only the clause_relids and required_relids fields.
This is quite bogus, although somehow we had not run across a
counterexample before now. At minimum, the left_relids and
right_relids fields need to be fixed because they will be
examined later by clause_sides_match_join(). But it seems
pretty foolish not to fix all the relid fields, so do that.
This needs to be back-patched as far as v16, because the
bug report shows a planner failure that does not occur
before v16. I'm a little nervous about back-patching,
because this could cause unexpected plan changes due to
opening up join possibilities that were rejected before.
But it's hard to argue that this isn't a regression. Also,
the fact that this changes no existing regression test results
suggests that the scope of changes may be fairly narrow.
I'll refrain from back-patching further though, since no
adverse effects have been demonstrated in older branches.
Bug: #19460
Reported-by: François Jehl <francois.jehl@pigment.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/19460-5625143cef66012f@postgresql.org
Backpatch-through: 16
Before each call to the SRF, initialize isnull and isDone, as per the
comments for struct ReturnSetInfo. This fixes a Coverity warning
about rsi.isDone not being initialized. The built-in
{multi,}range_minus_multi functions don't return without setting it,
but a user-supplied function might not be as accommodating.
We also add statistics tracking around the function call, which
will be expected once user-defined withoutPortionProcs functions
are supported, and a cross-check on rsi.returnMode just for
paranoia's sake.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Co-authored-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://postgr.es/m/4126231.1776622202@sss.pgh.pa.us
Although REPACK (CONCURRENTLY) uses replication slots, there is no
concern that the slot will leak data of other users, because the
MAINTAIN privilege on the table is required anyway; requiring
REPLICATION is user-unfriendly without providing any actual protection.
A related aspect is that the REPLICATION attribute is not needed to
prevent REPACK from stealing slots from logical replication, since
commit e76d8c749c made REPACK use a separate pool of replication
slots.
Similarly, there's no reason to require that the table owner have the
LOGIN privilege. Bypass the default behavior in the background worker
launch sequence.
Because there are now successful concurrent repack runs in the
regression tests, we're forced to run test_plan_advice under
wal_level=replica, so add that. Also, move the cluster.sql test to a
different parallel group in parallel_schedule: apparently the use of the
repack worker causes it to exceed the maximum limit of processes in some
runs (the actual limit reached is the number of XIDs in a snapshot's xip
array).
Author: Antonin Houska <ah@cybertec.at>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Chao Li <lic@highgo.com>
Discussion: https://postgr.es/m/aeJHPNmL4vVy3oPw@pryzbyj2023
Create the PL/pgSQL function and procedure for the top-level WAIT FOR
checks in a single transaction, then wait once for standby replay before
running both tests. Also revise some surrounding comments.
This avoids an extra 'wait_for_catchup()' on the delayed standby without
changing the test coverage.
Discussion: https://postgr.es/m/CABPTF7WZ1yuYz8V%3Dxsbghg8e7qaAm5MpyNw6BthWcbN7%2BP6biw%40mail.gmail.com
Author: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>