Commit cda6a8d01d removed a few datatypes, but didn't update
pg_upgrade --check to throw an error if these types are used. So users
find that pg_upgrade --check tells them that everything is fine, only to
fail when the real upgrade is attempted.
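As a rough illustration, a check of this kind can be written in the
style of pg_upgrade's existing data-type checks (a sketch; the helper
shown is illustrative, and the type list assumes cda6a8d01d is the
abstime/reltime/tinterval removal):

    res = executeQueryOrDie(conn,
            "SELECT n.nspname, c.relname, a.attname "
            "FROM pg_catalog.pg_class c, "
            "     pg_catalog.pg_namespace n, "
            "     pg_catalog.pg_attribute a "
            "WHERE c.oid = a.attrelid AND NOT a.attisdropped "
            "  AND a.atttypid IN ('pg_catalog.abstime'::pg_catalog.regtype, "
            "                     'pg_catalog.reltime'::pg_catalog.regtype, "
            "                     'pg_catalog.tinterval'::pg_catalog.regtype) "
            "  AND c.relnamespace = n.oid");
    if (PQntuples(res) != 0)
        pg_fatal("Your installation contains one of the removed data types "
                 "in user tables; remove those columns before upgrading.\n");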
Reviewed-by: Tristan Partin <tristan@neon.tech>
Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
Discussion: https://postgr.es/m/202309201654.ng4ksea25mti@alvherre.pgsql
pgbouncer can cause PQbackendPID() to return negative values because it
fills be_pid with random bytes (even these days, pid_max can only be
raised to 2^22 on 64-bit Linux machines, for example, so this cannot
happen with normal PID numbers). When this happens, pg_basebackup may
generate a temporary slot name that may not be accepted by the parser,
leading to spurious failures, like:
pg_basebackup: error: could not send replication command
ERROR: replication slot name "pg_basebackup_-1201966863" contains
invalid character
This commit fixes that problem by formatting the result from
PQbackendPID() as an unsigned integer when creating the temporary
replication slot name, so that the invalid character is gone and the
command can be parsed.
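A minimal sketch of the fix (variable names illustrative):

    /* Cast to unsigned so a pgbouncer-mangled be_pid cannot inject '-' */
    snprintf(slotname, sizeof(slotname), "pg_basebackup_%u",
             (unsigned int) PQbackendPID(conn));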
Author: Jelte Fennema
Reviewed-by: Daniel Gustafsson, Nishant Sharma
Discussion: https://postgr.es/m/CAGECzQQOGvYfp8ziF4fWQ_o8s2K7ppaoWBQnTmdakn3s-4Z=5g@mail.gmail.com
Backpatch-through: 11
Until now, when DROP DATABASE was interrupted at the wrong moment, the
removal of the pg_database row would also roll back, even though some
irreversible steps had already been taken. E.g. DropDatabaseBuffers()
might have thrown out dirty buffers, or files could have been unlinked.
But we continued to allow connections to such a corrupted database.
To fix this, mark databases invalid with an in-place update, just before
starting to perform irreversible steps. As we can't add a new column in the
back branches, we use pg_database.datconnlimit = -2 for this purpose.
An invalid database cannot be connected to anymore, but can still be
dropped.
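A minimal sketch of the resulting connection-time check (wording and
surrounding code illustrative):

    /* datconnlimit == -2 marks a database left invalid by DROP DATABASE */
    if (dbform->datconnlimit == -2)
        ereport(FATAL,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("cannot connect to invalid database \"%s\"", dbname),
                 errhint("Use DROP DATABASE to drop invalid databases.")));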
Unfortunately we can't easily add output to psql's \l to indicate that
some database is invalid; it doesn't fit in any of the existing columns.
Add tests verifying that an interrupted DROP DATABASE is handled correctly in
the backend and in various tools.
Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
On Windows, it's sometimes difficult to create a file with a path longer
than 255 chars, and if it can be created it might not be seen by the
archiver. This can be triggered by the test for tar backups with
filenames greater than 100 bytes. So we skip that test if the path would
exceed 255 characters.
Backpatch to all live branches.
Reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/666ac55b-3400-fb2c-2cea-0281bf36a53c@dunslane.net
Creation of a file with a very long name can create problems on Windows
due to its file path limits. Work around that by creating the file via a
symlink with a shorter name.
Error displayed by buildfarm animal fairywren.
Backpatch to all live branches.
Hash partitioning on an enum is problematic because the hash codes are
derived from the OIDs assigned to the enum values, which will almost
certainly be different after a dump-and-reload than they were before.
This means that some rows probably end up in different partitions than
before, causing restore to fail because of partition constraint
violations. (pg_upgrade dodges this problem by using hacks to force
the enum values to keep the same OIDs, but that's neither possible nor
desirable for pg_dump.)
Users can work around that by specifying --load-via-partition-root,
but since that's a dump-time not restore-time decision, one might
find out the need for it far too late. Instead, teach pg_dump to
apply that option automatically when dealing with a partitioned
table that has hash-on-enum partitioning.
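A hedged sketch of the dump-time decision; the predicate helpers and
the flag are hypothetical names for what pg_dump derives from the
partition key:

    /* Force --load-via-partition-root semantics for hash-on-enum keys */
    if (tbinfo->relkind == RELKIND_PARTITIONED_TABLE &&
        partkey_is_hash(tbinfo) && partkey_uses_enum(tbinfo))
        tbinfo->load_via_partition_root = true;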
Also deal with a pre-existing issue for --load-via-partition-root
mode: in a parallel restore, we try to TRUNCATE target tables just
before loading them, in order to enable some backend optimizations.
This is bad when using --load-via-partition-root because (a) we're
likely to suffer deadlocks from restore jobs trying to restore rows
into other partitions than they came from, and (b) if we miss getting
a deadlock we might still lose data due to a TRUNCATE removing rows
from some already-completed restore job.
The fix for this is conceptually simple: just don't TRUNCATE if we're
dealing with a --load-via-partition-root case. The tricky bit is for
pg_restore to identify those cases. In dumps using COPY commands we
can inspect each COPY command to see if it targets the nominal target
table or some ancestor. However, in dumps using INSERT commands it's
pretty impractical to examine the INSERTs in advance. To provide a
solution for that going forward, modify pg_dump to mark TABLE DATA
items that are using --load-via-partition-root with a comment.
(This change also responds to a complaint from Robert Haas that
the dump output for --load-via-partition-root is pretty confusing.)
pg_restore checks for the special comment as well as checking the
COPY command if present. This will fail to identify the combination
of --load-via-partition-root and --inserts in pre-existing dump files,
but that should be a pretty rare case in the field. If it does
happen you will probably get a deadlock failure that you can work
around by not using parallel restore, which is the same as before
this bug fix.
Having done this, there seems no remaining reason for the alarmism
in the pg_dump man page about combining --load-via-partition-root
with parallel restore, so remove that warning.
Patch by me; thanks to Julien Rouhaud for review. Back-patch to
v11 where hash partitioning was introduced.
Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us
Try to disable ASLR when building in EXEC_BACKEND mode, to avoid random
memory mapping failures while testing. For developer use only, no
effect on regular builds.
This was originally applied as of f3e7806 for v15~, but
recently-added buildfarm member gokiburi tests this configuration on
older branches as well, causing it to fail randomly as ASLR would be
enabled.
Suggested-by: Andres Freund <andres@anarazel.de>
Tested-by: Bossart, Nathan <bossartn@amazon.com>
Discussion: https://postgr.es/m/20210806032944.m4tz7j2w47mant26%40alap3.anarazel.de
Backpatch-through: 12
No buildfarm members have reported that yet, but a recently-refreshed
Debian host did.
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/Y8ey5z4Nav62g4/K@paquier.xyz
Backpatch-through: 11
To test pg_upgrade across major PG versions, we have to be able to
modify or drop any old objects with no-longer-supported properties,
and we have to be able to deal with cosmetic changes in pg_dump output.
Up to now, the buildfarm and pg_upgrade's own test infrastructure had
separate implementations of the former, and we had nothing but very
ad-hoc rules for the latter (including an arbitrary threshold on how
many lines of unchecked diff were okay!). This patch creates a Perl
module that can be shared by both those use-cases, and adds logic
that deals with pg_dump output diffs in a much more tightly defined
fashion.
This largely supersedes previous efforts in commits 0df9641d3,
9814ff550, and 62be9e4cd, which developed a SQL-script-based solution
for the task of dropping old objects. There was nothing fundamentally
wrong with that work in itself, but it had no basis for solving the
output-formatting problem. The most plausible way to deal with
formatting is to build a Perl module that can perform editing on the
dump files; and once we commit to that, it makes more sense for the
same module to also embed the knowledge of what has to be done for
dropping old objects.
Back-patch versions of the helper module as far as 9.2, to
support buildfarm animals that still test that far back.
It's also necessary to back-patch PostgreSQL/Version.pm,
because the new code depends on that. I fixed up pg_upgrade's
002_pg_upgrade.pl in v15, but did not look into back-patching
it further than that.
Tom Lane and Andrew Dunstan
Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
The ALTER DATABASE|FUNCTION|PROCEDURE|ROLE|ROUTINE|USER ... SET <name>
case in psql tab completion failed to exclude <name> = "SCHEMA", which
caused ALTER FUNCTION|PROCEDURE|ROUTINE ... SET SCHEMA to complete
with "FROM CURRENT" and "TO", which won't work.
Fix that, so that those cases now complete with the list of schemas,
like other ALTER ... SET SCHEMA commands.
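A sketch in the style of psql's tab-complete.c, assuming the
SCHEMA-specific rule is checked before the generic SET <name> rule:

    else if (Matches("ALTER", "FUNCTION|PROCEDURE|ROUTINE", MatchAny,
                     "SET", "SCHEMA"))
        COMPLETE_WITH_QUERY(Query_for_list_of_schemas);
    else if (Matches("ALTER", "FUNCTION|PROCEDURE|ROUTINE", MatchAny,
                     "SET", MatchAny))
        COMPLETE_WITH("FROM CURRENT", "TO");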
Noticed while testing the recent patch to improve tab completion for
ALTER FUNCTION/PROCEDURE/ROUTINE, but this is not directly related to
that patch. Rather, this is a long-standing bug, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/CALDaNm0s7GQmkLP_mx5Cvk=UzYMnjhPmXBxU8DsHEunFbC5sTg@mail.gmail.com
While at it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes. One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
The query used to disable WITH OIDS in all the relations making use of
it was checking for materialized views, but this is not a supported
operation. Instead, this needs to be done on foreign tables.
While at it, use quote_ident() in the ALTER TABLE strings built from the
relation name.
Author: Anton A. Melnikov, Michael Paquier
Discussion: https://postgr.es/m/49f389ba-95ce-8a9b-09ae-f60650c0e7c7@inbox.ru
Backpatch-through: 12
Specifically, when pg_basebackup is invoked with -Tx=y, don't error
out if x could plausibly be an absolute path either on Windows or on
non-Windows systems. We don't know whether the remote system is
running the same OS as the local system, so it's not appropriate to
assume that our local rule about absolute pathnames is the same as
the rule on the remote system.
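A standalone sketch of the relaxed test (an assumption: the real code
lives in pg_basebackup's tablespace-mapping parsing and may differ in
detail):

    #include <ctype.h>
    #include <stdbool.h>

    /* Accept anything that could be absolute on Windows or POSIX */
    static bool
    looks_like_absolute_path(const char *path)
    {
        if (path[0] == '/' || path[0] == '\\')
            return true;            /* POSIX, or Windows without drive */
        if (isalpha((unsigned char) path[0]) &&
            path[1] == ':' &&
            (path[2] == '/' || path[2] == '\\'))
            return true;            /* Windows drive letter, e.g. C:\ */
        return false;
    }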
Patch by me, reviewed by Tom Lane, Andrew Dunstan, and
Davinder Singh.
Discussion: http://postgr.es/m/CA+TgmoY+jC3YiskomvYKDPK3FbrmsDU7_8+wMHt02HOdJeRb0g@mail.gmail.com
clang 15+ will issue a set-but-not-used warning when the only
use of a variable is in autoincrements (e.g., "foo++;").
That's perfectly sensible, but it detects a few more cases that
we'd not noticed before. Silence the warnings with our usual
methods, such as PG_USED_FOR_ASSERTS_ONLY, or in one case by
actually removing a useless variable.
One thing that we can't nicely get rid of is that with %pure-parser,
Bison emits "yynerrs" as a local variable that falls foul of this
warning. To silence those, I inserted "(void) yynerrs;" in the
top-level productions of affected grammars.
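A standalone illustration of the pattern clang 15 newly complains
about, and the explicit-discard idiom that silences it:

    void
    example(int n)
    {
        int counter = 0;

        for (int i = 0; i < n; i++)
            counter++;      /* the only "use" is the increment itself */

        /* clang 15+: warning: variable 'counter' set but not used */
        (void) counter;     /* explicit discard silences the warning */
    }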
Per recently-established project policy, this is a candidate
for back-patching into out-of-support branches: it suppresses
annoying compiler warnings but changes no behavior. Hence,
back-patch to 9.5, which is as far as these patches go without
issues. (A preliminary check shows that the prior branches
need some other set-but-not-used cleanups too, so I'll leave
them for another day.)
Discussion: https://postgr.es/m/514615.1663615243@sss.pgh.pa.us
The zlib documentation mentions the values supported for the compression
strategy, but this code has been using a hardcoded value of 0 rather
than Z_DEFAULT_STRATEGY. This commit adjusts the code to use
Z_DEFAULT_STRATEGY.
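A sketch of the change, with illustrative surrounding parameters:

    /* was: deflateInit2(&zs, level, Z_DEFLATED, windowBits, memLevel, 0) */
    if (deflateInit2(&zs, level, Z_DEFLATED, windowBits, memLevel,
                     Z_DEFAULT_STRATEGY) != Z_OK)
        pg_fatal("could not initialize compression library");

Z_DEFAULT_STRATEGY happens to be defined as 0 in zlib, so behavior is
unchanged; the named constant just makes the intent explicit.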
Backpatch down to where this code was added, to ease the backport of
any future patch touching this area.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/1400032.1662217889@sss.pgh.pa.us
Backpatch-through: 10
96ef3b8ff1 accidentally copied an inapplicable comment from the
float8_pass_by_value code to the data_checksums code. Remove that.
87d3b35a1c changed pg_upgrade to check the checksum version rather than
just the Boolean presence of checksums, but didn't change the
corresponding field type in its ControlData struct from bool. So this
would not work correctly if there is ever a checksum version larger
than 1.
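A sketch of the corrected field, abridged to the relevant member:

    /* pg_upgrade's ControlData: store the version, not a yes/no flag */
    typedef struct
    {
        uint32      data_checksum_version;  /* was: bool data_checksums */
    } ControlData;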
errno was not reported correctly after attempting to clone a file,
leading to incorrect error reports. While scanning through the code, I
did not notice any similar mistakes.
Error introduced in 3a769d8.
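A sketch of the corrected pattern (the Linux FICLONE path is shown; the
key point is saving errno before any cleanup call can clobber it):

    if (ioctl(dest_fd, FICLONE, src_fd) < 0)
    {
        int save_errno = errno;     /* capture immediately */

        unlink(dst);                /* may itself reset errno */
        pg_fatal("error while cloning file \"%s\" to \"%s\": %s\n",
                 src, dst, strerror(save_errno));
    }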
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220731134135.GY15006@telsasoft.com
Backpatch-through: 12
When you hit ^C, the terminal driver in Unix-like systems echoes "^C" as
well as sending an interrupt signal (depending on stty settings). At
least libedit (but maybe also libreadline) is then confused about the
current cursor location, and corrupts the display if you try to scroll
back. Fix, by moving to a new line before the next prompt is displayed.
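A sketch of the idea; where exactly the newline gets emitted is
simplified here:

    /* After a cancel, start a fresh line so libedit's notion of the
     * cursor column matches the terminal's after it echoed "^C". */
    if (cancel_pressed)
        fputc('\n', stdout);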
Back-patch to all supported releases.
Author: Pavel Stehule <pavel.stehule@gmail.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3278793.1626198638%40sss.pgh.pa.us
This reverts commits a04ccf6df et al. in the back branches only.
There was some disagreement already over whether to back-patch
157f8739a, on the grounds that it is the sort of behavioral
change that we don't like to back-patch. Furthermore, it now
looks like the logic needs some more work, which we don't have
time for before the upcoming 14.4 release. Revert for now, and
perhaps reconsider later.
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
psql --single-transaction is able to handle multiple -c and -f switches
in a single transaction since d5563d7d, but this had the surprising
behavior of forcing a transaction COMMIT even if psql failed with an
error in the client (for example incorrect path given to \copy), which
would generate an error, but still commit any changes that were already
applied in the backend. This commit makes the behavior more consistent
by enforcing a transaction ROLLBACK if any command fails, whether
client-side or backend-side, so that no changes are applied if an error
happens in any of them.
Some tests are added on HEAD to provide coverage for all this.
Testing backend-side errors is unreliable, as IPC::Run can complain
about SIGPIPE if psql quits before reading a query result, but the
tests work properly in the case where the errors come from psql itself,
which is what the original report is about.
Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
After closedir(), dirent->d_name is not valid anymore. As there already
are a few places relying on the limited lifetime of pg_waldump, do so
here as well, and just pg_strdup() the string.
The bug was introduced in fc49e24fa6.
Found by UBSan, run locally.
Backpatch: 11-, like fc49e24fa6 itself.
The previous method for doing that was to write zeroes into a
predetermined set of page locations. However, there's a roughly
1-in-64K chance that the existing checksum will match by chance,
and yesterday several buildfarm animals started to reproducibly
see that, resulting in test failures because no checksum mismatch
was reported.
Since the checksum includes the page LSN, test success depends on
the length of the installation's WAL history, which is affected by
(at least) the initial catalog contents, the set of locales installed
on the system, and the length of the pathname of the test directory.
Sooner or later we were going to hit a chance match, and today is
that day.
Harden these tests by specifically inverting the checksum field and
leaving all else alone, thereby guaranteeing that the checksum is
incorrect.
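The tests themselves are Perl, but the idea is easy to state in C terms
(a sketch; pg_checksum_page() ignores the stored pd_checksum while
computing, and no 16-bit value equals its own bitwise complement):

    PageHeader phdr = (PageHeader) page;

    phdr->pd_checksum = (uint16) ~phdr->pd_checksum;  /* guaranteed mismatch */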
In passing, fix places that were using seek() to set up for syswrite(),
a combination that the Perl docs very explicitly warn against. We've
probably escaped problems because no regular buffered I/O is done on
these filehandles; but if it ever breaks, we wouldn't deserve or get
much sympathy.
Although we've only seen problems in HEAD, now that we recognize the
environmental dependencies it seems like it might be just a matter
of time until someone manages to hit this in back-branch testing.
Hence, back-patch to v11 where we started doing this kind of test.
Discussion: https://postgr.es/m/3192026.1648185780@sss.pgh.pa.us
It seems possible for the condition being tested to be true in
production, and nobody would ever know (except when some data
eventually becomes corrupt?).
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m//202109040001.zky3wgv2qeqg@alvherre.pgsql
When cross-building to windows, or building with mingw on windows, the build
could fail with
x86_64-w64-mingw32-gcc: error: win32ver.o: No such file or directory
because pg_dumpall didn't depend on WIN32RES, but its recipe references
it. The build nevertheless succeeded most of the time, due to
pg_dump/pg_restore having the required dependency, causing win32ver.o to be
built.
Reported-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKGJeekpUPWW6yCVdf9=oBAcCp86RrBivo4Y4cwazAzGPng@mail.gmail.com
Backpatch: 10-, omission present on all live branches
Slow hosts may avoid load-induced, spurious failures by setting
environment variable PG_TEST_TIMEOUT_DEFAULT to some number of seconds
greater than 180. Developers may see faster failures by setting that
environment variable to some lesser number of seconds. In tests, write
$PostgreSQL::Test::Utils::timeout_default wherever the convention has
been to write 180. This change raises the default for some briefer
timeouts. Back-patch to v10 (all supported versions).
Discussion: https://postgr.es/m/20220218052842.GA3627003@rfd.leadboat.com
When opening a WAL file smaller than XLOG_BLCKSZ (e.g. 0 bytes long)
while determining the wal_segment_size, pg_waldump checked errno,
despite errno not being set by the short read, resulting in a bogus
error message.
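A sketch of the corrected reporting, with an illustrative error helper
(%m as supported by v12-style frontend logging):

    errno = 0;
    r = read(fd, buf, XLOG_BLCKSZ);
    if (r < 0)
        pg_fatal("could not read file \"%s\": %m", fname);
    else if (r != XLOG_BLCKSZ)  /* short read: errno says nothing useful */
        pg_fatal("could not read file \"%s\": read %d of %d bytes",
                 fname, (int) r, XLOG_BLCKSZ);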
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220214.181847.775024684568733277.horikyota.ntt@gmail.com
Backpatch: 11-, the bug was introduced in fc49e24fa
Following migration of Windows buildfarm members running TAP tests to
use of ucrt64 perl for those tests, special processing for msys perl is
no longer necessary and so is removed.
Backpatch to release 10
Discussion: https://postgr.es/m/c65a8781-77ac-ea95-d185-6db291e1baeb@dunslane.net
"pg_ctl stop/restart" checked that the postmaster PID is valid just
once, as a side-effect of sending the stop signal, and then would
wait-till-timeout for the postmaster.pid file to go away. This
neglects the case wherein the postmaster dies uncleanly after we
signal it. Similarly, once "pg_ctl promote" has sent the signal,
it'd wait for the corresponding on-disk state change to occur
even if the postmaster dies.
I'm not sure how we've managed not to notice this problem, but it
seems to explain slow execution of the 017_shm.pl test script on AIX
since commit 4fdbf9af5, which added a speculative "pg_ctl stop" with
the idea of making real sure that the postmaster isn't there. In the
test steps that kill-9 and then restart the postmaster, it's possible
to get past the initial signal attempt before kill() stops working
for the doomed postmaster. If that happens, pg_ctl waited till
PGCTLTIMEOUT before giving up ... and the buildfarm's AIX members
have that set very high.
To fix, include a "kill(pid, 0)" test (similar to what
postmaster_is_alive uses) in these wait loops, so that we'll
give up immediately if the postmaster PID disappears.
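A simplified sketch of the hardened wait loop, using pg_ctl-style
constants:

    for (cnt = 0; cnt < wait_seconds * WAITS_PER_SEC; cnt++)
    {
        if (stat(pid_file, &statbuf) != 0)
            return true;                /* postmaster.pid gone: success */
        if (kill(pid, 0) != 0 && errno == ESRCH)
            return false;               /* postmaster died uncleanly */
        pg_usleep(USEC_PER_SEC / WAITS_PER_SEC);
    }
    return false;                       /* timed out */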
While here, I chose to refactor those loops out of where they were.
do_stop() and do_restart() can perfectly well share one copy of the
wait-for-stop loop, and it seems desirable to put a similar function
beside that for wait-for-promote.
Back-patch to all supported versions, since pg_ctl's wait logic
is substantially identical in all, and we're seeing the slow test
behavior in all branches.
Discussion: https://postgr.es/m/20220210023537.GA3222837@rfd.leadboat.com
It seems highly unlikely that gettext() can be relied on to be
async-signal-safe. psql used to understand that, but someone got
it wrong long ago in the src/bin/scripts/ version of handle_sigint,
and then the bad idea was perpetuated when those two versions were
unified into src/fe_utils/cancel.c.
I'm unsure why there have not been field complaints about this
... maybe gettext() is signal-safe once it's translated at least
one message? But we have no business assuming any such thing.
In cancel.c (v13 and up), I preserved our ability to localize
"Cancel request sent" messages by invoking gettext() before
the signal handler is set up. In earlier branches I just made
src/bin/scripts/ not localize those messages, as psql did then.
(Just for extra unsafety, the src/bin/scripts/ version was
invoking fprintf() from a signal handler. Sigh.)
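A simplified sketch of the v13+ arrangement (the real cancel.c also has
a Windows variant):

    static const char *cancel_sent_msg = "Cancel request sent\n";

    static void
    handle_sigint(SIGNAL_ARGS)
    {
        /* only async-signal-safe functions here; notably, no gettext() */
        (void) write(fileno(stderr), cancel_sent_msg,
                     strlen(cancel_sent_msg));
    }

    void
    setup_cancel_handler(void)
    {
        cancel_sent_msg = _("Cancel request sent\n");   /* gettext() now */
        pqsignal(SIGINT, handle_sigint);
    }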
Noted while fixing signal-safety issues in PQcancel() itself.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
The existing pg_upgrade/test.sh and the buildfarm code have been
maintaining the same set of SQL queries for cross-version upgrade
tests, used to adapt the objects created by the regression tests before
the upgrade (mostly, incompatible or non-existing objects need to be
dropped from the origin, and perhaps re-created).
This moves all those SQL queries into a new, separate file with a set
of \if clauses to handle the version checks depending on the old version
of the cluster to-be-upgraded.
The long-term plan is to make the buildfarm code re-use this new SQL
file, so that committers are able to fix any compatibility issues in the
tests of pg_upgrade with a refresh of the core code, without having to
poke at the buildfarm client. Note that this is only able to handle the
main regression test suite, and that nothing is done yet for contrib
modules (these have more issues, like their database names).
A backpatch down to 10 is done, adapting the version checks as this
script needs to be only backward-compatible, so that it becomes possible
to clean up a maximum amount of code within the buildfarm client.
Author: Justin Pryzby, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 10
When a stored expression depends on a user-defined cast, the backend
records the dependency as being on the cast's implementation function
--- or indeed, if there's no cast function involved but just
RelabelType or CoerceViaIO, no dependency is recorded at all. This
is problematic for pg_dump, which is at risk of dumping things in the
wrong order leading to restore failures. Given the lack of previous
reports, the risk isn't that high, but it can be demonstrated if the
cast is used in some view whose rowtype is then used as an input or
result type for some other function. (That results in the view
getting hoisted into the functions portion of the dump, ahead of
the cast.)
A logically bulletproof fix for this would require including the
cast's OID in the parsed form of the expression, whence it could be
extracted by dependency.c, and then the stored dependency would force
pg_dump to do the right thing. Such a change would be fairly invasive,
and certainly not back-patchable. Moreover, since we'd prefer that
an expression using cast syntax be equal() to one doing the same
thing by explicit function call, the cast OID field would have to
have special ignored-by-comparisons semantics, making things messy.
So, let's instead fix this by a very simple hack in pg_dump: change
the object-type priority order so that casts are initially sorted
before functions, immediately after types. This fixes the problem
in a fairly direct way for casts that have no implementation function.
For those that do, the implementation function will be hoisted to just
before the cast by the dependency sorting step, so that we still have
a valid dump order. (I'm not sure that this provides a full guarantee
of no problems; but since it's been like this for many years without
any previous reports, this is probably enough to fix it in practice.)
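An abridged sketch of the priority reordering; the identifier names
are illustrative:

    /* Objects dump in this order when no dependency forces otherwise */
    enum dump_priority
    {
        PRIO_NAMESPACE = 1,
        PRIO_TYPE,          /* types and shell types */
        PRIO_CAST,          /* moved up: now right after types ... */
        PRIO_FUNC,          /* ... and before functions */
        PRIO_TABLE
    };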
Per report from Дмитрий Иванов.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAPL5KHoGa3uvyKp6z6m48LwCnTsK+LRQ_mcA4uKGfqAVSEjV_A@mail.gmail.com
If a table contains a generated column that's preceded by a dropped
column, dumpTableData_insert failed to account for the dropped
column, and would emit DEFAULT placeholder(s) in the wrong column(s).
This resulted in failures at restore time. The default COPY code path
did not have this bug, likely explaining why it wasn't noticed sooner.
While we're fixing this, we can be a little smarter about the
situation: (1) avoid unnecessarily fetching the values of generated
columns, (2) omit generated columns from the output, too, if we're
using --column-inserts. While these modes aren't expected to be
as high-performance as the COPY path, we might as well be as
efficient as we can; it doesn't add much complexity.
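A sketch of the corrected column walk (the emit helper is hypothetical;
attisdropped/attgenerated are pg_dump's per-column arrays):

    for (int attno = 0; attno < tbinfo->numatts; attno++)
    {
        if (tbinfo->attisdropped[attno])
            continue;   /* previously miscounted, shifting DEFAULTs */
        if (tbinfo->attgenerated[attno])
            continue;   /* don't fetch or emit generated columns */
        emit_column(tbinfo, attno);     /* hypothetical helper */
    }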
Per report from Дмитрий Иванов.
Back-patch to v12 where generated columns came in.
Discussion: https://postgr.es/m/CAPL5KHrkBniyQt5e1rafm5DdXvbgiiqfEQEJ9GjtVzN71Jj5pA@mail.gmail.com
Previously it was impossible to terminate these programs via control-C
while they were prompting for a password. We can fix that trivially
for their initial password prompts, by moving setup of the SIGINT
handler from just before to just after their initial GetConnection()
calls.
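A sketch of the reordering (names simplified):

    conn = GetConnection();     /* may block at a password prompt; ^C
                                 * still terminates us while it does */
    if (conn == NULL)
        exit(1);

    /* only now catch SIGINT, so cancel handling kicks in post-prompt */
    pqsignal(SIGINT, sigint_handler);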
This fix doesn't permit escaping out of later re-prompts, but those
should be exceedingly rare, since the user's password or the server's
authentication setup would have to have changed meanwhile. We
considered applying a fix similar to commit 46d665bc2, but that
seemed more complicated than it'd be worth. Moreover, this way is
back-patchable, which that wasn't.
The misbehavior exists in all supported versions, so back-patch to all.
Tom Lane and Nathan Bossart
Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
The error handling here was a mess, as a result of a fundamentally
bad design (relying on errno to keep its value much longer than is
safe to assume) as well as a lot of just plain sloppiness, both as
to noticing errors at all and as to reporting the correct errno.
Moreover, the recent addition of LZ4 compression broke things
completely, because liblz4 doesn't use errno to report errors.
To improve matters, keep the error state in the DirectoryMethodData or
TarMethodData struct, and add a string field so we can handle cases
that don't set errno. (The tar methods already had a version of this,
but it can be done more efficiently since all these cases use a
constant error string.) Make the dir and tar methods handle errors
in basically identical ways, which they didn't before.
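An abridged sketch of the added error state; the real structs carry
more fields:

    typedef struct DirectoryMethodData
    {
        char       *basedir;
        int         lasterrno;          /* saved errno of last failure */
        const char *lasterrstring;      /* for errors that don't set
                                         * errno, e.g. from liblz4 */
    } DirectoryMethodData;

    static const char *
    dir_getlasterror(DirectoryMethodData *dir_data)
    {
        if (dir_data->lasterrstring)
            return dir_data->lasterrstring;
        return strerror(dir_data->lasterrno);
    }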
This requires copying errno into the state struct in a lot of places,
which is a bit tedious, but it has the virtue that we can get rid of
ad-hoc code to save and restore errno in a number of places ... not
to mention that it fixes other places that should've saved/restored
errno but neglected to.
In passing, fix some pointlessly static buffers to be ordinary
local variables.
There remains an issue about exactly how to handle errors from
fsync(), but that seems like material for its own patch.
While the LZ4 problems are new, all the rest of this is fixes for
old bugs, so backpatch to v10 where walmethods.c was introduced.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
Coverity complained that applying get_gz_error after a failed gzclose,
as we did in one place in pg_basebackup, is unsafe. I think it's
right: it's entirely likely that the call is touching freed memory.
Change that to inspect errno, as we do for other gzclose calls.
Also, be careful to initialize errno to zero immediately before any
gzclose() call where we care about the error status. (There are
some calls where we don't, because we already failed at some previous
step.) This ensures that we don't get a misleadingly irrelevant
error code if gzclose() fails in a way that doesn't set errno.
We could work harder at that, but it looks to me like all such cases
are basically can't-happen if we're not misusing zlib, so it's
not worth the extra notational cruft that would be required.
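A sketch of the pattern, with illustrative logging:

    errno = 0;              /* in case gzclose() fails without setting it */
    if (gzclose(ztarfile) != 0)
    {
        /* the gzFile is freed by now, so get_gz_error() is off-limits */
        pg_log_error("could not close compressed file \"%s\": %m",
                     filename);
        exit(1);
    }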
Also, fix several places that simply failed to check for close-time
errors at all, mostly at some remove from the close or gzclose itself;
and one place that did check but didn't bother to report the errno.
Back-patch to v12. These mistakes are older than that, but between
the frontend logging API changes that happened in v12 and the fact
that frontend code can't rely on %m before that, the patch would need
substantial revision to work in older branches. It doesn't quite
seem worth the trouble given the lack of related field complaints.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
The documentation says plainly that \password acts on "the current user"
by default. What it actually acted on, or tried to, was the username
used to log into the current session. This is not the same thing if
one has since done SET ROLE or SET SESSION AUTHORIZATION. Aside from
the possible surprise factor, it's quite likely that the current role
doesn't have permissions to set the password of the original role.
To fix, use "SELECT CURRENT_USER" to get the role name to act on.
(This syntax works with servers at least back to 7.0.) Also, in
hopes of reducing confusion, include the role name that will be
acted on in the password prompt.
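A sketch of the lookup, with error handling elided (psql's real code
goes through its own query helpers):

    PGresult   *res = PQexec(pset.db, "SELECT CURRENT_USER");
    char       *user = pg_strdup(PQgetvalue(res, 0, 0));

    PQclear(res);
    /* e.g. prompt: Enter new password for user "alice": */

Including the role name in the prompt doubles as a confirmation of
which role is about to be changed.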
The discrepancy from the documentation makes this a bug, so
back-patch to all supported branches.
Patch by me; thanks to Nathan Bossart for review.
Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us