currently have any better strategy for this query than re-running the
sub-select over and over; it seems unlikely that doing so 10000 times
is a more useful test than doing it a few dozen times.
entries for the victim database go away sooner rather than later. We already
did the equivalent thing at the per-relation level, not sure why it's not
been done for whole databases. With this change, pgstat_vacuum_tabstat
should usually not find anything to do; though we still need it as a backstop
in case DROPDB or TABPURGE messages get lost under load.
keeping private state in each backend that has inserted and deleted the same
tuple during its current top-level transaction. This is sufficient since
there is no need to be able to determine the cmin/cmax from any other
transaction. This gets us back down to 23-byte headers, removing a penalty
paid in 8.0 to support subtransactions. Patch by Heikki Linnakangas, with
minor revisions by moi, following a design hashed out a while back on the
pghackers list.
get away with not (re)initializing a local variable if the variable is marked
"isconst" and not "isnull". Unfortunately it makes this decision after having
already freed the old value, meaning that something like
for i in 1..10 loop
declare c constant text := 'hi there';
leads to subsequent accesses to freed memory, and hence probably crashes.
(In particular, this is why Asif Ali Rehman's bug leads to crash and not
just an unexpectedly-NULL value for SQLERRM: SQLERRM is marked CONSTANT
and so triggers this error.)
The whole thing seems wrong on its face anyway: CONSTANT means that you can't
change the variable inside the block, not that the initializer expression is
guaranteed not to change value across successive block entries. Hence,
remove the "optimization" instead of trying to fix it.
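For the record, a minimal reproducer along these lines (function name
hypothetical) shows the failure mode:

    create or replace function const_loop() returns void as $$
    begin
      for i in 1..10 loop
        declare
          c constant text := 'hi there';
        begin
          raise notice '%', c;  -- from the second iteration on, formerly read freed memory
        end;
      end loop;
    end;
    $$ language plpgsql;

    select const_loop();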
DECLARE section needs to know about it. Formerly, everyplace besides DECLARE
that created variables needed to do "plpgsql_add_initdatums(NULL)" to prevent
those variables from being sucked up as part of a subsequent DECLARE block.
This is obviously error-prone, and in fact the SQLSTATE/SQLERRM patch had
failed to do it for those two variables, leading to the bug recently exhibited
by Asif Ali Rehman: a DECLARE within an exception handler tried to reinitialize
SQLERRM.
Although the SQLSTATE/SQLERRM patch isn't in any pre-8.1 branches, and so
I can't point to a demonstrable failure there, it seems wise to back-patch
this into the older branches anyway, just to keep the logic similar to HEAD.
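A sketch of the failing case (function name hypothetical); the DECLARE inside
the exception handler is what formerly tried to reinitialize SQLERRM:

    create or replace function declare_in_handler() returns text as $$
    begin
      raise exception 'boom';
    exception when others then
      declare
        msg text := SQLERRM;  -- any DECLARE here used to trip over SQLERRM/SQLSTATE
      begin
        return msg;
      end;
    end;
    $$ language plpgsql;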
For win32 in general, this makes it possible to run the regression tests
as an admin user by using the same restricted token method that's used
by pg_ctl and initdb.
For vc++, it adds building of pg_regress.exe, adds a resultmap, and
fixes how it runs the install.
Magnus Hagander
where possible, and fix some sites that apparently assumed fgets() would
write one byte past the specified buffer size.
Also add some strlcpy() to eliminate some weird memory handling.
> Currently, an index split writes all the data on the split page to
> WAL. That's a lot of WAL traffic. The tuples that are copied to the
> right page need to be WAL logged, but the tuples that stay on the
> original page don't.
Heikki Linnakangas
already collected in the current transaction; this allows plpgsql functions to
watch for stats updates even though they are confined to a single transaction.
Use this instead of the previous kluge involving pg_stat_file() to wait for
the stats collector to update in the stats regression test. Internally,
decouple storage of stats snapshots from transaction boundaries; they'll
now stick around until someone calls pgstat_clear_snapshot --- which xact.c
still does at transaction end, to maintain the previous behavior. This makes
the logic a lot cleaner, at the price of a couple dozen cycles per transaction
exit.
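Assuming the SQL-level entry point is pg_stat_clear_snapshot(), a transaction
can now poll for stats updates roughly like this (table name hypothetical):

    begin;
    select n_tup_ins from pg_stat_user_tables where relname = 'mytab';
    -- ... later, in the same transaction:
    select pg_stat_clear_snapshot();  -- discard the cached snapshot, fetch fresh
    select n_tup_ins from pg_stat_user_tables where relname = 'mytab';
    commit;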
changes (with an upper limit of 30 seconds), and record the delay time in
the postmaster log. This should give us some info about what's happening
with the intermittent stats failures in buildfarm. After an idea of
Andrew Dunstan's.
"database system is ready to accept connections", which is issued by the
postmaster when it really is ready to accept connections. Per proposal from
Markus Schiltknecht and subsequent discussion.
thought that it didn't have to reposition the underlying tuplestore if the
portal is atEnd. But this is not so, because tuplestores have separate read
and write cursors ... and the read cursor hasn't moved from the start.
This mistake explains bug #2970 from William Zhang.
Note: the coding here is pretty inefficient, but given that no one has noticed
this bug until now, I'd say hardly anyone uses the case where the cursor has
been advanced before being persisted. So maybe it's not worth worrying about.
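For reference, a sketch of the affected usage pattern (per bug #2970):

    begin;
    declare c cursor with hold for select * from generate_series(1, 10);
    fetch 3 from c;    -- advance the cursor before it is persisted
    commit;            -- portal contents get copied into a tuplestore here
    fetch all from c;  -- formerly restarted from row 1 instead of row 4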
out that ExecEvalVar and friends don't necessarily have access to a tuple
descriptor with correct typmod: it definitely can contain -1, and possibly
might contain other values that are different from the Var's value.
Arguably this should be cleaned up someday, but it's not a simple change,
and in any case typmod discrepancies don't pose a security hazard.
Per reports from numerous people :-(
I'm not entirely sure whether the failure can occur in 8.0 --- the simple
test cases reported so far don't trigger it there. But back-patch the
change all the way anyway.
had stopped working for tables buried inside views or sub-selects. This is
because I had gotten rid of the simplify_jointree() preprocessing step, and
optimize_minmax_aggregates() wasn't smart enough to deal with a non-canonical
FromExpr. Per gripe from Bill Howe.
that aren't turned into true joins). Since this is the last missing bit of
infrastructure, go ahead and fill out the hash integer_ops and float_ops
opfamilies with cross-type operators. The operator family project is now
DONE ... er, except for documentation ...
describe the maximum size of index tuples (which is typically AM-dependent
anyway); and consequently remove the bogus deduction for "special space"
that was built into it.
Adjust TOAST_TUPLE_THRESHOLD and TOAST_MAX_CHUNK_SIZE to avoid wasting two
bytes per toast chunk, and to ensure that the calculation correctly tracks any
future changes in page header size. The computation had been inaccurate in a
way that didn't cause any harm except space wastage, but future changes could
have broken it more drastically.
Fix the calculation of BTMaxItemSize, which was formerly computed as 1 byte
more than it could safely be. This didn't cause any harm in practice because
it's only compared against maxalign'd lengths, but future changes in the size
of page headers or btree special space could have exposed the problem.
initdb forced because of change in TOAST_MAX_CHUNK_SIZE, which alters the
storage of toast tables.
threshold for tuple length. On 4-byte-MAXALIGN machines, the toast code
creates tuples that have t_len exactly TOAST_TUPLE_THRESHOLD ... but this
number is not itself maxaligned, so if heap_insert maxaligns t_len before
comparing to TOAST_TUPLE_THRESHOLD, it'll uselessly recurse back to
tuptoaster.c, wasting cycles. (It turns out that this does not happen on
8-byte-MAXALIGN machines, because for them the outer MAXALIGN in the
TOAST_MAX_CHUNK_SIZE macro reduces TOAST_MAX_CHUNK_SIZE so that toast tuples
will be less than TOAST_TUPLE_THRESHOLD in size. That MAXALIGN is really
incorrect, but we can't remove it now, see below.) There isn't any particular
value in maxaligning before comparing to the thresholds, so just don't do
that, which saves a small number of cycles in itself.
These numbers should be rejiggered to minimize wasted space on toast-relation
pages, but we can't do that in the back branches because changing
TOAST_MAX_CHUNK_SIZE would force an initdb (by changing the contents of toast
tables). We can move the toast decision thresholds a bit, though, which is
what this patch effectively does.
Thanks to Pavan Deolasee for discovering the unintended recursion.
Back-patch into 8.2, but not further, pending more testing. (HEAD is about
to get a further patch modifying the thresholds, so it won't help much
for testing this form of the patch.)
observe the xmloption.
Reorganize the representation of the XML option in the parse tree and the
API to make it easier to manage and understand.
Add regression tests for parsing back XML expressions.
generated solution files for what to install, instead of blindly copying
everything as it previously did. With the previous quick-n-dirty
version, it would copy old DLLs if you reconfigured in a way that no longer
included subprojects, such as a PL.
Magnus Hagander.
made query plan. Use of ALTER COLUMN TYPE creates a hazard for cached
query plans: they could contain Vars that claim a column has a different
type than it now has. Fix this by checking during plan startup that Vars
at relation scan level match the current relation tuple descriptor. Since
at that point we already have at least AccessShareLock, we can be sure the
column type will not change underneath us later in the query. However,
since a backend's locks do not conflict against itself, there is still a
hole for an attacker to exploit: he could try to execute ALTER COLUMN TYPE
while a query is in progress in the current backend. Seal that hole by
rejecting ALTER TABLE whenever the target relation is already open in
the current backend.
This is a significant security hole: not only can one trivially crash the
backend, but with appropriate misuse of pass-by-reference datatypes it is
possible to read out arbitrary locations in the server process's memory,
which could allow retrieving database content the user should not be able
to see. Our thanks to Jeff Trout for the initial report.
Security: CVE-2007-0556
we should check that the function code returns the claimed result datatype
every time we parse the function for execution. Formerly, for simple
scalar result types we assumed the creation-time check was sufficient, but
this fails if the function selects from a table that's been redefined since
then, and even more obviously fails if check_function_bodies had been OFF.
This is a significant security hole: not only can one trivially crash the
backend, but with appropriate misuse of pass-by-reference datatypes it is
possible to read out arbitrary locations in the server process's memory,
which could allow retrieving database content the user should not be able
to see. Our thanks to Jeff Trout for the initial report.
Security: CVE-2007-0555
Standard English uses "may", "can", and "might" in different ways:
may - permission, "You may borrow my rake."
can - ability, "I can lift that log."
might - possibility, "It might rain today."
Unfortunately, in conversational English, their use is often mixed, as
in, "You may use this variable to do X", when in fact, "can" is a better
choice. Similarly, "It may crash" is better stated, "It might crash".
Standard English uses "may", "can", and "might" in different ways:
may - permission, "You may borrow my rake."
can - ability, "I can lift that log."
might - possibility, "It might rain today."
Unfortunately, in conversational English, their use is often mixed, as
in, "You may use this variable to do X", when in fact, "can" is a better
choice. Similarly, "It may crash" is better stated, "It might crash".
Also update two error messages mentioned in the documentation to match.
nonportable "hh" sprintf(3) length modifier. Instead, do the parsing
and output by hand. The code to do this isn't ideal, but this is
an interim measure anyway: the uuid type should probably use the
in-memory struct layout specified by RFC 4122. For now, this patch
should hopefully rectify the buildfarm failures for the uuid test.
Along the way, re-add pg_cast entries for uuid <-> varchar, which
I mistakenly removed earlier, and bump the catversion.
In this case extractQuery should return -1 as nentries. This changes the
prototype of the extractQuery method to use int32* instead of uint32* for the
nentries argument.
Based on that, gincostestimate can recognize two corner cases: nothing will
be found, or a seqscan should be used.
Per proposal at http://archives.postgresql.org/pgsql-hackers/2007-01/msg01581.php
PS: the tsearch_core patch will need to be slightly modified to support these
changes, but I'm waiting for a verdict on the tsearch_core review.
The original coding failed (tried to access deallocated memory) if there were
two active call sites (fn_extra pointers) for the same function and the
function definition was updated. Also, if an update of a recursive function
was detected upon nested entry to the function, the existing compiled version
was summarily deallocated, resulting in crash upon return to the outer
instance. Problem observed while studying a bug report from Sergiy
Vyshnevetskiy.
Bug does not exist before 8.1 since older versions just leaked the memory of
obsoleted compiled functions, rather than trying to reclaim it.
by plpgsql can themselves use SPI --- possibly indirectly, as in the case
of domain_in() invoking plpgsql functions in a domain check constraint.
Per bug #2945 from Sergiy Vyshnevetskiy.
Somewhat arbitrarily, I've chosen to back-patch this as far as 8.0. Given
the lack of prior complaints, it doesn't seem critical for 7.x.
Hashing for aggregation purposes still needs work, so it's not time to
mark any cross-type operators as hashable for general use, but these cases
work if the operators are so marked by hand in the system catalogs.
match because they contain a null join key (and the join operator is
known strict). Improves performance significantly when the inner
relation contains a lot of nulls, as per bug #2930.
handy to prevent core dump files from disappearing, but it's useless now
because (a) we don't drop core in individual DB subdirectories anymore,
and (b) CREATE DATABASE forces an internal checkpoint anyway.
reports; inspired by the misleading CONTEXT lines shown in recent bug report
from Stefan Kaltenbrunner. Also, allow statement-type names shown in these
messages to be translated.
that defined in RFC 4122. This patch includes the basic implementation,
plus regression tests. Documentation and perhaps some additional
functionality will come later. Catversion bumped.
Patch from Gevik Babakhani; review from Peter, Tom, and myself.
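Basic usage looks like this (the value is arbitrary):

    select 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;
    create table items (id uuid primary key);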
safely in the presence of subtransactions. To ensure that any ExprContext
shutdown callbacks are called at the right times, we have to have a separate
EState for each level of subtransaction. Per "TupleDesc reference leak" bug
report from Stefan Kaltenbrunner.
Although I'm convinced the code is wrong as far back as 8.0, it doesn't seem
that there are any ways for the problem to really manifest before 8.2: AFAICS,
8.0 and 8.1 only use the ExprContextCallback mechanism to handle set-returning
functions, which cannot usefully be executed in a "simple expression" anyway.
Hence, no backpatch before 8.2 --- the risk of unforeseen breakage seems
to outweigh the chance of fixing something.
activity quiesce. Possibly this will fix the large increase in
non-reproducible stats test failures we've noted since turning on
stats_row_level by default.
versus varchar[]. This oversight probably explains Ryan Holmes' recent
complaint --- he was getting a generic selectivity estimate instead of
anything intelligent.
exactly at the point where we need to insert a new item, the calculation used
the wrong size for the "high key" of the new left page. This could lead to
choosing an unworkable split, resulting in "PANIC: failed to add item to the
left sibling" (or "right sibling") failure. Although this bug has been there
a long time, it's very difficult to trigger a failure before 8.2, since there
was generally a lot of free space on both sides of a chosen split. In 8.2,
where the user-selected fill factor determines how much free space the code
tries to leave, an unworkable split is much more likely. Report by Joe
Conway, diagnosis and fix by Heikki Linnakangas.
input in the stats collector. Our select() emulation is apparently buggy
for UDP sockets :-(. This should resolve problems with stats collection
(and hence autovacuum) failing under more than minimal load. Diagnosis
and patch by Magnus Hagander.
Patch probably needs to be back-ported to 8.1 and 8.0, but first let's
see if it makes the buildfarm happy...
suffix, to distinguish them from doubles. Make some function declarations
and definitions use the "const" qualifier for arguments consistently.
Ignore warning 4102 ("unreferenced label"), because such warnings
are always emitted by bison-generated code. Patch from Magnus Hagander.
- Add new SQL command SET XML OPTION (also available via regular GUC) to
  control the DOCUMENT vs. CONTENT option in implicit parsing and
  serialization operations (see the example after this list).
- Subtle corrections in the handling of the standalone property in
xmlroot().
- Allow xmlroot() to work on content fragments.
- Subtle corrections in the handling of the version property in
xmlconcat().
- Code refactoring for producing XML declarations.
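To illustrate the first item above, a sketch (assuming the GUC spelling is
"xmloption"):

    SET XML OPTION DOCUMENT;
    SELECT '<doc><e>1</e></doc>'::xml;  -- must parse as a complete document
    SET xmloption = content;            -- equivalent GUC form
    SELECT 'loose text'::xml;           -- acceptable as content, not as document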
page about the maximum UTF8 sequence length we support (4 bytes since 8.1,
3 before that). pg_utf2wchar_with_len never got updated to support 4-byte
characters at all, and in any case had a buffer-overrun risk in that it
could produce multiple pg_wchars from what mblen claims to be just one UTF8
character. The only reason we don't have a major security hole is that most
callers allocate worst-case output buffers; the sole exception in released
versions appears to be pre-8.2 iwchareq() (ie, ILIKE), which can be crashed
due to zeroing out its return address --- but AFAICS that can't be exploited
for anything more than a crash, due to inability to control what gets written
there. Per report from James Russell and Michael Fuhr.
Pre-8.1 the risk is much less, but I still think pg_utf2wchar_with_len's
behavior given an incomplete final character risks buffer overrun, so
back-patch that logic change anyway.
This patch also makes sure that UTF8 sequences exceeding the supported
length (whichever it is) are consistently treated as error cases, rather
than being treated like a valid shorter sequence in some places.
involving unions of types having typmods. Variants of the failure are known
to occur in 8.1 and up; not sure if it's possible in 8.0 and 7.4, but since
the code exists that far back, I'll just patch 'em all. Per report from
Brian Hurt.
FAMILY; and add FAMILY option to CREATE OPERATOR CLASS to allow adding a
class to a pre-existing family. Per previous discussion. Man, what a
tedious lot of cutting and pasting ...
which I had removed in the first cut of the EquivalenceClass rewrite to
simplify that patch a little. But it's still important --- in a four-way
join problem mergejoinscansel() was eating about 40% of the planning time
according to gprof. Also, improve the EquivalenceClass code to re-use
join RestrictInfos rather than generating fresh ones for each join
considered. This saves some memory space but more importantly improves
the effectiveness of caching planning info in RestrictInfos.
columns procost and prorows, to allow simple user adjustment of the estimated
cost of a function call, as well as control of the estimated number of rows
returned by a set-returning function. We might eventually wish to extend this
to allow function-specific estimation routines, but there seems to be
consensus that we should try a simple constant estimate first. In particular
this provides a relatively simple way to control the order in which different
WHERE clauses are applied in a plan node, which is a Good Thing in view of the
fact that the recent EquivalenceClass planner rewrite made that much less
predictable than before.
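With the corresponding CREATE FUNCTION options, the estimates can be adjusted
like so (the function bodies are mere placeholders):

    create function costly_check(int) returns boolean
        as 'select $1 > 0' language sql
        cost 10000;  -- mark the qual expensive so it is applied last

    create function some_rows(int) returns setof int
        as 'select generate_series(1, $1)' language sql
        rows 100;    -- estimated result size for this set-returning function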
provide just a boolean 'amcanorder', instead of fields that specify the
sort operator strategy numbers. We have decided to require ordering-capable
AMs to use btree-compatible strategy numbers, so the old fields are
overkill (and indeed misleading about what's allowed).
representation of equivalence classes of variables. This is an extensive
rewrite, but it brings a number of benefits:
* planner no longer fails in the presence of "incomplete" operator families
that don't offer operators for every possible combination of datatypes.
* avoid generating and then discarding redundant equality clauses.
* remove bogus assumption that derived equalities always use operators
named "=".
* mergejoins can work with a variety of sort orders (e.g., descending) now,
instead of tying each mergejoinable operator to exactly one sort order.
* better recognition of redundant sort columns.
* can make use of equalities appearing underneath an outer join.
currentMarkData from IndexScanDesc to the opaque structs for the
AMs that need this information (currently gist and hash).
Patch from Heikki Linnakangas, fixes by Neil Conway.
the generated files, to help Visual C++ to run these tests. The tests still
pass in VPATH and normal builds.
Patch from Magnus Hagander, editorialized by me.
is deleted. A backend about to unlink a file now sends a "revoke fsync"
request to the bgwriter to make it clean out pending fsync requests. There
is still a race condition where the bgwriter may try to fsync after the unlink
has happened, but we can resolve that by rechecking the fsync request queue
to see if a revoke request arrived meanwhile. This eliminates the former
kluge of "just assuming" that an ENOENT failure is okay, and lets us handle
the fact that on Windows it might be EACCES too without introducing any
questionable assumptions. After an idea of mine improved by Magnus.
The HEAD patch doesn't apply cleanly to 8.2, but I'll see about a back-port
later. In the meantime this could do with some testing on Windows; I've been
able to force it through the code path via ENOENT, but that doesn't prove that
it actually fixes the Windows problem ...
* After Marko's patch, pgcrypto now builds without zlib again
* Updates README with xml info
* xml requires xslt and iconv
* disable unnecessary warning about __cdecl()
* Add a buildenv.bat called from all other bat files to set up things
like PATH for flex/bison. (Can't just set it before calling, doesn't
always work when building from the GUI)
The implementation is somewhat ugly logic-wise, but I don't see an
easy way to make it more concise.
When writing this, I noticed that my previous implementation of
width_bucket() doesn't handle NaN correctly:
postgres=# select width_bucket('NaN', 1, 5, 5);
width_bucket
--------------
6
(1 row)
AFAICS SQL:2003 does not define a NaN value, so it doesn't address how
width_bucket() should behave here. The patch changes width_bucket() so
that ereport(ERROR) is raised if NaN is specified for the operand or the
lower or upper bounds to width_bucket(). For float8, NaN is disallowed
for any of the floating-point inputs, and +/- infinity is disallowed
for the histogram bounds (but allowed for the operand).
Update docs and regression tests, bump the catversion.
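After the patch, the NaN case should fail along these lines (exact wording
approximate):
postgres=# select width_bucket('NaN', 1, 5, 5);
ERROR:  operand, lower bound, and upper bound cannot be NaN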
it was checking a pg_constraint OID instead of pg_class OID, resulting in
"relation with OID nnnnn does not exist" failures for anyone who wasn't
owner of the table being examined. Per bug #2848 from Laurence Rowe.
Note: for existing 8.2 installations a simple version update won't fix this;
the easiest fix is to CREATE OR REPLACE this view with the corrected
definition.
accessing it, like DROP DATABASE. This allows the regression tests to pass
with autovacuum enabled, which opens the gates for finally enabling autovacuum
by default.
standard convention the 21st century runs from 2001-2100, not 2000-2099,
so make it work like that. Per bug #2885 from Akio Iwaasa.
Backpatch to 8.2, but no further, since this is really a definitional
change; users of older branches are probably more interested in stability.
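Under the corrected definition:

    select extract(century from timestamp '2000-12-31');  -- 20
    select extract(century from timestamp '2001-01-01');  -- 21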
hold true for operators in a btree operator family. This is mostly to
clarify my own thinking about what the planner can assume for optimization
purposes. (blowing dust off an old abstract-algebra textbook...)
(or other types of pg_class entry): the function pgstat_vacuum_tabstat,
invoked during VACUUM startup, had runtime proportional to the number of
stats table entries times the number of pg_class rows; in other words
O(N^2) if the stats collector's information is reasonably complete.
Replace list searching with a hash table to bring it back to O(N)
behavior. Per report from kim at myemma.com.
Back-patch as far as 8.1; 8.0 and before use different coding here.
Made this option mark the .c files, so the environment variable is no longer needed.
Created a special MinGW file with the special error message.
Do not print the port number into the log file when running regression tests.
which comparison operators to use for plan nodes involving tuple comparison
(Agg, Group, Unique, SetOp). Formerly the executor looked up the default
equality operator for the datatype, which was really pretty shaky, since it's
possible that the data being fed to the node is sorted according to some
nondefault operator class that could have an incompatible idea of equality.
The planner knows what it has sorted by and therefore can provide the right
equality operator to use. Also, this change moves a couple of catalog lookups
out of the executor and into the planner, which should help startup time for
pre-planned queries by some small amount. Modify the planner to remove some
other cavalier assumptions about always being able to use the default
operators. Also add "nulls first/last" info to the Plan node for a mergejoin
--- neither the executor nor the planner can cope yet, but at least the API is
in place.
1) gendef works from inside visual studio - use a tempfile instead of
redirection, because for some reason you can't redirect dumpbin from
inside (patch from Joachim Wieland)
2) gendef must process only *.obj, or you get weird errors in some build
scenarios when it tries to process a logfile
Magnus Hagander
the same output level that was used when building a single project
before, and which is really needed to get reasonable information about what
happens (non-verbose just says "starting build of foo" and "done
building foo", more or less).
Magnus Hagander
management. The paper clearly describes many of the ideas embodied in
our current hashing code, but as far as I could find out there is not
a direct code heritage. (Mike Olsen recalls discussion of this paper
at Postgres meetings but believes it "informed the Postgres implementation
probably just at the design level". Margo herself says she wasn't
involved with Postgres' hash code.) Credit where credit is due 'n all
that, even if fifteen years after the fact.
per-column options for btree indexes. The planner's support for this is still
pretty rudimentary; it does not yet know how to plan mergejoins with
nondefault ordering options. The documentation is pretty rudimentary, too.
I'll work on improving that stuff later.
Note incompatible change from prior behavior: ORDER BY ... USING will now be
rejected if the operator is not a less-than or greater-than member of some
btree opclass. This prevents less-than-sane behavior if an operator that
doesn't actually define a proper sort ordering is selected.
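A sketch of the new syntax and the tightened ORDER BY check (table and index
names hypothetical):

    create index test_ab_idx on test (a desc nulls last, b asc);
    select * from test order by a using >;  -- accepted: > is a btree member
    -- ORDER BY ... USING with an operator outside any btree opclass now fails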
when collapsing of JOIN trees is stopped by join_collapse_limit. For instance
a list of 11 LEFT JOINs with limit 8 now produces something like
((1 2 3 4 5 6 7 8) 9 10 11 12)
instead of
(((1 2 3 4 5 6 7 8) (9)) 10 11 12)
The latter structure is really only required for a FULL JOIN.
Noted while studying an example from Shane Ambler.
hash joins with the estimated-larger relation on the inside. There are
several cases where doing that makes perfect sense, and in cases where it
doesn't, the regular cost computation really ought to be able to figure that
out. Make some marginal tweaks in said computation to try to get results
approximating reality a bit better. Per an example from Shane Ambler.
Also, fix an oversight in the original patch to add seq_page_cost: the costs
of spilling a hash join to disk should be scaled by seq_page_cost.
sets the items, and serializes the value back (rather than adding an
arbitrary number of XML preambles as before).
The libxml memory management via palloc had to be disabled because it
crashes when libxml tries to access memory that was helpfully freed
earlier by PostgreSQL. This needs further thought.
database privileges from a pre-8.2 server. This ensures that the reloaded
database will maintain the same behavior it had in the previous installation,
ie, everybody has connect privilege. Per gripe from L Bayuk.
an optarg). Add some comments noting that code in three different files has
to be kept in sync. Fix erroneous description of -S switch (it sets work_mem
not silent_mode), and do some light copy-editing elsewhere in postgres-ref.
form '^(foo)$'. Before, these could never be optimized into indexscans.
The recent changes to make psql and pg_dump generate such patterns (for \d
commands and -t and related switches, respectively) therefore represented
a big performance hit for people with large pg_class catalogs, as seen in
recent gripe from Erik Jones. While at it, be more paranoid about
case-sensitivity checking in multibyte encodings, and fix some other
corner cases in which a regex might be interpreted too liberally.
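For instance, a query of this shape can now use the index on pg_class.relname
(plan shape may vary):

    explain select relname from pg_class where relname ~ '^(pg_class)$';
    -- can produce an indexscan rather than a seqscan on large catalogs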
having md.c return a success/failure boolean to smgr.c, which was just going
to elog anyway, let md.c issue the elog messages itself. This allows better
error reporting, particularly in cases such as "short read" or "short write"
which Peter was complaining of. Also, remove the kluge of allowing mdread()
to return zeroes from a read-beyond-EOF: this is now an error condition
except when InRecovery or zero_damaged_pages = true. (Hash indexes used to
require that behavior, but no more.) Also, enforce that mdwrite() is to be
used for rewriting existing blocks while mdextend() is to be used for
extending the relation EOF. This restriction lets us get rid of the old
ad-hoc defense against creating huge files by an accidental reference to
a bogus block number: we'll only create new segments in mdextend() not
mdwrite() or mdread(). (Again, when InRecovery we allow it anyway, since
we need to allow updates of blocks that were later truncated away.)
Also, clean up the original makeshift patch for bug #2737: move the
responsibility for padding relation segments to full length into md.c.
The purpose is to allow autovacuum-esque conditional vacuuming and
clustering using SQL to discover the required stats.
No documentation updates required. Catalog version updated.
Glen Parker
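Assuming the new numbers surface as last_vacuum/last_analyze columns in
pg_stat_user_tables, the intended kind of query looks about like:

    select relname
    from pg_stat_user_tables
    where last_vacuum is null
       or last_vacuum < now() - interval '1 day';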
valid result from a computation if one of the input values was infinity.
The previous code assumed an operation that returned infinity was an
overflow.
Handle underflow/overflow consistently, and add checks for aggregate
overflow.
Consistently prevent Inf/Nan from being cast to integer data types.
Fix INT_MIN % -1 to prevent overflow.
Update regression results for new error text.
Per report from Roman Kononov.
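The intended behaviors, roughly (error texts approximate):

    select 'infinity'::float8 + 1;        -- infinity, not a bogus overflow error
    select 'NaN'::float8::integer;        -- now consistently an error
    select (-2147483648)::integer % -1;   -- now 0, instead of overflowing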
pg_opclass during LookupOpclassInfo(), I'd turned pg_opclass_oid_index
into a critical system index. However the problem could only manifest
during a backend's first attempt to load opclass data, and then only
if it had successfully loaded pg_internal.init and subsequently received
a relcache flush; which made it impossible to reproduce in sequential
tests and darn hard even in parallel tests. Memo to self: when
exercising cache flush scenarios, must disable LookupOpclassInfo's
internal cache too.
or contradictory keys even in cross-data-type scenarios. This is another
benefit of the opfamily rewrite: we can find the needed comparison
operators now.
the 8.1 SQL function definition for it. Per report from Rajesh Kumar Mallah,
such a DBA error doesn't seem at all improbable, and the cost of checking for
it is not very high compared to the cost of running this function. (It would
have been better to change the C name of the function so it wouldn't be called
by the old SQL definition, but it's too late for that now in the 8.2 branch.)
of increasing size, instead of one at a time. This reduces the memory
management overhead when num_temp_buffers is large: in the previous coding
we would actually waste 50% of the space used for temp buffers, because aset.c
would round the individual requests up to 16K. Problem noted while studying
a performance issue reported by Steven Flatt.
Back-patch as far as 8.1 --- older versions used few enough local buffers
that the issue isn't significant for them.
has a small maxBlockSize: the maximum request size that we will treat as a
"chunk" needs to be limited to fit in maxBlockSize. Otherwise we will round
up the request size to the next power of 2, wasting space, which is a bit
pointless if we aren't going to make the blocks big enough to fit additional
stuff in them. The example motivating this is local buffer management, which
makes repeated allocations of 8K (one BLCKSZ buffer) in TopMemoryContext,
which has maxBlockSize = 8K because for the most part allocations there are
small. This leads to each local buffer actually eating 16K of space, which
adds up when there are thousands of them. I intend to change localbuf.c to
aggregate its requests, which will prevent this particular misbehavior, but
it seems likely that similar scenarios could arise elsewhere, so fixing the
core problem seems wise as well.
PQdsplen()) normally, instead of replacing them by \uXXXX sequences.
Assume that they in fact occupy zero screen space for formatting purposes.
Per gripe from Michael Fuhr and ensuing discussion.
involving HashAggregate over SubqueryScan (this is the known case, there
may well be more). The bug is only latent in releases before 8.2 since they
didn't try to access tupletable slots' descriptors during ExecDropTupleTable.
The least bogus fix seems to be to make subqueries share the parent query's
memory context, so that tupdescs they create will have the same lifespan as
those of the parent query. There are comments in the code envisioning going
even further by not having a separate child EState at all, but that will
require rethinking executor access to range tables, which I don't want to
tackle right now. Per bug report from Jean-Pierre Pelletier.
ps_TupFromTlist in plan nodes that make use of it. This was being done
correctly in join nodes and Result nodes but not in any relation-scan nodes.
Bug would lead to bogus results if a set-returning function appeared in the
targetlist of a subquery that could be rescanned after partial execution,
for example a subquery within EXISTS(). Bug has been around forever :-(
... surprising it wasn't reported before.
were marked canSetTag. While it's certainly correct to return the result
of the last one that is marked canSetTag, it's less clear what to do when
none of them are. Since plpgsql will complain if zero is returned, the
8.2.0 behavior isn't good. I've fixed it to restore the prior behavior of
returning the physically last query's result code when there are no
canSetTag queries.
Use a TRY block instead of (inadequate) ad-hoc coding to ensure that
libxml is cleaned up after a failure. Report the intended SQLCODE
instead of defaulting to XX000. Avoid risking use of a dangling
pointer by keeping the persistent error buffer in TopMemoryContext.
Be less trusting that error messages don't contain %.
This patch doesn't do anything about changing the way the messages
are put together --- this is just about mechanism.
bletcherous and unsafe manipulation of global encoding setting.
Clean up libxml reporting mechanism a bit (it still looks like a
dangling-pointer crash waiting to happen, though, not to mention
being far less than sane from a localization standpoint).
the XmlExpr code in various lists, use a representation that has some hope
of reverse-listing correctly (though it's still a de-escaping function
shy of correctness), generally try to make it look more like Postgres
coding conventions.
cases. Operator classes now exist within "operator families". While most
families are equivalent to a single class, related classes can be grouped
into one family to represent the fact that they are semantically compatible.
Cross-type operators are now naturally adjunct parts of a family, without
having to wedge them into a particular opclass as we had done originally.
This commit restructures the catalogs and cleans up enough of the fallout so
that everything still works at least as well as before, but most of the work
needed to actually improve the planner's behavior will come later. Also,
there are not yet CREATE/DROP/ALTER OPERATOR FAMILY commands; the only way
to create a new family right now is to allow CREATE OPERATOR CLASS to make
one by default. I owe some more documentation work, too. But that can all
be done in smaller pieces once this infrastructure is in place.
operator strategy numbers, ie, GiST and GIN. This is almost cosmetic
enough to not need a catversion bump, but since the opr_sanity regression
test has to change in sync with the catalog entry, I figured I'd better
do one.
are all in new-in-8.2 logic associated with indexability of ScalarArrayOpExpr
(IN-clauses) or amortization of indexscan costs across repeated indexscans
on the inside of a nestloop. In particular:
Fix some logic errors in the estimation for multiple scans induced by a
ScalarArrayOpExpr indexqual.
Include a small cost component in bitmap index scans to reflect the costs of
manipulating the bitmap itself; this is mainly to prevent a bitmap scan from
appearing to have the same cost as a plain indexscan for fetching a single
tuple.
Also add a per-index-scan-startup CPU cost component; while prior releases
were clearly too pessimistic about the cost of repeated indexscans, the
original 8.2 coding allowed the cost of an indexscan to effectively go to zero
if repeated often enough, which is overly optimistic.
Pay some attention to index correlation when estimating costs for a nestloop
inner indexscan: this is significant when the plan fetches multiple heap
tuples per iteration, since high correlation means those tuples are probably
on the same or adjacent heap pages.
joinclause doesn't use any outer-side vars) requires a "bushy" plan to be
created. The normal heuristic to avoid joins with no joinclause has to be
overridden in that case. Problem is new in 8.2; before that we forced the
outer join order anyway. Per example from Teodor.
representing externally-supplied values, since the APIs that carry such
values only specify type not typmod. However, for PARAM_SUBLINK Params
it is handy to carry the typmod of the sublink's output column. This
is a much cleaner solution for the recently reported 'could not find
pathkey item to sort' and 'failed to find unique expression in subplan
tlist' bugs than my original 8.2-compatible patch. Besides, someday we
might want to support typmods for external parameters ...
in normal operation, and we can avoid rewriting pg_control at every log
segment switch if we don't insist that these values be valid. Reducing
the number of pg_control updates is a good idea for both performance and
reliability. It does make pg_resetxlog's life a bit harder, but that seems
a good tradeoff; and anyway the change to pg_resetxlog amounts to automating
something people formerly needed to do by hand, namely look at the existing
pg_xlog files to make sure the new WAL start point was past them.
In passing, change the wording of xlog.c's "database system was interrupted"
messages: describe the pg_control timestamp as "last known up at" rather than
implying it is the exact time of service interruption. With this change the
timestamp will generally be the time of the last checkpoint, which could be
many minutes before the failure; and we've already seen indications that
people tend to misinterpret the old wording.
initdb forced due to change in pg_control layout. Simon Riggs and Tom Lane
release it in a subtransaction abort, but this neglects possibility that
someone outside SPI already did. Fix is for spi.c to forget about a tuptable
as soon as it's handed it back to the caller.
Per bug #2817 from Michael Andreen.
rearrangeable outer joins and the WHERE clause is non-strict and mentions
only nullable-side relations. New bug in 8.2, caused by new logic to allow
rearranging outer joins. Per bug #2807 from Ross Cohen; thanks to Jeff
Davis for producing a usable test case.
a sublink's test expression have the correct vartypmod, rather than defaulting
to -1. There's at least one place where this is important because we're
expecting these Vars to be exactly equal() to those appearing in the subplan
itself. This is a pretty klugy solution --- it would likely be cleaner to
change Param nodes to include a typmod field --- but we can't do that in the
already-released 8.2 branch.
Per bug report from Hubert Fongarnand.
identify long-running transactions. Since we already need to record
the transaction-start time (e.g. for now()), we don't need any
additional system calls to report this information.
Catversion bumped, initdb required.
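Assuming the value is exposed as an xact_start column in pg_stat_activity,
long-running transactions can be spotted with something like:

    select procpid, now() - xact_start as xact_age
    from pg_stat_activity
    where xact_start is not null
    order by xact_start;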
capitalize the strings like sentences. Remove unnecessarily
specific descriptions of the units used by GUC variables, since
we now allow any reasonable unit to be specified.
FormatMessage() (this should have been in 8.2.0; patched to 8.2.X and
HEAD):
This problem is complex: Windows' FormatMessage() does not take the
database encoding into account, so messages and log output can end up
containing a mixture of character encodings rather than the encoding the
user intended. Still, a solution should be attempted now; the problem
needs to be cleared up.
http://archives.postgresql.org/pgsql-hackers/2006-11/msg00042.php
Users in multi-byte-encoding locales can reproduce the problem with this
test program:
http://inet.winpg.jp/~saito/pg_bug/MessageCheck.c
which produces output looking like this (Japanese):
http://inet.winpg.jp/~saito/pg_bug/FormatMessage998.png
Hiroshi Saito
by name on each and every row processed. Profiling suggests this may
buy a percent or two for simple UPDATE scenarios, which isn't huge,
but when it's so easy to get ...
by the change to make limit values int8 instead of int4. (Specifically, you
can do DatumGetInt32 safely on a null value, but not DatumGetInt64.) Per
bug #2803 from Greg Johnson.
should allow delete-pending files to actually go away, and thereby work
around the various complaints we've seen about 'permission denied'
errors in such cases. Should be reasonably harmless in any case...
StartupXLOG and ShutdownXLOG no longer need to be critical sections, because
in all contexts where they are invoked, elog(ERROR) would be translated to
elog(FATAL) anyway. (One change in bgwriter.c is needed to make this true:
set ExitOnAnyError before trying to exit. This is a good fix anyway since
the existing code would have gone into an infinite loop on elog(ERROR) during
shutdown.) That avoids a misleading report of PANIC during semi-orderly
failures. Modify the postmaster to include the startup process in the set of
processes that get SIGTERM when a fast shutdown is requested, and also fix it
to not try to restart the bgwriter if the bgwriter fails while trying to write
the shutdown checkpoint. Net result is that "pg_ctl stop -m fast" does
something reasonable for a system in warm standby mode, and so should Unix
system shutdown (ie, universal SIGTERM). Per gripe from Stephen Harris and
some corner-case testing of my own.
remove a page on the next level that is linked from the next inner page,
ginScanToDelete() wrongly set the parent page. The bug reveals itself when
many item pointers (several hundred thousand) have been deleted from the
index.
Bug discovered by hubert depesz lubaczewski <depesz@gmail.com>
I suppose we need an rc2 before release...
result now depends on the lc_messages setting, as noted by Bruce.
Also, mark to_number() and the numeric-type variants of to_char() as stable,
because their results depend on lc_numeric; this is a longstanding oversight.
Also, mark to_date() and to_char(interval) as stable; although these appear
not to depend on any GUC variables as of CVS HEAD, that seems a property
unlikely to survive future improvements. It seems best to mark all the
formatting functions stable and be done with it.
catversion not bumped, because this does not seem critical enough to force
a post-RC1 initdb, and anyway we cannot do so in the back branches.
(in particular, causing the ReadyForQuery message to be eaten) before
returning from do_copy. The only known consequence of failing to do so is
that get_prompt might show a wrong result for the %x transaction status
escape, as reported by Bernd Helmle; but it's possible there are other issues.
Back-patch as far as 7.4, the oldest version supporting %x.
vacuum/analyze timestamp columns at the end, rather than at a random
spot in the middle as in the original patch. This was deemed more usable
as well as less likely to break existing application code. initdb forced
accordingly. In passing, remove former kluge for initializing
pg_stat_file()'s pg_proc entry --- bootstrap mode was fixed recently
so that this can be done without any hacks, but I overlooked this usage.
HeapTuple that is no longer allocated as a single palloc() block; if
used carelessly, this might result in a subsequent memory leak after
heap_freetuple().
AbortTransaction, which would lead to recursion and eventual PANIC exit
as illustrated in recent report from Jeff Davis. First, in xact.c create
a special dedicated memory context for AbortTransaction to run in. This
solves the problem as long as AbortTransaction doesn't need more than 32K
(or whatever other size we create the context with). But in corner cases
it might. Second, in trigger.c arrange to keep pending after-trigger event
records in separate contexts that can be freed near the beginning of
AbortTransaction, rather than having them persist until CleanupTransaction
as before. Third, in portalmem.c arrange to free executor state data
earlier as well. These two changes should result in backing off the
out-of-memory condition before AbortTransaction needs any significant
amount of memory, at least in typical cases such as memory overrun due
to too many trigger events or too big an executor hash table. And all
the same for subtransaction abort too, of course.
zic's Europe/London, rather than Europe/Dublin as before. This seems
a less surprising choice, particularly with respect to dates before
1948. Original suggestion was to translate to straight GMT, but this
seems wrong given that these zones *are* DST-aware. Per offlist
discussion with Magnus.
in the middle of executing a SPI query. This doesn't entirely fix the
problem of memory leakage in plpgsql exception handling, but it should
get rid of the lion's share of leakage.
because on that platform strftime produces localized zone names in varying
encodings. Even though it's only in a comment, this can cause encoding
errors when reloading the dump script. Per suggestion from Andreas
Seltenreich. Also, suppress %Z on Windows in the %s escape of
log_line_prefix ... not sure why this one is different from the other two,
but it shouldn't be.
python 2.5. This involves fixing several violations of the published
spec for creating PyTypeObjects, and adding another regression test
expected output for yet another variation of error message spelling.
Windows), arrange for each postmaster child process to be its own process
group leader, and deliver signals SIGINT, SIGTERM, SIGQUIT to the whole
process group not only the direct child process. This provides saner behavior
for archive and recovery scripts; in particular, it's possible to shut down a
warm-standby recovery server using "pg_ctl stop -m immediate", since delivery
of SIGQUIT to the startup subprocess will result in killing the waiting
recovery_command. Also, this makes Query Cancel and statement_timeout apply
to scripts being run from backends via system(). (There is no support in the
core backend for that, but it's widely done using untrusted PLs.) Per gripe
from Stephen Harris and subsequent discussion.
Typo in the changes to plperl - uses wrong dir, and had a missing slash.
Also fixes error checking for xsubpp - it was broken in a way that hid
the problem above when run more than once (which is the normal case when
developing).
Negotiation failure is only likely to happen if one side or the other is
misconfigured, e.g. a bad client certificate. I'm not 100% convinced that
a retry is really the best thing, hence not back-patching this fix for now.
Per gripe from Sergio Cinos.
promoted to FATAL) end in exit(1) not exit(0). Then change the postmaster to
allow exit(1) without a system-wide panic, but not for the startup subprocess
or the bgwriter. There were a couple of places that were using exit(1) to
deliberately force a system-wide panic; adjust these to be exit(2) instead.
This fixes the problem noted back in July that if the startup process exits
with elog(ERROR), the postmaster would think everything is hunky-dory and
proceed to start up. Alternative solutions such as trying to run the entire
startup process as a critical section seem less clean, primarily because of
the fact that a fair amount of startup code is shared by all postmaster
children in the EXEC_BACKEND case. We'd need an ugly special case somewhere
near the head of main.c to make it work if it's the child process's
responsibility to determine what happens; and what's the point when the
postmaster already treats different children differently?
* New versions of OpenSSL come with proper debug versions, and use
suffixed names on the LIBs for that. Adapts library handling to deal
with that.
* Fixes error where it incorrectly enabled Kerberos based on NLS
configuration instead of Kerberos configuration
* Specifies the path of perl in the config file, instead of using the
current one. Needed when a 64-bit perl is normally in use but pl/perl
must be built against a 32-bit one
* Fix so pgevent generates win32ver.rc automatically
Magnus Hagander
any no-longer-needed segments; just truncate them to zero bytes and leave
the files in place for possible future re-use. This avoids problems when
the segments are re-used due to relation growth shortly after truncation.
Before, the bgwriter, and possibly other backends, could still be holding
open file references to the old segment files, and would write dirty blocks
into those files where they'd disappear from the view of other processes.
Back-patch as far as 8.0. I believe the 7.x branches are not vulnerable,
because they had no bgwriter, and "blind" writes by other backends would
always be done via freshly-opened file references.
preference for filling pages out-of-order tends to confuse the sanity checks
in md.c, as per report from Balazs Nagy in bug #2737. The fix is to ensure
that the smgr-level code always has the same idea of the logical EOF as the
hash index code does, by using ReadBuffer(P_NEW) where we are adding a single
page to the end of the index, and using smgrextend() to reserve a large batch
of pages when creating a new splitpoint. The patch is a bit ugly because it
avoids making any changes in md.c, which seems the most prudent approach for a
backpatchable beta-period fix. After 8.3 development opens, I'll take a look
at a cleaner but more invasive patch, in particular getting rid of the now
unnecessary hack to allow reading beyond EOF in mdread().
Backpatch as far as 7.4. The bug likely exists in 7.3 as well, but because
of the magnitude of the 7.3-to-7.4 changes in hash, the later-version patch
doesn't even begin to apply. Given the other known bugs in the 7.3-era hash
code, it does not seem worth trying to develop a separate patch for 7.3.
cases where we already hold the desired lock "indirectly", either via
membership in a MultiXact or because the lock was originally taken by a
different subtransaction of the current transaction. These cases must be
accounted for to avoid needless deadlocks and/or inappropriate replacement of
an exclusive lock with a shared lock. Per report from Clarence Gardner and
subsequent investigation.
of an index on a serial column, rather than the name of the associated
sequence. Fallout from recent changes in dependency setup for serials.
Per bug #2732 from Basil Evseenko.
added to information_schema (per a SQL2003 addition). The original coding
failed if a referenced column participated in more than one pg_constraint
entry. Also, it did not work if an FK relied directly on a unique index
without any constraint syntactic sugar --- this case is outside the SQL spec,
but PG has always supported it, so it's reasonable for our information_schema
to handle it too. Per bug#2750 from Stephen Haberman.
Although this patch changes the initial catalog contents, I didn't force
initdb. Any beta3 testers who need the fix can install it via CREATE OR
REPLACE VIEW, so forcing them to initdb seems an unnecessary imposition.
accurately: we have to distinguish the effects of the join's own ON
clauses from the effects of pushed-down clauses. Failing to do so
was a quick hack long ago, but it's time to be smarter. Per example
from Thomas H.
30 seconds instead of retrying forever. Also modify xlog.c so that if
it fails to rename an old xlog segment up to a future slot, it will
unlink the segment instead. Per discussion of bug #2712, in which it
became apparent that Windows can handle unlinking a file that's being
held open, but not renaming it.
The former coding relied on the actual allocated size of the last block,
which made it behave strangely if the first allocation in a context was
larger than ALLOC_CHUNK_LIMIT: subsequent allocations would be referenced
to that and not to the intended series of block sizes. Noted while
studying a memory wastage gripe from Tatsuo.
more space is needed, instead of incrementing by a fixed amount; the old
method wastes lots of space and time when the ultimate size is large.
Per gripe from Tatsuo.
text_to_array(): they all had O(N^2) behavior on long input strings in
multibyte encodings, because of repeated rescanning of the input text to
identify substrings whose positions/lengths were computed in characters
instead of bytes. Fix by tracking the current source position as a char
pointer as well as a character-count. Also avoid some unnecessary palloc
operations. text_to_array() also leaked memory intra-call due to failure
to pfree temporary strings. Per gripe from Tatsuo Ishii.
sub-arrays. Per discussion, if all inputs are empty arrays then result
must be an empty array too, whereas a mix of empty and nonempty arrays
should (and already did) draw an error. In the back branches, the
construct was strict: any NULL input immediately yielded a NULL output;
so I left that behavior alone. HEAD was simply ignoring NULL sub-arrays,
which doesn't seem very sensible. For lack of a better idea it now
treats NULL sub-arrays the same as empty ones.
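Under the new rules, approximately:

    select array['{}'::int[], '{}'::int[]];  -- now yields an empty array
    select array[array[1,2], '{}'::int[]];   -- mixing empty and nonempty: error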
include it if it links properly. It seems too risky to assume that
standard functions like pow() are not special-cased by the compiler.
Per report from Andreas Lange that build fails on Solaris cc compiler
with -fast. Even though we don't consider that a supported option,
I'm worried that similar issues will arise with other compilers.
manually release the LDAP handle via ldap_unbind(). This isn't a
significant problem in practice because an error eventually results
in exiting the process, but we can clean up correctly without too
much pain.
In passing, fix an error in snprintf() usage: the "size" parameter
to snprintf() is the size of the destination buffer, including space
for the NUL terminator. Also, depending on the value of NAMEDATALEN,
the old coding could have allowed for a buffer overflow.
stale relcache init files (pg_internal.init), and there is no mechanism for
updating them during WAL replay. Easiest solution is just to delete the init
files at conclusion of startup, and let the first backend started in each
database take care of rebuilding the init file. Simon Riggs and Tom Lane.
Back-patched to 8.1. Arguably this should be fixed in 8.0 too, but it would
require significantly more code since 8.0 has no handy startup-time scan of
pg_database to piggyback on. Manual solution of the problem is possible
in 8.0 (just delete the pg_internal.init files before starting WAL replay),
so that may be a sufficient answer.
in PITR scenarios. We now WAL-log the replacement of old XIDs with
FrozenTransactionId, so that such replacement is guaranteed to propagate to
PITR slave databases. Also, rather than relying on hint-bit updates to be
preserved, pg_clog is not truncated until all instances of an XID are known to
have been replaced by FrozenTransactionId. Add new GUC variables and
pg_autovacuum columns to allow management of the freezing policy, so that
users can trade off the size of pg_clog against the amount of freezing work
done. Revise the already-existing code that forces autovacuum of tables
approaching the wraparound point to make it more bulletproof; also, revise the
autovacuum logic so that anti-wraparound vacuuming is done per-table rather
than per-database. initdb forced because of changes in pg_class, pg_database,
and pg_autovacuum catalogs. Heikki Linnakangas, Simon Riggs, and Tom Lane.
deletion code to avoid the case where an upper-level btree page remains "half
dead" for a significant period of time, and to block insertions into a key
range that is in process of being re-assigned to the right sibling of the
deleted page's parent. This prevents the scenario reported by Ed L. wherein
index keys could become out-of-order in the grandparent index level.
Since this is a moderately invasive fix, I'm applying it only to HEAD.
The bug exists back to 7.4, but the back branches will get a different patch.
(blobs) with comments, per bug #2727 from Konstantin Pelepelin.
Mea culpa for not having tested this case.
Back-patch to 8.1; prior branches don't dump blob comments at all.
node of a SubLink or SubPlan testexpr field. Bug resulted from replacing
the old lefthand/exprs list fields with a simple expression field, and not
remembering that expression_tree_walker is coded to save a few cycles by
recursing directly to self on list fields (on the assumption the walker
isn't interested in List nodes per se). On non-list fields it must of
course call the walker. Possibly that hack isn't worth the risk of more
such bugs, but I'll leave it be for now. Per bug report from James Robinson.
outer joins. Originally it was only looking for overlap of the righthand
side of a left join, but we have to do it on the lefthand side too.
Per example from Jean-Pierre Pelletier.
if available (which it usually should be), during processing of Bind
and Execute protocol messages. This improves usefulness of
log_min_error_statement logging for extended query protocol.
sin_port in the returned IP address struct when servname is NULL. This has
been observed to cause failure to bind the stats collection socket, and
could perhaps cause other issues too. Per reports from Brad Nicholson
and Chris Browne.
that conflict with the OID that we want to use for the new database.
This avoids the risk of trying to remove files that maybe we shouldn't
remove. Per gripe from Jon Lapham and subsequent discussion of 27-Sep.
timezone actually has a daylight-savings rule. This avoids breaking
cases that used to work because they went through the DecodePosixTimezone
code path. Per contrib regression failures (mea culpa for not running
those yesterday...). Also document the already-applied change to allow
GMT offsets up to 14 hours.
input routines. Remove the former "DecodePosixTimezone" function in favor of
letting the zic code handle POSIX-style zone specs (see tzparse()). In
particular this means that "PST+3" now means the same as "-03", whereas it
used to mean "-11" --- the zone abbreviation is effectively just a noise word
in this syntax. Make sure that all named and POSIX-style zone names will be
parsed as a single token. Fix long-standing bogosities in printing and input
of fractional-hour timezone offsets (since the tzparse() code will accept
these, we'd better make 'em work). Also correct an error in the original
coding of the zic-zone-name patch: in "timestamp without time zone" input,
zone names are supposed to be allowed but ignored, but the coding was such
that the zone changed the interpretation anyway.
example SET TIME ZONE 'america/new_york' works now. This seems a good
idea on general user-friendliness grounds, and is part of the solution
to the timestamp-input parsing problems I noted recently.
modules; the first try was not usable in EXEC_BACKEND builds (e.g.,
Windows). Instead, just provide some entry points to increase the
allocation requests during postmaster start, and provide a dedicated
LWLock that can be used to synchronize allocation operations performed
by backends. Per discussion with Marc Munro.
one of the program's core data structures, make use of the existing
ability to selectively exclude TOC items by ID. Slightly more code but
much less likely to create future maintenance problems.
1) pgwin32_waitforsinglesocket(): WaitForMultipleObjectsEx is now called
with a finite timeout (100ms) in the case of FD_WRITE on a UDP socket.
If the timeout occurs, pgwin32_waitforsinglesocket() tries to write an
empty packet and goes to WaitForMultipleObjectsEx again.
2) pgwin32_send(): add a loop around WSASend and
pgwin32_waitforsinglesocket(). The reason: for an overlapped socket, an
'ok' result from pgwin32_waitforsinglesocket() is no guarantee that the
socket is still free; it can become busy again, and the following WSASend
call will then fail with WSAEWOULDBLOCK.
See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php
rows --- if the surrounding query queued any trigger events between the rows,
the events would be fired at the wrong time, leading to bizarre behavior.
Per report from Merlin Moncure.
This is a simple patch that should solve the problem fully in the back
branches, but in HEAD we also need to consider the possibility of queries
with RETURNING clauses. Will look into a fix for that separately.
I introduced in 7.4.1 :-(. It's correct to allow unknown to be coerced to
ANY or ANYELEMENT, since it's a real-enough data type, but it most certainly
isn't an array datatype. This can cause a backend crash but AFAICT is not
exploitable as a security hole. Per report from Michael Fuhr.
Note: as fixed in HEAD, this changes a constant in the pg_stats view,
resulting in a change in the expected regression outputs. The back-branch
patches have been hacked to avoid that, so that pre-existing installations
won't start failing their regression tests.
to process all inclusion switches then all exclusion switches, so that the
behavior is independent of switch ordering.
Use of -T does not cause non-table objects to be suppressed. And
the patterns are now interpreted the same way psql's \d commands do it,
rather than as pure regexes; this allows for example -t schema.tab
to do what it should have been doing all along. Re-enable the --blobs
switch to do something useful, ie, add back blobs into a dump they were
otherwise suppressed from.
pg_dump as well as psql. Since psql already uses dumputils.c, while there's
not any code sharing in the other direction, this seems the easiest way.
Also, fix misinterpretation of patterns using regex | by adding parentheses
(same bug found previously in similar_escape()). This should be backpatched.
quote chars inside quote marks, should emit one quote *and stay in inquotes
mode*. No doubt the lack of reports of this has something to do with the
poor documentation of the feature ...
portable long options. But we have had portable long options for a long
time now, so this is obsolete. Now people have added options which *only*
work with -X but not as regular long option, so I'm putting a stop to this:
-X is deprecated; it still works, but it has been removed from the
documentation, and please don't add more of them.
max_stack_depth is not set to an unsafe value.
This commit also provides configure-time checking for <sys/resource.h>,
and cleans up some perhaps-unportable code associated with use of that
include file and getrlimit().
platform limit, rather than just blindly raising max_stack_depth.
Also, tweak the code to work properly if someone sets max_stack_depth
to more than 2Gb, which guc.c will allow on a 64-bit machine.
overlapping possible matches for the separator string, such as
string_to_array('123xx456xxx789', 'xx').
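For instance (assuming the fix's left-to-right, non-overlapping matching):
SELECT string_to_array('123xx456xxx789', 'xx');
-- yields {123,456,x789}: the first two x's of 'xxx' consume the
-- separator, and the leftover 'x' stays attached to '789'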
Also, revise the logic of replace(), split_part(), and string_to_array()
to avoid O(N^2) work from redundant searches and conversions to pg_wchar
format when there are N matches to the separator string.
Backpatched the full patch as far as 8.0. 7.4 also has the bug, but the
code has diverged a lot, so I just went for a quick-and-dirty fix of the
bug itself in that branch.
been initialized yet. This can happen because there are code paths that call
SysCacheGetAttr() on a tuple originally fetched from a different syscache
(hopefully on the same catalog) than the one specified in the call. It
doesn't seem useful or robust to try to prevent that from happening, so just
improve the function to cope instead. Per bug#2678 from Jeff Trout. The
specific example shown by Jeff is new in 8.1, but to be on the safe side
I'm backpatching 8.0 as well. We could patch 7.x similarly but I think
that's probably overkill, given the lack of evidence of old bugs of this ilk.
remaining functions, simplify pglz_compress's API to not require a useless
data copy when compression fails. Also add a check in pglz_decompress that
the expected amount of data was decompressed.
static variables. This avoids any risk of potential non-reentrancy,
and in particular offers a much cleaner workaround for the Intel compiler
bug that was affecting ginutil.c.
Buildfarm results from 'gazelle' show that there are indeed libedit
versions for which history.h is a needed header, even though it's
apparently been dropped entirely in other versions. Grumble.
Per Bob Friesenhahn's report, this file is not supplied by some versions
of libedit, and even when it is supplied it seems to be just a link to
readline.h, so we don't need to include it anyway.
Also, ensure that we won't try to use a too-old version of Bison.
The previous coding would bleat but then use it anyway; better to invoke
the 'missing' script if any grammar files need to be rebuilt.
gotten rather thoroughly whacked out by careless recent changes: the
intended ratio between the two was off by a lot, and the minimum number
of shared buffers tried had increased by a lot. Problem exposed by
failures on buildfarm members with smaller SHMMAX values.
repeatedly. Now that we don't have to worry about memory leaks from
glibc's qsort, we can safely put CHECK_FOR_INTERRUPTS into the tuplesort
comparators, as was requested a couple months ago. Also, get rid of
non-reentrancy and an extra level of function call in tuplesort.c by
providing a variant qsort_arg() API that passes an extra void * argument
through to the comparison routine. (We might want to use that in other
places too, I didn't look yet.)
1) Make vcbuild actually build the pgevent dll.
2) Change the pgevent DLL file so it doesn't specify ordinals for the
functions. You're not supposed to do that. You're actually supposed to
declare them as PRIVATE as well, but mingw doesn't support that. VC++
will throw a warning and not an error though, so we can live with it.
Magnus Hagander
postgresql.conf.
- shared_buffers = 32000kB => 32MB
- temp_buffers = 8000kB => 8MB
- wal_buffers = 8 => 64kB
initdb was modified a bit to write MB-unit values;
values greater than 8000kB are rounded to MB.
GUC_UNIT_XBLOCKS is added for wal_buffers. It is like GUC_UNIT_BLOCKS,
but uses XLOG_BLCKSZ instead of BLCKSZ.
Also, I cleaned up the test of GUC_UNIT_* flags in preparation for
adding more unit flags in fewer bits.
ITAGAKI Takahiro
which turns out to be a dominant part of the runtime in scenarios
involving lots of parse-time warnings (such as Stephen Frost's example
of an INSERT with a lot of backslash-containing strings). There's not
a whole lot we can do about the character-at-a-time scanning, but we
can at least avoid traversing the query twice.
severity. This is to ensure the user can cancel a query that's spitting
out lots of notice/warning messages, even if they're coming from a loop
that doesn't otherwise contain a CHECK_FOR_INTERRUPTS. Per gripe from
Stephen Frost.
present; intervening positions are filled with nulls. This behavior
is required by SQL99 but was not implementable before 8.2 due to lack
of support for nulls in arrays. I have only made it work for the
one-dimensional case, which is all that SQL99 requires. It seems quite
complex to get it right in higher dimensions, and since we never allowed
extension at all in higher dimensions, I think that must count as a
future feature addition not a bug fix.
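A quick sketch of the new one-dimensional behavior, using a hypothetical
table t:
CREATE TABLE t (a int[]);
INSERT INTO t VALUES ('{1,2}');
UPDATE t SET a[5] = 99;   -- positions 3 and 4 are filled with nulls
SELECT a FROM t;          -- {1,2,NULL,NULL,99}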
the SQL spec, viz IS NULL is true if all the row's fields are null, IS NOT
NULL is true if all the row's fields are not null. The former coding got
this right for a limited number of cases with IS NULL (ie, those where it
could disassemble a ROW constructor at parse time), but was entirely wrong
for IS NOT NULL. Per report from Teodor.
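For reference, the spec-mandated behavior now implemented:
SELECT ROW(NULL, NULL) IS NULL;      -- true: all fields are null
SELECT ROW(1, NULL) IS NULL;         -- false
SELECT ROW(1, NULL) IS NOT NULL;     -- false: not all fields are non-null
SELECT ROW(1, 2) IS NOT NULL;        -- true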
I desisted from changing the behavior for arrays, since on closer inspection
it's not clear that there's any support for that in the SQL spec. This
probably needs more consideration.
to performance. (A wholesale effort to get rid of strncpy should be
undertaken sometime, but not during beta.) This commit also fixes dynahash.c
to correctly truncate overlength string keys for hashtables, so that its
callers don't have to anymore.
wrong answer, as has been seen to occur with a buggy Linux kernel. Not
really our bug, but it's a simple test in a seldom-used control path,
so might as well have a defense.
'postmaster', so as not to depend on the existence of the postmaster
symlink. Also, implement postmaster-still-alive and postmaster-kill
operations for Windows, per Magnus.
return true for exactly the characters treated as whitespace by their flex
scanners. Per report from Victor Snezhko and subsequent investigation.
Also fix a passel of unsafe usages of <ctype.h> functions, that is, ye olde
char-vs-unsigned-char issue. I won't miss <ctype.h> when we are finally
able to stop using it.
(possibly (un)translated) letters that are actually expected as input.
Also reject invalid responses instead of silently taking them as "no".
with help from Bernd Helmle
even when a single relation requires more than max_fsm_pages pages. Also,
make VACUUM emit a warning in this case, since it likely means that VACUUM
FULL or other drastic corrective measure is needed. Per reports from Jeff
Frost and others of unexpected changes in the claimed max_fsm_pages need.
is a large enough histogram, it will use the number of matches in the
histogram to derive a selectivity estimate, rather than the admittedly
pretty bogus heuristics involving examining the pattern contents. I set
'large enough' at 100, but perhaps we should change that later. Also
apply the same technique in contrib/ltree's <@ and @> estimator. Per
discussion with Stefan Kaltenbrunner and Matteo Beccati.
tables in the query compete for cache space, not just the one we are
currently costing an indexscan for. This seems more realistic, and it
definitely will help in examples recently exhibited by Stefan
Kaltenbrunner. To get the total size of all the tables involved, we must
tweak the handling of 'append relations' a bit --- formerly we looked up
information about the child tables on-the-fly during set_append_rel_pathlist,
but it needs to be done before we start doing any cost estimation, so
push it into the add_base_rels_to_query scan.
contrib functionality. Along the way, remove the USER_LOCKS configuration
symbol, since it no longer makes any sense to try to compile that out.
No user documentation yet ... mmoncure has promised to write some.
Thanks to Abhijit Menon-Sen for creating a first draft to work from.
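Pending real documentation, usage is roughly as follows (the key values
are application-defined):
SELECT pg_advisory_lock(42);       -- blocks until the lock is obtained
SELECT pg_try_advisory_lock(42);   -- returns true/false immediately
SELECT pg_advisory_unlock(42);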
oversight in original implementation of VALUES. Also fix an oversight
in recent addition of options to CREATE TABLE AS: they weren't getting
propagated if the query was a set-operation such as UNION.
the table being analyzed. This prevents two ANALYZEs from running
concurrently on the same table and possibly suffering concurrent-update
failures while trying to store their results into pg_statistic. The
downside is that a database-wide ANALYZE executed within a transaction
block will hold ShareUpdateExclusiveLock on many tables simultaneously,
which could lead to concurrency issues or even deadlock against another
such ANALYZE. However, this seems a corner case of less importance
than getting unexpected errors from a foreground ANALYZE when autovacuum
elects to analyze the same table concurrently. Per discussion.
after an error during VACUUM. We have a PG_TRY block anyway around the only
call sites, so just reset it in the CATCH clause instead of having
AtEOXact_Buffers blindly do it during xact end. I think the old code was
actively wrong for the case of a failure during ANALYZE inside a
subtransaction --- the flag wouldn't get cleared until main transaction end.
Probably not worth back-patching though.
and create a new view pg_timezone_names that provides information about
the zones known in the 'zic' database. Magnus Hagander, with some
additional work by Tom Lane.
a schema is our own temp schema or another backend's temp schema, and use
these in place of some former kluges in information_schema. Per my
proposal of yesterday.
an earlier discussion. Centralize assumptions about what libpq depends
on in one place in Makefile.global. I am unconvinced that this list
is complete, but since ecpg seems to have gotten along with just these
entries, we'll try it this way and see what happens.
we probably should make them work reliably for all arrays. Fix code
to handle NULLs and multidimensional arrays, move it into arrayfuncs.c.
GIN is still restricted to indexing arrays with no null elements, however.
agreed these symbols are less easily confused. I made new pg_operator
entries (with new OIDs) for the old names, so as to provide backward
compatibility while making it pretty easy to remove the old names in
some future release cycle. This commit only touches the core datatypes,
contrib will be fixed separately.
to a relation on the nullable side of an outer join. I had removed
this during the outer join planning rewrite a few months ago ... I think
I intended to put it somewhere else, but forgot ...
than being equivalent to setting log_min_duration_statement to zero, this
option now forces logging of all query durations, but doesn't force logging
of query text. Also, add duration logging coverage for fastpath function
calls.
proposal. Parameter logging works even for binary-format parameters, and
logging overhead is avoided when disabled.
log_statement = all output for the src/test/examples/testlibpq3.c example
now looks like
LOG: statement: execute <unnamed>: SELECT * FROM test1 WHERE t = $1
DETAIL: parameters: $1 = 'joe''s place'
LOG: statement: execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
DETAIL: parameters: $1 = '2'
and log_min_duration_statement = 0 results in
LOG: duration: 2.431 ms parse <unnamed>: SELECT * FROM test1 WHERE t = $1
LOG: duration: 2.335 ms bind <unnamed> to <unnamed>: SELECT * FROM test1 WHERE t = $1
DETAIL: parameters: $1 = 'joe''s place'
LOG: duration: 0.394 ms execute <unnamed>: SELECT * FROM test1 WHERE t = $1
DETAIL: parameters: $1 = 'joe''s place'
LOG: duration: 1.251 ms parse <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
LOG: duration: 0.566 ms bind <unnamed> to <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
DETAIL: parameters: $1 = '2'
LOG: duration: 0.173 ms execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4
DETAIL: parameters: $1 = '2'
(This example demonstrates the folly of ignoring parse/bind steps for duration
logging purposes, BTW.)
Along the way, create a less ad-hoc mechanism for determining which commands
are logged by log_statement = mod and log_statement = ddl. The former coding
was actually missing quite a few things that look like ddl to me, and it
did not handle EXECUTE or extended query protocol correctly at all.
This commit does not do anything about the question of whether log_duration
should be removed or made less redundant with log_min_duration_statement.
that has parameters is always planned afresh for each Bind command,
treating the parameter values as constants in the planner. This removes
the performance penalty formerly often paid for using out-of-line
parameters --- with this definition, the planner can do constant folding,
LIKE optimization, etc. After a suggestion by Andrew@supernews.
can create or modify rules for the table. Do setRuleCheckAsUser() while
loading rules into the relcache, rather than when defining a rule. This
ensures that permission checks for tables referenced in a rule are done with
respect to the current owner of the rule's table, whereas formerly ALTER TABLE
OWNER would fail to update the permission checking for associated rules.
Removal of separate RULE privilege is needed to prevent various scenarios
in which a grantee of RULE privilege could effectively have any privilege
of the table owner. For backwards compatibility, GRANT/REVOKE RULE is still
accepted, but it doesn't do anything. Per discussion here:
http://archives.postgresql.org/pgsql-hackers/2006-04/msg01138.php
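So, with hypothetical names, this still parses but is now a no-op:
GRANT RULE ON mytable TO someuser;
REVOKE RULE ON mytable FROM someuser;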
the target relation(s). There might be some cases where we could discard
the pending event instead, but for the moment a conservative approach
seems sufficient. Per report from Markus Schiltknecht and subsequent
discussion.
working in a multibyte encoding. This fixes the problems exhibited in
bug #1931 and other reports of ILIKE misbehavior in UTF8 encoding.
It's a pretty grotty solution though --- should rethink how to do it
after we install better locale support, someday.
cascaded first to days and only what is leftover into seconds. This
seems to satisfy the principle of least surprise given the general
conversion to three-part interval values --- it was an oversight that
these cases weren't dealt with in 8.1. Michael Glaesemann
of the syntax as this fundamentally dead-end approach can, in particular
combinations of single and multi-column assignments. Improve the rather
inadequate documentation and provide some regression tests.
PGPROC array into snapshots, and use this information to avoid visits
to pg_subtrans in HeapTupleSatisfiesSnapshot. This appears to solve
the pg_subtrans-related context swap storm problem that's been reported
by several people for 8.1. While at it, modify GetSnapshotData to not
take an exclusive lock on ProcArrayLock, as closer analysis shows that
shared lock is always sufficient.
Itagaki Takahiro and Tom Lane
RETURNING play nice with views/rules. To wit, have the rule rewriter
rewrite any RETURNING clause found in a rule to produce what the rule's
triggering query asked for in its RETURNING clause, in particular drop
the RETURNING clause if no RETURNING in the triggering query. This
leaves the responsibility for knowing how to produce the view's output
columns on the rule author, without requiring any fundamental changes
in rule semantics such as adding new rule event types would do. The
initial implementation constrains things to ensure that there is
exactly one, unconditionally invoked RETURNING clause among the rules
for an event --- later we might be able to relax that, but for a post
feature freeze fix it seems better to minimize how much invention we do.
Per gripe from Jaime Casanova.
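A minimal sketch of what a rule author writes now, using a hypothetical
view over a base table:
CREATE TABLE base (id int, val text);
CREATE VIEW v AS SELECT id, val FROM base;
CREATE RULE v_ins AS ON INSERT TO v DO INSTEAD
    INSERT INTO base VALUES (NEW.id, NEW.val)
    RETURNING base.id, base.val;
-- the rewriter maps the rule's RETURNING onto whatever the caller asked for:
INSERT INTO v VALUES (1, 'x') RETURNING id;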
"server_version" but uses the handy PG_VERSION_NUM which allows apps to
do things like if ($version >= 80200) without having to parse apart the
value of server_version themselves.
Greg Sabino Mullane greg@turnstep.com
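For example:
SHOW server_version_num;   -- e.g. 80200
SELECT current_setting('server_version_num')::int >= 80200;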
queries via a cursor, fetching a limited number of rows at a time and
therefore not risking exhausting memory. A disadvantage of the scheme
is that 'aligned' output mode will align each group of rows independently
leading to odd-looking output, but all the other output formats work
reasonably well. Chris Mair, with some additional hacking by moi.
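In psql that looks like this (hugetable hypothetical):
\set FETCH_COUNT 100
SELECT * FROM hugetable;   -- fetched through a cursor, 100 rows at a time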
existing for backend GUC variables, and use this to eliminate repeated
fetching/parsing of psql variables in psql's inner loops. In a trivial
test with lots of 'select 1;' commands, psql's CPU time went down almost
10%, although of course the effect on total elapsed time was much less.
Per discussion about how to ensure the upcoming FETCH_COUNT patch doesn't
cost any performance when not being used.
optionally bind. I re-added the "statement:" label so people will
understand why the line is being printed (it is log_*statement
behavior).
Use single quotes for bind values, instead of double quotes, and double
literal single quotes in bind values (and document that). I also made
use of the DETAIL line to have much cleaner output.
pgstat_bestart() has been called; else any lock-block occurring
during InitPostgres() is disastrous. I believe this explains
recent wasp regression failure; at least it explains the crash I
got while trying to duplicate the problem. I also made
pgstat_report_activity() safe against the same scenario, just
in case. The report_waiting hazard was created by my patch of
19-Aug to include waiting status in pg_stat_activity.
builds all the files needed for its regression tests, but the tests
themselves fail because of diffs in the #line directives output by
ecpg itself. Not sure what to do about that.
trivial if it contains either Vars referencing the corresponding subplan
columns, or Consts equaling the corresponding subplan columns. This
lets the planner eliminate the SubqueryScan in some cases generated by
generate_setop_tlist().
Fix all the standard PLs to be able to return tuples from FOO_RETURNING
statements as well as utility statements that return tuples. Also,
fix oversight that SPI_processed wasn't set for a utility statement
returning tuples. Per recent discussion.
locks that would conflict with a specified lock request, without
actually trying to get that lock. Use this instead of the former ad hoc
method of doing the first wait step in CREATE INDEX CONCURRENTLY.
Fixes problem with undetected deadlock and in many cases will allow the
index creation to proceed sooner than it otherwise could've. Per
discussion with Greg Stark.
on the same index page; we can avoid data copying as well as buffer refcount
manipulations in this common case. Makes for a small but noticeable
improvement in mergejoin speed.
Heikki Linnakangas
of the transaction ID counter. Nothing is done with the epoch except to
store it in checkpoint records, but this provides a foundation with which
add-on code can pretend that XIDs never wrap around. This is a severely
trimmed and rewritten version of the xxid patch submitted by Marko Kreen.
Per discussion, the epoch counter seems the only part of xxid that really
needs to be in the core server.
by abandoning the idea that it should say SERIAL in the dump. Instead,
dump serial sequences and column defaults just like regular ones.
Add a new backend command ALTER SEQUENCE OWNED BY to let pg_dump recreate
the sequence-to-column dependency that was formerly created "behind the
scenes" by SERIAL. This restores SERIAL to being truly "just a macro"
consisting of component operations that can be stated explicitly in SQL.
Furthermore, the new command allows sequence ownership to be reassigned,
so that old mistakes can be cleaned up.
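For instance, given a hypothetical serial column t.id with sequence
t_id_seq:
ALTER SEQUENCE t_id_seq OWNED BY t.id;    -- what pg_dump now emits
ALTER SEQUENCE t_id_seq OWNED BY NONE;    -- break the association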
Also, downgrade the OWNED-BY dependency from INTERNAL to AUTO, since there
is no longer any very compelling argument why the sequence couldn't be
dropped while keeping the column. (This forces initdb, to be sure the
right kinds of dependencies are in there.)
Along the way, add checks to prevent ALTER OWNER or SET SCHEMA on an
owned sequence; you can now only do this indirectly by changing the
owning table's owner or schema. This is an oversight in previous
releases, but probably not worth back-patching.
each object to be deleted, instead of the previous hack that just skipped
INTERNAL dependencies, which didn't really work. Per report from Tom Lane.
To do this, introduce a new performMultipleDeletions entry point in
dependency.c to delete multiple objects at once. The dependency code then has
the responsibility of tracking INTERNAL and AUTO dependencies as needed.
Along the way, change ObjectAddresses so that we can allocate an ObjectAddress
list from outside dependency.c and not have to export the internal
representation.
functions in its targetlist, to avoid introducing multiple evaluations
of volatile functions that textually appear only once. This is a
slightly tighter version of Jaime Casanova's recent patch.
that ps_status provides by appending 'waiting' to the PS display. This
completes the project of making it feasible to turn off process title
updates and instead rely on pg_stat_activity. Per my suggestion a few
weeks ago.
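With this in place, something like the following shows blocked sessions
without consulting ps:
SELECT procpid, usename, waiting, current_query
FROM pg_stat_activity
WHERE waiting;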
the rel, it's easy to get rid of the narrow race-condition window that
used to exist in VACUUM and CLUSTER. Did some minor code-beautification
work in the same area, too.
than N seconds apart. This allows a simple, if not very high performance,
means of guaranteeing that a PITR archive is no more than N seconds behind
real time. Also make pg_current_xlog_location return the WAL Write pointer,
add pg_current_xlog_insert_location to return the Insert pointer, and fix
pg_xlogfile_name_offset to return its results as a two-element record instead
of a smashed-together string, as per recent discussion.
Simon Riggs
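Usage sketch:
SELECT pg_current_xlog_location();          -- WAL Write pointer
SELECT pg_current_xlog_insert_location();   -- WAL Insert pointer
SELECT * FROM pg_xlogfile_name_offset(pg_current_xlog_location());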
mergejoin possibility where the inner rel was less well sorted than
the outer (ie, it matches some but not all of the merge clauses that
can work with the outer), if the inner path in question is also the
overall cheapest path for its rel. This is an old bug, but I'm not
sure it's worth back-patching, because it's such a corner case.
Noted while investigating a test case from Peter Hardman.
subquery's pathkey is a RelabelType applied to something that appears
in the subquery's output; for example where the subquery returns a
varchar Var and the sort order is shown as that Var coerced to text.
This comes up because varchar doesn't have its own sort operator.
Per example from Peter Hardman.
internal TypInfo table in bootstrap mode. This allows array_in and
array_out to be used during early bootstrap, which eliminates the
former obstacle to giving OUT parameters to built-in functions.
such as debugging and performance measurement. This consists of two features:
a table of "rendezvous variables" that allows separately-loaded shared
libraries to communicate, and a new GUC setting "local_preload_libraries"
that allows libraries to be loaded into specific sessions without explicit
cooperation from the client application. To make local_preload_libraries
as flexible as possible, we do not restrict its use to superusers; instead,
it is restricted to load only libraries stored in $libdir/plugins/. The
existing LOAD command has also been modified to allow non-superusers to
LOAD libraries stored in this directory.
This patch also renames the existing GUC variable preload_libraries to
shared_preload_libraries (after a suggestion by Simon Riggs) and does some
code refactoring in dfmgr.c to improve clarity.
Korry Douglas, with a little help from Tom Lane.
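So a non-superuser can now do, for a hypothetical library installed as
$libdir/plugins/my_plugin:
LOAD 'my_plugin';
-- or ask for it at connection start, e.g. via
-- PGOPTIONS='-c local_preload_libraries=my_plugin'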
Fixed broken newline on Windows.
Fixed a nasty buffer underrun that only occurred when using Informix
no_indicator NULL setting on timestamps and intervals.
requiring read permissions. Up till now there was no possible case
in which the RTEs wouldn't already have ACL_SELECT set ... but now that
you can say something like 'INSERT INTO foo ... RETURNING *' this is
an essential step. With this commit, a RETURNING clause adds the
requirement for SELECT permissions on the target table if and only if
the clause actually reads the value of at least one target-table column.
cannot assume that there's exactly one Query in the Portal, as we can for
ONE_SELECT mode, because non-SELECT queries might have extra queries added
during rule rewrites. Fix things up so that we'll use ONE_RETURNING mode
when a Portal contains one primary (canSetTag) query and that query has
a RETURNING list. This appears to be a second showstopper reason for running
the Portal to completion before we start to hand anything back --- we want
to be sure that the rule-added queries get run too.
_SPI_execute_plan's return code should reflect the type of the query
that is marked canSetTag, not necessarily the last one in the list.
This is arguably a bug fix, but I'm hesitant to back-patch it because
it's the sort of subtle change that might break someone's code, and it's
best not to do that kind of thing in point releases.
a Coverity warning, these are risky since the hashtable isn't necessarily
fully set up yet. They're unnecessary anyway: a deletable hashtable
should be in a memory context that will be cleared following elog(ERROR).
Per report from Martijn van Oosterhout.
and instead make the grammar production for the RETURN statement do the
heavy lifting. The lookahead idea was copied from the main parser, but
it does not work in plpgsql's parser because here gram.y looks explicitly
at the scanner's yytext variable, which will be out of sync after a
failed lookahead step. A minimal example is
create or replace function foo() returns void language plpgsql as '
begin
perform return foo bar;
end';
which can be seen by testing to deliver "foo foo bar" to the main parser
instead of the expected "return foo bar". This isn't a huge bug since
RETURN is not found in the main grammar, but it could bite someone who
tried to use "return" as an identifier.
Back-patch to 8.1. Bug exists further back, but HEAD patch doesn't apply
cleanly, and given the lack of field complaints it doesn't seem worth
the effort to develop adjusted patches.
when what's being executed is a COMMIT or ROLLBACK. Per report from
Sergey Koposov. Backpatch to 8.1; 8.0 and before don't have the bug
due to lack of any logging at all here.
well as vacuum_cost_delay. Since datestyle is a string variable,
this exercises memory allocation issues that might not appear when
modifying an integer GUC variable. Also, we can observe the side
effects of changing datestyle to check that assign hooks are called
at the right times.
nonunique join value, leading to plan-choice-dependent results ... and
it seems some platforms will choose a different plan. Tweak the test
so that it has well-defined results. Per report from Olivier Prenant.
merely a matter of fixing the error check, since the underlying Portal
infrastructure already handles it. This in turn allows these statements
to be used in some existing plpgsql and plperl contexts, such as a
plpgsql FOR loop. Also, do some marginal code cleanup in places that
were being sloppy about distinguishing SELECT from SELECT INTO.
plpgsql support to come later. Along the way, convert execMain's
SELECT INTO support into a DestReceiver, in order to eliminate some ugly
special cases.
Jonah Harris and Tom Lane
The main reason for the refactoring was that set_config_option() was too
overloaded a function and its behavior was not consistent. The old version
of set_config_option() hid some messages. For example, if you typed
tcp_port = 5432.1
the old implementation ignored this error without writing any message to
the log file when running in the signal context (configuration reload).
The main problem was that semantic analysis of postgresql.conf was
performed not in the ProcessConfigFile function but in set_config_option,
*after* the context check; this skipped the check for variables with
PG_POSTMASTER context. There was also a request from Joachim Wieland to
add more messages about ignored changes in the config file.
Zdenek Kotala
same data type and same typmod, we show that typmod as the output
typmod, rather than generic -1. This responds to several complaints
over the past few years about UNIONs unexpectedly dropping length or
precision info.
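For example, with two hypothetical tables:
CREATE TABLE a (x varchar(32));
CREATE TABLE b (y varchar(32));
SELECT x FROM a UNION ALL SELECT y FROM b;
-- the output column is now varchar(32), not plain varchar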
as microseconds, rather than as units of 100 microseconds, as it did before. This
actually fixes all setitimer calls on Win32, but statement_timeout is
the most visible fix.
Backpatch to 8.1.X. 8.0 works as documented.
loaded libraries: call functions _PG_init() and _PG_fini() if the library
defines such symbols. Hence we no longer need to specify an initialization
function in preload_libraries: we can assume that the library used the
_PG_init() convention, instead. This removes one source of pilot error
in use of preloaded libraries. Original patch by Ralf Engelschall,
preload_libraries changes by me.
o print user name for all
o print portal name if defined for all
o print query for all
o reduce log_statement header to single keyword
o print bind parameters as DETAIL if text mode
operation every so often. This improves the usefulness of PITR log
shipping for hot standby: formerly, if the standby server crashed, it
was necessary to restart it from the last base backup and replay all
the WAL since then. Now it will only need to reread about the same
amount of WAL as the master server would. The behavior might also
come in handy during a long PITR replay sequence. Simon Riggs,
with some editorialization by Tom Lane.
without indexes) but not to display temp tables. It's a bit hard to
credit that sanity_check could get through a database-wide VACUUM
while the preceding create_index test is still trying to clean up
its temp tables ... but I see no other explanation for the current
failure report from buildfarm member sponge.
to happen automatically during pg_stop_backup(). Add some functions for
interrogating the current xlog insertion point and for easily extracting
WAL filenames from the hex WAL locations displayed by pg_stop_backup
and friends. Simon Riggs with some editorialization by Tom Lane.
list, when some of the child rels have been excluded by constraint
exclusion. This doesn't save a huge amount of time but it'll save some,
and it makes the EXPLAIN output look saner. We already did the
equivalent thing in set_append_rel_pathlist(), but not here.
contradictory WHERE-clauses applied to a relation. This makes the
GUC variable constraint_exclusion rather inappropriately named,
but I've refrained for the moment from renaming it.
Per example from Martin Lesser.
This doesn't matter too much for ordinary NOTs, since prepqual.c does
its best to get rid of those, but it helps with IS NOT TRUE clauses
which the rule rewriter likes to insert. Per example from Martin Lesser.
that's shorter-lived than the expression state being evaluated in it really
doesn't work :-( --- we end up with fn_extra caches getting deleted while
still in use. Rather than abandon the notion of caching expression state
across domain_in calls altogether, I chose to make domain_in a bit cozier
with ExprContext. All we really need for evaluating variable-free
expressions is an ExprContext, not an EState, so I invented the notion of a
"standalone" ExprContext. domain_in can prevent resource leakages by doing
a ReScanExprContext on this rather than having to free it entirely; so we
can make the ExprContext have the same lifespan (and particularly the same
per_query memory context) as the expression state structs.
the DROP pass rather than the ADD_CONSTR pass. On examining the code I
think this was just an oversight rather than intentional, and it seems
to satisfy the principle of least surprise better than the alternative
solution that was discussed. Add an example to the ref page showing how
to do ALTER TYPE and update the default in one command. Per gripe from
Markus Bertheau that that wasn't possible.
check). This isn't supported by pg_regress since the recent rewrite
into C. While we could add char classes to pg_regress.c's code, it's
not really needed at the moment: thanks to Andrew's patch to make
pg_regress always accept the 'standard' comparison file, we can just
drop the version check.
temporary context that can be reset when advancing to the next sublist.
This is faster and more thorough at recovering space than the previous
method; moreover it will do the right thing if something in the sublist
tries to register an expression context callback.
(e.g. "INSERT ... VALUES (...), (...), ...") and elsewhere as allowed
by the spec. (e.g. similar to a FROM clause subselect). initdb required.
Joe Conway and Tom Lane.
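For example (tab hypothetical):
INSERT INTO tab (a, b) VALUES (1, 'one'), (2, 'two'), (3, 'three');
SELECT * FROM (VALUES (10, 'x'), (20, 'y')) AS v(num, name);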
(table or index) before trying to open its relcache entry. This fixes
race conditions in which someone else commits a change to the relation's
catalog entries while we are in process of doing relcache load. Problems
of that ilk have been reported sporadically for years, but it was not
really practical to fix until recently --- for instance, the recent
addition of WAL-log support for in-place updates helped.
Along the way, remove pg_am.amconcurrent: all AMs are now expected to support
concurrent update.
created in the bootstrap phase proper, rather than added after-the-fact
by initdb. This is cleaner than before because it allows us to retire the
undocumented ALTER TABLE ... CREATE TOAST TABLE command, but the real reason
I'm doing it is so that toast tables of shared catalogs will now have
predetermined OIDs. This will allow a reasonably clean solution to the
problem of locking tables before we load their relcache entries, to appear
in a forthcoming patch.
vacuums. This allows an OLTP-like system with big tables to continue
regular vacuuming on small-but-frequently-updated tables while the
big tables are being vacuumed.
Original patch from Hannu Krosing, rewritten by Tom Lane and updated
by me.
because they are used for testing the return value from system().
(WIN32 doesn't overlay the return code with other failure conditions
like Unix does, so they are just simple macros.)
Fix regression checks to properly handle diff failures on Win32 using
the new macros.
it's handled just about like timezone; in particular, don't try
to read anything during InitializeGUCOptions. Should solve current
startup failure on Windows, and avoid wasted cycles if a nondefault
setting is specified in postgresql.conf too. Possibly we need to
think about a more general solution for handling 'expensive to set'
GUC options.
the float8 versions of the aggregates, which is all that the standard requires.
Sergey's original patch also provided versions using numeric arithmetic,
but given the size and slowness of the code, I doubt we ought to include
those in core.
the opportunity to treat COUNT(*) as a zero-argument aggregate instead
of the old hack that equated it to COUNT(1); this is materially cleaner
(no more weird ANYOID cases) and ought to be at least a tiny bit faster.
Original patch by Sergey Koposov; review, documentation, simple regression
tests, pg_dump and psql support by moi.
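The new aggregates are used along these lines (data hypothetical):
SELECT regr_slope(y, x), regr_intercept(y, x), corr(y, x)
FROM data;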
eliminate unnecessary code, force initdb because stored rules change
(limit nodes are now supposed to be int8 not int4 expressions).
Update comments and error messages, which still all said 'integer'.
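So limits beyond int4 range are now accepted, e.g. (bigtab hypothetical):
SELECT * FROM bigtab LIMIT 5000000000;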
not "unset". An "unset" state doesn't really exist; all variables behave
like an empty string value if the string being pointed to has not been
initialized.
When we are about to split an index page to do an insertion, first look
to see if any entries marked LP_DELETE exist on the page, and if so remove
them to try to make enough space for the desired insert. This should reduce
index bloat in heavily-updated tables, although of course you still need
VACUUM eventually to clean up the heap.
Junji Teramoto
configuration files that can be altered by a DBA. The australian_timezones
GUC setting disappears, replaced by a timezone_abbreviations setting (set this
to 'Australia' to get the effect of australian_timezones). The list of zone
names defined by default has undergone a bit of cleanup, too. Documentation
still needs some work --- in particular, should we fix Table B-4, or just get
rid of it? Joachim Wieland, with some editorializing by moi.
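For example:
SET timezone_abbreviations = 'Australia';  -- replaces australian_timezones = on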
thinking that indexes of different sizes are equally attractive. Per
gripe from Jim Nasby. (I remain unconvinced that there's such a problem
in existing releases, but CVS HEAD definitely has got a problem because
of its new count-only-leaf-pages approach to indexscan costing.)
BufferAlloc tries to insert a new mapping entry before deleting the old one
for a buffer, we have a transient need for more than NBuffers entries ---
one more in 8.1, and as many as NUM_BUFFER_PARTITIONS more in CVS HEAD.
In theory this could lead to an "out of shared memory" failure if shmem
had already been completely claimed by the time the extra entries were
needed.
to the low-order bits of the entry hash value. Also make some incidental
cleanups in the dynahash API, such as not exporting the hash header
structs to the world.
effects in a nestloop inner indexscan, I had only dealt with plain index
scans and the index portion of bitmap scans. But there will be cache
benefits for the heap accesses of bitmap scans too, so fix
cost_bitmap_heap_scan() to account for that.
-built-in mechanism through the -MP flag. Adjust the file extensions to
look more like Automake practice. This frees up the .d suffix for use by
DTrace.
opclass. This is not so much because anyone's likely to create an index
on TID, as that sorting TIDs can be useful. Also added max and min
aggregates while at it, so that one can investigate the clusteredness of
a table with queries like SELECT min(ctid), max(ctid) FROM tab WHERE ...
Greg Stark and Tom Lane
pg_regress: there's no other way to cope with testing a relocated
installation. Seems better to call it --psqldir though, since the
only thing we need to find in that case is psql. It'd be better if
we could use find_other_exec, but that's not happening unless we are
willing to install pg_regress alongside psql, which seems unlikely
to happen.
the check on diff's exit status to check for literally 0 or 1. Someone
should look into why WIFEXITED/WEXITSTATUS don't work for this, but I've
spent more than enough time on it already.
recovery. In the first place, it doesn't work because slru's
latest_page_number isn't set up yet (this is why we've been hearing reports
of strange "apparent wraparound" log messages during crash recovery, but
only from people who'd managed to advance their next-mxact counters some
considerable distance from 0). In the second place, it seems a bit unwise
to be throwing away data during crash recovery anyway. This latter
consideration convinces me to just disable truncation during recovery,
rather than computing latest_page_number and pushing ahead.
just exec instead of creating a subprocess. This reduces process usage
from four processes per parallel test to two. I have no idea whether
a comparable optimization is possible or useful in the Windows port.
This allows it to be used on Windows without installing mingw
(though you do still need 'diff'), and opens the door to future
improvements such as message localization.
Magnus Hagander and Tom Lane.
"DESCRIPTION", which is actually only allowed for device drivers. The
compilers ignore it with a warning - if we remove them, we get rid of
the warning.
Magnus Hagander
code to forcibly drop regressuser[1-4] and regressgroup[1-2]. Instead,
let the privileges.sql test do that for itself (this is made easy by
the recent addition of DROP ROLE IF EXISTS). Per a recent patch proposed
by Joachim Wieland --- the rest of his patch is superseded by the
rewrite into C, but this is a good idea we should adopt.
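That is, the test script can now simply start with:
DROP ROLE IF EXISTS regressuser1;   -- no error if the role doesn't exist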
pg_usleep at all. Instead call the replacement function in
port/win32/signal.c by that name. Avoids tricky macro-redefinition
logic and suppresses a compiler warning; furthermore it ensures that
no one can accidentally use the non-signal-aware version of pg_usleep
in a Windows backend.
EINTR; the stats code was failing to do this and so were a couple of places
in the postmaster. The stats code assumed that recv() could not return EINTR
if a preceding select() showed the socket to be read-ready, but this is
demonstrably false with our Windows implementation of recv(), and it may
not be the case on all Unix variants either. I think this explains the
intermittent stats regression test failures we've been seeing, as well
as reports of stats collector instability under high load on Windows.
Backpatch as far as 8.0.
I think this explains the 'implicit declaration of function gai_strerror'
warnings visible in the current buildfarm report from snake: if
sys/socket.h is included again after getaddrinfo.h, the file would
merrily undefine the gai_strerror macro.
variable (this accounts for regression failures on PPC64, and in fact
won't work on any big-endian machine). Get rid of hardwired knowledge
about datum size rules; make it look just like datumCopy().
compiler warning, specifically #ifdef or #if defined tests on symbols
that are defined in a file not included. The results are a bit noisy
and require care to interpret, but it's a lot better than no tool at all.
I'm going to insist on reversion of this entire patch unless pgrminclude
is upgraded to a less broken state, but in the meantime let's get contrib
passing regression again.
fe-auth.c:573: warning: passing argument 1 of 'free' discards qualifiers
from pointer target type
pg_krb5_authname used to return a (const char *) to memory allocated by
krb. Somewhere along the lines this was changed so that a copy was
made, returned, and freed instead. However the const modifier was never
removed.
- halt.c did not include stdlib.h, thus missed exit() prototype
- Makefile ignores BINDIR for install.
- Makefile calls install with user/group args, thus failing for regular user.
While trying it I noticed that the Makefile does not support VPATH builds ...
it can handle small fillfactors for ordinary-sized index entries without
failing on large ones; fix nbtinsert.c to distinguish leaf and nonleaf
pages; change the minimum fillfactor to 10% for all index types.
- Replace the sorted array of entries in maintenance_work_mem with a
  binary tree; this should improve CREATE INDEX performance.
- More precisely calculate allocated memory, and eliminate leaks with
  user-defined extractValue()
- Improve wording in tsearch2
a table. Otherwise a USING clause that yields NULL can leave the table
violating its constraint (possibly there are other cases too). Per report
from Alexander Pravking.
To this end, add a couple of columns to pg_class, relminxid and relvacuumxid,
based on which we calculate the pg_database columns after each vacuum.
We now force all databases to be vacuumed, even template ones. A backend
noticing too old a database (meaning pg_database.datminxid is in danger of
falling behind Xid wraparound) will signal the postmaster, which in turn will
start an autovacuum iteration to process the offending database. In principle
this is only there to cope with frozen (non-connectable) databases without
forcing users to set them to connectable, but it could force a regular user
database to go through a database-wide vacuum at any time. Maybe we should
warn users about this somehow. Of course the real solution will be to use
autovacuum all the time ;-)
There are some additional improvements we could have in this area: for example
the vacuum code could be smarter about not updating pg_database for each table
when called by autovacuum, and do it only once the whole autovacuum iteration
is done.
I updated the system catalogs documentation, but I didn't modify the
maintenance section. Also having some regression tests for this would be nice
but it's not really a very straightforward thing to do.
Catalog version bumped due to system catalog changes.
As promised, here is a patch for this: client-build support for
MS-VC6+.
Fix for different getaddrinfo structure ordering on Win32 for IPv6.
Hiroshi Saito
Studio 2005. Basically MS defined errcode in the headers with a typedef,
so we have to #define it out of the way.
While at it, fix a function declaration in plpython that didn't match
the implementation (volatile missing).
Magnus Hagander
discussion (including making def_arg allow reserved words), add missed
opt_definition for UNIQUE case. Put the reloptions support code in a less
random place (I chose to make a new file access/common/reloptions.c).
Eliminate header inclusion creep. Make the index options functions safely
user-callable (seems like client apps might like to be able to test validity
of options before trying to make an index). Reduce overhead for normal case
with no options by allowing rd_options to be NULL. Fix some unmaintainably
klugy code, including getting rid of Natts_pg_class_fixed at long last.
Some stylistic cleanup too, and pay attention to keeping comments in sync
with code.
Documentation still needs work, though I did fix the omissions in
catalogs.sgml and indexam.sgml.
the read lock we hold on the table's parent relation until commit.
Update equalfuncs.c for the new field in AlterTableCmd. Various
improvements to comments, variable names, and error reporting.
There is room for further improvement here, but this is at least
a step in the right direction.
Open items:
There were a few tangentially related issues that have come up that I think
are TODOs. I'm likely to tackle one or two of these next so I'm interested in
hearing feedback on them as well.
. Constraints currently do not know anything about inheritance. Tom suggested
adding a coninhcount and conislocal like attributes have to track their
inheritance status.
. Foreign key constraints currently do not get copied to new children (and
therefore my code doesn't verify them). I don't think it would be hard to
add them and treat them like CHECK constraints.
. No constraints at all are copied to tables defined with LIKE. That makes it
hard to use LIKE to define new partitions. The standard defines LIKE and
specifically says it does not copy constraints. But the standard already has
an option called INCLUDING DEFAULTS; we could always define a non-standard
extension LIKE table INCLUDING CONSTRAINTS that gives the user the option to
request a copy including constraints.
. Personally, I think the whole attislocal thing is bunk. The decision about
whether to drop a column from child tables or not is something that
should be up to the user and trying to DWIM based on whether there was ever
a local definition or the column was acquired purely through inheritance is
hardly ever going to match up with user expectations.
. And of course there's the whole unique and primary key constraint issue. I
think to get any traction at all on this you have a prerequisite of a real
partitioned table implementation where the system knows what the partition
key is so it can recognize when it's a leading part of an index key.
Greg Stark
ScalarArrayOpExpr index quals: we were estimating the right total
number of rows returned, but treating the index-access part of the
cost as if a single scan were fetching that many consecutive index
tuples. Actually we should treat it as a multiple indexscan, and
if there are enough of 'em the Mackert-Lohman discount should kick in.
clauses containing no variables and no volatile functions. Such a clause
can be used as a one-time qual in a gating Result plan node, to suppress
plan execution entirely when it is false. Even when the clause is true,
putting it in a gating node wins by avoiding repeated evaluation of the
clause. In previous PG releases, query_planner() would do this for
pseudoconstant clauses appearing at the top level of the jointree, but
there was no ability to generate a gating Result deeper in the plan tree.
To fix it, get rid of the special case in query_planner(), and instead
process pseudoconstant clauses through the normal RestrictInfo qual
distribution mechanism. When a pseudoconstant clause is found attached to
a path node in create_plan(), pull it out and generate a gating Result at
that point. This requires special-casing pseudoconstants in selectivity
estimation and cost_qual_eval, but on the whole it's pretty clean.
It probably even makes the planner a bit faster than before for the normal
case of no pseudoconstants, since removing pull_constant_clauses saves one
useless traversal of the qual tree. Per gripe from Phil Frost.
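For instance, in a hypothetical query like
SELECT * FROM accounts a JOIN branches b ON a.bid = b.bid
WHERE current_setting('myapp.flag') = 'on';
the variable-free clause is evaluated once in a gating Result node rather
than once per joined row.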
be delivered directly to the collector process. The extra process context
swaps required to transfer data through the buffer process seem to outweigh
any value the buffering might have. Per recent discussion and tests.
I modified Bruce's draft patch to use poll() rather than select() where
available (this makes a noticeable difference on my system), and fixed
up the EXEC_BACKEND case.
the order in which it visits tables is not dependent on the physical order
of pg_constraint entries, and neither are the error messages it gives.
This should correct recently-noticed instability in regression tests.
tuple hash table entries. This addresses the problem previously noted
that use of a 'physical tlist' in the input scan node could bloat the
hash table entries far beyond what the planner expects. It's a better
answer than my previous thought of undoing the physical tlist optimization,
because we can also remove columns that are needed to compute the aggregate
functions but aren't part of the grouping column set.
* new split algorithm (as proposed in http://archives.postgresql.org/pgsql-hackers/2006-06/msg00254.php)
* possibly call pickSplit() for the second and subsequent columns
* add spl_(l|r)datum_exists to GIST_SPLITVEC --- pickSplit should check
  these to decide whether to use the already-defined spl_(l|r)datum for
  splitting, and should reset spl_(l|r)datum_exists to 'false' (if they
  were 'true') to signal to the caller that spl_(l|r)datum was used
* support for old-style pickSplit(): produces a not very optimal,
  but correct, split
* remove the 'bytes' field from GISTENTRY: in any case the size of a
  value is determined by its type
* split GIST_SPLITVEC into two structures: one for use in pickSplit
  and a second for internal use
* some code refactoring
* add subsplit support to rtree opclasses
TODO: add subsplit support to contrib modules
this someday, but right now it seems that posix_fadvise is immature to
the point of being broken on many platforms ... and we don't have any
benchmark evidence proving it's worth spending time on.
per-tuple space overhead for sorts in memory. I chose to replace the
previous patch that tried to write out the bare minimum amount of data
when sorting on disk; instead, just dump the MinimalTuples as-is. This
wastes 3 to 10 bytes per tuple depending on architecture and null-bitmap
length, but the simplification in the writetup/readtup routines seems
worth it.
analyzing, so that future analyze threshold calculations don't get confused.
Also, make sure we correctly track the decrease of live tuples caused by
deletes.
Per report from Dylan Hansen, patches by Tom Lane and me.
tuples with less header overhead than a regular HeapTuple, per my
recent proposal. Teach TupleTableSlot code how to deal with these.
As proof of concept, change tuplestore.c to store MinimalTuples instead
of HeapTuples. Future patches will expand the concept to other places
where it is useful.
will be expanded to a list of their member fields, rather than creating
a nested rowtype field as formerly. (The old behavior is still available
by omitting '.*'.) This syntax is not allowed by the SQL spec AFAICS,
so changing its behavior doesn't violate the spec. The new behavior is
substantially more useful since it allows, for example, triggers to check
for data changes with 'if row(new.*) is distinct from row(old.*)'. Per
my recent proposal.
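For instance, a plpgsql trigger body can now test (sketch; the notice
text is made up):
if row(new.*) is distinct from row(old.*) then
    raise notice 'row changed';
end if;
Writing row(new) without '.*' still yields a single nested rowtype field.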
palloc() will normally round allocation requests up to the next power of 2,
so make dynahash choose allocation sizes that are as close to a power of 2
as possible.
Back-patch to 8.1 --- the problem exists further back, but a much larger
patch would be needed and it doesn't seem worth taking any risks.
opposed to what other versions apparently do, so it's not safe to print an
error message. Besides, getopt_long itself already did, so it's redundant
anyway.
PGDG:
> Yes. In fact the copyright belongs to credativ GmbH the company that
> paid Carsten for his work. As you may or may not know I'm the CEO of
> that company and can assure you that his work was contributed to the
> PostgreSQL project.
After updating to the latest cvs, and also building most of the addons
(like PLs), the following patch is needed for win32 + Visual C++.
* Switch to use the new win32 semaphore code
* Rename win32_open to pgwin32_open. win32_open collides with symbols
defined in Perl. MinGW didn't detect it, MSVC did. And it's a bit too
generic a name to export globally, imho...
* Python defines some partially broken #pragmas in the headers when
doing a debug build. Workaround.
Magnus Hagander
aggregates. We just disallowed that, and AFAICS there should be no other
cases where direct (non-aggregated) references to input columns are allowed
in a query with aggregation and no GROUP BY.
This is disallowed by the SQL spec because it doesn't have any very sensible
interpretation. Historically Postgres has allowed it but behaved strangely.
As of PG 8.1 a server crash is possible if the MIN/MAX index optimization gets
applied; rather than try to "fix" that, it seems best to just enforce the
spec restriction. Per report from Josh Drake and Alvaro Herrera.
GetVariable() and be consistent about treatment of the list header.
Motivated by noticing strspn() taking an unreasonable percentage of
runtime --- the call removed from GetVariable() was the only one that
could be in a high-usage path ...
changing semantics too much. statement_timestamp is now set immediately
upon receipt of a client command message, and the various places that used
to do their own gettimeofday() calls to mark command startup are referenced
to that instead. I have also made stats_command_string use that same
value for pg_stat_activity.query_start for both the command itself and
its eventual replacement by <IDLE> or <idle in transaction>. There was
some debate about that, but no argument that seemed convincing enough to
justify an extra gettimeofday() call.
libpq/md5.h, so that there's a clear separation between backend-only
definitions and shared frontend/backend definitions. (Turns out this
is reversing a bad decision from some years ago...) Fix up references
to crypt.h as needed. I looked into moving the code into src/port, but
the headers in src/include/libpq are sufficiently intertwined that it
seems more work than it's worth to do that.
current commands; instead, store current-status information in shared
memory. This substantially reduces the overhead of stats_command_string
and also ensures that pg_stat_activity is fully up to date at all times.
Per my recent proposal.
for it. Hopefully will fix core dump evidenced by some buildfarm members
since fadvise patch went in. The actual definition of the function is not
ABI-compatible with compiler's default assumption in the absence of any
declaration, so it's clearly unsafe to try to call it without seeing a
declaration.
We have once or twice seen failures suggesting that control didn't get
to the exception block before the timeout elapsed, which is unlikely
but not impossible in a parallel regression test (with a dozen other
backends competing for cycles). This change doesn't completely prevent
the problem of course, but it should reduce the probability enough that
we don't see it anymore. Per buildfarm results.
by creating a reference-count mechanism, similar to what we did a long time
ago for catcache entries. The back branches have an ugly solution involving
lots of extra copies, but this way is more efficient. Reference counting is
only applied to tupdescs that are actually in caches --- there seems no need
to use it for tupdescs that are generated in the executor, since they'll go
away during plan shutdown by virtue of being in the per-query memory context.
Neil Conway and Tom Lane
remove the infrastructure needed to enforce the limit, ie, the global
LRU list of cache entries. On small-to-middling databases this wins
because maintaining the LRU list is a waste of time. On large databases
this wins because it's better to keep more cache entries (we assume
such users can afford to use some more per-backend memory than was
contemplated in the Berkeley-era catcache design). This provides a
noticeable improvement in the speed of psql \d on a 10000-table
database, though it doesn't make it instantaneous.
While at it, use per-catcache settings for the number of hash buckets
per catcache, rather than the former one-size-fits-all value. It's a
bit silly to be using the same number of hash buckets for, eg, pg_am
and pg_attribute. The specific values I used might need some tuning,
but they seem to be in the right ballpark based on CATCACHE_STATS
results from the standard regression tests.
the lower-level large object functions fails, it will have already set
a suitable error message --- probably something from the backend ---
and it is not useful to overwrite that with a generic 'error while
reading large object' message. So remove redundant messages.
places --- that risks corrupting data structures, losing sync with the
backend, etc. We now longjmp only from calls to readline, fgets, and
fread, which we assume are coded to protect themselves against interrupts
at undesirable times. This requires adding explicit tests for
cancel_pressed in long-running loops, but on the whole it's far cleaner.
Martijn van Oosterhout and Tom Lane.
function call. Previously, there may have been no CHECK_FOR_INTERRUPTS
at all in the fastpath code path, making it impossible to cancel an
operation such as \lo_import externally. This addition doesn't ensure
you can cancel, since your SIGINT may arrive while the backend is idle
waiting for the client, but it gives the largest window we can easily
provide. Noted while experimenting with new control-C code for psql.
already-aborted transaction block. GetSnapshotData throws an Assert if
not in a valid transaction; hence we mustn't attempt to set a snapshot
for the function until after checking for aborted transaction. This is
harmless AFAICT if Asserts aren't enabled (GetSnapshotData will compute
a bogus snapshot, but it doesn't matter since HandleFunctionRequest will
throw an error shortly anyway). Hence, not a major bug.
Along the way, add some ability to log fastpath calls when statement
logging is turned on. This could probably stand to be improved further,
but not logging anything is clearly undesirable.
Backpatched as far as 8.0; bug doesn't exist before that.
was invoking obj_description() for each large object chunk, instead of once
per large object. This code is new as of 8.1, which may explain why the
problem hadn't been noticed already.
LWLocks during a panic exit. This avoids the possible self-deadlock pointed
out by Qingqing Zhou. Also, I noted that an error during LoadFreeSpaceMap()
or BuildFlatFiles() would result in exit(0) which would leave the postmaster
thinking all is well. Added a critical section to ensure such errors don't
allow startup to proceed.
Backpatched to 8.1. The 8.0 code is a bit different and I'm not sure if the
problem exists there; given we've not seen this reported from the field, I'm
going to be conservative about backpatching any further.
o remove many WIN32_CLIENT_ONLY defines
o add WIN32_ONLY_COMPILER define
o add 3rd argument to open() for portability
o add include/port/win32_msvc directory for
system includes
Magnus Hagander
it is just the total time to do INSTR_TIME_SET_CURRENT(), and not any of
the other code involved in InstrStartNode/InstrStopNode. Even though I
fear we may end up reverting this patch altogether, we may as well have
the most correct version in our CVS archive.
choose_bitmap_and(). It was way too fuzzy --- per comment, it was meant to be
1% relative difference, but was actually coded as 0.01 absolute difference,
thus causing selectivities of say 0.001 and 0.000000000001 to be treated as
equal. I believe this thinko explains Maxim Boguk's recent complaint. While
we could change it to a relative test coded like compare_fuzzy_path_costs(),
there's a bigger problem here, which is that any fuzziness at all renders the
comparison function non-transitive, which could confuse qsort() to the point
of delivering completely wrong results. So forget the whole thing and just
do an exact comparison.
that the Mackert-Lohman formula applies across all the repetitions of the
nestloop, not just each scan independently. We use the M-L formula to
estimate the number of pages fetched from the index as well as from the table;
that isn't what it was designed for, but it seems reasonably applicable
anyway. This makes large numbers of repetitions look much cheaper than
before, which accords with many reports we've received of overestimation
of the cost of a nestloop. Also, change the index access cost model to
charge random_page_cost per index leaf page touched, while explicitly
not counting anything for access to metapage or upper tree pages. This
may all need tweaking after we get some field experience, but in simple
tests it seems to be giving saner results than before. The main thing
is to get the infrastructure in place to let cost_index() and amcostestimate
functions take repeated scans into account at all. Per my recent proposal.
Note: this patch changes pg_proc.h, but I did not force initdb because
the changes are basically cosmetic --- the system does not look into
pg_proc to decide how to call an index amcostestimate function, and
there's no way to call such a function from SQL at all.
cost_nonsequential_access() is really totally inappropriate for its only
remaining use, namely estimating I/O costs in cost_sort(). The routine
was designed on the assumption that disk caching might eliminate the need
for some re-reads on a random basis, but there's nothing very random in
that sense about sort's access pattern --- it'll always be picking up the
oldest outputs. If we had a good fix on the effective cache size we
might consider charging zero for I/O unless the sort temp file size
exceeds it, but that's probably putting much too much faith in the
parameter. Instead just drop the logic in favor of a fixed compromise
between seq_page_cost and random_page_cost per page of sort I/O.
This shouldn't affect simple indexscans much, while for bitmap scans that
are touching a lot of index rows, this seems to bring the estimates more
in line with reality. Per recent discussion.
assumed that a sequential page fetch has cost 1.0. This patch doesn't
in itself change the system's behavior at all, but it opens the door to
people adopting other units of measurement for EXPLAIN costs. Also, if
we ever decide it's worth inventing per-tablespace access cost settings,
this change provides a workable intellectual framework for that.
for LC_MESSAGES; instead, just press forward, leaving the effective setting
at 'C'. There is not any very good reason to complain when we are going
to replace the value soon with whatever postgresql.conf says. This change
should solve the occasionally-reported problem of initdb failing with
'failed to initialize lc_messages'; the current theory is that that is
a reflection of either wrong LANG/LC_MESSAGES or completely broken locale
support.
and there's only one place that's a kluge, ie, appendStringLiteralConn.
Note that pg_dump itself doesn't use appendStringLiteralConn, so its
behavior is not affected; only the other utility programs care.
o turns off escape_string_warning in pg_dumpall.c
o optionally use E'' for \password (undocumented option?)
o honor standard_conforming_strings for \copy (but not
support literal E'' strings)
o optionally use E'' for \d commands
o turn off escape_string_warning for createdb, createuser,
droplang
as this seems only likely to create headaches for module developers. Put
the macro in the pre-existing fmgr.h file instead. Avoid being too cute
about how many fields we can cram into a word, and avoid trying to fetch
from a library we've already unlinked.
Along the way, it occurred to me that the magic block really ought to be
'const' so it can be stored in the program text area. Do the same for
the existing data blocks for PG_FUNCTION_INFO_V1 functions.
It now only checks four things:
Major version number (7.4 or 8.1 for example)
NAMEDATALEN
FUNC_MAX_ARGS
INDEX_MAX_KEYS
The three constants were chosen because:
1. We document them in the config page in the docs
2. We mark them as changeable in pg_config_manual.h
3. Changing any of these will break some of the more popular modules:
FUNC_MAX_ARGS changes the fmgr interface; every module uses this.
NAMEDATALEN changes the syscache interface; every PL as well as tsearch uses this.
INDEX_MAX_KEYS breaks tsearch and anything using GiST.
Martijn van Oosterhout
---------------------------------------------------------------------------
Add dynamic record inspection to PL/PgSQL, useful for generic triggers:
tval2 := r.(cname);
or
columns := r.(*);
Titus von Boxberg
---------------------------------------------------------------------------
Delay write of pg_stats file to once every five minutes, during
shutdown, or when requested by a backend:
It changes so the file is only written once every 5 minutes (changeable
of course, I just picked something) instead of once every half second.
It's still written when the stats collector shuts down, just as before.
And it is now also written on backend request. A backend requests a
rewrite by simply sending a special stats message. It operates on the
assumption that the backends aren't actually going to read the
statistics file very often, compared to how frequently it's written today.
Magnus Hagander
and standard_conforming_strings; likewise for the other client programs
that need it. As per previous discussion, a pg_dump dump now conforms
to the standard_conforming_strings setting of the source database.
We don't use E'' syntax in the dump, thereby improving portability of
the SQL. I added a SET escape_string_warning = off command to keep
the dumps from getting a lot of back-chatter from that.
Per Coverity bug #304. Thanks to Martijn van Oosterhout for reporting it.
Zero out the pointer fields of PGresult so that these mistakes are more
easily caught, per discussion.
current setting of standard_conforming_strings to decide how to quote
strings that will be used later. There is much more to do here but
this particular change breaks the build on Windows, so fix it now.
JOIN, which I removed in a recent fit of over-optimism that we wouldn't
have any future use for it. Now it's needed to support disambiguating
WITH CHECK OPTION from WITH TIME ZONE. As proof of concept, add stub
grammar productions for WITH CHECK OPTION.
'off'. This allows pg_dump output with standard_conforming_strings =
'on' to generate proper strings that can be loaded into other databases
without the backslash doubling we typically do. I have added the
dumping of the standard_conforming_strings value to pg_dump.
I also added standard backslash handling for plpgsql.
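For example (a sketch of the resulting dump behavior):
set standard_conforming_strings = on;
select 'C:\data\file';  -- backslash is an ordinary character here
With the setting 'off', the same value would have to be dumped with
doubled backslashes.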
per-call overhead is quite significant, at least on Linux: whatever
it's doing is more than just shoving the bytes into a buffer. Buffering
the data so we can call fwrite() just once per row seems to be a win.
kept but now deprecated. Patch from Adam Sjøgren. Add regression test to
show plperl trigger data (Andrew).
TBD: apply similar changes to plpgsql, plpython and pltcl.
* some refactoring and code simplification in gistutil.c and gist.c
* in some cases a user-defined picksplit method can now be
called for non-first columns of an index, but there
is room for more improvement here.
* small fix of docs related to NULL support.
and transaction visibility fields of tuples being sorted. These are
always uninteresting in a tuple being sorted (if the fields were actually
selected, they'd have been pulled out into user columns beforehand).
This saves about 24 bytes per row being sorted, which is a useful savings
for any but the widest of sort rows. Per recent discussion.
any use in the past many years, we'd have made some effort to include
them in all executor node types; but in fact they were only in
nodeAppend.c and nodeIndexscan.c, up until I copied nodeIndexscan.c's
occurrence into the new bitmap node types. Remove some other unused
macros in execdebug.h, too. Some day the whole header probably ought to
go away in favor of better-designed facilities.
and standard_conforming_strings. The encoding changes are needed for proper
escaping in multibyte encodings, as per the SQL-injection vulnerabilities
noted in CVE-2006-2313 and CVE-2006-2314. Concurrent fixes are being applied
to the server to ensure that it rejects queries that may have been corrupted
by attempted SQL injection, but this merely guarantees that unpatched clients
will fail rather than allow injection. An actual fix requires changing the
client-side code. While at it we have also fixed these routines to understand
about standard_conforming_strings, so that the upcoming changeover to SQL-spec
string syntax can be somewhat transparent to client code.
Since the existing API of PQescapeString and PQescapeBytea provides no way to
inform them which settings are in use, these functions are now deprecated in
favor of new functions PQescapeStringConn and PQescapeByteaConn. The new
functions take the PGconn to which the string will be sent as an additional
parameter, and look inside the connection structure to determine what to do.
So as to provide some functionality for clients using the old functions,
libpq stores the latest encoding and standard_conforming_strings values
received from the backend in static variables, and the old functions consult
these variables. This will work reliably in clients using only one Postgres
connection at a time, or even multiple connections if they all use the same
encoding and string syntax settings; which should cover many practical
scenarios.
Clients that use homebrew escaping methods, such as PHP's addslashes()
function or even hardwired regexp substitution, will require extra effort
to fix :-(. It is strongly recommended that such code be replaced by use of
PQescapeStringConn/PQescapeByteaConn if at all feasible.
parser will allow "\'" to be used to represent a literal quote mark. The
"\'" representation has been deprecated for some time in favor of the
SQL-standard representation "''" (two single quote marks), but it has been
used often enough that just disallowing it immediately won't do. Hence
backslash_quote allows the settings "on", "off", and "safe_encoding",
the last meaning to allow "\'" only if client_encoding is a valid server
encoding. That is now the default, and the reason is that in encodings
such as SJIS that allow 0x5c (ASCII backslash) to be the last byte of a
multibyte character, accepting "\'" allows SQL-injection attacks as per
CVE-2006-2314 (further details will be published after release). The
"on" setting is available for backward compatibility, but it must not be
used with clients that are exposed to untrusted input.
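For example, under the default safe_encoding setting with a safe client
encoding (sketch):
select 'don''t';  -- SQL-standard form, always accepted
select 'don\'t';  -- accepted only because the encoding is safe
With backslash_quote = off, the second form is rejected outright.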
Thanks to Akio Ishida and Yasuo Ohgaki for identifying this security issue.
characters in all cases. Formerly we mostly just threw warnings for invalid
input, and failed to detect it at all if no encoding conversion was required.
The tighter check is needed to defend against SQL-injection attacks as per
CVE-2006-2313 (further details will be published after release). Embedded
zero (null) bytes will be rejected as well. The checks are applied during
input to the backend (receipt from client or COPY IN), so it no longer seems
necessary to check in textin() and related routines; any string arriving at
those functions will already have been validated. Conversion failure
reporting (for characters with no equivalent in the destination encoding)
has been cleaned up and made consistent while at it.
Also, fix a few longstanding errors in little-used encoding conversion
routines: win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic,
mic_to_euc_tw were all broken to varying extents.
Patches by Tatsuo Ishii and Tom Lane. Thanks to Akio Ishida and Yasuo Ohgaki
for identifying the security issues.
issued by autovacuum. Add accessor functions to them, and use those in the
pg_stat_*_tables system views.
Catalog version bumped due to changes in the pgstat views and the pgstat file.
Patch from Larry Rosenman, minor improvements by me.
deciding whether a potential additional indexscan is redundant or not. As now
coded, any use of a partial index that was already used in a previous AND arm
will be rejected as redundant. This might be overly restrictive, but not
considering the point at all is definitely bad, as per example in bug #2441
from Arjen van der Meijden. In particular, a clauseless scan of a partial
index was *never* considered redundant by the previous coding, and that's
surely wrong. Being more flexible would also require some consideration
of how not to double-count the index predicate's selectivity.
the partial index predicate in the scan's "recheck condition". Otherwise,
if the scan becomes lossy for lack of bitmap memory, we would fail to enforce
that returned rows satisfy the predicate. Noted while studying bug #2441
from Arjen van der Meijden.
condition: when there are multiple possible index paths involving
ScalarArrayOpExprs, they are logically to be ANDed together not ORed.
This thinko was a direct consequence of trying to put the processing
inside generate_bitmap_or_paths(), which I now see was a bit too cute.
So pull it out and make the callers do it separately (there are only two
that need it anyway). Partially responds to bug #2441 from Arjen van der Meijden.
There are some additional infelicities exposed by his example, but they
are also in 8.1.x, while this mistake is not.
sections are no longer nested. All user-defined functions are now
called outside critical sections. Small improvements in the WAL
protocol.
TODO: improve XLOG replay
always has been, because it's not got any .globl declaration! We've
been relying on the solaris_sparc.s code instead. Rip it out.
(Not back-patched, since this is just cosmetic cleanup.)
throw warnings for 100%-SQL-standard constructs, clean up some minor
infelicities, try to un-break ecpg to the best of my ability. (It's not clear
how ecpg is going to find out the setting of standard_conforming_strings,
though.) I think pg_dump still needs work, too.
(relpages/reltuples). To do this, create formal support in heapam.c for
"overwrite" tuple updates (including xlog replay capability) and use that
instead of the ad-hoc overwrites we'd been using in VACUUM and CREATE INDEX.
Take the responsibility for updating stats during CREATE INDEX out of the
individual index AMs, and do it where it belongs, in catalog/index.c. Aside
from being more modular, this avoids having to update the same tuple twice in
some paths through CREATE INDEX. It's probably not measurably faster, but
for sure it's a lot cleaner than before.
into a single mostly-physical-order scan of the index. This requires some
ticklish interlocking considerations, but should create no material
performance impact on normal index operations (at least given the
already-committed changes to make scans work a page at a time). VACUUM
itself should get significantly faster in any index that's degenerated to a
very nonlinear page order. Also, we save one pass over the index entirely,
except in the case where there were no deletions to do and so only one pass
happened anyway.
Original patch by Heikki Linnakangas, rework by Tom Lane.
btgettuple and btgetmulti). This eliminates the problem of "re-finding" the
exact stopping point, since the stopping point is effectively always a page
boundary, and index items are never moved across pre-existing page boundaries.
A small penalty is that the keys_are_unique optimization is effectively
disabled (and, therefore, is removed in this patch), causing us to apply
_bt_checkkeys() to at least one more tuple than necessary when looking up a
unique key. However, the advantages for non-unique cases seem great enough to
accept this tradeoff. Aside from simplifying and (sometimes) speeding up the
indexscan code, this will allow us to reimplement btbulkdelete as a largely
sequential scan instead of index-order traversal, thereby significantly
reducing the cost of VACUUM. Those changes will come in a separate patch.
Original patch by Heikki Linnakangas, rework by Tom Lane.
it's not necessary to have three separate calls anymore. This patch also
fixes things so we don't try to read pg_internal.init until after we've
obtained lock on the target database; which was fairly harmless, but it's
certainly cleaner this way.
The former approach used ExclusiveLock on pg_database, which being a
cluster-wide lock meant only one of these operations could proceed at
a time; worse, it also blocked all incoming connections in ReverifyMyDatabase.
Now that we have LockSharedObject(), we can use locks of different types
applied to databases considered as objects. This allows much more
flexible management of the interlocking: two CREATE DATABASEs need not
block each other, and need not block connections except to the template
database being used. Similarly DROP DATABASE doesn't block unrelated
operations. The locking used in flatfiles.c is also much narrower in
scope than before. Per recent proposal.
in various places that were previously doing ad hoc pg_database searches.
This may speed up database-related privilege checks a little bit, but
the main motivation is to eliminate the performance reason for having
ReverifyMyDatabase do such a lot of stuff (viz, avoiding repeat scans
of pg_database during backend startup). The locking reason for having
that routine is about to go away, and it'd be good to have the option
to break it up.
initPlan sets a parameter for another. This could not (I think) happen before
8.1, but it's possible now because the initPlans generated by MIN/MAX
optimization might themselves use initPlans. We attach those initPlans as
siblings of the MIN/MAX ones, not children, to avoid duplicate computation
when multiple MIN/MAX aggregates are present; so this leads to the case of an
initPlan needing the result of a sibling initPlan, which is not possible with
ordinary query nesting. Hadn't been noticed because in most contexts having
too much stuff listed in extParam is fairly harmless. Fixes "plan should not
reference subplan's variable" bug reported by Catalin Pitis.
This formulation requires every AM to provide amvacuumcleanup, unlike before,
but it's surely a whole lot cleaner. Also, add an 'amstorage' column to
pg_am so that we can get rid of hardwired knowledge in DefineOpClass().
the union of its child relations as well. This might have been a good idea
when it was originally coded, but it's a fatally bad idea when inheritance is
being used for partitioning. It's better to have no stats at all than
completely misleading stats. Per report from Mark Liberman.
The bug arguably exists all the way back, but I've only patched HEAD and 8.1
because we weren't particularly trying to support partitioning before 8.1.
Eventually we ought to look at deriving union statistics instead of just
punting, but for now the drop kick looks good.
input datatypes given, and use this before trying OpernameGetCandidates.
This is faster than the old method when there's an exact match, and it
does not seem materially slower when there's not. And it definitely
makes some of the callers cleaner, because they didn't really want to
know about a list of candidates anyway. Per discussion with Atsushi Ogawa.
support both FOR UPDATE and FOR SHARE in one command, as well as both
NOWAIT and normal WAIT behavior. The more general code is actually
simpler and cleaner.
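For example, something like this is now accepted (table and column
names are made up):
select * from accounts a join branches b on a.branch = b.id
    for update of a nowait
    for share of b;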
MIN/MAX not be converted to use an index if the query WHERE clause contains
any volatile functions or subplans.
I had originally feared that the conversion might alter the behavior of such a
query with respect to a volatile function. Well, so it might, but only in the
sense that the function would get evaluated at a subset of the table rows
rather than all of them --- and we have never made any such guarantee anyway.
(For instance, we don't refuse to use an index for an ordinary non-aggregate
query when one of the non-indexable filter conditions contains a volatile
function.)
The prohibition against subplans was because of worry that that case wasn't
adequately tested, which it wasn't, but it turns out to be possible to make
8.1 fail anyway:
regression=# select o.ten, (select max(unique2) from tenk1 i where ten = o.ten
or ten = (select f1 from int4_tbl limit 1)) from tenk1 o;
ERROR: direct correlated subquery unsupported as initplan
This is due to bogus code in SS_make_initplan_from_plan (it's an initplan,
ergo it can't have any parParams). Having fixed that, we might as well allow
subplans as well as initplans.
be exported on Linux and Darwin. We already did this on Windows but
that's not enough, as evidenced by the fact that libecpg had an unexpected
dependency on one such symbol. We should try to do it on more platforms.
Fix ecpg's oversight, and bump libpq's major .so version number to reflect
the unwanted but nonetheless real ABI break.
cases. This was not needed in the existing uses within selfuncs.c, but if
we're gonna export it for general use, the extra generality seems helpful.
Motivated by looking at ltree example.
1) named parameters additionally to args[]
2) return composite-types from plpython as dictionary
3) return result-set from plpython as list, iterator or generator
Hannu Krosing
Sven Suursoho
The SSL code in libpq does some processing with DH parameters:
SSL_CTX_set_tmp_dh_callback()
This function is marked as server use only[1], the client always uses
the DH parameters in the server, so all the code in the client dealing
with the DH parameters is useless. This patch removes it.
It's not clear why the code was added in the first place, it's been
there almost since the beginning[2]. At the time there was a suggestion
of merging the front-end and backend SSL code, but looking at the
changes since, that seems unlikely.
As a further example, the s_server program allows you to specify DH
params, but s_client doesn't. In the GnuTLS documentation under
gnutls_dh_params_generate2() it says[3]:
Also note that the DH parameters are only useful to servers. Since
clients use the parameters sent by the server, it's of no use to call
this in client side.
set to the large object context ("fscxt"), as this is inevitably a source of
transaction-duration memory leaks. Not sure why we'd not noticed it before;
maybe people weren't touching a whole lot of LOs in the same transaction
before the 8.1 pg_dump changes. Per report from Wayne Conrad.
Backpatched as far as 8.1, but the problem doubtless goes all the way back.
I'm disinclined to spend the time to try to verify that the older branches
would still work if patched, seeing that this code was significantly modified
for 8.0 and again for 8.1, and that we don't have any trouble reports before
8.1. (Maybe the leaks were smaller before?)
thereby saving a visit to the metapage in most index searches/updates.
This wouldn't actually save any I/O (since in the old regime the metapage
generally stayed in cache anyway), but it does provide a useful decrease
in bufmgr traffic in high-contention scenarios. Per my recent proposal.
implied by the predicate of a partial index being used to scan a table.
However, this optimization is unsafe in an UPDATE, DELETE, or SELECT FOR
UPDATE query, because the quals need to be rechecked by EvalPlanQual if
there's an update conflict. Per example from Jean-Samuel Reynaud.
transaction_timestamp() (just like now()).
Also update statement_timeout to mention it is statement arrival time
that is measured.
Catalog version updated.
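For illustration (sketch):
select transaction_timestamp() = now();  -- always true, same value
select statement_timestamp();            -- time the client message arrived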
not named ones, and replace linear searches of the list with array indexing.
The named-parameter support has been dead code for many years anyway,
and recent profiling suggests that the searching was costing a noticeable
amount of performance for complex queries.
to track the number of LWLock acquisitions and the number of times we
block waiting for an LWLock, on a per-process basis. After having needed
this twice in the past few months, seems like it should go into CVS.
of rejecting palloc(0). Also, tweak like_selectivity() to avoid assuming
the presented pattern is nonempty; although that assumption is valid,
it doesn't really help much, and the new coding is more correct anyway
since it properly handles redundant wildcards. In combination these
changes should eliminate a Coverity warning noted by Martijn.
whenever we start to read within that file. The first page carries
extra identification information that really ought to be checked, but
as the code stood, this was only checked when we switched sequentially
into a new WAL file, or if by chance the starting checkpoint record was
within the first page. This patch ensures that we will detect bogus
'long header' information before we start replaying the WAL sequence.
symbol for PPC64 hardware. I hadn't known that Apple supported PPC64 at
all, but darn if there aren't 64-bit variant libraries in OS X as well
as support in their gcc.
no evidence that any currently-supported platform needs this, and good
reason to think that any platform that did need it couldn't use the static
libraries anyway --- libpq, at least, has circular references. Removing
the code shuts up tsort warnings about the circular references on some
platforms.
8.0), and add as suggestion to use log_min_error_statement for this
purpose. I also fixed the code so the first EXECUTE has its prepare,
rather than the last, which is what was in the current code. Also remove
"protocol" prefix for SQL EXECUTE output because it is not accurate.
Backpatch to 8.1.X.
to occur between pg_start_backup() and pg_stop_backup(), even if the GUC
setting full_page_writes is OFF. Per discussion, doing this in combination
with the already-existing checkpoint during pg_start_backup() should ensure
safety against partial page updates being included in the backup. We do
not have to force full page writes to occur during normal PITR operation,
as I had first feared.
CREATE AGGREGATE aggname (input_type) (parameter_list)
along with the old syntax where the input type was named in the parameter
list. This fits more naturally with the way that the aggregate is identified
in DROP AGGREGATE and other utility commands; furthermore it has a natural
extension to handle multiple-input aggregates, where the basetype-parameter
method would get ugly. In fact, this commit fixes the grammar and all the
utility commands to support multiple-input aggregates; but DefineAggregate
rejects it because the executor isn't fixed yet.
I didn't do anything about treating agg(*) as a zero-input aggregate instead
of artificially making it a one-input aggregate, but that should be considered
in combination with supporting multi-input aggregates.
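For illustration, the two equivalent forms (aggregate names are made
up; int4pl is the built-in int4 addition function):
create aggregate mysum (int4) (
    sfunc = int4pl,
    stype = int4,
    initcond = '0'
);
create aggregate mysum_old (
    basetype = int4,
    sfunc = int4pl,
    stype = int4,
    initcond = '0'
);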
update no-longer-existing pages to fall through as no-ops, but make a note
of each page number referenced by such records. If we don't see a later
XLOG entry dropping the table or truncating away the page, complain at
the end of XLOG replay. Since this fixes the known failure mode for
full_page_writes = off, revert my previous band-aid patch that disabled
that GUC variable.
If a process abandons a wait in LockBufferForCleanup (in practice,
only happens if someone cancels a VACUUM) just before someone else
sends it a signal indicating the buffer is available, it was possible
for the wakeup to remain in the process' semaphore, causing misbehavior
next time the process waited for an lmgr lock. Rather than try to
prevent the race condition directly, it seems best to make the lock
manager robust against leftover wakeups, by having it repeat waiting
on the semaphore if the lock has not actually been granted or denied
yet.
alternatives ("|" symbol). The original coding allowed the added ^ and $
constraints to be absorbed into the first and last alternatives, producing
a pattern that would match more than it should. Per report from Eric Noriega.
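For example, with the fix the pattern must match the whole string
(sketch):
select 'foox' similar to 'foo|bar';  -- false; could wrongly be true before
select 'foo'  similar to 'foo|bar';  -- true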
I also changed the pattern to add an ARE director ("***:"), ensuring that
SIMILAR TO patterns do not change behavior if regex_flavor is changed. This
is necessary to make the non-capturing parentheses work, and seems like a
good idea on general principles.
Back-patched as far as 7.4. 7.3 also has the bug, but a fix seems impractical
because that version's regex engine doesn't have non-capturing parens.
upper-level insertion completes a previously-seen split, we cannot simply grab
the downlink block number out of the buffer, because the buffer could contain
a later state of the page --- or perhaps the page doesn't even exist at all
any more, due to relation truncation. These possibilities have been masked up
to now because the use of full_page_writes effectively ensured that no xlog
replay routine ever actually saw a page state newer than its own change.
Since we're deprecating full_page_writes in 8.1.*, there's no need to fix this
in existing release branches, but we need a fix in HEAD if we want to have any
hope of re-allowing full_page_writes. Accordingly, adjust the contents of
btree WAL records so that we can always get the downlink block number from the
WAL record rather than having to depend on buffer contents. Per report from
Kevin Grittner and Peter Brant.
Improve a few comments in related code while at it.
# Need to recompile any libpgport object files because we need these
# object files to use the same compile flags as libpq. If we used
# the object files from libpgport, this would not be true on all
# platforms.
had a bad side-effect: it stopped finding plans that involved BitmapAnd
combinations of indexscans using both join and non-join conditions. Instead,
make choose_bitmap_and more aggressive about detecting redundancies between
BitmapOr subplans.
at least one join condition as an indexqual. Before bitmap indexscans, this
oversight didn't really cost much except for redundantly considering the
same join paths twice; but as of 8.1 it could result in silly bitmap scans
that would do the same BitmapOr twice and then BitmapAnd these together :-(
when trying to locate the referent of a RECORD variable. This fixes the
'record type has not been registered' failure reported by Stefan
Kaltenbrunner about a month ago. A side effect of the way I chose to
fix it is that most variable references in join conditions will now be
properly labeled with the variable's source table name, instead of the
not-too-helpful 'outer' or 'inner' we used to use.
identically named user and group: we merge these into a single entity
with LOGIN permission. Also, add ORDER BY commands to ensure consistent
dump ordering, for ease of comparing outputs from different installations.
output, ie, no OR immediately below an OR. Otherwise we get Asserts or
wrong answers for cases such as
select * from tenk1 a, tenk1 b
where (a.ten = b.ten and (a.unique1 = 100 or a.unique1 = 101))
or (a.hundred = b.hundred and a.unique1 = 42);
Per report from Rafael Martinez Guerrero.
Per recent discussion, this seems to be making the stats less accurate
rather than more so, particularly on Windows where PID values may be
reused very quickly. Patch by Peter Brant.
generated text files. Fix build of that file, too.
Put the text files in the right place during make dist, so there are no
extra manual steps required anymore.
that apply the necessary domain constraint checks immediately. This fixes
cases where domain constraints went unchecked for statement parameters,
PL function local variables and results, etc. We can also eliminate existing
special cases for domains in places that had gotten it right, eg COPY.
Also, allow domains over domains (base of a domain is another domain type).
This almost worked before, but was disallowed because the original patch
hadn't gotten it quite right.
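For example (hypothetical domain names):
create domain posint as int check (value > 0);
create domain pctint as posint check (value <= 100);  -- domain over a domain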
files of the same languages. That way, similar or equal translations in
different programs are automatically propagated and the life of translators
becomes a little bit easier.
XLOG_BLCKSZ. This ought to help in preventing configuration mismatch
problems if anyone tries to ship PITR files between servers compiled
with different XLOG_BLCKSZ settings. Simon Riggs
instead a dedicated symbol. This probably makes no functional difference
for likely values of BLCKSZ, but it makes the intent clearer.
Simon Riggs, minor editorialization by Tom Lane.
functions are not strict, they will be called (passing a NULL first parameter)
during any attempt to input a NULL value of their datatype. Currently, all
our input functions are strict and so this commit does not change any
behavior. However, this will make it possible to build domain input functions
that centralize checking of domain constraints, thereby closing numerous holes
in our domain support, as per previous discussion.
While at it, I took the opportunity to introduce convenience functions
InputFunctionCall, OutputFunctionCall, etc to use in code that calls I/O
functions. This eliminates a lot of grotty-looking casts, but the main
motivation is to make it easier to grep for these places if we ever need
to touch them again.
used within WAL files. Historically this was the same as the data file
BLCKSZ, but there's no necessary connection, and it's possible that
performance gains might ensue from reducing XLOG_BLCKSZ. In any case
distinguishing two symbols should improve code clarity. This commit
does not actually change the page size, only provide the infrastructure
to make it possible to do so. initdb forced because of addition of a
field to pg_control.
Mark Wong, with some help from Simon Riggs and Tom Lane.
to fix regressions introduced in the recent patch adding additional
\connect options. This is based on work by Volkan YAZICI, although
this version of the patch doesn't bear much resemblance to Volkan's
version.
\connect takes 4 optional arguments: database name, user name, host
name, and port number. If any of those parameters are omitted or
specified as "-", the value of that parameter from the previous
connection is used instead; if there is no previous connection,
the libpq default is used. Note that this behavior makes it
impossible to reuse the libpq defaults without quitting psql and
restarting it; I don't really see the use case for needing to do
that.
whitespace issues nearby.
DROP OWNED BY is actually a bit kludgy, but it seems better to do it this way
rather than duplicating the words_after_create list just to add a single
element.
incrementally by successive inserts rather than by sorting the data.
We were only using the slow path during bootstrap, apparently because
when first written it failed during bootstrap --- but it works fine now
AFAICT. Removing it saves a hundred or so lines of code and produces
noticeably (~10%) smaller initial states of the system catalog indexes.
While that won't make much difference for heavily-modified catalogs,
for the more static ones there may be a useful long-term performance
improvement.
misleadingly-named WriteBuffer routine, and instead require routines that
change buffer pages to call MarkBufferDirty (which does exactly what it says).
We also require that they do so before calling XLogInsert; this takes care of
the synchronization requirement documented in SyncOneBuffer. Note that
because bufmgr takes the buffer content lock (in shared mode) while writing
out any buffer, it doesn't matter whether MarkBufferDirty is executed before
the buffer content change is complete, so long as the content change is
completed before releasing exclusive lock on the buffer. So it's OK to set
the dirtybit before we fill in the LSN.
This eliminates the former kluge of needing to set the dirtybit in LockBuffer.
Aside from making the code more transparent, we can also add some new
debugging assertions, in particular that the caller of MarkBufferDirty must
hold the buffer content lock, not merely a pin.
torn-page problems. This introduces some issues of its own, mainly
that there are now some critical sections of unreasonably broad scope,
but it's a step forward anyway. Further cleanup will require some
code refactoring that I'd prefer to get Oleg and Teodor involved in.
startup or recovery process. Since such a process isn't a real backend,
pgstat.c gets confused. This accounts for recent reports of strange
"invalid server process ID -1" log messages during crash recovery.
There isn't any point in attempting to make the report, since we'll discard
stats in such scenarios anyhow.
This commit doesn't make much functional change, but it does eliminate some
duplicated code --- for instance, PageIsNew tests are now done inside
XLogReadBuffer rather than by each caller.
The GIST xlog code still needs a lot of love, but I'll worry about that
separately.
have symlinks (ie, Windows). Although it'll never be called on to do anything
useful during normal operation on such a platform, it's still needed to
re-create dropped directories during WAL replay.
failures even when the hardware and OS did nothing wrong. Per recent analysis
of a problem report from Alex Bahdushka.
For the moment I've just diked out the test of the parameter, rather than
removing the GUC infrastructure and documentation, in case we conclude that
there's something salvageable there. There seems no chance of it being
resurrected in the 8.1 branch though.
passed extend = true whenever we are reading a page we intend to reinitialize
completely, even if we think the page "should exist". This is because it
might indeed not exist, if the relation got truncated sometime after the
current xlog record was made and before the crash we're trying to recover
from. These two thinkos appear to explain both of the old bug reports
discussed here:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg01369.php
tuples as needed "to keep VACUUM from complaining", but actually there is
a more compelling reason to do it: failure to do so violates MVCC semantics.
This is because a pre-existing serializable transaction might try to use
the index after we finish (re)building it, and it might fail to find tuples
it should be able to see. We got this mostly right, but not in the case
of partial indexes: the code mistakenly discarded recently-dead tuples for
partial indexes. Fix that, and adjust the comments.
when an error occurs during xlog replay. Also, replace the former risky
'write into a fixed-size buffer with no overflow detection' API for XLOG
record description routines; use an expansible StringInfo instead. (The
latter accounts for most of the patch bulk.)
Qingqing Zhou
command or expression, rather than one copy for each textual occurrence as
it did before. This might result in some small performance improvement,
but the compelling reason to do it is that not doing so can result in
unexpected grouping failures because the main SQL parser won't see different
parameter numbers as equivalent. Add a regression test for the failure case.
Per report from Robert Davidson.
the logic it contained to switch to insertion sort for near-sorted input was
in fact a big loss, because it could fairly easily be fooled into applying
insertion sort to large subfiles that weren't all that well ordered. Remove
that, and instead add a simple check for already-perfectly-sorted input, as
per suggestion from Dann Corbit. This adds at worst O(N*lgN) overhead, and
usually far less, while sometimes allowing a subfile sort to finish in O(N)
time. Preliminary testing says this is an improvement over the basic
Bentley & McIlroy code for many nonrandom inputs, and it costs almost
nothing when the input is random.
> 1) Fix the problems with the \s command.
> When the saveHistory is executed by the \s command we must not do the
> conversion \n -> \x01 (per
> http://archives.postgresql.org/pgsql-hackers/2006-03/msg00317.php )
>
> 2) Fix the handling of Ctrl+C
>
> Now when you do
> wsdb=# select 'your long query here '
> wsdb-#
> and afterwards press Ctrl+C, the line "select 'your long query here '"
> will be in the history
>
> (partly per
> http://archives.postgresql.org/pgsql-hackers/2006-03/msg00297.php )
>
> 3) Fix the handling of commands with unclosed brackets, quotes, or
> double quotes. (now those commands are not split into parts...)
>
> 4) Fix the behaviour when SINGLELINE mode is used. (before it was
> almost broken ;(
Sergey E. Koposov
byte-swapping on the port number, which causes the call to fail on Intel
Macs.
This patch uses htons() instead of htonl() and fixes this bug.
Ashley Clark
2005-05-13. When we find that a new inner tuple can't possibly match any
outer tuple (because it contains a NULL), we can't immediately skip the
tuple when we are in NEXTINNER state. Doing so can lead to emitting
multiple copies of the tuple in FillInner mode, because we may rescan the
tuple after returning to a previous marked tuple. Instead, proceed to
NEXTOUTER state the same as we used to do. After we've found that there's
no need to return to the marked position, we can go to SKIPINNER_ADVANCE
state instead of SKIP_TEST when the inner tuple is unmatchable; this
preserves the performance improvement. Per bug report from Bruce.
I also made a couple of cosmetic code rearrangements and added a regression
test for the problem.
The original coding stored the raw parser output (ColumnDef and TypeName
nodes) which was ugly, bulky, and wrong because it failed to create any
dependency on the referenced datatype --- and in fact would not track type
renamings and suchlike. Instead store a list of column type OIDs in the
RTE.
Also fix up general failure of recordDependencyOnExpr to do anything sane
about recording dependencies on datatypes. While there are many cases where
there will be an indirect dependency (eg if an operator returns a datatype,
the dependency on the operator is enough), we do have to record the datatype
as a separate dependency in examples like CoerceToDomain.
initdb forced because of change of stored rules.
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
similar constants if they were not previously defined. All these
constants must be defined by limits.h according to C89, so we can
safely assume they are present.
case where we run low on array slots before we run low on memory is much
more probable than I had thought, and so it's important to treat each
tape fairly in that case. To fix this, track per-tape slot allocations
just like we track per-tape space allocation. Also, in the FINALMERGE
code path avoid scanning all the input tapes when we really only need to
read from one. This should fix poor behavior with very large work_mem
as exhibited by Stefan Kaltenbrunner.
I didn't do anything about putting an upper bound on the number of tapes,
but maybe we should still consider that.
var_samp(), stddev_pop(), and stddev_samp(). var_samp() and stddev_samp()
are just renamings of the historical Postgres aggregates variance() and
stddev() -- the latter names have been kept for backward compatibility.
This patch includes updates for the documentation and regression tests.
The catversion has been bumped.
NB: SQL2003 requires that DISTINCT not be specified for any of these
aggregates. Per discussion on -patches, I have NOT implemented this
restriction: if the user asks for stddev(DISTINCT x), presumably they
know what they are doing.
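For illustration, the old names remain the 'sample' forms (sketch):
select variance(g) = var_samp(g)    as var_same,
       stddev(g)   = stddev_samp(g) as stddev_same
from generate_series(1, 10) g;  -- both true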
performance issue during regular merge passes not only the 'final merge'
case. The original design contemplated that there would never be more
than about one free block per 'tape', hence no need for an efficient
method of keeping the free blocks sorted. But given the later addition
of merge preread behavior in tuplesort.c, there is likely to be about
work_mem worth of free blocks, which is not so small ... and for that
matter the number of tapes isn't necessarily small anymore either. So
we'd better get rid of the assumption entirely. Instead, I'm assuming
that the usage pattern will involve alternation between merge preread
and writing of a new run. This makes it reasonable to just add blocks
to the list without sorting during successive ltsReleaseBlock calls,
and then do a qsort() when we start getting ltsGetFreeBlock() calls.
Experimentation seems to confirm that there aren't many qsort calls
relative to the number of ltsReleaseBlock/ltsGetFreeBlock calls.
we are doing the final merge pass on-the-fly, and not writing the data
back onto a 'tape', the number of free blocks in the tape set will become
large, leading to a lot of time wasted in ltsReleaseBlock(). There is
really no need to track the free blocks anymore in this state, so add a
simple shutoff switch. Per report from Stefan Kaltenbrunner.
(respectively) to rename yylex and related symbols. Some were doing
it this way already, while others used not-too-reliable sed hacks in
the Makefiles. It's all nice and consistent now.
not likely ever to be implemented seeing it's been removed from SQL2003.
This allows getting rid of the 'filter' version of yylex() that we had in
parser.c, which should save at least a few microseconds in parsing.
- new function justify_interval(interval)
- modified function justify_hours(interval)
- modified function justify_days(interval)
These functions are defined to meet the requirements as discussed in
this thread. Specifically:
- justify_hours makes certain the sign bit on the hours
matches the sign bit on the days. It only checks the
sign bit on the days, and not the months, when
determining if the hours should be positive or negative.
After the call, -24 < hours < 24.
- justify_days makes certain the sign bit on the days
matches the sign bit on the months. Its behavior does
not depend on the hours, nor does it modify the hours.
After the call, -30 < days < 30.
- justify_interval makes sure the sign bits on all three
fields months, days, and hours are all the same. After
the call, -24 < hours < 24 AND -30 < days < 30.
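For example, given the rules above:
SELECT justify_hours(interval '27 hours');          -- 1 day 03:00:00
SELECT justify_days(interval '35 days');            -- 1 mon 5 days
SELECT justify_interval(interval '1 mon -1 hour');  -- 29 days 23:00:00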
Mark Dilger
> I've now tested this patch at home w/ 8.2HEAD and it seems to fix the
> bug. I plan on testing it under 8.1.2 at work tomorrow with
> mod_auth_krb5, etc, and expect it'll work there. Assuming all goes
> well and unless someone objects I'll forward the patch to -patches.
> It'd be great to have this fixed as it'll allow us to use Kerberos to
> authenticate to phppgadmin and other web-based tools which use
> Postgres.
While playing with this patch under 8.1.2 at home I discovered a
mistake in how I manually applied one of the hunks to fe-auth.c.
Basically, the base code had changed and so the patch needed to be
modified slightly. This is because the code no longer either has a
freeable pointer under 'name' or has 'name' as NULL.
The attached patch correctly frees the string from pg_krb5_authname
(where it had been strdup'd) if and only if pg_krb5_authname returned
a string (as opposed to falling through and having name be set using
name = pw->name;). Also added a comment to this effect.
Backpatch to 8.1.X.
Stephen Frost
columns of the grouping clause to avoid redundant sorts. The optimizer
is not currently capable of doing this, so this patch implements a
simple hack in the analysis phase (transformGroupClause): if any
subset of the GROUP BY clause matches a prefix of the ORDER BY list,
that prefix is moved to the front of the GROUP BY clause. This
shouldn't change the semantics of the query, and allows a redundant
sort to be avoided for queries like "GROUP BY a, b ORDER BY b".
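For instance (hypothetical table "t" with columns "a" and "b"):
SELECT a, b, count(*) FROM t GROUP BY a, b ORDER BY b;
-- is treated as GROUP BY b, a, so a single sort on (b, a) serves
-- both the grouping step and the requested output order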
In particular, ensure that enlargement of the memtuples[] array doesn't
fall foul of MaxAllocSize when work_mem is very large, and don't bother
enlarging it if that would force an immediate switch into 'tape' mode anyway.
we'll go over to disk-based sort if we reach that limit.
This fixes Stefan Kaltenbrunner's observation that sorting can suffer an
'invalid memory alloc request size' failure when sort_mem is set large
enough. It's unfortunately not so easy to fix in 8.1 ...
> type
Wouldn't it be better to use the UINT64CONST macro? I realize this
file is Windows-only, but we do worry about more than one compiler
on that platform.
Kris Jurka
/dev/tty, but it isn't a device file and doesn't work as expected.
This fixes a known bug where psql does not prompt for a password on some
Win32 systems.
Backpatch to 8.1.X.
Robert Kinberg
relations are still checked for permissions etc as soon as they are
opened. The original form of the patch could hold exclusive lock for a
long time on relations that the user doesn't even have permissions to
access, let alone truncate.
checkpoint in the bgwriter. This forestalls overflow of the fsync request
queue, which is not fatal but causes considerable performance degradation
when it occurs (because backends then have to do their own fsyncs). Per
patch from Itagaki Takahiro, modified a little bit by me.
a need for it back in the neolithic era, but it's certainly dead code in
any PG release we would recognize as such. Since it forces an additional
network round trip to the backend, getting rid of it should provide some
small performance improvement for large-object-using clients.
then modified within the same transaction. The code was using a linked list
of active PLpgSQL_expr structs, which was OK when it was written because
plpgsql never released any parse data structures for the life of the backend.
But since Neil fixed plpgsql's memory management, elements of the linked list
could be freed, leading to crash when the list is chased. Per report and test
case from Kris Jurka.
make use of the recently added ability to create a shell type explicitly.
I also put in place some infrastructure to allow dump/no dump decisions
to be made separately for each database object, rather than the former
hardwired 'dump if in a dumpable schema' policy. This was needed anyway
for shell types so now seemed a convenient time to do it. The flexibility
isn't exposed to the user yet, but is ready for future extensions.
are unnecessarily allocated on the heap rather than the stack. If the
StringInfo doesn't outlive the stack frame in which it is created,
there is no need to allocate it on the heap via makeStringInfo() --
stack allocation is faster. While it's not a big deal unless the
code is in a critical path, I don't see a reason not to save a few
cycles -- using stack allocation is not less readable.
I also cleaned up a bit of code along the way: moved variable
declarations into a more tightly-enclosing scope where possible,
fixed some pointless copying of strings in dblink, etc.
more compliant with the error message style guide. In particular,
errdetail should begin with a capital letter and end with a period,
whereas errmsg should not. I also fixed a few related issues in
passing, such as fixing the repeated misspelling of "lexeme" in
contrib/tsearch2 (per Tom's suggestion).
creation of a shell type. This allows a less hacky way of dealing with
the mutual dependency between a datatype and its I/O functions: make a
shell type, then make the functions, then define the datatype fully.
We should fix pg_dump to handle things this way, but this commit just deals
with the backend.
Martijn van Oosterhout, with some corrections by Tom Lane.
(I didn't use his patch, however). A void-returning PL/Python function
must return None (from Python), which is translated into a void datum
(and *not* NULL) for Postgres. I also added some regression tests for
this functionality.
bits indicating which optional capabilities can actually be exercised
at runtime. This will allow Sort and Material nodes, and perhaps later
other nodes, to avoid unnecessary overhead in common cases.
This commit just adds the infrastructure and arranges to pass the correct
flag values down to plan nodes; none of the actual optimizations are here
yet. I'm committing this separately in case anyone wants to measure the
added overhead. (It should be negligible.)
Simon Riggs and Tom Lane
each tuple, as per my proposal of several days ago. Also, clean up
sort memory management by keeping all working data in a separate memory
context, and refine the handling of low-memory conditions.
the script is not executable as UCS_to_most.pl is checked into CVS. It also won't
pick up any custom setting of the perl version/location to use. This
patch calls perl scripts like $(PERL) $(srcdir)/script.pl.
Kris Jurka
possible ScanDirection alternatives rather than magic numbers
(-1, 0, 1). Also, use the ScanDirection macros in a few places
rather than directly checking whether `dir == ForwardScanDirection'
and the like. Per patch from James William Pye. His patch also
changed ScanDirection to be a "char" rather than an enum, which
I haven't applied.
by decompiling the typdefaultbin expression, not just printing the typdefault
text which may be out-of-date or assume the wrong schema search path. (It's
the same hazard as for adbin vs adsrc in column defaults.) The catalogs.sgml
spec for pg_type implies that the correct procedure is to look to
typdefaultbin first and consider typdefault only if typdefaultbin is NULL.
I made dumping of both domains and base types do that, even though in the
current backend code typdefaultbin is always correct for domains and
typdefault for base types --- might as well try to future-proof it a little.
Per bug report from Alexander Galler.
in leaking memory when invoking a PL/Python procedure that raises an
exception. Unfortunately this still leaks memory, but at least the
largest leak has been plugged.
This patch also fixes a reference counting mistake in PLy_modify_tuple()
for 8.0, 8.1 and HEAD: we don't actually own a reference to `platt', so
we shouldn't Py_DECREF() it.
allocates the control data. The per-tape buffers are allocated only
on first use. This saves memory in situations where tuplesort.c
overestimates the number of tapes needed (ie, there are fewer runs
than tapes). Also, this makes legitimate the coding in inittapes()
that includes tape buffer space in the maximum-memory calculation:
when inittapes runs, we've already expended the whole allowed memory
on tuple storage, and so we'd better not allocate all the tape buffers
until we've flushed some tuples out of memory.
with fixed merge order (fixed number of "tapes") was based on obsolete
assumptions, namely that tape drives are expensive. Since our "tapes"
are really just a couple of buffers, we can have a lot of them given
adequate workspace. This allows reduction of the number of merge passes
with consequent savings of I/O during large sorts.
Simon Riggs with some rework by Tom Lane
up a bunch of the support utilities.
In src/backend/utils/mb/Unicode remove nearly duplicate copies of the
UCS_to_XXX perl script and replace with one version to handle all generic
files. Update the Makefile so that it knows about all the map files.
This produces a slight difference in some of the map files, using a
uniform naming convention and not mapping the null character.
In src/backend/utils/mb/conversion_procs create a master utf8<->win
codepage function like the ISO 8859 versions instead of having a separate
handler for each conversion.
There is an externally visible change in the name of the win1258 to utf8
conversion. According to the documentation notes, it was named
incorrectly and this changes it to a standard name.
Running the Unicode mapping perl scripts has shown some additional mapping
changes in koi8r and iso8859-7.
we are not holding a buffer content lock; where it was, InterruptHoldoffCount
is positive and so we'd not respond to cancel signals as intended. Also
add missing vacuum_delay_point() call in btvacuumcleanup. This should fix
complaint from Evgeny Gridasov about failure to respond to SIGINT/SIGTERM
in a timely fashion (bug #2257).
option state hasn't been fully set up. This is possible via PQreset()
and might occur in other code paths too, so a state flag seems the
most robust solution. Per report from Arturs Zoldners.
Var referencing the subselect output. While this case could possibly be made
to work, it seems not worth expending effort on. Per report from Magnus
Naeslund(f).
id (CVE-2006-0553). Also fix related bug in SET SESSION AUTHORIZATION that
allows unprivileged users to crash the server, if it has been compiled with
Asserts enabled. The escalation-of-privilege risk exists only in 8.1.0-8.1.2.
However, the Assert-crash risk exists in all releases back to 7.3.
Thanks to Akio Ishida for reporting this problem.
---------------------------------------------------------------------------
> I've now tested this patch at home w/ 8.2HEAD and it seems to fix the
> bug. I plan on testing it under 8.1.2 at work tomorrow with
> mod_auth_krb5, etc, and expect it'll work there. Assuming all goes
> well and unless someone objects I'll forward the patch to -patches.
> It'd be great to have this fixed as it'll allow us to use Kerberos to
> authenticate to phppgadmin and other web-based tools which use
> Postgres.
While playing with this patch under 8.1.2 at home I discovered a
mistake in how I manually applied one of the hunks to fe-auth.c.
Basically, the base code had changed and so the patch needed to be
modified slightly. This is because the code no longer either has a
freeable pointer under 'name' or has 'name' as NULL.
The attached patch correctly frees the string from pg_krb5_authname
(where it had been strdup'd) if and only if pg_krb5_authname returned
a string (as opposed to falling through and having name be set using
name = pw->name;). Also added a comment to this effect.
Please review.
Stephen Frost (sfrost@snowman.net) wrote:
> True, but they're not being used where you'd expect. This seems to be
> something to do with the fact that it's not pg_authid which is being
> accessed, but rather the view pg_roles.
I looked into this and it seems the problem is that the view doesn't
get flattened into the main query because of the has_nullable_targetlist
limitation in prepjointree.c. That's triggered because pg_roles has
'********'::text AS rolpassword
which isn't nullable, meaning it would produce wrong behavior if
referenced above the outer join.
Ultimately, the reason this is a problem is that the planner deals only
in simple Vars while processing joins; it doesn't want to think about
expressions. I'm starting to think that it may be time to fix this,
because I've run into several related restrictions lately, but it seems
like a nontrivial project.
In the meantime, reducing the LEFT JOIN to pg_roles to a JOIN as per
Peter's suggestion seems like the best short-term workaround.
--enable-depend it often tries to create the .deps directory twice and
bails out when it already exists, due to a check-then-create race
condition. This patch prevents mkdir from returning an error.
Kris Jurka
consistently. This is mostly cosmetic right at the moment because
check_assignable() does nothing for ROW or RECORD datums, but that might
not always be so. This also syncs several different places that read
INTO target lists. They're just enough different that it seems
impractical to factor them into a single routine, but they surely
should be the same as much as possible.
the API of PQdsplen without bothering to fix its callers. Although
ReportSyntaxErrorPosition could probably do with more smarts about
handling control characters, for the moment I'll just get it back to
handling tabs consistently.
comments on cluster global objects like databases, tablespaces, and
roles.
It touches a lot of places, but not much in the way of big changes. The
only design decision I made was to duplicate the query and manipulation
functions rather than to try and have them handle both shared and local
comments. I believe this is simpler for the code and not an issue for
callers because they know what type of object they are dealing with.
This has resulted in a shobj_description function analogous to
obj_description and backend functions [Create/Delete]SharedComments
mirroring the existing [Create/Delete]Comments functions.
pg_shdescription.h goes into src/include/catalog/
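A usage sketch:
COMMENT ON DATABASE template1 IS 'default template for new databases';
SELECT shobj_description(oid, 'pg_database')
FROM pg_database WHERE datname = 'template1';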
Kris Jurka
This is mostly just over-compulsiveness on my part, but the exercise
did reveal one real bug: errors.out has a space difference now where
it should not.
(optionally) to a new host and port without exiting psql. This
eliminates, IMHO, a surprise in that you can now connect to PostgreSQL
on a different machine from the one where you started your session. This
should help people who use psql as an administrative tool.
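A usage sketch (database, user, and host names are hypothetical):
\c mydb myuser db2.example.com 5433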
David Fetter
during the vacuumcleanup scan that we're going to do anyway. Should
save a few cycles (one calculation per page, not per tuple) as well
as not having to depend on assumptions about heap and index being
in step.
I think this could probably be made to work for GIST too, but that
code looks messy enough that I'm disinclined to try right now.
partial. None of the existing AMs do anything useful except counting
tuples when there's nothing to delete, and we can get a tuple count
from the heap as long as it's not a partial index. (hash actually can
skip anyway because it maintains a tuple count in the index metapage.)
GIST is not currently able to exploit this optimization because, due to
failure to index NULLs, GIST is always effectively partial. Possibly
we should fix that sometime.
Simon Riggs w/ some review by Tom Lane.
Currently, while \e saves a single statement as one entry, interactive
statements are saved one line at a time. Ideally all statements
would be saved like \e does.
Sergey E. Koposov
regardless of the current schema search path. Since CREATE OPERATOR CLASS
only allows one default opclass per datatype regardless of schemas, this
should have minimal impact, and it fixes problems with failure to find a
desired opclass while restoring dump files. Per discussion at
http://archives.postgresql.org/pgsql-hackers/2006-02/msg00284.php.
Remove now-redundant-or-unused code in typcache.c and namespace.c,
and backpatch as far as 8.0.
If the second output column value is 'a\nb', the 'b' should appear
in the second display column, rather than the first column as it
does now.
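A minimal illustration (any multi-line value will do):
SELECT 1 AS col1, 'a
b' AS col2;
-- the 'b' should line up under col2 instead of wrapping back to col1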
Change libpq's PQdsplen() to return more useful values.
> Note: this changes the PQdsplen function, it can now return zero or
> minus one which was not possible before. It doesn't appear anyone is
> actually using the functions other than psql but it is a change. The
> functions are not actually documentated anywhere so it's not like we're
> breaking a defined interface. The new semantics follow the Unicode
> standard.
BACKWARD COMPATIBLE CHANGE.
The only user-visible change I saw in the regression tests is that a
SELECT * on a table where all the columns have been dropped doesn't
return a blank line like before. This seems like a step forward.
Martijn van Oosterhout
not print the owner name in the object comment.
eg:
--
-- Name: actor; Type: TABLE; Schema: public; Owner: chriskl; Tablespace:
--
Becomes:
--
-- Name: actor; Type: TABLE; Schema: public; Owner: -; Tablespace:
--
This makes it far easier to do 'user independent' dumps. Especially for
distribution to third parties.
Christopher Kings-Lynne
Fixed missing continuation line character.
Do not translate $-quoting.
Bit field notation belongs to a variable, not a variable list.
Output of line number only done by one function.
the on-disk tuple format (Numeric) and the calculation format (NumericVar)
are different. I understand that this is to reduce I/O. However, when many
comparisons or calculations of NUMERIC are executed, the conversion
between Numeric and NumericVar becomes a bottleneck.
Here is the profile result when "create index" on a NUMERIC column is executed:
  %   cumulative    self               self     total
 time   seconds    seconds      calls  s/call   s/call  name
17.61     10.27      10.27   34542006    0.00     0.00  cmp_numerics
11.90     17.21       6.94   34542006    0.00     0.00  comparetup_index
 7.42     21.54       4.33   71102587    0.00     0.00  AllocSetAlloc
 7.02     25.64       4.09   69084012    0.00     0.00  set_var_from_num
 4.87     28.48       2.84   69084012    0.00     0.00  alloc_var
 4.79     31.27       2.79  142205745    0.00     0.00  AllocSetFreeIndex
 4.55     33.92       2.65   34542004    0.00     0.00  cmp_abs
 4.07     36.30       2.38   71101189    0.00     0.00  AllocSetFree
 3.83     38.53       2.23   69084012    0.00     0.00  free_var
The create index command executes many comparisons of Numeric values.
Functions other than comparetup_index spend a lot of cycles on the
conversion from Numeric to NumericVar.
The attached patch enables comparison of Numeric values without
converting them to NumericVar. The execution time of that SQL is
cut in half.
o Test SQL (index_test table has 1,000,000 tuples)
create index index_test_idx on index_test(num_col);
o Test results (executed the test five times)
(1)PentiumIII
original: 39.789s 36.823s 36.737s 37.752s 37.019s
patched : 18.560s 19.103s 18.830s 18.408s 18.853s
(2)Pentium4
original: 16.349s 14.997s 12.979s 13.169s 12.955s
patched : 7.005s 6.594s 6.770s 6.740s 6.828s
(3)Itanium2
original: 15.392s 15.447s 15.350s 15.370s 15.417s
patched : 7.413s 7.330s 7.334s 7.339s 7.339s
(4)Ultra Sparc
original: 64.435s 59.336s 59.332s 58.455s 59.781s
patched : 28.630s 28.666s 28.983s 28.744s 28.595s
Atsushi Ogawa
would basically punt in all cases for 'foo <> ALL (array)', which resulted
in a performance regression for NOT IN compared to what we were doing in
8.1 and before. Per report from Pavel Stehule.
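For reference (hypothetical table and column), the construct arises
from NOT IN:
SELECT * FROM t WHERE x NOT IN (1, 2, 3);
-- is processed as: x <> ALL ('{1,2,3}'::integer[])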
relations: fix the executor so that we can have an Append plan on the
inside of a nestloop and still pass down outer index keys to index scans
within the Append, then generate such plans as if they were regular
inner indexscans. This avoids the need to evaluate the outer relation
multiple times.
... in fact, it will be applied now in any query whatsoever. I'm still
a bit concerned about the cycles that might be expended in failed proof
attempts, but given that CE is turned off by default, it's the user's
choice whether to expend those cycles or not. (Possibly we should
change the simple bool constraint_exclusion parameter to something
more fine-grained?)
internally $$ strings are converted to single-quote strings.
In ecpg, output newlines in commands using standard C escapes, rather
than using literal newlines, which is not portable.
thereby sharing code with the inheritance case. This puts the UNION-ALL-view
approach to partitioned tables on par with inheritance, so far as constraint
exclusion is concerned: it works either way. (Still need to update the docs
to say so.) The definition of "simple UNION ALL" is a little simpler than
I would like --- basically the union arms can only be SELECT * FROM foo
--- but it's good enough for partitioned-table cases.
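A sketch of a qualifying layout (table and column names hypothetical):
CREATE VIEW sales AS
    SELECT * FROM sales_2005
    UNION ALL
    SELECT * FROM sales_2006;
SET constraint_exclusion = on;
SELECT * FROM sales WHERE year = 2006;
-- given CHECK (year = 2005) / CHECK (year = 2006) constraints on the
-- member tables, the sales_2005 arm can be excluded from the plan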
MemSet on AIX by setting MEMSET_LOOP_LIMIT to zero.
Add optimization to skip MemSet tests in MEMSET_LOOP_LIMIT == 0 case and
just call memset() directly.
it later. This fixes a problem where EXEC_BACKEND didn't have progname
set, causing a segfault if log_min_messages was set below debug2 and our
own snprintf.c was being used.
Also always strdup() progname.
Backpatch to 8.1.X and 8.0.X.
inheritance trees on-the-fly, which pretty well constrained us to considering
only one way of planning inheritance, expand inheritance sets during the
planner prep phase, and build a side data structure that can be consulted
later to find which RTEs are members of which inheritance sets. As proof of
concept, use the data structure to plan joins against inheritance sets more
efficiently: we can now use indexes on the set members in inner-indexscan
joins. (The generated plans could be improved further, but it'll take some
executor changes.) This data structure will also support handling UNION ALL
subqueries in the same way as inheritance sets, but that aspect of it isn't
finished yet.
to avoid sharing substructure with the lower-level indexquals. This is
currently only an issue if there are SubPlans in the indexquals, which is
uncommon but not impossible --- see bug #2218 reported by Nicholas Vinen.
We use the same kluge for indexqual vs indexqualorig in the index scans
themselves ... would be nice to clean this up someday.
requested sort order. It was assuming that build_index_pathkeys always
generates a pathkey per index column, which was not true if implied equality
deduction had determined that two index columns were effectively equated to
each other. Simplest fix seems to be to install an option that causes
build_index_pathkeys to support this behavior as well as the original one.
Per report from Brian Hirt.
memory in the executor's per-query memory context. It is also inefficient:
it invokes get_call_result_type() and TupleDescGetAttInMetadata() for
every call to return_next, rather than invoking them once (per PL/Perl
function call) and memoizing the result.
This patch makes the following changes:
- refactor the code to include all the "per PL/Perl function call" data
inside a single struct, "current_call_data". This means we don't need to
save and restore N pointers for every recursive call into PL/Perl, we
can just save and restore one.
- lookup the return type metadata needed by plperl_return_next() once,
and then stash it in "current_call_data", so as to avoid doing the
lookup for every call to return_next.
- create a temporary memory context in which to evaluate the return
type's input functions. This memory context is reset for each call to
return_next.
The patch appears to fix the memory leak, and substantially reduces
the overhead imposed by return_next.
one 'creating subdirectories' message instead of one per subdirectory.
The original decision to print something for each subdirectory was made
when there were only one or two of 'em; we have way too many now.
Per discussion.
While we normally prefer the notation "foo.*" for a whole-row Var, that does
not work at SELECT top level, because in that context the parser will assume
that what is wanted is to expand the "*" into a list of separate target
columns, yielding behavior different from a whole-row Var. We have to emit
just "foo" instead in that context. Per report from Sokolov Yura.
and rely exclusively on the SQL type system to tell the difference between
the types. Prevent creation of invalid CIDR values via casting from INET
or set_masklen() --- both of these operations now silently zero any bits
to the right of the netmask. Remove duplicate CIDR comparison operators,
letting the type rely on the INET operators instead.
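For example, given the zeroing behavior described above:
SELECT '192.168.1.77/24'::inet::cidr;           -- 192.168.1.0/24
SELECT set_masklen('10.1.2.3'::inet, 8)::cidr;  -- 10.0.0.0/8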
just refer to btree index entries as plain IndexTuples, which is what
they have been for a very long time. This is mostly just an exercise
in removing extraneous notation, but it does save a palloc/pfree cycle
per index insertion.
because pqSendSome will absorb input data anytime it'd be forced to block.
Avoiding a kernel call per PQputCopyData call helps COPY speed materially.
Alon Goldshuv
and non-required keys in a btree index scan, mark the required scankeys
with private flag bits SK_BT_REQFWD and/or SK_BT_REQBKWD. This seems
at least marginally clearer to me, and it eliminates a wired-into-the-
data-structure assumption that required keys are consecutive. Even though
that assumption will remain true for the foreseeable future, having it
in there makes the code seem more complex than necessary.
and DELETE. If specified, the alias must be used instead of the full
table name. Also, the alias currently cannot be used in the SET clause
of UPDATE.
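A sketch of the new syntax (hypothetical table):
DELETE FROM accounts AS a WHERE a.balance = 0;
UPDATE accounts AS a SET balance = 0 WHERE a.id = 7;
-- the SET clause still uses the bare column name, and once the alias
-- is given, qualifying columns with "accounts" rather than "a" fails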
Patch from Atsushi Ogawa, various editorialization by Neil Conway.
Along the way, make the rowtypes regression test pass if add_missing_from
is enabled, and add a new (skeletal) regression test for DELETE.
to try to create a log segment file concurrently, but the code erroneously
specified O_EXCL to open(), resulting in a needless failure. Before 7.4,
it was even a PANIC condition :-(. Correct code is actually simpler than
what we had, because we can just say O_CREAT to start with and not need a
second open() call. I believe this accounts for several recent reports of
hard-to-reproduce "could not create file ...: File exists" errors in both
pg_clog and pg_subtrans.
Continue to support GRANT ON [TABLE] for sequences for backward
compatibility; issue warning for invalid sequence permissions.
[Backward compatibility warning message.]
Add USAGE permission for sequences that allows only currval() and
nextval(), not setval().
Mention object name in grant/revoke warnings because of possible
multi-object operations.
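A usage sketch (sequence and role names hypothetical):
GRANT USAGE ON SEQUENCE orders_id_seq TO webapp;
-- webapp may now call currval() and nextval() on the sequence,
-- but not setval()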
temp table, not only our own process' tables. It's not really important
since vacuum.c will skip temp tables anyway, but might as well make the
code do what it claims to do.
index's support-function cache (in index_getprocinfo). Since none of that
data can change for an index that's in active use, it seems sufficient to
treat all open indexes the same way we were treating "nailed" system indexes
--- that is, just re-read the pg_class row and leave the rest of the relcache
entry strictly alone. The pg_class re-read might not be strictly necessary
either, but since the reltablespace and relfilenode can change in normal
operation it seems safest to do it. (We don't support changing any of the
other info about an index at all, at the moment.)
Back-patch as far as 8.0. It might be possible to adapt the patch to 7.4,
but it would take more work than I care to expend for such a low-probability
problem. 7.3 is out of luck for sure.
occurs when it tries to heap_open pg_tablespace. When control returns to
smgrcreate, that routine will be holding a dangling pointer to a closed
SMgrRelation, resulting in mayhem. This is of course a consequence of
the violation of proper module layering inherent in having smgr.c call
a tablespace command routine, but the simplest fix seems to be to change
the locking mechanism. There's no real need for TablespaceCreateDbspace
to touch pg_tablespace at all --- it's only opening it as a way of locking
against a parallel DROP TABLESPACE command. A much better answer is to
create a special-purpose LWLock to interlock these two operations.
This drops TablespaceCreateDbspace quite a few layers down the food chain
and makes it something reasonably safe for smgr to call.
This is utterly insignificant in normal operation, but it becomes a
problem during cache inval stress testing. The original coding in fact
had no leak --- the 8.0 List rewrite created the issue. I wonder whether
list_concat should pfree the discarded header?
files: avoid creating stats hashtable entries for tables that aren't being
touched except by vacuum/analyze, ensure that entries for dropped tables are
removed promptly, and tweak the data layout to avoid storing useless struct
padding. Also improve the performance of pgstat_vacuum_tabstat(), and make
sure that autovacuum invokes it exactly once per autovac cycle rather than
multiple times or not at all. This should cure recent complaints about 8.1
showing much higher stats I/O volume than was seen in 8.0. It'd still be a
good idea to revisit the design with an eye to not re-writing the entire
stats dataset every half second ... but that would be too much to backpatch,
I fear.
cursors. Patch from Joachim Wieland, review and editorialization by Neil
Conway. The view lists cursors defined by DECLARE CURSOR, using SPI, or
via the Bind message of the frontend/backend protocol. This means the
view does not list the unnamed portal or the portal created to implement
EXECUTE. Because we do list SPI portals, there might be more rows in
this view than you might expect if you are using SPI implicitly (e.g.
via a procedural language).
Per recent discussion on -hackers, the query string included in the
view for cursors defined by DECLARE CURSOR is based on
debug_query_string. That means it is not accurate if multiple queries
separated by semicolons are submitted as one query string. However,
there doesn't seem to be a trivial fix for that: debug_query_string
is better than nothing. I also changed SPI_cursor_open() to include
the source text for the portal it creates: AFAICS there is no reason
not to do this.
Update the documentation and regression tests, bump the catversion.
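A usage sketch:
BEGIN;
DECLARE c CURSOR FOR SELECT relname FROM pg_class;
SELECT name, statement, is_holdable FROM pg_cursors;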
are two basically different kinds of scankeys, and we ought to try harder
to indicate which is used in each place in the code. I've chosen the names
"search scankey" and "insertion scankey", though you could make about
as good an argument for "operator scankey" and "comparison function
scankey".
an array of regtype, rather than an array of OIDs. This is likely to
be more useful to user, and the type OID can easily be obtained by
casting a regtype value to OID. Per suggestion from Tom.
Update the documentation and regression tests, and bump the catversion.
a va_list. Christof Petig's previous patch made this change, but neglected
to update ecpglib/descriptor.c, resulting in a compiler warning (and a
likely runtime crash) on AMD64 and PPC.
data type is unspecified or is declared to be "unknown", the type will
be inferred from the context in which the parameter is used. This was
already possible for protocol-level prepared statements.
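For example (hypothetical table), both of these now work at the SQL
level, with $1's type inferred from its use against accounts.id:
PREPARE getbal (unknown) AS
    SELECT balance FROM accounts WHERE id = $1;
PREPARE getbal2 AS
    SELECT balance FROM accounts WHERE id = $1;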
isn't being used anywhere anymore, and there seems no point in a generic
index_keytest() routine when two out of three remaining access methods
aren't using it. Also, add a comment documenting a convention for
letting access methods define private flag bits in ScanKey sk_flags.
There are no such flags at the moment but I'm thinking about changing
btree's handling of "required keys" to use flag bits in the keys
rather than a count of required key positions. Also, if some AM did
still want SK_NEGATE then it would be reasonable to treat it as a private
flag bit.
transaction as aborted. Since we only call XactLockTableWait on XIDs
that we believe to be currently running, the odds of this code ever
actually firing are minimal. It's certainly unnecessary, since a
transaction that's not either running or committed will be presumed
aborted anyway. What's more, it's not hard to imagine scenarios where
this could result in corrupting pg_clog: for instance, if a bogus XID
somehow got passed to XactLockTableWait. I think the code probably
dates from the ancient era when we didn't have TransactionIdIsInProgress;
back then it may have been necessary, but now I think it's a waste of
cycles and potentially dangerous. Per discussion with Qingqing Zhou
and Karsten Hilbert.
permissions on the functions and operators contained in the opclass.
Since we already require superuser privilege to create an operator class,
there's no expansion-of-privilege hazard here, but if someone were to get
the idea of building an opclass containing functions that need security
restrictions, we'd better warn them off. Also, change the permission
checks from have-execute-privilege to have-ownership, and then comment
them all out since they're dead code anyway under the superuser restriction.
type definition. Because use of a type's I/O conversion functions isn't
access-checked, CREATE TYPE amounts to granting public execute permissions
on the functions, and so allowing it to anybody means that someone could
theoretically gain access to a function he's not supposed to be able to
execute. The parameter-type restrictions already enforced by CREATE TYPE
make it fairly unlikely that this oversight is meaningful in practice,
but still it seems like a good idea to plug the hole going forward.
Also, document the implicit grant just in case anybody gets the idea of
building I/O functions that might need security restrictions.
fmgr_info(), in the TopMemoryContext. I couldn't see that the code
actually leaked, but in general I think it's fragile to assume that
pfree'ing an FmgrInfo along with its fn_extra field is enough to
reclaim all the resources allocated by fmgr_info(). I changed the
code to do its allocations in a new child context of
TopMemoryContext, MbProcContext. When we want to release the
allocations we can just reset the context, which is cleaner.
our own command (or more generally, xmin = our xact and cmin >= current
command ID) should not be seen as good. Else we may try to update rows
we already updated. This error was inserted last August while fixing the
even bigger problem that the old coding wouldn't see *any* tuples inserted
by our own transaction as good. Per report from Euler Taveira de Oliveira.
rather than "return expr;" -- the latter style is used in most of the
tree. I kept the parentheses when they were necessary or useful because
the return expression was complex.
listed in the column's most-common-values statistics entry. This gives
us an exact selectivity result for the portion of the column population
represented by the MCV list, which can be a big leg up in accuracy if
that's a large fraction of the population. The heuristics involving
pattern contents and prefix are applied only to the part of the population
not included in the MCV list.
one argument at a time and then inserting the argument into a Python
list via PyList_SetItem(). This "steals" the reference to the argument:
that is, the reference to the new list member is now held by the Python
list itself. This works fine, except if an elog occurs. This causes the
function's PG_CATCH() block to be invoked, which decrements the
reference counts on both the current argument and the list of arguments.
If the elog happens to occur during the second or subsequent iteration
of the loop, the reference count on the current argument will be
decremented twice.
The fix is simple: set the local pointer to the current argument to NULL
immediately after adding it to the argument list. This ensures that the
Py_XDECREF() in the PG_CATCH() block doesn't double-decrement.
operator names. This is needed when dumping operator definitions that have
COMMUTATOR (or similar) links to operators in other schemas.
Apparently Daniel Whitter is the first person ever to try this :-(
and nail a couple more system indexes into cache. This doesn't make
any difference in normal system operation, but when forcing constant
cache resets it's difficult to get through the rules regression test
without these changes.
access information about the prepared statements that are available
in the current session. Original patch from Joachim Wieland, various
improvements by Neil Conway.
The "statement" column of the view contains the literal query string
sent by the client, without any rewriting or pretty printing. This
means that prepared statements created via SQL will be prefixed with
"PREPARE ... AS ", whereas those prepared via the FE/BE protocol will
not. That is unfortunate, but discussion on -patches did not yield an
efficient way to improve this, and there is some merit in returning
exactly what the client sent to the backend.
Catalog version bumped, regression tests updated.
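A usage sketch (hypothetical statement):
PREPARE getbal (integer) AS
    SELECT balance FROM accounts WHERE id = $1;
SELECT name, statement, parameter_types FROM pg_prepared_statements;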
use it. While it normally has been opened earlier during btree index
build, testing shows that it's possible for the link to be closed again
if an sinval reset occurs while the index is being built.
dead and have become unreferenced. Before 8.1, such members were left
for AtEOXact_CatCache() to clean up, but now AtEOXact_CatCache isn't
supposed to have anything to do. In an assert-enabled build this bug
leads to an assertion failure at transaction end, but in a non-assert
build the dead member is effectively just a small memory leak.
Per report from Jeremy Drake.
rather than elog(FATAL), when there is no more room in ShmemBackendArray.
This is a security issue since too many connection requests arriving close
together could cause the postmaster to shut down, resulting in denial of
service. Reported by Yoshiyuki Asaba, fixed by Magnus Hagander.
the relation but it finds a pre-existing valid buffer. The buffer does not
correspond to any page known to the kernel, so we *must* do smgrextend to
ensure that the space becomes allocated. The 7.x branches all do this
correctly, but the corner case got lost somewhere during 8.0 bufmgr rewrites.
(My fault no doubt :-( ... I think I assumed that such a buffer must be
not-BM_VALID, which is not so.)
an LWLock instead of a spinlock. This hardly matters on Unix machines
but should improve startup performance on Windows (or any port using
EXEC_BACKEND). Per previous discussion.
from Andrus Moor. The former state-machine-style coding wasn't actually
doing much except obscuring the control flow, and it didn't extend
readily to fix this case, so I just took it out. Also, add a
YY_FLUSH_BUFFER call to ensure the lexer is reset correctly if the
previous scan failed partway through the file.
a little bit, and set the minimum buffers-per-connection ratio to 10 not
5. I folded the two test routines into one to counteract the illusion
that the tests can be twiddled independently, and added some documentation
pointing out the necessary connection between the sets of values tested.
Fixes strange choices of parameters that I noticed CVS tip making on
Darwin with Apple's undersized default SHMMAX.
selection of a field from the result of a function returning RECORD.
I believe this case is new in 8.1; it's due to the addition of OUT parameters.
Per example from Michael Fuhr.
in favor of having just one set of macros that don't do HOLD/RESUME_INTERRUPTS
(hence, these correspond to the old SpinLockAcquire_NoHoldoff case).
Given our coding rules for spinlock use, there is no reason to allow
CHECK_FOR_INTERRUPTS to be done while holding a spinlock, and also there
is no situation where ImmediateInterruptOK will be true while holding a
spinlock. Therefore doing HOLD/RESUME_INTERRUPTS while taking/releasing a
spinlock is just a waste of cycles. Qingqing Zhou and Tom Lane.
setup. This protects against undesired changes in locale behavior
if someone carelessly does setlocale(LC_ALL, "") (and we know who
you are, perl guys).
get_func_arg_info() for consistency with other names there.
This code will probably be useful to other PLs when they start to
support OUT parameters, so better to have it in the main backend.
Also, fix plpgsql validator to detect bogus OUT parameters even when
check_function_bodies is off.
(previously we only did = and <> correctly). Also, allow row comparisons
with any operators that are in btree opclasses, not only those with these
specific names. This gets rid of a whole lot of indefensible assumptions
about the behavior of particular operators based on their names ... though
it's still true that IN and NOT IN expand to "= ANY". The patch adds a
RowCompareExpr expression node type, and makes some changes in the
representation of ANY/ALL/ROWCOMPARE SubLinks so that they can share code
with RowCompareExpr.
I have not yet done anything about making RowCompareExpr an indexable
operator, but will look at that soon.
initdb forced due to changes in stored rules.
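For instance (hypothetical table "t" with columns "a" and "b"):
SELECT * FROM t WHERE (a, b) < (10, 'foo');
-- now evaluated per spec as: a < 10 OR (a = 10 AND b < 'foo'),
-- using btree opclass operators rather than operator-name guesses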
if (c == '\\' && cstate->line_buf.len == 0)
The problem with that is that, because of the input and _output_
buffering, cstate->line_buf.len could be zero even if we are not on the
first character of a line. In fact, for a typical line, it is zero for
all characters on the line. The proper solution is to introduce a
boolean, first_char_in_line, that we set as we enter the loop and clear
once we process a character.
I have restructured the line-reading code in copy.c by:
o merging the CSV/non-CSV functions into a single function
o used macros to centralize and clarify the buffering code
o updated comments
o renamed client_encoding_only to encoding_embeds_ascii
o added a high-bit test to the encoding_embeds_ascii test for
performance
o in CSV mode, allow a backslash followed by a non-period to
continue being processed as a data value
There should be no performance impact from this patch because it is
functionally equivalent. If you apply the patch you will see copy.c is
much clearer in this area now and might suggest additional
optimizations.
I have also attached a 8.1-only patch to fix the CSV \. handling bug
with no code restructuring.
- use "bool" rather than "int" for boolean variables
- use "PLy_malloc" rather than "malloc" in two places
- define "PLy_strdup", and use it rather than malloc() + strcpy() in
two places (which should have been memcpy(), anyway).
- remove a bunch of redundant parentheses from expressions that do not
need the parentheses for code clarity
#define HIGHBIT (0x80)
#define IS_HIGHBIT_SET(ch) ((unsigned char)(ch) & HIGHBIT)
and removed CSIGNBIT, mapping its uses to HIGHBIT. I have also added
uses for IS_HIGHBIT_SET where appropriate. This change is
purely for code clarity.
See:
Subject: [HACKERS] bugs with certain Asian multibyte charsets
From: Tatsuo Ishii <ishii@sraoss.co.jp>
To: pgsql-hackers@postgresql.org
Date: Sat, 24 Dec 2005 18:25:33 +0900 (JST)
for more details.
differ by more than the last directory component. Instead of insisting
that they match up to the last component, accept whatever common prefix
they have, and try to replace the non-matching part of bin_path with
the non-matching part of target_path in the actual executable's path.
In one way this is tighter than the old code, because it insists on
a match to the part of bin_path we want to substitute for, rather than
blindly stripping one directory component from the executable's path.
Per gripe from Martin Pitt and subsequent discussion.
Also make the code more robust by searching for target encoding
in the internal charset map.
Problem reported by Sagi Bashari on 2005/12/21.
See "[BUGS] BUG #2120: Crash when doing UTF8<->ISO_8859_8 encoding conversion"
on pgsql-bugs list for more details.
equal: if strcoll claims two strings are equal, check it with strcmp, and
sort according to strcmp if not identical. This fixes inconsistent
behavior under glibc's hu_HU locale, and probably under some other locales
as well. Also, take advantage of the now-well-defined behavior to speed up
texteq, textne, bpchareq, bpcharne: they may as well just do a bitwise
comparison and not bother with strcoll at all.
NOTE: affected databases may need to REINDEX indexes on text columns to be
sure they are self-consistent.
Per my recent proposal. I ended up basing the implementation on the
existing mechanism for enforcing valid join orders of IN joins --- the
rules for valid outer-join orders are somewhat similar.
file. The original code probed the PGPROC array separately for each PID,
which was not good for large numbers of backends: not only is the runtime
O(N^2) but most of it is spent holding ProcArrayLock. Instead, take the
lock just once and copy the active PIDs into an array, then use qsort
and bsearch so that the lookup time is more like O(N log N).
messages, when client attempts to execute these outside a transaction (start
one) or in a failed transaction (reject message, except for COMMIT/ROLLBACK
statements which we can handle). Per report from Francisco Figueiredo Jr.
reduce contention for the former single LockMgrLock. Per my recent
proposal. I set it up for 16 partitions, but on a pgbench test this
gives only a marginal further improvement over 4 partitions --- we need
to test more scenarios to choose the number of partitions.
that simplify_boolean_equality() may leave behind. This is only relevant
if the user writes something a bit silly, like CASE x=y WHEN TRUE THEN.
Per example from Michael Fuhr; may or may not explain bug #2106.
the data defining the semantics of a lock method (ie, conflict resolution
table and ancillary data, which is all constant) and the hash tables
storing the current state. The only thing we give up by this is the
ability to use separate hashtables for different lock methods, but there
is no need for that anyway. Put some extra fields into the LockMethod
definition structs to clean up some other uglinesses, like hard-wired
tests for DEFAULT_LOCKMETHOD and USER_LOCKMETHOD. This commit doesn't
do anything about the performance issues we were discussing, but it clears
away some of the underbrush that's in the way of fixing that.
> Now, the arguments of the drop function can be tab completed. for example
>
> drop function strpos (
> <press tab>
> drop FUNCTION strpos (text, text)
>
> or:
>
> wsdb=# drop FUNCTION length (
> bit) bytea) character) lseg) path) text)
> <press c>
> wsdb# DROP FUNCTION length ( character)
>
> I think that this patch should be rather useful. At least I hate
> always having to type all the arguments of the dropped functions.
>
> 2) Also some fixes applied for the
> CREATE INDEX syntax
>
> now the parentheses are inserted by pressing tab.
> suppose I have the table q3c:
Sergey E. Koposov
I have a problem when building with MS-VC6.
An error occurs in the current 8.1.0 source code:
nmake -f win32.mak
..\..\port\getaddrinfo.c(244) : error C2065: 'WSA_NOT_ENOUGH_MEMORY'
..\..\port\getaddrinfo.c(342) : error C2065: 'WSATYPE_NOT_FOUND'
These symbols come from winsock2.h, but the Windows build here is based
on winsock.h. MinGW's environment is special, so this works there, but
the symbols are not found under VC6.
Furthermore, getaddrinfo.c loads the IPv6 API via
LoadLibraryA("ws2_32");
and referencing the DLL externally triggers this violation under the
VC6 specification.
I considered whether the whole thing should be converted to winsock2.
However, the DLL that MinGW creates currently works well as it is,
and it retains flexibility through simple DLL replacement.
Therefore, I propose using winsock (non-IPv6) when building with VC6.
Hiroshi Saito
_bt_checkkeys(), instead of checking it in the top-level nbtree.c routines
as formerly. This saves a little bit of loop overhead, but more importantly
it lets us skip performing the index key comparisons for dead tuples.
checks, which were once needed because PageGetMaxOffsetNumber would
fail on empty pages, but are now just redundant. Also, don't set up
local variables that aren't needed in the fast path --- most of the
time, we only need to advance offnum and not step across a page boundary.
Motivated by noticing _bt_step at the top of OProfile profile for a
pgbench run.
SLRU area. The number of slots is still a compile-time constant (someday
we might want to change that), but at least it's a different constant for
each SLRU area. Increase number of subtrans buffers to 32 based on
experimentation with a heavily subtrans-bashing test case, and increase
number of multixact member buffers to 16, since it's obviously silly for
it not to be at least twice the number of multixact offset buffers.
lock, not exclusive, if the desired page is already in memory. This can
be demonstrated to be a significant win on the pg_subtrans cache when there
is a large window of open transactions. It should be useful for pg_clog
as well. I didn't try to make GetMultiXactIdMembers() use the code, as
that would have taken some restructuring, and what with the local cache
for multixact contents it probably wouldn't really make a difference.
Per my recent proposal.
clauses even if it's an outer join. This is a corner case since such
clauses could only arise from weird OUTER JOIN ON conditions, but worth
fixing. Per example from Ron at cheapcomplexdevices.com.
incorrect implementation of argument reordering, arbitrary limit of output
size for sprintf and fprintf, willingness to access more bytes than "%.Ns"
specification allows, wrong formatting of LONGLONG_MIN, various field-padding
bugs and omissions. I believe it now accurately implements a subset of
the Single Unix Spec requirements (remaining unimplemented features are
documented, too). Bruce Momjian and Tom Lane.
than owned by nobody. This results in cleaner display of language ACLs,
since the backend's aclchk.c uses the same convention. AFAICS there is
no practical difference but it's nice to avoid emitting SET SESSION
AUTHORIZATION; also this will make it easier to transition pg_dump to
some future version in which we may include an explicit ownership column
in pg_language. Per gripe from David Begley.
Map them to a single day, so '30 hours' is 'AM'.
Have to_char(interval) and to_char(time) use "HH", "HH12" as 12-hour
intervals, rather than bypassing them and printing the full interval
hours. This is needed because to_char(time) is mapped to interval in
this function. Intervals should use "HH24", and document that
suggestion.
Allow "D" format specifiers for interval/time.
if we already have a stronger lock due to the index's table being the
update target table of the query. Same optimization I applied earlier
at the table level. There doesn't seem to be much interest in the more
radical idea of not locking indexes at all, so do what we can ...
relation if it's already been locked by execMain.c as either a result
relation or a FOR UPDATE/SHARE relation. This avoids an extra trip to
the shared lock manager state. Per my suggestion yesterday.
child plan nodes until we have acquired lock on the relation to scan.
The relative order of initialization of plan nodes isn't real important in
other cases, but it's critical here because one is supposed to lock a
relation before its indexes, not vice versa. The original coding was at
least vulnerable to deadlock against DROP INDEX, and perhaps worse things.
Also add a retry for Unixen returning EINTR, which hasn't been reported
as an issue but at least theoretically could be. Patch by Qingqing Zhou,
some minor adjustments by me.
#2075: consider an index redundant if any of its index conditions were already
used, rather than if all of them were. Also, make the selectivity comparison
a bit fuzzy, so that very small differences in estimated selectivities don't
skew the results.
it's worth probing the outer relation for emptiness before building the
hash table. To wit, if we're rescanning a join previously performed,
remember whether we found it nonempty the previous time, and don't bother
with the probe if it was nonempty. This buys back the performance lost
in examples like Mario Weilguni's.
one child or the other had a problem: they did not leave the node in a
state that ExecReScanHashJoin would understand. In particular it would
tend to fail to reset the child plans when needed. Per report from
Mario Weilguni.
ScalarArrayOpExpr when possible, that is, whenever there is an array type
for the values of the expression list. This completes the project I've
been working on to improve the speed of index searches with long IN lists,
as per discussion back in mid-October.
I did not force initdb, but until you do one you will see failures in the
"rules" regression test, because some of the standard system views use IN
and their compiled formats have changed.
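For example (hypothetical table and column):
SELECT * FROM t WHERE x IN (1, 2, 3);
-- is now represented as: x = ANY ('{1,2,3}'::integer[])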
they were broken-out AND or OR lists. The least grotty way to do this
seemed to be to set up a general mechanism for handling nodes as though
they were ANDs or ORs. There's no other immediate use for it, but perhaps
we might want to use the mechanism someday for things like BETWEEN
SYMMETRIC.
"ctid IN (list)" will still work after we convert IN to ScalarArrayOpExpr.
Make some minor efficiency improvements while at it, such as ensuring that
multiple TIDs are fetched in physical heap order. And fix EXPLAIN so that
it shows what's really going on for a TID scan.
when we first read the page, rather than checking them one at a time.
This allows us to take and release the buffer content lock just once
per page, instead of once per tuple. Since it's a shared lock the
contention penalty for holding the lock longer shouldn't be too bad.
We can safely do this only when using an MVCC snapshot; else the
assumption that visibility won't change over time is uncool. Therefore
there are now two code paths depending on the snapshot type. I also
made the same change in nodeBitmapHeapscan.c, where it can be done always
because we only support MVCC snapshots for bitmap scans anyway.
Also make some incidental cleanups in the APIs of these functions.
Per a suggestion from Qingqing Zhou.
qualification when the underlying operator is indexable and useOr is true.
That is, indexkey op ANY (ARRAY[...]) is effectively translated into an
OR combination of one indexscan for each array element. This only works
for bitmap index scans, of course, since regular indexscans no longer
support OR'ing of scans. There are still some loose ends to clean up
before changing 'x IN (list)' to translate as a ScalarArrayOpExpr;
for instance predtest.c ought to be taught about it. But this gets the
basic functionality in place.
a TupleTableSlot: instead of calling ExecClearTuple, inline the needed
operations, so that we can avoid redundant steps. In particular, when
the old and new tuples are both on the same disk page, avoid releasing
and re-acquiring the buffer pin --- this saves work in both the bufmgr
and ResourceOwner modules. To make this improvement actually useful,
partially revert a change I made on 2004-04-21 that caused SeqNext
et al to call ExecClearTuple before ExecStoreTuple. The motivation
for that, to avoid grabbing the BufMgrLock separately for releasing
the old buffer and grabbing the new one, no longer applies. My
profiling says that this saves about 5% of the CPU time for an
all-in-memory seqscan.
generate their output tuple descriptors from their target lists (ie, using
ExecAssignResultTypeFromTL()). We long ago fixed things so that all node
types have minimally valid tlists, so there's no longer any good reason to
have two different ways of doing it. This change is needed to fix bug
reported by Hayden James: the fix of 2005-11-03 to emit the correct column
names after optimizing away a SubqueryScan node didn't work if the new
top-level plan node used ExecAssignResultTypeFromOuterPlan to generate its
tupdesc, since the next plan node down won't have the correct column labels.
a SubLink expression into a rule query. Pre-8.1 we essentially did this
unconditionally; 8.1 tries to do it only when needed, but was missing a
couple of cases. Per report from Kyle Bateman. Add some regression test
cases covering this area.
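The kind of case at issue, roughly (hypothetical view and tables): rule
rewriting substitutes a sublink from the original query into the rule
action, and the rewritten query must be re-marked as containing sublinks:

    CREATE RULE ins AS ON INSERT TO myview DO INSTEAD
        INSERT INTO mytab VALUES (new.id);
    INSERT INTO myview VALUES ((SELECT max(id) FROM other) + 1);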
comment lines were output as too long, and update typedefs for the /lib
directory. Also fix case where identifiers were used as variable names
in the backend, but as typedefs in ecpg (favor the backend for
indenting).
Backpatch to 8.1.X.
process of dropping roles by dropping objects owned by them and privileges
granted to them, or giving the owned objects to someone else, through the
use of the data stored in the new pg_shdepend catalog.
Some refactoring of the GRANT/REVOKE code was needed, as well as ALTER OWNER
code. Further cleanup of code duplication in the GRANT code seems necessary.
Implemented by me after an idea from Tom Lane, who also provided various
kinds of implementation advice.
Regression tests pass. Some tests for the new functionality are also added,
as well as rudimentary documentation.
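In rough outline, retiring a role now looks like this (hypothetical role
names):

    REASSIGN OWNED BY doomed_role TO successor_role;
    DROP OWNED BY doomed_role;
    DROP ROLE doomed_role;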
tuple in-place, but instead passes back an all-new tuple structure if
any changes are needed. This is a much cleaner and more robust solution
for the bug discovered by Alexey Beschiokov; accordingly, revert the
quick hack I installed yesterday.
With this change, HeapTupleData.t_datamcxt is no longer needed; will
remove it in a separate commit in HEAD only.
doing heap_insert or heap_update, wipe out any extracted fields in
the TupleTableSlot containing the tuple, because they might not be valid
anymore if tuptoaster.c changed the tuple. Safe because slot must be
in the materialized state, but mighty ugly --- find a better answer!
the array (for array_push) or higher-dimensional array (for array_cat)
rather than decrementing it as before. This avoids generating lower
bounds other than one for any array operation within the SQL spec. Per
recent discussion.
Interestingly, this seems to have been the original behavior, because
while updating the docs I noticed that a large fraction of relevant
examples were *wrong* for the old behavior and are now right. Is it
worth correcting this in the back-branch docs?
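For instance (hypothetical values; an ARRAY[...] literal has lower bound
one):

    SELECT array_lower(1 || ARRAY[2,3], 1);
    -- now returns 1; the prepend formerly decremented it to 0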
recursed twice on its first argument, leading to exponential time spent
on a deep nest of COALESCEs ... such as a deeply nested FULL JOIN would
produce. Per report from Matt Carter.
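The problem shape is easy to produce (hypothetical tables joined on k);
each additional FULL JOIN wraps the merged column in another COALESCE:

    SELECT k FROM t1 FULL JOIN t2 USING (k) FULL JOIN t3 USING (k);
    -- k is effectively COALESCE(COALESCE(t1.k, t2.k), t3.k)

so a few dozen such joins were enough to make the double recursion blow up.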
functionality, but I still need to make another pass looking at places
that incidentally use arrays (such as ACL manipulation) to make sure they
are null-safe. Contrib needs work too.
I have not changed the behaviors that are still under discussion about
array comparison and what to do with lower bounds.
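Quick illustration of the basic functionality (hypothetical values):

    SELECT ARRAY[1, NULL, 3];
    -- result: {1,NULL,3}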
that was added to localbuf.c in 8.1; therefore, applying it to a temp table
left corrupt lookup state in memory. The only case where this had a
significant chance of causing problems was an ON COMMIT DELETE ROWS temp
table; the other possible paths left bogus state that was unlikely to
be used again. Per report from Csaba Nagy.
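The affected case looks roughly like this (hypothetical):

    CREATE TEMP TABLE tt (x int) ON COMMIT DELETE ROWS;

where the end-of-transaction truncation left stale entries in the local
buffer lookup table.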
names from being added to pgindent's typedef list. Their existence
caused weird formatting in the date/type files, and in keywords.c.
Backpatch to 8.1.X.
columns, shifting comments to the right when more than 150 'else if'
clauses were used, and update typedefs for 8.1.X.
NetBSD patch updated, with documentation.
sense and rename to "outerjoin_delayed" to more clearly reflect what it
means). I had decided that it was redundant in 8.1, but the folly of this
is exposed by a bug report from Sebastian Böck. The place where it's
needed is to prevent orindxpath.c from cherry-picking arms of an outer-join
OR clause to form a relation restriction that isn't actually legal to push
down to the relation scan level. There may be some legal cases that this
forbids optimizing, but we'd need much closer analysis to determine it.