just exec instead of creating a subprocess. This reduces process usage
from four processes per parallel test to two. I have no idea whether
a comparable optimization is possible or useful in the Windows port.
This allows it to be used on Windows without installing mingw
(though you do still need 'diff'), and opens the door to future
improvements such as message localization.
Magnus Hagander and Tom Lane.
code to forcibly drop regressuser[1-4] and regressgroup[1-2]. Instead,
let the privileges.sql test do that for itself (this is made easy by
the recent addition of DROP ROLE IF EXISTS). Per a recent patch proposed
by Joachim Wieland --- the rest of his patch is superseded by the
rewrite into C, but this is a good idea we should adopt.
the read lock we hold on the table's parent relation until commit.
Update equalfuncs.c for the new field in AlterTableCmd. Various
improvements to comments, variable names, and error reporting.
There is room for further improvement here, but this is at least
a step in the right direction.
Open items:
There were a few tangentially related issues that have come up that I think
are TODOs. I'm likely to tackle one or two of these next so I'm interested in
hearing feedback on them as well.
. Constraints currently do not know anything about inheritance. Tom suggested
  adding coninhcount and conislocal fields, as attributes have, to track their
  inheritance status.
. Foreign key constraints currently do not get copied to new children (and
therefore my code doesn't verify them). I don't think it would be hard to
add them and treat them like CHECK constraints.
. No constraints at all are copied to tables defined with LIKE. That makes it
hard to use LIKE to define new partitions. The standard defines LIKE and
specifically says it does not copy constraints. But the standard already has
an option called INCLUDING DEFAULTS; we could always define a non-standard
extension, LIKE table INCLUDING CONSTRAINTS, that gives the user the option
to request a copy including constraints (see the sketch after this list).
. Personally, I think the whole attislocal thing is bunk. The decision about
  whether to drop a column from child tables should be up to the user; trying
  to DWIM based on whether there was ever a local definition or the column was
  acquired purely through inheritance is hardly ever going to match up with
  user expectations.
. And of course there's the whole unique and primary key constraint issue. I
  think getting any traction at all on this requires a real partitioned table
  implementation, where the system knows what the partition key is so it can
  recognize when it's a leading part of an index key.
Greg Stark
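As an illustration of the LIKE point above, here is a sketch of how the proposed extension might look (table and constraint are made up, and INCLUDING CONSTRAINTS is only the suggested spelling, not implemented syntax):

    CREATE TABLE measurement (
        city_id  int NOT NULL,
        logdate  date CHECK (logdate >= DATE '2006-01-01'),
        peaktemp int
    );

    -- today LIKE copies only column definitions (and optionally defaults);
    -- the proposal is to let it copy the CHECK constraint as well
    CREATE TABLE measurement_y2006m02
        (LIKE measurement INCLUDING DEFAULTS INCLUDING CONSTRAINTS);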
the order in which it visits tables is not dependent on the physical order
of pg_constraint entries, and neither are the error messages it gives.
This should correct recently-noticed instability in regression tests.
will be expanded to a list of their member fields, rather than creating
a nested rowtype field as formerly. (The old behavior is still available
by omitting '.*'.) This syntax is not allowed by the SQL spec AFAICS,
so changing its behavior doesn't violate the spec. The new behavior is
substantially more useful since it allows, for example, triggers to check
for data changes with 'if row(new.*) is distinct from row(old.*)'. Per
my recent proposal.
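A minimal sketch of the kind of trigger logic this enables (function and table names are hypothetical):

    CREATE FUNCTION suppress_noop_update() RETURNS trigger AS $$
    BEGIN
        -- new.* and old.* now expand to the individual fields, so the two
        -- row constructors are compared column by column, with NULLs handled
        -- sanely by IS DISTINCT FROM
        IF row(new.*) IS DISTINCT FROM row(old.*) THEN
            RETURN new;     -- something actually changed, let the UPDATE proceed
        END IF;
        RETURN NULL;        -- identical row, skip the update
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER skip_noop BEFORE UPDATE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE suppress_noop_update();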
We have once or twice seen failures suggesting that control didn't get
to the exception block before the timeout elapsed, which is unlikely
but not impossible in a parallel regression test (with a dozen other
backends competing for cycles). This change doesn't completely prevent
the problem of course, but it should reduce the probability enough that
we don't see it anymore. Per buildfarm results.
as this seems only likely to create headaches for module developers. Put
the macro in the pre-existing fmgr.h file instead. Avoid being too cute
about how many fields we can cram into a word, and avoid trying to fetch
from a library we've already unlinked.
Along the way, it occurred to me that the magic block really ought to be
'const' so it can be stored in the program text area. Do the same for
the existing data blocks for PG_FUNCTION_INFO_V1 functions.
It now only checks four things:
Major version number (7.4 or 8.1 for example)
NAMEDATALEN
FUNC_MAX_ARGS
INDEX_MAX_KEYS
The three constants were chosen because:
1. We document them in the config page in the docs
2. We mark them as changeable in pg_config_manual.h
3. Changing any of these will break some of the more popular modules:
   - FUNC_MAX_ARGS changes the fmgr interface; every module uses this.
   - NAMEDATALEN changes the syscache interface; every PL, as well as tsearch, uses this.
   - INDEX_MAX_KEYS breaks tsearch and anything using GiST.
Martijn van Oosterhout
---------------------------------------------------------------------------
Add dynamic record inspection to PL/PgSQL, useful for generic triggers:
tval2 := r.(cname);
or
columns := r.(*);
Titus von Boxberg
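A sketch of how the feature might be used in a generic trigger, assuming r.(*) yields an array of column names as the examples above suggest (this syntax comes from the proposed patch, not from released PostgreSQL):

    CREATE FUNCTION log_changed_columns() RETURNS trigger AS $$
    DECLARE
        cols  text[];
        cname text;
    BEGIN
        cols := new.(*);                         -- column names of the record
        FOR i IN array_lower(cols, 1) .. array_upper(cols, 1) LOOP
            cname := cols[i];
            IF new.(cname) IS DISTINCT FROM old.(cname) THEN
                RAISE NOTICE 'column % changed', cname;
            END IF;
        END LOOP;
        RETURN new;
    END;
    $$ LANGUAGE plpgsql;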
characters in all cases. Formerly we mostly just threw warnings for invalid
input, and failed to detect it at all if no encoding conversion was required.
The tighter check is needed to defend against SQL-injection attacks as per
CVE-2006-2313 (further details will be published after release). Embedded
zero (null) bytes will be rejected as well. The checks are applied during
input to the backend (receipt from client or COPY IN), so it no longer seems
necessary to check in textin() and related routines; any string arriving at
those functions will already have been validated. Conversion failure
reporting (for characters with no equivalent in the destination encoding)
has been cleaned up and made consistent while at it.
Also, fix a few longstanding errors in little-used encoding conversion
routines: win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic,
mic_to_euc_tw were all broken to varying extents.
Patches by Tatsuo Ishii and Tom Lane. Thanks to Akio Ishida and Yasuo Ohgaki
for identifying the security issues.
issued by autovacuum. Add accessor functions to them, and use those in the
pg_stat_*_tables system views.
Catalog version bumped due to changes in the pgstat views and the pgstat file.
Patch from Larry Rosenman, minor improvements by me.
throw warnings for 100%-SQL-standard constructs, clean up some minor
infelicities, try to un-break ecpg to the best of my ability. (It's not clear
how ecpg is going to find out the setting of standard_conforming_strings,
though.) I think pg_dump still needs work, too.
CREATE AGGREGATE aggname (input_type) (parameter_list)
along with the old syntax where the input type was named in the parameter
list. This fits more naturally with the way that the aggregate is identified
in DROP AGGREGATE and other utility commands; furthermore it has a natural
extension to handle multiple-input aggregates, where the basetype-parameter
method would get ugly. In fact, this commit fixes the grammar and all the
utility commands to support multiple-input aggregates; but DefineAggregate
rejects it because the executor isn't fixed yet.
I didn't do anything about treating agg(*) as a zero-input aggregate instead
of artificially making it a one-input aggregate, but that should be considered
in combination with supporting multi-input aggregates.
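For illustration, the two spellings of a simple averaging aggregate, using the standard float8 accumulator support functions (the aggregate name is made up):

    -- new syntax: the input type is given the same way DROP AGGREGATE names it
    CREATE AGGREGATE myavg (float8) (
        sfunc = float8_accum,
        stype = float8[],
        finalfunc = float8_avg,
        initcond = '{0,0,0}'
    );

    -- equivalent old syntax: the input type is named inside the parameter list
    -- CREATE AGGREGATE myavg ( basetype = float8, sfunc = float8_accum, ... );

    DROP AGGREGATE myavg (float8);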
generated text files. Fix build of that file, too.
Put the text files in the right place during make dist, so there are no
extra manual steps required anymore.
that apply the necessary domain constraint checks immediately. This fixes
cases where domain constraints went unchecked for statement parameters,
PL function local variables and results, etc. We can also eliminate existing
special cases for domains in places that had gotten it right, eg COPY.
Also, allow domains over domains (base of a domain is another domain type).
This almost worked before, but was disallowed because the original patch
hadn't gotten it quite right.
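A short illustration of the now-permitted case (domain names are made up):

    CREATE DOMAIN positive_int AS int CHECK (VALUE > 0);
    -- the base type of this domain is itself a domain
    CREATE DOMAIN small_positive_int AS positive_int CHECK (VALUE < 100);

    SELECT 150::small_positive_int;          -- fails: violates the outer CHECK
    SELECT CAST(-5 AS small_positive_int);   -- fails: violates the base domain's CHECK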
command or expression, rather than one copy for each textual occurrence as
it did before. This might result in some small performance improvement,
but the compelling reason to do it is that not doing so can result in
unexpected grouping failures because the main SQL parser won't see different
parameter numbers as equivalent. Add a regression test for the failure case.
Per report from Robert Davidson.
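To make the grouping issue concrete, a sketch of the failure case (table and column names are made up):

    CREATE FUNCTION bucket_report(width int) RETURNS void AS $$
    DECLARE
        r record;
    BEGIN
        -- "width" appears twice below; formerly each textual occurrence became
        -- its own parameter ($1 and $2), so the target-list expression did not
        -- match the GROUP BY expression and the query failed with a grouping
        -- error.  With one Param per variable, both occurrences are $1.
        FOR r IN SELECT val / width AS bucket, count(*) AS n
                   FROM data GROUP BY val / width
        LOOP
            RAISE NOTICE 'bucket %: % rows', r.bucket, r.n;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;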
2005-05-13. When we find that a new inner tuple can't possibly match any
outer tuple (because it contains a NULL), we can't immediately skip the
tuple when we are in NEXTINNER state. Doing so can lead to emitting
multiple copies of the tuple in FillInner mode, because we may rescan the
tuple after returning to a previous marked tuple. Instead, proceed to
NEXTOUTER state the same as we used to do. After we've found that there's
no need to return to the marked position, we can go to SKIPINNER_ADVANCE
state instead of SKIP_TEST when the inner tuple is unmatchable; this
preserves the performance improvement. Per bug report from Bruce.
I also made a couple of cosmetic code rearrangements and added a regression
test for the problem.
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
var_samp(), stddev_pop(), and stddev_samp(). var_samp() and stddev_samp()
are just renamings of the historical Postgres aggregates variance() and
stddev() -- the latter names have been kept for backward compatibility.
This patch includes updates for the documentation and regression tests.
The catversion has been bumped.
NB: SQL2003 requires that DISTINCT not be specified for any of these
aggregates. Per discussion on -patches, I have NOT implemented this
restriction: if the user asks for stddev(DISTINCT x), presumably they
know what they are doing.
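Usage is straightforward (table and column are made up):

    SELECT var_pop(price), var_samp(price),        -- population vs. sample variance
           stddev_pop(price), stddev_samp(price),  -- population vs. sample std dev
           variance(price), stddev(price)          -- historical names, same as *_samp
      FROM listings;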
not likely ever to be implemented, seeing that it's been removed from SQL2003.
This allows getting rid of the 'filter' version of yylex() that we had in
parser.c, which should save at least a few microseconds in parsing.
- new function justify_interval(interval)
- modified function justify_hours(interval)
- modified function justify_days(interval)
These functions are defined to meet the requirements as discussed in
this thread. Specifically:
- justify_hours makes certain the sign bit on the hours
matches the sign bit on the days. It only checks the
sign bit on the days, and not the months, when
determining if the hours should be positive or negative.
After the call, -24 < hours < 24.
- justify_days makes certain the sign bit on the days
matches the sign bit on the months. Its behavior does
not depend on the hours, nor does it modify the hours.
After the call, -30 < days < 30.
- justify_interval makes sure the sign bits on all three
fields months, days, and hours are all the same. After
the call, -24 < hours < 24 AND -30 < days < 30.
Mark Dilger
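A few calls illustrating the resulting behavior (outputs shown as comments):

    SELECT justify_hours(interval '50 hours');          -- 2 days 02:00:00
    SELECT justify_days(interval '35 days');            -- 1 mon 5 days
    SELECT justify_interval(interval '1 mon -1 hour');  -- 29 days 23:00:00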