per-table overrides of parameters.
This removes a whole class of problems related to misusing the catalog,
and perhaps more importantly, gives us pg_dump support for the parameters.
Based on a patch by Euler Taveira de Oliveira, heavily reworked by me.
has_column_privilege and has_any_column_privilege SQL functions; fix the
information_schema views that are supposed to pay attention to column
privileges; adjust pg_stats to show stats for any column you have select
privilege on; and fix COPY to allow copying a subset of columns if the user
has suitable per-column privileges for all the columns.
To improve efficiency of some of the information_schema views, extend the
has_xxx_privilege functions to allow inquiring about the OR of a set of
privileges in just one call. This is just exposing capability that already
existed in the underlying aclcheck routines.
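For illustration (role, table, and column names here are made up):
    SELECT has_column_privilege('joe', 'accounts', 'balance', 'SELECT');
    SELECT has_any_column_privilege('accounts', 'UPDATE');
    SELECT has_table_privilege('accounts', 'SELECT, UPDATE');  -- true if either privilege is held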
In passing, make the information_schema views report the owner's own
privileges as being grantable, since Postgres assumes this even when the grant
option bit is not set in the ACL. This is a longstanding oversight.
Also, make the new has_xxx_privilege functions for foreign data objects follow
the same coding conventions used by the older ones.
Stephen Frost and Tom Lane
post-data step is run in a separate worker child (a thread on Windows, a child
process elsewhere), up to the number of concurrent workers specified by the new
pg_restore command-line --multi-thread | -m switch.
Andrew Dunstan, with some editing by Tom Lane.
qualifier, and add support for this in pg_dump.
This allows TOAST tables to have user-defined fillfactor, and will also
enable us to move the autovacuum parameters to reloptions without taking
away the possibility of setting values for TOAST tables.
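A minimal example of the qualifier, assuming a table named mytable:
    ALTER TABLE mytable SET (toast.fillfactor = 80);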
case that the command is rewritten into another type of command. The old
behavior to return the command tag of the last executed command was
pretty surprising. In PL/pgSQL, for example, it meant that if a command
was rewritten to a utility statement, FOUND wasn't set at all.
CREATE/ALTER/DROP USER MAPPING are now allowed either by the server owner or
by a user who has been granted USAGE privilege on the server, for a mapping for
their own user name. This is more or less what the SQL standard wants anyway
(plus "implementation-defined").
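For example, a user granted USAGE on a server (server name made up) can now do:
    CREATE USER MAPPING FOR current_user SERVER myserver
        OPTIONS (user 'remote_user', password 'secret');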
Hide information_schema.user_mapping_options.option_value, unless the current
user is the one associated with the user mapping, or is the server owner and
the mapping is for PUBLIC, or is a superuser. This is to protect passwords.
Also, fix a bug in information_schema._pg_foreign_servers, which hid servers
using wrappers where the current user did not have privileges on the wrapper.
The correct behavior is to hide servers where the current user has no
privileges on the server.
to the display, not restricted in the display; new text:
The letter <literal>S</literal> adds the listing of system
objects; without <literal>S</literal>, only non-system
objects are shown.
GUC variable effective_io_concurrency controls how many concurrent block
prefetch requests will be issued.
(The best way to handle this for plain index scans is still under debate,
so that part is not applied yet --- tgl)
Greg Stark
III. Server Administration
15. Installation from Source Code
16. Installation from Source Code on Windows
17. Server Setup and Operation
to give users of binary installations a better idea where to start reading.
suggested by Nikolay Samokhvalov
like a makefile with real dependencies.
Instead of overwriting the old po file, write the new one to .po.new. This is
less annoying and integrates better with the NLS web site.
Also, we can now merge languages that don't have a po file yet, by merging
against all other po files of that language, to pick up recurring translations
automatically. This previously only worked when a po file already existed.
the default. This setting enables constraint exclusion checks only for
appendrel members (ie, inheritance children and UNION ALL arms), which are
the cases in which constraint exclusion is most likely to be useful. Avoiding
the overhead for simple queries that are unlikely to benefit should bring
the cost down to the point where this is a reasonable default setting.
Per today's discussion.
not include postgres.h nor anything else it doesn't directly need. Add
#includes to calling files as needed to compensate. Per my proposal of
yesterday.
This should be noted as a source code change in the 8.4 release notes,
since it's likely to require changes in add-on modules.
to pass the full username@realm string to the authentication instead of
just the username. This makes it possible to use pg_ident.conf to authenticate
users from multiple realms as different database users.
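A hypothetical pg_ident.conf fragment showing such a mapping (realm and user
names invented):
    # MAPNAME   SYSTEM-USERNAME        PG-USERNAME
    krbmap      alice@EXAMPLE.COM      alice
    krbmap      alice@OTHER.REALM      alice_other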
particular this allows EmitWarningsOnPlaceholders messages to show up in the
postmaster log by default. Update elog.h comment to make it clearer what INFO
is for, and fix one example in the SGML docs that was misusing it. Per my
gripe of yesterday.
performing dumps and restores in accordance with a security policy that
forbids logging in directly as superuser, but instead specifies that you
should log into an admin account and then SET ROLE to the superuser.
In passing, clean up some ugly and mostly-broken code for quoting shell
arguments in pg_dumpall.
Benedek László, with some help from Tom Lane
and change auto_explain's custom GUC variables to be named auto_explain.xxx
not just explain.xxx. Per discussion in connection with the
pg_stat_statements patch, it seems like a good idea to have the convention
that custom variable classes are named the same as their defining module.
Committing separately since this should happen regardless of what happens
with pg_stat_statements itself.
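Under the new convention, loading and configuring the module looks roughly like
this (using auto_explain's log_min_duration setting):
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 250;   -- milliseconds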
so that user-defined window functions are possible. For the moment you'll
have to write them in C, for lack of any interface to the WindowObject API
in the available PLs, but it's better than no support at all.
There was some debate about the best syntax for this. I ended up choosing
the "it's an attribute" position --- the other approach will inevitably be
more work, and the likely market for user-defined window functions is
probably too small to justify it.
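Registration of a C-language window function would look something like this
sketch (shared library and symbol names are made up):
    CREATE FUNCTION my_moving_avg(float8) RETURNS float8
        AS 'my_module', 'my_moving_avg'
        LANGUAGE C WINDOW;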
patch. This includes the ability to force the frame to cover the whole
partition, and the ability to make the frame end exactly on the current row
rather than its last ORDER BY peer. Supporting any more of the full SQL
frame-clause syntax will require nontrivial hacking on the window aggregate
code, so it'll have to wait for 8.5 or beyond.
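A sketch of the newly supported frame option, assuming a table
empsalary(depname, salary):
    SELECT depname, salary,
           sum(salary) OVER (ORDER BY salary
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
    FROM empsalary;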
This doesn't do any remote or external things yet, but it gives modules
like plproxy and dblink a standardized and future-proof system for
managing their connection information.
Martin Pihlak and Peter Eisentraut
and certificate revocation list by using connection parameters or environment
variables.
Original patch by Mark Woodward, heavily reworked by Alvaro Herrera and
Magnus Hagander.
vacuuming (it's not), say "database-wide VACUUM" instead of "full-database
VACUUM" in the relevant hint messages. Also, document the permissions needed
to do this. Per today's discussion.
the basic representational details (typlen, typalign, typbyval, typstorage)
to be copied from an existing type rather than listed explicitly in the
CREATE TYPE command. The immediate reason for this is to provide a simple
solution for add-on modules that want to define types represented as int8,
float4, or float8: as of 8.4 the appropriate PASSEDBYVALUE setting is
platform-specific and so it's hard for a SQL script to know what to do.
This patch fixes the contrib/isn breakage reported by Rushabh Lathia.
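A sketch of what an add-on module can now write, assuming its I/O functions
myint8_in/myint8_out already exist:
    CREATE TYPE myint8 (
        INPUT = myint8_in,
        OUTPUT = myint8_out,
        LIKE = int8
    );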
libpq. As noted by Peter, adding this variable created a risk of unexpected
connection failures when talking to older server versions, and since it
doesn't do anything you can't do with PGOPTIONS, it doesn't seem really
necessary. Removing it does occasion a few extra lines in pg_regress.c,
but saving a getenv() call per libpq connection attempt is perhaps worth
that anyway.
The information on why the shared libraries are built the way they are
was not relevant to end users and has been made a mailing list archive
link in Makefile.shlib.
locate the target row, if the cursor was declared with FOR UPDATE or FOR
SHARE. This approach is more flexible and reliable than digging through the
plan tree; for instance it can cope with join cursors. But we still provide
the old code for use with non-FOR-UPDATE cursors. Per gripe from Robert Haas.
another section if required by the platform (instead of the old way of
building them in section "l" and always transforming them to the
platform-specific section).
This speeds up the installation on common platforms, and it avoids some
funny business with the man page tools and build process.
anyelement. This lacks the WITH ORDINALITY option, as well as the multiple
input arrays option added in the most recent SQL specs. But it's still a
pretty useful subset of the spec's functionality, and it is enough to
allow obsoleting contrib/intagg.
We don't actually use this anywhere, but it might come in handy for dealing
with SELECT/WITH/TABLE.
It works with both the old and the new man page target (for some value of
"works").
function as a special case.
This version still has the suspicious behavior of returning null for an
empty array (rather than zero), but this may need a wholesale revision of
empty array behavior, currently under discussion.
Jim Nasby, Robert Haas, Peter Eisentraut
specifically, we can input either the "format with designators" or the
"alternative format", and we can output the former when IntervalStyle is set
to iso_8601.
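For example:
    SET IntervalStyle = iso_8601;
    SELECT INTERVAL 'P1Y2M3DT4H5M6S';   -- the "format with designators", accepted on input
                                        -- and now produced on output in this style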
Ron Mayer
different locales. This is just syntactical sweetener over --lc-collate and
--lc-ctype. Per discussion.
While at it, properly document --lc-ctype and --lc-collate in SGML docs,
which apparently were forgotten (or purposefully omitted?) when they were
created.
("there might be triggers") rather than an exact count. This is necessary
catalog infrastructure for the upcoming patch to reduce the strength of
locking needed for trigger addition/removal. Split out and committed
separately for ease of reviewing/testing.
In passing, also get rid of the unused pg_class columns relukeys, relfkeys,
and relrefs, which haven't been maintained in many years and now have no
chance of ever being maintained (because of wishing to avoid locking).
Simon Riggs
from DateStyle, and create a new interval style that produces output matching
the SQL standard (at least for interval values that fall within the standard's
restrictions). IntervalStyle is also used to resolve the conflict between the
standard and traditional Postgres rules for interpreting negative interval
input.
Ron Mayer
data type. This patch takes the approach of allowing an optional hyphen after
each group of four hex digits.
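For example, this alternative spelling is now accepted:
    SELECT 'a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11'::uuid;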
Author: Robert Haas <robertmhaas@gmail.com>
upon requests from backends, rather than on a fixed 500msec cycle. (There's
still throttling logic to ensure it writes no more often than once per
500msec, though.) This should result in a significant reduction in stats file
write traffic in typical scenarios where the stats are demanded only
infrequently.
This approach also means that the former difficulty with changing
stats_temp_directory on-the-fly has gone away, so remove the caution about
that as well as the thrashing we did to minimize the trouble window.
In passing, also fix pgstat_report_stat() so that we will send a stats
message if we have function call stats but not table stats to report;
this fixes a bug in the recent patch to support function-call stats.
Martin Pihlak
RETURNING clause, not just a SELECT as formerly.
A side effect of this patch is that when a set-returning SQL function is used
in a FROM clause, performance is improved because the output is collected into
a tuplestore within the function, rather than using the less efficient
value-per-call mechanism.
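A sketch of such a function, assuming a table items(name text):
    CREATE FUNCTION add_item(text) RETURNS SETOF items AS $$
        INSERT INTO items(name) VALUES ($1) RETURNING *;
    $$ LANGUAGE sql;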
BSD sed. So write it in Perl, which is more portable and a bit faster, too.
We already use Perl for standard documentation builds, so this imposes no
additional requirement.
via a tuplestore instead of value-per-call. Refactor a few things to reduce
ensuing code duplication with nodeFunctionscan.c. This represents the
reasonably noncontroversial part of my proposed patch to switch SQL functions
over to returning tuplestores. For the moment, SQL functions still do things
the old way. However, this change enables PL SRFs to be called in targetlists
(observe changes in plperl regression results).
recursion when we are unable to convert a localized error message to the
client's encoding. We've been over this ground before, but as reported by
Ibrar Ahmed, it still didn't work in the case of conversion failures for
the conversion-failure message itself :-(. Fix by installing a "circuit
breaker" that disables attempts to localize this message once we get into
recursion trouble.
Patch all supported branches, because it is in fact broken in all of them;
though I had to add some missing translations to the older branches in
order to expose the failure in the particular test case I was using.
after each other (since we already add a newline on each, this makes them
multiline).
Previously a new error would just overwrite the old one, so for example any
error caused when trying to connect with SSL enabled would be overwritten
by the error message from the non-SSL connection when using sslmode=prefer.
* make LDAP use this instead of the hacky previous method to specify
the DN to bind as
* make all auth options behave the same when they are not compiled
into the server
* rename "ident maps" to "user name maps", and support them for all
auth methods that provide an external username
This makes a backwards incompatible change in the format of pg_hba.conf
for the ident, PAM and LDAP authentication methods.
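Hypothetical pg_hba.conf lines in the new option style (host names, addresses,
and map name invented):
    # TYPE  DATABASE  USER  CIDR-ADDRESS    METHOD  OPTIONS
    host    all       all   10.0.0.0/8      ldap    ldapserver=ldap.example.com ldapprefix="cn=" ldapsuffix=", dc=example, dc=com"
    host    all       all   192.168.0.0/24  krb5    map=krbmap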
scanning; GiST and GIN do not, and it seems like too much trouble to make
them do so. By teaching ExecSupportsBackwardScan() about this restriction,
we ensure that the planner will protect a scroll cursor from the problem
by adding a Materialize node.
In passing, fix another longstanding bug in the same area: backwards scan of
a plan with set-returning functions in the targetlist did not work either,
since the TupFromTlist expansion code pays no attention to direction (and
has no way to run a SRF backwards anyway). Again the fix is to make
ExecSupportsBackwardScan check this restriction.
Also adjust the index AM API specification to note that mark/restore support
is unnecessary if the AM can't produce ordered output.
the timestamp types. Turns out this doesn't even reduce the available
range of dates, since the restriction to dates that work for Julian-date
arithmetic is much tighter than the int32 range anyway. Per a longstanding
TODO item.
depth-first search order. Upon close reading of SQL:2008, it seems that the
spec's SEARCH DEPTH FIRST and SEARCH BREADTH FIRST options do not actually
guarantee any particular result order: what they do is provide a constructed
column that the user can then sort on in the outer query. So this is actually
just as much functionality ...
pseudo-type record[] to represent arrays of possibly-anonymous composite
types. Since composite datums carry their own type identification, no
extra knowledge is needed at the array level.
The main reason for doing this right now is that it is necessary to support
the general case of detection of cycles in recursive queries: if you need to
compare more than one column to detect a cycle, you need to compare a ROW()
to an array built from ROW()s, at least if you want to do it as the spec
suggests. Add some documentation and regression tests concerning the cycle
detection issue.
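A sketch of the intended usage, assuming a table graph(id, link, f1, f2); the
path column is of the new record[] type:
    WITH RECURSIVE search_graph(id, link, path, cycle) AS (
        SELECT g.id, g.link, ARRAY[ROW(g.f1, g.f2)], false
        FROM graph g
      UNION ALL
        SELECT g.id, g.link, path || ROW(g.f1, g.f2), ROW(g.f1, g.f2) = ANY(path)
        FROM graph g, search_graph sg
        WHERE g.id = sg.link AND NOT cycle
    )
    SELECT * FROM search_graph;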
implementation uses an in-memory hash table, so it will poop out for very
large recursive results ... but the performance characteristics of a
sort-based implementation would be pretty unpleasant too.
relation forks. While the file names are not visible to users, for those
that do peek into the data directory, it's nice to have more descriptive
names. Per Greg Stark's suggestion.
There are some unimplemented aspects: recursive queries must use UNION ALL
(should allow UNION too), and we don't have SEARCH or CYCLE clauses.
These might or might not get done for 8.4, but even without them it's a
pretty useful feature.
There are also a couple of small loose ends and definitional quibbles,
which I'll send a memo about to pgsql-hackers shortly. But let's land
the patch now so we can get on with other development.
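A minimal example that works with what is committed here:
    WITH RECURSIVE t(n) AS (
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 10
    )
    SELECT sum(n) FROM t;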
Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane
name of a fork ('main' or 'fsm', at the moment) to pg_relation_size() to
get the size of a specific fork. Defaults to 'main', if none given.
While we're at it, modify pg_relation_size to take a regclass as argument,
instead of separate variants taking oid and name. This change is
transparent to typical use where the table name is passed as a string
literal, like pg_relation_size('table'), but will break queries like
pg_relation_size(namecol), where namecol is of type name. text-type input
still works, and using a non-schema-qualified table name is not very
reliable anyway, so this is unlikely to break anyone's queries in practice.
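Typical calls (table name made up):
    SELECT pg_relation_size('mytable');          -- main fork, same as before
    SELECT pg_relation_size('mytable', 'fsm');   -- free space map fork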
large enough for block numbers higher than 2^31. The old pre-FSM-rewrite
pg_freespacemap implementation got this right. While we're at it, remove
some unnecessary #includes.
free space information is stored in a dedicated FSM relation fork for each
relation (except for hash indexes, which don't use the FSM).
This eliminates the max_fsm_relations and max_fsm_pages GUC options; remove any
trace of them from the backend, initdb, and documentation.
Rewrite contrib/pg_freespacemap to match the new FSM implementation. Also
introduce a new variant of the get_raw_page(regclass, int4, int4) function in
contrib/pageinspect that lets you return pages from any relation fork, and
a new fsm_page_contents() function to inspect the new FSM pages.
ctype are now more like encoding, stored in new datcollate and datctype
columns in pg_database.
This is a stripped-down version of Radek Strnad's patch, with further
changes by me.
that presence of the password in the conninfo string must be checked *before*
risking a connection attempt, there is no point in checking it afterwards.
This makes the specification of PQconnectionUsedPassword() a bit simpler
and perhaps more generally useful, too.
conninfo string *before* trying to connect to the remote server, not after.
As pointed out by Marko Kreen, in certain not-very-plausible situations
this could result in sending a password from the postgres user's .pgpass file,
or other places that non-superusers shouldn't have access to, to an
untrustworthy remote server. The cleanest fix seems to be to expose libpq's
conninfo-string-parsing code so that dblink can check for a password option
without duplicating the parsing logic.
Joe Conway, with a little cleanup by Tom Lane
sequence of operations that libpq goes through while creating a PGresult.
Also, remove ill-considered "const" decoration on parameters passed to
event procedures.
guarantees about whether event procedures will receive DESTROY events.
They no longer need to defend themselves against getting a DESTROY
without a successful prior CREATE.
Andrew Chernow
value. This means that hash index lookups are always lossy and have to be
rechecked when the heap is visited; however, the gain in index compactness
outweighs this when the indexed values are wide. Also, we only need to
perform datatype comparisons when the hash codes match exactly, rather than
for every entry in the hash bucket; so it could also win for datatypes that
have expensive comparison functions. A small additional win is gained by
keeping hash index pages sorted by hash code and using binary search to reduce
the number of index tuples we have to look at.
Xiao Meng
This commit also incorporates Zdenek Kotala's patch to isolate hash metapages
and hash bitmaps a bit better from the page header datastructures.
each connection. This makes it possible to catch errors in the pg_hba
file when it's being reloaded, instead of silently reloading a broken
file and failing only when a user tries to connect.
This patch also makes the "sameuser" argument to ident authentication
optional.
and the literal syntax INTERVAL 'string' ... SECOND(n), as required by the
SQL standard. Our old syntax put (n) directly after INTERVAL, which was
a mistake, but will still be accepted for backward compatibility as well
as symmetry with the TIMESTAMP cases.
Change intervaltypmodout to show it in the spec's way, too. (This could
potentially affect clients, if there are any that analyze the typmod of an
INTERVAL in any detail.)
Also fix interval input to handle 'min:sec.frac' properly; I had overlooked
this case in my previous patch.
Document the use of the interval fields qualifier, which up to now we had
never mentioned in the docs. (I think the omission was intentional because
it didn't work per spec; but it does now, or at least close enough to be
credible.)
for editing if no function name is specified. This seems a much cleaner way
to offer that functionality than the original patch had. In passing,
de-clutter the error displays that are given for a bogus function-name
argument, and standardize on "$function$" as the default delimiter for the
function body. (The original coding would use the shortest possible
dollar-quote delimiter, which seems to create unnecessarily high risk of
later conflicts with the user-modified function body.)
In support of that, create a backend function pg_get_functiondef().
The psql command is functional but maybe a bit rough around the edges...
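For example (function signature made up):
    SELECT pg_get_functiondef('myfunc(integer)'::regprocedure);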
Abhijit Menon-Sen
debug_print_plan to appear at LOG message level, not DEBUG1 as historically.
Make debug_pretty_print default to on. Also, cause plans generated via
EXPLAIN to be subject to debug_print_plan. This is all to make
debug_print_plan a reasonably comfortable substitute for the former behavior
of EXPLAIN VERBOSE.
While at it, mark a couple of items completed in 8.4:
! o -Prevent long-lived temporary tables from causing frozen-xid
advancement starvation
! * -Improve performance of shared invalidation queue for multiple CPUs
Also remove a couple of obsolete assignments.
>
> * Fix all set-returning system functions so they support a wildcard
> target list
>
> SELECT * FROM pg_get_keywords() works but SELECT * FROM
> pg_show_all_settings() does not.
variable stats_temp_directory, instead of requiring the admin to
mount/symlink the pg_stat_tmp directory manually.
For now the config variable is PGC_POSTMASTER. Room for further improvement
that would allow it to be changed on-the-fly.
the old JOIN_IN code, but antijoins are new functionality.) Teach the planner
to convert appropriate EXISTS and NOT EXISTS subqueries into semi and anti
joins respectively. Also, LEFT JOINs with suitable upper-level IS NULL
filters are recognized as being anti joins. Unify the InClauseInfo and
OuterJoinInfo infrastructure into "SpecialJoinInfo". With that change,
it becomes possible to associate a SpecialJoinInfo with every join attempt,
which permits some cleanup of join selectivity estimation. That needs to be
taken much further than this patch does, but the next step is to change the
API for oprjoin selectivity functions, which seems like material for a
separate patch. So for the moment the output size estimates for semi and
especially anti joins are quite bogus.
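For instance, with made-up tables, the planner can now treat these as a semi
join and an anti join respectively:
    SELECT * FROM orders o WHERE EXISTS (SELECT 1 FROM lines l WHERE l.order_id = o.id);
    SELECT * FROM orders o WHERE NOT EXISTS (SELECT 1 FROM lines l WHERE l.order_id = o.id);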
This allows the use of a ramdrive (either through mount or symlink) for
the temporary file that's written every half second, which should
reduce I/O.
On server shutdown/startup, the file is written to the old location in
the global directory, to preserve data across restarts.
Bump catversion since the $PGDATA directory layout changed.
or domains). This was already effectively required because you had to own
the I/O functions, and the I/O functions pretty much have to be written in
C since we don't let PL functions take or return cstring. But given the
possible security consequences of a malicious type definition, it seems
prudent to enforce superuser requirement directly. Per recent discussion.
of the STRING type category, thereby opening up the mechanism for user-defined
types. This is mainly for the benefit of citext, though; there aren't likely
to be a lot of types that are all general-purpose character strings.
Per discussion with David Wheeler.
only type categories in which the previous coding made *every* type
preferred; so there is no change in effective behavior, because the function
resolution rules only do something different when faced with a choice
between preferred and non-preferred types in the same category. It just
seems safer and less surprising to have CREATE TYPE default to non-preferred
status ...
with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
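A sketch of how a user-defined base type could set these, assuming the options
are spelled CATEGORY and PREFERRED and that the I/O functions already exist:
    CREATE TYPE mytext (
        INPUT = mytext_in,
        OUTPUT = mytext_out,
        LIKE = text,
        CATEGORY = 'S',       -- general string category
        PREFERRED = false
    );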
and bogus documentation (dimension arrays are int[] not anyarray). Also the
errhint() messages seem to be really errdetail(), since there is nothing
heuristic about them. Some other trivial cosmetic improvements.
need to deconstruct proargmodes for each pg_proc entry inspected by
FuncnameGetCandidates(). Fixes function lookup performance regression
caused by yesterday's variadic-functions patch.
In passing, make pg_proc.probin be NULL, rather than a dummy value '-',
in cases where it is not actually used for the particular type of function.
This should buy back some of the space cost of the extra column.
so long as all the trailing arguments are of the same (non-array) type.
The function receives them as a single array argument (which is why they
have to all be the same type).
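A sketch (function name made up):
    CREATE FUNCTION mleast(VARIADIC numeric[]) RETURNS numeric AS $$
        SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i);
    $$ LANGUAGE sql;
    SELECT mleast(10, -1, 5, 4.4);   -- the four arguments arrive as one numeric[] array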
It might be useful to extend this facility to aggregates, but this patch
doesn't do that.
This patch imposes a noticeable slowdown on function lookup --- a follow-on
patch will fix that by adding a redundant column to pg_proc.
Pavel Stehule
on the most common individual lexemes in place of the mostly-useless default
behavior of counting duplicate tsvectors. Future work: create selectivity
estimation functions that actually do something with these stats.
(Some other things we ought to look at doing: using the Lossy Counting
algorithm in compute_minimal_stats, and using the element-counting idea for
stats on regular arrays.)
Jan Urbanski
Document return type of cast functions.
Also change documentation to prefer the term "binary coercible" in its
present sense instead of the previous term "binary compatible".
wal_segment_size to make those configuration parameters available to clients,
in the same way that block_size was previously exposed. Bernd Helmle, with
comments from Abhijit Menon-Sen and some further tweaking by me.
As the buffer could now be a lot larger than before, and copying it could
thus be a lot more expensive than before, use strcpy instead of memcpy to
copy the query string, as was already suggested in comments. Also, only copy
the PgBackendStatus struct and string if the slot is in use.
Patch by Thomas Lee, with some changes by me.
grammar allows ALTER TABLE/INDEX/SEQUENCE/VIEW interchangeably for all
subforms of those commands, and then we sort out what's really legal
at execution time. This allows the ALTER SEQUENCE/VIEW reference pages
to fully document all the ALTER forms available for sequences and views
respectively, and eliminates a longstanding cause of confusion for users.
The net effect is that the following forms are allowed that weren't before:
ALTER SEQUENCE OWNER TO
ALTER VIEW ALTER COLUMN SET/DROP DEFAULT
ALTER VIEW OWNER TO
ALTER VIEW SET SCHEMA
(There's no actual functionality gain here, but formerly you had to say
ALTER TABLE instead.)
Interestingly, the grammar tables actually get smaller, probably because
there are fewer special cases to keep track of.
I did not disallow using ALTER TABLE for these operations. Perhaps we
should, but there's a backwards-compatibility issue if we do; in fact
it would break existing pg_dump scripts. I did however tighten up
ALTER SEQUENCE and ALTER VIEW to reject non-sequences and non-views
in the new cases as well as a couple of cases where they didn't before.
The patch doesn't change pg_dump to use the new syntaxes, either.
"make all", and then reference them there during the actual tests. This
makes the handling of these files more parallel to that of regress.so,
and in particular simplifies use of the regression tests outside the
original build tree. The PGDG and Red Hat RPMs have been doing this via
patches for a very long time. Inclusion of the change in core was requested
by Jørgen Austvik of Sun, and I can't see any reason not to.
I attempted to fix the MSVC scripts for this too, but they may need
further tweaking ...
* Add deferred trigger queue file
< This item involves dumping large queues into files.
> This item involves dumping large queues into files, or doing some
> kind of join to process all the triggers, or some bulk operation.
require SELECT privilege as well, since you normally need to read existing
column values within such commands. This behavior is according to spec,
but we'd never documented it before. Per gripe from Volkan Yazici.
the associated datatype as their equality member. This means that these
opclasses can now support plain equality comparisons along with LIKE tests,
thus avoiding the need for an extra index in some applications. This
optimization was not possible when the pattern opclasses were first introduced,
because we didn't insist that text equality meant bitwise equality; but we
do now, so there is no semantic difference between regular and pattern
equality operators.
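So, with a made-up table, one index now serves both kinds of queries:
    CREATE INDEX tst_name_idx ON tst (name text_pattern_ops);
    SELECT * FROM tst WHERE name LIKE 'foo%';
    SELECT * FROM tst WHERE name = 'foobar';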
I removed the name_pattern_ops opclass altogether, since it's really useless:
name's regular comparisons are just strcmp() and are unlikely to become
something different. Instead teach indxpath.c that btree name_ops can be
used for LIKE whether or not the locale is C. This might lead to a useful
speedup in LIKE queries on the system catalogs in non-C locales.
The ~=~ and ~<>~ operators are gone altogether. (It would have been nice to
keep them for backward compatibility's sake, but since the pg_amop structure
doesn't allow multiple equality operators per opclass, there's no way.)
A not-immediately-obvious incompatibility is that the sort order within
bpchar_pattern_ops indexes changes --- it had been identical to plain
strcmp, but is now trailing-blank-insensitive. This will impact
in-place upgrades, if those ever happen.
Per discussions a couple months ago.
IDENTITY to be more explicit about the possible hazards. Per gripe from Neil
and subsequent discussion. Eventually we may be able to get rid of this
warning, but for now it had better be there.
sequence to be reset to its original starting value. This requires adding the
original start value to the set of parameters (columns) of a sequence object,
which is a user-visible change with potential compatibility implications;
it also forces initdb.
Also add hopefully-SQL-compatible RESTART/CONTINUE IDENTITY options to
TRUNCATE TABLE. RESTART IDENTITY executes ALTER SEQUENCE RESTART for all
sequences "owned by" any of the truncated relations. CONTINUE IDENTITY is
a no-op option.
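Examples (sequence and table names made up):
    ALTER SEQUENCE serial_seq RESTART;          -- back to the original start value
    TRUNCATE TABLE orders RESTART IDENTITY;     -- also resets sequences owned by orders' columns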
Zoltan Boszormenyi
functions.
Note that because this patch changes FmgrInfo, any external C functions
you might be testing with 8.4 will need to be recompiled.
Patch by Martin Pihlak, some editorialization by me (principally, removing
tracking of getrusage() numbers)
HINT fields to a user-thrown error message, and to specify the SQLSTATE
error code to use. The syntax has also been tweaked so that the
Oracle-compatible case "RAISE exception_name" works (though you won't get a
very nice error message if you just write that much). Lastly, support
the Oracle-compatible syntax "RAISE" with no parameters to re-throw
the current error from within an EXCEPTION block.
In passing, allow the syntax SQLSTATE 'nnnnn' within EXCEPTION lists,
so that there is a way to trap errors with custom SQLSTATE codes.
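A sketch of the new forms in a PL/pgSQL block (names and code invented):
    BEGIN
        RAISE EXCEPTION 'no order with id %', order_id
              USING HINT = 'Check the id.', ERRCODE = 'P0002';
    EXCEPTION
        WHEN SQLSTATE 'P0002' THEN
            RAISE;    -- re-throws the current error
    END;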
Pavel Stehule and Tom Lane
as those for inherited columns; that is, it's no longer allowed for a child
table to not have a check constraint matching one that exists on a parent.
This satisfies the principle of least surprise (rows selected from the parent
will always appear to meet its check constraints) and eliminates some
longstanding bogosity in pg_dump, which formerly had to guess about whether
check constraints were really inherited or not.
The implementation involves adding conislocal and coninhcount columns to
pg_constraint (paralleling attislocal and attinhcount in pg_attribute)
and refactoring various ALTER TABLE actions to be more like those for
columns.
Alex Hunsaker, Nikhil Sontakke, Tom Lane
< * Improve detection of shared memory segments being used by other
< FreeBSD jails
> * Improve detection of shared memory segments being used by others
> by checking the SysV shared memory field 'nattch'
> http://archives.postgresql.org/pgsql-hackers/2008-01/msg00673.php
instead of calling a bunch of individual functions.
This function can also be called directly, taking a PID as an argument, to
return only the data for a single PID.
let XLOG_BLCKSZ and XLOG_SEG_SIZE be set via configure. Per a proposal by
Mark Wong, though I thought it better to call the switches after "wal" rather
than "xlog".
support for a nonsegmented mode from md.c. Per recent discussions, there
doesn't seem to be much value in a "never segment" option as opposed to
segmenting with a suitably large segment size. So instead provide a
configure-time switch to set the desired segment size in units of gigabytes.
While at it, expose a configure switch for BLCKSZ as well.
Zdenek Kotala
it vary with BLCKSZ as before. This agrees with what the documentation says,
and avoids a regression test problem when BLCKSZ is larger than default.
Per recent discussion.
do CancelBackup at a sane place, fix some oversights in the state transitions,
allow only superusers to connect while we are waiting for backup mode to end,
and have pg_ctl warn about this.
Cancel running online backups (by renaming the backup_label file,
thus rendering the backup useless) when shutting down in fast mode.
Laurenz Albe
version ones, to make it clear to users just browsing the notes
that there are a lot more changes available from whatever version
they are at than what's in the minor version release notes.
where Datum is 8 bytes wide. Since this will break old-style C functions
(those still using version 0 calling convention) that have arguments or
results of these types, provide a configure option to disable it and retain
the old pass-by-reference behavior. Likewise, provide a configure option
to disable the recently-committed float4 pass-by-value change.
Zoltan Boszormenyi, plus configurability stuff by me.
of each plan node, instead of its former behavior of dumping the internal
representation of the plan tree. The latter display is still available for
those who really want it (see debug_print_plan), but uses for it are certainly
few and far between. Per discussion.
This patch also removes the explain_pretty_print GUC, which is obsoleted
by the change.
"consistent" functions, and remove pg_amop.opreqcheck, as per recent
discussion. The main immediate benefit of this is that we no longer need
8.3's ugly hack of requiring @@@ rather than @@ to test weight-using tsquery
searches on GIN indexes. In future it should be possible to optimize some
other queries better than is done now, by detecting at runtime whether the
index match is exact or not.
Tom Lane, after an idea of Heikki's, and with some help from Teodor.
instead of plan time. Extend the amgettuple API so that the index AM returns
a boolean indicating whether the indexquals need to be rechecked, and make
that rechecking happen in nodeIndexscan.c (currently the only place where
it's expected to be needed; other callers of index_getnext are just erroring
out for now). For the moment, GIN and GIST have stub logic that just always
sets the recheck flag to TRUE --- I'm hoping to get Teodor to handle pushing
that control down to the opclass consistent() functions. The planner no
longer pays any attention to amopreqcheck, and that catalog column will go
away in due course.
the server version check is now always enforced. Relax the version check to
allow a server that is of pg_dump's own major version but a later minor
version; this is the only case that -i was at all safe to use in.
pg_restore already enforced only a very weak version check, so this is
really just a documentation change for it.
Per discussion.
indexscan always occurs in one call, and the results are returned in a
TIDBitmap instead of a limited-size array of TIDs. This should improve
speed a little by reducing AM entry/exit overhead, and it is necessary
infrastructure if we are ever to support bitmap indexes.
In an only slightly related change, add support for TIDBitmaps to preserve
(somewhat lossily) the knowledge that particular TIDs reported by an index
need to have their quals rechecked when the heap is visited. This facility
is not really used yet; we'll need to extend the forced-recheck feature to
plain indexscans before it's useful, and that hasn't been coded yet.
The intent is to use it to clean up 8.3's horrid @@@ kluge for text search
with weighted queries. There might be other uses in future, but that one
alone is sufficient reason.
Heikki Linnakangas, with some adjustments by me.
algorithm. This is a good deal slower than our old roundoff-error-prone
code for long inputs, so we keep the old code for use in the transcendental
functions, where everything is approximate anyway. Also create a
user-accessible function div(numeric, numeric) to provide access to the
exact result of trunc(x/y) --- since the regular numeric / operator will
round off its result, simply computing that expression in SQL doesn't
reliably give the desired answer. This fixes bug #3387 and various related
corner cases, and improves the usefulness of PG for high-precision integer
arithmetic.
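For example:
    SELECT div(10, 6);             -- 1, the exact integer quotient
    SELECT trunc(10::numeric / 6); -- also 1 here, but can be off by one for very large operands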
specify the cost values to use, instead of always using 1's.
Volkan Yazici
In passing, remove fuzzystrmatch.h, which contained a bunch of stuff that had
no business being in a .h file; fold it into its only user, fuzzystrmatch.c.
that is commands that have out-of-line parameters but the plan is prepared
assuming that the parameter values are constants. This is needed for the
plpgsql EXECUTE USING patch, but will probably have use elsewhere.
This commit includes the SPI functions and documentation, but no callers
nor regression tests. The upcoming EXECUTE USING patch will provide
regression-test coverage. I thought committing this separately made
sense since it's logically a distinct feature.
key files that are similar to the one for the postmaster's data directory
permissions check. (I chose to standardize on that one since it's the most
heavily used and presumably best-wordsmithed by now.) Also eliminate explicit
tests on file ownership in these places, since the ensuing read attempt must
fail anyway if it's wrong, and there seems no value in issuing the same error
message for distinct problems. (But I left in the explicit ownership test in
postmaster.c, since it had its own error message anyway.) Also be more
specific in the documentation's descriptions of these checks. Per a gripe
from Kevin Hunter.
This requires a working 64-bit integer type. If such a type cannot
be found, "--disable-integer-datetimes" can be used to switch
back to the previous floating point-based datetime implementation.
restore_command should report failure on non-existent .backup and .history
files. Tidy up some related text along the way.
Patch by Markus Bertheau, with some editing by Simon Riggs and myself.
< o Consider invalidating the cache or keeping separate cached
< copies when search_path changes
> o Consider keeping separate cached copies when search_path changes
strings. This patch introduces four support functions cstring_to_text,
cstring_to_text_with_len, text_to_cstring, and text_to_cstring_buffer, and
two macros CStringGetTextDatum and TextDatumGetCString. A number of
existing macros that provided variants on these themes were removed.
Most of the places that need to make such conversions now require just one
function or macro call, in place of the multiple notational layers that used
to be needed. There are no longer any direct calls of textout or textin,
and we got most of the places that were using handmade conversions via
memcpy (there may be a few still lurking, though).
This commit doesn't make any serious effort to eliminate transient memory
leaks caused by detoasting toasted text objects before they reach
text_to_cstring. We changed PG_GETARG_TEXT_P to PG_GETARG_TEXT_PP in a few
places where it was easy, but much more could be done.
Brendan Jurd and Tom Lane
errdetail except the string goes only to the server log, replacing the normal
errdetail there. This provides a reasonably clean way of dealing with error
details that are too security-sensitive or too bulky to send to the client.
This commit just adds the infrastructure --- actual uses to follow.
< o Allow pre/data/post files when dumping a single object, for
< performance reasons
> o Allow pre/data/post files when schema and data are dumped
> separately, for performance reasons
except that it returns the string 'NULL', rather than a SQL null, when called
with a null argument. This is often a much more useful behavior for
constructing dynamic queries. Add more discussion to the documentation
about how to use these functions.
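For example:
    SELECT quote_nullable(NULL);   -- the string 'NULL', safe to splice into a query
    SELECT quote_literal(NULL);    -- SQL null, which would nullify the whole concatenation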
Brendan Jurd
directly to all the member expressions, instead of the previous implementation
where the ARRAY[] constructor would infer a common element type and then we'd
coerce the finished array after the fact. This has a number of benefits,
one being that we can allow an empty ARRAY[] construct so long as its
element type is specified by such a cast.
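For example, both of these now work:
    SELECT ARRAY[]::integer[];        -- empty array, element type supplied by the cast
    SELECT ARRAY[1, 2.5]::numeric[];  -- cast applied to each member expression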
Brendan Jurd, minor fixes by me.
dumps can be loaded into databases without the same tablespaces that the
source had. The option acts by suppressing all "SET default_tablespace"
commands, and also CREATE TABLESPACE commands in pg_dumpall's case.
Gavin Roy, with documentation and minor fixes by me.
* Experiment with multi-threaded backend for better I/O utilization
This would allow a single query to make use of multiple I/O channels
simultaneously. One idea is to create a background reader that can
pre-fetch sequential and index scan pages needed by other backends.
This could be expanded to allow concurrent reads from multiple devices
in a partitioned table.
* Experiment with multi-threaded backend for better CPU utilization
This would allow several CPUs to be used for a single query, such as
for sorting or query execution.
* Speed WAL recovery by allowing more than one page to be prefetched
This should be done utilizing the same infrastructure used for
prefetching in general to avoid introducing complex error-prone code
in WAL replay.
pg_listener modifications commanded by LISTEN and UNLISTEN until the end
of the current transaction. This allows us to hold the ExclusiveLock on
pg_listener until after commit, with no greater risk of deadlock than there
was before. Aside from fixing the race condition, this gets rid of a
truly ugly kludge that was there before, namely having to ignore
HeapTupleBeingUpdated failures during NOTIFY. There is a small potential
incompatibility, which is that if a transaction issues LISTEN or UNLISTEN
and then looks into pg_listener before committing, it won't see any resulting
row insertion or deletion, where before it would have. It seems unlikely
that anyone would be depending on that, though.
This patch also disallows LISTEN and UNLISTEN inside a prepared transaction.
That case had some pretty undesirable properties already, such as possibly
allowing pg_listener entries to be made for PIDs no longer present, so
disallowing it seems like a better idea than trying to maintain the behavior.
o Allow COPY in CSV mode to control whether a quoted zero-length
string is treated as NULL
Currently this is always treated as a zero-length string,
which generates an error when loading into an integer column
>
> * Change memory allocation for multi-byte functions so memory is
> allocated inside conversion functions
>
> Currently we preallocate memory based on worst-case usage.
* Consider increasing the number of default statistics target, and
reduce statistics target overhead
Also consider having a larger statistics target for indexed columns
and expression indexes
<
> http://archives.postgresql.org/pgsql-general/2007-06/msg00542.php
* Consider increasing the number of default statistics target, and
reduce statistics target overhead
Also consider having a larger statistics target for indexed columns
and expression indexes
> http://archives.postgresql.org/pgsql-general/2007-05/msg01228.php
>
>
> * Consider increasing the number of default statistics target, and
> reduce statistics target overhead
>
> Also consider having a larger statistics target for indexed columns
> and expression indexes
than dividing them into 1GB segments as has been our longtime practice. This
requires working support for large files in the operating system; at least for
the time being, it won't be the default.
Zdenek Kotala
variables to it. More need to be converted, but I wanted to get this in
before it conflicts with too much...
Other than just centralising the text-to-int conversion for parameters,
this allows the pg_settings view to contain a list of available options
and allows an error hint to show what values are allowed.
With the addition of multiple autovacuum workers, our choices were to delete
the check, document the interaction with autovacuum_max_workers, or complicate
the check to try to hide that interaction. Since this restriction has never
been adequate to ensure backends can't run out of pinnable buffers, it doesn't
really have enough excuse to live to justify the second or third choices.
Per discussion of a complaint from Andreas Kling (see also bug #3888).
This commit also removes several documentation references to this restriction,
but I'm not sure I got them all.
>
> * Add comments on system tables/columns using the information in
> catalogs.sgml
>
> Ideally the information would be pulled from the SGML file
> automatically.
>
>
> * Allow client certificate names to be checked against the client
> hostname
>
> This is already implemented in
> libpq/fe-secure.c::verify_peer_name_matches_certificate() but the code
> is commented out.
> * Prevent malicious functions from being executed with the permissions
> of unsuspecting users
>
> Index functions are safe, so VACUUM and ANALYZE are safe too.
> Triggers, CHECK and DEFAULT expressions, and rules are still vulnerable.
> http://archives.postgresql.org/pgsql-hackers/2008-01/msg00268.php
>
> o Have CONSTRAINT cname NOT NULL preserve the constraint name
>
> Right now pg_attribute.attnotnull records the NOT NULL status
> of the column, but does not record the constraint name
>
<
< o To better utilize resources, restore data, primary keys, and
< indexes for a single table before restoring the next table
<
< Hopefully this will allow the CPU-I/O load to be more uniform
< for simultaneous restores. The idea is to start data restores
< for several objects, and once the first object is done, to move
< on to its primary keys and indexes. Over time, simultaneous
< data loads and index builds will be running.
< * pg_dump
> * pg_dump / pg_restore
> o Allow pg_dump to utilize multiple CPUs and I/O channels by dumping
> multiple objects simultaneously
>
> The difficulty with this is getting multiple dump processes to
> produce a single dump output file.
> http://archives.postgresql.org/pgsql-hackers/2008-02/msg00205.php
>
> o Allow pg_restore to utilize multiple CPUs and I/O channels by
> restoring multiple objects simultaneously
>
> This might require a pg_restore flag to indicate how many
> simultaneous operations should be performed. Only pg_dump's
> -Fc format has the necessary dependency information.
>
> o To better utilize resources, restore data, primary keys, and
> indexes for a single table before restoring the next table
>
> Hopefully this will allow the CPU-I/O load to be more uniform
> for simultaneous restores. The idea is to start data restores
> for several objects, and once the first object is done, to move
> on to its primary keys and indexes. Over time, simultaneous
> data loads and index builds will be running.
>
> o To better utilize resources, allow pg_restore to check foreign
> keys simultaneously, where possible
> o Allow pg_restore to create all indexes of a table
> concurrently, via a single heap scan
>
> This requires a pg_dump -Fc file because that format contains
> the required dependency information.
> http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php
>
> o Allow pg_restore to load different parts of the COPY data
> simultaneously
< single heap scan, and have a restore of a pg_dump somehow use it
> single heap scan, and have pg_restore use it
< http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php
> * Speed WAL recovery by allowing more than one page to be prefetched
>
> This involves having a separate process that can be told which pages
> the recovery process will need in the near future.
> http://archives.postgresql.org/pgsql-hackers/2008-02/msg01279.php
>
ssh -L 3333:foo.com:5432 joe@foo.com
I think this should be changed to
ssh -L 3333:localhost:5432 joe@foo.com
The reason is that this assumes the postgres server on foo.com allows
connections from foo.com, which is not allowed by the default
listen_addresses setting. Add more detail explaining this.
pointed out by Faheem Mitha
Also change the example port number 3333 to 63333 so no one can complain
that we are stealing a reserved port number.
represented as "char ...[4]" not "int32". Since the length word is never
supposed to be accessed via this struct member anyway, this won't break
any existing code that is following the rules. The advantage is that C
compilers will no longer assume that a pointer to struct varlena is
word-aligned, which prevents incorrect optimizations in TOAST-pointer
access and perhaps other places. gcc doesn't seem to do this (at least
not at -O2), but the problem is demonstrable on some other compilers.
I changed struct inet as well, but didn't bother to touch a lot of other
struct definitions in which it wouldn't make any difference because there
were other fields forcing int alignment anyway. Hopefully none of those
struct definitions are used for accessing unaligned Datums.
- Change configure.in to use Autoconf 2.61 and update generated files.
- Update build system and documentation to support the new directory variables
offered by Autoconf 2.61.
- Replace usages of PGAC_CHECK_ALIGNOF by AC_CHECK_ALIGNOF, now available
in Autoconf 2.61.
- Drop our patched version of AC_C_INLINE, as Autoconf now has the change.
outside the 32-bit-time_t range. Also, refer to Olson's tz database
as the 'zoneinfo' database, a name that upstream sometimes uses, not
'zic database' which they never use.
(or RETURNING), but only when the output name is not any SQL keyword.
This seems as close as we can get to the standard's syntax without a
great deal of thrashing. Original patch by Hiroshi Saito, amended by me.
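For example, this is now accepted:
    SELECT 1 one, 2 + 2 four;   -- aliases without AS, as long as they are not keywords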
doing anything interesting, such as calling RevalidateCachedPlan(). The
necessity of this is demonstrated by an example from Willem Buitendyk:
during a replan, the planner might try to evaluate SPI-using functions,
and so we'd better be in a clean SPI context.
A small downside of this fix is that these two functions will now fail
outright if called when not inside a SPI-using procedure (ie, a
SPI_connect/SPI_finish pair). The documentation never promised or suggested
that that would work, though; and they are normally used in concert with
other functions, mainly SPI_prepare, that always have failed in such a case.
So the odds of breaking something seem pretty low.
In passing, make SPI_is_cursor_plan's error handling convention clearer,
and fix documentation's erroneous claim that SPI_cursor_open would
return NULL on error.
Before 8.3 these functions could not invoke replanning, so there is probably
no need for back-patching.
in .bat simply did not work, and it called them in the wrong order,
some several times, and some not at all. So this unrolls all subroutine
calls.
This should fix the issues with clean deleting the wrong files reported
by Dave Page.
While at it, add the "clean dist" option to act like "make distclean",
and no longer remove the flex/bison output files by default. This should
fix the problem reported by Pavel Golub in bug #3909.
< * Improve deadlock detection when deleting items from shared buffers
> * Improve deadlock detection when a page cleaning lock conflicts
> with a shared buffer that is pinned
buildfarm plus a narrative description of the CPU types and operating systems
on which Postgres is likely to work. Now that we've almost completely
decoupled CPU and OS considerations, the former tabular style isn't all that
enlightening anyway. Perhaps more importantly, no one seems particularly
interested in maintaining the table by hand when we have the buildfarm.
prevent anti-wraparound vacuuming, and to caution against setting unreasonably
small values of freeze_max_age. Also put in a notice that this catalog is
likely to disappear entirely in some future release. Per discussion of
bug #3898 from Steven Flatt.
ParameterStatus message can be sent during COPY OUT: it's definitely
possible, since COPY from a SELECT subquery can trigger any user-defined
function.
>
> * Add the ability to automatically create materialized views
>
> Right now materialized views require the user to create triggers on the
> main table to keep the summary table current. SQL syntax should be able
> to manage the triggers and summary table automatically. A more
> sophisticated implementation would automatically retrieve from the
> summary table when the main table is referenced, if possible.
>
we need to be able to swallow NOTICE messages, and potentially also
ParameterStatus messages (although the latter would be a bit weird),
without exiting COPY OUT state. Fix it, and adjust the protocol documentation
to emphasize the need for this. Per off-list report from Alexander Galler.
and CLUSTER) execute as the table owner rather than the calling user, using
the same privilege-switching mechanism already used for SECURITY DEFINER
functions. The purpose of this change is to ensure that user-defined
functions used in index definitions cannot acquire the privileges of a
superuser account that is performing routine maintenance. While a function
used in an index is supposed to be IMMUTABLE and thus not able to do anything
very interesting, there are several easy ways around that restriction; and
even if we could plug them all, there would remain a risk of reading sensitive
information and broadcasting it through a covert channel such as CPU usage.
To prevent bypassing this security measure, execution of SET SESSION
AUTHORIZATION and SET ROLE is now forbidden within a SECURITY DEFINER context.
Thanks to Itagaki Takahiro for reporting this vulnerability.
Security: CVE-2007-6600
< * Allow major upgrades without dump/reload, perhaps using pg_upgrade
< [pg_upgrade]
< * Check for unreferenced table files created by transactions that were
< in-progress when the server terminated abruptly
<
< http://archives.postgresql.org/pgsql-patches/2006-06/msg00096.php
<
> * Check for unreferenced table files created by transactions that were
> in-progress when the server terminated abruptly
>
> http://archives.postgresql.org/pgsql-patches/2006-06/msg00096.php
>
< * Support table partitioning that allows a single table to be stored
< in subtables that are partitioned based on the primary key or a WHERE
< clause
< creation of rules for INSERT/UPDATE/DELETE, and constraints for
< rapid partition selection. Options could include range and hash
> creation of triggers or rules for INSERT/UPDATE/DELETE, and constraints
> for rapid partition selection. Options could include range and hash
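For reference, a hedged sketch of the manual setup that partitioning item describes, using inheritance, CHECK constraints, and a rewrite rule (all names invented):

    -- Invented names; hand-rolled range partitioning.
    CREATE TABLE measurements (logdate date, reading numeric);
    CREATE TABLE measurements_2008
        (CHECK (logdate >= '2008-01-01' AND logdate < '2009-01-01'))
        INHERITS (measurements);

    -- A rule (a trigger works too) routes inserts to the matching partition.
    CREATE RULE measurements_2008_ins AS ON INSERT TO measurements
        WHERE (NEW.logdate >= '2008-01-01' AND NEW.logdate < '2009-01-01')
        DO INSTEAD INSERT INTO measurements_2008 VALUES (NEW.*);

    -- The CHECK constraints enable rapid partition selection in the planner.
    SET constraint_exclusion = on;
    SELECT * FROM measurements WHERE logdate = '2008-06-15';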
<
< * Improve replication solutions
<
< o Load balancing
<
< You can use any of the master/slave replication servers to use a
< standby server for data warehousing. To allow read/write queries to
< multiple servers, you need multi-master replication like pgcluster.
<
< o Allow replication over unreliable or non-persistent links
<
<
< o Mark change-on-restart-only values in postgresql.conf
< All objects in the default database tablespace must have default
< tablespace specifications. This is because new databases are
< created by copying directories. If you mix default tablespace
< tables and tablespace-specified tables in the same directory,
< creating a new database from such a mixed directory would create a
< new database with tables that had incorrect explicit tablespaces.
< To fix this would require modifying pg_class in the newly copied
< database, which we don't currently do.
> Currently all objects in the default database tablespace must
> have default tablespace specifications. This is because new
> databases are created by copying directories. If you mix default
> tablespace tables and tablespace-specified tables in the same
> directory, creating a new database from such a mixed directory
> would create a new database with tables that had incorrect
> explicit tablespaces. To fix this would require modifying
> pg_class in the newly copied database, which we don't currently
> do.
<
< o Allow recovery.conf to allow the same syntax as
> o Allow recovery.conf to support the same syntax as
< * Allow user-defined types to specify a type modifier at table creation
< time
< * Allow all data types to cast to and from TEXT
<
< http://archives.postgresql.org/pgsql-hackers/2007-04/msg00017.php
<
<
< o Add support for year-month syntax, INTERVAL '50-6' YEAR TO MONTH
< o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS
< INTERVAL MONTH), and this should return '12 months'
> o Add support for year-month syntax, INTERVAL '50-6' YEAR
> TO MONTH
> o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1
> year' AS INTERVAL MONTH), and this should return '12 months'
< * Allow MONEY to be cast to/from other numeric data types
> * Allow MONEY to be easily cast to/from other numeric data types
>
< * Allow functions to have a schema search path specified at creation time
< * Fix cases where invalid byte encodings are accepted by the database,
< but throw an error on SELECT
<
< http://archives.postgresql.org/pgsql-hackers/2007-03/msg00767.php
< * Improve logging of prepared statements recovered during startup
> * Improve logging of prepared transactions recovered during startup
< * Make standard_conforming_strings the default in 8.4?
> * Make standard_conforming_strings the default in 8.5?
< * Allow the count returned by SELECT, etc to be to represent as an int64
> * Allow the count returned by SELECT, etc to be represented as an int64
< o Use more reliable method for CREATE DATABASE to get a consistent
< copy of db?
< o Fix transaction restriction checks for CREATE DATABASE and
< other commands
<
< http://archives.postgresql.org/pgsql-hackers/2007-01/msg00133.php
< currently allowed.
> currently allowed. This currently is done if the table is
> created inside the same transaction block as the COPY because
> no other backends can see the table.
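A hedged example of the case that item describes (table name and file path are invented): because the CREATE and the COPY share one transaction block, no other backend can see the new table before COMMIT, which is what makes the optimization safe.

    BEGIN;
    CREATE TABLE staging_load (id integer, payload text);
    COPY staging_load FROM '/tmp/load.dat';   -- invented path
    COMMIT;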
< o Add SET PATH for schemas?
<
< This is basically the same as SET search_path.
< o Enforce referential integrity for system tables
< o Add Oracle-style packages (Pavel)
<
< A package would be a schema with session-local variables,
< public/private functions, and initialization functions. It
< is also possible to implement these capabilities
< in all schemas and not use a separate "packages"
< syntax at all.
<
< http://archives.postgresql.org/pgsql-hackers/2006-08/msg00384.php
<
< o Add single-step debugging of functions
< o Allow RETURN to return row or record functions
<
< http://archives.postgresql.org/pgsql-patches/2005-11/msg00045.php
< http://archives.postgresql.org/pgsql-patches/2006-08/msg00397.php
< http://archives.postgresql.org/pgsql-hackers/2006-09/msg00388.php
<
< o Fix problems with RETURN NEXT on tables with
< dropped/added columns after function creation
<
< http://archives.postgresql.org/pgsql-patches/2006-02/msg00165.php
<
< * Make consistent use of long/short command options --- pg_ctl needs
< long ones, pg_config doesn't have short ones, postgres doesn't have
< enough long ones, etc.
<
<
<
< o Consider parsing the -c string into individual queries so each
< is run in its own transaction
<
< http://archives.postgresql.org/pgsql-hackers/2007-01/msg00291.php
<
<
< o Remove unnecessary function pointer abstractions in pg_dump source
< code
> o Remove unnecessary function pointer abstractions in pg_dump source
> code
<
<
< o Fix SSL retry to avoid useless repeated connection attempts and
< ensuing misleading error messages
>
<
< This is difficult because it requires datatype-specific knowledge.
<
< * Improve commit_delay handling to reduce fsync()
< * %Add an option to sync() before fsync()'ing checkpoint files
>
< * Reduce lock time during VACUUM FULL by moving tuples with read lock,
< then write lock and truncate table
<
< Moved tuples are invisible to other backends so they don't require a
< write lock. However, the read lock promotion to write lock could lead
< to deadlock situations.
<
< * Prevent long-lived temporary tables from causing frozen-xid advancement
< starvation
<
< The problem is that autovacuum cannot vacuum them to set frozen xids;
< only the session that created them can do that.
<
<
<
< o Use free-space map information to guide refilling
< o Consider logging activity either to the logs or a system view
> The problem is that autovacuum cannot vacuum them to set frozen xids;
> only the session that created them can do that.
< * Add connection pooling
<
< It is unclear if this should be done inside the backend code or done
< by something external like pgpool. The passing of file descriptors to
< existing backends is one of the difficulties with a backend approach.
<
< * Consider reducing memory used for shared buffer reference count
<
< http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
<
< * %Remove memory/file descriptor freeing before ereport(ERROR)
< * %Promote debug_query_string into a server-side function current_query()
< * Allow ecpg to work with MSVC and BCC
< * Add xpath_array() to /contrib/xml2 to return results as an array
< * Allow building in directories containing spaces
<
< This is probably not possible because 'gmake' and other compiler tools
< do not fully support quoting of paths with spaces.
<
< * Fix sgmltools so PDFs can be generated with bookmarks
< * Split out libpq pgpass and environment documentation sections to make
< it easier for non-developers to find
< * Use strlcpy() rather than our StrNCpy() macro
<
< http://archives.postgresql.org/pgsql-hackers/2006-09/msg02108.php
<
< o Re-enable timezone output on log_line_prefix '%t' when a
< shorter timezone string is available
< * Allow statements across databases or servers with transaction
< semantics
<
< This can be done using dblink and two-phase commit.
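A hedged sketch of that dblink-plus-two-phase-commit approach (connection string, table, and global transaction identifier are invented; both servers must allow prepared transactions via max_prepared_transactions):

    BEGIN;
    INSERT INTO accounts VALUES (1, 100);                           -- local work
    SELECT dblink_connect('remote', 'host=otherhost dbname=otherdb');
    SELECT dblink_exec('remote', 'BEGIN');
    SELECT dblink_exec('remote', 'INSERT INTO accounts VALUES (1, 100)');
    SELECT dblink_exec('remote', 'PREPARE TRANSACTION ''gx1''');    -- remote half
    PREPARE TRANSACTION 'gx1';                                      -- local half
    -- An external coordinator then commits (or aborts) both halves:
    COMMIT PREPARED 'gx1';
    SELECT dblink_exec('remote', 'COMMIT PREPARED ''gx1''');
    SELECT dblink_disconnect('remote');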
> * Add Oracle-style packages (Pavel)
< * Add the features of packages
> A package would be a schema with session-local variables,
> public/private functions, and initialization functions. It
> is also possible to implement these capabilities
> in any schema and not use a separate "packages"
> syntax at all.
< o Make private objects accessible only to objects in the same schema
< o Allow current_schema.objname to access current schema objects
< o Add session variables
< o Allow nested schemas
> http://archives.postgresql.org/pgsql-hackers/2006-08/msg00384.php
< * Experiment with multi-threaded backend better resource utilization
<
< This would allow a single query to make use of multiple CPU's or
< multiple I/O channels simultaneously. One idea is to create a
< background reader that can pre-fetch sequential and index scan
< pages needed by other backends. This could be expanded to allow
< concurrent reads from multiple devices in a partitioned table.
<
> * Experiment with multi-threaded backend better resource utilization
>
> This would allow a single query to make use of multiple CPU's or
> multiple I/O channels simultaneously. One idea is to create a
> background reader that can pre-fetch sequential and index scan
> pages needed by other backends. This could be expanded to allow
> concurrent reads from multiple devices in a partitioned table.
* Consider having the background writer update the transaction status
hint bits before writing out the page
Implementing this requires the background writer to have access to system
catalogs and the transaction status log.
<
< * Allow free-behind capability for large sequential scans to avoid
< kernel cache spoiling
<
< Posix_fadvise() can control both sequential/random file caching and
< free-behind behavior, but it is unclear how the setting affects other
< backends that also have the file open, and the feature is not supported
< on all operating systems.
useful and confuses people who think it is the same as -U. (Eventually
we might want to re-introduce it as an alias for -U, but that should
not happen until the switch has been absent for a few releases.)
Likewise in pg_dump and pg_restore. Per gripe from Robert Treat and
subsequent discussion.