Commit graph

633 commits

Tom Lane
bb03010b9f Remove the "opaque" pseudo-type and associated compatibility hacks.
A long time ago, it was necessary to declare datatype I/O functions,
triggers, and language handler support functions in a very type-unsafe
way involving a single pseudo-type "opaque".  We got rid of those
conventions in 7.3, but there was still support in various places to
automatically convert such functions to the modern declaration style,
to be able to transparently re-load dumps from pre-7.3 servers.
It seems unnecessary to continue to support that anymore, so take out
the hacks; whereupon the "opaque" pseudo-type itself is no longer
needed and can be dropped.

This is part of a group of patches removing various server-side kluges
for transparently upgrading pre-8.0 dump files.  Since we've had few
complaints about dropping pg_dump's support for dumping from pre-8.0
servers (commit 64f3524e2), it seems okay to now remove these kluges.

Discussion: https://postgr.es/m/4110.1583255415@sss.pgh.pa.us
2020-03-05 15:48:56 -05:00
Tom Lane
3ed2005ff5 Introduce macros for typalign and typstorage constants.
Our usual practice for "poor man's enum" catalog columns is to define
macros for the possible values and use those, not literal constants,
in C code.  But for some reason lost in the mists of time, this was
never done for typalign/attalign or typstorage/attstorage.  It's never
too late to make it better though, so let's do that.

The reason I got interested in this right now is the need to duplicate
some uses of the TYPSTORAGE constants in an upcoming ALTER TYPE patch.
But in general, this sort of change aids greppability and readability,
so it's a good idea even without any specific motivation.

I may have missed a few places that could be converted, and it's even
more likely that pending patches will re-introduce some hard-coded
references.  But that's not fatal --- there's no expectation that
we'd actually change any of these values.  We can clean up stragglers
over time.

Discussion: https://postgr.es/m/16457.1583189537@sss.pgh.pa.us
2020-03-04 10:34:25 -05:00
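
To illustrate commit 3ed2005ff5 above, here is a minimal C sketch of code using the new macros instead of hard-coded characters; the helper function and its name are illustrative, not from the tree, and the macro names are assumed to live in pg_type.h as the commit describes:

    #include "postgres.h"
    #include "catalog/pg_type.h"

    /* Formerly written with literal constants: typalign == 'i', typstorage == 'p' */
    static bool
    type_is_int_aligned_plain(Form_pg_type typform)
    {
        return typform->typalign == TYPALIGN_INT &&
               typform->typstorage == TYPSTORAGE_PLAIN;
    }
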
Tom Lane
70a7732007 Remove support for upgrading extensions from "unpackaged" state.
Andres Freund pointed out that allowing non-superusers to run
"CREATE EXTENSION ... FROM unpackaged" has security risks, since
the unpackaged-to-1.0 scripts don't try to verify that the existing
objects they're modifying are what they expect.  Just attaching such
objects to an extension doesn't seem too dangerous, but some of them
do more than that.

We could have resolved this, perhaps, by still requiring superuser
privilege to use the FROM option.  However, it's fair to ask just what
we're accomplishing by continuing to lug the unpackaged-to-1.0 scripts
forward.  None of them have received any real testing since 9.1 days,
so they may not even work anymore (even assuming that one could still
load the previous "loose" object definitions into a v13 database).
And an installation that's trying to go from pre-9.1 to v13 or later
in one jump is going to have worse compatibility problems than whether
there's a trivial way to convert their contrib modules into extension
style.

Hence, let's just drop both those scripts and the core-code support
for "CREATE EXTENSION ... FROM".

Discussion: https://postgr.es/m/20200213233015.r6rnubcvl4egdh5r@alap3.anarazel.de
2020-02-19 16:59:14 -05:00
Alvaro Herrera
c9d2977519 Clean up newlines following left parentheses
We used to strategically place newlines after some function call left
parentheses to make pgindent move the argument list a few chars to the
left, so that the whole line would fit under 80 chars.  However,
pgindent no longer does that, so the newlines just made the code
vertically longer for no reason.  Remove those newlines, and reflow some
of those lines for some extra naturality.

Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql
2020-01-30 13:42:14 -03:00
Tom Lane
50fc694e43 Invent "trusted" extensions, and remove the pg_pltemplate catalog.
This patch creates a new extension property, "trusted".  An extension
that's marked that way in its control file can be installed by a
non-superuser who has the CREATE privilege on the current database,
even if the extension contains objects that normally would have to be
created by a superuser.  The objects within the extension will (by
default) be owned by the bootstrap superuser, but the extension itself
will be owned by the calling user.  This allows replicating the old
behavior around trusted procedural languages, without all the
special-case logic in CREATE LANGUAGE.  We have, however, chosen to
loosen the rules slightly: formerly, only a database owner could take
advantage of the special case that allowed installation of a trusted
language, but now anyone who has CREATE privilege can do so.

Having done that, we can delete the pg_pltemplate catalog, moving the
knowledge it contained into the extension script files for the various
PLs.  This ends up being no change at all for the in-core PLs, but it is
a large step forward for external PLs: they can now have the same ease
of installation as core PLs do.  The old "trusted PL" behavior was only
available to PLs that had entries in pg_pltemplate, but now any
extension can be marked trusted if appropriate.

This also removes one of the stumbling blocks for our Python 2 -> 3
migration, since the association of "plpythonu" with Python 2 is no
longer hard-wired into pg_pltemplate's initial contents.  Exactly where
we go from here on that front remains to be settled, but one problem
is fixed.

Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.

Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-29 18:42:43 -05:00
Peter Eisentraut
45223fd9ce Modernize Python exception syntax in tests
Change the exception syntax used in the tests to use the more current

    except Exception as ex:

rather than the old

    except Exception, ex:

Since support for Python <2.6 has been removed, all supported versions
now support the new style, and we can save one step in the Python 3
compatibility conversion.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
2020-01-08 22:47:22 +01:00
Peter Eisentraut
37f21ed132 Remove support for Python older than 2.6
Supporting very old Python versions is a maintenance burden,
especially with the several variant test files to maintain for Python
<2.6.

Since we have dropped support for older OpenSSL versions in
7b283d0e1d, RHEL 5 is now effectively
desupported, and that was also the only mainstream operating system
still using Python versions before 2.6, so it's a good time to drop
those as well.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
2020-01-08 22:47:22 +01:00
Bruce Momjian
7559d8ebfa Update copyrights for 2020
Backpatch-through: update all files in master, backpatch legal files through 9.4
2020-01-01 12:21:45 -05:00
Michael Paquier
aa3ef7ff50 Fix some OBJS lists in two Makefiles to be ordered alphabetically
These were missed in 01368e5, and cover plpython and the
backend's tsearch code.

Author: Mahendra Singh
Discussion: https://postgr.es/m/CAKYtNAo4mxRRyDB0YqE6QLh17XD7pPQotpGm3GnHS+gQKz4zQQ@mail.gmail.com
2019-12-18 10:42:40 +09:00
Amit Kapila
e0487223ec Make the order of the header file includes consistent.
Similar to commits 14aec03502, 7e735035f2 and dddf4cdc33, this commit
makes the order of header file inclusion consistent in more places.

Author: Vignesh C
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
2019-11-25 08:08:57 +05:30
Peter Eisentraut
a63c84e59a Fix some compiler warnings on older compilers
Some older compilers appear to not understand the recently introduced
PG_FINALLY code structure that well in some circumstances and complain
about possibly uninitialized variables.  So to fix, initialize the
variables explicitly in the cases complained about.

Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
2019-11-04 11:07:32 +01:00
Peter Eisentraut
604bd36711 PG_FINALLY
This gives an alternative way of catching exceptions, for the common
case where the cleanup code is the same in the error and non-error
cases.  So instead of

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_CATCH();
    {
        cleanup();
        PG_RE_THROW();
    }
    PG_END_TRY();
    cleanup();

one can write

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_FINALLY();
    {
        cleanup();
    }
    PG_END_TRY();

Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
2019-11-01 11:18:03 +01:00
Peter Eisentraut
400d5ffcaf Simplify PGAC_STRUCT_TIMEZONE Autoconf macro
Since 63bd0db121 we don't use tzname
anymore, so we don't need to check for it.  Instead, just keep the
part of PGAC_STRUCT_TIMEZONE that we need, which is the check for
struct tm.tm_zone.

Discussion: https://www.postgresql.org/message-id/flat/5eb11a37-f3ca-5fb7-308f-4485dec25a2e%402ndquadrant.com
2019-10-07 16:47:23 +02:00
Michael Paquier
66bde49d96 Fix inconsistencies and typos in the tree, take 10
This addresses some issues with unnecessary code comments, fixes various
typos in docs and comments, and removes some orphaned structures and
definitions.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/9aabc775-5494-b372-8bcb-4dfc0bd37c68@gmail.com
2019-08-13 13:53:41 +09:00
Michael Paquier
8548ddc61b Fix inconsistencies and typos in the tree, take 9
This addresses more issues with code comments, variable names and
unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com
2019-08-05 12:14:58 +09:00
Peter Eisentraut
91acff7a53 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a710c413ce4c4cd081843e563cde256bb95f490
2019-06-17 15:30:20 +02:00
Tom Lane
7640f93123 Fix assorted header files that failed to compile standalone.
We have a longstanding project convention that all .h files should
be includable with no prerequisites other than postgres.h.  This is
tested/relied-on by cpluspluscheck.  However, cpluspluscheck has not
historically been applied to most headers outside the src/include
tree, with the predictable consequence that some of them don't work.
Fix that, usually by adding missing #include dependencies.

The change in printf_hack.h might require some explanation: without
it, my C++ compiler whines that the function is unused.  There's
not so many call sites that "inline" is going to cost much, and
besides, all the callers are in test code that we really don't care
about the size of.

There's no actual bugs being fixed here, so I see no need to back-patch.

Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
2019-05-31 11:45:33 -04:00
Tom Lane
8255c7a5ee Phase 2 pgindent run for v12.
Switch to 2.1 version of pg_bsd_indent.  This formats
multiline function declarations "correctly", that is with
additional lines of parameter declarations indented to match
where the first line's left parenthesis is.

Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com
2019-05-22 13:04:48 -04:00
Tom Lane
be76af171c Initial pgindent run for v12.
This is still using the 2.0 version of pg_bsd_indent.
I thought it would be good to commit this separately,
so as to document the differences between 2.0 and 2.1 behavior.

Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us
2019-05-22 12:55:34 -04:00
Peter Eisentraut
3c439a58df Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: a20bf6b8a5b4e32450967055eb5b07cee4704edd
2019-05-20 16:00:53 +02:00
Tom Lane
4d5840cea9 Fix problems with auto-held portals.
HoldPinnedPortals() did things in the wrong order: it must not mark
a portal autoHeld until it's been successfully held.  Otherwise,
a failure while persisting the portal results in a server crash
because we think the portal is in a good state when it's not.

Also add a check that portal->status is READY before attempting to
hold a pinned portal.  We have such a check before the only other
use of HoldPortal(), so it seems unwise not to check it here.

Lastly, rethink the responsibility for where to call HoldPinnedPortals.
The comment for it imagined that it was optional for any individual PL
to call it or not, but that cannot be the case: if some outer level of
procedure has a pinned portal, failing to persist it when an inner
procedure commits is going to be trouble.  Let's have SPI do it instead
of the individual PLs.  That's not a complete solution, since in theory
a PL might not be using SPI to perform commit/rollback, but such a PL
is going to have to be aware of lots of related requirements anyway.
(This change doesn't cause an API break for any external PLs that might
be calling HoldPinnedPortals per the old regime, because calling it
twice during a commit or rollback sequence won't hurt.)

Per bug #15703 from Julian Schauder.  Back-patch to v11 where this code
came in.

Discussion: https://postgr.es/m/15703-c12c5bc0ea34ba26@postgresql.org
2019-04-19 11:20:37 -04:00
Peter Eisentraut
fc22b6623b Generated columns
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.

This implements one kind of generated column: stored (computed on
write).  Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
2019-03-30 08:15:57 +01:00
Peter Eisentraut
bc09d5e4cc Remove unnecessary use of PROCEDURAL
Remove some unnecessary, legacy-looking use of the PROCEDURAL keyword
before LANGUAGE.  We mostly don't use this anymore, so some of these
look a bit old.

There is still some use in pg_dump, which is harder to remove because
it's baked into the archive format, so I'm not touching that.

Discussion: https://www.postgresql.org/message-id/2330919b-62d9-29ac-8de3-58c024fdcb96@2ndquadrant.com
2019-02-25 08:38:59 +01:00
Andres Freund
a9c35cf85c Change function call information to be variable length.
Before this change, FunctionCallInfoData (the struct in which arguments
etc. for V1 function calls are stored) always had space for
FUNC_MAX_ARGS (i.e., 100) arguments, storing datums and their nullness in two
arrays.  For nearly every function call 100 arguments is far more than
needed, therefore wasting memory. Arg and argnull being two separate
arrays also guarantees that to access a single argument, two
cachelines have to be touched.

Change the layout so there's a single variable-length array with pairs
of value / isnull. That drastically reduces memory consumption for
most function calls (on x86-64 a two-argument function now uses
64 bytes, previously 936 bytes), and makes it very likely that argument
value and its nullness are on the same cacheline.

Arguments are stored in a new NullableDatum struct, which, due to
padding, needs more memory per argument than before. But as usually
far fewer arguments are stored, and individual arguments are cheaper
to access, that's still a clear win.  It's likely that there's other
places where conversion to NullableDatum arrays would make sense,
e.g. TupleTableSlots, but that's for another commit.

Because the function call information is now variable-length,
allocations have to take the number of arguments into account. For
heap allocations that can be done with SizeForFunctionCallInfoData(),
for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro
that helps to allocate an appropriately sized and aligned variable.

Some places with stack-allocated function call information don't know
the number of arguments at compile time, and currently variably sized
stack allocations aren't allowed in postgres. Therefore allow for
FUNC_MAX_ARGS space in these cases. They're not that common, so for
now that seems acceptable.

Because of the need to allocate FunctionCallInfo of the appropriate
size, older extensions may need to update their code. To avoid subtle
breakages, the FunctionCallInfoData struct has been renamed to
FunctionCallInfoBaseData. Most code only references FunctionCallInfo,
so that shouldn't cause much collateral damage.

This change is also a prerequisite for more efficient expression JIT
compilation (by allocating the function call information on the stack,
allowing LLVM to optimize it away); previously the size of the call
information caused problems inside LLVM's optimizer.

Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
2019-01-26 14:17:52 -08:00
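
A hedged C sketch of the on-stack allocation pattern commit a9c35cf85c above introduces, assuming the LOCAL_FCINFO() macro and the NullableDatum args[] layout the message describes; the wrapper function is illustrative only:

    #include "postgres.h"
    #include "fmgr.h"

    /* Call a two-argument V1 function using stack-allocated call info. */
    static Datum
    call_two_args(FmgrInfo *flinfo, Datum arg0, Datum arg1)
    {
        LOCAL_FCINFO(fcinfo, 2);    /* sized for exactly two arguments */

        InitFunctionCallInfoData(*fcinfo, flinfo, 2, InvalidOid, NULL, NULL);
        fcinfo->args[0].value = arg0;
        fcinfo->args[0].isnull = false;
        fcinfo->args[1].value = arg1;
        fcinfo->args[1].isnull = false;

        /* A heap allocation would instead use palloc(SizeForFunctionCallInfoData(2)). */
        return FunctionCallInvoke(fcinfo);
    }
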
Heikki Linnakangas
95931133a9 Fix misc typos in comments.
Spotted mostly by Fabien Coelho.

Discussion: https://www.postgresql.org/message-id/alpine.DEB.2.21.1901230947050.16643@lancre
2019-01-23 13:39:00 +02:00
Bruce Momjian
97c39498e5 Update copyright for 2019
Backpatch-through: certain files through 9.4
2019-01-02 12:44:25 -05:00
Tom Lane
7767aadd94 Fix omissions in snprintf.c's coverage of standard *printf functions.
A warning on a NetBSD box revealed to me that pg_waldump/compat.c
is using vprintf(), which snprintf.c did not provide coverage for.
This is not good if we want to have uniform *printf behavior, and
it's pretty silly to omit when it's a one-line function.

I also noted that snprintf.c has pg_vsprintf() but for some reason
it was not exposed to the outside world, creating another way in
which code might accidentally invoke the platform *printf family.

Let's just make sure that we replace all eight of the POSIX-standard
printf family.

Also, upgrade plperl.h and plpython.h to make sure that they do
their undefine/redefine rain dance for all eight, not some random
maybe-sufficient subset thereof.
2018-10-08 19:15:55 -04:00
Tom Lane
8b91d25884 Clean up *printf macros to avoid conflict with format archetypes.
We must define the macro "printf" with arguments, else it can mess
up format archetype attributes in builds where PG_PRINTF_ATTRIBUTE
is just "printf".  Fortunately, that's easy to do now that we're
requiring C99; we can use __VA_ARGS__.

On the other hand, it's better not to use __VA_ARGS__ for the rest
of the *printf crew, so that one can take the addresses of those
functions without surprises.

I'd proposed doing this some time ago, but forgot to make it happen;
buildfarm failures subsequent to 96bf88d52 reminded me.

Discussion: https://postgr.es/m/22709.1535135640@sss.pgh.pa.us
Discussion: https://postgr.es/m/20180926190934.ea4xvzhkayuw7gkx@alap3.anarazel.de
2018-09-26 17:35:01 -04:00
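
A condensed sketch of the macro pattern commit 8b91d25884 above describes, not the actual port.h text: "printf" is wrapped with __VA_ARGS__ so it cannot clobber the "printf" format archetype, while the other wrappers stay plain so their addresses can still be taken:

    #include <stdio.h>

    extern int  pg_printf(const char *fmt, ...);
    extern int  pg_fprintf(FILE *stream, const char *fmt, ...);

    /* Must be defined with arguments; a plain "#define printf pg_printf"
     * would also rewrite __attribute__((format(printf, ...))) archetypes. */
    #define printf(...) pg_printf(__VA_ARGS__)

    /* No __VA_ARGS__ here, so taking &fprintf still resolves to pg_fprintf. */
    #define fprintf     pg_fprintf
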
Tom Lane
d6c55de1f9 Implement %m in src/port/snprintf.c, and teach elog.c to rely on that.
I started out with the idea that we needed to detect use of %m format specs
in contexts other than elog/ereport calls, because we couldn't rely on that
working in *printf calls.  But a better answer is to fix things so that it
does work.  Now that we're using snprintf.c all the time, we can implement
%m in that and we've fixed the problem.

This requires also adjusting our various printf-wrapping functions so that
they ensure "errno" is preserved when they call snprintf.c.

Remove elog.c's handmade implementation of %m, and let it rely on
snprintf to support the feature.  That should provide some performance
gain, though I've not attempted to measure it.

There are a lot of places where we could now simplify 'printf("%s",
strerror(errno))' into 'printf("%m")', but I'm not in any big hurry
to make that happen.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 13:31:56 -04:00
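
A small C sketch of the effect described in commit d6c55de1f9 above: with %m implemented in src/port/snprintf.c, both frontend and backend code can embed strerror(errno) directly in the format string. The function below is illustrative:

    #include "postgres.h"

    #include <fcntl.h>
    #include <unistd.h>

    static int
    open_or_error(const char *path)
    {
        int         fd = open(path, O_RDONLY);

        if (fd < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     /* %m expands to strerror(errno) inside snprintf.c */
                     errmsg("could not open file \"%s\": %m", path)));
        return fd;
    }
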
Tom Lane
96bf88d527 Always use our own versions of *printf().
We've spent an awful lot of effort over the years in coping with
platform-specific vagaries of the *printf family of functions.  Let's just
forget all that mess and standardize on always using src/port/snprintf.c.
This gets rid of a lot of configure logic, and it will allow a saner
approach to dealing with %m (though actually changing that is left for
a follow-on patch).

Preliminary performance testing suggests that as it stands, snprintf.c is
faster than the native printf functions for some tasks on some platforms,
and slower for other cases.  A pending patch will improve that, though
cases with floating-point conversions will doubtless remain slower unless
we want to put a *lot* of effort into that.  Still, we've not observed
that *printf is really a performance bottleneck for most workloads, so
I doubt this matters much.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 13:13:57 -04:00
Tom Lane
26e9d4d4ef Convert elog.c's useful_strerror() into a globally-used strerror wrapper.
elog.c has long had a private strerror wrapper that handles assorted
possible failures or deficiencies of the platform's strerror.  On Windows,
it also knows how to translate Winsock error codes, which the native
strerror does not.  Move all this code into src/port/strerror.c and
define strerror() as a macro that invokes it, so that both our frontend
and backend code will have all of this behavior.

I believe this constitutes an actual bug fix on Windows, since AFAICS
our frontend code did not report Winsock error codes properly before this.
However, the main point is to lay the groundwork for implementing %m
in src/port/snprintf.c: the behavior we want %m to have is this one,
not the native strerror's.

Note that this throws away the prior use of src/port/strerror.c,
which was to implement strerror() on platforms lacking it.  That's
been dead code for nigh twenty years now, since strerror() was
already required by C89.

We should likewise cause strerror_r to use this behavior, but
I'll tackle that separately.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 11:06:42 -04:00
Andrew Gierth
60f6756f92 Fix out-of-tree build for transform modules.
Neither plperl nor plpython installed sufficient header files to
permit transform modules to be built out-of-tree using PGXS. Fix that
by installing all plperl and plpython header files (other than those
with special purposes such as generated data tables), and also install
plpython's special .mk file for mangling regression tests.

(This commit does not fix the Windows install, which does not
currently install _any_ plperl or plpython headers.)

Also fix the existing transform modules for hstore and ltree so that
their cross-module #include directives work as anticipated by commit
df163230b9 et seq. This allows them to serve as working examples of
how to reference other modules when doing separate out-of-tree builds.

Discussion: https://postgr.es/m/87o9ej8bgl.fsf%40news-spur.riddles.org.uk
2018-09-16 18:46:45 +01:00
Peter Eisentraut
98afa68d93 Use C99 designated initializers for some structs
These are just a few particularly egregious cases that were hard to read
and write, and error prone because of many similar adjacent types.

Discussion: https://www.postgresql.org/message-id/flat/4c9f01be-9245-2148-b569-61a8562ef190%402ndquadrant.com
2018-09-07 11:40:03 +02:00
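
A minimal sketch of the style change in commit 98afa68d93 above, using a made-up struct rather than one from the tree; the point is that designated initializers name each member instead of relying on position:

    typedef struct DemoRoutine
    {
        void        (*startup) (void);
        void        (*process) (int item);
        void        (*shutdown) (void);
    } DemoRoutine;

    static void demo_startup(void) {}
    static void demo_process(int item) { (void) item; }
    static void demo_shutdown(void) {}

    /* Positional: easy to get wrong when adjacent members have similar types. */
    static const DemoRoutine old_style = {demo_startup, demo_process, demo_shutdown};

    /* C99 designated initializers: each member is named explicitly. */
    static const DemoRoutine new_style = {
        .startup = demo_startup,
        .process = demo_process,
        .shutdown = demo_shutdown,
    };
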
Peter Eisentraut
f5a6509bb1 PL/Python: Remove use of simple slicing API
The simple slicing API (sq_slice, sq_ass_slice) has been deprecated
since Python 2.0 and has been removed altogether in Python 3, so remove
those functions from the PLyResult class.  Instead, the non-slice
mapping functions mp_subscript and mp_ass_subscript can take slice
objects as an index.  Since we just pass the index through to the
underlying list object, we already support that.  Test coverage was
already in place.
2018-09-05 16:32:38 +02:00
Andres Freund
8ecdefc261 Remove test for VA_ARGS, implied by C99.
This simplifies logic / reduces duplication in a few headers.

Author: Andres Freund
Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
2018-08-24 10:41:45 -07:00
Andres Freund
013f320dc3 Mop-up for 3522d0eaba, which missed some alternative output files.
2018-07-22 17:39:02 -07:00
Andres Freund
3522d0eaba Deduplicate "invalid input syntax" messages for various types.
Previously a lot of the error messages referenced the type in the
error message itself. That requires that the message is translated
separately for each type.

Note that currently a few smallint cases continue to reference the
integer, rather than smallint, type. A later patch will create a
separate routine for 16-bit input.

Author: Andres Freund
Discussion: https://postgr.es/m/20180707200158.wpqkd7rjr4jxq5g7@alap3.anarazel.de
2018-07-22 14:58:01 -07:00
Peter Eisentraut
917a68f010 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3a5a71cccad5c68e01008e9e3a4f06930197a05e
2018-05-21 12:29:52 -04:00
Peter Eisentraut
7d8679975f Update expected files for older Python versions
neglected in commit fa03769e4c
2018-05-03 20:29:54 -04:00
Peter Eisentraut
fa03769e4c Tweak tests to support Python 3.7
Python 3.7 removes the trailing comma in the repr() of
BaseException (see <https://bugs.python.org/issue30399>), leading to
test output differences.  Work around that by composing the equivalent
test output in a more manual way.
2018-05-03 13:13:09 -04:00
Tom Lane
bdf46af748 Post-feature-freeze pgindent run.
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
2018-04-26 14:47:16 -04:00
Tom Lane
3e110a373b Fix YA parallel-make hazard, this one in "make check" in plpython.
We have to ensure that submake-generated-headers is finished before
the topmost make run launches any child makes.

Discussion: https://postgr.es/m/20180411235843.GG32449@paquier.xyz
2018-04-12 10:38:53 -04:00
Tom Lane
31f1f0bb4f Put back parallel-safety guards in plpython and src/test/regress/.
I'd hoped that commit 3b8f6e75f was sufficient to ensure parallel safety
even when a build started in a subdirectory requires rebuilding of
generated headers.  This isn't so, because making submake-generated-headers
a prerequisite of "all" isn't enough to ensure it's completed before
starting on "all"'s other prerequisites.  The explicit dependencies we put
on the recursive make targets ensure safe ordering before we recurse into
child directories, but they don't protect targets to be made in the current
directory.  Hence, put back some ordering dependencies in directories that
we've traditionally expected to be starting points for "standalone" builds,
to wit src/pl/plpython and src/test/regress.  (The former needs this in
order to minimize the work involved in building for both python 2 and
python 3; the latter to support packagings that make the regression tests
available for out-of-build-tree execution.)  Adjust some other dependencies
so that these two cases work correctly even at high -j settings.

I'm not terribly happy with this partial solution, but I don't see a
way to do better without massive makefile restructuring, which we surely
aren't doing at this point in the development cycle.  In any case, it's
little if any worse than what we had in prior releases.

Discussion: https://postgr.es/m/1523353963.8169.26.camel@gunduz.org
2018-04-10 16:15:04 -04:00
Tom Lane
3b8f6e75f3 Fix partial-build problems introduced by having more generated headers.
Commit 372728b0d created some problems for usages like building a
subdirectory without having first done "make all" at the top level,
or for proceeding directly to "make install" without "make all".
The only reasonably clean way to fix this seems to be to force the
submake-generated-headers rule to fire in *any* "make all" or "make
install" command anywhere in the tree.  To avoid lots of redundant work,
as well as parallel make jobs possibly clobbering each others' output, we
still need to be sure that the rule fires only once in a recursive build.
For that, adopt the same MAKELEVEL hack previously used for "temp-install".
But try to document it a bit better.

The submake-errcodes mechanism previously used in src/port/ and src/common/
is subsumed by this, so we can get rid of those special cases.  It was
inadequate for src/common/ anyway after the aforesaid commit, and it always
risked parallel attempts to build errcodes.h.

Discussion: https://postgr.es/m/E1f5FAB-0006LU-MB@gemulon.postgresql.org
2018-04-09 16:42:10 -04:00
Tom Lane
cefa387153 Merge catalog/pg_foo_fn.h headers back into pg_foo.h headers.
Traditionally, include/catalog/pg_foo.h contains extern declarations
for functions in backend/catalog/pg_foo.c, in addition to its function
as the authoritative definition of the pg_foo catalog's rowtype.
In some cases, we'd been forced to split out those extern declarations
into separate pg_foo_fn.h headers so that the catalog definitions
could be #include'd by frontend code.  That problem is gone as of
commit 9c0a0de4c, so let's undo the splits to make things less
confusing.

Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
2018-04-08 14:35:29 -04:00
Tom Lane
0b11a674fb Fix a boatload of typos in C comments.
Justin Pryzby

Discussion: https://postgr.es/m/20180331105640.GK28454@telsasoft.com
2018-04-01 15:01:28 -04:00
Peter Eisentraut
056a5a3f63 Allow committing inside cursor loop
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor.  To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks.  Portals now have a new state "auto-held", which
means they have been converted automatically from pinned.  An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.

This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.

Reviewed-by: Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru>
2018-03-28 19:03:26 -04:00
Tom Lane
442accc3fe Allow memory contexts to have both fixed and variable ident strings.
Originally, we treated memory context names as potentially variable in
all cases, and therefore always copied them into the context header.
Commit 9fa6f00b1 rethought this a little bit and invented a distinction
between fixed and variable names, skipping the copy step for the former.
But we can make things both simpler and more useful by instead allowing
there to be two parts to a context's identification, a fixed "name" and
an optional, variable "ident".  The name supplied in the context create
call is now required to be a compile-time-constant string in all cases,
as it is never copied but just pointed to.  The "ident" string, if
wanted, is supplied later.  This is needed because typically we want
the ident to be stored inside the context so that it's cleaned up
automatically on context deletion; that means it has to be copied into
the context before we can set the pointer.

The cost of this approach is basically just an additional pointer field
in struct MemoryContextData, which isn't much overhead, and is bought
back entirely in the AllocSet case by not needing a headerSize field
anymore, since we no longer have to cope with variable header length.
In addition, we can simplify the internal interfaces for memory context
creation still further, saving a few cycles there.  And it's no longer
true that a custom identifier disqualifies a context from participating
in aset.c's freelist scheme, so possibly there's some win on that end.

All the places that were using non-compile-time-constant context names
are adjusted to put the variable info into the "ident" instead.  This
allows more effective identification of those contexts in many cases;
for example, subsidiary contexts of relcache entries are now identified
by both type (e.g. "index info") and relname, where before you got only
one or the other.  Contexts associated with PL function cache entries
are now identified more fully and uniformly, too.

I also arranged for plancache contexts to use the query source string
as their identifier.  This is basically free for CachedPlanSources, as
they contained a copy of that string already.  We pay an extra pstrdup
to do it for CachedPlans.  That could perhaps be avoided, but it would
make things more fragile (since the CachedPlanSource is sometimes
destroyed first).  I suspect future improvements in error reporting will
require CachedPlans to have a copy of that string anyway, so it's not
clear that it's worth moving mountains to avoid it now.

This also changes the APIs for context statistics routines so that the
context-specific routines no longer assume that output goes straight
to stderr, nor do they know all details of the output format.  This
is useful immediately to reduce code duplication, and it also allows
for external code to do something with stats output that's different
from printing to stderr.

The reason for pushing this now rather than waiting for v12 is that
it rethinks some of the API changes made by commit 9fa6f00b1.  Seems
better for extension authors to endure just one round of API changes
not two.

Discussion: https://postgr.es/m/CAB=Je-FdtmFZ9y9REHD7VsSrnCkiBhsA4mdsLKSPauwXtQBeNA@mail.gmail.com
2018-03-27 16:46:51 -04:00
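
A hedged C sketch of the fixed-name/variable-ident split described in commit 442accc3fe above; the wrapper function is hypothetical, and it assumes the MemoryContextSetIdentifier() interface implied by the commit message:

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext
    make_index_info_context(MemoryContext parent, const char *relname)
    {
        MemoryContext cxt;

        /* The name must be a compile-time constant; it is pointed to, not copied. */
        cxt = AllocSetContextCreate(parent,
                                    "index info",
                                    ALLOCSET_SMALL_SIZES);

        /* Copy the variable part into the context itself, then attach it as
         * the ident so it is freed automatically on context deletion. */
        MemoryContextSetIdentifier(cxt, MemoryContextStrdup(cxt, relname));

        return cxt;
    }
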
Peter Eisentraut
33803f67f1 Support INOUT arguments in procedures
In a top-level CALL, the values of INOUT arguments will be returned as a
result row.  In PL/pgSQL, the values are assigned back to the input
arguments.  In other languages, the same convention as for returning a
record from a function is used.  That does not require any code changes
in the PL implementations.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-03-14 12:07:28 -04:00
Peter Eisentraut
09230e54fb Remove some obsolete procedure-specific code from PLs
Since procedures are now declared to return void, code that handled
return values for procedures separately is no longer necessary.
2018-03-05 11:51:15 -05:00
Peter Eisentraut
fd1a421fe6 Add prokind column, replacing proisagg and proiswindow
The new column distinguishes normal functions, procedures, aggregates,
and window functions.  This replaces the existing columns proisagg and
proiswindow, and replaces the convention that procedures are indicated
by prorettype == 0.  Also change prorettype to be VOIDOID for procedures.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-02 13:48:33 -05:00
Tom Lane
49bff412ed Remove some inappropriate #includes.
Other header files should never #include postgres.h (nor postgres_fe.h,
nor c.h), per project policy.  Also, there's no need for any backend .c
file to explicitly include elog.h or palloc.h, because postgres.h pulls
those in already.

Extracted from a larger patch by Kyotaro Horiguchi.  The rest of the
removals he suggests require more study, but these are no-brainers.

Discussion: https://postgr.es/m/20180215.200447.209320006.horiguchi.kyotaro@lab.ntt.co.jp
2018-02-16 12:14:08 -05:00
Tom Lane
e748e902de Fix broken logic for reporting PL/Python function names in errcontext.
plpython_error_callback() reported the name of the function associated
with the topmost PL/Python execution context.  This was not merely
wrong if there were nested PL/Python contexts, but it risked a core
dump if the topmost one is an inline code block rather than a named
function.  That will have proname = NULL, and so we were passing a NULL
pointer to snprintf("%s").  It seems that none of the PL/Python-testing
machines in the buildfarm will dump core for that, but some platforms do,
as reported by Marina Polyakova.

Investigation finds that there actually is an existing regression test
that used to prove that the behavior was wrong, though apparently no one
had noticed that it was printing the wrong function name.  It stopped
showing the problem in 9.6 when we adjusted psql to not print CONTEXT
by default for NOTICE messages.  The problem is masked (if your platform
avoids the core dump) in error cases, because PL/Python will throw away
the originally generated error info in favor of a new traceback produced
at the outer level.

Repair by using ErrorContextCallback.arg to pass the correct context to
the error callback.  Add a regression test illustrating correct behavior.

Back-patch to all supported branches, since they're all broken this way.

Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
2018-02-14 14:47:18 -05:00
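
A generic C sketch of the ErrorContextCallback.arg technique that the fix in commit e748e902de above relies on; the struct and function names here are placeholders, not the PL/Python code:

    #include "postgres.h"

    typedef struct DemoCallState
    {
        const char *proname;        /* NULL for an inline code block */
    } DemoCallState;

    static void
    demo_error_callback(void *arg)
    {
        DemoCallState *state = (DemoCallState *) arg;

        if (state->proname)
            errcontext("PL/Python function \"%s\"", state->proname);
        else
            errcontext("PL/Python anonymous code block");
    }

    static void
    demo_execute(DemoCallState *state)
    {
        ErrorContextCallback ecb;

        /* Pass this call's state explicitly instead of peeking at a global stack. */
        ecb.callback = demo_error_callback;
        ecb.arg = state;
        ecb.previous = error_context_stack;
        error_context_stack = &ecb;

        /* ... run the function or inline block ... */

        error_context_stack = ecb.previous;
    }
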
Tom Lane
4b93f57999 Make plpgsql use its DTYPE_REC code paths for composite-type variables.
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible.  In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session.  And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple.  A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.

Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type.  DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.

To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90.  A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests.  In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.

In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type.  This allows very cheap detection of the need to
refresh tupdesc-dependent data.  This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.

In passing, allow plpgsql functions to accept as well as return type
"record".  There was no good reason for the old restriction, and it
was out of step with most of the other PLs.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
2018-02-13 18:52:21 -05:00
Peter Eisentraut
f498704346 PL/Python: Fix tests for older Python versions
Commit 8561e4840c neglected to handle
older Python versions that don't support the "with" statement.  So write
the tests in a way that older versions can handle as well.
2018-01-22 12:09:52 -05:00
Peter Eisentraut
8561e4840c Transaction control in PL procedures
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language.  Add similar underlying functions to SPI.  Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call.  Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.

- SPI

Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags.  The only option flag right now is
SPI_OPT_NONATOMIC.  A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed.  This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called.  A nonatomic SPI connection uses different
memory management.  A normal SPI connection allocates its memory in
TopTransactionContext.  For nonatomic connections we use PortalContext
instead.  As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.

SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.

- portalmem.c

Some adjustments were made in the code that cleans up portals at
transaction abort.  The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.

In AtAbort_Portals(), remove the code that marks an active portal as
failed.  As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort.  And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception.  So the code in AtAbort_Portals() is never used
anyway.

In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much.  This mirrors similar code in
PreCommit_Portals().

- PL/Perl

Gets new functions spi_commit() and spi_rollback().

- PL/pgSQL

Gets new commands COMMIT and ROLLBACK.

Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.

- PL/Python

Gets new functions plpy.commit and plpy.rollback.

- PL/Tcl

Gets new commands commit and rollback.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2018-01-22 08:43:06 -05:00
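
A hedged C sketch of the nonatomic SPI interface described in commit 8561e4840c above, roughly as a PL handler invoked from CALL or DO might use it; error handling is trimmed and the handler function itself is illustrative:

    #include "postgres.h"
    #include "executor/spi.h"

    static void
    demo_pl_handler(bool nonatomic)
    {
        /* A nonatomic connection is what permits transaction control commands. */
        if (SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0) != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect_ext failed");

        /* ... execute the procedure's statements via SPI ... */

        if (nonatomic)
        {
            SPI_commit();               /* commit the surrounding transaction */
            SPI_start_transaction();    /* and begin the next one */
        }

        SPI_finish();
    }
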
Peter Eisentraut
918e02a221 Improve type conversion of SPI_processed in Python
The previous code converted SPI_processed to a Python float if it didn't
fit into a Python int.  But Python longs have unlimited precision, so
use that instead in all cases.

As in eee50a8d4c, we use the Python
LongLong API unconditionally for simplicity.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
2018-01-20 08:02:01 -05:00
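
A minimal illustration of the conversion described in commit 918e02a221 above, assuming the uint64 SPI_processed counter and the Python LongLong API; this is a sketch, not the plpy code itself:

    #include <Python.h>         /* in PL/Python, Python.h is included first */

    #include "postgres.h"
    #include "executor/spi.h"

    static PyObject *
    rows_processed_to_python(void)
    {
        /* Python ints have unlimited precision, so no float fallback is needed. */
        return PyLong_FromUnsignedLongLong((unsigned long long) SPI_processed);
    }
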
Peter Eisentraut
eee50a8d4c PL/Python: Simplify PLyLong_FromInt64
We don't actually need two code paths, one for 32 bits and one for 64
bits.  Since the existing code already assumed that "long long" is
available, we can just use PyLong_FromLongLong() for 64 bits as well.
In Python 2.5 and later, PyLong_FromLong() and PyLong_FromLongLong() use
the same code, so there will be no difference for 64-bit platforms.  In
Python 2.4, the code is different, but performance testing showed no
noticeable difference in PL/Python, and that Python version is ancient
anyway.

Discussion: https://www.postgresql.org/message-id/0a02203c-e157-55b2-464e-6087066a1849@2ndquadrant.com
2018-01-19 17:22:38 -05:00
Peter Eisentraut
70d6226e4f Use portal pinning in PL/Perl and PL/Python
PL/pgSQL "pins" internally generated portals so that user code cannot
close them by guessing their names.  Add this functionality to PL/Perl
and PL/Python as well, preventing users from manually closing cursors
created by spi_query and plpy.cursor, respectively.  (PL/Tcl does not
currently offer any cursor functionality.)
2018-01-10 17:10:32 -05:00
Bruce Momjian
9d4649ca49 Update copyright for 2018
Backpatch-through: certain files through 9.3
2018-01-02 23:30:12 -05:00
Tom Lane
9fa6f00b13 Rethink MemoryContext creation to improve performance.
This patch makes a number of interrelated changes to reduce the overhead
involved in creating/deleting memory contexts.  The key ideas are:

* Include the AllocSetContext header of an aset.c context in its first
malloc request, rather than allocating it separately in TopMemoryContext.
This means that we now always create an initial or "keeper" block in an
aset, even if it never receives any allocation requests.

* Create freelists in which we can save and recycle recently-destroyed
asets (this idea is due to Robert Haas).

* In the common case where the name of a context is a constant string,
just store a pointer to it in the context header, rather than copying
the string.

The first change eliminates a palloc/pfree cycle per context, and
also avoids bloat in TopMemoryContext, at the price that creating
a context now involves a malloc/free cycle even if the context never
receives any allocations.  That would be a loser for some common
usage patterns, but recycling short-lived contexts via the freelist
eliminates that pain.

Avoiding copying constant strings not only saves strlen() and strcpy()
overhead, but is an essential part of the freelist optimization because
it makes the context header size constant.  Currently we make no
attempt to use the freelist for contexts with non-constant names.
(Perhaps someday we'll need to think harder about that, but in current
usage, most contexts with custom names are long-lived anyway.)

The freelist management in this initial commit is pretty simplistic,
and we might want to refine it later --- but in common workloads that
will never matter because the freelists will never get full anyway.

To create a context with a non-constant name, one is now required to
call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME
option.  AllocSetContextCreate becomes a wrapper macro, and it includes
a test that will complain about non-string-literal context name
parameters on gcc and similar compilers.

An unfortunate side effect of making AllocSetContextCreate a macro is
that one is now *required* to use the size parameter abstraction macros
(ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of
writing out individual size parameters no longer works unless you
switch to AllocSetContextCreateExtended.

Internally to the memory-context-related modules, the context creation
APIs are simplified, removing the rather baroque original design whereby
a context-type module called mcxt.c which then called back into the
context-type module.  That saved a bit of code duplication, but not much,
and it prevented context-type modules from exercising control over the
allocation of context headers.

In passing, I converted the test-and-elog validation of aset size
parameters into Asserts to save a few more cycles.  The original thought
was that callers might compute size parameters on the fly, but in practice
nobody does that, so it's useless to expend cycles on checking those
numbers in production builds.

Also, mark the memory context method-pointer structs "const",
just for cleanliness.

Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us
2017-12-13 13:55:16 -05:00
Peter Eisentraut
4c6744ed70 PL/Python: Fix potential NULL pointer dereference
After d0aa965c0a, one error path in
PLy_spi_execute_fetch_result() could result in the variable "result"
being dereferenced after being set to NULL.  Rearrange the code a bit to
fix that.

Also add another SPI_freetuptable() call so that that is cleared in all
error paths.

discovered by John Naylor <jcnaylor@gmail.com> via scan-build

ideas and review by Tom Lane
2017-12-12 20:52:14 -05:00
Tom Lane
a852cfe967 Fix uninitialized-variable compiler warning induced by commit e4128ee76.
I'm a little bit astonished that anyone's compiler would have failed to
complain about this.  The compiler surely does not know that is_procedure
means the function return value will be ignored.
2017-12-03 11:25:17 -05:00
Peter Eisentraut
e4128ee767 SQL procedures
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar.  This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.

This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL).  There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql.  Support for defining
procedures is available in all the languages supplied by the core
distribution.

While this commit is mainly syntactic sugar around existing functionality,
future features will rely on having procedures as a separate object
type.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-11-30 11:03:20 -05:00
Peter Eisentraut
c7f5c58e1c PL/Python: Fix remaining scan-build warnings
Apparently, scan-build thinks that proc->is_setof can change during
PLy_exec_function().  To make it clearer, save the value in a local
variable.

Also add an assertion to clear another warning.

Reviewed-by: John Naylor <jcnaylor@gmail.com>
2017-11-29 09:56:49 -05:00
Peter Eisentraut
cdddd5d40b Add compiler hints to PLy_elog()
Decorate PLy_elog() in a similar way as elog(), to give compilers and
static analyzers hints in which cases it does not return.

Reviewed-by: John Naylor <jcnaylor@gmail.com>
2017-11-29 09:56:49 -05:00
Peter Eisentraut
62546b4357 Revert "PL/Python: Fix potential NULL pointer dereference"
This reverts commit e42e2f3890.

It's not safe to return in the middle of a PG_TRY block, so this will
have to be done differently.
2017-11-28 13:55:39 -05:00
Peter Eisentraut
e42e2f3890 PL/Python: Fix potential NULL pointer dereference
After d0aa965c0a, one error path in
PLy_spi_execute_fetch_result() could result in the variable "result"
being dereferenced after being set to NULL.  To fix that, just clear the
resources right there and return early.

Also add another SPI_freetuptable() call so that that is cleared in all
error paths.

discovered by John Naylor <jcnaylor@gmail.com> via scan-build
2017-11-28 11:28:05 -05:00
Peter Eisentraut
d0aa965c0a Consistently catch errors from Python _New() functions
Python Py*_New() functions can fail and return NULL in out-of-memory
conditions.  The previous code handled that inconsistently or not at
all.  This change organizes that better.  If we are in a function that
is called from Python, we just check for failure and return NULL
ourselves, which will cause any exception information to be passed up.
If we are called from PostgreSQL, we consistently create an "out of
memory" error.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
2017-11-18 13:39:53 -05:00
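
A short C sketch of the convention described in commit d0aa965c0a above; the helpers are hypothetical, but they show the two cases, returning NULL when called from Python versus raising "out of memory" when called from PostgreSQL:

    #include <Python.h>

    #include "postgres.h"

    /* Called on the way into Python: propagate the Python exception. */
    static PyObject *
    new_list_for_python(void)
    {
        PyObject   *list = PyList_New(0);

        if (list == NULL)
            return NULL;
        return list;
    }

    /* Called on the PostgreSQL side: report a consistent error. */
    static PyObject *
    new_list_for_postgres(void)
    {
        PyObject   *list = PyList_New(0);

        if (list == NULL)
            ereport(ERROR,
                    (errcode(ERRCODE_OUT_OF_MEMORY),
                     errmsg("out of memory")));
        return list;
    }
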
Tom Lane
687f096ea9 Make PL/Python handle domain-type conversions correctly.
Fix PL/Python so that it can handle domains over composite, and so that
it enforces domain constraints correctly in other cases that were not
always done properly before.  Notably, it didn't do arrays of domains
right (oversight in commit c12d570fa), and it failed to enforce domain
constraints when returning a composite type containing a domain field,
and if a transform function is being used for a domain's base type then
it failed to enforce domain constraints on the result.  Also, in many
places it missed checking domain constraints on null values, because
the plpy_typeio code simply wasn't called for Py_None.

Rather than try to band-aid these problems, I made a significant
refactoring of the plpy_typeio logic.  The existing design of recursing
for array and composite members is extended to also treat domains as
containers requiring recursion, and the APIs for the module are cleaned
up and simplified.

The patch also modifies plpy_typeio to rely on the typcache more than
it did before (which was pretty much not at all).  This reduces the
need for repetitive lookups, and lets us get rid of an ad-hoc scheme
for detecting changes in composite types.  I added a couple of small
features to typcache to help with that.

Although some of this is fixing bugs that long predate v11, I don't
think we should risk a back-patch: it's a significant amount of code
churn, and there've been no complaints from the field about the bugs.

Tom Lane, reviewed by Anthony Bykov

Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us
2017-11-16 16:23:04 -05:00
Peter Eisentraut
0008a106d4 Use Py_RETURN_NONE where suitable
This is more idiomatic style and available as of Python 2.4, which is
our minimum.
2017-09-29 16:51:39 -04:00
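
A tiny C example of the idiom adopted in commit 0008a106d4 above (the function is a placeholder): Py_RETURN_NONE increments Py_None's reference count and returns it in one step.

    #include <Python.h>

    static PyObject *
    demo_do_nothing(PyObject *self, PyObject *args)
    {
        /* Replaces the older two-line spelling:
         *     Py_INCREF(Py_None);
         *     return Py_None;
         */
        Py_RETURN_NONE;
    }
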
Andres Freund
2cd7084524 Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n).
This is a mechanical change in preparation for a later commit that
will change the layout of TupleDesc.  Introducing a macro to abstract
the details of where attributes are stored will allow us to change
that in a separate step and revise it in the future.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
2017-08-20 11:19:07 -07:00
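
A small C sketch of the accessor change in commit 2cd7084524 above; the counting function is made up, but it shows TupleDescAttr(tupdesc, n) replacing direct tupdesc->attrs[n] access:

    #include "postgres.h"
    #include "access/tupdesc.h"

    static int
    count_dropped_columns(TupleDesc tupdesc)
    {
        int         dropped = 0;
        int         i;

        for (i = 0; i < tupdesc->natts; i++)
        {
            Form_pg_attribute att = TupleDescAttr(tupdesc, i);

            if (att->attisdropped)
                dropped++;
        }
        return dropped;
    }
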
Peter Eisentraut
f7668b2b35 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a0b5e655d7871506c2b1c7ba562c2de6b6a55de
2017-08-07 13:55:34 -04:00
Peter Eisentraut
26d40ada3f Message style improvements
2017-08-04 18:31:32 -04:00
Alvaro Herrera
6c774caf0e Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: c5a8de3653bb1af6b0eb41cc6bf090c5522df52b
2017-07-10 11:53:55 -04:00
Peter Eisentraut
898d24ae2a PL/Python: Fix hint about returning composite type from Python
('foo') is not a Python tuple: it is a string wrapped in parentheses.  A
valid 1-element Python tuple is ('foo',).

Author: Daniele Varrazzo <daniele.varrazzo@gmail.com>
2017-06-30 16:51:14 -04:00
Tom Lane
382ceffdf7 Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.

By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis.  However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent.  That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.

This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:35:54 -04:00
Tom Lane
c7b8998ebb Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.

Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code.  The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there.  BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs.  So the
net result is that in about half the cases, such comments are placed
one tab stop left of before.  This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:19:25 -04:00
Tom Lane
e3860ffa4d Initial pgindent run with pg_bsd_indent version 2.0.
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
  sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
  well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
  with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
  than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list.  This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.

There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses.  I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 14:39:04 -04:00
Alvaro Herrera
08aa550694 Update expected file
Missed in ea3e310e71.
2017-05-25 14:41:43 -04:00
Alvaro Herrera
ea3e310e71 Fix message case 2017-05-25 13:16:00 -04:00
Bruce Momjian
a6fd7b7a5f Post-PG 10 beta1 pgindent run
perltidy run not included.
2017-05-17 16:31:56 -04:00
Peter Eisentraut
82d24bab75 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 398beeef4921df0956f917becd7b5669d2a8a5c4
2017-05-15 12:19:54 -04:00
Kevin Grittner
5ebeb579b9 Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables.  In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data.  Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql.  That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code.  Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)
2017-04-04 18:36:39 -05:00
Peter Eisentraut
70ec3f1f8f PL/Python: Add cursor and execute methods to plan object
Instead of

    plan = plpy.prepare(...)
    res = plpy.execute(plan, ...)

you can now write

    plan = plpy.prepare(...)
    res = plan.execute(...)

or even

    res = plpy.prepare(...).execute(...)

and similarly for the cursor() method.
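
A sketch of the cursor() form, assuming it follows the same pattern as
execute() above (to be run inside a PL/Python function body):

    cur = plpy.prepare("SELECT i FROM generate_series(1, 100) AS i").cursor()
    batch = cur.fetch(10)    # fetch rows in batches rather than all at once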

This is more in keeping with object-oriented style, and makes the hybrid nature of
the existing execute() function less confusing.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-03-27 11:37:22 -04:00
Peter Eisentraut
facde2a98f Clean up Perl code according to perlcritic
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2017-03-27 08:18:22 -04:00
Peter Eisentraut
f97a028d8e Spelling fixes in code comments
From: Josh Soref <jsoref@gmail.com>
2017-03-14 12:58:39 -04:00
Noah Misch
3a0d473192 Use wrappers of PG_DETOAST_DATUM_PACKED() more.
This makes almost all core code follow the policy introduced in the
previous commit.  Specific decisions:

- Text search support functions with char* and length arguments, such as
  prsstart and lexize, may receive unaligned strings.  I doubt
  maintainers of non-core text search code will notice.

- Use plain VARDATA() on values detoasted or synthesized earlier in the
  same function.  Use VARDATA_ANY() on varlenas sourced outside the
  function, even if they happen to always have four-byte headers.  As an
  exception, retain the universal practice of using VARDATA() on return
  values of SendFunctionCall().

- Retain PG_GETARG_BYTEA_P() in pageinspect.  (Page images are too large
  for a one-byte header, so this misses no optimization.)  Sites that do
  not call get_page_from_raw() typically need the four-byte alignment.

- For now, do not change btree_gist.  Its use of four-byte headers in
  memory is partly entangled with storage of 4-byte headers inside
  GBT_VARKEY, on disk.

- For now, do not change gtrgm_consistent() or gtrgm_distance().  They
  incorporate the varlena header into a cache, and there are multiple
  credible implementation strategies to consider.
2017-03-12 19:35:34 -04:00
Peter Eisentraut
04aad40186 Drop support for Python 2.3
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden.  The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.

In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
2017-02-21 09:49:22 -05:00
Heikki Linnakangas
181bdb90ba Fix typos in comments.
Backpatch to all supported versions, where applicable, to make backpatching
of future fixes go more smoothly.

Josh Soref

Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
2017-02-06 11:33:58 +02:00
Andres Freund
8b07aee8c5 Adapt python regression tests to 69f4b9c85f.
Hopefully this'll unbreak the buildfarm.
2017-01-18 16:11:19 -08:00
Bruce Momjian
1d25779284 Update copyright via script for 2017 2017-01-03 13:48:53 -05:00
Tom Lane
9cda81f005 Be more careful about Python refcounts while creating exception objects.
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception
objects, evidently supposing that PyModule_AddObject would do that --- but
it doesn't.  This left us in a situation where a Python garbage collection
cycle could result in deletion of exception object(s), causing server
crashes or wrong answers if the exception objects are used later in the
session.

In addition, PLy_generate_spi_exceptions didn't bother to test for
a null result from PyErr_NewException, which at best is inconsistent
with the code in PLy_add_exceptions.  And PLy_add_exceptions, while it
did do Py_INCREF on the exceptions it makes, waited to do that till
after some PyModule_AddObject calls, creating a similar risk for
failure if garbage collection happened within those calls.

To fix, refactor to have just one piece of code that creates an
exception object and adds it to the spiexceptions module, bumping the
refcount first.

Also, let's add an additional refcount to represent the pointer we're
going to store in a C global variable or hash table.  This should only
matter if the user does something weird like delete the spiexceptions
Python module, but lack of paranoia has caused us enough problems in
PL/Python already.

The fact that PyModule_AddObject doesn't do a Py_INCREF of its own
explains the need for the Py_INCREF added in commit 4c966d920, so we
can improve the comment about that; also, this means we really want
to do that before, not after, the PyModule_AddObject call.

The missing Py_INCREF in PLy_generate_spi_exceptions was reported and
diagnosed by Rafa de la Torre; the other fixes by me.  Back-patch
to all supported branches.

Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
2016-12-09 15:27:23 -05:00
Tom Lane
1833f1a1c3 Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection.
The idea behind SPI_push was to allow transitioning back into an
"unconnected" state when a SPI-using procedure calls unrelated code that
might or might not invoke SPI.  That sounds good, but in practice the only
thing it does for us is to catch cases where a called SPI-using function
forgets to call SPI_connect --- which is a highly improbable failure mode,
since it would be exposed immediately by direct testing of said function.
As against that, we've had multiple bugs induced by forgetting to call
SPI_push/SPI_pop around code that might invoke SPI-using functions; these
are much harder to catch and indeed have gone undetected for years in some
cases.  And we've had to band-aid around some problems of this ilk by
introducing conditional push/pop pairs in some places, which really kind
of defeats the purpose altogether; if we can't draw bright lines between
connected and unconnected code, what's the point?

Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the
underlying state variable _SPI_curid.  It turns out SPI_restore_connection
can go away too, which is a nice side benefit since it was never more than
a kluge.  Provide no-op macros for the deleted functions so as to avoid an
API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no
longer permit being called when unconnected; they'll throw an error
instead.  The apparent usefulness of the previous behavior was a mirage
as well, because it was depended on by only a few places (which I fixed in
preceding commits), and it posed a risk of allocations being unexpectedly
long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>
2016-11-08 17:39:57 -05:00
Tom Lane
de4026c673 Use heap_modify_tuple not SPI_modifytuple in pl/python triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment.  But really it's better to just use heap_modify_tuple.

While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.  The lack of a check
for system column names is actually a pre-existing bug here, and might
even qualify as a security bug except that we don't have any trusted
version of plpython.
2016-11-08 12:00:24 -05:00
Peter Eisentraut
eaed88ce12 Add function name to PyArg_ParseTuple()
This causes the supplied function name to appear in any error message,
making the error message friendlier and relieving us from having to
provide our own in some cases.
2016-10-27 15:41:29 -04:00
Peter Eisentraut
84d457edaf Format PL/Python module contents test vertically
It makes it readable again and makes merges more manageable.
2016-10-27 15:41:29 -04:00
Heikki Linnakangas
8eb6337f9f Remove platform-dependent PL/python test case.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.

Per buildfarm member prairiedog.
2016-10-26 17:09:18 +03:00
Heikki Linnakangas
cfd9c87a54 Only treat Python Lists as array dimensions.
Instead of treating all python sequence types as array dimensions, except
for tuples and various kinds of strings, only treat Python lists as
dimensions. The PyBytes_Check() function used previously is only available
on Python 2.6 and newer, and it was a bit fiddly anyway. The list of
exceptions would require adjustment if Python got a new kind of sequence
similar to bytes/unicodes/strings, so only checking for Lists seems more
future-proof. The documentation only mentioned using Lists, so this is
closer to what was documented, anyway.

This should fix the buildfarm failures on systems building with Python 2.5,
although I don't have Python 2.5 installed myself to test with.
2016-10-26 14:44:55 +03:00
Heikki Linnakangas
73c8e8506c Avoid using platform-dependent floats in test case.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
2016-10-26 14:17:07 +03:00
Heikki Linnakangas
e131ba4fe5 Fix regression test to also work with Python 2.
Per buildfarm.
2016-10-26 11:18:04 +03:00
Heikki Linnakangas
510e1b8ecf Give a hint, when [] is incorrectly used for a composite type in array.
That used to be accepted, so let's try to give a hint to users on why
their PL/python functions no longer work.

Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:56 +03:00
Heikki Linnakangas
94aceed317 Support multi-dimensional arrays in PL/python.
Multi-dimensional arrays can now be used as arguments to a PL/python function
(used to throw an error), and they can be returned as nested Python lists.

This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".

To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.

This needs to be mentioned in the release notes.

Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.

Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
2016-10-26 10:56:30 +03:00
Andres Freund
ccbb852cd6 Fix further hash table order dependent tests.
Similar to 0137caf273, this makes contrib and pl tests less dependent on
hash-table order.  After this commit, at least some order affecting
changes to execGrouping.c don't result in regression test changes
anymore.
2016-10-12 18:31:45 -07:00
Tom Lane
7107d58ec5 Fix misplacement of submake-generated-headers prerequisites.
The sequence "configure; cd src/pl/plpython; make -j" failed due to
trying to compile plpython's .o files before the generated headers
finished building.  (This is an important real-world case, since it's
the typical second step when building both plpython2 and plpython3.)
This happens because the submake-generated-headers target is not
placed in a way to make it a prerequisite to compiling the .o files.
Fix that.

Checking other uses of submake-generated-headers, I noted that the one
attached to pg_regress was similarly misplaced; but it's actually not
needed at all for pg_regress.o, rather regress.o, so move it to be a
prerequisite of that.

Back-patch to 9.6 where submake-generated-headers was introduced
(by commit 548af97fc).  It's not immediately clear to me why the
previous coding didn't have the same issue; but since we've not
had field reports of plpython make failing, leave it alone in the
older branches.

Pavel Raiskup and Tom Lane

Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
2016-10-01 13:35:13 -04:00
Tom Lane
ea268cdc9a Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters.  While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea.  Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.

While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation
parameters.  Formerly it could only be extended 8K at a time.  That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there.  There seems no good reason
not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain.  The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 17:50:38 -04:00
Tom Lane
b5bce6c1ec Final pgindent + perltidy run for 9.6. 2016-08-15 13:42:51 -04:00
Peter Eisentraut
34927b2920 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cda21c1d7b160b303dc21dfe9d4169f2c8064c60
2016-08-08 11:08:00 -04:00
Peter Eisentraut
7d67606569 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3d71988dffd3c0798a8864c55ca4b7833b48abb1
2016-07-18 12:07:49 -04:00
Peter Eisentraut
3a4a33ad49 PL/Python: Report argument parsing errors using exceptions
Instead of calling PLy_elog() for reporting Python argument parsing
errors, generate appropriate exceptions.  This matches the existing plpy
functions and is more consistent with the behavior of the Python
argument parsing routines.
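
For example, a bad argument now surfaces as a catchable Python exception
rather than a PLy_elog() report (a sketch; the exact exception class is not
spelled out here):

    try:
        plpy.execute(42)                  # not a query string or a plan
    except Exception as exc:
        plpy.notice("caught %s" % type(exc).__name__)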
2016-07-02 22:53:14 -04:00
Tom Lane
548af97fce Provide and use a makefile target to build all generated headers.
As of 9.6, pg_regress doesn't build unless storage/lwlocknames.h has been
created; but there was nothing forcing that to happen if you just went into
src/test/regress/ and built there.  We previously had a similar complaint
about plpython.

To fix in a way that won't break next time we invent a generated header,
make src/backend/Makefile expose a phony target for updating all the
include files it builds, and invoke that before building pg_regress or
plpython.  In principle, maybe we ought to invoke that everywhere; but
it would add a lot of usually-useless make cycles, so let's just do it
in the places where people have complained.

I made a couple of cosmetic adjustments in src/backend/Makefile as well,
to deal with the generated headers in consistent orders.

Michael Paquier and Tom Lane

Report: <31398.1467036827@sss.pgh.pa.us>
Report: <20150916200959.GB32090@msg.df7cb.de>
2016-07-01 15:09:02 -04:00
Peter Eisentraut
47981a4665 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0c374f8d25ed31833a10d24252bc928d41438838
2016-06-20 09:48:08 -04:00
Peter Eisentraut
f0688d6e6c PL/Python: Clean up extended error reporting docs and tests
Format the example and test code more to Python style standards.
Improve whitespace.  Improve documentation formatting.
2016-06-15 10:34:11 -04:00
Peter Eisentraut
020140d84d PL/Python: Rename new keyword arguments of plpy.error() etc.
Rename schema -> schema_name etc. to remain consistent with C API and
PL/pgSQL.
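
So a call that previously used schema=... now looks roughly like this
(a sketch; the values are placeholders):

    plpy.error("insert failed",
               detail="key (id)=(1) already exists",
               schema_name="public",
               table_name="accounts",
               constraint_name="accounts_pkey")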
2016-06-11 19:27:49 -04:00
Robert Haas
4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Peter Eisentraut
8359077124 PL/Python: Move ereport wrapper test cases to separate file
In commit 5c3c3cd0a3, the new tests were
apparently just dumped into the first convenient file.  Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
2016-06-07 09:39:11 -04:00
Peter Eisentraut
48aaba4acf Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
2016-05-09 10:04:41 -04:00
Tom Lane
1d2f9de38d Fix freshly-introduced PL/Python portability bug.
It turns out that those PyErr_Clear() calls I removed from plpy_elog.c
in 7e3bb08038 et al were not quite as random as they appeared: they
mask a Python 2.3.x bug.  (Specifically, it turns out that PyType_Ready()
can fail if the error indicator is set on entry, and PLy_traceback's fetch
of frame.f_code may be the first operation in a session that requires the
"frame" type to be readied.  Ick.)  Put back the clear call, but in a more
centralized place closer to what it's protecting, and this time with a
comment warning what it's really for.

Per buildfarm member prairiedog.  Although prairiedog was only failing
on HEAD, it seems clearly possible for this to occur in older branches
as well, so back-patch to 9.2 the same as the previous patch.
2016-04-11 18:17:20 -04:00
Peter Eisentraut
d8ed83cd7f Fix whitespace 2016-04-11 14:44:51 -04:00
Tom Lane
81ba9348d8 Fix missing "volatile" in PLy_output().
Commit 5c3c3cd0a3 plastered "volatile" on a bunch of variables
in PLy_output(), but removed the one that actually mattered, ie the
one on "oldcontext".  This allows some versions of clang to generate
code in which "oldcontext" has been trashed when control reaches the
PG_CATCH block.  Per buildfarm member tick.
2016-04-11 11:49:54 -04:00
Tom Lane
f73b2bbbdc Fix poorly thought-through code from commit 5c3c3cd0a3.
It's not entirely clear to me whether PyString_AsString can return
null (looks like the answer might vary between Python 2 and 3).
But in any case, this code's attempt to cope with the possibility
was quite broken, because pstrdup() neither allows a null argument
nor ever returns a null.

Moreover, the code below this point assumes that "message" is a
palloc'd string, which would not be the case for a dgettext result.

Fix both problems by doing the pstrdup step separately.
2016-04-11 00:28:44 -04:00
Tom Lane
7e3bb08038 Fix access-to-already-freed-memory issue in plpython's error handling.
PLy_elog() could attempt to access strings that Python had already freed,
because the strings that PLy_get_spi_error_data() returns are simply
pointers into storage associated with the error "val" PyObject.  That's
fine at the instant PLy_get_spi_error_data() returns them, but just after
that PLy_traceback() intentionally releases the only refcount on that
object, allowing it to be freed --- so that the strings we pass to
ereport() are dangling pointers.

In principle this could result in garbage output or a coredump.  In
practice, I think the risk is pretty low, because there are no Python
operations between where we decrement that refcount and where we use the
strings (and copy them into PG storage), and thus no reason for Python
to recycle the storage.  Still, it's clearly hazardous, and it leads to
Valgrind complaints when running under a Valgrind that hasn't been
lobotomized to ignore Python memory allocations.

The code was a mess anyway: we fetched the error data out of Python
(clearing Python's error indicator) with PyErr_Fetch, examined it, pushed
it back into Python with PyErr_Restore (re-setting the error indicator),
then immediately pulled it back out with another PyErr_Fetch.  Just to
confuse matters even more, there were some gratuitous-and-yet-hazardous
PyErr_Clear calls in the "examine" step, and we didn't get around to doing
PyErr_NormalizeException until after the second PyErr_Fetch, making it even
less clear which object was being manipulated where and whether we still
had a refcount on it.  (If PyErr_NormalizeException did substitute a
different "val" object, it's possible that the problem could manifest for
real, because then we'd be doing assorted Python stuff with no refcount
on the object we have string pointers into.)

So, rearrange all that into some semblance of sanity, and don't decrement
the refcount on the Python error objects until the end of PLy_elog().
In HEAD, I failed to resist the temptation to reformat some messy bits
from 5c3c3cd0a3 along the way.

Back-patch as far as 9.2, because the code is substantially the same
that far back.  I believe that 9.1 has the bug as well; but the code
around it is rather different and I don't want to take a chance on
breaking something for what seems a low-probability problem.
2016-04-10 23:16:10 -04:00
Tom Lane
c7a141a986 Fix PL/Python ereport() test to work on Python 2.3.
Per buildfarm.

Pavel Stehule
2016-04-09 16:44:54 -04:00
Teodor Sigaev
5c3c3cd0a3 Enhanced custom error in PLPythonu
This patch adds a new, richer way to emit error messages or exceptions from
PL/Pythonu code.
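
A minimal sketch of the richer form (keyword names and the exception form as
assumed here; run inside a PL/Python function body):

    raise plpy.Error("negative balance not allowed",
                     detail="balance would become negative",
                     hint="check the amount before calling",
                     sqlstate="22023")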

Author: Pavel Stehule
Reviewers: Catalin Iacob, Peter Eisentraut, Jim Nasby
2016-04-08 18:33:06 +03:00
Tom Lane
1d2fe56e42 Fix PL/Python for recursion and interleaved set-returning functions.
PL/Python failed if a PL/Python function was invoked recursively via SPI,
since arguments are passed to the function in its global dictionary
(a horrible decision that's far too ancient to undo) and it would delete
those dictionary entries on function exit, leaving the outer recursion
level(s) without any arguments.  Not deleting them would be little better,
since the outer levels would then see the innermost level's arguments.

Since PL/Python uses ValuePerCall mode for evaluating set-returning
functions, it's possible for multiple executions of the same SRF to be
interleaved within a query.  PL/Python failed in such a case, because
it stored only one iterator per function, directly in the function's
PLyProcedure struct.  Moreover, one interleaved instance of the SRF
would see argument values that should belong to another.

Hence, invent code for saving and restoring the argument entries.  To fix
the recursion case, we only need to save at recursive entry and restore
at recursive exit, so the overhead in non-recursive cases is negligible.
To fix the SRF case, we have to save when suspending a SRF and restore
when resuming it, which is potentially not negligible; but fortunately
this is mostly a matter of manipulating Python object refcounts and
should not involve much physical data copying.

Also, store the Python iterator and saved argument values in a structure
associated with the SRF call site rather than the function itself.  This
requires adding a memory context deletion callback to ensure that the SRF
state is cleaned up if the calling query exits before running the SRF to
completion.  Without that we'd leak a refcount to the iterator object in
such a case, resulting in session-lifespan memory leakage.  (In the
pre-existing code, there was no memory leak because there was only one
iterator pointer, but what would happen is that the previous iterator
would be resumed by the next query attempting to use the SRF.  Hardly the
semantics we want.)

We can buy back some of whatever overhead we've added by getting rid of
PLy_function_delete_args(), which seems a useless activity: there is no
need to delete argument entries from the global dictionary on exit,
since the next time anyone would see the global dict is on the next
fresh call of the PL/Python function, at which time we'd overwrite those
entries with new arg values anyway.

Also clean up some really ugly coding in the SRF implementation, including
such gems as returning directly out of a PG_TRY block.  (The only reason
that failed to crash hard was that all existing call sites immediately
exited their own PG_TRY blocks, popping the dangling longjmp pointer before
there was any chance of it being used.)

In principle this is a bug fix; but it seems a bit too invasive relative to
its value for a back-patch, and besides the fix depends on memory context
callbacks so it could not go back further than 9.5 anyway.

Alexey Grishchenko and Tom Lane
2016-04-05 14:51:19 -04:00
Tom Lane
23a27b039d Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout.  Some of these values were declared uint32 before, and
others "long".

I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.

The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command.  Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.

Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long".  It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.

Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
2016-03-12 16:05:29 -05:00
Tom Lane
66f503868b Make plpython cope with funny characters in function names.
A function name that's double-quoted in SQL can contain almost any
characters, but we were using that name directly as part of the name
generated for the Python-level function, and Python doesn't like
anything that isn't pretty much a standard identifier.  To fix,
replace anything that isn't an ASCII letter or digit with an underscore
in the generated name.  This doesn't create any risk of duplicate Python
function names because we were already appending the function OID to
the generated name to ensure uniqueness.  Per bug #13960 from Jim Nasby.
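
The substitution rule amounts to something like this (a sketch only; the
"__plpython_procedure" prefix and exact format are assumptions, not the
literal implementation):

    import re

    def python_level_name(sql_name, fn_oid):
        # anything that isn't an ASCII letter or digit becomes "_";
        # the appended OID keeps generated names unique
        safe = re.sub(r'[^0-9A-Za-z]', '_', sql_name)
        return '__plpython_procedure_%s_%d' % (safe, fn_oid)

    print(python_level_name('weird "name"!', 16384))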

Patch by Jim Nasby, modified a bit by me.  Back-patch to all
supported branches.
2016-02-16 21:08:15 -05:00
Tom Lane
796d1e889f Remove no-longer-needed old-style check for incompatible plpythons.
Commit 866566a690 introduced a new mechanism for incompatible
plpythons to detect each other.  I left the old mechanism in place,
because it seems possible that a plpython predating that commit might be
used with one postdating it.  (This would require updating plpython3 but
not plpython2 or vice versa, but that seems well within the realm of
possibility.)  However, surely it will not be able to happen in 9.6 or
later, so we can delete the old mechanism in HEAD.
2016-01-11 20:13:31 -05:00
Tom Lane
866566a690 Avoid dump/reload problems when using both plpython2 and plpython3.
Commit 803716013d installed a safeguard against loading plpython2
and plpython3 at the same time, but asserted that both could still be
used in the same database, just not in the same session.  However, that's
not actually all that practical because dumping and reloading will fail
(since both libraries necessarily get loaded into the restoring session).
pg_upgrade is even worse, because it checks for missing libraries by
loading every .so library mentioned in the entire installation into one
session, so that you can have only one across the whole cluster.

We can improve matters by not throwing the error immediately in _PG_init,
but only when and if we're asked to do something that requires calling
into libpython.  This ameliorates both of the above situations, since
while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in
loading plpython, it isn't asked to do anything interesting (at least
not if check_function_bodies is off, as it will be during a restore).

It's possible that this opens some corner-case holes in which a crash
could be provoked with sufficient effort.  However, since plpython
only exists as an untrusted language, any such crash would require
superuser privileges, making it "don't do that" not a security issue.
To reduce the hazards in this area, the error is still FATAL when it
does get thrown.

Per a report from Paul Jones.  Back-patch to 9.2, which is as far back
as the patch applies without work.  (It could be made to work in 9.1,
but given the lack of previous complaints, I'm disinclined to expend
effort so far back.  We've been pretty desultory about support for
Python 3 in 9.1 anyway.)
2016-01-11 19:55:39 -05:00
Bruce Momjian
ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
Tom Lane
8c75ad436f Fix memory leaks in PL/Python.
Previously, plpython was in the habit of allocating a lot of stuff in
TopMemoryContext, and it was very slipshod about making sure that stuff
got cleaned up; in particular, use of TopMemoryContext as fn_mcxt for
function calls represents an unfixable leak, since we generally don't
know what the called function might have allocated in fn_mcxt.  This
results in session-lifespan leakage in certain usage scenarios, as for
example in a case reported by Ed Behn back in July.

To fix, get rid of all the retail allocations in TopMemoryContext.
All long-lived allocations are now made in sub-contexts that are
associated with specific objects (either pl/python procedures, or
Python-visible objects such as cursors and plans).  We can clean these
up when the associated object is deleted.

I went so far as to get rid of PLy_malloc completely.  There were a
couple of places where it could still have been used safely, but on
the whole it was just an invitation to bad coding.

Haribabu Kommi, based on a draft patch by Heikki Linnakangas;
some further work by me
2015-11-05 13:52:40 -05:00
Andres Freund
86b1e6784b Fix hstore_plpython test when python3 is used.
Due to b67aaf21e8 / CREATE EXTENSION ... CASCADE the test output
contains the extension name in yet another place. Since that's variable
depending on the python version...

Add yet another name mangling stanza to regress-python3-mangle.mk.

Author: Petr Jelinek
2015-10-04 22:29:03 +02:00
Tom Lane
0426f349ef Rearrange the handling of error context reports.
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands.  That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function.  What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.

To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages.  Printing CONTEXT for errors only
is now their default behavior.

The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread.  I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.

In passing, fix up (again) the output line counts in psql's various
help displays.  Add some commentary about how to verify them.

Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others
2015-09-05 11:58:33 -04:00
Tom Lane
f469f634ad Fix plpython crash when returning string representation of a RECORD result.
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though
for a composite result type the other union variant proc->result.out.r is
the one that should be valid.  This could result in a crash if out.r had
in fact been filled in (proc->result.is_rowtype == 1) and then somebody
later attempted to use that data; as per bug #13579 from Paweł Michalak.

Just to add insult to injury, it didn't work for RECORD results anyway,
because record_in() would refuse the case.

Fix by doing the I/O function lookup in a local PLyTypeInfo variable,
as we were doing already in PLyObject_ToComposite().  This is not a great
technique because any fn_extra data allocated by the input function will
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).
But that's a pre-existing issue that is much less serious than a crash,
so leave it to be fixed separately.

This bug would be a potential security issue, except that plpython is
only available to superusers and the crash requires coding the function
in a way that didn't work before today's patches.

Add regression test cases covering all the supported methods of converting
composite results.
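
Among the conversions exercised there are, roughly, these return forms
(sketched as plain Python function bodies; the SQL wrappers and column
definitions are omitted):

    def as_sequence():
        return (1, 'foo')              # sequence, matched by column position

    def as_mapping():
        return {'a': 1, 'b': 'foo'}    # mapping keyed by column name

    def as_string():
        return '(1,foo)'               # string, parsed by the type's input function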

Back-patch to 9.1 where the faulty coding was introduced.
2015-08-21 12:21:37 -04:00
Peter Eisentraut
f16d52269a PL/Python: Make tests pass with Python 3.5
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file.  In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python.  So use a generic
exception instead, which has a message that doesn't vary across
versions.
2015-08-13 23:55:20 -04:00
Tom Lane
09cecdf285 Fix a number of places that produced XX000 errors in the regression tests.
It's against project policy to use elog() for user-facing errors, or to
omit an errcode() selection for errors that aren't supposed to be "can't
happen" cases.  Fix all the violations of this policy that result in
ERRCODE_INTERNAL_ERROR log entries during the standard regression tests,
as errors that can reliably be triggered from SQL surely should be
considered user-facing.

I also looked through all the files touched by this commit and fixed
other nearby problems of the same ilk.  I do not claim to have fixed
all violations of the policy, just the ones in these files.

In a few places I also changed existing ERRCODE choices that didn't
seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR
by something more specific.

Back-patch to 9.5, but no further; changing ERRCODE assignments in
stable branches doesn't seem like a good idea.
2015-08-02 23:49:19 -04:00
Peter Eisentraut
c5e5d444de Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: fb7e72f46cfafa1b5bfe4564d9686d63a1e6383f
2015-06-28 23:56:55 -04:00
Bruce Momjian
807b9e0dff pgindent run for 9.5 2015-05-23 21:35:49 -04:00
Heikki Linnakangas
4fc72cc7bb Collection of typo fixes.
Use "a" and "an" correctly, mostly in comments. Two error messages were
also fixed (they were just elogs, so no translation work required). Two
function comments in pg_proc.h were also fixed. Etsuro Fujita reported one
of these, but I found a lot more with grep.

Also fix a few other typos spotted while grepping for the a/an typos.
For example, "consists out of ..." -> "consists of ...". Plus a "though"/
"through" mixup reported by Euler Taveira.

Many of these typos were in old code, which would be nice to backpatch to
make future backpatching easier. But much of the code was new, and I didn't
feel like crafting separate patches for each branch. So no backpatching.
2015-05-20 16:56:22 +03:00
Tom Lane
0c071936e9 Revert error-throwing wrappers for the printf family of functions.
This reverts commit 16304a0134, except
for its changes in src/port/snprintf.c; as well as commit
cac18a76bb which is no longer needed.

Fujii Masao reported that the previous commit caused failures in psql on
OS X, since if one exits the pager program early while viewing a query
result, psql sees an EPIPE error from fprintf --- and the wrapper function
thought that was reason to panic.  (It's a bit surprising that the same
does not happen on Linux.)  Further discussion among the security list
concluded that the risk of other such failures was far too great, and
that the one-size-fits-all approach to error handling embodied in the
previous patch is unlikely to be workable.

This leaves us again exposed to the possibility of the type of failure
envisioned in CVE-2015-3166.  However, that failure mode is strictly
hypothetical at this point: there is no concrete reason to believe that
an attacker could trigger information disclosure through the supposed
mechanism.  In the first place, the attack surface is fairly limited,
since so much of what the backend does with format strings goes through
stringinfo.c or psprintf(), and those already had adequate defenses.
In the second place, even granting that an unprivileged attacker could
control the occurrence of ENOMEM with some precision, it's a stretch to
believe that he could induce it just where the target buffer contains some
valuable information.  So we concluded that the risk of non-hypothetical
problems induced by the patch greatly outweighs the security risks.
We will therefore revert, and instead undertake closer analysis to
identify specific calls that may need hardening, rather than attempt a
universal solution.

We have kept the portion of the previous patch that improved snprintf.c's
handling of errors when it calls the platform's sprintf().  That seems to
be an unalloyed improvement.

Security: CVE-2015-3166
2015-05-19 18:19:38 -04:00
Noah Misch
16304a0134 Add error-throwing wrappers for the printf family of functions.
All known standard library implementations of these functions can fail
with ENOMEM.  A caller neglecting to check for failure would experience
missing output, information exposure, or a crash.  Check return values
within wrappers and code, currently just snprintf.c, that bypasses the
wrappers.  The wrappers do not return after an error, so their callers
need not check.  Back-patch to 9.0 (all supported versions).

Popular free software standard library implementations do take pains to
bypass malloc() in simple cases, but they risk ENOMEM for floating point
numbers, positional arguments, large field widths, and large precisions.
No specification demands such caution, so this commit regards every call
to a printf family function as a potential threat.

Injecting the wrappers implicitly is a compromise between patch scope
and design goals.  I would prefer to edit each call site to name a
wrapper explicitly.  libpq and the ECPG libraries would, ideally, convey
errors to the caller rather than abort().  All that would be painfully
invasive for a back-patched security fix, hence this compromise.

Security: CVE-2015-3166
2015-05-18 10:02:31 -04:00
Peter Eisentraut
dcf5e31908 PL/Python: Remove procedure cache invalidation
This was added to react to changes in the pg_transform catalog, but
building with CLOBBER_CACHE_ALWAYS showed that PL/Python was not
prepared for having its procedure cache cleared.  Since this is a
marginal use case, and we don't do this for other catalogs anyway, we
can postpone this to another day.
2015-05-12 22:52:18 -04:00
Andrew Dunstan
b6b2149e48 Fix python_includespec on Windows at configure time
By converting to using forward slashes at configure time we avoid
having to repeat the logic anywhere that this is needed, such as
in transforms modules for plpython.
2015-05-03 08:17:04 -04:00
Peter Eisentraut
67df9782e9 Windows also needs an override of the shared libpython detection 2015-05-02 13:23:16 -04:00
Peter Eisentraut
d664a10f96 Move interpreter shared library detection to configure
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of
libperl, libpython, and libtcl, respectively.  Previously, this was
checked in the makefiles, skipping the PL build with a warning if no
shared library was available.  Now this is checked in configure, with an
error if no shared library is available.

The previous situation arose because in the olden days, the configure
options --with-perl, --with-python, and --with-tcl controlled whether
frontend interfaces for those languages would be built.  The procedural
languages were added later, and shared libraries were often not
available in the beginning.  So it was decided to skip the builds of the
procedural languages in those cases.  The frontend interfaces have since
been removed from the tree, and shared libraries are now available most
of the time, so that setup makes much less sense now.

Also, the new setup allows contrib modules and pgxs users to rely on the
respective PLs being available based on configure flags.
2015-05-01 21:38:21 -04:00
Peter Eisentraut
dbf2ec1a1c Fix parallel make risk with new check temp-install setup
The "check" target no longer needs to depend on "all", because it now
runs "install" directly, which in turn depends on "all".  Doing both
will cause problems with parallel make, because two builds will run next
to each other.

Also remove the redirection of the temp-install output into a log file.
This was appropriate when this was done from within pg_regress, but now
it's just a regular make run, and especially with the above changes this
will now take the place of running the "all" target before the test
suites.

problem report by Jeff Janes, patch in part by Michael Paquier
2015-04-29 20:34:22 -04:00
Peter Eisentraut
cac7658205 Add transforms feature
This provides a mechanism for specifying conversions between SQL data
types and procedural languages.  As examples, there are transforms
for hstore and ltree for PL/Perl and PL/Python.

reviews by Pavel Stěhule and Andres Freund
2015-04-26 10:33:14 -04:00
Peter Eisentraut
dcae5facca Improve speed of make check-world
Before, make check-world would create a new temporary installation for
each test suite, which is slow and wasteful.  Instead, we now create one
test installation that is used by all test suites that are part of a
make run.

The management of the temporary installation is removed from pg_regress
and handled in the makefiles.  This allows for better control, and
unifies the code with that of test suites not run through pg_regress.

review and msvc support by Michael Paquier <michael.paquier@gmail.com>

more review by Fabien Coelho <coelho@cri.ensmp.fr>
2015-04-23 08:59:52 -04:00
Tom Lane
785941cdc3 Tweak __attribute__-wrapping macros for better pgindent results.
This improves on commit bbfd7edae5 by
making two simple changes:

* pg_attribute_noreturn now takes parentheses, ie pg_attribute_noreturn().
Likewise pg_attribute_unused(), pg_attribute_packed().  This reduces
pgindent's tendency to misformat declarations involving them.

* attributes are now always attached to function declarations, not
definitions.  Previously some places were taking creative shortcuts,
which were not merely candidates for bad misformatting by pgindent
but often were outright wrong anyway.  (It does little good to put a
noreturn annotation where callers can't see it.)  In any case, if
we would like to believe that these macros can be used with non-gcc
compilers, we should avoid gratuitous variance in usage patterns.

I also went through and manually improved the formatting of a lot of
declarations, and got rid of excessively repetitive (and now obsolete
anyway) comments informing the reader what pg_attribute_printf is for.
2015-03-26 14:03:25 -04:00
Peter Eisentraut
ff2faeec5c PL/Python: Fix regression tests for Python 3 2015-03-11 18:30:56 -04:00