Until now shadowed_variables was the only plpgsql check supported by
plpgsql.extra_warnings and plpgsql.extra_errors. This patch introduces
two new checks - strict_multi_assignment and too_many_rows. Unlike
shadowed_variables, these new checks are enforced at run-time.
strict_multi_assignment checks that commands allowing multi-assignment
(for example SELECT INTO) have the same number of sources and targets.
too_many_rows checks that queries with an INTO clause return one row
exactly.
These checks are aimed at cases that are technically valid and allowed,
but are often a sign of a bug. Therefore those checks are expected to
be enabled primarily in development and testing environments.
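For illustration, a minimal sketch of how the new checks fire (the DO block
and its variables are made up for this example):

    SET plpgsql.extra_warnings TO 'strict_multi_assignment,too_many_rows';

    DO $$
    DECLARE
      x int;
      y int;
    BEGIN
      SELECT 1 INTO x, y;                            -- strict_multi_assignment:
                                                     -- 1 source, 2 targets
      SELECT g INTO x FROM generate_series(1, 2) g;  -- too_many_rows: 2 rows
    END;
    $$;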
Author: Pavel Stehule
Reviewed-by: Stephen Frost, Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRA2kKRDKpUNwLY0GeG1OqOp+tLS2yQA1V41gzuSz-hCng@mail.gmail.com
Many of the objects we create during the regression tests are put in the
public schema, so that using the same names in different regression tests
creates a hazard of test failures if any two such scripts run concurrently.
This patch cleans up a bunch of latent hazards of that sort, as well as two
live hazards.
The current situation in this regard is far worse than it was a year or two
back, because practically all of the partitioning-related test cases have
reused table names with enthusiasm. I despaired of cleaning up that mess
within the five most-affected tests (create_table, alter_table, insert,
update, inherit); fortunately those don't run concurrently.
Other than partitioning problems, most of the issues boil down to using
names like "foo", "bar", "tmp", etc, without thought for the fact that
other test scripts might use similar names concurrently. I've made an
effort to make all such names more specific.
One of the live hazards was that commit 7421f4b8 caused with.sql to
create a table named "test", conflicting with a similarly-named table
in alter_table.sql; this was exposed in the buildfarm recently.
The other one was that join.sql and transactions.sql both create tables
named "foo" and "bar"; but join.sql's uses of those names date back
only to December or so.
Since commit 7421f4b8 was back-patched to v10, back-patch a minimal
fix for that problem. The rest of this is just future-proofing.
Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node. That's because commit
18ce3a4ab allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times. Repair.
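For reference, a sketch of the kind of trigger that hits this (all names
here are illustrative): a statement-level trigger whose function reads its
transition table from more than one executor node, e.g. via a self-join:

    CREATE TABLE t (a int);

    CREATE FUNCTION t_audit() RETURNS trigger AS $$
    BEGIN
      -- the self-join scans the transition table with two executor nodes
      RAISE NOTICE 'pairs: %',
        (SELECT count(*) FROM new_rows n1 CROSS JOIN new_rows n2);
      RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t_audit_trg AFTER INSERT ON t
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE t_audit();

    INSERT INTO t VALUES (1), (2);   -- should report pairs: 4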
Thomas Munro
Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no
doc updates here.
This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.
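A quick sketch of what now works (assuming the features in question are
declaration-time options such as NOT NULL and an initializer; the type name
is made up):

    CREATE TYPE two_ints AS (a int, b int);

    DO $$
    DECLARE
      v two_ints NOT NULL := ROW(1, 2);   -- initializer and NOT NULL honored
    BEGIN
      v := NULL;   -- expected to be rejected, per the NOT NULL constraint
    END;
    $$;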
I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.
Tom Lane, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible. In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session. And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value being checked to be a
tuple. A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.
Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type. DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.
To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90. A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests. In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure, not plpgsql proper, so that it might be reusable
for other purposes.
In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type. This allows very cheap detection of the need to
refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.
In passing, allow plpgsql functions to accept as well as return type
"record". There was no good reason for the old restriction, and it
was out of step with most of the other PLs.
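For instance, a function like this hypothetical one is now accepted:

    CREATE FUNCTION record_to_text(r record) RETURNS text AS $$
    BEGIN
      RETURN r::text;   -- "record" now works as a parameter type too
    END;
    $$ LANGUAGE plpgsql;

    SELECT record_to_text(ROW(1, 'two'));   -- (1,two)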
Tom Lane, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements. Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.
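For reference, the previously-unexercised BY and REVERSE features look
like this:

    DO $$
    BEGIN
      FOR i IN REVERSE 10..1 BY 3 LOOP
        RAISE NOTICE '%', i;   -- 10, 7, 4, 1
      END LOOP;
    END;
    $$;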
Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql. So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter. I then
added new test cases that exercise the bits of code complained of above.
Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests. That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation. So testing each usage of
that implementation separately seems redundant.
In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.
Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome. The other PLs have their own test suites split up into
smaller files by topic. It would be nice to have that for plpgsql as
well. So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there. That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.
MSVC build system changes with help from Michael Paquier
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent. But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely. We can revert this if we do something
in the backend to eliminate the inconsistency.
Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is). But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list. This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.
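A minimal illustration of the behavior the bug broke (sketch):

    DO $$
    BEGIN
      FOR i IN 1..2 LOOP
        DECLARE
          r record;
        BEGIN
          IF i = 1 THEN
            SELECT 42 AS x INTO r;
          ELSE
            -- r should read as NULL here; the bug let the first
            -- iteration's value survive into the second pass
            RAISE NOTICE 'r = %', r;
          END IF;
        END;
      END LOOP;
    END;
    $$;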
Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well. (We don't need to consider other DTYPEs because they don't
represent separately-stored values.) I failed to resist the temptation
to make some nearby cosmetic adjustments, too.
No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior. In any case this change would have no
impact on correctly-written code.
Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar. This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.
This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL). There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql. Support for defining
procedures is available in all the languages supplied by the core
distribution.
While this commit is mainly syntactic sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
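A minimal example of the new syntax (the target table is hypothetical):

    CREATE PROCEDURE insert_pair(a int, b int)
    LANGUAGE plpgsql
    AS $$
    BEGIN
      INSERT INTO tbl VALUES (a), (b);
    END;
    $$;

    CALL insert_pair(1, 2);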
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
setTargetTable threw an error if the proposed target RangeVar's relname
matched any visible CTE or ENR. This breaks backwards compatibility in
the CTE case, since pre-v10 we never looked for a CTE here at all, so that
CTE names did not mask regular tables. It does seem like a good idea to
throw an error for the ENR case, though, thus causing ENRs to mask tables
for this purpose; ENRs are new in v10 so we're not breaking existing code,
and we may someday want to allow them to be the targets of DML.
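For example (names made up), this once again inserts into the real table
rather than erroring out because of the like-named CTE:

    CREATE TABLE t (x int);

    WITH t AS (SELECT 1 AS x)
    INSERT INTO t VALUES (2);   -- targets the table t, as before v10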
To fix that, replace use of getRTEForSpecialRelationTypes, which was
overkill anyway, with use of scanNameSpaceForENR.
A second problem was that the check neglected to verify null schemaname,
so that a CTE or ENR could incorrectly be thought to match a qualified
RangeVar. That happened because getRTEForSpecialRelationTypes relied
on its caller to have checked for null schemaname. Even though the one
remaining caller got it right, this is obviously bug-prone, so move
the check inside getRTEForSpecialRelationTypes.
Also, revert commit 18ce3a4ab's extremely poorly thought out decision to
add a NULL return case to parserOpenTable --- without either documenting
that or adjusting any of the callers to check for it. The current bug
seems to have arisen in part due to working around that bad idea.
In passing, remove the one-line shim functions transformCTEReference and
transformENRReference --- they don't seem to be adding any clarity or
functionality.
Per report from Hugo Mercier (via Julien Rouhaud). Back-patch to v10
where the bug was introduced.
Thomas Munro, with minor editing by me
Discussion: https://postgr.es/m/CAOBaU_YdPVH+PTtiKSSLOiiW3mVDYsnNUekK+XPbHXiP=wrFLA@mail.gmail.com
We now disallow having triggers with both transition tables and ON
INSERT OR UPDATE (which was a PG extension to the spec anyway),
because in this case it's not at all clear how the transition tables
should work for an INSERT ... ON CONFLICT query. Separate ON INSERT
and ON UPDATE triggers with transition tables are allowed, and the
transition tables for these reflect only the inserted and only the
updated tuples respectively.
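In other words (a sketch; table and trigger function are hypothetical):

    -- now rejected:
    CREATE TRIGGER t_ins_upd AFTER INSERT OR UPDATE ON t
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE t_audit();

    -- still allowed: one trigger per event, each with its own
    -- transition table
    CREATE TRIGGER t_ins AFTER INSERT ON t
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE t_audit();

    CREATE TRIGGER t_upd AFTER UPDATE ON t
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE t_audit();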
Patch by Thomas Munro
Discussion: https://postgr.es/m/CAEepm%3D11KHQ0JmETJQihSvhZB5mUZL2xrqHeXbCeLhDiqQ39%3Dw%40mail.gmail.com
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.
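That is, declarations like these now work when the referenced table is
partitioned (names illustrative):

    CREATE TABLE parted (a int, b text) PARTITION BY LIST (a);

    DO $$
    DECLARE
      r parted%ROWTYPE;     -- previously failed for partitioned tables
      v parted.b%TYPE;
    BEGIN
      SELECT 1, 'one' INTO r;
      v := r.b;
    END;
    $$;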
Dean Rasheed, reviewed by Amit Langote.
Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
When the input value to a CoerceToDomain expression node is a read-write
expanded datum, we should pass a read-only pointer to any domain CHECK
expressions and then return the original read-write pointer as the
expression result. Previously we were blindly passing the same pointer to
all the consumers of the value, making it possible for a function in CHECK
to modify or even delete the expanded value. (Since a plpgsql function
will absorb a passed-in read-write expanded array as a local variable
value, it will in fact delete the value on exit.)
A similar hazard of passing the same read-write pointer to multiple
consumers exists in domain_check() and in ExecEvalCase, so fix those too.
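A sketch of the hazard (hypothetical names; the plpgsql CHECK function
stands in for anything that might absorb or free the passed-in datum):

    CREATE FUNCTION arr_ok(a int[]) RETURNS bool AS $$
    BEGIN
      RETURN true;
    END;
    $$ LANGUAGE plpgsql;

    CREATE DOMAIN checked_arr AS int[] CHECK (arr_ok(VALUE));

    DO $$
    DECLARE
      v int[] := ARRAY[1, 2, 3];   -- held as a read-write expanded datum
      d checked_arr;
    BEGIN
      d := v;   -- pre-fix, the CHECK saw the same read-write pointer
    END;
    $$;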
The fix requires adding MakeExpandedObjectReadOnly calls at the appropriate
places, which is simple enough except that we need to get the data type's
typlen from somewhere. For the domain cases, solve this by redefining
DomainConstraintRef.tcache as okay for callers to access; there wasn't any
reason for the original convention against that, other than not wanting the
API of typcache.c to be any wider than it had to be. For CASE, there's
no good solution except to add a syscache lookup during executor start.
Per bug #14472 from Marcos Castedo. Back-patch to 9.5 where expanded
values were introduced.
Discussion: https://postgr.es/m/15225.1482431619@sss.pgh.pa.us
There's no good reason why plpgsql's GET DIAGNOSTICS statement can't
support an array element as target variable, since the execution code
already uses the generic exec_assign_value() function to assign to it.
Hence, refactor the grammar to allow that, by making getdiag_target
depend on the assign_var production.
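So this sort of thing now works (table name hypothetical):

    DO $$
    DECLARE
      counts bigint[];
    BEGIN
      INSERT INTO some_table SELECT generate_series(1, 5);
      GET DIAGNOSTICS counts[1] = ROW_COUNT;   -- array element as target
      RAISE NOTICE 'inserted % rows', counts[1];
    END;
    $$;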
Ideally we'd also let a cursor_variable expand to an element of a
refcursor[] array, but that's substantially harder since those statements
also have to handle bound-cursor-variable cases. For now, just make sure
the reported error is sensible, ie "cursor variable must be a simple
variable" not "variable must be of type cursor or refcursor". The latter
was quite confusing from the user's viewpoint, since what he wrote
satisfies the claimed restriction.
Per bug #14463 from Zhou Digoal. Given the lack of previous complaints,
I see no need for a back-patch.
Discussion: https://postgr.es/m/20161213152548.14897.81245@wrigleys.postgresql.org
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so there was
no bug before that commit.
Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.
Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
Further thought about bug #14174 motivated me to try the case of a
R/W datum being returned from a VALUES list, and sure enough it was
broken. Fix that.
Also add a regression test case exercising the same scenario for
FunctionScan. That's not broken right now, because the function's
result will get shoved into a tuplestore between generation and use;
but it could easily become broken whenever we get around to optimizing
FunctionScan better.
There don't seem to be any other places where we put the result of
expression evaluation into a virtual tuple slot that could then be
the source for Vars of further expression evaluation, so I think
this is the end of this bug.
If a plan node output expression returns an "expanded" datum, and that
output column is referenced in more than one place in upper-level plan
nodes, we need to ensure that what is returned is a read-only reference
not a read/write reference. Otherwise one of the referencing sites could
scribble on or even delete the expanded datum before we have evaluated the
others. Commit 1dc5ebc907, which introduced this feature, supposed
that it'd be sufficient to make SubqueryScan nodes force their output
columns to read-only state. The folly of that was revealed by bug #14174
from Andrew Gierth, and really should have been immediately obvious
considering that the planner will happily optimize SubqueryScan nodes
out of the plan without any regard for this issue.
The safest fix seems to be to make ExecProject() force its results into
read-only state; that will cover every case where a plan node returns
expression results. Actually we can delegate this to ExecTargetList()
since we can recursively assume that plain Vars will not reference
read-write datums. That should keep the extra overhead down to something
minimal. We no longer need ExecMakeSlotContentsReadOnly(), which was
introduced only in support of the idea that just a few plan node types
would need to do this.
In the future it would be nice to have the planner account for this problem
and inject force-to-read-only expression evaluation nodes into only the
places where there's a risk of multiple evaluation. That's not a suitable
solution for 9.5 or even 9.6 at this point, though.
Report: <20160603124628.9932.41279@wrigleys.postgresql.org>
Instead, use transaction abort. Given an unlucky bout of latency, the
timeout would cancel the RESET itself. Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that. Back-patch
to 9.1 (all supported versions). The query_canceled test still could
time out before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
With a bit of tweaking of the compile namestack data structure, we can
verify at compile time whether a CONTINUE or EXIT is legal. This is
surely better than leaving it to runtime, both because earlier is better
and because we can issue a proper error pointer. Also, we can get rid
of the old ad-hoc way of detecting the problem, which only took care of
CONTINUE, not EXIT.
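For example, a mistake like this is now caught when the function is
compiled (normally at CREATE FUNCTION time, via the validator):

    CREATE FUNCTION bad_exit() RETURNS void AS $$
    BEGIN
      EXIT;   -- not inside any loop: now rejected at compile time,
              -- with an error cursor pointing at the statement
    END;
    $$ LANGUAGE plpgsql;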
Jim Nasby, adjusted a bit by me
DO blocks use private simple_eval_estates to avoid intra-transaction memory
leakage, cf commit c7b849a89. I had forgotten about that while writing
commit 0fc94a5ba, but it means that expression execution trees created
within a DO block disappear immediately on exiting the DO block, and hence
can't safely be linked into plpgsql's session-wide cast hash table.
To fix, give a DO block a private cast hash table to go with its private
simple_eval_estate. This is less efficient than one could wish, since
DO blocks can no longer share any cast lookup work with other plpgsql
execution, but it shouldn't be too bad; in any case it's no worse than
what happened in DO blocks before commit 0fc94a5ba.
Per bug #13571 from Feike Steenbergen. Preliminary analysis by
Oleksandr Shulgin.
In commit 1345cc67bb, I introduced caching
of expressions representing type-cast operations into plpgsql. However,
I supposed that I could cache both the expression trees and the evaluation
state trees derived from them for the life of the session. This doesn't
work, because we execute the expressions in plpgsql's simple_eval_estate,
which has an ecxt_per_query_memory that is only transaction-lifespan.
Therefore we can end up putting pointers into the evaluation state tree
that point to transaction-lifespan memory; in particular this happens if
the cast expression calls a SQL-language function, as reported by Geoff
Winkless.
The minimum-risk fix seems to be to treat the state trees the same way
we do for "simple expression" trees in plpgsql, ie create them in the
simple_eval_estate's ecxt_per_query_memory, which means recreating them
once per transaction.
Since I had to introduce bookkeeping overhead for that anyway, I bought
back some of the added cost by sharing the read-only expression trees
across all functions in the session, instead of using a per-function
table as originally. The simple-expression bookkeeping takes care of
the recursive-usage risk that I was concerned about avoiding before.
At some point we should take a harder look at how all this works,
and see if we can't reduce the amount of tree reinitialization needed.
But that won't happen for 9.5.
In commit 9e3ad1aac5 I modified plpgsql
to use exec_stmt_return's simple-variables fast path in more cases.
However, I overlooked that there are really two different return
conventions in use here, depending on whether estate->retistuple is true,
and the existing fast-path code had only bothered to handle one of them.
So trying to return a scalar in a function returning composite, or vice
versa, could lead to unexpected error messages (typically "cache lookup
failed for type 0") or to a null-pointer-dereference crash.
In the DTYPE_VAR case, we can just throw error if retistuple is true,
corresponding to what happens in the general-expression code path that was
being used previously. (Perhaps someday both of these code paths should
attempt a coercion, but today is not that day.)
In the REC and ROW cases, just hand the problem to exec_eval_datum()
when not retistuple. Also clean up the ROW coding slightly so it looks
more like exec_eval_datum().
The previous commit also caused exec_stmt_return_next() to be used in
more cases, but that code seems to be OK as-is.
Per off-list report from Serge Rielau. This bug is new in 9.5 so no need
to back-patch.
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
memory. Back-patch to 9.0 (all supported versions). Little code ran
between the premature free and the read, so non-assert builds are
unlikely to witness user-visible consequences.
Add a bit of context sensitivity to plpgsql_yylex() so that it can
recognize when the word it is looking at is the first word of a new
statement, and if so whether it is the target of an assignment statement.
When we are at start of statement and it's not an assignment, we can
prefer recognizing unreserved keywords over recognizing variable names,
thereby allowing most statements' initial keywords to be demoted from
reserved to unreserved status. This is rather useful already (there are
15 such words that get demoted here), and what's more to the point is
that future patches proposing to add new plpgsql statements can avoid
objections about having to add new reserved words.
The keywords BEGIN, DECLARE, FOR, FOREACH, LOOP, WHILE need to remain
reserved because they can be preceded by block labels, and the logic
added here doesn't understand about block labels. In principle we
could probably fix that, but it would take more than one token of
lookback and the benefit doesn't seem worth extra complexity.
Also note I didn't de-reserve EXECUTE, because it is used in more places
than just statement start. It's possible it could be de-reserved with
more work, but that would be an independent fix.
In passing, also de-reserve COLLATE and DEFAULT, which shouldn't have
been reserved in the first place since they only need to be recognized
within DECLARE sections.
The number of % parameter markers in a RAISE statement should match the number
of parameters given. We used to check that at execution time, but we have
all the information needed at compile time, so let's check it at compile
time instead. It's generally better to find mistakes earlier.
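For instance, this mismatch is now reported when the block is compiled,
before any RAISE executes:

    DO $$
    BEGIN
      RAISE NOTICE 'two markers: % %', 'only one parameter';
    END;
    $$;
    -- ERROR:  too few parameters specified for RAISE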
Marko Tiikkaja, reviewed by Fabien Coelho
Infrastructure to allow plpgsql.extra_warnings and plpgsql.extra_errors.
Initial extra checks only for shadowed_variables.
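A sketch of the initial check in action:

    SET plpgsql.extra_warnings TO 'shadowed_variables';

    CREATE FUNCTION shadowtest(f1 int) RETURNS boolean AS $$
    DECLARE f1 int;   -- shadows the parameter f1
    BEGIN
      RETURN true;
    END;
    $$ LANGUAGE plpgsql;
    -- WARNING:  variable "f1" shadows a previously defined variable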
Marko Tiikkaja and Petr Jelinek
Reviewed by Simon Riggs and Pavel Stěhule
plpgsql likes to cache query plans and simple-expression execution state
trees across calls. This is a considerable win for multiple executions
of the same function. However, it's useless for DO blocks, since by
definition those are executed only once and discarded. Nonetheless,
we were allowing a DO block's expression execution trees to survive
until end of transaction, resulting in a significant intra-transaction
memory leak, as reported by Yeb Havinga. Worse, if the DO block exited
with an error, the compiled form of the block's code was leaked till
end of session --- along with subsidiary plancache entries.
To fix, make DO blocks keep their expression execution trees in a private
EState that's deleted at exit from the block, and add a PG_TRY block
to plpgsql_inline_handler to make sure that memory cleanup happens
even on error exits. Also add a regression test covering error handling
in a DO block, because my first try at this broke that. (The test is
not meant to prove that we don't leak memory anymore, though it could
be used for that with a much larger loop count.)
Ideally we'd back-patch this into all versions supporting DO blocks;
but the patch needs to add a field to struct PLpgSQL_execstate, and that
would break ABI compatibility for third-party plugins such as the plpgsql
debugger. Given the small number of complaints so far, fixing this in
HEAD only seems like an acceptable choice.
This option provides more detailed error messages when STRICT is used
and the number of rows returned is not one.
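Assuming this is the print_strict_params option, a sketch (the table and
its columns are made up):

    CREATE FUNCTION get_userid(username text) RETURNS int AS $$
    #print_strict_params on
    DECLARE
      userid int;
    BEGIN
      SELECT uid INTO STRICT userid FROM users WHERE name = username;
      RETURN userid;
    END;
    $$ LANGUAGE plpgsql;
    -- on failure, the error's DETAIL now reports the parameter values used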
Marko Tiikkaja, reviewed by Ian Lawrence Barwick
As pointed out by Tom Lane, we can allow other users of the error
handler callbacks to provide their own memory context by adding
the context to use to ErrorData and using that instead of explicitly
using ErrorContext.
This then allows GetErrorContextStack() to be called from inside
exception handlers, so modify plpgsql to take advantage of that and
add an associated regression test for it.
As GetErrorContextStack() borrowed setup and tear-down code from other
places, it was less than clear that it must only be called as a
top-level entry point into the error system and can't be called by an
exception handler (unlike the rest of the error system, which is set up
to be reentrant-safe).
Being called from an exception handler is outside the charter of
GetErrorContextStack(), so add a bit more protection against it,
improve the comments addressing why we have to set up an errordata
stack for this function at all, and add a few more regression tests.
Lack of clarity pointed out by Tom Lane; all bugs are mine.
This adds the ability to get the call stack as a string from within a
PL/pgSQL function, which can be handy for logging to a table, or to
include in a useful message to an end-user.
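The stack is retrieved with GET DIAGNOSTICS ... PG_CONTEXT; a minimal
sketch:

    CREATE FUNCTION where_am_i() RETURNS text AS $$
    DECLARE
      stack text;
    BEGIN
      GET DIAGNOSTICS stack = PG_CONTEXT;
      RETURN stack;   -- e.g. for logging or inclusion in a message
    END;
    $$ LANGUAGE plpgsql;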
Pavel Stehule, reviewed by Rushabh Lathia and rather heavily whacked
around by Stephen Frost.
Commit 31a891857a added some tests in
plpgsql.sql that used a function rather unthinkingly named "foo()".
However, rangefuncs.sql has some much older tests that create a function
of that name, and since these test scripts run in parallel, there is a
chance of failures if the timing is just right. Use another name to
avoid that. Per buildfarm (failure seen today on "hamerkop", but
probably it's happened before and not been noticed).
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS. RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields. Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal. This freedom has other applications; consider an FDW
propagating an error from an RDBMS having no schema support.
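A sketch of both halves (field values made up):

    DO $$
    DECLARE
      tbl text;
      con text;
    BEGIN
      RAISE EXCEPTION 'custom error'
        USING TABLE = 'my_table', CONSTRAINT = 'my_constraint';
    EXCEPTION WHEN OTHERS THEN
      GET STACKED DIAGNOSTICS tbl = TABLE_NAME, con = CONSTRAINT_NAME;
      RAISE NOTICE 'table = %, constraint = %', tbl, con;
    END;
    $$;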
Back-patch to 9.3, where core support for the error fields was
introduced. This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
Pavel Stehule, lexical revisions by Noah Misch.
exec_simple_check_plan and exec_eval_simple_expr attempted to call
GetCachedPlan directly. This meant that if an error was thrown during
planning, the resulting context traceback would not include the line
normally contributed by _SPI_error_callback. This is already inconsistent,
but just to be really odd, a re-execution of the very same expression
*would* show the additional context line, because we'd already have cached
the plan and marked the expression as non-simple.
The problem is easy to demonstrate in 9.2 and HEAD because planning of a
cached plan doesn't occur at all until GetCachedPlan is done. In earlier
versions, it could only be an issue if initial planning had succeeded, then
a replan was forced (already somewhat improbable for a simple expression),
and the replan attempt failed. Since the issue is mainly cosmetic in older
branches anyway, it doesn't seem worth the risk of trying to fix it there.
It is worth fixing in 9.2 since the instability of the context printout can
affect the results of GET STACKED DIAGNOSTICS, as per a recent discussion
on pgsql-novice.
To fix, introduce a SPI function that wraps GetCachedPlan while installing
the correct callback function. Use this instead of calling GetCachedPlan
directly from plpgsql.
Also introduce a wrapper function for extracting a SPI plan's
CachedPlanSource list. This lets us stop including spi_priv.h in
pl_exec.c, which was never a very good idea from a modularity standpoint.
In passing, fix a similar inconsistency that could occur in SPI_cursor_open,
which was also calling GetCachedPlan without setting up a context callback.
For some reason lost in the mists of prehistory, RETURN was only coded to
allow a simple reference to a composite variable when the function's return
type is composite. Allow an expression instead, while preserving the
efficiency of the original code path in the case where the expression is
indeed just a composite variable's name. Likewise for RETURN NEXT.
As is true in various other places, the supplied expression must yield
exactly the number and data types of the required columns. There was some
discussion of relaxing that for pl/pgsql, but no consensus yet, so this
patch doesn't address that.
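So, for example (type and function names illustrative):

    CREATE TYPE int_pair AS (a int, b int);

    CREATE FUNCTION make_pair(x int) RETURNS int_pair AS $$
    BEGIN
      RETURN ROW(x, x + 1);   -- an expression, not just a variable name
    END;
    $$ LANGUAGE plpgsql;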
Asif Rehman, reviewed by Pavel Stehule
The parser got confused if a cursor parameter had the same name as
a plpgsql variable. Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.
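The problematic case looked something like this (sketch; post-fix, the
"p" inside the cursor query is the cursor parameter, while the "p" in the
OPEN call is the outer variable):

    DO $$
    DECLARE
      p int := 41;
      c CURSOR (p int) FOR SELECT p + 1;
      r record;
    BEGIN
      OPEN c(p);        -- passes the variable p (41) as the parameter
      FETCH c INTO r;   -- yields 42
    END;
    $$;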
Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
Fix up some infelicitous coding in DefineRange, and add some missing error
checks. Rearrange operator strategy number assignments for GiST anyrange
opclass so that they don't make such a mess of opr_sanity's table of
operator names associated with different strategy numbers. Assign
hopefully-temporary selectivity estimators to range operators that didn't
have one --- poor as the estimates are, they're still a lot better than the
default 0.5 estimate, and they'll shut up the opr_sanity test that wants to
see selectivity estimators on all built-in operators.