memory if the result had zero rows, and also if there was any sort of error
while converting the result tuples into Python data. Reported and partially
fixed by Andres Freund.
Back-patch to all supported versions. Note: I haven't tested the 7.4 fix.
7.4's configure check for python is so obsolete it doesn't work on my
current machines :-(. The logic change is pretty straightforward though.
In PLy_spi_execute_plan, use the data-type specific Python-to-PostgreSQL
conversion function instead of passing everything through InputFunctionCall
as a string. The equivalent fix was already done months ago for function
parameters and return values, but this other gateway between Python and
PostgreSQL was apparently forgotten. As a result, data types that need
special treatment, such as bytea, would misbehave when used with
plpy.execute.
old memory context in plpython. Before only one of them was marked
volatile, but per report from Zdenek Kotala, some compilers do the
wrong thing here.
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
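For illustration, a hedged sketch of the calling convention this enables, assuming the convenience forms take the shape SearchSysCache1() through SearchSysCache4(); the RELOID lookup is just a familiar example, not code from the patch.

    HeapTuple tuple;

    /* Before: every caller padded the key list out to the maximum of 4. */
    tuple = SearchSysCache(RELOID,
                           ObjectIdGetDatum(relid),
                           0, 0, 0);

    /* After: a one-key form hides the maximum key count from callers. */
    tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    if (HeapTupleIsValid(tuple))
    {
        /* ... inspect the cached pg_class tuple ... */
        ReleaseSysCache(tuple);
    }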
Mimic the Python interpreter's own logic for printing exceptions instead
of just using the straight str() call, so that you get
    plpy.SPIError
instead of
    <class 'plpy.SPIError'>
and for built-in exceptions merely
    UnicodeEncodeError
Besides looking better, this cuts down on the endless version differences
in the regression test expected files.
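A rough sketch (for Python 2's C API, not the committed code) of how the interpreter-style name can be derived from the exception type's __module__ and __name__ attributes; the helper name and StringInfo usage are illustrative.

    /* Illustrative helper: append "plpy.SPIError" for module-level classes
     * but a bare "UnicodeEncodeError" for built-in exceptions. */
    static void
    append_exception_name(StringInfo buf, PyObject *exc_type)
    {
        PyObject *mod = PyObject_GetAttrString(exc_type, "__module__");
        PyObject *name = PyObject_GetAttrString(exc_type, "__name__");

        PyErr_Clear();          /* ignore attribute-lookup failures here */

        if (mod && PyString_Check(mod) &&
            strcmp(PyString_AsString(mod), "exceptions") != 0)
            appendStringInfo(buf, "%s.", PyString_AsString(mod));
        if (name && PyString_Check(name))
            appendStringInfoString(buf, PyString_AsString(name));

        Py_XDECREF(mod);
        Py_XDECREF(name);
    }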
Behaves more or less unchanged compared to Python 2, but the new language
variant is called plpython3u. Documentation describing the naming scheme
is included.
In PLy_output(), when the elog() call in the TRY branch throws an exception
(this can happen when a statement timeout kicks in, for example), the
PyErr_SetString() call in the CATCH branch can cause a segfault: the
Py_XDECREF(so) call just before it releases memory that is still in use,
because the sv variable passed to PyErr_SetString() points into memory
owned by so.
Backpatched back to 8.0, where this code was introduced.
I also threw in a couple of volatile declarations for variables that are used
before and after the TRY. I don't think they caused the crash that I
observed, but they could become issues.
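To illustrate the hazard, a simplified sketch (not the committed fix), with the variables named as in the description above:

    /* sv borrows storage owned by the Python object so */
    char *sv = PyString_AsString(so);

    /* Hazard: dropping so first can free the buffer sv points at ... */
    Py_XDECREF(so);
    PyErr_SetString(PLy_exc_error, sv);   /* ... and then read freed memory */

    /* Safe ordering: let PyErr_SetString() copy sv before so is released. */
    PyErr_SetString(PLy_exc_error, sv);
    Py_XDECREF(so);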
Check calls of PyUnicode_AsEncodedString() for a NULL return, which
probably means the encoding name is not known. Add special treatment for
SQL_ASCII, which Python definitely does not know.
Since using SQL_ASCII produces errors in the regression tests when
non-ASCII characters are involved, we have to put back various regression
test result variants.
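A minimal sketch of the kind of check involved, assuming the server encoding name is handed to Python and SQL_ASCII is mapped to Python's "ascii" codec; the fallback and message wording are assumptions, not the committed code.

    const char *serverenc = GetDatabaseEncodingName();
    PyObject   *bytes;

    /* Python has no codec named SQL_ASCII; assume plain ASCII instead. */
    if (GetDatabaseEncoding() == PG_SQL_ASCII)
        serverenc = "ascii";

    bytes = PyUnicode_AsEncodedString(unicode_obj, serverenc, "strict");
    if (bytes == NULL)
        PLy_elog(ERROR, "could not convert Python Unicode object to PostgreSQL server encoding");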
PL/Python now accepts Unicode objects where it previously only accepted string
objects (for example, as return value). Unicode objects are converted to the
PostgreSQL server encoding as necessary.
This change is also necessary for future Python 3 support, which treats all
strings as Unicode objects.
Since this removes the error conditions that the plpython_unicode test file
tested for, the alternative result files are no longer necessary.
Before, PL/Python converted data between SQL and Python by going
through a C string representation. This broke for bytea in two ways:
- On input (function parameters), you would get a Python string that
contains bytea's particular external representation with backslashes
etc., instead of a sequence of bytes, which is what you would expect
in a Python environment. This problem is exacerbated by the new
bytea output format.
- On output (function return value), null bytes in the Python string
would cause truncation before the data gets stored into a bytea
datum.
This is now fixed by converting directly between the PostgreSQL datum
and the Python representation.
The required generalized infrastructure also allows for other
improvements in passing:
- When returning a boolean value, the SQL datum is now true if and
only if Python considers the value that was passed out of the
PL/Python function to be true. Previously, this determination was
left to the boolean data type input function. So, now returning
'foo' results in true, because Python considers it true, rather than
false because PostgreSQL considers it false.
- On input, we can convert the integer and float types directly to
their Python equivalents without having to go through an
intermediate string representation.
original patch by Caleb Welton, with updates by myself
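To illustrate the output-side fix described above, a minimal sketch of building the bytea datum straight from the Python object's byte buffer (pyobj and the error wording are illustrative, not the committed code):

    char       *buf;
    Py_ssize_t  len;
    bytea      *result;

    /* Get the raw bytes and length; embedded null bytes are preserved,
     * unlike a NUL-terminated C-string round trip. */
    if (PyString_AsStringAndSize(pyobj, &buf, &len) < 0)
        PLy_elog(ERROR, "could not extract bytes from Python object");

    result = (bytea *) palloc(VARHDRSZ + len);
    SET_VARSIZE(result, VARHDRSZ + len);
    memcpy(VARDATA(result), buf, len);

    return PointerGetDatum(result);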
Extract the "while creating return value" and "while modifying trigger
row" parts of some error messages into another layer of error context.
This will simplify the upcoming patch to improve data type support, but
it can stand on its own.
Switch the implementation of the plan and result types to generic attribute
management, as described at <http://docs.python.org/extending/newtypes.html>.
This modernizes and simplifies the code a bit and prepares for Python 3.1,
where the old way doesn't work anymore.
When examining what Python type to convert a PostgreSQL type to on input,
look at the base type of the input type, otherwise all domains end up
defaulting to string.
Error messages from PL/Python now always mention the function name in the
CONTEXT: field. This also obsoletes the few places that tried to do the
same manually.
Regression test files are updated to work with Python 2.4-2.6. I don't have
access to older versions right now.
by extending the ereport() API to cater for pluralization directly. This
is better than the original method of calling ngettext outside the elog.c
code because (1) it avoids double translation, which wastes cycles and in
the worst case could give a wrong result; and (2) it avoids having to use
a different coding method in PL code than in the core backend. The
client-side uses of ngettext are not touched since neither of these concerns
is very pressing in the client environment. Per my proposal of yesterday.
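A usage sketch, assuming the extension takes the form of errmsg_plural()/errdetail_plural() variants that carry both message forms and pick one (after translation) based on the count:

    ereport(ERROR,
            (errcode(ERRCODE_SYNTAX_ERROR),
             errmsg_plural("%d parameter was expected",
                           "%d parameters were expected",
                           nparams,
                           nparams)));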
for its arguments. Also add a regression test, since someone apparently
changed every single plpython test case to use only named parameters; else
we'd have noticed this sooner.
Euler Taveira de Oliveira, per a report from Alvaro
In the backend, I changed only a handful of exemplary or important-looking
instances to make use of the plural support; there is probably more work
there. For the rest of the source, this should cover all relevant cases.
and heap_deformtuple in favor of the newer functions heap_form_tuple et al
(which do the same things but use bool control flags instead of arbitrary
char values). Eliminate the former duplicate coding of these functions,
reducing the deprecated functions to mere wrappers around the newer ones.
We can't get rid of them entirely because add-on modules probably still
contain many instances of the old coding style.
Kris Jurka
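For reference, the two calling styles side by side (tupdesc and values are illustrative):

    /* Deprecated wrapper: null flags as char, 'n' for null, ' ' for not. */
    char      nulls_char[] = {' ', 'n'};
    HeapTuple old_style = heap_formtuple(tupdesc, values, nulls_char);

    /* Preferred form: the same call with bool flags. */
    bool      isnull[] = {false, true};
    HeapTuple new_style = heap_form_tuple(tupdesc, values, isnull);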
so long as all the trailing arguments are of the same (non-array) type.
The function receives them as a single array argument (which is why they
have to all be the same type).
It might be useful to extend this facility to aggregates, but this patch
doesn't do that.
This patch imposes a noticeable slowdown on function lookup --- a follow-on
patch will fix that by adding a redundant column to pg_proc.
Pavel Stehule
unnecessary #include lines in it. Also, move some tuple routine prototypes and
macros to htup.h, which allows removal of heapam.h inclusion from some .c
files.
For this to work, a new header file access/sysattr.h needed to be created,
initially containing attribute numbers of system columns, for pg_dump usage.
While at it, make contrib ltree, intarray and hstore header files more
consistent with our header style.
strings. This patch introduces four support functions cstring_to_text,
cstring_to_text_with_len, text_to_cstring, and text_to_cstring_buffer, and
two macros CStringGetTextDatum and TextDatumGetCString. A number of
existing macros that provided variants on these themes were removed.
Most of the places that need to make such conversions now require just one
function or macro call, in place of the multiple notational layers that used
to be needed. There are no longer any direct calls of textout or textin,
and we got most of the places that were using handmade conversions via
memcpy (there may be a few still lurking, though).
This commit doesn't make any serious effort to eliminate transient memory
leaks caused by detoasting toasted text objects before they reach
text_to_cstring. We changed PG_GETARG_TEXT_P to PG_GETARG_TEXT_PP in a few
places where it was easy, but much more could be done.
Brendan Jurd and Tom Lane
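A before/after sketch of the notational change (the variables are illustrative):

    /* Old style: multiple layers through the type's I/O functions. */
    text *t = (text *) DatumGetPointer(
                  DirectFunctionCall1(textin, CStringGetDatum(cstr)));
    char *s = DatumGetCString(DirectFunctionCall1(textout, txtdatum));

    /* New style: one function or macro per conversion. */
    text *t2 = cstring_to_text(cstr);
    char *s2 = text_to_cstring(txt);
    Datum d  = CStringGetTextDatum(cstr);
    char *s3 = TextDatumGetCString(txtdatum);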
a trigger's target table. The rowtype could change from one call to the
next, so cope in such cases, while avoiding doing repetitive catalog lookups.
Per bug #3847 from Mark Reid.
Backpatch to 8.2.x. Likely this fix should go further back, but I can't test
it because I no longer have a machine with a pre-2.5 Python installation.
(Maybe we should rethink that idea about not supporting Python 2.5 in the
older branches.)
from old versions of gcc. It's not clear to me that this is really
necessary for correctness, but fewer warnings are always good.
Per buildfarm results and local testing.
It removes the last remaining casts inside struct definitions.
Such usage is bad practice, as it hides problems from the compiler.
The reason for the casts is the practice, popular in some circles,
of defining functions as foo(MyObj *) instead of foo(PyObject *),
thus avoiding a local variable inside the function and making
direct calling easier. As PL/Python does not use that style,
the casts were unnecessary from the start.
Marko Kreen
keeping private state in each backend that has inserted and deleted the same
tuple during its current top-level transaction. This is sufficient since
there is no need to be able to determine the cmin/cmax from any other
transaction. This gets us back down to 23-byte headers, removing a penalty
paid in 8.0 to support subtransactions. Patch by Heikki Linnakangas, with
minor revisions by moi, following a design hashed out awhile back on the
pghackers list.
Standard English uses "may", "can", and "might" in different ways:
may - permission, "You may borrow my rake."
can - ability, "I can lift that log."
might - possibility, "It might rain today."
Unfortunately, in conversational English, their use is often mixed, as
in, "You may use this variable to do X", when in fact, "can" is a better
choice. Similarly, "It may crash" is better stated, "It might crash".
python 2.5. This involves fixing several violations of the published
spec for creating PyTypeObjects, and adding another regression test
expected output for yet another variation of error message spelling.
Fix all the standard PLs to be able to return tuples from FOO_RETURNING
statements as well as utility statements that return tuples. Also,
fix oversight that SPI_processed wasn't set for a utility statement
returning tuples. Per recent discussion.
loaded libraries: call functions _PG_init() and _PG_fini() if the library
defines such symbols. Hence we no longer need to specify an initialization
function in preload_libraries: we can assume that the library used the
_PG_init() convention, instead. This removes one source of pilot error
in use of preloaded libraries. Original patch by Ralf Engelschall,
preload_libraries changes by me.
Studio 2005. Basically MS defined errcode in the headers with a typedef,
so we have to #define it out of the way.
While at it, fix a function declaration in plpython that didn't match
the implementation (volatile missing).
Magnus Hagander
After updating to the latest cvs, and also building most of the addons
(like PLs), the following patch is needed for win32 + Visual C++.
* Switch to use the new win32 semaphore code
* Rename win32_open to pgwin32_open. win32_open collides with symbols
defined in Perl. MinGW didn't detect it, MSVC did. And it's a bit too
generic a name to export globally, imho...
* Python defines some partially broken #pragmas in the headers when
doing a debug build. Workaround.
Magnus Hagander
by creating a reference-count mechanism, similar to what we did a long time
ago for catcache entries. The back branches have an ugly solution involving
lots of extra copies, but this way is more efficient. Reference counting is
only applied to tupdescs that are actually in caches --- there seems no need
to use it for tupdescs that are generated in the executor, since they'll go
away during plan shutdown by virtue of being in the per-query memory context.
Neil Conway and Tom Lane
1) named parameters additionally to args[]
2) return composite-types from plpython as dictionary
3) return result-set from plpython as list, iterator or generator
Hannu Krosing
Sven Suursoho
functions are not strict, they will be called (passing a NULL first parameter)
during any attempt to input a NULL value of their datatype. Currently, all
our input functions are strict and so this commit does not change any
behavior. However, this will make it possible to build domain input functions
that centralize checking of domain constraints, thereby closing numerous holes
in our domain support, as per previous discussion.
While at it, I took the opportunity to introduce convenience functions
InputFunctionCall, OutputFunctionCall, etc to use in code that calls I/O
functions. This eliminates a lot of grotty-looking casts, but the main
motivation is to make it easier to grep for these places if we ever need
to touch them again.
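A sketch of how a caller uses the convenience function, assuming the usual lsyscache/fmgr lookup sequence:

    Oid       typinput;
    Oid       typioparam;
    FmgrInfo  flinfo;
    Datum     value;

    /* Look up and cache the type's input function ... */
    getTypeInputInfo(typoid, &typinput, &typioparam);
    fmgr_info(typinput, &flinfo);

    /* ... then call it without hand-rolled casts; with non-strict input
     * functions this is also the path a NULL value would take. */
    value = InputFunctionCall(&flinfo, str, typioparam, -1);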
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
more compliant with the error message style guide. In particular,
errdetail should begin with a capital letter and end with a period,
whereas errmsg should not. I also fixed a few related issues in
passing, such as fixing the repeated misspelling of "lexeme" in
contrib/tsearch2 (per Tom's suggestion).
(I didn't use his patch, however). A void-returning PL/Python function
must return None (from Python), which is translated into a void datum
(and *not* NULL) for Postgres. I also added some regression tests for
this functionality.
in leaking memory when invoking a PL/Python procedure that raises an
exception. Unfortunately this still leaks memory, but at least the
largest leak has been plugged.
This patch also fixes a reference counting mistake in PLy_modify_tuple()
for 8.0, 8.1 and HEAD: we don't actually own a reference to `platt', so
we shouldn't Py_DECREF() it.
one argument at a time and then inserting the argument into a Python
list via PyList_SetItem(). This "steals" the reference to the argument:
that is, the reference to the new list member is now held by the Python
list itself. This works fine, except if an elog occurs. This causes the
function's PG_CATCH() block to be invoked, which decrements the
reference counts on both the current argument and the list of arguments.
If the elog happens to occur during the second or subsequent iteration
of the loop, the reference count on the current argument will be
decremented twice.
The fix is simple: set the local pointer to the current argument to NULL
immediately after adding it to the argument list. This ensures that the
Py_XDECREF() in the PG_CATCH() block doesn't double-decrement.
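The pattern and the fix, sketched (build_python_arg() is a hypothetical stand-in for the real Datum-to-Python conversion):

    PyObject *args = PyList_New(nargs);
    PyObject *arg = NULL;
    int       i;

    PG_TRY();
    {
        for (i = 0; i < nargs; i++)
        {
            arg = build_python_arg(i);

            /* PyList_SetItem steals the reference to arg ... */
            PyList_SetItem(args, i, arg);

            /* ... so clear our pointer, or the PG_CATCH cleanup below
             * would decrement the same reference a second time. */
            arg = NULL;
        }
    }
    PG_CATCH();
    {
        Py_XDECREF(arg);
        Py_XDECREF(args);
        PG_RE_THROW();
    }
    PG_END_TRY();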
- use "bool" rather than "int" for boolean variables
- use "PLy_malloc" rather than "malloc" in two places
- define "PLy_strdup", and use it rather than malloc() + strcpy() in
two places (which should have been memcpy(), anyway).
- remove a bunch of redundant parentheses from expressions that do not
need the parentheses for code clarity
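A plausible shape for the new helper, assuming PLy_malloc() already reports allocation failure itself:

    static char *
    PLy_strdup(const char *str)
    {
        size_t  len = strlen(str) + 1;
        char   *result = PLy_malloc(len);

        memcpy(result, str, len);
        return result;
    }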
when a plpython function returns unicode" thread:
http://archives.postgresql.org/pgsql-bugs/2005-06/msg00105.php
In several places PL/Python was calling PyObject_Str() and then
PyString_AsString() without checking if the former had returned
NULL to indicate an error. PyString_AsString() doesn't expect a
NULL argument, so passing one causes a segmentation fault. This
patch adds checks for NULL and raises errors via PLy_elog(), which
prints details of the underlying Python exception. The patch also
adds regression tests for these checks. All tests pass on my
Solaris 9 box running HEAD and Python 2.4.1.
In one place the patch doesn't call PLy_elog() because that could
cause infinite recursion; see the comment I added. I'm not sure
how to test that particular case or whether it's even possible to
get an error there: the value that the code should check is the
Python exception type, so I wonder if a NULL value "shouldn't
happen." This patch converts NULL to "Unknown Exception" but I
wonder if an Assert() would be appropriate.
The patch is against HEAD but the same changes should be applied
to earlier versions because they have the same problem. The patch
might not apply cleanly against earlier versions -- will the committer
take care of little differences or should I submit different versions
of the patch?
Michael Fuhr
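The shape of the added checks, roughly (the message wording is illustrative):

    PyObject *so = PyObject_Str(obj);
    char     *str;

    if (so == NULL || (str = PyString_AsString(so)) == NULL)
        PLy_elog(ERROR, "could not create string representation of Python object");

    /* ... use str, then release the temporary string object ... */
    Py_XDECREF(so);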
This patch allows the PL/Python module to do set-returning (SRF) functions.
The patch was taken from the CVS version.
I have modified the plpython.c file and have added a test sql script for
testing the functionality. It was actually the script that was in the
8.0.3 version but has since been removed.
In order to signal the end of a set, the called Python function must
simply return plpy.EndOfSet, and the set will be returned.
Gerrit van Dyk
which is neither needed by nor related to that header. Remove the bogus
inclusion and instead include the header in those C files that actually
need it. Also fix unnecessary inclusions and bad inclusion order in
tsearch2 files.
to produce when running the executor. This is consistent with the internal
executor APIs (such as ExecutorRun), which also use a long for this purpose.
It also allows FETCH_ALL to be passed -- since FETCH_ALL is defined as
LONG_MAX, this wouldn't have worked on platforms where int and long are of
different sizes. Per report from Tzahi Fadida.
change saves a great deal of space in pg_proc and its primary index,
and it eliminates the former requirement that INDEX_MAX_KEYS and
FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded
in the on-disk representation (because it affects index tuple header
size), but FUNC_MAX_ARGS is not. I believe it would now be possible
to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet.
There are still a lot of vestigial references to FUNC_MAX_ARGS, which
I will clean up in a separate pass. However, getting rid of it
altogether would require changing the FunctionCallInfoData struct,
and I'm not sure I want to buy into that.
mode see a fresh snapshot for each command in the function, rather than
using the latest interactive command's snapshot. Also, suppress fresh
snapshots as well as CommandCounterIncrement inside STABLE and IMMUTABLE
functions, instead using the snapshot taken for the most closely nested
regular query. (This behavior is only sane for read-only functions, so
the patch also enforces that such functions contain only SELECT commands.)
As per my proposal of 6-Sep-2004; I note that I floated essentially the
same proposal on 19-Jun-2002, but that discussion tailed off without any
action. Since 8.0 seems like the right place to be taking possibly
nontrivial backwards compatibility hits, let's get it done now.
Create a shared function to convert a SPI error code into a string
(replacing near-duplicate code in several PLs), and use it anywhere
that a SPI function call error is reported.
possible to trap an error inside a function rather than letting it
propagate out to PostgresMain. You still have to use AbortCurrentTransaction
to clean up, but at least the error handling itself will cooperate.
of a composite type to get that type's OID as their second parameter,
in place of typelem which is useless. The actual changes are mostly
centralized in getTypeInputInfo and siblings, but I had to fix a few
places that were fetching pg_type.typelem for themselves instead of
using the lsyscache.c routines. Also, I renamed all the related variables
from 'typelem' to 'typioparam' to discourage people from assuming that
they necessarily contain array element types.
conversion of basic ASCII letters. Remove all uses of strcasecmp and
strncasecmp in favor of new functions pg_strcasecmp and pg_strncasecmp;
remove most but not all direct uses of toupper and tolower in favor of
pg_toupper and pg_tolower. These functions use the same notions of
case folding already developed for identifier case conversion. I left
the straight locale-based folding in place for situations where we are
just manipulating user data and not trying to match it to built-in
strings --- for example, the SQL upper() function is still locale
dependent. Perhaps this will prove not to be what's wanted, but at
the moment we can initdb and pass regression tests in Turkish locale.
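A usage sketch: matching user input against built-in keywords with the new locale-independent functions (the option names are illustrative):

    /* ASCII-only case folding, so e.g. the Turkish dotless i cannot make
     * a keyword comparison fail. */
    if (pg_strcasecmp(value, "ignore") == 0)
        mode = MODE_IGNORE;
    else if (pg_strncasecmp(value, "warn", 4) == 0)
        mode = MODE_WARN;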
results with tuples as ordinary varlena Datums. This commit does not
in itself do much for us, except eliminate the horrid memory leak
associated with evaluation of whole-row variables. However, it lays the
groundwork for allowing composite types as table columns, and perhaps
some other useful features as well. Per my proposal of a few days ago.
pointer type when it is not necessary to do so.
For future reference, casting NULL to a pointer type is only necessary
when (a) invoking a function AND either (b) the function has no prototype
OR (c) the function is a varargs function.
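A quick illustration of the rule, with execl() (from <unistd.h>) as the classic varargs case:

    extern void take_ptr(char *p);   /* prototyped, fixed arity */

    take_ptr(NULL);                  /* fine: the prototype supplies the type */

    /* varargs: no prototype constrains the trailing arguments, so the
     * cast is required where NULL is defined as plain 0 */
    execl("/bin/sh", "sh", "-c", "true", (char *) NULL);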
parameters to be declared with names. pg_proc has a column to store
names, and CREATE FUNCTION can insert data into it, but that's all as
yet. I need to do more work on the pg_dump and plpgsql portions of the
patch before committing those, but I thought I'd get the bulky changes
in before the tree drifts under me.
initdb forced due to pg_proc change.