To avoid having the python headers hijack various definitions,
we now include them after all the system headers we want, having
first undefined some of the things they want to define. After that's
done we restore the things they scribbled on that matter, namely our
snprintf and vsnprintf macros, if we're using them.
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case. This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
Certain subdirectories do not get built if corresponding options are not
selected at configure time. However, "make distprep" should visit such
directories anyway, so that constructing derived files to be included in
the tarball happens without requiring all configure options to be given
in the tarball build script. Likewise, it's better if cleanup actions
unconditionally visit all directories (for example, this ensures proper
cleanup if someone has done a manual make in such a subdirectory).
To handle this, set up a convention that subdirectories that are
conditionally included in SUBDIRS should be added to ALWAYS_SUBDIRS
instead when they are excluded.
Back-patch to 9.1, so that plpython's spiexceptions.h will get provided
in 9.1 tarballs. There don't appear to be any instances where distprep
actions got missed in previous releases, and anyway this fix requires
gmake 3.80 so we don't want to apply it before 9.1.
The --flag argument can be used to tell xgettext the arguments of
which functions should be flagged with c-format in the PO files,
instead of guessing based on the presence of format specifiers, which
fails if no format specifiers are present but the translation
accidentally introduces one.
Appropriate flag settings have been added for each message catalog.
based on a patch by Christoph Berg for bug #6066
Put gettext trigger words that are common to the backend and backend
modules into a makefile variable to include everywhere, to avoid
error-prone repetitions.
It currently doesn't make a difference, but it's inconsistent with
most other usage, and it might interfere with a future patch, so I'll
change it all in a separate commit.
Also, replace tabs with spaces for alignment.
install-sh can install multiple files at once, so for loops are not
necessary. This was already changed for the rest of the code some
time ago, but pgxs.mk was apparently forgotten, and the obsolete
coding style has now been copied to the PLs as well.
This also fixes the problem that the for loops in question did not
catch errors.
The style is set to "printf" for backwards compatibility everywhere except
on Windows, where it is set to "gnu_printf", which eliminates hundreds of
false error messages from modern versions of gcc arising from %m and %ll{d,u}
formats.
It assumed that the lineno from the traceback always refers to the
PL/Python function. If you created a PL/Python function that imports
some code, runs it, and that code raises an exception, PLy_traceback
would get utterly confused.
Now we look at the file name reported with the traceback and only
print the source line if it came from the PL/Python function.
Jan Urbański
This improves reporting, as the error string now includes the actual
Python exception. As a side effect, this no longer sets the errcode to
ERRCODE_DATA_EXCEPTION, which might be considered a feature, as it's
not documented and not clear why iterator errors should be treated
differently.
Jan Urbański
Seen with an older gcc version. I'm not sure these represent any real
risk factor, but still a bit scary. Anyway we have lots of other
volatile-marked variables in this code, so a couple more won't hurt.
The original scheme for this was to symlink plpython.$DLSUFFIX to
plpython2.$DLSUFFIX, but that doesn't work on Windows, and only
accidentally failed to fail because of the way that CREATE LANGUAGE created
or didn't create new C functions. My changes of yesterday exposed the
weakness of that approach. To fix, get rid of the symlink and make
pg_pltemplate show what's really going on.
This mostly just involves creating control, install, and
update-from-unpackaged scripts for them. However, I had to adjust plperl
and plpython to not share the same support functions between variants,
because we can't put the same function into multiple extensions.
catversion bump forced due to new contents of pg_pltemplate, and because
initdb now installs plpgsql as an extension not a bare language.
Add support for regression testing these as extensions not bare
languages.
Fix a couple of other issues that popped up while testing this: my initial
hack at pg_dump binary-upgrade support didn't work right, and we don't want
an extra schema permissions test after all.
Documentation changes still to come, but I'm committing now to see
whether the MSVC build scripts need work (likely they do).
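For illustration only (not part of this commit), a rough sketch of the user-visible effect, with the extension names as shipped in 9.1:
    -- procedural languages are now packaged as extensions
    CREATE EXTENSION plpythonu;
    SELECT extname FROM pg_extension;            -- plpgsql, plpythonu, ...
    -- a database upgraded from 9.0 that already has the language installed
    -- can absorb it into an extension via the update-from-unpackaged script:
    CREATE EXTENSION plpythonu FROM unpackaged;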
This provides a separate exception class for each error code that the
backend defines, as well as the ability to get the SQLSTATE from the
exception object.
Jan Urbański, reviewed by Steve Singer
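As a hedged illustration (mine, not the commit's) of the new exception classes and the sqlstate attribute; the uniq_vals table is hypothetical and plpythonu is assumed to be installed:
    CREATE FUNCTION insert_once(v integer) RETURNS text
    LANGUAGE plpythonu AS $$
    from plpy import spiexceptions
    try:
        plan = plpy.prepare("INSERT INTO uniq_vals VALUES ($1)", ["integer"])
        plpy.execute(plan, [v])
    except spiexceptions.UniqueViolation, e:
        return "duplicate, SQLSTATE %s" % e.sqlstate
    return "inserted"
    $$;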
Adds a context manager, obtainable by plpy.subtransaction(), to run a
group of statements in a subtransaction.
Jan Urbański, reviewed by Steve Singer, additional scribbling by me
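A minimal sketch (not from the patch) of the new construct; the accounts table is made up and Python 2.6 or later is assumed for the bare with statement:
    CREATE FUNCTION transfer(amount numeric) RETURNS void
    LANGUAGE plpythonu AS $$
    plan = plpy.prepare(
        "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
        ["numeric", "integer"])
    with plpy.subtransaction():
        plpy.execute(plan, [-amount, 1])
        plpy.execute(plan, [amount, 2])
        # if either statement fails, both are rolled back together
    $$;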
We don't have complete expected coverage for Python 2.2 anyway, so it
doesn't seem worth keeping this one around when no one appears to be
updating it. Visual inspection of the differences ought to be good
enough for the few who care about this obsolete Python version.
This allows functions with multiple OUT parameters to return either a
single record or multiple records (RECORD or SETOF RECORD).
Jan Urbański, reviewed by Hitoshi Harada
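A hedged example (not part of the commit) of the kind of function this enables:
    CREATE FUNCTION divmod_set(n integer, OUT quotient integer, OUT remainder integer)
    RETURNS SETOF record
    LANGUAGE plpythonu AS $$
    # each tuple is matched positionally to the OUT parameters
    return [(i // 3, i % 3) for i in range(n)]
    $$;
    SELECT * FROM divmod_set(5);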
Add functions plpy.quote_ident, plpy.quote_literal,
plpy.quote_nullable, which wrap the equivalent SQL functions.
To be able to propagate char * constness properly, make the argument
of quote_literal_cstr() const char *. This also makes it more
consistent with quote_identifier().
Jan Urbański, reviewed by Hitoshi Harada, some refinements by Peter
Eisentraut
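For illustration (not from the commit), a sketch using the new wrappers; the target table is whatever the caller names:
    CREATE FUNCTION insert_into(tabname text, val text) RETURNS void
    LANGUAGE plpythonu AS $$
    plpy.execute("INSERT INTO %s VALUES (%s)" % (
        plpy.quote_ident(tabname),        # wraps quote_ident()
        plpy.quote_nullable(val)))        # NULL-aware quote_literal()
    $$;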
This allows the language-specific try/catch construct to catch and
handle exceptions arising from SPI calls, matching the behavior of
other PLs.
As an additional bonus you no longer get all the ugly "unrecognized
error in PLy_spi_execute_query" errors.
Jan Urbański, reviewed by Steve Singer
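A minimal sketch (my illustration, not the commit's) of the now-possible pattern; target_table is hypothetical:
    CREATE FUNCTION try_insert(v integer) RETURNS boolean
    LANGUAGE plpythonu AS $$
    try:
        plpy.execute("INSERT INTO target_table VALUES (%d)" % v)
    except plpy.SPIError:
        return False   # the SPI error is caught instead of aborting the function
    return True
    $$;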
Use the built-in TypeError, not SPIError, for errors having to do with
argument counts or types. Use SPIError, not simply plpy.Error, for
errors in PLy_spi_execute_plan. Finally, do not set a Python
exception if PyArg_ParseTuple failed, as it already sets the correct
exception.
Jan Urbański
This reverts commit 740e54ca84, which seems
to have tickled an optimization bug in gcc 4.5.x, as reported upstream at
https://bugzilla.redhat.com/show_bug.cgi?id=671899
Since this patch had no purpose beyond code beautification, it's not
worth expending a lot of effort to look for another workaround.
It's not clear to me what should happen to the other plpython_unicode
variant expected files, but this patch gets things passing on my own
machines and at least some of the buildfarm.
Global error handling led to confusion and was hard to manage. With
this change, errors from PostgreSQL are immediately reported to Python
as exceptions. This requires setting a Python exception after
reporting the caught PostgreSQL error as a warning, because PLy_elog
destroys the Python exception state.
Ideally, all places where PostgreSQL errors need to be reported back
to Python should be wrapped in subtransactions, to make going back to
Python from a longjmp safe. This will be handled in a separate patch.
Jan Urbański
The way the exception types were added to the module was wrong for
Python 3. Exception classes were not actually available from plpy.
Fix that by factoring out code that is responsible for defining new
Python exceptions and make it work with Python 3. New regression test
makes sure the plpy module has the expected contents.
Jan Urbański, slightly revised by me
Pay attention to the attisdropped field and skip over TupleDesc fields
that have it set. Not a real problem until we get table returning
functions, but it's the right thing to do anyway.
Jan Urbański
If the function using yield to return rows fails halfway, the iterator
stays open and subsequent calls to the function will resume reading
from it. The fix is to unref the iterator and set it to NULL if there
has been an error.
Jan Urbański
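For context, a hedged sketch of the kind of generator-based SRF this fix affects:
    CREATE FUNCTION first_n(n integer) RETURNS SETOF integer
    LANGUAGE plpythonu AS $$
    for i in range(n):
        yield i     # an error partway through now discards the iterator
    $$;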
Two separate hash tables are used for regular procedures and for
trigger procedures, since the way trigger procedures work is quite
different from normal stored procedures. Change the signatures of
PLy_procedure_{get,create} to accept the function OID and a Boolean
flag indicating whether it's a trigger. This should make implementing
a PL/Python validator easier.
Using HTABs instead of Python dictionaries makes error recovery
easier, and allows for procedures to be cached based on their OIDs,
not their names. It also allows getting rid of the PyCObject field
that used to hold a pointer to PLyProcedure, since PyCObjects are
deprecated in Python 2.7 and replaced by Capsules in Python 3.
Jan Urbański
We must stay in the function's SPI context until done calling the iterator
that returns the set result. Otherwise, any attempt to invoke SPI features
in the python code called by the iterator will malfunction. Diagnosis and
patch by Jan Urbanski, per bug report from Jean-Baptiste Quenot.
Back-patch to 8.2; there was no support for SRFs in previous versions of
plpython.
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
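A hedged sketch (not from the patch) of the new trigger form, with made-up view and table names and a PL/Python trigger body:
    CREATE VIEW items_view AS SELECT id, name FROM items;
    CREATE FUNCTION items_view_insert() RETURNS trigger
    LANGUAGE plpythonu AS $$
    plan = plpy.prepare("INSERT INTO items (id, name) VALUES ($1, $2)",
                        ["integer", "text"])
    plpy.execute(plan, [TD["new"]["id"], TD["new"]["name"]])
    # returning None leaves the new row unchanged
    $$;
    CREATE TRIGGER items_view_insert_trg
        INSTEAD OF INSERT ON items_view
        FOR EACH ROW EXECUTE PROCEDURE items_view_insert();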
Various places were testing TRIGGER_FIRED_BEFORE() where what they really
meant was !TRIGGER_FIRED_AFTER(), or vice versa. This needs to be cleaned
up because there are about to be more than two possible states.
We might want to note this in the 9.1 release notes as something for
trigger authors to double-check.
For consistency's sake I also changed some places that assumed that
TRIGGER_FIRED_FOR_ROW and TRIGGER_FIRED_FOR_STATEMENT are necessarily
mutually exclusive; that's not in immediate danger of breaking, but
it's still sloppier than it should be.
Extracted from Dean Rasheed's patch for triggers on views. I'm committing
this separately since it's an identifiable separate issue, and is the
only reason for the patch to touch most of these particular files.
This is reproducibly possible in Python 2.7 if the user turned
PendingDeprecationWarning into an error, but it's theoretically also possible
in earlier versions in case of exceptional conditions.
Backpatched to 8.0.
(_PG_init should be called only once anyway, but as long as it's got an
internal guard against repeat calls, that should be in front of the
version check.)
memory if the result had zero rows, and also if there was any sort of error
while converting the result tuples into Python data. Reported and partially
fixed by Andres Freund.
Back-patch to all supported versions. Note: I haven't tested the 7.4 fix.
7.4's configure check for python is so obsolete it doesn't work on my
current machines :-(. The logic change is pretty straightforward though.
In PLy_spi_execute_plan, use the data-type specific Python-to-PostgreSQL
conversion function instead of passing everything through InputFunctionCall
as a string. The equivalent fix was already done months ago for function
parameters and return values, but this other gateway between Python and
PostgreSQL was apparently forgotten. As a result, data types that need
special treatment, such as bytea, would misbehave when used with
plpy.execute.
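For illustration only (the blobs table is invented), the call path this fix affects:
    CREATE FUNCTION store_blob(b bytea) RETURNS void
    LANGUAGE plpythonu AS $$
    plan = plpy.prepare("INSERT INTO blobs (data) VALUES ($1)", ["bytea"])
    # the parameter now goes through bytea's own conversion routine
    # rather than being squeezed through a C string
    plpy.execute(plan, [b])
    $$;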
old memory context in plpython. Before only one of them was marked
volatile, but per report from Zdenek Kotala, some compilers do the
wrong thing here.
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
Mimic the Python interpreter's own logic for printing exceptions instead
of just using the straight str() call, so that you get
    plpy.SPIError
instead of
    <class 'plpy.SPIError'>
and for built-in exceptions merely
    UnicodeEncodeError
Besides looking better, this cuts down on the endless version differences
in the regression test expected files.
Behaves more or less unchanged compared to Python 2, but the new language
variant is called plpython3u. Documentation describing the naming scheme
is included.
In PLy_output(), when the elog() call in the TRY branch throws an exception
(this can happen when a statement timeout kicks in, for example), the
PyErr_SetString() call in the CATCH branch can cause a segfault: the
preceding Py_XDECREF(so) call releases memory that is still used by the sv
variable passed to PyErr_SetString(), because sv points into memory owned
by so.
Backpatched back to 8.0, where this code was introduced.
I also threw in a couple of volatile declarations for variables that are used
before and after the TRY. I don't think they caused the crash that I
observed, but they could become issues.
Check calls of PyUnicode_AsEncodedString() for NULL return, probably
because the encoding name is not known. Add special treatment for
SQL_ASCII, which Python definitely does not know.
Since using SQL_ASCII produces errors in the regression tests when
non-ASCII characters are involved, we have to put back various regression
test result variants.
PL/Python now accepts Unicode objects where it previously only accepted string
objects (for example, as return value). Unicode objects are converted to the
PostgreSQL server encoding as necessary.
This change is also necessary for future Python 3 support, which treats all
strings as Unicode objects.
Since this removes the error conditions that the plpython_unicode test file
tested for, the alternative result files are no longer necessary.
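A minimal sketch (mine), assuming a server encoding that can represent the character:
    CREATE FUNCTION hello() RETURNS text
    LANGUAGE plpythonu AS $$
    # a Python 2 unicode object is now accepted and encoded into the
    # server encoding before being handed back to PostgreSQL
    return u'caf\u00e9'
    $$;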
Before, PL/Python converted data between SQL and Python by going
through a C string representation. This broke for bytea in two ways:
- On input (function parameters), you would get a Python string that
contains bytea's particular external representation with backslashes
etc., instead of a sequence of bytes, which is what you would expect
in a Python environment. This problem is exacerbated by the new
bytea output format.
- On output (function return value), null bytes in the Python string
would cause truncation before the data gets stored into a bytea
datum.
This is now fixed by converting directly between the PostgreSQL datum
and the Python representation.
The required generalized infrastructure also allows for other
improvements in passing:
- When returning a boolean value, the SQL datum is now true if and
only if Python considers the value that was passed out of the
PL/Python function to be true. Previously, this determination was
left to the boolean data type input function. So, now returning
'foo' results in true, because Python considers it true, rather than
false because PostgreSQL considers it false.
- On input, we can convert the integer and float types directly to
their Python equivalents without having to go through an
intermediate string representation.
original patch by Caleb Welton, with updates by myself
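Two hedged illustrations (mine, not the patch's) of the resulting behavior:
    CREATE FUNCTION reverse_bytes(b bytea) RETURNS bytea
    LANGUAGE plpythonu AS $$
    # b arrives as raw bytes, not as bytea's external text form;
    # embedded null bytes survive the round trip
    return b[::-1]
    $$;
    CREATE FUNCTION as_bool(x text) RETURNS boolean
    LANGUAGE plpythonu AS $$
    return x     # any non-empty string is true to Python, so 'foo' -> true
    $$;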
Extract the "while creating return value" and "while modifying trigger
row" parts of some error messages into another layer of error context.
This will simplify the upcoming patch to improve data type support, but
it can stand on its own.
Switch the implementation of the plan and result types to generic attribute
management, as described at <http://docs.python.org/extending/newtypes.html>.
This modernizes and simplifies the code a bit and prepares for Python 3.1,
where the old way doesn't work anymore.
This changes a bunch of incidentally used constructs in the PL/Python
regression tests to equivalent constructs in cases where Python 3 no longer
supports the old syntax. Support for older Python versions is unchanged.
Add some checks on how various data types are converted into and out of Python.
This is extracted from Caleb Welton's patch for improved bytea support,
but much expanded.
When examining what Python type to convert a PostgreSQL type to on input,
look at the base type of the input type, otherwise all domains end up
defaulting to string.
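For illustration (not from the commit):
    CREATE DOMAIN positive_int AS integer CHECK (VALUE > 0);
    CREATE FUNCTION double_it(x positive_int) RETURNS integer
    LANGUAGE plpythonu AS $$
    # x now arrives as a Python int (the domain's base type), not as a string
    return x * 2
    $$;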
of the previous monolithic setup-create-run sequence, that was apparently
inherited from a previous test infrastructure, but makes working with the
tests and adding new ones weird.
Error messages from PL/Python now always mention the function name in the
CONTEXT: field. This also obsoletes the few places that tried to do the
same manually.
Regression test files are updated to work with Python 2.4-2.6. I don't have
access to older versions right now.
by extending the ereport() API to cater for pluralization directly. This
is better than the original method of calling ngettext outside the elog.c
code because (1) it avoids double translation, which wastes cycles and in
the worst case could give a wrong result; and (2) it avoids having to use
a different coding method in PL code than in the core backend. The
client-side uses of ngettext are not touched since neither of these concerns
is very pressing in the client environment. Per my proposal of yesterday.
for its arguments. Also add a regression test, since someone apparently
changed every single plpython test case to use only named parameters; else
we'd have noticed this sooner.
Euler Taveira de Oliveira, per a report from Alvaro
In the backend, I changed only a handful of exemplary or important-looking
instances to make use of the plural support; there is probably more work
there. For the rest of the source, this should cover all relevant cases.
and heap_deformtuple in favor of the newer functions heap_form_tuple et al
(which do the same things but use bool control flags instead of arbitrary
char values). Eliminate the former duplicate coding of these functions,
reducing the deprecated functions to mere wrappers around the newer ones.
We can't get rid of them entirely because add-on modules probably still
contain many instances of the old coding style.
Kris Jurka
so long as all the trailing arguments are of the same (non-array) type.
The function receives them as a single array argument (which is why they
have to all be the same type).
It might be useful to extend this facility to aggregates, but this patch
doesn't do that.
This patch imposes a noticeable slowdown on function lookup --- a follow-on
patch will fix that by adding a redundant column to pg_proc.
Pavel Stehule
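A hedged sketch of the feature (not taken from the patch):
    CREATE FUNCTION concat_all(VARIADIC text[]) RETURNS text
    LANGUAGE sql AS $$ SELECT array_to_string($1, '') $$;
    SELECT concat_all('a', 'b', 'c');   -- the three arguments arrive as one text[]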
unnecessary #include lines in it. Also, move some tuple routine prototypes and
macros to htup.h, which allows removal of heapam.h inclusion from some .c
files.
For this to work, a new header file access/sysattr.h needed to be created,
initially containing attribute numbers of system columns, for pg_dump usage.
While at it, make contrib ltree, intarray and hstore header files more
consistent with our header style.
modules are built. Foremost, it creates a solid distinction between these two
types of targets based on what had already been implemented and duplicated in
ad hoc ways before. Specifically,
- Dynamically loadable modules no longer get a soname. The numbers previously
set in the makefiles were dummy numbers anyway, and the presence of a soname
upset a few packaging tools, so it is nicer not to have one.
- The cumbersome detour taken on installation (build a libfoo.so.0.0.0 and
then override the rule to install foo.so instead) is removed.
- Lots of duplicated code simplified.
strings. This patch introduces four support functions cstring_to_text,
cstring_to_text_with_len, text_to_cstring, and text_to_cstring_buffer, and
two macros CStringGetTextDatum and TextDatumGetCString. A number of
existing macros that provided variants on these themes were removed.
Most of the places that need to make such conversions now require just one
function or macro call, in place of the multiple notational layers that used
to be needed. There are no longer any direct calls of textout or textin,
and we got most of the places that were using handmade conversions via
memcpy (there may be a few still lurking, though).
This commit doesn't make any serious effort to eliminate transient memory
leaks caused by detoasting toasted text objects before they reach
text_to_cstring. We changed PG_GETARG_TEXT_P to PG_GETARG_TEXT_PP in a few
places where it was easy, but much more could be done.
Brendan Jurd and Tom Lane
a trigger's target table. The rowtype could change from one call to the
next, so cope in such cases, while avoiding doing repetitive catalog lookups.
Per bug #3847 from Mark Reid.
Backpatch to 8.2.x. Likely this fix should go further back, but I can't test
it because I no longer have a machine with a pre-2.5 Python installation.
(Maybe we should rethink that idea about not supporting Python 2.5 in the
older branches.)
from old versions of gcc. It's not clear to me that this is really
necessary for correctness, but fewer warnings are always good.
Per buildfarm results and local testing.
It removes the last remaining casts inside struct definitions.
Such usage is bad practice, as it hides problems from the compiler.
The reason for the casts is the popular practice in some circles
of defining functions as foo(MyObj *) instead of foo(PyObject *),
thus avoiding a local variable inside the function and making
direct calling easier. As PL/Python does not use that style,
the casts were unnecessary from the start.
Marko Kreen
keeping private state in each backend that has inserted and deleted the same
tuple during its current top-level transaction. This is sufficient since
there is no need to be able to determine the cmin/cmax from any other
transaction. This gets us back down to 23-byte headers, removing a penalty
paid in 8.0 to support subtransactions. Patch by Heikki Linnakangas, with
minor revisions by moi, following a design hashed out awhile back on the
pghackers list.
Standard English uses "may", "can", and "might" in different ways:
may - permission, "You may borrow my rake."
can - ability, "I can lift that log."
might - possibility, "It might rain today."
Unfortunately, in conversational English, their use is often mixed, as
in, "You may use this variable to do X", when in fact, "can" is a better
choice. Similarly, "It may crash" is better stated, "It might crash".
python 2.5. This involves fixing several violations of the published
spec for creating PyTypeObjects, and adding another regression test
expected output for yet another variation of error message spelling.
Fix all the standard PLs to be able to return tuples from FOO_RETURNING
statements as well as utility statements that return tuples. Also,
fix oversight that SPI_processed wasn't set for a utility statement
returning tuples. Per recent discussion.
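For illustration only (the items table with a serial id column is hypothetical):
    CREATE FUNCTION add_item(n text) RETURNS integer
    LANGUAGE plpythonu AS $$
    plan = plpy.prepare("INSERT INTO items (name) VALUES ($1) RETURNING id",
                        ["text"])
    rv = plpy.execute(plan, [n])
    return rv[0]["id"]     # the RETURNING tuple is now visible to the PL
    $$;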
loaded libraries: call functions _PG_init() and _PG_fini() if the library
defines such symbols. Hence we no longer need to specify an initialization
function in preload_libraries: we can assume that the library used the
_PG_init() convention, instead. This removes one source of pilot error
in use of preloaded libraries. Original patch by Ralf Engelschall,
preload_libraries changes by me.
pg_regress: there's no other way to cope with testing a relocated
installation. Seems better to call it --psqldir though, since the
only thing we need to find in that case is psql. It'd be better if
we could use find_other_exec, but that's not happening unless we are
willing to install pg_regress alongside psql, which seems unlikely
to happen.
This allows it to be used on Windows without installing mingw
(though you do still need 'diff'), and opens the door to future
improvements such as message localization.
Magnus Hagander and Tom Lane.
Studio 2005. Basically MS defined errcode in the headers with a typedef,
so we have to #define it out of the way.
While at it, fix a function declaration in plpython that didn't match
the implementation (volatile missing).
Magnus Hagander
After updating to the latest cvs, and also building most of the addons
(like PLs), the following patch is needed for win32 + Visual C++.
* Switch to use the new win32 semaphore code
* Rename win32_open to pgwin32_open. win32_open collides with symbols
defined in Perl. MinGW didn't detect it, MSVC did. And it's a bit too
generic a name to export globally, imho...
* Python defines some partially broken #pragmas in the headers when
doing a debug build. Workaround.
Magnus Hagander
by creating a reference-count mechanism, similar to what we did a long time
ago for catcache entries. The back branches have an ugly solution involving
lots of extra copies, but this way is more efficient. Reference counting is
only applied to tupdescs that are actually in caches --- there seems no need
to use it for tupdescs that are generated in the executor, since they'll go
away during plan shutdown by virtue of being in the per-query memory context.
Neil Conway and Tom Lane
1) named parameters additionally to args[]
2) return composite-types from plpython as dictionary
3) return result-set from plpython as list, iterator or generator
Hannu Krosing
Sven Suursoho
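Hedged sketches (mine, not the submitted patch's) of the three items:
    CREATE TYPE pair AS (name text, value integer);
    -- 1) and 2): parameters visible by name, composite returned as a dictionary
    CREATE FUNCTION make_pair(name text, value integer) RETURNS pair
    LANGUAGE plpythonu AS $$
    return {"name": name, "value": value}
    $$;
    -- 3): result set returned as a list (an iterator or generator also works)
    CREATE FUNCTION count_to(n integer) RETURNS SETOF integer
    LANGUAGE plpythonu AS $$
    return range(n)
    $$;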
functions are not strict, they will be called (passing a NULL first parameter)
during any attempt to input a NULL value of their datatype. Currently, all
our input functions are strict and so this commit does not change any
behavior. However, this will make it possible to build domain input functions
that centralize checking of domain constraints, thereby closing numerous holes
in our domain support, as per previous discussion.
While at it, I took the opportunity to introduce convenience functions
InputFunctionCall, OutputFunctionCall, etc to use in code that calls I/O
functions. This eliminates a lot of grotty-looking casts, but the main
motivation is to make it easier to grep for these places if we ever need
to touch them again.
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
more compliant with the error message style guide. In particular,
errdetail should begin with a capital letter and end with a period,
whereas errmsg should not. I also fixed a few related issues in
passing, such as fixing the repeated misspelling of "lexeme" in
contrib/tsearch2 (per Tom's suggestion).
(I didn't use his patch, however). A void-returning PL/Python function
must return None (from Python), which is translated into a void datum
(and *not* NULL) for Postgres. I also added some regression tests for
this functionality.
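A minimal sketch (mine) of the required convention:
    CREATE FUNCTION do_nothing() RETURNS void
    LANGUAGE plpythonu AS $$
    return None       # the only value a void function may return
    $$;
    SELECT do_nothing();   -- yields a void datum, not NULL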
in leaking memory when invoking a PL/Python procedure that raises an
exception. Unfortunately this still leaks memory, but at least the
largest leak has been plugged.
This patch also fixes a reference counting mistake in PLy_modify_tuple()
for 8.0, 8.1 and HEAD: we don't actually own a reference to `platt', so
we shouldn't Py_DECREF() it.
one argument at a time and then inserting the argument into a Python
list via PyList_SetItem(). This "steals" the reference to the argument:
that is, the reference to the new list member is now held by the Python
list itself. This works fine, except if an elog occurs. This causes the
function's PG_CATCH() block to be invoked, which decrements the
reference counts on both the current argument and the list of arguments.
If the elog happens to occur during the second or subsequent iteration
of the loop, the reference count on the current argument will be
decremented twice.
The fix is simple: set the local pointer to the current argument to NULL
immediately after adding it to the argument list. This ensures that the
Py_XDECREF() in the PG_CATCH() block doesn't double-decrement.
- use "bool" rather than "int" for boolean variables
- use "PLy_malloc" rather than "malloc" in two places
- define "PLy_strdup", and use it rather than malloc() + strcpy() in
two places (which should have been memcpy(), anyway).
- remove a bunch of redundant parentheses from expressions that do not
need the parentheses for code clarity
when a plpython function returns unicode" thread:
http://archives.postgresql.org/pgsql-bugs/2005-06/msg00105.php
In several places PL/Python was calling PyObject_Str() and then
PyString_AsString() without checking if the former had returned
NULL to indicate an error. PyString_AsString() doesn't expect a
NULL argument, so passing one causes a segmentation fault. This
patch adds checks for NULL and raises errors via PLy_elog(), which
prints details of the underlying Python exception. The patch also
adds regression tests for these checks. All tests pass on my
Solaris 9 box running HEAD and Python 2.4.1.
In one place the patch doesn't call PLy_elog() because that could
cause infinite recursion; see the comment I added. I'm not sure
how to test that particular case or whether it's even possible to
get an error there: the value that the code should check is the
Python exception type, so I wonder if a NULL value "shouldn't
happen." This patch converts NULL to "Unknown Exception" but I
wonder if an Assert() would be appropriate.
The patch is against HEAD but the same changes should be applied
to earlier versions because they have the same problem. The patch
might not apply cleanly against earlier versions -- will the committer
take care of little differences or should I submit different versions
of the patch?
Michael Fuhr
---------------------------------------------------------------------------
This patch allows the PL/Python module to do set-returning (SRF) functions.
The patch was taken from the CVS version.
I have modified the plpython.c file and have added a test sql script for
testing the functionality. It was actually the script that was in the
8.0.3 version but has since been removed.
In order to signal the end of a set, the called python function must
simply return plpy.EndOfSet and the set will be returned.
Gerrit van Dyk
*fail*, to test that plpython didn't allow untrusted operations.
When we changed plpython to plpythonu because python didn't actually have
a secure sandbox mode, someone (probably me :-() misinterpreted the tests
as checking whether Python's file I/O works. Which is a stupid thing for
us to be testing. Remove it so we don't clutter the filesystem with
random temporary files.
for testing PLs and contrib_regression for testing contrib, instead of
overwriting the core system's regression database as formerly done.
Andrew Dunstan
which is neither needed by nor related to that header. Remove the bogus
inclusion and instead include the header in those C files that actually
need it. Also fix unnecessary inclusions and bad inclusion order in
tsearch2 files.
to produce when running the executor. This is consistent with the internal
executor APIs (such as ExecutorRun), which also use a long for this purpose.
It also allows FETCH_ALL to be passed -- since FETCH_ALL is defined as
LONG_MAX, this wouldn't have worked on platforms where int and long are of
different sizes. Per report from Tzahi Fadida.
change saves a great deal of space in pg_proc and its primary index,
and it eliminates the former requirement that INDEX_MAX_KEYS and
FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded
in the on-disk representation (because it affects index tuple header
size), but FUNC_MAX_ARGS is not. I believe it would now be possible
to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet.
There are still a lot of vestigial references to FUNC_MAX_ARGS, which
I will clean up in a separate pass. However, getting rid of it
altogether would require changing the FunctionCallInfoData struct,
and I'm not sure I want to buy into that.
for the languages even when not installed in a standard directory.
pltcl may need this treatment as well, but we don't have the right path
conveniently available, so I'll leave it alone as long as there aren't
actual reports of trouble.
-L spec rather than assuming libpython is in the standard search path
(this returns to the way 7.4 did it). But check the distutils output
to see if it looks like Python has built a shared library, and if so
link with that instead of the probably-not-shared library found in
configdir.
mode see a fresh snapshot for each command in the function, rather than
using the latest interactive command's snapshot. Also, suppress fresh
snapshots as well as CommandCounterIncrement inside STABLE and IMMUTABLE
functions, instead using the snapshot taken for the most closely nested
regular query. (This behavior is only sane for read-only functions, so
the patch also enforces that such functions contain only SELECT commands.)
As per my proposal of 6-Sep-2004; I note that I floated essentially the
same proposal on 19-Jun-2002, but that discussion tailed off without any
action. Since 8.0 seems like the right place to be taking possibly
nontrivial backwards compatibility hits, let's get it done now.
Create a shared function to convert a SPI error code into a string
(replacing near-duplicate code in several PLs), and use it anywhere
that a SPI function call error is reported.
possible to trap an error inside a function rather than letting it
propagate out to PostgresMain. You still have to use AbortCurrentTransaction
to clean up, but at least the error handling itself will cooperate.
of a composite type to get that type's OID as their second parameter,
in place of typelem which is useless. The actual changes are mostly
centralized in getTypeInputInfo and siblings, but I had to fix a few
places that were fetching pg_type.typelem for themselves instead of
using the lsyscache.c routines. Also, I renamed all the related variables
from 'typelem' to 'typioparam' to discourage people from assuming that
they necessarily contain array element types.
conversion of basic ASCII letters. Remove all uses of strcasecmp and
strncasecmp in favor of new functions pg_strcasecmp and pg_strncasecmp;
remove most but not all direct uses of toupper and tolower in favor of
pg_toupper and pg_tolower. These functions use the same notions of
case folding already developed for identifier case conversion. I left
the straight locale-based folding in place for situations where we are
just manipulating user data and not trying to match it to built-in
strings --- for example, the SQL upper() function is still locale
dependent. Perhaps this will prove not to be what's wanted, but at
the moment we can initdb and pass regression tests in Turkish locale.
results with tuples as ordinary varlena Datums. This commit does not
in itself do much for us, except eliminate the horrid memory leak
associated with evaluation of whole-row variables. However, it lays the
groundwork for allowing composite types as table columns, and perhaps
some other useful features as well. Per my proposal of a few days ago.
pointer type when it is not necessary to do so.
For future reference, casting NULL to a pointer type is only necessary
when (a) invoking a function AND either (b) the function has no prototype
OR (c) the function is a varargs function.
parameters to be declared with names. pg_proc has a column to store
names, and CREATE FUNCTION can insert data into it, but that's all as
yet. I need to do more work on the pg_dump and plpgsql portions of the
patch before committing those, but I thought I'd get the bulky changes
in before the tree drifts under me.
initdb forced due to pg_proc change.
relation, when the same function is used as a trigger on more than
one relation. This avoids crashes due to differing rowtypes for
different relations. Per bug report from Lance Thomas, 7-Feb-03.
> rexec and making it an untrusted language. Last time I looked, it didn't
> look particularly difficult. I've set aside some time next week, so stay
> tuned.
Attached is a patch that removes all of the RExec code from plpython from
the current PostgreSQL CVS. In addition, plpython needs to be changed to an
untrusted language in createlang. Please let me know if there are any
problems.
Kevin Jacobs
include output that varies depending on the python build one is
running. Basically, the order of keys in a dictionary is
non-deterministic, and that part of the test fails for me regularly.
I rewrote the test to work around this problem, and include a patch
file with that change and the change to the expected output as well.
Mike Meyer
SPI_prepare: they all save the prepared plan into topCxt, and so the
procCxt copy that's actually returned by SPI_prepare ought to be freed.
Diagnosis and plpython fix by Nigel Andrews, followup for other PLs
by Tom Lane.
1) pltcl:
Add SPI_freetuptable() calls to avoid memory leaks (Me + Neil Conway)
Change sprintf()s to snprintf()s (Neil Conway)
Remove header files included elsewhere (Neil Conway)
2)plpython:
Add SPI_freetuptable() calls to avoid memory leaks
Cosmetic change to remove a compiler warning
Notes:
I have tested pltcl.c for
a) the original leak problem reported for the repeated call of spi_exec
   in a TCL fragment, and
b) the subsequent report resulting from the use of spi_exec -array in a
   TCL fragment.
The plpython.c patch is exactly the same as that applied to make
revision 1.23; the plpython_schema.sql and feature.expected sections of
the patch are also the same as last submitted, applied and subsequently
reversed out. It remains untested by me (other than via make check).
However, this should be safe provided PyString_FromString() _copies_ the
given string to make a PyObject.
Nigel J. Andrews
let's say this patch supersedes the previous one.
I have also attached a patch addressing the similar memory leak problem in
plpython. This includes a slight adjustment of the tests in the source
directory. The patch also includes a cosmetic change to remove a compiler
warning, although I think it makes the code look worse.
BTW, by my reckoning the memory leak would occur with prepared plans and
without. If that is not the case then I've been barking up the wrong tree.
Nigel J. Andrews
Eliminate the mysterious games that the Cygwin build plays with the linker
flag variables. DLLLIBS is gone, use SHLIB_LINK like everyone else.
Detect cygipc in configure, after the linker flags are set up, otherwise
configure might not work at all.
Make sure everything is covered by make clean.
Fix the build of the new conversion procedure modules.
Add new DLLIMPORT markers where required.
Finally, the compiler complains if we use an explicit
-I/usr/local/include, so don't do that. Curiously, -L/usr/local/lib is
still necessary.
with OPAQUE, as per recent pghackers discussion. I still want to do some
more work on the 'cstring' pseudo-type, but I'm going to commit the bulk
of the changes now before the tree starts shifting under me ...
bitmap, if present).
Per Tom Lane's suggestion the information whether a tuple has an oid
or not is carried in the tuple descriptor. For debugging reasons
tdhasoid is of type char, not bool. There are predefined values for
WITHOID, WITHOUTOID and UNDEFOID.
This patch has been generated against a cvs snapshot from last week
and I don't expect it to apply cleanly to current sources. While I
post it here for public review, I'm working on a new version against a
current snapshot. (There's been heavy activity recently; hope to
catch up some day ...)
This is a long patch; if it is too hard to swallow, I can provide it
in smaller pieces:
Part 1: Accessor macros
Part 2: tdhasoid in TupDesc
Part 3: Regression test
Part 4: Parameter withoid to heap_addheader
Part 5: Eliminate t_oid from HeapTupleHeader
Part 2 is the most hairy part because of changes in the executor and
even in the parser; the other parts are straightforward.
Up to part 4 the patched postmaster stays binary compatible to
databases created with an unpatched version. Part 5 is small (100
lines) and finally breaks compatibility.
Manfred Koizar
HeapTupleHeaderData in setter and getter macros called
HeapTupleHeaderGetXmin, HeapTupleHeaderSetXmin etc.
It also introduces a "virtual" field xvac by defining
HeapTupleHeaderGetXvac and HeapTupleHeaderSetXvac. Xvac is used by
VACUUM, in fact it is stored in t_cmin.
Manfred Koizar
o Change all current CVS messages of NOTICE to WARNING. We were going
to do this just before 7.3 beta but it has to be done now, as you will
see below.
o Change current INFO messages that should be controlled by
client_min_messages to NOTICE.
o Force remaining INFO messages, like from EXPLAIN, VACUUM VERBOSE, etc.
to always go to the client.
o Remove INFO from the client_min_messages options and add NOTICE.
Seems we do need three non-ERROR elog levels to handle the various
behaviors we need for these messages.
Regression passed.
now just below FATAL in server_min_messages. Added more text to
highlight ordering difference between it and client_min_messages.
---------------------------------------------------------------------------
REALLYFATAL => PANIC
STOP => PANIC
New INFO level that prints to the client by default
New LOG level that prints to the server log by default
Cause VACUUM information to print only to the client
NOTICE => INFO where purely information messages are sent
DEBUG => LOG for purely server status messages
DEBUG removed, kept for backward compatibility
DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added
DebugLvl removed in favor of new DEBUG[1-5] symbols
New server_min_messages GUC parameter with values:
DEBUG[5-1], INFO, NOTICE, ERROR, LOG, FATAL, PANIC
New client_min_messages GUC parameter with values:
DEBUG[5-1], LOG, INFO, NOTICE, ERROR, FATAL, PANIC
Server startup now logged with LOG instead of DEBUG
Remove debug_level GUC parameter
elog() numbers now start at 10
Add test to print error message if older elog() values are passed to elog()
Bootstrap mode now has a -d that requires an argument, like postmaster
lookup info in the relcache for index access method support functions.
This makes a huge difference for dynamically loaded support functions,
and should save a few cycles even for built-in ones. Also tweak dfmgr.c
so that load_external_function is called only once, not twice, when
doing fmgr_info for a dynamically loaded function. All per performance
gripe from Teodor Sigaev, 5-Oct-01.
under libdir, for a cleaner separation in the installation layout
and compatibility with binary packaging standards. Point backend's
default search location there. The contrib modules are also
installed in the said location, giving them the benefit of the
default search path as well. No changes in user interface
nevertheless.
choice of compiler and flags, uninstall, and peculiar Python installation
layouts for PyGreSQL. Also install into site-packages now, as officially
recommended. And pgdb.py is also installed now, used to be forgotten.
in plpgsql: they fail for datatypes that have old-style I/O functions
due to caching FmgrInfo structs with wrong fn_mcxt lifetime.
Although the plpython fix seems straightforward, I can't check it here
since I don't have Python installed --- would someone check it?
a PostgreSQL user-defined function. The Metaphone system is a method of
matching similar sounding names (or any words) to the same code.
Metaphone was invented by Lawrence Philips as an improvement to the popular
name-hashing routine, Soundex.
This metaphone code is from Michael Kuhn, and is detailed at
http://aspell.sourceforge.net/metaphone/metaphone-kuhn.txt
Joel Burton