/*
 * the plpy module
 *
 * src/pl/plpython/plpy_plpymodule.c
 */

#include "postgres.h"

#include "access/xact.h"
#include "mb/pg_wchar.h"
#include "utils/builtins.h"
#include "utils/snapmgr.h"

#include "plpython.h"

#include "plpy_plpymodule.h"

#include "plpy_cursorobject.h"
#include "plpy_elog.h"
#include "plpy_main.h"
#include "plpy_planobject.h"
#include "plpy_resultobject.h"
#include "plpy_spi.h"
#include "plpy_subxactobject.h"

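/* map from SQLSTATE code to Python exception object; filled in PLy_generate_spi_exceptions() */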
HTAB *PLy_spi_exceptions = NULL;

static void PLy_add_exceptions(PyObject *plpy);
static PyObject *PLy_create_exception(char *name,
                                      PyObject *base, PyObject *dict,
                                      const char *modname, PyObject *mod);
static void PLy_generate_spi_exceptions(PyObject *mod, PyObject *base);

/* module functions */
static PyObject *PLy_debug(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_log(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_info(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_notice(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_warning(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_error(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_fatal(PyObject *self, PyObject *args, PyObject *kw);
static PyObject *PLy_quote_literal(PyObject *self, PyObject *args);
static PyObject *PLy_quote_nullable(PyObject *self, PyObject *args);
static PyObject *PLy_quote_ident(PyObject *self, PyObject *args);
static PyObject *PLy_commit(PyObject *self, PyObject *args);
static PyObject *PLy_rollback(PyObject *self, PyObject *args);

/* A list of all known exceptions, generated from backend/utils/errcodes.txt */
typedef struct ExceptionMap
{
    char       *name;
    char       *classname;
    int         sqlstate;
} ExceptionMap;

static const ExceptionMap exception_map[] = {
#include "spiexceptions.h"
    {NULL, NULL, 0}
};
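
/* function table exported as the contents of the plpy module */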
static PyMethodDef PLy_methods[] = {
    /*
     * logging methods
     */
    {"debug", (PyCFunction) PLy_debug, METH_VARARGS | METH_KEYWORDS, NULL},
    {"log", (PyCFunction) PLy_log, METH_VARARGS | METH_KEYWORDS, NULL},
    {"info", (PyCFunction) PLy_info, METH_VARARGS | METH_KEYWORDS, NULL},
    {"notice", (PyCFunction) PLy_notice, METH_VARARGS | METH_KEYWORDS, NULL},
    {"warning", (PyCFunction) PLy_warning, METH_VARARGS | METH_KEYWORDS, NULL},
    {"error", (PyCFunction) PLy_error, METH_VARARGS | METH_KEYWORDS, NULL},
    {"fatal", (PyCFunction) PLy_fatal, METH_VARARGS | METH_KEYWORDS, NULL},

    /*
     * create a stored plan
     */
    {"prepare", PLy_spi_prepare, METH_VARARGS, NULL},

    /*
     * execute a plan or query
     */
    {"execute", PLy_spi_execute, METH_VARARGS, NULL},

    /*
     * escaping strings
     */
    {"quote_literal", PLy_quote_literal, METH_VARARGS, NULL},
    {"quote_nullable", PLy_quote_nullable, METH_VARARGS, NULL},
    {"quote_ident", PLy_quote_ident, METH_VARARGS, NULL},

    /*
     * create the subtransaction context manager
     */
    {"subtransaction", PLy_subtransaction_new, METH_NOARGS, NULL},

    /*
     * create a cursor
     */
    {"cursor", PLy_cursor, METH_VARARGS, NULL},

    /*
     * transaction control
     */
    {"commit", PLy_commit, METH_NOARGS, NULL},
    {"rollback", PLy_rollback, METH_NOARGS, NULL},

    {NULL, NULL, 0, NULL}
};
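
/* the spiexceptions module exports no functions of its own, only exception objects */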
static PyMethodDef PLy_exc_methods[] = {
    {NULL, NULL, 0, NULL}
};

#if PY_MAJOR_VERSION >= 3
static PyModuleDef PLy_module = {
    PyModuleDef_HEAD_INIT,
    .m_name = "plpy",
    .m_size = -1,
    .m_methods = PLy_methods,
};

static PyModuleDef PLy_exc_module = {
    PyModuleDef_HEAD_INIT,
    .m_name = "spiexceptions",
    .m_size = -1,
    .m_methods = PLy_exc_methods,
};

/*
 * Must have external linkage, because PyMODINIT_FUNC does dllexport on
 * Windows-like platforms.
 */
PyMODINIT_FUNC
PyInit_plpy(void)
{
    PyObject   *m;

    m = PyModule_Create(&PLy_module);
    if (m == NULL)
        return NULL;

    PLy_add_exceptions(m);

    return m;
}
#endif                          /* PY_MAJOR_VERSION >= 3 */
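
/*
 * Set up the plpy module: initialize its types, create the module object,
 * and make it importable by adding it to __main__'s namespace.
 */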
void
PLy_init_plpy(void)
{
    PyObject   *main_mod,
               *main_dict,
               *plpy_mod;

#if PY_MAJOR_VERSION < 3
    PyObject   *plpy;
#endif

    /*
     * initialize plpy module
     */
    PLy_plan_init_type();
    PLy_result_init_type();
    PLy_subtransaction_init_type();
    PLy_cursor_init_type();

#if PY_MAJOR_VERSION >= 3
    PyModule_Create(&PLy_module);
    /* for Python 3 we initialized the exceptions in PyInit_plpy */
#else
    plpy = Py_InitModule("plpy", PLy_methods);
    PLy_add_exceptions(plpy);
#endif

    /* PyDict_SetItemString(plpy, "PlanType", (PyObject *) &PLy_PlanType); */

    /*
     * initialize main module, and add plpy
     */
    main_mod = PyImport_AddModule("__main__");
    main_dict = PyModule_GetDict(main_mod);
    plpy_mod = PyImport_AddModule("plpy");
    if (plpy_mod == NULL)
        PLy_elog(ERROR, "could not import \"plpy\" module");
    PyDict_SetItemString(main_dict, "plpy", plpy_mod);
    if (PyErr_Occurred())
        PLy_elog(ERROR, "could not import \"plpy\" module");
}
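
/*
 * Create the plpy.spiexceptions module, the base exceptions Error, Fatal
 * and SPIError, and the hash table used to look up SPI exceptions by
 * SQLSTATE.
 */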
static void
PLy_add_exceptions(PyObject *plpy)
{
    PyObject   *excmod;
    HASHCTL     hash_ctl;

#if PY_MAJOR_VERSION < 3
    excmod = Py_InitModule("spiexceptions", PLy_exc_methods);
#else
    excmod = PyModule_Create(&PLy_exc_module);
#endif
    if (excmod == NULL)
        PLy_elog(ERROR, "could not create the spiexceptions module");

    /*
     * PyModule_AddObject does not add a refcount to the object, for some odd
     * reason; we must do that.
     */
    Py_INCREF(excmod);
    if (PyModule_AddObject(plpy, "spiexceptions", excmod) < 0)
        PLy_elog(ERROR, "could not add the spiexceptions module");

    PLy_exc_error = PLy_create_exception("plpy.Error", NULL, NULL,
                                         "Error", plpy);
    PLy_exc_fatal = PLy_create_exception("plpy.Fatal", NULL, NULL,
                                         "Fatal", plpy);
    PLy_exc_spi_error = PLy_create_exception("plpy.SPIError", NULL, NULL,
                                             "SPIError", plpy);

    memset(&hash_ctl, 0, sizeof(hash_ctl));
    hash_ctl.keysize = sizeof(int);
    hash_ctl.entrysize = sizeof(PLyExceptionEntry);
    PLy_spi_exceptions = hash_create("PL/Python SPI exceptions", 256,
                                     &hash_ctl, HASH_ELEM | HASH_BLOBS);

    PLy_generate_spi_exceptions(excmod, PLy_exc_spi_error);
}

/*
 * Create an exception object and add it to the module
 */
static PyObject *
PLy_create_exception(char *name, PyObject *base, PyObject *dict,
                     const char *modname, PyObject *mod)
{
    PyObject   *exc;

    exc = PyErr_NewException(name, base, dict);
    if (exc == NULL)
        PLy_elog(ERROR, NULL);

    /*
     * PyModule_AddObject does not add a refcount to the object, for some odd
     * reason; we must do that.
     */
    Py_INCREF(exc);
    PyModule_AddObject(mod, modname, exc);

    /*
     * The caller will also store a pointer to the exception object in some
     * permanent variable, so add another ref to account for that. This is
     * probably excessively paranoid, but let's be sure.
     */
    Py_INCREF(exc);
    return exc;
}

/*
 * Add all the autogenerated exceptions as subclasses of SPIError
 */
static void
PLy_generate_spi_exceptions(PyObject *mod, PyObject *base)
{
    int         i;

    for (i = 0; exception_map[i].name != NULL; i++)
    {
        bool        found;
        PyObject   *exc;
        PLyExceptionEntry *entry;
        PyObject   *sqlstate;
        PyObject   *dict = PyDict_New();

        if (dict == NULL)
            PLy_elog(ERROR, NULL);

        sqlstate = PyString_FromString(unpack_sql_state(exception_map[i].sqlstate));
        if (sqlstate == NULL)
            PLy_elog(ERROR, "could not generate SPI exceptions");

        PyDict_SetItemString(dict, "sqlstate", sqlstate);
        Py_DECREF(sqlstate);

        exc = PLy_create_exception(exception_map[i].name, base, dict,
                                   exception_map[i].classname, mod);

        entry = hash_search(PLy_spi_exceptions, &exception_map[i].sqlstate,
                            HASH_ENTER, &found);
        Assert(!found);
        entry->exc = exc;
    }
}

/*
 * the python interface to the elog function
 * don't confuse these with PLy_elog
 */
static PyObject *PLy_output(volatile int level, PyObject *self,
                            PyObject *args, PyObject *kw);

static PyObject *
PLy_debug(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(DEBUG2, self, args, kw);
}

static PyObject *
PLy_log(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(LOG, self, args, kw);
}

static PyObject *
PLy_info(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(INFO, self, args, kw);
}

static PyObject *
PLy_notice(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(NOTICE, self, args, kw);
}

static PyObject *
PLy_warning(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(WARNING, self, args, kw);
}

static PyObject *
PLy_error(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(ERROR, self, args, kw);
}

static PyObject *
PLy_fatal(PyObject *self, PyObject *args, PyObject *kw)
{
    return PLy_output(FATAL, self, args, kw);
}

static PyObject *
PLy_quote_literal(PyObject *self, PyObject *args)
{
    const char *str;
    char       *quoted;
    PyObject   *ret;

    if (!PyArg_ParseTuple(args, "s:quote_literal", &str))
        return NULL;

    quoted = quote_literal_cstr(str);
    ret = PyString_FromString(quoted);
    pfree(quoted);

    return ret;
}

static PyObject *
PLy_quote_nullable(PyObject *self, PyObject *args)
{
    const char *str;
    char       *quoted;
    PyObject   *ret;

    if (!PyArg_ParseTuple(args, "z:quote_nullable", &str))
        return NULL;

    if (str == NULL)
        return PyString_FromString("NULL");

    quoted = quote_literal_cstr(str);
    ret = PyString_FromString(quoted);
    pfree(quoted);

    return ret;
}

static PyObject *
PLy_quote_ident(PyObject *self, PyObject *args)
{
    const char *str;
    const char *quoted;
    PyObject   *ret;

    if (!PyArg_ParseTuple(args, "s:quote_ident", &str))
        return NULL;

    quoted = quote_identifier(str);
    ret = PyString_FromString(quoted);

    return ret;
}

/* enforce cast of object to string */
static char *
object_to_string(PyObject *obj)
{
    if (obj)
    {
        PyObject   *so = PyObject_Str(obj);

        if (so != NULL)
        {
            char       *str;

            str = pstrdup(PyString_AsString(so));
            Py_DECREF(so);

            return str;
        }
    }

    return NULL;
}
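
/*
 * Workhorse for plpy.debug() through plpy.fatal(): build the message from
 * the positional arguments, pick up the optional keyword arguments (detail,
 * hint, sqlstate, schema_name, table_name, column_name, datatype_name,
 * constraint_name), and report via ereport() at the requested level.
 */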
|
|
|
|
|
|
2011-12-18 14:14:16 -05:00
|
|
|
static PyObject *
|
2016-04-08 11:30:25 -04:00
|
|
|
PLy_output(volatile int level, PyObject *self, PyObject *args, PyObject *kw)
|
2011-12-18 14:14:16 -05:00
|
|
|
{
|
2016-06-09 18:02:36 -04:00
|
|
|
int sqlstate = 0;
|
|
|
|
|
char *volatile sqlstatestr = NULL;
|
|
|
|
|
char *volatile message = NULL;
|
|
|
|
|
char *volatile detail = NULL;
|
|
|
|
|
char *volatile hint = NULL;
|
2016-06-11 19:27:49 -04:00
|
|
|
char *volatile column_name = NULL;
|
|
|
|
|
char *volatile constraint_name = NULL;
|
|
|
|
|
char *volatile datatype_name = NULL;
|
|
|
|
|
char *volatile table_name = NULL;
|
|
|
|
|
char *volatile schema_name = NULL;
|
2016-04-11 11:49:48 -04:00
|
|
|
volatile MemoryContext oldcontext;
|
2016-06-09 18:02:36 -04:00
|
|
|
PyObject *key,
|
|
|
|
|
*value;
|
|
|
|
|
PyObject *volatile so;
|
|
|
|
|
Py_ssize_t pos = 0;
|
2011-12-18 14:14:16 -05:00
|
|
|
|
|
|
|
|
if (PyTuple_Size(args) == 1)
|
|
|
|
|
{
|
|
|
|
|
/*
|
|
|
|
|
* Treat single argument specially to avoid undesirable ('tuple',)
|
|
|
|
|
* decoration.
|
|
|
|
|
*/
|
|
|
|
|
PyObject *o;
|
|
|
|
|
|
2012-03-13 15:26:32 -04:00
|
|
|
if (!PyArg_UnpackTuple(args, "plpy.elog", 1, 1, &o))
|
|
|
|
|
PLy_elog(ERROR, "could not unpack arguments in plpy.elog");
|
2011-12-18 14:14:16 -05:00
|
|
|
so = PyObject_Str(o);
|
|
|
|
|
}
|
|
|
|
|
else
|
|
|
|
|
so = PyObject_Str(args);
|
2016-04-08 11:30:25 -04:00
|
|
|
|
2016-04-11 00:28:44 -04:00
|
|
|
    if (so == NULL || ((message = PyString_AsString(so)) == NULL))
    {
        level = ERROR;
        message = dgettext(TEXTDOMAIN, "could not parse error message in plpy.elog");
    }
    message = pstrdup(message);

    Py_XDECREF(so);

    if (kw != NULL)
    {
        while (PyDict_Next(kw, &pos, &key, &value))
        {
            char       *keyword = PyString_AsString(key);

            if (strcmp(keyword, "message") == 0)
            {
                /* the message should not be overwritten */
                if (PyTuple_Size(args) != 0)
                {
                    PLy_exception_set(PyExc_TypeError, "argument 'message' given by name and position");
                    return NULL;
                }

                if (message)
                    pfree(message);
                message = object_to_string(value);
            }
            else if (strcmp(keyword, "detail") == 0)
                detail = object_to_string(value);
            else if (strcmp(keyword, "hint") == 0)
                hint = object_to_string(value);
            else if (strcmp(keyword, "sqlstate") == 0)
                sqlstatestr = object_to_string(value);
            else if (strcmp(keyword, "schema_name") == 0)
                schema_name = object_to_string(value);
            else if (strcmp(keyword, "table_name") == 0)
                table_name = object_to_string(value);
            else if (strcmp(keyword, "column_name") == 0)
                column_name = object_to_string(value);
            else if (strcmp(keyword, "datatype_name") == 0)
                datatype_name = object_to_string(value);
            else if (strcmp(keyword, "constraint_name") == 0)
                constraint_name = object_to_string(value);
            else
            {
                PLy_exception_set(PyExc_TypeError,
                                  "'%s' is an invalid keyword argument for this function",
                                  keyword);
                return NULL;
            }
        }
    }

    if (sqlstatestr != NULL)
    {
        if (strlen(sqlstatestr) != 5)
        {
            PLy_exception_set(PyExc_ValueError, "invalid SQLSTATE code");
            return NULL;
        }

        if (strspn(sqlstatestr, "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ") != 5)
        {
            PLy_exception_set(PyExc_ValueError, "invalid SQLSTATE code");
            return NULL;
        }

        sqlstate = MAKE_SQLSTATE(sqlstatestr[0],
                                 sqlstatestr[1],
                                 sqlstatestr[2],
                                 sqlstatestr[3],
                                 sqlstatestr[4]);
    }
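
    /*
     * Worked example (values are illustrative, not from this module): a
     * five-character code such as "22012" passes both checks above, and
     * MAKE_SQLSTATE('2', '2', '0', '1', '2') packs it into the same integer
     * the server uses for ERRCODE_DIVISION_BY_ZERO, so errcode(sqlstate)
     * below reports it like any built-in error code.
     */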

    oldcontext = CurrentMemoryContext;
    PG_TRY();
    {
        if (message != NULL)
            pg_verifymbstr(message, strlen(message), false);
        if (detail != NULL)
            pg_verifymbstr(detail, strlen(detail), false);
        if (hint != NULL)
            pg_verifymbstr(hint, strlen(hint), false);
        if (schema_name != NULL)
            pg_verifymbstr(schema_name, strlen(schema_name), false);
        if (table_name != NULL)
            pg_verifymbstr(table_name, strlen(table_name), false);
        if (column_name != NULL)
            pg_verifymbstr(column_name, strlen(column_name), false);
        if (datatype_name != NULL)
            pg_verifymbstr(datatype_name, strlen(datatype_name), false);
        if (constraint_name != NULL)
            pg_verifymbstr(constraint_name, strlen(constraint_name), false);

        ereport(level,
                ((sqlstate != 0) ? errcode(sqlstate) : 0,
                 (message != NULL) ? errmsg_internal("%s", message) : 0,
                 (detail != NULL) ? errdetail_internal("%s", detail) : 0,
                 (hint != NULL) ? errhint("%s", hint) : 0,
                 (column_name != NULL) ?
                 err_generic_string(PG_DIAG_COLUMN_NAME, column_name) : 0,
                 (constraint_name != NULL) ?
                 err_generic_string(PG_DIAG_CONSTRAINT_NAME, constraint_name) : 0,
                 (datatype_name != NULL) ?
                 err_generic_string(PG_DIAG_DATATYPE_NAME, datatype_name) : 0,
                 (table_name != NULL) ?
                 err_generic_string(PG_DIAG_TABLE_NAME, table_name) : 0,
                 (schema_name != NULL) ?
                 err_generic_string(PG_DIAG_SCHEMA_NAME, schema_name) : 0));
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();
        FlushErrorState();

        PLy_exception_set_with_details(PLy_exc_error, edata);
        FreeErrorData(edata);

        return NULL;
    }
    PG_END_TRY();

    /*
     * return a legal object so the interpreter will continue on its merry way
     */
    Py_RETURN_NONE;
}
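
/*
 * Rough usage sketch of the keyword handling above, assuming the plpy.error()
 * entry point (argument values are illustrative only):
 *
 *     plpy.error("duplicate key",
 *                detail="id 42 already exists",
 *                hint="choose another id",
 *                sqlstate="23505",
 *                table_name="accounts",
 *                constraint_name="accounts_pkey")
 *
 * Each keyword is copied into the corresponding ereport field; an unknown
 * keyword raises TypeError and a malformed sqlstate raises ValueError before
 * ereport() is ever reached.
 */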

static PyObject *
PLy_commit(PyObject *self, PyObject *args)
{
    PLyExecutionContext *exec_ctx = PLy_current_execution_context();

    SPI_commit();
    SPI_start_transaction();

    /* was cleared at transaction end, reset pointer */
    exec_ctx->scratch_ctx = NULL;

    Py_RETURN_NONE;
}
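
/*
 * As used here, SPI_commit() and SPI_rollback() end the current transaction
 * but do not begin a new one, hence the explicit SPI_start_transaction()
 * call before the procedure continues.  The scratch context pointer is reset
 * because that context was cleared along with the transaction.
 */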

static PyObject *
PLy_rollback(PyObject *self, PyObject *args)
{
    PLyExecutionContext *exec_ctx = PLy_current_execution_context();

    SPI_rollback();
    SPI_start_transaction();

    /* was cleared at transaction end, reset pointer */
    exec_ctx->scratch_ctx = NULL;

    Py_RETURN_NONE;
}
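
/*
 * Usage sketch from the PL/Python side (procedure and table names are
 * illustrative only): PLy_commit and PLy_rollback are exposed as
 * plpy.commit() and plpy.rollback(), callable only from a top-level
 * procedure or DO block, e.g.
 *
 *     CREATE PROCEDURE batch_load() LANGUAGE plpythonu AS $$
 *     for i in range(10):
 *         plpy.execute("INSERT INTO test_tbl (a) VALUES (%d)" % i)
 *         if i % 2 == 0:
 *             plpy.commit()      # keep even rows
 *         else:
 *             plpy.rollback()    # discard odd rows
 *     $$;
 *     CALL batch_load();
 */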