postgresql/src/include/executor/executor.h
David Rowley 9eacee2e62 Add Result Cache executor node (take 2)
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  Tuples
are cached per set of parameter values, so when the node sees the same
parameter values again it can simply return the cached tuples instead of
rescanning the inner side of the join all over again.  Internally, the
result cache uses a hash table to quickly find tuples that have been
cached previously.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for this new node type are join problems where a
large portion of the tuples from the inner side of the join have no join
partner on the outer side.  In such cases, a hash join would have to hash
values that are never looked up, bloating the hash table and possibly
causing it to be split into multiple batches.  A merge join would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of a parameterized
nested loop with a result cache increase when there are fewer distinct
values being looked up and each value is looked up many times.  Also,
hash probes to look up the cache can be much faster than the hash probe
in a hash join, since the result cache's hash table is commonly much
smaller than the hash join's, as the result cache only caches useful
tuples rather than all tuples from the inner side of the join.  This
difference in hash probe performance is most significant when the hash
join's hash table no longer fits into the CPU's L3 cache but the result
cache's hash table does.  The apparently random access of hash buckets on
each hash probe can cause a poor L3 cache hit ratio for large hash
tables; smaller hash tables generally perform better.

The cache's hash table limits itself to work_mem * hash_mem_multiplier in
size.  We maintain a dlist of keys for the cache, and when adding new
tuples would exceed the memory budget, we evict cache entries starting
with the least recently used ones until there is enough memory to add the
new tuples to the cache.
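
For illustration, a simplified sketch of the eviction loop might look like
the following.  The struct, field and helper names (cache, CacheEntry,
mem_used, mem_limit, remove_cache_entry) are hypothetical, and it assumes
the most recently used keys are kept at the head of the dlist:

    /* Evict least-recently-used entries until the new tuples fit. */
    while (cache->mem_used + required_space > cache->mem_limit &&
           !dlist_is_empty(&cache->lru_list))
    {
        dlist_node *node = dlist_tail_node(&cache->lru_list);
        CacheEntry *entry = dlist_container(CacheEntry, lru_node, node);

        dlist_delete(&entry->lru_node);
        cache->mem_used -= entry->mem_used;
        remove_cache_entry(cache, entry);   /* hypothetical helper */
    }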

For parameterized nested loop joins, we now consider placing one of these
result cache nodes between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven by the expected cache hit ratio.  Estimating the cache hit ratio
relies on having good distinct estimates for the nested loop's
parameters.

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works both for normal joins and
for LATERAL-type joins to subqueries.  The new node could serve other
purposes in the future, for example caching results from correlated
subqueries.  However, that's not done here due to difficulties in
obtaining a distinct estimate on the outer plan with which to calculate
the estimated cache hit ratio.  Currently we plan the inner plan before
the outer plan, so there is no good way to know whether a result cache
would be useful, since we can't estimate the number of times the subplan
will be called until the outer plan has been generated.

The functionality added here newly introduces a dependency on the return
value of estimate_num_groups() during the join search.  Previously,
during the join search, we only ever needed to perform selectivity
estimations.  With this commit, we need to use estimate_num_groups() in
order to estimate what the hit ratio on the result cache will be.  In
simple terms, if we expect 10 distinct values and 1000 outer rows, then
we'll estimate the hit ratio to be 99%.  Since cache hits are very cheap
compared to scanning the underlying nodes on the inner side of the nested
loop join, this significantly reduces the planner's cost for the join.
However, it's fairly easy to see that things will go bad when
estimate_num_groups() returns a value that's significantly lower than the
actual number of distinct values.  If that happens, we may choose a
nested loop join with a result cache instead of some other join type,
such as a merge or hash join.  Our distinct estimations have been known
to be a source of trouble in the past, so the extra reliance on them here
could cause the planner to choose slower plans than it did before this
feature existed.  Distinct values are also fairly hard to estimate
accurately when several tables have already been joined or when a WHERE
clause filters out a set of values that are correlated to the expressions
we're estimating the number of distinct values for.
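
To make the 99% figure above concrete, a simplified form of the estimate
(illustrative only; not the planner's actual costing code) is:

    double  ndistinct = 10;     /* estimate_num_groups() result */
    double  calls = 1000;       /* expected outer-side rows */

    /* each distinct value misses once and hits on every later lookup */
    double  hit_ratio = (calls - ndistinct) / calls;    /* = 0.99 */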

For now, the costing we perform during query planning for result caches
puts quite a bit of faith in the distinct estimations being accurate.
When they are accurate, we should generally see faster execution times
for plans containing a result cache.  However, in the real world, we may
find that we need to change the costing to put less trust in the distinct
estimations, or perhaps even disable this feature by default.  There's
always a risk, when we teach the query planner a new trick, that it will
use that trick at the wrong time and cause a regression.  Users may opt
to get the old behavior by turning the feature off using the
enable_resultcache GUC.  Currently, this is enabled by default.  It
remains to be seen whether we'll keep that setting for the release.

Additionally, "Result Cache" is the best name I could think of for this
new node at the time I started writing the patch.  Nobody seems to
strongly dislike the name.  A few people did suggest other names, but no
other name seemed to dominate in the brief discussion that took place
about names.  Let's use the beta period to see if the current name
pleases enough people.  If there's some consensus on a better name, we
can change it before the release.  Please see the second discussion link
below for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-02 14:10:56 +13:00

/*-------------------------------------------------------------------------
*
* executor.h
* support for the POSTGRES executor module
*
*
* Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/executor/executor.h
*
*-------------------------------------------------------------------------
*/
#ifndef EXECUTOR_H
#define EXECUTOR_H
#include "executor/execdesc.h"
#include "fmgr.h"
#include "nodes/lockoptions.h"
#include "nodes/parsenodes.h"
#include "utils/memutils.h"
/*
* The "eflags" argument to ExecutorStart and the various ExecInitNode
* routines is a bitwise OR of the following flag bits, which tell the
* called plan node what to expect. Note that the flags will get modified
* as they are passed down the plan tree, since an upper node may require
* functionality in its subnode not demanded of the plan as a whole
* (example: MergeJoin requires mark/restore capability in its inner input),
* or an upper node may shield its input from some functionality requirement
* (example: Materialize shields its input from needing to do backward scan).
*
* EXPLAIN_ONLY indicates that the plan tree is being initialized just so
* EXPLAIN can print it out; it will not be run. Hence, no side-effects
* of startup should occur. However, error checks (such as permission checks)
* should be performed.
*
* REWIND indicates that the plan node should try to efficiently support
* rescans without parameter changes. (Nodes must support ExecReScan calls
* in any case, but if this flag was not given, they are at liberty to do it
* through complete recalculation. Note that a parameter change forces a
* full recalculation in any case.)
*
* BACKWARD indicates that the plan node must respect the es_direction flag.
* When this is not passed, the plan node will only be run forwards.
*
* MARK indicates that the plan node must support Mark/Restore calls.
* When this is not passed, no Mark/Restore will occur.
*
* SKIP_TRIGGERS tells ExecutorStart/ExecutorFinish to skip calling
* AfterTriggerBeginQuery/AfterTriggerEndQuery. This does not necessarily
* mean that the plan can't queue any AFTER triggers; just that the caller
* is responsible for there being a trigger context for them to be queued in.
*/
#define EXEC_FLAG_EXPLAIN_ONLY 0x0001 /* EXPLAIN, no ANALYZE */
#define EXEC_FLAG_REWIND 0x0002 /* need efficient rescan */
#define EXEC_FLAG_BACKWARD 0x0004 /* need backward scan */
#define EXEC_FLAG_MARK 0x0008 /* need mark/restore */
#define EXEC_FLAG_SKIP_TRIGGERS 0x0010 /* skip AfterTrigger calls */
#define EXEC_FLAG_WITH_NO_DATA 0x0020 /* rel scannability doesn't matter */
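/*
* Illustrative usage sketch (not part of the original header): a node's
* ExecInit routine typically tests eflags to skip work that isn't needed,
* and may mask off flags before initializing its children when it shields
* them from a requirement (as Materialize does), e.g.
*
*		(perform side effects only when the plan will actually be run)
*		if (!(eflags & EXEC_FLAG_EXPLAIN_ONLY))
*			(open files, start workers, etc.)
*
*		(the tuplestore satisfies these requirements for the child)
*		eflags &= ~(EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK);
*		outerPlanState(state) = ExecInitNode(outerPlan(node), estate, eflags);
*/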
/* Hook for plugins to get control in ExecutorStart() */
typedef void (*ExecutorStart_hook_type) (QueryDesc *queryDesc, int eflags);
extern PGDLLIMPORT ExecutorStart_hook_type ExecutorStart_hook;
/* Hook for plugins to get control in ExecutorRun() */
typedef void (*ExecutorRun_hook_type) (QueryDesc *queryDesc,
ScanDirection direction,
uint64 count,
bool execute_once);
extern PGDLLIMPORT ExecutorRun_hook_type ExecutorRun_hook;
/* Hook for plugins to get control in ExecutorFinish() */
typedef void (*ExecutorFinish_hook_type) (QueryDesc *queryDesc);
extern PGDLLIMPORT ExecutorFinish_hook_type ExecutorFinish_hook;
/* Hook for plugins to get control in ExecutorEnd() */
typedef void (*ExecutorEnd_hook_type) (QueryDesc *queryDesc);
extern PGDLLIMPORT ExecutorEnd_hook_type ExecutorEnd_hook;
/* Hook for plugins to get control in ExecCheckRTPerms() */
typedef bool (*ExecutorCheckPerms_hook_type) (List *, bool);
extern PGDLLIMPORT ExecutorCheckPerms_hook_type ExecutorCheckPerms_hook;
/*
* prototypes from functions in execAmi.c
*/
struct Path; /* avoid including pathnodes.h here */
extern void ExecReScan(PlanState *node);
extern void ExecMarkPos(PlanState *node);
extern void ExecRestrPos(PlanState *node);
extern bool ExecSupportsMarkRestore(struct Path *pathnode);
extern bool ExecSupportsBackwardScan(Plan *node);
extern bool ExecMaterializesOutput(NodeTag plantype);
/*
* prototypes from functions in execCurrent.c
*/
extern bool execCurrentOf(CurrentOfExpr *cexpr,
ExprContext *econtext,
Oid table_oid,
ItemPointer current_tid);
/*
* prototypes from functions in execGrouping.c
*/
extern ExprState *execTuplesMatchPrepare(TupleDesc desc,
int numCols,
const AttrNumber *keyColIdx,
const Oid *eqOperators,
const Oid *collations,
PlanState *parent);
extern void execTuplesHashPrepare(int numCols,
const Oid *eqOperators,
Oid **eqFuncOids,
FmgrInfo **hashFunctions);
extern TupleHashTable BuildTupleHashTable(PlanState *parent,
TupleDesc inputDesc,
int numCols, AttrNumber *keyColIdx,
const Oid *eqfuncoids,
FmgrInfo *hashfunctions,
Oid *collations,
long nbuckets, Size additionalsize,
MemoryContext tablecxt,
MemoryContext tempcxt, bool use_variable_hash_iv);
extern TupleHashTable BuildTupleHashTableExt(PlanState *parent,
TupleDesc inputDesc,
int numCols, AttrNumber *keyColIdx,
const Oid *eqfuncoids,
FmgrInfo *hashfunctions,
Oid *collations,
long nbuckets, Size additionalsize,
MemoryContext metacxt,
MemoryContext tablecxt,
MemoryContext tempcxt, bool use_variable_hash_iv);
extern TupleHashEntry LookupTupleHashEntry(TupleHashTable hashtable,
TupleTableSlot *slot,
bool *isnew, uint32 *hash);
extern uint32 TupleHashTableHash(TupleHashTable hashtable,
TupleTableSlot *slot);
extern TupleHashEntry LookupTupleHashEntryHash(TupleHashTable hashtable,
TupleTableSlot *slot,
bool *isnew, uint32 hash);
extern TupleHashEntry FindTupleHashEntry(TupleHashTable hashtable,
TupleTableSlot *slot,
ExprState *eqcomp,
FmgrInfo *hashfunctions);
extern void ResetTupleHashTable(TupleHashTable hashtable);
/*
* prototypes from functions in execJunk.c
*/
extern JunkFilter *ExecInitJunkFilter(List *targetList,
TupleTableSlot *slot);
extern JunkFilter *ExecInitJunkFilterConversion(List *targetList,
TupleDesc cleanTupType,
TupleTableSlot *slot);
extern AttrNumber ExecFindJunkAttribute(JunkFilter *junkfilter,
const char *attrName);
extern AttrNumber ExecFindJunkAttributeInTlist(List *targetlist,
const char *attrName);
extern TupleTableSlot *ExecFilterJunk(JunkFilter *junkfilter,
TupleTableSlot *slot);
/*
* ExecGetJunkAttribute
*
* Given a junk filter's input tuple (slot) and a junk attribute's number
* previously found by ExecFindJunkAttribute, extract & return the value and
* isNull flag of the attribute.
*/
#ifndef FRONTEND
static inline Datum
ExecGetJunkAttribute(TupleTableSlot *slot, AttrNumber attno, bool *isNull)
{
Assert(attno > 0);
return slot_getattr(slot, attno, isNull);
}
#endif
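/*
* Usage sketch (illustrative, hypothetical variable names): the junk
* attribute's number is normally looked up once at startup with
* ExecFindJunkAttribute and then fetched per tuple, e.g.
*
*		AttrNumber	ctidAttno = ExecFindJunkAttribute(junkfilter, "ctid");
*		...
*		Datum		datum;
*		bool		isNull;
*
*		datum = ExecGetJunkAttribute(slot, ctidAttno, &isNull);
*/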
/*
* prototypes from functions in execMain.c
*/
extern void ExecutorStart(QueryDesc *queryDesc, int eflags);
extern void standard_ExecutorStart(QueryDesc *queryDesc, int eflags);
extern void ExecutorRun(QueryDesc *queryDesc,
ScanDirection direction, uint64 count, bool execute_once);
extern void standard_ExecutorRun(QueryDesc *queryDesc,
ScanDirection direction, uint64 count, bool execute_once);
extern void ExecutorFinish(QueryDesc *queryDesc);
extern void standard_ExecutorFinish(QueryDesc *queryDesc);
extern void ExecutorEnd(QueryDesc *queryDesc);
extern void standard_ExecutorEnd(QueryDesc *queryDesc);
extern void ExecutorRewind(QueryDesc *queryDesc);
extern bool ExecCheckRTPerms(List *rangeTable, bool ereport_on_violation);
extern void CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation);
extern void InitResultRelInfo(ResultRelInfo *resultRelInfo,
Relation resultRelationDesc,
Index resultRelationIndex,
ResultRelInfo *partition_root_rri,
int instrument_options);
extern ResultRelInfo *ExecGetTriggerResultRel(EState *estate, Oid relid);
extern void ExecConstraints(ResultRelInfo *resultRelInfo,
TupleTableSlot *slot, EState *estate);
extern bool ExecPartitionCheck(ResultRelInfo *resultRelInfo,
TupleTableSlot *slot, EState *estate, bool emitError);
extern void ExecPartitionCheckEmitError(ResultRelInfo *resultRelInfo,
TupleTableSlot *slot, EState *estate);
extern void ExecWithCheckOptions(WCOKind kind, ResultRelInfo *resultRelInfo,
TupleTableSlot *slot, EState *estate);
extern LockTupleMode ExecUpdateLockMode(EState *estate, ResultRelInfo *relinfo);
extern ExecRowMark *ExecFindRowMark(EState *estate, Index rti, bool missing_ok);
extern ExecAuxRowMark *ExecBuildAuxRowMark(ExecRowMark *erm, List *targetlist);
extern TupleTableSlot *EvalPlanQual(EPQState *epqstate, Relation relation,
Index rti, TupleTableSlot *testslot);
extern void EvalPlanQualInit(EPQState *epqstate, EState *parentestate,
Plan *subplan, List *auxrowmarks, int epqParam);
extern void EvalPlanQualSetPlan(EPQState *epqstate,
Plan *subplan, List *auxrowmarks);
extern TupleTableSlot *EvalPlanQualSlot(EPQState *epqstate,
Relation relation, Index rti);
#define EvalPlanQualSetSlot(epqstate, slot) ((epqstate)->origslot = (slot))
extern bool EvalPlanQualFetchRowMark(EPQState *epqstate, Index rti, TupleTableSlot *slot);
extern TupleTableSlot *EvalPlanQualNext(EPQState *epqstate);
extern void EvalPlanQualBegin(EPQState *epqstate);
extern void EvalPlanQualEnd(EPQState *epqstate);
/*
* functions in execProcnode.c
*/
extern PlanState *ExecInitNode(Plan *node, EState *estate, int eflags);
extern void ExecSetExecProcNode(PlanState *node, ExecProcNodeMtd function);
extern Node *MultiExecProcNode(PlanState *node);
extern void ExecEndNode(PlanState *node);
extern bool ExecShutdownNode(PlanState *node);
extern void ExecSetTupleBound(int64 tuples_needed, PlanState *child_node);
/* ----------------------------------------------------------------
* ExecProcNode
*
* Execute the given node to return a(nother) tuple.
* ----------------------------------------------------------------
*/
#ifndef FRONTEND
static inline TupleTableSlot *
ExecProcNode(PlanState *node)
{
if (node->chgParam != NULL) /* something changed? */
ExecReScan(node); /* let ReScan handle this */
return node->ExecProcNode(node);
}
#endif
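/*
* Typical call pattern (illustrative): a parent node pulls tuples from its
* child until the child is exhausted, e.g.
*
*		for (;;)
*		{
*			TupleTableSlot *slot = ExecProcNode(outerPlanState(node));
*
*			if (TupIsNull(slot))
*				break;
*			... process slot ...
*		}
*/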
/*
* prototypes from functions in execExpr.c
*/
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
extern ExprState *ExecInitExprWithParams(Expr *node, ParamListInfo ext_params);
extern ExprState *ExecInitQual(List *qual, PlanState *parent);
extern ExprState *ExecInitCheck(List *qual, PlanState *parent);
extern List *ExecInitExprList(List *nodes, PlanState *parent);
extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase,
bool doSort, bool doHash, bool nullcheck);
extern ExprState *ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc,
const TupleTableSlotOps *lops, const TupleTableSlotOps *rops,
int numCols,
const AttrNumber *keyColIdx,
const Oid *eqfunctions,
const Oid *collations,
PlanState *parent);
extern ExprState *ExecBuildParamSetEqual(TupleDesc desc,
const TupleTableSlotOps *lops,
const TupleTableSlotOps *rops,
const Oid *eqfunctions,
const Oid *collations,
const List *param_exprs,
PlanState *parent);
extern ProjectionInfo *ExecBuildProjectionInfo(List *targetList,
ExprContext *econtext,
TupleTableSlot *slot,
PlanState *parent,
TupleDesc inputDesc);
extern ProjectionInfo *ExecBuildUpdateProjection(List *subTargetList,
List *targetColnos,
TupleDesc relDesc,
ExprContext *econtext,
TupleTableSlot *slot,
PlanState *parent);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern ExprState *ExecPrepareQual(List *qual, EState *estate);
extern ExprState *ExecPrepareCheck(List *qual, EState *estate);
extern List *ExecPrepareExprList(List *nodes, EState *estate);
/*
* ExecEvalExpr
*
* Evaluate expression identified by "state" in the execution context
* given by "econtext". *isNull is set to the is-null flag for the result,
* and the Datum value is the function result.
*
* The caller should already have switched into the temporary memory
* context econtext->ecxt_per_tuple_memory. The convenience entry point
* ExecEvalExprSwitchContext() is provided for callers who don't prefer to
* do the switch in an outer loop.
*/
#ifndef FRONTEND
static inline Datum
ExecEvalExpr(ExprState *state,
ExprContext *econtext,
bool *isNull)
{
return state->evalfunc(state, econtext, isNull);
}
#endif
/*
* ExecEvalExprSwitchContext
*
* Same as ExecEvalExpr, but get into the right allocation context explicitly.
*/
#ifndef FRONTEND
static inline Datum
ExecEvalExprSwitchContext(ExprState *state,
ExprContext *econtext,
bool *isNull)
{
Datum retDatum;
MemoryContext oldContext;
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
retDatum = state->evalfunc(state, econtext, isNull);
MemoryContextSwitchTo(oldContext);
return retDatum;
}
#endif
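/*
* Usage sketch (illustrative): a typical per-tuple evaluation resets the
* expression context and then evaluates in the per-tuple memory context,
* e.g.
*
*		Datum		val;
*		bool		isnull;
*
*		ResetExprContext(econtext);
*		val = ExecEvalExprSwitchContext(exprstate, econtext, &isnull);
*/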
/*
* ExecProject
*
* Projects a tuple based on projection info and stores it in the slot passed
* to ExecBuildProjectionInfo().
*
* Note: the result is always a virtual tuple; therefore it may reference
* the contents of the exprContext's scan tuples and/or temporary results
* constructed in the exprContext. If the caller wishes the result to be
* valid longer than that data will be valid, he must call ExecMaterializeSlot
* on the result slot.
*/
#ifndef FRONTEND
static inline TupleTableSlot *
ExecProject(ProjectionInfo *projInfo)
{
ExprContext *econtext = projInfo->pi_exprContext;
ExprState *state = &projInfo->pi_state;
TupleTableSlot *slot = state->resultslot;
bool isnull;
/*
* Clear any former contents of the result slot. This makes it safe for
* us to use the slot's Datum/isnull arrays as workspace.
*/
ExecClearTuple(slot);
/* Run the expression, discarding scalar result from the last column. */
(void) ExecEvalExprSwitchContext(state, econtext, &isnull);
/*
* Successfully formed a result row. Mark the result slot as containing a
* valid virtual tuple (inlined version of ExecStoreVirtualTuple()).
*/
slot->tts_flags &= ~TTS_FLAG_EMPTY;
slot->tts_nvalid = slot->tts_tupleDescriptor->natts;
return slot;
}
#endif
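/*
* Usage sketch (illustrative): a node that projects its output sets the
* appropriate input slots in its expression context and then returns the
* projected slot, e.g.
*
*		econtext->ecxt_outertuple = outerslot;
*		return ExecProject(node->ps.ps_ProjInfo);
*
* The result remains valid only as long as the referenced input tuples and
* per-tuple memory remain valid.
*/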
/*
* ExecQual - evaluate a qual prepared with ExecInitQual (possibly via
* ExecPrepareQual). Returns true if qual is satisfied, else false.
*
* Note: ExecQual used to have a third argument "resultForNull". The
* behavior of this function now corresponds to resultForNull == false.
* If you want the resultForNull == true behavior, see ExecCheck.
*/
#ifndef FRONTEND
static inline bool
ExecQual(ExprState *state, ExprContext *econtext)
{
Datum ret;
bool isnull;
/* short-circuit (here and in ExecInitQual) for empty restriction list */
if (state == NULL)
return true;
/* verify that expression was compiled using ExecInitQual */
Assert(state->flags & EEO_FLAG_IS_QUAL);
ret = ExecEvalExprSwitchContext(state, econtext, &isnull);
/* EEOP_QUAL should never return NULL */
Assert(!isnull);
return DatumGetBool(ret);
}
#endif
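/*
* Usage sketch (illustrative): scan and join nodes typically evaluate their
* qual against the current tuple and skip tuples that fail it, e.g.
*
*		econtext->ecxt_scantuple = slot;
*		if (!ExecQual(node->ps.qual, econtext))
*			continue;		(fetch the next tuple instead)
*/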
/*
* ExecQualAndReset() - evaluate qual with ExecQual() and reset expression
* context.
*/
#ifndef FRONTEND
static inline bool
ExecQualAndReset(ExprState *state, ExprContext *econtext)
{
bool ret = ExecQual(state, econtext);
/* inline ResetExprContext, to avoid ordering issue in this file */
MemoryContextReset(econtext->ecxt_per_tuple_memory);
return ret;
}
#endif
extern bool ExecCheck(ExprState *state, ExprContext *context);
/*
* prototypes from functions in execSRF.c
*/
extern SetExprState *ExecInitTableFunctionResult(Expr *expr,
ExprContext *econtext, PlanState *parent);
extern Tuplestorestate *ExecMakeTableFunctionResult(SetExprState *setexpr,
ExprContext *econtext,
MemoryContext argContext,
TupleDesc expectedDesc,
bool randomAccess);
extern SetExprState *ExecInitFunctionResultSet(Expr *expr,
ExprContext *econtext, PlanState *parent);
extern Datum ExecMakeFunctionResultSet(SetExprState *fcache,
ExprContext *econtext,
MemoryContext argContext,
bool *isNull,
ExprDoneCond *isDone);
/*
* prototypes from functions in execScan.c
*/
typedef TupleTableSlot *(*ExecScanAccessMtd) (ScanState *node);
typedef bool (*ExecScanRecheckMtd) (ScanState *node, TupleTableSlot *slot);
extern TupleTableSlot *ExecScan(ScanState *node, ExecScanAccessMtd accessMtd,
ExecScanRecheckMtd recheckMtd);
extern void ExecAssignScanProjectionInfo(ScanState *node);
extern void ExecAssignScanProjectionInfoWithVarno(ScanState *node, Index varno);
extern void ExecScanReScan(ScanState *node);
/*
* prototypes from functions in execTuples.c
*/
extern void ExecInitResultTypeTL(PlanState *planstate);
extern void ExecInitResultSlot(PlanState *planstate,
const TupleTableSlotOps *tts_ops);
extern void ExecInitResultTupleSlotTL(PlanState *planstate,
const TupleTableSlotOps *tts_ops);
extern void ExecInitScanTupleSlot(EState *estate, ScanState *scanstate,
TupleDesc tupleDesc,
const TupleTableSlotOps *tts_ops);
extern TupleTableSlot *ExecInitExtraTupleSlot(EState *estate,
TupleDesc tupledesc,
const TupleTableSlotOps *tts_ops);
extern TupleTableSlot *ExecInitNullTupleSlot(EState *estate, TupleDesc tupType,
const TupleTableSlotOps *tts_ops);
extern TupleDesc ExecTypeFromTL(List *targetList);
extern TupleDesc ExecCleanTypeFromTL(List *targetList);
extern TupleDesc ExecTypeFromExprList(List *exprList);
extern void ExecTypeSetColNames(TupleDesc typeInfo, List *namesList);
extern void UpdateChangedParamSet(PlanState *node, Bitmapset *newchg);
typedef struct TupOutputState
{
TupleTableSlot *slot;
DestReceiver *dest;
} TupOutputState;
extern TupOutputState *begin_tup_output_tupdesc(DestReceiver *dest,
TupleDesc tupdesc,
const TupleTableSlotOps *tts_ops);
extern void do_tup_output(TupOutputState *tstate, Datum *values, bool *isnull);
extern void do_text_output_multiline(TupOutputState *tstate, const char *txt);
extern void end_tup_output(TupOutputState *tstate);
/*
* Write a single line of text given as a C string.
*
* Should only be used with a single-TEXT-attribute tupdesc.
*/
#define do_text_output_oneline(tstate, str_to_emit) \
do { \
Datum values_[1]; \
bool isnull_[1]; \
values_[0] = PointerGetDatum(cstring_to_text(str_to_emit)); \
isnull_[0] = false; \
do_tup_output(tstate, values_, isnull_); \
pfree(DatumGetPointer(values_[0])); \
} while (0)
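/*
* Usage sketch (illustrative): callers that return a few lines of text
* typically do
*
*		TupOutputState *tstate;
*
*		tstate = begin_tup_output_tupdesc(dest, tupdesc, &TTSOpsVirtual);
*		do_text_output_oneline(tstate, "some line of text");
*		end_tup_output(tstate);
*
* where tupdesc describes a single TEXT column.
*/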
/*
* prototypes from functions in execUtils.c
*/
extern EState *CreateExecutorState(void);
extern void FreeExecutorState(EState *estate);
extern ExprContext *CreateExprContext(EState *estate);
extern ExprContext *CreateWorkExprContext(EState *estate);
extern ExprContext *CreateStandaloneExprContext(void);
extern void FreeExprContext(ExprContext *econtext, bool isCommit);
extern void ReScanExprContext(ExprContext *econtext);
#define ResetExprContext(econtext) \
MemoryContextReset((econtext)->ecxt_per_tuple_memory)
extern ExprContext *MakePerTupleExprContext(EState *estate);
/* Get an EState's per-output-tuple exprcontext, making it if first use */
#define GetPerTupleExprContext(estate) \
((estate)->es_per_tuple_exprcontext ? \
(estate)->es_per_tuple_exprcontext : \
MakePerTupleExprContext(estate))
#define GetPerTupleMemoryContext(estate) \
(GetPerTupleExprContext(estate)->ecxt_per_tuple_memory)
/* Reset an EState's per-output-tuple exprcontext, if one's been created */
#define ResetPerTupleExprContext(estate) \
do { \
if ((estate)->es_per_tuple_exprcontext) \
ResetExprContext((estate)->es_per_tuple_exprcontext); \
} while (0)
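/*
* Usage sketch (illustrative): callers doing per-output-tuple work usually
* reset the per-tuple context between tuples and make short-lived
* allocations in it, e.g.
*
*		ResetPerTupleExprContext(estate);
*		oldcontext = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
*		... form the tuple, evaluate expressions, etc. ...
*		MemoryContextSwitchTo(oldcontext);
*/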
extern void ExecAssignExprContext(EState *estate, PlanState *planstate);
extern TupleDesc ExecGetResultType(PlanState *planstate);
extern const TupleTableSlotOps *ExecGetResultSlotOps(PlanState *planstate,
bool *isfixed);
extern void ExecAssignProjectionInfo(PlanState *planstate,
TupleDesc inputDesc);
extern void ExecConditionalAssignProjectionInfo(PlanState *planstate,
TupleDesc inputDesc, Index varno);
extern void ExecFreeExprContext(PlanState *planstate);
extern void ExecAssignScanType(ScanState *scanstate, TupleDesc tupDesc);
extern void ExecCreateScanSlotFromOuterPlan(EState *estate,
ScanState *scanstate,
const TupleTableSlotOps *tts_ops);
extern bool ExecRelationIsTargetRelation(EState *estate, Index scanrelid);
extern Relation ExecOpenScanRelation(EState *estate, Index scanrelid, int eflags);
extern void ExecInitRangeTable(EState *estate, List *rangeTable);
extern void ExecCloseRangeTableRelations(EState *estate);
extern void ExecCloseResultRelations(EState *estate);
static inline RangeTblEntry *
exec_rt_fetch(Index rti, EState *estate)
{
return (RangeTblEntry *) list_nth(estate->es_range_table, rti - 1);
}
extern Relation ExecGetRangeTableRelation(EState *estate, Index rti);
extern void ExecInitResultRelation(EState *estate, ResultRelInfo *resultRelInfo,
Index rti);
extern int executor_errposition(EState *estate, int location);
extern void RegisterExprContextCallback(ExprContext *econtext,
ExprContextCallbackFunction function,
Datum arg);
extern void UnregisterExprContextCallback(ExprContext *econtext,
ExprContextCallbackFunction function,
Datum arg);
extern Datum GetAttributeByName(HeapTupleHeader tuple, const char *attname,
bool *isNull);
extern Datum GetAttributeByNum(HeapTupleHeader tuple, AttrNumber attrno,
bool *isNull);
extern int ExecTargetListLength(List *targetlist);
extern int ExecCleanTargetListLength(List *targetlist);
extern TupleTableSlot *ExecGetTriggerOldSlot(EState *estate, ResultRelInfo *relInfo);
extern TupleTableSlot *ExecGetTriggerNewSlot(EState *estate, ResultRelInfo *relInfo);
extern TupleTableSlot *ExecGetReturningSlot(EState *estate, ResultRelInfo *relInfo);
extern Bitmapset *ExecGetInsertedCols(ResultRelInfo *relinfo, EState *estate);
extern Bitmapset *ExecGetUpdatedCols(ResultRelInfo *relinfo, EState *estate);
extern Bitmapset *ExecGetExtraUpdatedCols(ResultRelInfo *relinfo, EState *estate);
extern Bitmapset *ExecGetAllUpdatedCols(ResultRelInfo *relinfo, EState *estate);
/*
* prototypes from functions in execIndexing.c
*/
extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
extern List *ExecInsertIndexTuples(ResultRelInfo *resultRelInfo,
TupleTableSlot *slot, EState *estate,
bool update,
bool noDupErr,
bool *specConflict, List *arbiterIndexes);
extern bool ExecCheckIndexConstraints(ResultRelInfo *resultRelInfo,
TupleTableSlot *slot,
EState *estate, ItemPointer conflictTid,
List *arbiterIndexes);
extern void check_exclusion_constraint(Relation heap, Relation index,
IndexInfo *indexInfo,
ItemPointer tupleid,
Datum *values, bool *isnull,
EState *estate, bool newIndex);
/*
* prototypes from functions in execReplication.c
*/
extern bool RelationFindReplTupleByIndex(Relation rel, Oid idxoid,
LockTupleMode lockmode,
TupleTableSlot *searchslot,
TupleTableSlot *outslot);
extern bool RelationFindReplTupleSeq(Relation rel, LockTupleMode lockmode,
TupleTableSlot *searchslot, TupleTableSlot *outslot);
extern void ExecSimpleRelationInsert(ResultRelInfo *resultRelInfo,
EState *estate, TupleTableSlot *slot);
extern void ExecSimpleRelationUpdate(ResultRelInfo *resultRelInfo,
EState *estate, EPQState *epqstate,
TupleTableSlot *searchslot, TupleTableSlot *slot);
extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
EState *estate, EPQState *epqstate,
TupleTableSlot *searchslot);
extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname);
/* needed by trigger.c */
extern TupleTableSlot *ExecGetUpdateNewTuple(ResultRelInfo *relinfo,
TupleTableSlot *planSlot,
TupleTableSlot *oldSlot);
#endif /* EXECUTOR_H */