By comparing the local thread's load with the least loaded thread's
load, we can further improve the fairness and at the same time also
improve locality since it allows a small ratio of connections not to
be migrated. This is visible on CPU usage with long connections on
very large thread counts (224) and high bandwidth (200G). The cost
of checking the local thread's load remains fairly low so there's no
reason not to do this. We continue to update the index if we select
the local thread, because it means that the two other threads were
both more loaded so we'd rather find better ones.
One limitation of the current thread index mechanism is that if the
values are assigned multiple times to the same thread and the index
loops, it can match the old value again, which will not prevent a
competing thread from finishing its CAS and assigning traffic to a
thread that's not the optimal one. The probability is low but the
solution is simple enough and consists in implementing an update
counter in the high bits of the index to force a mismatch in this
case (assuming we don't try to cover for extremely unlikely cases
where the update counter loops while the index remains equal). So
let's do that. In order to improve the situation a little bit, the index
is now a ulong, so that on 32-bit platforms we get 8 bits of counter and
on 64-bit ones 40 bits.
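A rough sketch of the idea (names and bit split are illustrative, not the
actual code): the CAS loop bumps a counter stored above the index bits, so
a competitor's CAS on a stale value fails even when the index part wraps
back to the same thread:

    /* sketch only: thread index in the low bits, update counter above */
    #define THR_IDX_BITS  16
    #define THR_IDX_MASK  ((1UL << THR_IDX_BITS) - 1)

    ulong old_idx, new_idx;
    uint t;

    do {
        old_idx = HA_ATOMIC_LOAD(&sh->thr_idx);
        t = pick_thread(old_idx & THR_IDX_MASK); /* hypothetical helper */
        /* bump the high-bits counter so an old value never matches */
        new_idx = ((old_idx + (THR_IDX_MASK + 1)) & ~THR_IDX_MASK) | t;
    } while (!HA_ATOMIC_CAS(&sh->thr_idx, &old_idx, new_idx));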
During heavy accept competition, the CAS will occasionally fail and
we'll have to go through all the calculation again. While the first
two loops look heavy, they're almost never taken so they're quite
cheap. However the rest of the operation is heavy because we have to
consult connection counts and queue indexes for other threads, so
better double-check if the index is still valid before continuing.
Tests show that it's more efficient to retry half-way like this.
Instead of seeing each listener use its own thr_idx, let's use the same
for all those from a shard. It should provide more accurate and smoother
thread allocation.
Till now threads were assigned in listener_accept() to other threads of
the same group only, using a single group mask. Now that we have all the
relevant info (array of listeners of the same shard), we can spread the
thr_idx to cover all assigned groups. The thread indexes now contain the
group number in their upper bits, and the indexes run over the whole list
of threads, all groups included.
One particular subtlety here is that switching to a thread from another
group also means switching the group, hence the listener. As such, when
changing the group we need to update the connection's owner to point to
the listener of the same shard that is bound to the target group.
There has always been a race when checking the length of an accept queue
to determine which one is more loaded than another, because the head and
tail are read at two different moments. This is not necessary: we can merge
them as two 16-bit numbers inside a single 32-bit index that is always
accessed atomically. This way we read both values at once and always have
a consistent measurement.
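A minimal sketch of the packing (field names are illustrative); the point
is that one atomic load yields a consistent head/tail pair:

    /* sketch: head in the high 16 bits, tail in the low 16 bits */
    uint32_t idx  = HA_ATOMIC_LOAD(&ring->idx);
    uint16_t head = idx >> 16;
    uint16_t tail = idx & 0xffff;
    uint16_t len  = (uint16_t)(tail - head); /* consistent even on wrap */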
Now it's possible for a bind line to span multiple thread groups. When
this happens, the first one will become the reference and will be entirely
set up, and the subsequent ones will be duplicated from this reference,
so that they can be registered in distinct groups. The reference is
always set up and started first so it is always available when the other
ones are started.
The doc was updated to reflect this new possibility with its limitations
and impacts, and the differences with the "shards" option.
It's not strictly necessary, but it's still better to avoid setting
up the same socket multiple times when it's being duplicated to a few
FDs. We don't change that for inherited ones however since they may
really need to be set up, so we only skip duplicated ones.
The different protocols' ->bind() functions will now check the receiver's
RX_F_MUST_DUP flag to decide whether to bind a fresh new listener from
scratch or reuse an existing one and just duplicate it. It turns out
that the existing code already supports reusing FDs since that was done
as part of the FD passing and inheriting mechanism. Here it's not much
different, we pass the FD of the reference receiver, it gets duplicated
and becomes the new receiver's FD.
These FDs are also marked RX_F_INHERITED so that they are not exported
and avoid being touched directly (only the reference should be touched).
In order to create multiple receivers for one multi-group shard, we'll
need some more info about the shard. Here we store (see the sketch after
the list below):
- the number of groups (= number of receivers)
- the number of threads (will be used for accept LB)
- pointer to the reference rx (to get the FD and to find all threads)
- pointers to the other members (to iterate over all threads)
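A hedged sketch of what such a per-shard descriptor could look like (field
names follow the description above, not necessarily the actual code):

    /* sketch only: shared shard info, one per multi-group shard */
    struct shard_info {
        uint nbgroups;                         /* number of groups (= receivers) */
        uint nbthreads;                        /* number of threads (accept LB)  */
        struct receiver *ref;                  /* reference rx (FD, thread list) */
        struct receiver *members[MAX_TGROUPS]; /* all members, one per group     */
    };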
For now since there's only one group per shard it remains simple. The
listener deletion code already takes care of removing the current
member from its shards list and moving others' reference to the last
one if it was their reference (so as to avoid O(n^2) updates during
ordered deletes).
Since the vast majority of setups will not use multi-group shards, we
try to save memory usage by only allocating the shard_info when it is
needed, so the principle here is that a receiver with shard_info==NULL
is alone and doesn't share its socket with another group.
Various approaches were considered and tests show that the management
of the listeners during boot makes it easier to just attach to or
detach from a shard_info and automatically allocate it if it does not
exist, which is what is being done here.
For now the attach code is not called, but detach is already called
on delete.
This new algorithm for rebalancing incoming connections to multiple
threads is simpler: instead of considering the threads' load, it will
only cycle through all of them, offering a fair share of the traffic to
each thread. It may be well suited for short-lived connections but is
also convenient for very large thread counts where it's not always certain
that the least loaded thread will always be found.
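Under these assumptions (a shared per-shard index and thread count, names
illustrative), the picking logic is a one-liner:

    /* sketch: plain round-robin over all the shard's threads */
    t = HA_ATOMIC_FETCH_ADD(&sh->thr_idx, 1) % sh->nbthreads;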
There's a li_per_thread array in each listener for use with QUIC
listeners. Since thread groups were introduced, this array can be
allocated too large because global.nbthread entries are allocated for
each listener, while no more than MIN(nbthread,MAX_THREADS_PER_GROUP)
may be used by a single listener. This was because the global thread
ID is used as the index instead of the local ID (since a listener may
only be used by a single group). Let's just switch to local ID and
reduce the allocated size.
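As an illustration only (exact field names may differ), the change boils
down to sizing the array for the group and indexing it with the local
thread ID:

    /* sketch: allocate per-group, index by local thread ID */
    l->per_thr = calloc(MIN(global.nbthread, MAX_THREADS_PER_GROUP),
                        sizeof(*l->per_thr));
    ...
    l->per_thr[ti->ltid]; /* instead of l->per_thr[tid] */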
When migrating a quic_conn to another thread, we may need to also
switch the listener if the thread belongs to another group. When
this happens, the freshly created connection will already have the
target listener, so let's just pick it from the connection and use
it in qc_set_tid_affinity(). Note that it will be the caller's
responsibility to guarantee this.
Adding the possibility to publish an event using a struct wrapper
around existing SERVER events to provide additional contextual info.
Using the specific struct wrapper is not required: it is supported
to cast event data as a regular server event data struct so
that we don't break the existing API.
However, casting event data with a more explicit data type allows
fetching hints that are only relevant to the event.
Considering that srv_update_status() is now synchronous again since
3ff577e1 ("MAJOR: server: make server state changes synchronous again"),
and that we can easily identify if the update is from an operational
or administrative context thanks to "MINOR: server: pass adm and op cause
to srv_update_status()", and given that administrative and operational
updates cannot be cumulated (since srv_update_status() is called
synchronously and independently for admin updates and state/operational
updates, and the function directly consumes the changes), we split
srv_update_status() in 2 distinct parts:
Either <type> is 0, meaning the update is an operational update which
is handled by directly looking at cur_state and next_state to apply the
proper transition.
Also, the check to prevent operational state from being applied
if the MAINT admin flag is set is no longer needed given that the calling
functions already ensure this (ie: srv_set_{running,stopping,stopped}())
Or <type> is 1, meaning the update is an administrative update, where
cur_admin and next_admin are evaluated to apply the proper transition and
deduce the resulting server state (next_state is updated implicitly).
Once this is done, both operations share a common code path in
srv_update_status() to update proxy and servers stats if required.
Thanks to this change, the function's behavior is much more predictable,
it is not an all-in-one function anymore: either we apply an operational
change, or an administrative one. That's it, we cannot mix
the 2 since both code paths are now properly separated.
Operational and administrative state change causes are not propagated
through srv_update_status(), instead they are directly consumed within
the function to provide additional info during the call when required.
Thus, there is no valid reason for keeping adm and op causes within
server struct. We are wasting space and keeping unneeded complexity.
We now explicitly pass the change type (operational or administrative) and
the associated cause to srv_update_status() so that no extra storage is
needed, since those values are only relevant within srv_update_status().
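As a hedged sketch, the resulting entry point boils down to something like
this prototype (parameter names assumed from the description):

    /* <type>: 0 = operational update, 1 = administrative update.
     * <cause> is consumed during the call to enrich logs and events,
     * so nothing needs to be stored in the server struct.
     */
    void srv_update_status(struct server *s, int type, int cause);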
Fixing function comments for the server state changing function since they
still refer to asynchronous propagation of server state which is no longer
in play.
Moreover, there were some mixups between running/stopping.
This one is greatly inspired by "MINOR: server: change adm_st_chg_cause storage type".
While looking at current srv_op_st_chg_cause usage, it was clear that
the struct needed some cleanup since some leftovers from asynchronous server
state change updates were left behind, resulting in some useless code
duplication and making the whole thing harder to maintain.
Two observations were made:
- by tracking down srv_set_{running, stopped, stopping} usage,
we can see that the <reason> argument is always a fixed statically
allocated string.
- check-related state change context (duration, status, code...) is
not used anymore since srv_append_status() directly extracts the
values from the server->check. This is pure legacy from when
the state changes were applied asynchronously.
To prevent code duplication and useless string copies, and to make the
reason/cause more exportable, we store it as an enum now, and we provide the
srv_op_st_chg_cause() function to fetch the related description string.
HEALTH and AGENT causes (check related) are now explicitly identified to
make consumers like srv_append_op_chg_cause() able to fetch checks info
from the server itself if they need to.
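A hedged sketch of the resulting storage and accessor (the enum values
below are illustrative only):

    /* sketch: operational state change causes stored as an enum */
    enum srv_op_st_chg_cause {
        SRV_OP_STCHGC_NONE = 0,
        SRV_OP_STCHGC_HEALTH,   /* reported by a health check */
        SRV_OP_STCHGC_AGENT,    /* reported by an agent check */
        SRV_OP_STCHGC_CLI,      /* forced from the CLI        */
    };

    /* returns the human-readable description string for <cause> */
    const char *srv_op_st_chg_cause(enum srv_op_st_chg_cause cause);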
srv_append_status() has become a Swiss-army-knife function over time.
It is used from server code and also from checks code, with various
inputs and distinct code paths, making it very hard to guess the
actual behavior of the function (resulting string output).
To simplify the logic behind it, we're dividing it in multiple contextual
functions that take simple inputs and do explicit things, making them
more predictable and easier to maintain.
Even though it doesn't look like it at first glance, this is more like
a cleanup than an actual code improvement:
Given that srv->adm_st_chg_cause has been used to exclusively store
static strings ever since it was implemented, we make the choice to
store it as an enum instead of a fixed-size string within server
struct.
This will allow saving some space in the server struct, and will make
it more easily exportable (ie: event handlers) because of the
reduced memory footprint during handling and the ability to later get
the corresponding human-readable message when it's explicitly needed.
Now that we have a generic srv_lb_propagate(s) function, let's
use it each time we explicitly want to set the status down as
well.
Indeed, it is tricky to try to handle the "down" case explicitly,
instead we use srv_lb_propagate() which will call the proper
function that will handle the new server state.
This will allow some code cleanup and will prevent any logic
error.
This commit depends on:
- "MINOR: server: propagate server state change to lb through single function"
Use a dedicated helper function to propagate server state change to
lb algorithms, since it is performed at multiple places within
srv_update_status() function.
Based on "BUG/MINOR: server: don't miss server stats update on server
state transitions", we're also taking advantage of the new centralized
logic to update the down_trans server counter directly from there instead
of in multiple places.
When restoring from a state file, the server "Status" reports weird values on
the html stats page:
"5s UP" becomes -> "? UP" after the restore
This is due to a bug in srv_state_srv_update(): when restoring the states
from a state file, we rely on date.tv_sec to compute the process-relative
server last_change timestamp.
This is wrong because everywhere else we use now.tv_sec when dealing
with last_change, for instance in srv_update_status().
date (which is Wall clock time) deviates from now (monotonic time) in the
long run.
They should not be mixed, and given that last_change is an internal time value,
we should rely on now.tv_sec instead.
last_change export through "show servers state" cli is safe since we export
a delta and not the raw time value in dump_servers_state():
srv_time_since_last_change = now.tv_sec - srv->last_change
--
While this bug affects all stable versions, it was revealed in 2.8 thanks
to 28360dc ("MEDIUM: clock: force internal time to wrap early after boot")
This is due to the fact that "now" immediately deviates from "date",
whereas in the past they had the same value when starting.
Thus prior to 2.8 the bug is trickier since it could take some time for
date and now to deviate sufficiently for the issue to arise, and instead
of reporting absurd values that are easy to spot it could just result in
last_change becoming inconsistent over time.
As such, the fix should be backported to all stable versions.
[for 2.2 the patch needs to be applied manually since
srv_state_srv_update() was named srv_update_state() and can be found in
server.c instead of server_state.c]
s->last_change and s->down_time were manually updated for each
effective server state change within srv_update_status().
This is rather error-prone, and as a result there were still some state
transitions that were not handled properly since at least 1.8.
ie:
- when transitioning from DRAIN to READY: downtime was updated
(which is wrong since a server in DRAIN state should not be
considered as DOWN)
- when transitioning from MAINT to READY: downtime was not updated
(this can be easily seen in the html stats page)
To fix these all at once, and prevent similar bugs from being introduced,
we centralize the server last_change and down_time stats logic at the end
of srv_update_status():
If the server state changed during the call, then it means that
last_change must be updated, with a special case when changing from
STOPPED state which means the server was previously DOWN and thus
downtime should be updated.
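A minimal sketch of that centralized logic, assuming srv_prev_state holds
the state captured at function entry:

    /* sketch: run once at the end of srv_update_status() */
    if (s->cur_state != srv_prev_state) {
        if (srv_prev_state == SRV_ST_STOPPED)
            s->down_time += now.tv_sec - s->last_change; /* leaving DOWN */
        s->last_change = now.tv_sec;
    }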
This patch depends on:
- "MINOR: server: explicitly commit state change in srv_update_status()"
This could be backported to all stable versions.
backend "down" stats logic has been duplicated multiple times in
srv_update_status(), resulting in the logic now being error-prone.
For example, the following bugfix was needed to compensate for a
copy-paste introduced bug: d332f139
("BUG/MINOR: server: update last_change on maint->ready transitions too")
While the above patch works great, we actually forgot to update the
proxy downtime like it is done for other down->up transitions...
This is simply illustrating that the current design is error-prone,
it is very easy to miss something in this area.
To properly update the proxy downtime stats on the maint->ready transition,
to cleanup srv_update_status() and to prevent similar bugs from being
introduced in the future, proxy/backend stats update are now automatically
performed at the end of the server state change if needed.
Thus we can remove existing updates that were performed at various places
within the function, this simplifies things a bit.
This patch depends on:
- "MINOR: server: explicitly commit state change in srv_update_status()"
This could be backported to all stable versions.
Backport notes:
2.2:
Replace
struct task *srv_cleanup_toremove_conns(struct task *task, void *context, unsigned int state)
by
struct task *srv_cleanup_toremove_connections(struct task *task, void *context, unsigned short state)
As shown in 8f29829 ("BUG/MEDIUM: checks: a down server going to
maint remains definitely stucked on down state."), state changes
that don't result in explicit lb state change, require us to perform
an explicit server state commit to make sure the next state is
applied before returning from the function.
This is the case for server state changes that don't trigger lb logic
and only perform some logging.
This is quite error-prone: we could easily forget a state change
combination that could result in next_state, next_admin or next_eweight
not being applied. (cur_state, cur_admin and cur_eweight would be left
with unexpected values)
To fix this, we explicitly call srv_lb_commit_status() at the end
of srv_update_status() to enforce the new values, even if they were
already applied. (when a state change requires an lb state update
an implicit commit is already performed)
Applying the state change multiple times is safe (since the next value
always points to the current value).
Backport notes:
2.2:
Replace
struct task *srv_cleanup_toremove_conns(struct task *task, void *context, unsigned int state)
by
struct task *srv_cleanup_toremove_connections(struct task *task, void *context, unsigned short state)
Report message for tracking servers completely leaving drain is wrong:
The check for "leaving drain .. via" never evaluates to true because the
condition !(s->next_admin & SRV_ADMF_FDRAIN) is always true in the
current block which is guarded by !(s->next_admin & SRV_ADMF_DRAIN).
For tracking servers that leave inherited drain mode, this results in the
following message being emitted:
"Server x/b is UP (leaving forced drain)"
Instead of:
"Server x/b is UP (leaving drain) via x/a"
To fix this, we check if FDRAIN is currently set, else it means that the
drain status is inherited from the tracked server (IDRAIN)
This regression was introduced with 64cc49cf ("MAJOR: servers: propagate server
status changes asynchronously."), thus it may be backported to all stable
versions.
An optional 'when' argument is provided to lua event handlers.
It is an integer representing the number of seconds elapsed since Epoch
and may be used in conjunction with the lua `os.date()` function to
produce a custom-formatted date string.
For advanced async handlers only
(registered using the EVENT_HDL_ASYNC_TASK() macro):
event->when is provided as a struct timeval, fetched from the 'date'
haproxy global variable.
Thanks to 'when', related event consumers will be able to timestamp
events, even if they don't work in real-time or near real-time.
Indeed, unlike sync or normal async handlers, advanced async handlers
could purposely delay the consumption of pending events, which means
that the date wouldn't be accurate if computed directly from within
the handler.
Add the ability to provide a cleanup function for event data passed
via the publishing function.
One use case could be the need to provide valid pointers in the safe
section of the data struct.
Cleanup function will be automatically called with data (or copy of data)
as argument when all handlers consumed the event, which provides an easy
way to release some memory or decrement refcounts to resources that were
provided through the data struct.
The data itself may not be freed by the cleanup function, as it is handled
by the API.
This would allow passing large (allocated) data blocks through the data
struct while keeping data struct size under the EVENT_HDL_ASYNC_EVENT_DATA
size limit.
To do so, when publishing an event, where we would currently do:

    struct event_hdl_cb_data_new_family event_data;

    /* safe data, available from both sync and async contexts
     * may not use pointers to short-living resources
     */
    event_data.safe.my_custom_data = x;

    /* unsafe data, only available from sync contexts */
    event_data.unsafe.my_unsafe_data = y;

    /* once data is prepared, we can publish the event */
    event_hdl_publish(NULL,
                      EVENT_HDL_SUB_NEW_FAMILY_SUBTYPE_1,
                      EVENT_HDL_CB_DATA(&event_data));
We could do:

    struct event_hdl_cb_data_new_family event_data;

    /* safe data, available from both sync and async contexts
     * may not use pointers to short-living resources,
     * unless EVENT_HDL_CB_DATA_DM is used to ensure pointer
     * consistency (ie: refcount)
     */
    event_data.safe.my_custom_static_data = x;
    event_data.safe.my_custom_dynamic_data = malloc(1);

    /* unsafe data, only available from sync contexts */
    event_data.unsafe.my_unsafe_data = y;

    /* once data is prepared, we can publish the event */
    event_hdl_publish(NULL,
                      EVENT_HDL_SUB_NEW_FAMILY_SUBTYPE_1,
                      EVENT_HDL_CB_DATA_DM(&event_data, data_new_family_cleanup));
With the data_new_family_cleanup function, which would look like this:

    void data_new_family_cleanup(const void *data)
    {
        const struct event_hdl_cb_data_new_family *event_data = data;

        /* some data members require specific cleanup once the event
         * is consumed
         */
        free((void *)event_data->safe.my_custom_dynamic_data);
        /* don't ever free data! it is not ours */
    }
Not sure if this feature will become relevant in the future, so I prefer not
to mention it in the doc for now.
But given that the implementation is trivial and does not put a burden
on the existing API, it's a good thing to have it there, just in case.
This commit does nothing that ought to be mentioned, except that
it adds missing comments and slightly moves some function calls
out of "sensitive" code in preparation of some server code refactors.
Changing hlua_event_hdl_cb_data_push_args() return type to void since it
does not return anything useful.
Also changing its name to hlua_event_hdl_cb_push_args() since it does more
than just pushing the cb data argument (it also handles event type and mgmt).
Errors caught by the function are reported as lua errors.
Since "MINOR: server/event_hdl: add proxy_uuid to event_hdl_cb_data_server"
we may now use the proxy_uuid variable to perform proxy lookups when
handling a server event.
It is more reliable since proxy_uuid isn't subject to any size limitation.
Expose proxy_uuid variable in event_hdl_cb_data_server struct to
overcome proxy_name fixed length limitation.
proxy_uuid may be used by the handler to perform proxy lookups.
This should be preferred over lookups relying on proxy_name.
(proxy_name is suitable for printing / logging purposes but not for
ID lookups since it has a maximum fixed length)
srv_update_status() function comment says that the function "is designed
to be called asynchronously".
While this used to be true back then with 64cc49cf
("MAJOR: servers: propagate server status changes asynchronously.")
This is not true anymore since 3ff577e ("MAJOR: server: make server state changes
synchronous again")
Fixing the comment in order to better reflect current behavior.
Since 9f903af5 ("MEDIUM: log: slightly refine the output format of
alerts/warnings/etc"), messages generated by ha_{alert,warning,notice}
don't embed date/time information anymore.
Updating some old function comments that kept saying otherwise.
A BUG_ON() crash can occur in qc_rcv_buf() if an Rx packet allocation
fails.
To fix this, datagrams are marked as consumed even if a fatal error
occurred during parsing. For the moment, only an Rx packet allocation
failure could provoke this. At this stage, it's unknown whether the
datagram was partially parsed or not at all, so it's better to discard
it completely.
This bug was detected using -dMfail argument.
This should be backported up to 2.7.
Properly initialize the el_th_ctx member first in qc_new_conn(). This
prevents a segfault in qc_detach_th_ctx_list() if the release code should
be called later due to a memory allocation failure in the function.
This should be backported up to 2.7.
Do not emit a CONNECTION_CLOSE on h3s allocation failure. Indeed, this
causes a crash as the calling function qcs_new() will also try to emit a
CONNECTION_CLOSE which triggers a BUG_ON() on qcc_emit_cc().
This was reproduced using -dMfail.
This should be backported up to 2.7.
Previously, if a STREAM frame could not be allocated for emission, a crash
would occur due to an ABORT_NOW() statement in _qc_send_qcs().
Replace this by proper error code handling. Each stream whose emission
fails is temporarily removed from qcc::send_list to a list local to
_qc_send_qcs(). Once emission has been conducted for all streams, the
failed streams are reinserted into qcc::send_list. This avoids relooping
on failed streams in the second while loop at the end of _qc_send_qcs().
This crash was reproduced using -dMfail.
This should be backported up to 2.6.
On MUX initialization, the application layer is set up via
qcc_install_app_ops(). If this function fails, the MUX is deallocated and
an error is returned.
This code path caused a crash when it ran before the connection was
registered into the mux_stopping_data::list used for stopping idle
frontend conns. To fix this, insert the connection later in qc_init(),
once no error can occur.
The crash was seen on process closing with SIGUSR1, with a segfault in
mux_stopping_process(). This was reproduced using -dMfail.
This regression was introduced by the following patch :
commit b4d119f0c7
BUG/MEDIUM: mux-quic: fix crash on H3 SETTINGS emission
This should be backported up to 2.7.
Again, a now_ms variable value was used without the ticks API. It is used
to store the generation time of the Retry token to be received back
from the client.
Must be backported to 2.6 and 2.7.
As a server, an Initial packet does not contain a token but only the token
length field with zero as value. The remaining room was not checked before
writing this field.
Must be backported to 2.6 and 2.7.
Since this commit:
BUG/MINOR: quic: Possible wrapped values used as ACK tree purging limit.
There are more chances that ack ranges may be removed from their trees when
building a packet. It is preferable to impose a limit on these trees. This
will be the subject of a next commit to come.
For now, it is sufficient to stop deleting ack ranges from their trees.
Remove quic_ack_frm_reduce_sz() and quic_rm_last_ack_ranges() which were
there to do that.
Make qc_frm_len() support ACK frames and call it to ensure an ACK frame
may be added to a packet before building it.
Must be backported to 2.6 and 2.7.
Overriding global coroutine.create() function in order to link the
newly created subroutine with the parent hlua ctx.
(calling hlua_gethlua() from a subroutine will now return the hlua ctx
on which coroutine.create() was performed, instead of NULL)
Doing so allows hlua_hook() function to support being called from
subroutines created using coroutine.create() within user lua scripts.
That is: the related subroutine will be immune to the forced-yield,
but it will still be checked against hlua timeouts. If the subroutine
fails to yield or finish before the timeout, the related lua handler will
be aborted (instead of going rogue unnoticed as would be the case prior
to this commit)
When forcing a yield attempt from hlua_hook(), we should perform it on
the known hlua state, not on a potential substate created using
coroutine.create() from an existing hlua state from lua script.
Indeed, only true hlua coroutines will properly handle the yield and
perform the required timeout checks when returning in hlua_ctx_resume().
So far, this was not a concern because hlua_gethlua() would return NULL
if hlua_hook() is not directly being called from a hlua coroutine anyway.
But with this we're trying to make hlua_hook() ready for being called
from a subcoroutine which inherits from a parent hlua ctx.
In this case, no yield attempt will be performed, we will simply check
for hlua timeouts.
Not doing so would result in the timeout checks not being performed since
hlua_ctx_resume() is completely bypassed when yielding from the subroutine,
resulting in a user-defined coroutine potentially going rogue unnoticed.
Not all hlua "time" variables use the same time logic.
hlua->wake_time relies on ticks since it's meant to be used in conjunction
with task scheduling. Thus, it should be stored as a signed int and
manipulated using the tick API.
Adding a few comments about that to prevent mixups with the hlua internal
timer API which doesn't rely on the ticks API.
The "burst" execution timeout applies to any Lua handler.
If the handler fails to finish or yield before timeout is reached,
handler will be aborted to prevent thread contention, to prevent
traffic from not being served for too long, and ultimately to prevent
the process from crashing because of the watchdog kicking in.
Default value is 1000ms.
Combined with forced-yield default value of 10000 lua instructions, it
should be high enough to prevent any existing script breakage, while
still being able to catch slow lua converters or sample fetches
causing thread contention and risking the process stability.
Setting the value to 0 completely bypasses this check. (not recommended but
could be required to restore original behavior if this feature breaks
existing setups somehow...)
No backport needed, although it could be used to prevent watchdog crashes
due to poorly coded (slow/cpu consuming) lua sample fetches/converters.
For non yieldable lua handlers (converters, fetches or yield
incompatible lua functions), the current timeout detection relies on the
now_ms thread-local variable.
But within non-yieldable contexts, now_ms won't be updated if not by us
(because we're momentarily stuck in lua context so we won't
re-enter the polling loop, which is responsible for clock updates).
To circumvent this, clock_update_date(0, 1) was manually performed right
before now_ms is being read for the timeout checks.
But this fails to work consistently, because if no other concurrent
threads periodically run clock_update_global_date(), which does happen
when we're the only active thread (nbthread=1 or low traffic), our
clock_update_date() call won't reliably update our local now_ms variable.
Moreover, clock_update_date() is not the right tool for this anyway, as
it was initially meant to be used from the polling context.
Using it could have a negative impact on other threads relying on now_ms
to be stable. (because clock_update_date() performs global clock update
from time to time)
-> Introducing hlua multipurpose timer, which is internally based on
now_cpu_time_fast() that provides per-thread consistent clock readings.
Thanks to this new hlua timer API, hlua timeout logic is less error-prone
and more robust.
This allows the timeout detection to work as expected for both yieldable
and non-yieldable lua handlers.
This patch depends on commit "MINOR: clock: add now_cpu_time_fast() function"
While this could theoretically be backported to all stable versions,
it is advisable to avoid backports unless we're confident enough
since it could cause slight behavior changes (timing related) in
existing setups.
Same as now_cpu_time(), but for fast queries (less accurate).
It relies on now_cpu_time(), with now_mono_time_fast() used as a cache
expiration hint to prevent now_cpu_time() from being called too often,
since it is known to be quite expensive.
Depends on commit "MINOR: clock: add now_mono_time_fast() function"
Same as now_mono_time(), but for fast queries (less accurate).
It relies on the coarse clock source (also known as the fast clock source
on some systems), and falls back to now_mono_time() if the coarse source
is not supported by the system.
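A plausible sketch of such a helper, assuming POSIX clock_gettime() and a
now_mono_time() returning nanoseconds:

    #include <stdint.h>
    #include <time.h>

    static inline uint64_t now_mono_time_fast(void)
    {
    #if defined(CLOCK_MONOTONIC_COARSE)
        struct timespec ts;

        /* coarse source: much cheaper, tick-level accuracy only */
        clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
        return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    #else
        /* fall back to the accurate but slower source */
        return now_mono_time();
    #endif
    }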
Commit 5003ac7fe ("MEDIUM: config: set useful ALPN defaults for HTTPS
and QUIC") revealed a build dependency bug: if QUIC is not enabled,
cfgparse doesn't have any dependency on the SSL stack, so the various
ifdefs that try to check special conditions such as rejecting an H2
config with too small a bufsize, are silently ignored. This was
detected because the default ALPN string was not set and caused the
alpn regtest to fail without QUIC support. Adding openssl-compat to
the list of includes seems to be sufficient to have what we need.
It's unclear when this dependency was broken, it seems that even 2.2
didn't have an explicit dependency on anything SSL-related, though it
could have been inherited through other files (as happens with QUIC
here). It would be safe to backport it to all stable branches. The
impact is very low anyway.
The following commit introduced a regression :
commit 1a5cc19cec
MINOR: quic: adjust Rx packet type parsing
Since this commit, the qv variable was left NULL as the version is stored
directly in quic_rx_packet instance. In most cases, this only causes
traces to skip version printing. However, qv is dereferenced when
sending a Retry which causes a segfault.
To fix this, simply remove qv variable and use pkt->version instead,
both for traces and send_retry() invocation.
This bug was detected thanks to QUIC interop runner. It can easily be
reproduced by using quic-force-retry on the bind line.
This must be backported up to 2.7.
This commit makes sure that if there is no "alpn", "npn" nor "no-alpn"
setting on a "bind" line which corresponds to an HTTPS or QUIC frontend,
we automatically turn on "h2,http/1.1" as an ALPN default for an HTTP
listener, and "h3" for a QUIC listener. This simplifies the configuration
for end users since they won't have to explicitly configure the ALPN
string to enable H2, considering that at the time of writing, HTTP/1.1
represents less than 7% of the traffic on large infrastructures. The
doc and regtests were updated. For more info, refer to the following
thread:
https://www.mail-archive.com/haproxy@formilux.org/msg43410.html
While it does not have any effect, it's better not to try to set up an
ALPN callback nor to try to look up algorithms when the configured ALPN
string is empty as a result of "no-alpn" being used.
It's possible to replace a previously set ALPN but not to disable ALPN
if it was previously set. The new "no-alpn" setting allows disabling
a previously set ALPN setting by preparing an empty one that will be
replaced and freed when the config is validated.
On sending path, a pending error can be promoted to a terminal error at the
endpoint level (SE_FL_ERR_PENDING to SE_FL_ERROR). When this happens, we
must propagate the error on the SC to be able to handle it at the stream
level and eventually forward it to the other side.
Because of this bug, it is possible to freeze sessions, for instance on the
CLI.
It is a 2.8-specific issue. No backport needed.
When compiled in debug mode, HAProxy prints a debug message at the end of
the cli I/O handler. It is pretty annoying and useless because we can
activate applet traces. Thus, just remove it.
When compiled in debug mode, HAProxy prints a debug message at the beginning
of assign_server(). It is pretty annoying and useless because, in debug
mode, we can activate stream traces. Thus, just remove it.
The commit 9704797fa ("BUG/MEDIUM: http-ana: Properly switch the request in
tunnel mode on upgrade") fixes the switch in TUNNEL mode, but only
partially. Because both channels are switched to TUNNEL mode at the same
time on one side, the channel's analyzers on the opposite side are not
updated accordingly. This prevents the tunnel timeout from being applied.
So instead of updating both sides at the same time, we only force the
analysis on the other side by setting the CF_WAKE_ONCE flag when a channel
is switched to TUNNEL mode. In addition, we must take care to forward all
data if there are no DATA TCP filters registered.
This patch is related to the issue #2125. It is 2.8-specific. No backport
needed.
Remove the receiver RX_F_LOCAL_ACCEPT flag. This was used by QUIC
protocol before thread rebinding was supported by the quic_conn layer.
This should be backported up to 2.7 after the previous patch has also
been taken.
Before this patch, QUIC protocol used a custom add_listener callback.
This was because a quic_conn instance was allocated before accept. Its
thread affinity was fixed and could not be changed afterwards. The thread
was itself derived from the CID selected by the client, which prevented an
even repartition of QUIC connections on multiple threads.
A series of patches was introduced with a lot of changes. The most
important ones:
* removal of affinity between an encoded CID and a thread
* possibility to rebind a quic_conn on a new thread
Thanks to this, it's possible to suppress the custom add_listener
callback. Accept is conducted for the QUIC protocol as with the others. A
less loaded thread is selected in listener_accept() and the connection
stack is bound to it. This operation implies that the quic_conn instance
is moved to the new thread using the set_affinity QUIC protocol callback.
To reactivate quic_conn instance after thread rebind,
qc_finalize_affinity_rebind() is called after accept on the new thread
by qc_xprt_start() through accept_queue_process() / session_accept_fd().
This should be backported up to 2.7 after a period of observation.
When a quic_conn instance is rebound to a new thread, its tasks and
tasklet are destroyed and new ones created. Its socket is also migrated
to a new thread, which stops reception on it.
To properly reactivate a quic_conn after rebind, wake up its tasks and
tasklet if they were active before thread rebind. Also reactivate
reading on the socket FD. These operations are implemented on a new
function qc_finalize_affinity_rebind().
This should be backported up to 2.7 after a period of observation.
qc_set_timer() function is used to rearm the timer for loss detection
and probing. Previously, the timer was always rearmed when the congestion
window was free, due to a wrong interpretation of the RFC which mandates the
client to rearm the timer before handshake completion to avoid a
deadlock related to anti-amplification.
Fix this by removing this code from quic_pto_pktns(). This allows
qc_set_timer() to be reentrant and only activate the timer if needed.
The impact of this bug seems limited. It probably caused the timer
task to be processed too frequently, which could cause too frequent
probing.
This change will allow easily reusing qc_set_timer() after quic_conn
thread migration. As such, the new timer task will be scheduled only if
needed.
This should be backported up to 2.6.
Implement a new function qc_set_tid_affinity(). This function is
responsible for rebinding a quic_conn instance to a new thread.
This operation consists mostly of releasing existing tasks and tasklet
and allocating new instances on the new thread. If the quic_conn uses
its owned socket, it is also migrated to the new thread. The migration
is finally completed by updating the CID's TID to the new thread. After
this step, the connection is thus accessible to the new thread and
cannot be accessed anymore on the old one without risking a race condition.
To ensure rebinding is either done completely or not at all, tasks and
tasklet are pre-allocated before all operations. If this fails, an error
is returned and rebinding is not done.
To destroy the older tasklet, its context is set to NULL before wake up.
In I/O callbacks, a new function qc_process() is used to check context
and free the tasklet if NULL.
The thread rebinding can cause a race condition if the older thread
quic_dghdlrs::dgrams list contains datagrams for the connection after
rebinding is done. To prevent this, quic_rx_pkt_retrieve_conn() always
checks if the packet CID is still associated to the current thread or
not. In the latter case, no connection is returned and the new thread is
returned to allow redispatching the datagram to the new thread in a
thread-safe way.
This should be backported up to 2.7 after a period of observation.
When QUIC handshake is completed on our side, some frames are prepared
to be sent:
* HANDSHAKE_DONE
* several NEW_CONNECTION_ID with CIDs allocated
This step was previously executed in quic_conn_io_cb() directly after
CRYPTO frames parsing. This patch delays it to be completed after
accept. Special care has been taken to ensure it is still functional
with 0-RTT activated.
For the moment, this patch should have no impact. However, when
quic_conn thread migration on accept will be implemented, it will be
easier to remap only one CID to the new thread. New CIDs will be
allocated after migration on the new thread.
This should be backported up to 2.7 after a period of observation.
Define a new protocol callback set_affinity. This function is used
during listener_accept() to notify about a rebind on a new thread just
before pushing the connection on the selected thread queue. If the
callback fails, accept is done locally.
This change will be useful for protocols with state allocated before
accept is done. For the moment, only QUIC protocol is concerned. This
will allow rebinding the quic_conn to a new thread depending on its
load.
This should be backported up to 2.7 after a period of observation.
Each quic_conn is inserted in an accept queue to allocate the upper
layers. This is done through a listener tasklet in
quic_sock_accept_conn().
This patch interrupts the accept process for a quic_conn in
closing/draining state. Indeed, this connection will soon be closed so
it's unnecessary to allocate a complete stack for it.
This patch will become necessary when thread migration is implemented.
Indeed, it won't be allowed to proceed to thread migration for a closing
quic_conn.
This should be backported up to 2.7 after a period of observation.
TID encoding in CID was removed by a recent change. It is now possible
to access the <tid> member stored in the quic_connection_id instance.
For an unknown CID, a quick solution was to redispatch to the thread
corresponding to the first CID byte. This ensures that an identical CID
will always be handled by the same thread, to avoid creating multiple
identical connections. However, this forces an uneven load repartition which
can be critical for QUIC handshake operation.
To improve this, remove the above constraint. An unknown CID is now
handled by its receiving thread. However, this means that if multiple
packets are received with the same unknown CID, several threads will try
to allocate the same connection.
To prevent this race condition, CID insertion in the global tree is now
conducted first, before creating the connection. This is a thread-safe
operation which can only be executed by a single thread. The thread
which have inserted the CID will then proceed to quic_conn allocation.
Other threads won't be able to insert the same CID : this will stop the
treatment of the current packet which is redispatch to the now owning
thread.
This should be backported up to 2.7 after a period of observation.
CIDs were moved from a per-thread list to a global list instance. The
encoded TID is thus not needed anymore.
This should be backported up to 2.7 after a period of observation.
Previously, quic_connection_id were stored in a per-thread tree list.
Datagrams were first dispatched to the correct thread using the encoded
TID before a tree lookup was done.
Remove these trees and replace them with a global list of 256 tree
entries. A CID uses the list index corresponding to its first byte.
On datagram dispatch, the CID is looked up in its tree and the TID is
retrieved using the new member quic_connection_id.tid. As such, a
read-write lock protects each tree instance. With 256 entries, it is
expected that contention should be reduced.
A new structure quic_cid_tree serves as a tree container associated with
its read-write lock. An API is implemented to ensure lock safety for
insert/lookup/delete operation.
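A hedged sketch of the container and its selection (names follow the
description; the locking macros are the ones used elsewhere in HAProxy):

    /* sketch: one tree + rwlock per possible first CID byte */
    struct quic_cid_tree {
        __decl_thread(HA_RWLOCK_T lock);
        struct eb_root root;
    };

    #define QUIC_CID_TREES_CNT 256
    static struct quic_cid_tree quic_cid_trees[QUIC_CID_TREES_CNT];

    /* tree selection from the first byte of the CID */
    struct quic_cid_tree *tree = &quic_cid_trees[cid->data[0]];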
This patch is a step forward to be able to break the affinity between a
CID and a TID-encoded thread. This is required to be able to migrate a
quic_conn after accept, to select the thread based on its load.
This should be backported up to 2.7 after a period of observation.
Remove the <tid> member from quic_conn. It is moved to the
quic_connection_id instance.
For the moment, this change has no impact. Indeed, qc.tid reference
could easily be replaced by tid as all of this work was already done on
the connection thread. However, it is planified to support quic_conn
thread migration in the future, so removal of qc.tid will simplify this.
This should be backported up to 2.7.
ODCID are never stored in the CID tree. Instead, we store our generated
CID which is directly derived from the CID using a hash function. This
operation is done via quic_derive_cid().
Previously, the generated CID was returned as a 64-bit integer. However,
this is cumbersome to convert to an array of bytes, which is the most
common CID representation. Adjust this by modifying return type to a
quic_cid struct.
This should be backported up to 2.7.
qc_parse_hd_form() is the function used to parse the first byte of a
packet and return its type and version. Its API has been simplified with
the following changes:
* extra out parameters are removed (long_header and version). All info
is now stored directly in the quic_rx_packet instance
* a new dummy version is declared in quic_versions array with a 0 number
code. This can be used to match Version negotiation packets.
* a new default packet type is defined QUIC_PACKET_TYPE_UNKNOWN to be
used as an initial value.
Also, the function has been exported to an include file. This will be
useful to be able to reuse on quic-sock to parse the first packet of a
datagram.
This should be backported up to 2.7.
Two different structs exist for QUIC connection IDs:
* quic_connection_id which represents a full CID with its sequence
number
* quic_cid which is just a buffer with a length. It is contained in the
above structure.
To better differentiate them, rename all quic_connection_id variable
instances to "conn_id" by contrast to "cid" which is used for quic_cid.
This should be backported up to 2.7.
Remove quic_conn instance as first parameter of
quic_stateless_reset_token_init() and quic_stateless_reset_token_cpy()
functions. It was only used for trace purpose.
The main advantage is that it will be possible to allocate a QUIC CID
without a quic_conn instance using new_quic_cid(), which is required to
first check if a CID already exists before allocating a connection.
This should be backported up to 2.7.
QUIC_LOCK label is never used. Indeed, lock usage is minimal on QUIC as
every connection is pinned to its own thread.
This should be backported up to 2.7.
Adjust BUG_ON() statement to allow tasklet_wakeup_after() for tasklets
with tid pinned to -1 (the current thread). This is similar to
tasklet_wakeup().
This should be backported up to 2.6.
For a long time the maximum number of concurrent streams was set once for
both sides (front and back) while the impacts are different. This commit
allows it to be configured separately for each side. The older setting
remains the fallback choice when the new ones are not set.
For a long time the initial window size (per-stream size) was set once
for both directions, frontend and backend, resulting in a tradeoff between
upload speed and download fairness. This commit allows it to be configured
separately for each side. The older setting remains the fallback choice
when the new ones are not set.
In the same way as for a stream-connector attached to a mux, an EOS is now
propagated from an applet to its stream-connector. To do so, sc_applet_eos()
function is added.
Now that there is an SC flag to state the endpoint has reported an end-of-stream,
it is possible to distinguish an EOS from an abort at the stream-connector
level.
sc_conn_read0() function is renamed to sc_conn_eos() and it propagates an
EOS by setting SC_FL_EOS instead of SC_FL_ABRT_DONE. It only concerns
stream-connectors attached to a mux.
SC_FL_EOS flag is added to report the end-of-stream at the SC level. It will
be used to distinguish end of stream reported by the endpoint, via the
SE_FL_EOS flag, and the abort triggered by the stream, via the
SC_FL_ABRT_DONE flag.
In this patch, the flag is defined and is systematically tested everywhere
SC_FL_ABRT_DONE is tested. It should be safe because it is never set.
In the syslog applet, when there is no output data, nothing is performed and
the applet leaves by requesting more data. But it is an issue because a
client abort is only handled if it is reported with the last bytes of the
message. If the abort occurs after the message was handled, it is ignored.
The session remains open and inactive until the client timeout is
triggered. If no such timeout is configured, given that the default maxconn
is 10, all slots can quickly become busy and make the applet unresponsive.
To fix the issue, the best is to always try to read a message when the I/O
handler is called. This way, the abort can be handled. And if there is no
data, we leave as usual.
This patch should fix the issue #2112. It must be backported as far as 2.4.
Since the commit f2b02cfd9 ("MAJOR: http-ana: Review error handling during
HTTP payload forwarding"), during the payload forwarding, when we are
analyzing one side we no longer test the opposite side. It means that when
the HTTP request forwarding analyzer is called, we no longer check the
response side and vice versa.
Unfortunately, since then, the HTTP tunneling is broken after a protocol
upgrade. Only the response is switched to TUNNEL mode. The request remains
in DONE state. As a consequence, data received from the server are forwarded
to the client but not data received from the client.
To fix the bug, when both sides are in DONE state, both are switched at the
same time to TUNNEL mode if it was requested. It is performed in the same way in
http_end_request() and http_end_response().
This patch should fix the issue #2125. It is 2.8-specific. No backport
needed.
As revealed by GH #2120 opened by @Tristan971, there are cases where ACKs
have to be sent without a packet to acknowledge, because the ACK timer has
been triggered and the connection needs to probe the peer at the same time.
Thank you to @Tristan971 for having reported this issue.
Must be backported to 2.6 and 2.7.
It is the last commit on this subject. We stop using the SE_FL_ERROR flag
from the SC, except at the I/O level. Otherwise, we rely on the SC_FL_ERROR
flag. Now, there should be a real separation between SE flags and SC flags.
In the same way as the previous commit, we stop using the SE_FL_ERROR flag
from analyzers and their sub-functions. We now fully rely on SC_FL_ERROR to
do so.
SE_FL_ERROR flag is no longer set when an error is detected during the
connection establishment. SC_FL_ERROR flag is set instead. So it is safe to
remove test on SE_FL_ERROR to detect connection establishment error.
We can now fully rely on the SC_FL_ERROR flag from the stream. The first
step is to stop setting the SE_FL_ERROR flag: only endpoints are responsible
for setting this flag. It was a design limitation. It is now fixed.
From the stream, when SE_FL_ERROR flag is tested, we now also test the
SC_FL_ERROR flag. Idea is to stop to rely on the SE descriptor to detect
errors.
The SC_FL_ERROR flag is added to acknowledge errors on the endpoint. When
the SE_FL_ERROR flag is detected on the SE descriptor, the corresponding
flag is set on the SC. The idea is to avoid, as far as possible,
manipulating the SE descriptor in upper layers, and to know when an error
in the endpoint is handled by the SC.
For now, this flag is only set and cleared but never tested.
There is no reason to remove this flag. When the SC endpoint is reset, it is
replaced by a new one. The old one is released. It was useful when the new
endpoint inherited some flags from the old one. But it is no longer
performed. Thus there is no reason to still unset this flag.
When an applet is woken up, before calling its io_handler, we pretend it has
no more data to deliver. So, after the io_handler execution, it is a bug if
an applet states it has more data to deliver while the end of input is
reached.
So a BUG_ON() is added to be sure it never happens.
It is not the SC responsibility to report errors on the SE descriptor. It is
the endpoint responsibility. It must switch SE_FL_ERR_PENDING into
SE_FL_ERROR if the end of stream was detected. It can even be considered
a bug if it is not done by the endpoint.
So now, on sending path, a BUG_ON() is added to abort if SE_FL_EOS and
SE_FL_ERR_PENDING flags are set but not SE_FL_ERROR. It is truly important
to handle this case in the endpoint to be able to properly shut the endpoint
down.
SE_FL_ERR_PENDING is used to report an error on the write side. But it is
not a terminal error. Some incoming data may still be available. In the cli
analyzers, it is important to not close the stream when this flag is
set. Otherwise the response to a command can be truncated. It is probably
hard to observe. But it remains a bug.
While this patch could be backported to 2.7, there is no real reason to do
so, except if someone reports a bug about truncated responses.
Here again, it is just a flag renaming. In SC flags, there is no longer
shutdown for reads but aborts. For now this flag is set when a read0 is
detected. It is of course not accurate. This will be changed later.
After the flag renaming, it is now the turn for the channel function to be
renamed and moved in the SC scope. channel_shutw_now() is replaced by
sc_schedule_shutdown(). The request channel is replaced by the front SC and
the response is replaced by the back SC.
Because shutdowns for reads are now considered as aborts, the shutdowns
for writes can now be considered as shutdowns. Here it is just a flag
renaming. SC_FL_SHUTW_NOW is renamed SC_FL_SHUT_WANTED.
After the flag renaming, it is now the turn for the channel function to be
renamed and moved in the SC scope. channel_shutr_now() is replaced by
sc_schedule_abort(). The request channel is replaced by the front SC and the
response is replaced by the back SC.
Most of calls to channel_abort() are associated to a call to
channel_auto_close(). Others are in areas where the auto close is the
default. So, it is now systematically enabled when an abort is performed on
a channel, as part of the channel_abort() function.
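Under that assumption, the change amounts to something like this sketch
(2.8-era flag names; the real helper may differ in details):

    /* sketch: the abort now implies auto close */
    static inline void channel_abort(struct channel *chn)
    {
        chn->flags |= (CF_SHUTR_NOW | CF_SHUTW_NOW);
        chn->flags |= CF_AUTO_CLOSE; /* was a separate channel_auto_close() */
    }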
First, it is useless to abort both channels explicitly. For HTTP streams,
http_reply_and_close() is called. This function already takes care of
aborting processing. For TCP streams, we can rely on stream_retnclose().
To set termination flags, we can also rely on http_set_term_flags() for HTTP
streams and sess_set_term_flags() for TCP streams. Thus no reason to handle
them by hand.
At the end, the error handling after filters evaluation is now quite simple.
We erroneously thought that an attempt to receive data was not possible if
the SC was waiting for more room in the channel buffer, and a BUG_ON() was
added to detect such cases. In fact, it is possible.
The regression was added in commit 341a5783b ("BUG/MEDIUM: stconn: stop to
enable/disable reads from streams via si_update_rx").
This patch should fix the issue #2115. It must be backported if the commit
above is backported.
A regression was introduced when stream timeouts were refactored. Write
timeouts are not tested in the right order. When timeouts of the front SC
are handled, we must then test the read timeout on the request channel and
the write timeout on the response channel. But the write timeout is tested
on the request channel instead. On the back SC, the same mix-up is performed.
We must be careful to handle timeouts before checking channel flags. To
avoid any confusion, all timeouts are handled first, on front and back SCs.
Then the flags of both channels are tested.
It is a 2.8-specific issue. No backport needed.
There is a bug at the beginning of process_stream(). The SE_FL_ERROR flag is
tested against backend stream-connector's flags instead of its SE
descriptor's flags. It is an old typo, introduced when the stream-interfaces
were replaced by the conn-streams.
This patch must be backported as far as 2.6.
This bug arrived with this commit:
MEDIUM: quic: Ack delay implementation
After having probed the Handshake packet number space, one must not select the
Application encryption level to continue trying to build packets, as is done
when the connection is not probing. Indeed, if the ACK timer has been triggered
in the meantime, the packet builder will try to build a packet at the Application
encryption level to acknowledge the received packet. But there is very often
no 1-RTT packet to acknowledge when the connection is probing before the
handshake is completed. This triggers a BUG_ON() in qc_do_build_pkt() which
checks that the tree of ACK ranges to be used is not empty.
Thank you to @Tristan971 for having reported this issue in GH #2109.
Must be backported to 2.6 and 2.7.
qel->pktns->tx.pto_probe is set to 0 after having prepared a probing
datagram. There is no reason to check this parameter. Furthermore
it is always 0 when the connection does not probe the peer.
Must be backported to 2.6 and 2.7.
As reported by @Tristan971 in GH #2116, the congestion control window could
drop to zero due to an inversion in the code handling the reduction factor.
On a new loss event, the reduction must be applied to the slow start
threshold, and the window should never go below ->min_cwnd
(2*max_udp_payload_sz).
The same issue existed in both the NewReno and Cubic algorithms.
Furthermore, in NewReno, only the threshold was decremented.
Must be backported to 2.6 and 2.7.
Add two missing checks to avoid subtracting a too-big value from a too-small
one. In this case, the resulting wrapped huge value could be passed to the
function which removes the last range of a tree of ACK ranges as the encoded
size limit not to go below, cancelling the ACK range deletion. The
consequence could be that no ACK was sent.
Must be backported to 2.6 and 2.7.
qc_may_build_pkt() has been modified several times without regard to the
conditions really used by the functions it is supposed to allow to send
packets (qc_build_pkt()/qc_do_build_pkt()) to finally send packets just
after having received others. This led to contradictions and possibly very
long loops sending empty packets (PADDING-only packets), because
qc_may_build_pkt() could allow qc_build_pkt()/qc_do_build_pkt() to build a
packet while the latter did nothing except send PADDING frames, because
from their point of view there was nothing to send.
From now on, it is the job of qc_may_build_pkt() to decide whether there
are packets to send just after having received others AND to provide this
information to qc_build_pkt()/qc_do_build_pkt().
Note that the unique case where the acknowledgements are completely ignored
is when the endpoint must probe. But this is when sending at most two
datagrams!
This commit also fixes the issue reported by Willy about a very low
throughput performance when the client serialized its requests.
Must be backported to 2.7 and 2.6.
This should help in diagnosing issues.
Some adjustments had to be made to avoid dereferencing quic_conn objects
from TRACE_*() calls.
Must be backported to 2.7 and 2.6.
Do not ignore very short RTTs (less than 1ms) before computing the smoothed
RTT, initializing it to an "infinite" value (UINT_MAX).
Must be backported to 2.7 and 2.6.
Add the number of lost packets and the maximum congestion control window
computed by the algorithms to "show quic".
Same thing for the traces of the existing congestion control algorithms.
Must be backported to 2.7 and 2.6.
In fd_update_events(), we loop until there's no bit left in the
running_mask that is not in the thread_mask. Problem is, the thread sets
its running_mask bit before that loop, so if two threads do the same, and a
third one just closes the FD and sets the thread_mask to 0, then
running_mask will always be non-zero and we will loop forever. This is
trivial to reproduce when using a DNS resolver that will just answer
"port unreachable", but could theoretically happen with other types of
file descriptors too.
To fix that, just don't bother looping if we're no longer in the
thread_mask; if that happens, we know we won't have to take care of the
FD anyway.
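A minimal sketch of the fix, using the tid_bit/fdtab names of these versions
(the return tag is illustrative):
    /* in fd_update_events(): */
    if (!(fdtab[fd].thread_mask & tid_bit))
        return FD_UPDT_CLOSED; /* FD closed or migrated: not ours anymore,
                                * no point waiting for running_mask to drain */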
This should be backported to 2.7, 2.6 and 2.5.
Setting "shards by-group" will create one shard per thread group. This
can often be a reasonable tradeoff between a single one that can be
suboptimal on CPUs with many cores, and too many that will eat a lot
of file descriptors. It was shown to provide good results on a 224
thread machine, with a distribution that was even smoother than the
system's since here it can take into account the number of connections
per thread in the group. Depending on how popular it becomes, it could
even become the default setting in a future version.
Instead of artificially setting the shards count to MAX_THREAD when
"by-thread" is used, let's reserve special values for symbolic names
so that we can add more in the future. For now we use value -1 for
"by-thread", which requires turning the type into a signed int, but it was
already used as such everywhere anyway.
fd_migrate_on() can be used to migrate an existing FD to any thread, even
one belonging to a different group from the current one and from the
caller's. All that is needed is to make sure the FD is still valid when
the operation is performed (which is the case when such operations happen).
This is potentially slightly expensive since it locks the tgid during the
delicate operation, but it is normally performed only from an owning
thread to offer the FD to another one (e.g. reassign a better thread upon
accept()).
We're only checking for 0, 1, or >1 groups enabled there, and we'll soon
need to be more precise and know quickly which groups are non-empty.
Let's just replace the count with a mask of enabled groups. This will
make it possible to quickly spot the presence of any such group in a set.
Alert when the len argument of a stick table type contains invalid
characters.
Replace atol() with strtol().
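A minimal sketch of the new check (variable and argument names illustrative):
    char *endptr;
    long len = strtol(args[idx], &endptr, 10);
    if (*endptr != '\0') {
        memprintf(err, "invalid character '%c' in len argument", *endptr);
        return -1;
    }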
Could be backported to all maintained versions.
It was missing from the output but is sometimes convenient to observe
and understand how incoming connections are distributed. The CPU usage
is reported as the instant measurement of 100-idle_pct for each thread,
and the average value is shown for the aggregated value.
This could be backported as it's helpful in certain troubleshooting
sessions.
As the ACK frames are not added to the packet list of ack-eliciting frames,
they could not be traced. But there is a flag to identify such packets.
Let's use it to add this information to the traces of TX packets.
Must be backported to 2.6 and 2.7.
Dump at proto level the packet information when its header protection was
removed.
Remove the no longer used qpkt_trace variable.
Must be backported to 2.7 and 2.6.
It is possible that the handshake was not confirmed and there was no more
packet in flight to probe with. In this case the server must wait for
the client to be unblocked without probing any packet number space,
contrary to what was revealed by interop tests as follows:
[01|quic|2|uic_loss.c:65] TX loss pktns : qc@0x7fac301cd390 pktns=I pp=0
[01|quic|2|uic_loss.c:67] TX loss pktns : qc@0x7fac301cd390 pktns=H pp=0 tole=-102ms
[01|quic|2|uic_loss.c:67] TX loss pktns : qc@0x7fac301cd390 pktns=01RTT pp=0 if=1054 tole=-1987ms
[01|quic|5|uic_loss.c:73] quic_loss_pktns(): leaving : qc@0x7fac301cd390
[01|quic|5|uic_loss.c:91] quic_pto_pktns(): entering : qc@0x7fac301cd390
[01|quic|3|ic_loss.c:121] TX PTO handshake not already completed : qc@0x7fac301cd390
[01|quic|2|ic_loss.c:141] TX PTO : qc@0x7fac301cd390 pktns=I pp=0 dur=83ms
[01|quic|5|ic_loss.c:142] quic_pto_pktns(): leaving : qc@0x7fac301cd390
[01|quic|3|c_conn.c:5179] needs to probe Initial packet number space : qc@0x7fac301cd390
This bug was not visible before this commit:
BUG/MINOR: quic: wake up MUX on probing only for 01RTT
This means that before it, one could do bad things (probing the 01RTT packet number
space before the handshake was confirmed).
Must be backported to 2.7 and 2.6.
When end-of-stream is reported by an H2 stream, we must take care to also
report an error if end-of-input was not reported. Indeed, it is now
mandatory to set the SE_FL_EOI or SE_FL_ERROR flag when SE_FL_EOS is set.
It is a 2.8-specific issue. No backport needed.
When a TCP connection is first upgraded to H1 and then to H2, the
stream-connector created by the PT mux must be destroyed because the H2 mux
cannot inherit from it. When this is performed, the SE_FL_EOS flag is set
but SE_FL_EOI must also be set. It is now required to never set SE_FL_EOS
without SE_FL_EOI or SE_FL_ERROR.
It is a 2.8-specific issue. No backport needed.
Timeouts for dynamic resolutions are not handled at the stream level but by
the resolvers themselves. It means no connect, client or server timeouts
are defined on the internal proxy used by a resolver.
While this is not an issue for DNS resolution over UDP, it can be a problem
for resolution over TCP. New sessions are automatically created when
required, and killed on excess. But only established connections are
considered; connecting ones are never killed. Because there is no connect
timeout, we rely on the kernel to report a connection error, and this may
take quite a long time.
Because resolutions are periodically triggered, this may lead to an excess
of unusable sessions in connecting state. This also prevents HAProxy from
quickly exiting on soft-stop. It is annoying, especially because there is
no reason not to set a connect timeout.
So to mitigate the issue, we now use the "resolve" timeout as the connect
timeout for the internal proxy attached to a resolver.
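A minimal sketch of the change (field names approximate):
    /* while initializing the resolver's internal proxy: */
    resolvers->px->timeout.connect = resolvers->timeout.resolve;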
This patch should be backported as far as 2.4.
Thanks to the previous commit ("BUG/MEDIUM: dns: Kill idle DNS sessions
during stopping stage"), DNS idle sessions are killed during the stopping
stage. But the task responsible for killing these sessions runs every 5
seconds. It means that when HAProxy is stopped, we can observe a delay
before the process exits.
To reduce this delay, when the resolvers task is executed, the tasks of all
idle DNS sessions are woken up.
This patch must be backported as far as 2.6.
There is no server timeout for DNS sessions over TCP. It means idle
sessions cannot time out on their own. There is a task running
periodically, every 5s, to kill the excess of idle sessions, but the last
one is never killed. During the stopping stage, this is an issue since
dynamic resolutions are no longer performed (2ec6f14c "BUG/MEDIUM:
resolvers: Properly stop server resolutions on soft-stop").
Before the above commit, during the stopping stage, the DNS sessions were
killed when a resolution was triggered. Now, nothing kills these sessions.
This prevents the process from finishing on soft-stop.
To fix this bug, the task killing the excess of idle sessions now kills all
idle sessions during the stopping stage.
This patch must be backported as far as 2.6.
When the log applet is executed while a shut is pending, the remaining
output data must always be consumed. Otherwise, this can prevent the stream
from exiting, leading to a spinning loop on the applet.
It is 2.8-specific. No backport needed.
When the stats applet is executed while a shut is pending, the remaining
output data must always be consumed. Otherwise, this can prevent the stream
from exiting, leading to a spinning loop on the applet.
It is 2.8-specific. No backport needed.
When the http-client applet is executed while a shut is pending, the
remaining output data must always be consumed. Otherwise, this can prevent
the stream from exiting, leading to a spinning loop on the applet.
It is 2.8-specific. No backport needed.
When the cli applet is executed while a shut is pending, the remaining
output data must always be consumed. Otherwise, this can prevent the stream
from exiting, leading to a spinning loop on the applet.
This patch should fix the issue #2107. It is 2.8-specific. No backport
needed.
An applet must never set the SE_FL_EOS flag without the SE_FL_EOI or
SE_FL_ERROR flag. Here, the SE_FL_EOI flag was missing for the "quit" and
"_getsocks" commands. Indeed, these commands are terminal.
This bug triggers a BUG_ON() recently added.
This patch is related to the issue #2107. It is 2.8-specific. No backport
needed.
When calls to strcpy() were replaced with calls to strlcpy2(), one of them
was replaced incorrectly, with the source and size arguments inverted.
Correct that.
This should fix issue #2110.
These ones are generally harmless on modern compilers because the
compiler checks them. While gcc optimizes them away without even
referencing strcpy(), clang prefers to call strcpy(). Nevertheless they
prevent stricter checks from being enabled, so better to remove them
altogether. They were all replaced by strlcpy2() and the size of the
destination, which is always known there.
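The conversion pattern, for illustration (with dst a fixed-size array whose
size is known at the call site):
    - strcpy(dst, src);
    + strlcpy2(dst, src, sizeof(dst)); /* bounded by the destination size */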
strcpy() is quite nasty but tolerable for copying constants. Here, however,
it copies a variable path into a node, in a code path that's not
trivial to follow given that it takes the node from the result of
a tree lookup. Let's get rid of it and mention where the entry
is retrieved.
As every time strncat() is used, it's wrong, and this one is no exception.
Users often think that the length applies to the destination, but it
applies to the source, which makes it hard to use correctly. The bug did
not have an impact because the destination was preallocated from the sum
of all the individual lengths as measured by strlen(), so there was no
chance one of them would change in between. But it could change in the
future. Let's fix it to use memcpy() instead for strings, or byte copies
for delimiters.
No backport is needed, though it can be done if it helps to apply other
fixes.
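For illustration (variable names made up), strncat()'s length bounds what is
read from the source, not the room left in the destination, while the
replacement makes every size explicit:
    /* before: <len> does NOT protect <dst> from overflowing */
    strncat(dst, src, len);
    /* after: explicit copies, lengths measured once with strlen() */
    memcpy(p, src, len);
    p += len;
    *p++ = ',';   /* delimiter appended as a plain byte */
    *p = '\0';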
Add code so that compression can be used for requests as well.
New compression keywords are introduced:
- "direction" specifies what we want to compress. Valid values are
  "request", "response", or "both".
- "type-req" and "type-res" define the content-types to be compressed for
  requests and responses, respectively. "type" is kept as an alias for
  "type-res" for backward compatibility.
- "algo-req" specifies the compression algorithm to be used for requests.
  Only one algorithm can be provided.
- "algo-res" provides the list of algorithms that can be used to compress
  responses. "algo" is kept as an alias for "algo-res" for backward
  compatibility.
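For illustration, a hypothetical configuration combining these keywords (the
algorithm and content-type values are just examples):
    compression direction both
    compression algo-req  gzip
    compression algo-res  gzip deflate
    compression type-req  application/json
    compression type-res  text/html text/plain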
Make provision for being able to store both the compression algorithms and
the content-types to compress, for both requests and responses. For now
only the response ones are used.
Make provision for storing the compression algorithm and the compression
context twice, one for requests, and the other for responses. Only the
response ones are used for now.
When a trace message for an applet is dumped, if the SC exists, the stream
always exists too; there is no way to attach an applet to a health-check.
So, we can use the unsafe version __sc_strm() to get the stream.
This patch is related to #2106. Not sure it will be enough for
Coverity. However, there is no bug here.
On startup/reload, startup_logs_init() will try to export the startup logs
shm file descriptor through the internal HAPROXY_STARTUPLOGS_FD env
variable.
While memprintf() is used to prepare the string to be exported via
setenv(), the str_fd argument (first argument passed to memprintf()) could
be non-NULL as a result of the HAPROXY_STARTUPLOGS_FD env variable being
already set.
Indeed, str_fd is already used earlier in the function to store the result
of getenv("HAPROXY_STARTUPLOGS_FD").
The issue here is that memprintf() is designed to free the 'out' argument
if out != NULL, and here we don't expect str_fd to be freed since it was
provided by getenv(); freeing it would result in a memory violation.
To prevent any invalid free, we must ensure that str_fd is set to NULL
prior to calling memprintf().
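A minimal sketch of the fix (surrounding code simplified):
    str_fd = getenv("HAPROXY_STARTUPLOGS_FD");
    /* ... str_fd possibly used to recover the inherited fd ... */
    str_fd = NULL; /* memprintf() free()s a non-NULL *out, and the getenv()
                    * result must never be freed */
    memprintf(&str_fd, "%d", fd);
    setenv("HAPROXY_STARTUPLOGS_FD", str_fd, 1);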
This must be backported in 2.7 with eba6a54cd4 ("MINOR: logs: startup-logs
can use a shm for logging the reload")
People who use HAProxy as PID 1 in containers sometimes start
other things from the program section. This is still not recommended as
the master process has minimal features regarding process management.
Environment variables are still inherited, even internal ones.
Since 2.7, this could provoke a crash when inheriting the
HAPROXY_STARTUPLOGS_FD variable.
Note: for future releases it would be better to clean the environment and
set a list of variables to be exported. We first need to determine which
variables are used by users.
Must be backported in 2.7.
Previously, ODCIDs were concatenated with the client address. This was
done to prevent a collision between two endpoints which used the same
ODCID.
Thanks to the two previous patches, the first CID generated for a
connection is now directly derived from the client ODCID using a hash
function which uses the client source address for the same purpose. Thus,
it is now unneeded to concatenate the client address to the <odcid>
quic-conn member.
This change makes it possible to simplify the quic_cid structure management
and reduce its size, which is important as it is embedded several times in
various structures such as quic_conn and quic_rx_packet.
The first connection CID generation has been altered. It is now directly
derived from the client ODCID since the previous commit:
commit 162baaff7a
MINOR: quic: derive first DCID from client ODCID
This patch removes the ODCID tree which is now unneeded. On connection
lookup via CID, if a DCID is not found, the hash derivation is performed
for an INITIAL/0-RTT packet only. In case a client has used an ODCID
multiple times, this will allow retrieving our generated DCID in the CID
tree without storing the ODCID node.
The impact of these two combined patches is that they may slightly reduce
haproxy's memory footprint by removing a tree node from the quic_conn
structure. The CPU cost induced by hash derivation should only be incurred
a few times per connection, as the client will start to use our generated
CID as soon as it receives it.
Change the generation of the first CID of a connection. It is now directly
derived from the client ODCID using a 64-bit hash function. The client
address is added to avoid collisions between clients which could use the
same ODCID.
For the moment, this change has no functional impact. However, it will be
directly used by the next commit to be able to remove the ODCID tree.
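Conceptually (the helper and field names below are made up, not the actual
implementation):
    /* derive a stable 64-bit value from the ODCID and the client address */
    uint64_t h = hash64(odcid->data, odcid->len, cli_addr, cli_addr_len);
    memcpy(new_cid->data, &h, sizeof(h)); /* 8-byte derived CID */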
This should be backported up to 2.7.
This is due to this commit:
MINOR: quic: Add trace to debug idle timer task issues
where traces were added without having been tested at developer level.
<qc> was dereferenced after having been released by qc_conn_release().
Set qc to NULL after it has been released to forbid its dereferencing.
Add a check for qc->idle_timer_task in the traces added by the commit
mentioned above to prevent its dereferencing if NULL.
Take the opportunity of this patch to modify trace events from
QUIC_EV_CONN_SSLALERT to QUIC_EV_CONN_IDLE_TIMER.
Must be backported to 2.6 and 2.7.
The HTTP message must remain in BODY state during the analysis, to be able
to report an accurate termination state in the logs. It is also important
to know that the HTTP analysis is still in progress. Thus, when we are
waiting for the message payload, the message is no longer switched to DATA
state. This was used to avoid processing the "Expect: " header at each
evaluation. But thanks to the previous patch, it is no longer necessary.
This patch also fixes a bug in the lua filter api. Some functions must be
called during the message analysis and not during the payload forwarding. It
is not valid to try to manipulate headers during the forward stage because
headers are already forwarded. We rely on the message state to detect
errors. So the api was unusable if a "wait-for-body" action was used.
This patch should fix the issue #2093. It relies on the commit:
* MINOR: http-ana: Add a HTTP_MSGF flag to state the Expect header was checked
Both must be backported as far as 2.5.
HTTP_MSGF_EXPECT_CHECKED is now set on the request message to know that the
"Expect: " header was already handled, if any. The flag is set from the
moment we try to handle the header to send a "100-continue" response,
whether it was found or not.
This way, when we are waiting for the request payload, thanks to this flag,
we try to handle the "Expect: " header only once. Before, it was performed
by changing the message state from BODY to DATA. But this had some side
effects and was not accurate. So, it is better to rely on a flag to do so.
Now that event_hdl api is properly implemented in hlua, we may add the
per-server event subscription in addition to the global event
subscription.
Per-server subscription allows being notified of events related to a
single server. It is useful to track a server's UP/DOWN and DEL events.
It works exactly like core.event_sub() except that the subscription
will be performed within the server dedicated subscription list instead
of the global one.
The callback function will only be called for server events affecting
the server from which the subscription was performed.
Regarding the implementation, it is pretty trivial at this point, we add
more doc than code this time.
Usage examples have been added to the (lua) documentation.
Now that the event handler API is pretty mature, we can expose it in
the lua API.
Introducing the core.event_sub(<event_types>, <cb>) lua function that
takes an array of event types <event_types> as well as a callback
function <cb> as argument.
The function returns a subscription <sub> on success.
Subscription <sub> allows you to manage the subscription from anywhere
in the script.
To this day only the sub->unsub method is implemented.
The following event types are currently supported:
- "SERVER_ADD": when a server is added
- "SERVER_DEL": when a server is removed from haproxy
- "SERVER_DOWN": server states goes from up to down
- "SERVER_UP": server states goes from down to up
As for the <cb> function: it will be called when one of the registered
event types occurs. The function will be called with 3 arguments:
cb(<event>,<data>,<sub>)
<event>: the event type (string) that triggered the function
(could be any of the types used in <event_types> when registering
the subscription).
<data>: data associated with the event (specific to each event family).
For "SERVER_" family events, server details such as the server
name/id/proxy will be provided.
If the server still exists (not yet deleted), a reference to the live
server is provided to spare you an additional lookup if you need
direct access to the server from lua.
<sub>: refers to the subscription, in case you need to manage it from
within an event handler
(it refers to the same subscription as the one returned from
core.event_sub()).
Subscriptions are per-thread: the thread that will be handling the
event is the one which performed the subscription using the
core.event_sub() function.
Each thread treats events sequentially; it means that if you have,
let's say, SERVER_UP then SERVER_DOWN in a short timelapse, your
cb function will first be called with SERVER_UP, and once you're done
handling the event, it will be called again with SERVER_DOWN.
This ensures event consistency when it comes to logging / triggering
logic from lua.
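A hypothetical usage sketch (the exact <data> field names are assumptions,
see the lua documentation):
    local sub = core.event_sub({"SERVER_UP", "SERVER_DOWN"},
        function(event, data, sub)
            -- 'data.name' is an assumed field name for the server name
            core.Alert("got " .. event .. " for server " .. data.name)
            -- sub:unsub() would cancel the subscription from here
        end)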
Your lua cb function may yield if needed, but you are encouraged to
process the event as fast as possible to prevent the event queue from
growing.
To prevent abuses, if the event queue for the current subscription goes
over 100 unconsumed events, the subscription will pause itself
automatically for as long as it takes for your handler to catch up.
This will lead to events being missed, so a warning will be emitted in
the logs to inform you about it. This is not something you want to let
happen too often; it may indicate that you subscribed to an event that
occurs too frequently and/or that your callback function is too slow to
keep up the pace, and you should review it.
If you want to do some parallel processing because your callback
functions are slow: you might want to create subtasks from lua using
core.register_task() from within your callback function to perform the
heavy job in a dedicated task and allow remaining events to be processed
more quickly.
Please check the lua documentation for more information.
Adding alternative findserver() functions to be able to perform a
unique match based on name or puid, leveraging the revision id (rid)
to make sure the function won't match a new server reusing the
same name or puid as the "potentially deleted" server we were initially
looking for.
For example, if you were in the position of finding a server based on
a given name provided to you by a different context:
Since dynamic servers were implemented, between the time the name was
picked and the time you perform the findserver() call, some dynamic
server deletions/additions could have been performed in the meantime.
In such cases, findserver() could return a new server that re-uses the
name of a previously deleted server. Depending on your needs, it could
be perfectly fine, but there are some cases where you want to lookup
the original server that was provided to you (if it still exists).
While working on event handling from lua, the need for a pause/resume
function to temporarily disable a subscription was raised.
We solve this by introducing the EHDL_SUB_F_PAUSED flag for
subscriptions.
The flag is set via _pause() and cleared via _resume(), and it is
checked prior to notifying the subscription in the publish function.
Pause and resume functions are also available via lookups for
identified subscriptions.
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
Use event_hdl_async_equeue_size() in the advanced async task handler to
get the near real-time event queue size.
By near real-time, you should understand that the queue size is not
updated during element insertion/removal, but shortly before insertion
and shortly after removal, so the size should reflect the approximate
queue size at a given time but should definitely not be used as a
unique source of truth.
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
The advanced async mode (EVENT_HDL_ASYNC_TASK) provided full support for
custom tasklet registration.
Due to the similarities between tasks and tasklets, it may be useful
to use the advanced mode with an existing task (not a tasklet).
While the API did not explicitly disallow this usage, things would
go wrong if we tried to wake up a task using tasklet_wakeup() to notify
it about new events.
To make the API support both custom tasks and tasklets, we use the
TASK_IS_TASKLET() macro to call the proper waking function depending
on the task's type:
- For tasklets: we use tasklet_wakeup()
- For tasks: we use task_wakeup()
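A minimal sketch of the dispatch (the wake-up reason is illustrative):
    if (TASK_IS_TASKLET(task))
        tasklet_wakeup((struct tasklet *)task);
    else
        task_wakeup(task, TASK_WOKEN_OTHER);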
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
In _event_hdl_publish(), when publishing an event to async handler(s),
async_data is allocated only once and then relies on refcount
logic to reuse the same data block for multiple async event handlers
(this saves a significant amount of memory).
Because the refcount is first set to 0, there is a small race where
the consumers could consume async data (async data refcount reaching 0)
before publishing is actually over.
The consequence is that async data may be freed by one of the consumers
while we still rely on it within _event_hdl_publish().
This was discovered by chance when stress-testing the API with multiple
async handlers registered to the same event: some of the handlers were
notified about a new event for which the event data was already freed,
resulting in invalid reads and/or segfaults.
To fix this, we first set the refcount to 1, assuming that the
publish function relies on async_data until the publish is over.
At the end of the publish, the reference to the async data is dropped.
This way, async_data is either freed by _event_hdl_publish() itself
or by one of the consumers, depending on who is the last one relying
on it.
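The resulting pattern, sketched (allocation and notification details
omitted):
    data->refcount = 1;             /* the publisher's own reference */
    /* for each async subscriber: */
    HA_ATOMIC_INC(&data->refcount); /* one reference per consumer */
    /* ... queue the event to that subscriber ... */
    /* publishing is over: drop the publisher's reference; whoever drops
     * the last reference frees the block */
    if (HA_ATOMIC_SUB_FETCH(&data->refcount, 1) == 0)
        free(data);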
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
soft-stop was not explicitly handled in the event_hdl API.
Because of this, event_hdl was causing some leaks on deinit paths.
Moreover, a task responsible for handling events could require some
additional cleanups (ie: advanced async task), and as the task was not
protected against abort when soft-stopping, such cleanup could not be
performed unless the task itself implements the required protections,
which is not optimal.
Consider this new approach:
'jobs' global variable is incremented whenever an async subscription is
created to prevent the related task from being aborted before the task
acknowledges the final END event.
Once the END event is acknowledged and freed by the task, the 'jobs'
variable is decremented, and the deinit process may continue (including
the abort of remaining tasks not guarded by the 'jobs' variable).
To do this, a new global mt_list is required: known_event_hdl_sub_list
This list tracks the known (initialized) subscription lists within the
process.
sub_lists are automatically added to the "known" list when calling
event_hdl_sub_list_init(), and are removed from the list with
event_hdl_sub_list_destroy().
This allows us to implement a global thread-safe event_hdl deinit()
function that is automatically called on soft-stop thanks to signal(0).
When event_hdl deinit() is initiated, we simply iterate against the known
subscription lists to destroy them.
event_hdl_subscribe_ptr() was slightly modified to make sure that a sub_list
may not accept new subscriptions once it is destroyed (removed from the
known list).
This can occur between the time the soft-stop is initiated (signal(0)) and
haproxy actually enters in the deinit() function (once tasks are either
finished or aborted and other threads already joined).
It is safe to destroy() the subscription list multiple times as long
as the pointer is still valid (ie: first on soft-stop when handling
the '0' signal, then from regular deinit() path): the function does
nothing if the subscription list is already removed.
We partially reverted "BUG/MINOR: event_hdl: make event_hdl_subscribe
thread-safe" since we can use the parent mt_list locking instead of a
dedicated lock to perform the check against duplicate subscription IDs.
(insert_lock is not useful anymore)
The check in itself is not changed, only the locking method.
sizeof(event_hdl_sub_list) slightly increases: from 24 bytes to 32 bytes
due to the additional mt_list struct within it.
With that said, having a thread-safe list to store known subscription
lists is a good thing: it could help implement additional management
logic for subscription lists and could be useful for adding stats or
debugging tools in the future.
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
event_hdl_sub_list_init() and event_hdl_sub_list_destroy() don't expect
to be called with a NULL argument (to use global subscription list
implicitly), simply because the global subscription list init and
destroy is internally managed.
Adding BUG_ON() to detect such invalid usages, and updating some comments
to prevent confusion around these functions.
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
List insertion in event_hdl_subscribe() was not thread-safe when dealing
with unique identifiers. Indeed, in this case the list insertion is
conditional (we check for a duplicate, then we insert). And while we're
using mt lists for this, the whole operation is not atomic: there is a
race between the check and the insertion.
This could lead to the same ID being registered multiple times with
concurrent calls to event_hdl_subscribe() on the same ID.
To fix this, we add a dedicated 'insert_lock' to the subscription
list struct. The lock's cost is nearly 0 since it is only used when
registering identified subscriptions and the lock window is very short:
we only guard the duplicate check and the list insertion to make the
conditional insertion "atomic" within a given subscription list.
This is the only place where we need the lock: as soon as the item is
properly inserted we're out of trouble because all other operations on
the list are already thread-safe thanks to mt lists.
A new lock hint is introduced: LOCK_EHDL, which is dedicated to event_hdl.
The patch may seem quite large since we had to rework the logic around
the subscribe function and switch from simple mt_list to a dedicated
struct wrapping both the mt_list and the insert_lock for the
event_hdl_sub_list type.
(sizeof(event_hdl_sub_list) is now 24 instead of 16)
However, all the changes are internal: we don't break the API.
If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
core.register_task(function) may now take up to 4 additional arguments
that will be passed as-is to the task function.
This could be convenient to spawn sub-tasks from existing functions
supporting core.register_task() without the need to use global
variables to pass some context to the newly created task function.
The new prototype is:
core.register_task(function[, arg1[, arg2[, ...[, arg4]]]])
Implementation remains backward-compatible with existing scripts.
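For illustration:
    core.register_task(function(msg, count)
        -- msg and count come from the extra register_task() arguments
        for i = 1, count do
            core.Alert(msg)
        end
    end, "hello from subtask", 3)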
Server revision ID was recently added to haproxy with 61e3894
("MINOR: server: add srv->rid (revision id) value")
Let's add it to the hlua server class.
Main lua lock is used at various places in the code.
Most of the time it is used from unprotected lua environments,
in which case the locking is mandatory.
But there are some cases where the lock is attempted from protected
lua environments, meaning that the lock is already owned by the current
thread. Thus a new locking attempt should be skipped to prevent any
deadlock from occurring.
To address this, "already_safe" lock hint was implemented in
hlua_ctx_init() function with commit bf90ce1
("BUG/MEDIUM: lua: dead lock when Lua tasks are trigerred")
But this approach is not very safe, for 2 reasons:
The first reason is that there are still some code paths that could lead
to deadlocks.
For instance, in register_task(), hlua_ctx_init() is called with
already_safe set to 1 to prevent a deadlock from occurring.
But in case of task init failure, hlua_ctx_destroy() will be called
from the same (protected) environment, and hlua_ctx_destroy() does not
offer the already_safe lock hint... resulting in a deadlock.
The second reason is that the already_safe hint is used to completely skip
the SET_LJMP macros (which manipulate the lock internally), leaving some
logic in the function unprotected from lua aborts in case of unexpected
errors when manipulating the lua stack (the lock does not protect against
longjmps).
Instead of leaving the locking responsibility to the caller, which is
quite error-prone since we must find out ourselves whether or not we are
in a protected environment (and is not robust against code reuse),
we move the deadlock protection logic directly into the hlua_lock()
function.
Thanks to a thread-local lock hint, we can easily guess if the current
thread already owns the main lua lock, in which case the locking attempt
is skipped. The thread-local lock hint is implemented as a counter so
that the lock is properly dropped when the counter reaches 0
(to match actual lock() and unlock() calls).
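A minimal sketch of the idea (names and the underlying lock primitive are
illustrative):
    static THREAD_LOCAL int hlua_locked;
    static inline void hlua_lock(struct hlua *hlua)
    {
        if (!hlua_locked)
            lock_main_lua_state();   /* illustrative: take the global lock */
        hlua_locked++;               /* re-entry from a protected env is a no-op */
    }
    static inline void hlua_unlock(struct hlua *hlua)
    {
        if (--hlua_locked == 0)
            unlock_main_lua_state(); /* dropped only when the counter hits 0 */
    }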
This commit depends on "MINOR: hlua: simplify lua locking"
It may be backported to all stable versions.
[prior to 2.5 lua filter API did not exist, filter-related parts
should be skipped]
The check on lua state==0 to know whether locking is required or not can
be performed in a locking wrapper to simplify things a bit and prevent
implementation errors.
Locking from hlua context should now be performed via hlua_lock(L) and
unlocking via hlua_unlock(L)
Using hlua_pushref() everywhere temporary lua objects are involved.
(ie: hlua_checkfunction(), hlua_checktable()...)
Those references are expected to be cleared using hlua_unref() when
they are no longer used.
Using hlua_ref() everywhere temporary lua objects are involved.
Those references are expected to be cleared using hlua_unref()
when they are no longer used.
Several error paths were leaking function or table references
(obtained through the hlua_checkfunction() and hlua_checktable() functions).
Now we properly release the references thanks to hlua_unref() in
such cases.
This commit depends on "MINOR: hlua: add simple hlua reference handling API"
This could be backported to all stable versions although it is not
mandatory, as such leaks only occur on rare error/warn paths.
[prior to 2.5 lua filter API did not exist, the hlua_register_filter()
part should be skipped]
hlua init function references were not released during
hlua_post_init_state().
Fortunately, this function is only used during startup so the resulting
leak is not a big deal.
Since each init lua function runs precisely once, it is safe to release
the ref as soon as the function is restored on the stack.
This could be backported to all stable versions.
Please note that this commit depends on "MINOR: hlua: add simple hlua reference handling API"
In core.register_task(): we take a reference to the function passed as
argument in order to push it in the new coroutine substack.
However, once pushed onto the substack, the reference is no longer
useful and should be cleared.
Currently, this is not the case in hlua_register_task().
Explicitly drop the reference once the function is pushed onto the
coroutine's stack to prevent any reference leak (which could contribute
to resource shortage).
This may be backported to all stable versions.
Please note that this commit depends on "MINOR: hlua: add simple hlua reference handling API"
hlua_checktable() and hlua_checkfunction() both return the raw
value of luaL_ref() function call.
As luaL_ref() returns a signed int, both functions should return a signed
int as well to prevent any misuse of the returned reference value.
We're doing this in an attempt to simplify the handling of temporary
lua object references.
Adding the new hlua_unref() function to release lua object references
created using luaL_ref(, LUA_REGISTRYINDEX)
(ie: via hlua_checkfunction() and hlua_checktable()).
Failure to release an unused object reference prevents the reference index
from being reused and prevents the referenced resource from being garbage
collected.
Adding hlua_pushref(L, ref) to replace
lua_rawgeti(L, LUA_REGISTRYINDEX, ref)
Adding hlua_ref(L) to replace luaL_ref(L, LUA_REGISTRYINDEX)
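Put together, the three helpers are thin wrappers around the lua registry
(sketch matching the descriptions above):
    int hlua_ref(lua_State *L)
    {
        /* pops the value on top of the stack and returns a reference to it */
        return luaL_ref(L, LUA_REGISTRYINDEX);
    }
    void hlua_pushref(lua_State *L, int ref)
    {
        /* pushes the referenced value back onto the stack */
        lua_rawgeti(L, LUA_REGISTRYINDEX, ref);
    }
    void hlua_unref(lua_State *L, int ref)
    {
        /* releases the reference so the index and object can be reclaimed */
        luaL_unref(L, LUA_REGISTRYINDEX, ref);
    }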