When a tcpcheck ruleset uses multiple connections, the existing one must be
closed and destroyed before opening the new one. This part is handled in
the tcpcheck_main() function, when called from the wake callback function
(wake_srv_chk). But this is a problem, because this function may be
called from the mux layer. This means a mux may call the wake callback
function of the data layer, which may release the connection and the mux. It
is easy to see how hazardous this is. And actually, depending on the
scheduling, it leads to crashes.
Thus, we must avoid releasing the connection in the wake callback context,
and move this part into the check's process function instead. To do so, we
rely on the CHK_ST_CLOSE_CONN flag. When a connection must be replaced by a
new one, this flag is set on the check, in the tcpcheck_main() function, and
the check's task is woken up. Then, the connection is really closed in the
process_chk_conn() function.
This patch must be backported as far as 2.2, with some adaptations however
because the code is not exactly the same.
glibc < 2.10 requires _GNU_SOURCE in order to make use of strsignal(),
otherwise leading to SEGV at runtime.
$ make V=1 TARGET=linux-glibc-legacy USE_THREAD= USE_ACCEPT4=
..
src/mworker.c: In function 'mworker_catch_sigchld':
src/mworker.c:285: warning: implicit declaration of function 'strsignal'
src/mworker.c:285: warning: pointer/integer type mismatch in conditional expression
..
$ make V=1 reg-tests REGTESTS_TYPES=slow,default
..
###### Test case: reg-tests/mcli/mcli_start_progs.vtc ######
## test results in: "/tmp/haregtests-2021-01-19_15-18-07.n24989/vtc.29077.28f6153d"
---- h1 Bad exit status: 0x008b exit 0x0 signal 11 core 128
---- h1 Assert error in haproxy_wait(), src/vtc_haproxy.c line 792: Condition(*(&h->fds[1]) >= 0) not true. Errno=0 Success
..
$ gdb ./haproxy /tmp/core.0.haproxy.30270
..
Core was generated by `/root/haproxy/haproxy -d -W -S fd@8 -dM -f /tmp/haregtests-2021-01-19_15-18-07.'.
Program terminated with signal 11, Segmentation fault.
#0 0x00002aaaab387a10 in strlen () from /lib64/libc.so.6
(gdb) bt
#0 0x00002aaaab387a10 in strlen () from /lib64/libc.so.6
#1 0x00002aaaab354b69 in vfprintf () from /lib64/libc.so.6
#2 0x00002aaaab37788a in vsnprintf () from /lib64/libc.so.6
#3 0x00000000004a76a3 in memvprintf (out=0x7fffedc680a0, format=0x5a5d58 "Current worker #%d (%d) exited with code %d (%s)\n", orig_args=0x7fffedc680d0)
at src/tools.c:3868
#4 0x00000000004bbd40 in print_message (label=0x58abed "ALERT", fmt=0x5a5d58 "Current worker #%d (%d) exited with code %d (%s)\n", argp=0x7fffedc680d0)
at src/log.c:1066
#5 0x00000000004bc07f in ha_alert (fmt=0x5a5d58 "Current worker #%d (%d) exited with code %d (%s)\n") at src/log.c:1109
#6 0x0000000000534b7b in mworker_catch_sigchld (sh=<value optimized out>) at src/mworker.c:293
#7 0x0000000000556af3 in __signal_process_queue () at src/signal.c:88
#8 0x00000000004f6216 in signal_process_queue () at include/haproxy/signal.h:39
#9 run_poll_loop () at src/haproxy.c:2859
#10 0x00000000004f63b7 in run_thread_poll_loop (data=<value optimized out>) at src/haproxy.c:3028
#11 0x00000000004faaac in main (argc=<value optimized out>, argv=0x7fffedc68498) at src/haproxy.c:904
See: https://man7.org/linux/man-pages/man3/strsignal.3.html
Must be backported as far as 2.0.
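For reference, a minimal standalone C example (not HAProxy code) showing why the
define matters on such an old glibc: without _GNU_SOURCE the function is not
declared, the implicit declaration makes the compiler assume an int return value,
and the truncated pointer later crashes in strlen()/vfprintf() exactly as in the
backtrace above.
  #define _GNU_SOURCE          /* must appear before the first glibc header */
  #include <string.h>          /* strsignal() */
  #include <stdio.h>

  int main(void)
  {
      /* with the define, strsignal() is properly declared as returning char* */
      printf("signal 11 is: %s\n", strsignal(11));
      return 0;
  }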
If a subscriber's tasklet was called more than one million times, if
the ssl_ctx's connection doesn't match the current one, or if the
connection appears closed in one direction while the SSL stack is
still subscribed, the FD is reported as suspicious. The close cases
may occasionally trigger a false positive during very short and rare
windows. Similarly the 1M calls will trigger after 16GB are transferred
over a given connection. These are rare enough events to be reported as
suspicious.
A file descriptor which maps to a connection but has more than one
thread in its mask, or an FD handle that doesn't correspond to the FD,
or with no mux context, or an FD with no thread in its mask, or with
more than 1 million events is flagged as suspicious.
Now the show_fd helpers at the transport and mux levels return an integer
which indicates whether or not the inspected entry looks suspicious. When
an entry is reported as suspicious, "show fd" will suffix it with an
exclamation mark ('!') in the dump, which should help detect
them.
For now, helpers were adjusted to adapt to the new API but none of them
reports any suspicious entry yet.
When dumping a live h1 stream, also take the opportunity for reporting
the subscriber including the event, tasklet, handler and context. Example:
3030 : st=0x21(R:rA W:Ra) ev=0x04(heOpi) [Lc] tmask=0x4 umask=0x0 owner=0x7f97805c1f70 iocb=0x65b847(sock_conn_iocb) back=1 cflg=0x00002300 sv=s1/recv mux=H1 ctx=0x7f97805c21b0 h1c.flg=0x80000200 .sub=1 .ibuf=0@(nil)+0/0 .obuf=0@(nil)+0/0 h1s=0x7f97805c2380 h1s.flg=0x4010 .req.state=MSG_DATA .res.state=MSG_RPBEFORE .meth=POST status=0 .cs.flg=0x00000000 .cs.data=0x7f97805c1720 .subs=0x7f97805c1748(ev=1 tl=0x7f97805c1990 tl.calls=2 tl.ctx=0x7f97805c1720 tl.fct=si_cs_io_cb) xprt=RAW
In FD dumps it's often very important to figure what upper layer function
is going to be called. Let's export the few I/O callbacks that appear as
tasklet functions so that "show fd" can resolve them instead of printing
a pointer relative to main. For example:
1028 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7f00b889b200 iocb=0x65b638(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7f00c8824de0 h2c.st0=FRH .err=0 .maxid=795 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=795 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7f00c86d0750 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7f00c88252e0(ev=1 tl=0x7f00a07d1aa0 tl.calls=1047 tl.ctx=0x7f00c8824de0 tl.fct=h2_io_cb) .sent_early=0 .early_in=0
The SSL context contains a lot of important details that are currently
missing from debug outputs. Now that we detect ssl_sock, we can perform
some sanity checks, print the next xprt, the subscriber callback's context,
handler and number of calls. The process function is also resolved. This
now gives for example on an H2 connection:
1029 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7fc714881700 iocb=0x65b528(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7fc734545e50 h2c.st0=FRH .err=0 .maxid=217 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=217 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7fc73478f230 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7fc734546350(ev=1 tl=0x7fc7346702e0 tl.calls=278 tl.ctx=0x7fc734545e50 tl.fct=main-0x144efa) .sent_early=0 .early_in=0
Just like we did for the muxes, now the transport layers will have the
ability to provide helpers to report more detailed information about their
internal context. When the helper is not known, the pointer continues to
be dumped as-is if it's not NULL. This way a transport with no context nor
dump function will not add a useless "xprt_ctx=(nil)" but the pointer will
be emitted if valid or if a helper is defined.
These ones are definitely missing from some dumps, let's report them! We
print the xprt's name instead of its useless pointer, as well as its ctx
when xprt is not NULL.
Over time the code has grown ugly, casting fdt.owner to a struct connection
for about everything. Let's have a const struct connection* there and take
this opportunity to pass all fields as const as well.
Additionally a misplaced closing parenthesis on the output was fixed.
When 0c439d895 ("BUILD: tools: make resolve_sym_name() return a const")
was written, the pointer argument ought to have been turned to const for
more flexibility. Let's do it now.
That was causing confusing outputs like this one when an H2S is known:
1030 : ... last_h2s=0x2ed8390 .id=775 .st=HCR.flg=0x4001 .rxbuf=...
^^^^
This was introduced by commit ab2ec4540 in 2.1-dev2 so the fix can be
backported as far as 2.1.
This counter could be hugely incremented by the peer task responsible for managing
peer synchronizations and reconnections; for instance when a peer is not reachable
there is a period where the appctx is not created. If we receive stick-table
updates before the peer session (appctx) is instantiated, we reach the code
responsible for incrementing the "new_conn" counter.
With this patch we increment this counter only when we really instantiate a new
peer session thanks to peer_session_create().
May be backported as far as 2.0.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know (see parse_encoding_value function), we
will not store it. This will prevent unexpected errors caused by
cache collisions that could happen in accept_encoding_hash_cmp.
commit 5a982a7165 ("MINOR:
contrib/prometheus-exporter: export build_info") is breaking lua
`core.get_info()`.
This patch makes sure build_info is correctly initialised in all cases.
Reviewed-by: William Dauchy <wdauchy@gmail.com>
Previous commit da2b0844f ("MINOR: peers: Add traces for peer control
messages.") introduced a build warning on some compiler versions after
the removal of variable "peers" in peer_send_msgs() because variable
"s" was used only to assign this one, and variable "si" to assign "s".
Let's remove both to fix the warning. No backport is needed.
V2 of this fix which includes a missing pointer initialization which was
causing a segfault in v1 (949a7f6459)
This bug happens when a service has multiple records on the same host
and the server provides the A/AAAA resolution in the response as AR
(Additional Records).
In such a condition, the first occurrence of the host will be taken from
the Additional section, while the second (and next ones) will be processed
by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the
resolution may diverge, like described in github issue #971.
Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname.
IE: with the following type of response:
;; ANSWER SECTION:
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 85 A3.tld.
;; ADDITIONAL SECTION:
A2.tld. 3600 IN A 192.168.0.2
A3.tld. 3600 IN A 192.168.0.3
A1.tld. 3600 IN A 192.168.0.1
A3.tld. 3600 IN A 192.168.0.3
the first A3 host is resolved using the Additional Section and the
second one through a dedicated A request.
When linking the SRV records to their respective Additional one, a
condition was missing (check whether said SRV record is already attached to
an Additional one), leading the parsing to stop at the first SRV record
whose target field matches the Additional record name. Hence only the first
occurrence of a target was managed by an additional record.
This patch adds a condition in this loop to ensure the record being
parsed is not already linked to an Additional Record. If so, we can
carry on the parsing to find a possible next one with the same target
field value.
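As an illustration only (the field and function names below are made up, this is
not the actual resolver code), the added condition amounts to skipping SRV
records that already own an additional record while walking the list:
  #include <stddef.h>
  #include <string.h>

  struct ar_rec {
      const char *name;                 /* e.g. "A3.tld." */
  };

  struct srv_rec {
      const char *target;               /* e.g. "A3.tld." */
      const struct ar_rec *ar_item;     /* NULL until an AR is attached */
  };

  static void attach_ar(struct srv_rec *srv, size_t nb_srv, const struct ar_rec *ar)
  {
      for (size_t i = 0; i < nb_srv; i++) {
          if (srv[i].ar_item)           /* the missing check: this SRV already */
              continue;                 /* has its AR, look for the next one   */
          if (strcmp(srv[i].target, ar->name) == 0) {
              srv[i].ar_item = ar;
              break;
          }
      }
  }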
backport status: 2.2 and above
Display traces when sending/receiving peer control messages (synchronisation, heartbeat).
Add remaining traces when parsing malformed messages (acks, stick-table definitions)
or ignoring them.
Also add traces when releasing session or when reaching the PEER_SESS_ST_ERRPROTO
peer protocol state.
It's about the third time I get confused by these functions, half of
which manipulate the reference as a whole and those manipulating only
an entry. For me "pat_ref_commit" means committing the pattern reference,
not just an element, so let's rename it. A number of other ones should
really be renamed before 2.4 gets released :-/
There is no low-level API to achieve the same as on Linux/FreeBSD, so we rely
on the number of available CPUs. Without this, the number of threads is just 1
on Mac while my M1 has 8 cores.
Backporting to 2.1 should be enough if that's possible.
Signed-off-by: David CARLIER <devnexen@gmail.com>
A fatal error is now reported if a server is defined in a frontend
section. Till now, a warning was just emitted and the server was ignored. The
warning was added in 1.3.4 when the frontend/backend keywords were
introduced, to allow a smooth transition and to not break existing
configs. It is old enough now to emit a fatal error in this case.
This patch is related to the issue #1043. It may be backported at least as
far as 2.2, and possibly to older versions. It relies on the previous commit
("MINOR: config: Add failifnotcap() to emit an alert on proxy capabilities").
The HAPROXY_CFGFILES env variable is built using a static trash chunk, via a
call to the get_trash_chunk() function. This chunk is reserved during the whole
configuration parsing. This is far too long a period to guarantee it will not
be reused during the configuration parsing. And in fact, it happens in the lua
code since the commit f67442efd ("BUG/MINOR: lua: warn when registering
action, conv, sf, cli or applet multiple times"), when a lua script is
loaded.
To fix the bug, we now use a dynamic buffer instead, and we call the memprintf()
function to handle both the allocation and the formatting. Allocation errors
at this stage are fatal.
This patch should fix the issue #1041. It must be backported as far as 2.0.
The strict-limits global option was introduced with commit 0fec3ab7b
("MINOR: init: always fail when setrlimit fails"). When used in
conjunction with master-worker, haproxy will not fail when a setrlimit
fails. This happens because we only exit() if master-worker isn't used.
This patch removes all tests for master-worker mode for all cases covered
by strict-limits scope.
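A minimal sketch of the intended behaviour (names are illustrative, this is not
the actual init code): with strict-limits, a setrlimit() failure becomes fatal
regardless of the master-worker mode.
  #include <sys/resource.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void raise_fd_limit_or_die(rlim_t maxsock)
  {
      struct rlimit limit = { .rlim_cur = maxsock, .rlim_max = maxsock };

      if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
          fprintf(stderr, "[ALERT] Cannot raise FD limit to %lu.\n",
                  (unsigned long)maxsock);
          exit(1);        /* no more master-worker exception */
      }
  }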
This should be backported from 2.1 onward.
This should fix issue #1042.
Reviewed by William Dauchy <wdauchy@gmail.com>
If a server is defined in a frontend, thus a proxy without the backend
capability, the 'check' and 'agent-check' keywords are ignored. This way, no
check is performed on an ignored server. This avoids a segfault because some
parts of the tcpchecks are not fully initialized (or released for frontends
during the post-check).
In addition, a test on the server's proxy capabilities is performed when
checks or agent-checks are initialized and nothing is performed for servers
attached to a non-backend proxy.
This patch should fix the issue #1043. It must be backported as far as 2.2.
If an error occurs during the sample expression parsing, the allocated
sample_expr is not freed despite having its main pointer reset.
This fixes GitHub issue #1046.
It could be backported as far as 1.8.
This reverts commit 949a7f6459.
The first part of the patch introduces a bug. When a dns answer item is
allocated, its <ar_item> is only initialized at the end of the parsing, when
the item is added in the answer list. Thus, we must not try to release it
during the parsing.
The second part is also probably buggy. It fixes the issue #971 but reverts
a fix for the issue #841 (see commit fb0884c8297 "BUG/MEDIUM: dns: Don't
store additional records in a linked-list"). So it must be at least
revalidated.
This revert fixes a segfault reported in a comment of the issue #971. It
must be backported as far as 2.2.
like it is done in other places, check the return value of
`alloc_trash_chunk` before using it. This was detected by coverity.
this patch fixes commit 591fc3a330
("BUG/MINOR: sample: fix concat() converter's corruption with non-string
variables"
As a consequence, this patch should be backported as far as 2.0
this should fix github issue #1039
Signed-off-by: William Dauchy <wdauchy@gmail.com>
I was looking at writing a simple first test for prometheus but I
realised there is no proper way to exclude it if haproxy was not built
with prometheus plugin.
Today we have `REQUIRE_OPTIONS` in reg-tests which is based on `Feature
list` from `haproxy -vv`. Those options are coming from the Makefile
itself.
A plugin is built this way:
EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"
It does register service actions through `service_keywords_register`.
Those are listed through `list_services` in `haproxy -vv`.
To facilitate parsing, I slightly changed the output to a single line
and integrated it in the regtests shell script so that we can now specify a
dependency while writing a reg-test for prometheus, e.g.:
#REQUIRE_SERVICE=prometheus-exporter
#REQUIRE_SERVICES=prometheus-exporter,foo
There might be other ways to handle this, but that's the cleanest I
found; I understand people might be concerned by this output change in
`haproxy -vv` which goes from:
Available services :
foo
bar
to:
Available services : foo bar
Signed-off-by: William Dauchy <wdauchy@gmail.com>
- check functions are never called with a NULL args list, it is always
an array, so first check can be removed
- the expression parser guarantees that we can't have anything else,
because we mentioned json converter takes a mandatory string argument.
Thus test on `ARGT_STR` can be removed as well
- also add a line break between the enum and the function declaration
In order to validate it, add a simple json test testing very simple
cases but can be improved in the future:
- default json converter without args
- json converter failing on error (utf8)
- json converter with error being removed (utf8s)
Signed-off-by: William Dauchy <wdauchy@gmail.com>
GitHub Issue #1037 reported a memory leak in deinit() caused by an
allocation made in sa2str() that was stored in srv_set_addr_desc().
When destroying each server for a proxy in deinit, include freeing the
memory in the key of server->addr_node.
The leak was introduced in commit 92149f9a8 ("MEDIUM: stick-tables: Add
srvkey option to stick-table") which is not in any released version so
no backport is needed.
Cc: Tim Duesterhus <tim@bastelstu.be>
Patrick Hemmer reported that calling concat() with an integer variable
causes a %00 to appear at the beginning of the output. Looking at the
code, it's not surprising. The function uses get_trash_chunk() to get
one of the trashes, but can call casting functions which will also use
their trash in turn and will cycle back to ours, causing the trash to
be overwritten before being assigned to a sample.
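The hazard can be reproduced with a tiny standalone program (this is only an
illustration of the rotating-buffer principle, not the HAProxy trash API): the
outer caller and the nested casting function draw from the same small pool, so
the pool wraps around and clobbers the outer buffer.
  #include <stdio.h>
  #include <string.h>

  #define NB_BUFS 2
  static char bufs[NB_BUFS][64];
  static int idx;

  static char *get_rotating_buf(void)       /* stands in for get_trash_chunk() */
  {
      char *b = bufs[idx];
      idx = (idx + 1) % NB_BUFS;
      return b;
  }

  static const char *cast_int_to_str(int v) /* nested user of the same pool */
  {
      char *b = get_rotating_buf();
      snprintf(b, sizeof(bufs[0]), "%d", v);
      return b;
  }

  int main(void)
  {
      char *out = get_rotating_buf();
      strcpy(out, "prefix-");
      cast_int_to_str(1);
      cast_int_to_str(2);                   /* wraps around onto 'out' */
      printf("%s\n", out);                  /* prints "2": "prefix-" is gone */
      return 0;
  }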
By allocating the trash from a pool using alloc_trash_chunk(), we can
avoid this. However we must free it so the trash's contents must be
moved to a permanent trash buffer before returning. This is what's
achieved using smp_dup().
This should be backported as far as 2.0.
This is from the output of codespell. It's done at once over a bunch
of files and only affects comments, so there is nothing user-visible.
No backport needed.
During a configuration check valgrind reports:
==14425== 0 bytes in 106 blocks are definitely lost in loss record 1 of 107
==14425== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==14425== by 0x4C2FDEF: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==14425== by 0x443CFC: hlua_alloc (hlua.c:8662)
==14425== by 0x5F72B11: luaM_realloc_ (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F78089: luaH_free (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F707D3: sweeplist (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F710D0: luaC_freeallobjects (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F7715D: close_state (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x443D4C: hlua_deinit (hlua.c:9302)
==14425== by 0x543F88: deinit (haproxy.c:2742)
==14425== by 0x5448E7: deinit_and_exit (haproxy.c:2830)
==14425== by 0x5455D9: init (haproxy.c:2044)
This is due to Lua calling `hlua_alloc()` with `ptr = NULL` and `nsize = 0`.
While `realloc` is supposed to be equivalent to `free()` if the size is `0`, this
is only required for a non-NULL pointer. Apparently my allocator (or valgrind)
actually allocates a zero size area if the pointer is NULL, possibly taking up
some memory for management structures.
Fix this leak by specifically handling the case where both the pointer and the
size are `0`.
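A minimal sketch of the fixed case (the prototype is the standard lua_Alloc one,
the function name is illustrative and not the actual hlua_alloc()):
  #include <stdlib.h>

  static void *sketch_lua_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
  {
      (void)ud; (void)osize;

      if (!ptr && !nsize)
          return NULL;            /* nothing to free, nothing to allocate */
      if (!nsize) {
          free(ptr);              /* Lua releases the block */
          return NULL;
      }
      return realloc(ptr, nsize); /* allocate, grow or shrink */
  }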
This bug appears to have been introduced with the introduction of the
multi-threaded Lua, thus this fix is specific for 2.4. No backport needed.
add base support for url encoding following RFC 3986, supporting the `query`
type only.
- add test checking url_enc/url_dec/url_enc
- update documentation
- leave the door open for future changes
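For illustration, a small standalone sketch of the underlying RFC 3986 "query"
percent-encoding (this is not the converter's actual implementation): unreserved
characters are passed through and everything else becomes %XX.
  #include <ctype.h>
  #include <stdio.h>
  #include <stddef.h>

  static void url_enc_query(const char *in, char *out, size_t outsz)
  {
      static const char hex[] = "0123456789ABCDEF";
      size_t o = 0;

      for (; *in && o + 4 < outsz; in++) {
          unsigned char c = (unsigned char)*in;

          if (isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~')
              out[o++] = (char)c;           /* unreserved: copied verbatim */
          else {
              out[o++] = '%';               /* everything else: %XX */
              out[o++] = hex[c >> 4];
              out[o++] = hex[c & 0x0F];
          }
      }
      out[o] = '\0';
  }

  int main(void)
  {
      char buf[128];

      url_enc_query("a b+c", buf, sizeof(buf));
      printf("%s\n", buf);                  /* prints "a%20b%2Bc" */
      return 0;
  }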
this should resolve github issue #941
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Released version 2.4-dev5 with the following main changes :
- BUG/MEDIUM: mux_h2: Add missing braces in h2_snd_buf() around trace+wakeup
- BUILD: hpack: hpack-tbl-t.h uses VAR_ARRAY but does not include compiler.h
- MINOR: time: increase the minimum wakeup interval to 60s
- MINOR: check: do not ignore a connection header for http-check send
- REGTESTS: complete http-check test
- CI: travis-ci: drop coverity scan builds
- MINOR: atomic: don't use ; to separate instruction on aarch64.
- IMPORT: xxhash: update to v0.8.0 that introduces stable XXH3 variant
- MEDIUM: xxhash: use the XXH3 functions to generate 64-bit hashes
- MEDIUM: xxhash: use the XXH_INLINE_ALL macro to inline all functions
- CLEANUP: xxhash: remove the unused src/xxhash.c
- MINOR: sample: add the xxh3 converter
- REGTESTS: add tests for the xxh3 converter
- MINOR: protocol: Create proto_quic QUIC protocol layer.
- MINOR: connection: Attach a "quic_conn" struct to "connection" struct.
- MINOR: quic: Redefine control layer callbacks which are QUIC specific.
- MINOR: ssl_sock: Initialize BIO and SSL objects outside of ssl_sock_init()
- MINOR: connection: Add a new xprt to connection.
- MINOR: ssl: Export definitions required by QUIC.
- MINOR: cfgparse: Do not modify the QUIC xprt when parsing "ssl".
- MINOR: tools: Add support for QUIC addresses parsing.
- MINOR: quic: Add definitions for QUIC protocol.
- MINOR: quic: Import C source code files for QUIC protocol.
- MINOR: listener: Add QUIC info to listeners and receivers.
- MINOR: server: Add QUIC definitions to servers.
- MINOR: ssl: SSL CTX initialization modifications for QUIC.
- MINOR: ssl: QUIC transport parameters parsing.
- MINOR: quic: QUIC socket management finalization.
- MINOR: cfgparse: QUIC default server transport parameters init.
- MINOR: quic: Enable the compilation of QUIC modules.
- MAJOR: quic: Make usage of ebtrees to store QUIC ACK ranges.
- MINOR: quic: Attempt to make trace more readable
- MINOR: quic: Make usage of the congestion control window.
- MINOR: quic: Flag RX packet as ack-eliciting from the generic parser.
- MINOR: quic: Code reordering to help in reviewing/modifying.
- MINOR: quic: Add traces to congestion avoidance NewReno callback.
- MINOR: quic: Display the SSL alert in ->ssl_send_alert() callback.
- MINOR: quic: Update the initial salt to that of draft-29.
- MINOR: quic: Add traces for in flght ack-eliciting packet counter.
- MINOR: quic: make a packet build fails when qc_build_frm() fails.
- MINOR: quic: Add traces for quic_packet_encrypt().
- MINOR: cache: Refactoring of secondary_key building functions
- MINOR: cache: Avoid storing responses whose secondary key was not correctly calculated
- BUG/MINOR: cache: Manage multiple headers in accept-encoding normalization
- MINOR: cache: Add specific secondary key comparison mechanism
- MINOR: http: Add helper functions to trim spaces and tabs
- MEDIUM: cache: Manage a subset of encodings in accept-encoding normalizer
- REGTESTS: cache: Simplify vary.vtc file
- REGTESTS: cache: Add a specific test for the accept-encoding normalizer
- MINOR: cache: Remove redundant test in http_action_req_cache_use
- MINOR: cache: Replace the "process-vary" option's expected values
- CI: GitHub Actions: enable daily Coverity scan
- BUG/MEDIUM: cache: Fix hash collision in `accept-encoding` handling for `Vary`
- MEDIUM: stick-tables: Add srvkey option to stick-table
- REGTESTS: add test for stickiness using "srvkey addr"
- BUILD: Makefile: disable -Warray-bounds until it's fixed in gcc 11
- BUG/MINOR: sink: Return an allocation failure in __sink_new if strdup() fails
- BUG/MINOR: lua: Fix memory leak error cases in hlua_config_prepend_path
- MINOR: lua: Use consistent error message 'memory allocation failed'
- CLEANUP: Compare the return value of `XXXcmp()` functions with zero
- CLEANUP: Apply the coccinelle patch for `XXXcmp()` on include/
- CLEANUP: Apply the coccinelle patch for `XXXcmp()` on contrib/
- MINOR: qpack: Add static header table definitions for QPACK.
- CLEANUP: qpack: Wrong comment about the draft for QPACK static header table.
- CLEANUP: quic: Remove useless QUIC event trace definitions.
- BUG/MINOR: quic: Possible CRYPTO frame building errors.
- MINOR: quic: Pass quic_conn struct to frame parsers.
- BUG/MINOR: quic: Wrong STREAM frames parsing.
- MINOR: quic: Drop packets with STREAM frames with wrong direction.
- CLEANUP: ssl: Remove useless loop in tlskeys_list_get_next()
- CLEANUP: ssl: Remove useless local variable in tlskeys_list_get_next()
- MINOR: ssl: make tlskeys_list_get_next() take a list element
- Revert "BUILD: Makefile: disable -Warray-bounds until it's fixed in gcc 11"
- BUG/MINOR: cfgparse: Fail if the strdup() for `rule->be.name` for `use_backend` fails
- CLEANUP: mworker: remove duplicate pointer tests in cfg_parse_program()
- CLEANUP: Reduce scope of `header_name` in http_action_store_cache()
- CLEANUP: Reduce scope of `hdr_age` in http_action_store_cache()
- CLEANUP: spoe: fix typo on `var_check_arg` comment
- BUG/MINOR: tcpcheck: Report a L7OK if the last evaluated rule is a send rule
- CI: github actions: build several popular "contrib" tools
- DOC: Improve the message printed when running `make` w/o `TARGET`
- BUG/MEDIUM: server: srv_set_addr_desc() crashes when a server has no address
- REGTESTS: add unresolvable servers to srvkey-addr
- BUG/MINOR: stats: Make stat_l variable used to dump a stat line thread local
- BUG/MINOR: quic: NULL pointer dereferences when building post handshake frames.
- SCRIPTS: improve announce-release to support different tag and versions
- SCRIPTS: make announce release support preparing announces before tag exists
- CLEANUP: assorted typo fixes in the code and comments
- BUG/MINOR: srv: do not init address if backend is disabled
- BUG/MINOR: srv: do not cleanup idle conns if pool max is null
- CLEANUP: assorted typo fixes in the code and comments
- CLEANUP: few extra typo and fixes over last one ("ot" -> "to")
If a server is configured to not have any idle conns, return immediately
from srv_cleanup_connections. This avoids a segfault when a server is
configured with pool-max-conn set to 0.
This should be backported up to 2.2.
Do not proceed on init_addr if the backend of the server is marked as
disabled. When marked as disabled, the server is not fully initialized
and some operations must be avoided to prevent a segfault. It is correct
because there is no way to activate a disabled backend.
This fixes the github issue #1031.
This should be backported to 2.2.
Since ee63d4bd6 ("MEDIUM: stats: integrate static proxies stats in new
stats"), all dumped stats for a given domain, the default ones and the
modules ones, are merged in a single array to dump them in a generic way.
For this purpose, the stat_l global variable is allocated at startup to
store a line of stats before the dump, i.e. all stats of an entity
(frontend, backend, listener, server or dns nameserver). But this variable
is not thread safe. If stats are retrieved concurrently by several clients
on different threads, the same variable is used. This leads to corrupted
stats output.
To fix the bug, the stat_l variable is now thread local.
This patch should probably solve issues #972 and #992. It must be backported
to 2.3.
GitHub Issue #1026 reported a crash during configuration check for the
following example config:
backend 0
server 0 0
server 0 0
HAProxy crashed in srv_set_addr_desc() due to a NULL pointer dereference
caused by `sa2str` returning NULL for an `AF_UNSPEC` address (`0`).
Check to make sure the address key is non-null before using it for
comparison or inserting it into the tree.
The crash was introduced in commit 92149f9a8 ("MEDIUM: stick-tables: Add
srvkey option to stick-table") which not in any released version so no
backport is needed.
Cc: Tim Duesterhus <tim@bastelstu.be>
When all rules of a tcpcheck ruleset are successfully evaluated, the right
check status must always be reported. It is true if the last evaluated rule
is an expect or a connect rule. But not if it is a send rule. In this
situation, nothing more is done until the check timeout expiration and a
L7TOUT is reported instead of a L7OK.
Now, by default, when all rules were successfully evaluated, a L7OK is
reported. When the last evaluated rule is an expect or a connect, the
behavior remains unchanged.
This patch should fix the issue #1027. It must be backported as far as 2.2.
This variable is only needed deeply nested in a single location and clang's
static analyzer complains about a dead initialization. Reduce the scope to
satisfy clang and the human that reads the function.
As reported in issue #1017, there are two harmless duplicate tests in
cfg_parse_program(), one made of a "if" using the same condition as the
loop it's in, and the other one being a null test before a free. This
just removes them. No backport is needed.
This patch fixes GitHub issue #1024.
I could track the `strdup` back to commit
3a1f5fda10 which is 1.9-dev8. It's probably not
worth the effort to backport it across this refactoring.
This patch should be backported to 1.9+.
As reported in issue #1010, gcc-11 as of 2021-01-05 is overzealous in
its -Warray-bounds check as it considers that a cast of a global struct
accesses the entire struct even if only one specific element is accessed.
This instantly breaks all lists making use of container_of() to build
their iterators as soon as the starting point is known if the next
element is retrieved from the list head in a way that is visible to the
compiler's optimizer, because it decides that accessing the list's next
element dereferences the list as a larger struct (which it does not).
The temporary workaround consisted in disabling -Warray-bounds, but this
warning is traditionally quite effective at spotting real bugs, and we
actually have a single occurrence of this issue in the whole code.
By changing the tlskeys_list_get_next() function to take a list element
as the starting point instead of the current element, we can avoid
the starting point issue but this requires to change all call places
to write hideous casts made of &((struct blah*)ref)->list. At the
moment we only have two such call places, the first one being used to
initialize the list (which is the one causing the warning) and which
is thus easy to simplify, and the second one for which we already have
an aliased pointer to the reference that is still valid at the call
place, and given the original pointer also remained unchanged, we can
safely use this alias, and this is safer than leaving a cast there.
Let's make this change now while it's still easy.
The generated code only changed in function cli_io_handler_tlskeys_files()
due to register allocation and the change of variable scope between the
old one and the new one.
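As a standalone illustration of the API change (the types, macro and names below
are simplified stand-ins, not the real tlskeys code): the function now receives
the list element to start from and converts the next element back to its
enclosing struct, so no cast of a bare list head to the larger struct is needed
at the call place.
  #include <stddef.h>

  struct list { struct list *n, *p; };

  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))

  struct tls_keys_ref {
      int unique_id;
      struct list list;
  };

  /* start from a list element (possibly the head itself) and return the
   * following entry, or NULL when the end of the list is reached */
  static struct tls_keys_ref *tlskeys_next(struct list *start, struct list *head)
  {
      struct list *pos = start->n;

      if (pos == head)
          return NULL;
      return container_of(pos, struct tls_keys_ref, list);
  }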
`getnext` was only used to fill `ref` at the beginning of the function. Both
have the same type. Replace the parameter name by `ref` to remove the useless
local variable.
After having re-read the RFC, we noticed there are two bugs in the STREAM
frame parser. When the OFF bit (0x04) in the frame type is not set
we must set the offset to 0 (it was not set at all). When the LEN bit (0x02)
is not set we must extend the length of the data field to the end of the packet
(it was not set at all).
This issue is due to the fact that when we call the function
responsible for building CRYPTO frames to fill a buffer, the Length
field of this packet did not take into account the trailing 16 bytes for
the AEAD tag. Furthermore, the remaining <room> available in this buffer
was not decremented by the CRYPTO frame length, but only by the CRYPTO data
length of this frame.
This patch fixes GitHub issue #1023.
The function was introduced in commit 99c453d ("MEDIUM: ring: new
section ring to declare custom ring buffers."), which first appeared
in 2.2-dev9. The fix should be backported to 2.2+.
This allows using the address of the server rather than the name of the
server for keeping track of servers in a backend for stickiness.
The peers code was also extended to support feeding the dictionary using
this key instead of the name.
Fixes #814
This patch fixes GitHub Issue #988. Commit ce9e7b2521
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that the
cached response is not compatible.
This patch also extends the reg-test to test the hash collision that was
mentioned in #988.
Vary handling is 2.4, no backport needed.
The accept-encoding normalizer now explicitly manages a subset of
encodings which will all have their own bit in the encoding bitmap
stored in the cache entry. This way two requests with the same primary
key will be served the same cache entry if they both explicitly accept
the stored response's encoding, even if their respective secondary keys
are not the same and do not match the stored response's one.
The actual hash of the accept-encoding will still be used if the
response's encoding is unmanaged.
The encoding matching and the encoding weight parsing are done for every
subpart of the accept-encoding values, and a bitmap of accepted
encodings is built for every request. It is then tested upon any stored
response that has the same primary key until one with an accepted
encoding is found.
The specific "identity" and "*" accept-encoding values are managed too.
When storing a response in the cache, we also parse the content-encoding
header in order to only set the response's corresponding encoding's bit
in its cache_entry encoding bitmap.
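The matching principle can be sketched in a few standalone lines (the bit values
and names are illustrative, not the cache's actual ones): one bit per managed
encoding, and a stored response is reusable when its encoding bit is present in
the request's accepted bitmap.
  #include <stdint.h>

  #define ENC_IDENTITY  0x00000001u
  #define ENC_GZIP      0x00000002u
  #define ENC_BR        0x00000004u
  #define ENC_OTHER     0x80000000u   /* unmanaged encodings: fall back to hashing */

  /* returns non-zero when the stored response's encoding is acceptable */
  static int encoding_matches(uint32_t req_accepted, uint32_t resp_encoding)
  {
      return (req_accepted & resp_encoding) == resp_encoding;
  }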
This patch fixes GitHub issue #988.
It does not need to be backported.
The accept-encoding part of the secondary key (vary) was only built out
of the first occurrence of the header. So if a client had two
accept-encoding headers, gzip and br for instance, the key would have
been built out of the gzip string. So another client that only accepted
gzip would have been sent the cached resource, even if it was a br resource.
The http_find_header function is now called directly by the normalizers
so that they can manage multiple headers if needed.
A request that has more than 16 encodings will be considered an
illegitimate request and its response will not be stored.
This fixes GitHub issue #987.
It does not need any backport.
If any of the secondary hash normalizing functions raises an error, the
secondary hash will be unusable. In this case, the response will not be
stored anymore.
Add traces to have an idea why this function may fail. In fact
it never fails when the passed parameters are correct, especially the
lengths. This is not the case when a packet is not correctly built
before being encrypted.
Even if the size of frames built by qc_build_frm() is computed so as
not to overflow a buffer, do not rely on this and always make a packet
build fail if we could not build a frame.
Also add traces to have an idea where qc_build_frm() fails.
Fixes a memory leak in qc_build_phdshk_apkt().
This salt is used at least up to draft-32. At this date ngtcp2 always
uses this salt even if it has started draft-33 development.
Note that when the salt is not correct, we cannot remove the header
protection. In this case the packet number length is wrong.
At least display the SSL alert error code passed to the ->ssl_send_alert()
QUIC BIO method, and the SSL encryption level. This function is newly called
when using the picoquic client with a recent version of BoringSSL (Nov 19 2020).
This is not the case with OpenSSL with draft-32 as its QUIC implementation.
Reorder by increasing type the switch/case in qc_parse_pkt_frms()
which is the high level frame parser.
Add new STREAM_X frame types to support some tests with ngtcp2 client.
Add ->flags to the QUIC frame parser, as was done for the builder, so
as to flag RX packets as ack-eliciting at low level. This should also be
helpful to maintain the code if we have to add new flags to RX packets.
Remove the statements which do the same thing at a higher level in
qc_parse_pkt_frms().
Remove ->ifcdata which was there to control the CRYPTO data sent to the
peer so as not to saturate its reception buffer. This was a sort
of flow control.
Add a ->prep_in_flight counter to the QUIC path struct to control the
number of bytes prepared to be sent so as not to saturate the
congestion control window. This counter is increased each time a
packet is built. This has nothing to do with ->in_flight which
is the real in-flight number of bytes which have really been sent.
We are obliged to maintain two such counters to know how many bytes
of data we can prepare before sending them.
Modify the traces consequently, which were useful to diagnose issues about
the congestion control window usage.
As there is a lot of information in this protocol, it is not
easy to make the traces readable. We remove a few of them here and
shorten some lines by shortening the variable names.
This patch is there to initialize the default transport parameters for QUIC
as a preparation for one of the QUIC next steps to come: fully support QUIC
protocol for haproxy servers.
Implement ->accept_conn() callback for QUIC listener sockets.
Note that this patch also implements quic_session_accept() function
which is similar to session_accept_fd() without calling conn_complete_session()
at this time because we do not have any real QUIC mux.
Make TLS/TCP and QUIC share the same CTX initializer so as not to modify the
caller, which is an XPRT callback used both by the QUIC xprt and the SSL xprt
over TCP.
This patch adds QUIC structs to the server struct so as to make the QUIC code
compile. Also initializes the ebtree to store the connections by connection
IDs.
This patch adds a quic_transport_params struct to bind_conf struct
used for the listeners. This is to store the QUIC transport parameters
for the listeners. Also initializes them when calling str2listener().
Before str2sa_range() it's too early to figure we're going to speak QUIC,
and after it's too late as listeners are already created. So it seems that
doing it in str2listener() when the protocol is discovered is the best
place.
Also adds two ebtrees to the underlying receivers to store the connections
by connection IDs (one for the original connection IDs, and another
one for the definitive connection IDs which really identify the connections).
However it doesn't seem normal that it is stored in the receiver nor the
listener. There should be a private context in the listener so that
protocols can store internal information. This element should in
fact be the listener handle.
Something still feels wrong, and probably we'll have to make QUIC and
SSL co-exist: a proof of this is that there's some explicit code in
bind_parse_ssl() to prevent the "ssl" keyword from replacing the xprt.
This patch imports all the C files for QUIC protocol implementation with few
modifications from 20200720-quic branch of quic-dev repository found at
https://github.com/haproxytech/quic-dev.
Traces were implemented to help with the development.
When parsing "ssl" keyword for TLS bindings, we must not use the same xprt as the one
for TLS/TCP connections. So, do not modify the QUIC xprt which will be initialized
when parsing QUIC addresses wich "ssl" bindings.
QUIC needs to initialize its BIO and SSL session the same way as for SSL over TCP
connections. It needs also to use the same ClientHello callback.
This patch only exports functions and variables shared between QUIC and SSL/TCP
connections.
This patch extracts the code which initializes the BIO and SSL session
objects so as to reuse it later elsewhere for QUIC connections which
only need SSL and BIO objects at the TLS layer stack level to work.
We add src/quic_sock.c with QUIC-specific socket management functions as
callbacks for the control layer: ->accept_conn, ->default_iocb and
->rx_listening. accept_conn() will have to be defined. The default I/O handler
only calls recvfrom() on the received datagrams. Furthermore, the ->rx_listening
callback always returns 1 at this time but should return 0 when reloading the
process.
As QUIC is a connection oriented protocol, this file is almost a copy of
proto_tcp without TCP specific features. To suspend/resume a QUIC receiver
we proceed the same way as for proto_udp receivers.
With the recent updates to the listeners, we don't need a specific set of
quic*_add_listener() functions, the default ones are sufficient. The fields
declaration were reordered to make the various layers more visible like in
other protocols.
udp_suspend_receiver/udp_resume_receiver are up-to-date (the check for INHERITED
is present) and the code being UDP-specific, it's normal to use UDP here.
Note that in the future we might more easily reference stacked layers so that
there's no more need for specifying the pointer here.
A new XXH3 variant of hash functions shows a noticeable improvement in
performance (especially on small data), and also brings 128-bit support,
better inlining and streaming capabilities.
Performance comparison is available here:
https://github.com/Cyan4973/xxHash/wiki/Performance-comparison
Allow the user to specify a custom Connection header for http-check
send. This is useful for example to implement a websocket upgrade check.
If no connection header has been set, a 'Connection: close' header is
automatically appended to allow the server to close the connection
immediately after the request/response.
Update the documentation related to http-check send.
This fixes the github issue #1009.
This is a regression in 7838a79ba ("MEDIUM: mux-h2/trace: add lots of traces
all over the code"). The issue was found using -Wmisleading-indentation.
This patch fixes GitHub issue #1015.
The impact of this bug is that it could in theory cause occasional delays
on some long responses for connections having otherwise no traffic.
This patch should be backported to 2.1+, the commit was first tagged in
v2.1-dev2.
This bug happens when a service has multiple records on the same host
and the server provides the A/AAAA resolution in the response as AR
(Additional Records).
In such a condition, the first occurrence of the host will be taken from
the Additional section, while the second (and next ones) will be processed
by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the
resolution may diverge, like described in github issue #971.
Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname.
IE: with the following type of response:
;; ANSWER SECTION:
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 85 A3.tld.
;; ADDITIONAL SECTION:
A2.tld. 3600 IN A 192.168.0.2
A3.tld. 3600 IN A 192.168.0.3
A1.tld. 3600 IN A 192.168.0.1
A3.tld. 3600 IN A 192.168.0.3
the first A3 host is resolved using the Additional Section and the
second one through a dedicated A request.
When linking the SRV records to their respective Additional one, a
condition was missing (check whether said SRV record is already attached to
an Additional one), leading the parsing to stop at the first SRV record
whose target field matches the Additional record name. Hence only the first
occurrence of a target was managed by an additional record.
This patch adds a condition in this loop to ensure the record being
parsed is not already linked to an Additional Record. If so, we can
carry on the parsing to find a possible next one with the same target
field value.
backport status: 2.2 and above
SSL_CTX_get0_privatekey is an openssl/boringssl-specific function present
since openssl-1.0.2; let us define a readable guard for it, not depending
on HA_OPENSSL_VERSION.
Since commit 8a069eb9a ("MINOR: debug: add a trivial PRNG for scheduler
stress-tests"), 32-bit gcc 4.7 emits this warning when parsing the
initial seed for the debugger's RNG (2463534242):
src/debug.c:46:1: warning: this decimal constant is unsigned only in ISO C90 [enabled by default]
Let's mark it explicitly unsigned.
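Illustrative one-liner only (the variable name is hypothetical): suffixing the
literal makes it unsigned under C90 as well, which silences the warning.
  static unsigned int prng_state = 2463534242U;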
On the frontend side, when a conn-stream is detached from an H1 connection, the
H1 stream is destroyed and, if we already have some data to parse (a
pipelined request), we process these data immediately by calling
h1_process(). Then we adjust the H1 connection timeout. But h1_process() may
fail and release the H1 connection. For instance, a parsing error may be
reported. Thus, when that happens, we must not use the H1 connection anymore
and must exit.
This patch must be backported as far as 2.2. This bug can impact the 2.3
and the 2.2, in theory, if the h1 stream creation fails. But, concretely, it
only fails on the 2.4 because the requests are now parsed at this step.
When a channel is set in TUNNEL mode, we now always set the CF_NEVER_WAIT flag,
to be sure to never wait for sending data. It is important because in TUNNEL
mode, we have no idea if more data are expected or not. Setting this flag
prevents the MSG_MORE flag from being set on the connection.
It is only a problem with the HTX, since 2.2. On previous versions, the
MSG_MORE flag is only set on the mux's initiative. In fact, the problem arises
because there is an ambiguity in tunnel mode about the HTX_FL_EOI flag. In this
mode, from the mux point of view, while the SHUTR is not received, more data are
expected. But from the channel point of view, we want to send data asap.
In the short term, this fix is good enough and is valid anyway. But for the long
term a more reliable solution must be found. At least, the to_forward field must
regain its original meaning.
This patch must be backported as far as 2.2.
When a protocol upgrade request is received, once parsed, it is waiting for
the response in the DONE state. But we must not set the flag CS_FL_EOI
because we don't know if a protocol upgrade will be performed or not.
Now, it is set on the response path, if both sides reached the DONE
state. If a protocol upgrade is finally performed, both sides are switched to
the TUNNEL state. Thus the CS_FL_EOI flag is not set.
If backported, this patch must be adapted because for now it relies on last
2.4-dev changes. It may be backported as far as 2.0.
As stated in rfc7231, section 4.3.6, an HTTP tunnel via a CONNECT method
is successfully established if the server replies with any 2xx status
code. However, only 200 responses were considered valid. With this patch,
any 2xx response is now considered to establish the tunnel.
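A trivial sketch of the relaxed test (the function name is illustrative):
  static int connect_reply_establishes_tunnel(int status)
  {
      return status >= 200 && status < 300;   /* any 2xx, not only 200 */
  }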
This patch may be backported on demand to all stable versions and adapted
for the legacy HTTP. It has worked this way for a very long time and nobody
has complained.
Due to the addition of the OpenTracing filter it is necessary to define
ARGC_OT enum. This value is used in the functions fmt_directive() and
smp_resolve_args().
The OpenTracing filter uses several internal HAProxy functions to work
with variables and therefore requires two static local HAProxy functions,
var_accounting_diff() and var_clear(), to be declared global.
In fact, the var_clear() function was not originally defined as static,
but it lacked a declaration.
This new option allows to tune the maximum number of simultaneous
entries with the same primary key in the cache (secondary entries).
When we try to store a response in the cache and there are already
max-secondary-entries living entries in the cache, the storage will
fail (but the response will still be sent to the client).
It defaults to 10 and does not have a maximum number.
The secondary entry counter cannot be updated without going over all the
items of a duplicates list periodically. In order to avoid doing it too
often and to impact the cache's performances, a timestamp is added to
the cache_entry. It will store the timestamp (with second precision) of
the last iteration over the list (actually the last call of the
clear_expired_duplicates function). This way, this function will not be
called more than once per second for a given duplicates list.
Add an arbitrary maximum number of secondary entries per primary hash
(10 for now) to the cache. This prevents the cache from being filled
with duplicates of the same resource.
This works thanks to an entry counter that is kept in one of the
duplicates of the list (the last one).
When an entry is added to the list, the ebtree's implementation ensures
that it will be added to the end of the existing list so the only thing
to do to keep the counter updated is to get the previous counter from
the second to last entry.
Likewise, when an entry is explicitly deleted, we update the counter
from the list's last item.
SSL_CTX_add_server_custom_ext is an openssl-specific function present
since openssl-1.0.2; let us define a readable guard for it, not depending
on HA_OPENSSL_VERSION.
The cache entries are now added into the tree even when they are not
complete yet. If we realized while trying to add a response's payload
that the shctx was full, the entry was disabled through the
disable_cache_entry function, which cleared the key field of the entry's
node, but without actually removing it from the tree. So the shctx row
could be stolen from the entry and the row's content be rewritten while
a lookup in the tree would still find a reference to the old entry. This
caused a random crash in case of cache saturation and row reuse.
This patch adds the missing removal of the node from the tree next to
the reset of the key in disable_cache_entry.
This bug was introduced by commit 3243447 ("MINOR: cache: Add entry
to the tree as soon as possible")
It does not need to be backported.
In issue #1004, it was reported that it is not possible to remove
correctly a certificate after updating it when it came from a crt-list.
Indeed the "commit ssl cert" command on the CLI does not update the list
of ckch_inst in the crtlist_entry. Because of this, the "del ssl
crt-list" command does not remove neither the instances nor the SNIs
because they were never linked to the crtlist_entry.
This patch fixes the issue by inserting the ckch_inst in the
crtlist_entry once generated.
Must be backported as far as 2.2.
When a frontend H1 connection timed out waiting for the next request, a 408
error message is returned to the client. It is performed in the H1C task's
process function, h1_timeout_task(), and under the idle connection takeover
lock. If the 408 error message cannot be sent immediately, we wait for a
next retry. In this case, the lock must be released.
This bug was introduced by the commit c4bfa59f1d ("MAJOR: mux-h1: Create the
client stream as later as possible") and is specific to the 2.4-DEV. No
backport needed.
Depending on the context, the current eweight or the next one must be used
to reposition a server in the tree. When the server state is updated, for
instance its weight, the next eweight must be used because it is not yet
committed. However, when the server is used, on normal conditions, the
current eweight must be used.
In fact, it is only a bug on the 1.8. On newer versions, the changes on a
server are performed synchronously. But it is safer to rely on the right
eweight value to avoid any future bugs.
On the 1.8, it is important to do so, because the server state is updated
and committed inside the rendez-vous point. Thus, the next server state may
be out of sync with the current state for a short time, while waiting for all
threads to join the rendez-vous point. It is especially a problem if the next
eweight is set to 0, because in that case it must not be used to reposition the
server in the tree, otherwise this leads to a divide by 0.
This patch must be backported as far as 1.8.
This changes the subscribe/unsubscribe functions to rely on the control
layer's check_events/ignore_events. At the moment only the socket version
of these functions is present so the code should basically be the same.
Right now the connection subscribe/unsubscribe code needs to manipulate
FDs, which is not compatible with QUIC. In practice what we need there
is to be able to either subscribe or wake up depending on readiness at
the moment of subscription.
This commit introduces two new functions at the control layer, which are
provided by the socket code, to check for FD readiness or subscribe to it
at the control layer. For now it's not used.
Now we don't touch the fd anymore there, instead we rely on the ->drain()
provided by the control layer. As such the function was renamed to
conn_ctrl_drain().
This is what we need to drain pending incoming data from a connection.
The code was taken from conn_sock_drain() without the connection-specific
stuff. It still takes a connection for now for API simplicity.
conn_fd_handler() is 100% specific to socket code. It's about time
it moves to sock.c which manipulates socket FDs. With it comes
conn_fd_check() which tests for the socket's readiness. The ugly
connection status check at the end of the iocb was moved to an inlined
function in connection.h so that if we need it for other socket layers
it's not too hard to reuse.
The code was really only moved and not changed at all.
The send() loop present in this function and the error handling is already
present in raw_sock_from_buf(). Let's rely on it instead and stop touching
the FD from this place. The send flag was changed to use a more agnostic
CO_SFL_*. The name was changed to "conn_ctrl_send()" to remind that it's
meant to be used to send at the lowest level.
Add cur_server_timeout and cur_tunnel_timeout.
These sample fetches return the current timeout value for a stream. This
is useful to retrieve the value of a timeout which was changed via a
set-timeout rule.
Prepare the possibility to register sample fetches on the stream.
This commit is necessary to implement sample fetches to retrieve the
current timeout values.
Add a new http-request action 'set-timeout [server/tunnel]'. This action
can be used to update the server or tunnel timeout of a stream. It takes
two parameters, the timeout name to update and the new timeout value.
This rule is only valid for a proxy with backend capabilities. The
timeout value cannot be null. A sample expression can also be used
instead of a plain value.
Allow the modification of the tunnel timeout on the stream side.
Use a new field in the stream for the tunnel timeout. It is initialized
by the tunnel timeout from backend unless it has already been set by a
set-timeout tunnel rule.
Allow the modification of the timeout server value on the stream side.
Do not apply the default backend server timeout in back_establish if it
is already defined. This is the case if a set-timeout server rule has
been executed.