Compare commits


292 commits

William Lallemand
9b3345237a BUG/MINOR: admin: haproxy-reload rename -vv long option
The -vv option used --verbose as its long form, which was identical to
the long form of -v. Since the case statement matches top-to-bottom,
--verbose would always trigger -v (VERBOSE=2), making -vv unreachable
via its long option. The long form is renamed to --verbose=all to avoid
the conflict, and the usage string is updated accordingly.

Must be backported to 3.3.
2026-03-08 01:37:56 +01:00
William Lallemand
2a0cf52cfc MEDIUM: admin: haproxy-reload conversion to POSIX sh
The script relied on a bash-specific process substitution (< <(...)) to
feed socat's output into the read loop. This is replaced with a standard
POSIX pipe into a command group.

The response parsing is also simplified: instead of iterating over each
line with a while loop and echoing them individually, the status line is
read first, the "--" separator consumed, and the remaining output is
streamed to stderr or discarded as a whole depending on the verbosity
level.

Could be backported to 3.3 as it makes the script more portable, but it
introduces a slight change in the error format.
2026-03-08 01:37:52 +01:00
William Lallemand
551e5f5fd4 BUG/MINOR: admin: haproxy-reload use explicit socat address type
socat was used with the ${MASTER_SOCKET} variable directly, letting it
auto-detect the network protocol. However, when given a plain filename
that does not point to a UNIX socket, socat would create a file at that
path instead of reporting an error.

To fix this, the address type is now determined explicitly: if
MASTER_SOCKET points to an existing UNIX socket file (checked with -S),
UNIX-CONNECT: is used; if it matches a <host>:<port> pattern, TCP: is
used; otherwise an error is reported. The socat_addr variable is also
properly scoped as local to the reload() function.

Could be backported to 3.3.
2026-03-08 01:33:29 +01:00
Aurelien DARRAGON
2a2989bb23 CLEANUP: flt_http_comp: comp_state doesn't bother about the direction anymore
There is no need to keep duplicated comp_ctx and comp_algo members for
request vs response in the comp_state struct: thanks to the previous commit,
a compression filter is oriented on either the request or the response, and
2 distinct filters are instantiated when both request and response
compression must be handled.

This saves us duplicated struct members and the related operations.
2026-03-06 13:55:41 +01:00
Aurelien DARRAGON
cbebdb4ba8 MEDIUM: flt_http_comp: split "compression" filter in 2 distinct filters
Existing "compression" filter is a multi-purpose filter that will try
to compress both requests and responses according to "compression"
settings, such as "compression direction".

One of the pre-requisite work identified to implement decompression
filter is that we needed a way to manually define the sequence of
enabled filters to chain them in the proper order to make
compression and decompression chains work as expected in regard
to the intended use-case.

Due to the current nature of the "compression" filter this was not
possible, because the filter has a combined action as it will try
to compress both requests and responses, and as we are about to
implement "filter-sequence" directive, we will not be able to
change the order of execution of the compression filter between
requests and responses.

A possible solution we identified to solve this issue is to split the
existing "compression" filter into 2 distinct filters, one which is
request-oriented, "comp-req", and another one which is response-oriented
"comp-res". This is what we are doing in this commit. Compression logic
in itself is unchanged, "comp-req" will only aim to compress the request
while "comp-res" will try to compress the response. Both filters will
still be invoked on the request and response hooks, but each only does its
part of the job.

From now on, to compress both requests and responses, both filters have
to be enabled on the proxy. To preserve the original behavior, the
"compression" filter is still supported: it implicitly instantiates both
"comp-req" and "comp-res" filters, since the compression filter is now
effectively split into 2 separate filters under the hood.

When using "comp-res" and "comp-req" filters explicitly, the use of the
"compression direction" setting is not relevant anymore. Indeed, the
compression direction is assumed as soon as one or both filters are
enabled. Thus "compression direction" is kept as a legacy option in
order to configure the "compression" generic filter.

Documentation was updated.
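As a purely illustrative sketch (the directive names below are taken from
this message, not verified against the official documentation), enabling
both split filters explicitly might look like:

```
backend be_compress_both
    # hypothetical example based on the filter names described above
    filter comp-req
    filter comp-res
    compression algo gzip
    compression type text/html text/plain
```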
2026-03-06 13:55:31 +01:00
Aurelien DARRAGON
9549b05b94 MINOR: flt_http_comp: define and use proxy_get_comp() helper function
The proxy_get_comp() function can be used to retrieve the proxy->comp
options, or to allocate and initialize them if missing.

For now, it is solely used by parse_compression_options(), but the goal is
to be able to use this helper from multiple origins.
2026-03-06 13:55:24 +01:00
Remi Tricot-Le Breton
5e14904fef BUG/MINOR: jwt: Missing 'jwt_tokenize' return value check
There was a "jwt_tokenize" call whose return value was not checked.

This was found by coverity and raised in GitHub #3277.
This patch can be backported to all stable branches.
2026-03-06 09:52:19 +01:00
Christopher Faulet
af6b9a0967 BUG/MINOR: backend: Don't get proto to use for websocket if there is no server
In connect_server(), it is possible to have no server defined (dispatch mode
or transparent backend). In that case, we must be careful to check the srv
variable in all calls involving the server. This was not done in one place,
when the protocol to use for websocket is retrieved. This must not be done
when there is no server.

This patch should fix the first report in #3144. It must be backported to
all stable versions.
2026-03-06 09:24:32 +01:00
Christopher Faulet
bfe5a2c3d7 BUG/MINOR: ssl-sample: Fix sample_conv_sha2() by checking EVP_Digest* failures
In sample_conv_sha2(), calls to EVP_Digest* can fail. So we must check the
return value of each call, report an error on failure and release the
digest context.

This patch should fix the issue #3274. It should be backported as far as
2.6.
2026-03-06 09:07:16 +01:00
Christopher Faulet
b48c9a1465 BUG/MINOR: stconn: Increase SC bytes_out value in se_done_ff()
When data are sent via the zero-copy data forwarding, we must not forget to
increase the stconn bytes_out value.

This patch must be backported to 3.3.
2026-03-05 16:17:33 +01:00
Willy Tarreau
fcfabd0d90 [RELEASE] Released version 3.4-dev6
Released version 3.4-dev6 with the following main changes :
    - CLEANUP: acme: remove duplicate includes
    - BUG/MINOR: proxy: detect strdup error on server auto SNI
    - BUG/MINOR: server: set auto SNI for dynamic servers
    - BUG/MINOR: server: enable no-check-sni-auto for dynamic servers
    - MINOR: haterm: provide -b and -c options (RSA key size, ECDSA curves)
    - MINOR: haterm: add long options for QUIC and TCP "bind" settings
    - BUG/MINOR: haterm: missing allocation check in copy_argv()
    - BUG/MINOR: quic: fix counters used on BE side
    - MINOR: quic: add BUG_ON() on half_open_conn counter access from BE
    - BUG/MINOR: quic/h3: display QUIC/H3 backend module on HTML stats
    - BUG/MINOR: acme: acme_ctx_destroy() leaks auth->dns
    - BUG/MINOR: acme: wrong labels logic always memprintf errmsg
    - MINOR: ssl: clarify error reporting for unsupported keywords
    - BUG/MINOR: acme: fix incorrect number of arguments allowed in config
    - CLEANUP: haterm: remove unreachable labels hstream_add_data()
    - CLEANUP: haterm: avoid static analyzer warnings about rand() use
    - CLEANUP: ssl: Remove a useless variable from ssl_gen_x509()
    - CI: use the latest docker for QUIC Interop
    - CI: remove redundant "halog" compilation
    - CLEANUP: cfgparse: accept-invalid-http-* does not support "no"/"defaults"
    - BUG/MEDIUM: spoe: Acquire context buffer in applet before consuming a frame
    - MINOR: traces: always mark trace_source as thread-aligned
    - MINOR: ncbmbuf: improve itbmap_next() code
    - MINOR: proxy: improve code when checking server name conflicts
    - MINOR: quic: add a new metric for ncbuf failures
    - BUG/MINOR: haterm: cannot reset default "haterm" mode
    - BUG/MEDIUM: cpu-topo: Distribute CPUs fairly across groups
    - BUG/MINOR: quic: missing app ops init during backend 0-RTT sessions
    - CLEANUP: ssl: remove outdated comments
    - MINOR: mux-h2: also count glitches on invalid trailers
    - MINOR: mux-h2: add a new setting, "tune.h2.log-errors" to tweak error logging
    - BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream
    - BUG/MINOR: server: adjust initialization order for dynamic servers
    - CLEANUP: tree-wide: drop a few useless null-checks before free()
    - CLEANUP: quic-stats: include counters from quic_stats
    - REORG: stats/counters: move extra_counters to counters not stats
    - CLEANUP: stats: drop stats.h / stats-t.h where not needed
    - MEDIUM: counters: change the fill_stats() API to pass the module and extra_counters
    - CLEANUP: counters: only retrieve zeroes for unallocated extra_counters
    - MEDIUM: counters: add a dedicated storage for extra_counters in various structs
    - MINOR: counters: store a tgroup step for extra_counters to access multiple tgroups
    - MEDIUM: counters: store the number of thread groups accessing extra_counters
    - MINOR: counters: add EXTRA_COUNTERS_BASE() to retrieve extra_counters base storage
    - MEDIUM: counters: return aggregate extra counters in ->fill_stats()
    - MEDIUM: counters: make EXTRA_COUNTERS_GET() consider tgid
    - BUG/MINOR: call EXTRA_COUNTERS_FREE() before srv_free_params() in srv_drop()
    - MINOR: promex: test applet resume in stress mode
    - BUG/MINOR: promex: fix server iteration when last server is deleted
    - BUG/MINOR: proxy: add dynamic backend into ID tree
    - MINOR: proxy: convert proxy flags to uint
    - MINOR: server: refactor srv_detach()
    - MINOR: proxy: define a basic "del backend" CLI
    - MINOR: proxy: define proxy watcher member
    - MINOR: stats: protect proxy iteration via watcher
    - MINOR: promex: use watcher to iterate over backend instances
    - MINOR: lua: use watcher for proxies iterator
    - MINOR: proxy: add refcount to proxies
    - MINOR: proxy: rename default refcount to avoid confusion
    - MINOR: server: take proxy refcount when deleting a server
    - MINOR: lua: handle proxy refcount
    - MINOR: proxy: prevent backend removal when unsupported
    - MINOR: proxy: prevent deletion of backend referenced by config elements
    - MINOR: proxy: prevent backend deletion if server still exists in it
    - MINOR: server: mark backend removal as forbidden if QUIC was used
    - MINOR: cli: implement wait on be-removable
    - MINOR: proxy: add comment for defaults_px_ref/unref_all()
    - MEDIUM: proxy: add lock for global accesses during proxy free
    - MEDIUM: proxy: add lock for global accesses during default free
    - MINOR: proxy: use atomic ops for default proxy refcount
    - MEDIUM: proxy: implement backend deletion
    - REGTESTS: add a test on "del backend"
    - REGTESTS: complete "del backend" with unnamed defaults ref free
    - BUG/MINOR: hlua: fix return with push nil on proxy check
    - BUG/MEDIUM: stream: Handle TASK_WOKEN_RES as a stream event
    - MINOR: quic: use signed char type for ALPN manipulation
    - MINOR: quic/h3: reorganize stream reject after MUX closure
    - MINOR: mux-quic: add function for ALPN to app-ops conversion
    - MEDIUM: quic/mux-quic: adjust app-ops install
    - MINOR: quic: use server cache for ALPN on BE side
    - BUG/MEDIUM: hpack: correctly deal with too large decoded numbers
    - BUG/MAJOR: qpack: unchecked length passed to huffman decoder
    - BUG/MINOR: qpack: fix 1-byte OOB read in qpack_decode_fs_pfx()
    - BUG/MINOR: quic: fix OOB read in preferred_address transport parameter
    - BUG/MEDIUM: qpack: correctly deal with too large decoded numbers
    - BUG/MINOR: hlua: Properly enable/disable line receives from HTTP applet
    - BUG/MEDIUM: hlua: Fix end of request detection when retrieving payload
    - BUG/MINOR: hlua: Properly enable/disable receives for TCP applets
    - MINOR: htx: Add a function to retrieve the HTTP version from a start-line
    - MINOR: h1-htx: Reports non-HTTP version via dedicated flags
    - BUG/MINOR: h1-htx: Be sure that H1 response version starts by "HTTP/"
    - MINOR: http-ana: Save the message version in the http_msg structure
    - MEDIUM: http-fetch: Rework how HTTP message version is retrieved
    - MEDIUM: http-ana: Use the version of the opposite side for internal messages
    - DEBUG: stream: Display the currently running rule in stream dump
    - MINOR: filters: Use filter API as far as possible to break loops on filters
    - MINOR: filters: Set last_entity when a filter fails on stream_start callback
    - MINOR: stream: Display the currently running filter per channel in stream dump
    - DOC: config: Use the right alias for %B
    - BUG/MINOR: channel: Increase the stconn bytes_in value in channel_add_input()
    - BUG/MINOR: sample: Fix sample to retrieve the number of bytes received and sent
    - BUG/MINOR: http-ana: Increment scf bytes_out value if an haproxy error is sent
    - BUG/MAJOR: fcgi: Fix param decoding by properly checking its size
    - BUG/MAJOR: resolvers: Properly lowered the names found in DNS response
    - BUG/MEDIUM: mux-fcgi: Use a safe loop to resume each stream eligible for sending
    - MINOR: mux-fcgi: Use a dedicated function to resume streams eligible for sending
    - CLEANUP: qpack: simplify length checks in qpack_decode_fs()
    - MINOR: counters: Introduce COUNTERS_UPDATE_MAX()
    - MINOR: listeners: Update the frequency counters separately when needed
    - MINOR: proxies: Update beconn separately
    - MINOR: stats: Add an option to disable the calculation of max counters
2026-03-05 15:55:28 +01:00
Olivier Houchard
5d02d33ee1 MINOR: stats: Add an option to disable the calculation of max counters
Add a new option, "stats calculate-max-counters [on|off]".
It makes it possible to disable the calculation of max counters, as they
can have a performance cost.
2026-03-05 15:39:42 +01:00
Olivier Houchard
1544842801 MINOR: proxies: Update beconn separately
Update beconn separately from the call to COUNTERS_UPDATE_MAX(), since
there will soon be an option to make COUNTERS_UPDATE_MAX() do nothing,
and we still want beconn to be properly updated, as it is used for other
purposes.
2026-03-05 15:39:42 +01:00
Olivier Houchard
88bc2bdfc9 MINOR: listeners: Update the frequency counters separately when needed
Update the frequency counters that are exported to the stats page
outside of the call to COUNTERS_UPDATE_MAX(), so that they will
happen even if COUNTERS_UPDATE_MAX() ends up doing nothing.
2026-03-05 15:39:42 +01:00
Olivier Houchard
0087651128 MINOR: counters: Introduce COUNTERS_UPDATE_MAX()
Introduce COUNTERS_UPDATE_MAX(), and use it instead of using
HA_ATOMIC_UPDATE_MAX() directly.
For now it just calls HA_ATOMIC_UPDATE_MAX(), but will later be modified
so that we can disable max calculation.
This can be backported up to 2.8 if the usage of COUNTERS_UPDATE_MAX()
generates too many conflicts.
2026-03-05 15:39:42 +01:00
Frederic Lecaille
65d3416da5 CLEANUP: qpack: simplify length checks in qpack_decode_fs()
This patch simplifies the decoding loop by merging the variable-length
integer truncation check (len == -1) with the subsequent buffer
availability check (len < length).

This removes redundant code blocks and improves readability without
changing the decoding logic.

Note that the second removal is correct, as the check was a duplicate and
unnecessary.
2026-03-05 15:42:02 +01:00
Christopher Faulet
7d48e80da5 MINOR: mux-fcgi: Use a dedicated function to resume streams eligible for sending
The same code was duplicated in fcgi_process_mux() and fcgi_send(). So let's
move it to a dedicated function.
2026-03-05 15:35:36 +01:00
Christopher Faulet
9b22f22858 BUG/MEDIUM: mux-fcgi: Use a safe loop to resume each stream eligible for sending
At the end of fcgi_send(), if the connection is no longer full, we loop on
the send list to resume FCGI streams for sending. But a stream may be
removed from this list during the loop, so a safe loop must be used.

This patch should be backported to all stable versions.
2026-03-05 15:35:36 +01:00
Christopher Faulet
25d6e65aae BUG/MAJOR: resolvers: Properly lowered the names found in DNS response
Names found in DNS responses are lowered to be compared. A name is composed
of several labels, strings preceded by their length on one byte. For
instance:

 3www7haproxy3org

There is a bug when labels are lowered: the label length is not skipped and
the tolower() function is called on it. So for a label length in the range
[65-90] (uppercase chars), 32 is added to the label length due to the
uppercase-to-lowercase conversion. This bug can lead to an OOB read later in
the resolvers code.

The fix is quite obvious, the label length must be skipped when the label is
lowered.

Thank you to Kamil Frankowicz for having reported this.

This patch must be backported to all stable versions.
2026-03-05 15:35:31 +01:00
Christopher Faulet
96286b2a84 BUG/MAJOR: fcgi: Fix param decoding by properly checking its size
In the functions used to decode a FCGI parameter, the test on the data
length before reading the parameter's name and value did not take the offset
used to skip already-parsed data into account. So it was possible to read
more data than available (OOB read). To trigger this, a malicious FCGI
server must send a forged GET_VALUES_RESULT record containing a parameter
with a wrong name/value length.

Thank you to Kamil Frankowicz for having reported this.

This patch must be backported to all stable versions.
2026-03-05 15:35:21 +01:00
Christopher Faulet
306931dfb1 BUG/MINOR: http-ana: Increment scf bytes_out value if an haproxy error is sent
When an HAProxy error is sent, we must not forget to increment the
bytes_out value on the front stconn.

This patch must be backported to 3.3.
2026-03-05 15:34:47 +01:00
Christopher Faulet
4791501011 BUG/MINOR: sample: Fix sample to retrieve the number of bytes received and sent
There was an issue in the if/else statement in smp_fetch_bytes() function.
When req.bytes_in or req.bytes_out was requested, res.bytes_in was always
returned. It is now fixed.

This patch must be backported to 3.3.
2026-03-05 15:34:47 +01:00
Christopher Faulet
e0728ebcf4 BUG/MINOR: channel: Increase the stconn bytes_in value in channel_add_input()
This function is no longer used, so it is not really a bug. But it is still
available and could be used by legacy applets. In that case, we must take
care to increment the stconn bytes_in value accordingly when input data are
inserted.

This patch must be backported to 3.3.
2026-03-05 15:34:47 +01:00
Christopher Faulet
97a63835af DOC: config: Use the right alias for %B
In the custom log format part, %[req.bytes_in] was erroneously documented as
the alias of %B. The correct alias is %[res.bytes_in]. It is now fixed.

This patch must be backported to 3.3.
2026-03-05 15:34:47 +01:00
Christopher Faulet
50fb37e5fe MINOR: stream: Display the currently running filter per channel in stream dump
Since 3.1, when a stream's info is dumped, it is possible to print the
yielding filter on each channel, if any. This was useful to detect a buggy
filter spinning in a loop, but it does not help detect a filter consuming
too much CPU per execution. We can see that a filter was executing in the
backtrace reported by the watchdog, but we are unable to spot the specific
one.

Thanks to this patch, it is now possible: when a dump is emitted, the
running or yielding filters on each channel are displayed with their
current state (RUNNING or YIELDING).

This patch could be backported as far as 3.2 because it could be useful to
spot issues. But the filter API was slightly refactored in 3.4, so this
patch should be adapted.
2026-03-05 15:34:47 +01:00
Christopher Faulet
cd9f159210 MINOR: filters: Set last_entity when a filter fails on stream_start callback
On the stream, last_entity should reference the last rule or the last
filter evaluated during the stream processing. However, this info was not
saved when a filter failed in the stream_start callback function. It is now
fixed.

This patch could be backported as far as 3.1.
2026-03-05 15:34:47 +01:00
Christopher Faulet
3eadf887f7 MINOR: filters: Use filter API as far as possible to break loops on filters
When the filters API was refactored to improve loops on filters, some places
were not updated (or not fully updated). Some loops were not relying on
resume_filter_list_break() while it was possible. So let's do so with this
patch.
2026-03-05 15:34:47 +01:00
Christopher Faulet
9f1e9ee0ed DEBUG: stream: Display the currently running rule in stream dump
Since 2.5, when a stream's info is dumped, it is possible to print the
yielding rule, if any. This was useful to detect buggy rules spinning in a
loop, but it does not help detect a rule consuming too much CPU per
execution. We can see that a rule was executing in the backtrace reported by
the watchdog, but we are unable to spot the specific rule.

Thanks to this patch, it is now possible: when a dump is emitted, the
running or yielding rule is displayed with its current state (RUNNING or
YIELDING).

This patch could be backported as far as 3.2 because it could be useful to
spot issues.
2026-03-05 15:34:47 +01:00
Christopher Faulet
4939f18ff7 MEDIUM: http-ana: Use the version of the opposite side for internal messages
When a response is sent by HAProxy, from an applet or by the analyzers,
the request version is used for the response. The main reason is that there
is no real version for the response when this happens: "HTTP/1.1" is used,
but it is in fact just an HTX response, so the version of the request is
used.

In the same manner, when the request is sent from an applet (httpclient),
the response version is used, once available.

The purpose of this change is to return the most accurate version from the
user point of view.
2026-03-05 15:34:46 +01:00
Christopher Faulet
b2ba3c6662 MEDIUM: http-fetch: Rework how HTTP message version is retrieved
Thanks to previous patches, we can now rely on the version stored in the
http_msg structure to get the request or the response version.

"req.ver" and "res.ver" sample fetch functions return the string
representation of the version, without the prefix, i.e. "<major>.<minor>",
but only if the version is valid. For the response, "res.ver" may be used
from a health-check context; in that case, the HTX message is used.

"capture.req.ver" and "capture.res.ver" do the same, but the "HTTP/" prefix
is added to the result. And "capture.res.ver" cannot be called from a
health-check.

To ease the version formatting and avoid code duplication, a helper
function was added, so these samples now rely on "get_msg_version()".
2026-03-05 15:34:46 +01:00
Christopher Faulet
7bfb66d2b1 MINOR: http-ana: Save the message version in the http_msg structure
When the request or the response is received, the numerical value of the
message version is now saved. To do so, the field "vsn" was added in the
http_msg structure. It is an unsigned char. The 4 MSB bits are used for the
major digit and the 4 LSB bits for the minor one.

Of course, the version must be valid. The HTX_SL_F_NOT_HTTP flag of the
start-line is used to make sure the version is valid. But because this flag
is quite new, we also check that the string representation of the version is
8 bytes long. 0 means the version is not valid.
2026-03-05 15:34:46 +01:00
Christopher Faulet
7a474855b4 BUG/MINOR: h1-htx: Be sure that H1 response version starts by "HTTP/"
When the response is parsed, we test the version to be sure it is valid.
However, the protocol was not tested. Now we make sure that the response
version starts with "HTTP/", otherwise an error is returned.

Of course, it is still possible to by-pass this test with
"accept-unsafe-violations-in-http-response" option.

This patch could be backported to all stable versions.
2026-03-05 15:34:46 +01:00
Christopher Faulet
88765b69e0 MINOR: h1-htx: Reports non-HTTP version via dedicated flags
Now, when the HTTP version format is not strictly valid, flags are set on
the H1 parser and the HTX start-line: H1_MF_NOT_HTTP is set on the H1 parser
and HTX_SL_F_NOT_HTTP on the HTX start-line. These flags were introduced to
avoid parsing the version again and again to know whether it is valid,
especially since it is valid most of the time.
2026-03-05 15:34:46 +01:00
Christopher Faulet
9951f9cf85 MINOR: htx: Add a function to retrieve the HTTP version from a start-line
htx_sl_vsn() function can now be used to retrieve the ist string
representing the HTTP version from a start-line passed as parameter. This
function takes care to return the right part of the start-line, depending on
its type (request or response).
2026-03-05 15:34:46 +01:00
Christopher Faulet
e6a8ef5521 BUG/MINOR: hlua: Properly enable/disable receives for TCP applets
From a lua TCP applet, in functions used to retrieve data (receive,
try_receive and getline), we must take care to disable receives when data
are returned (or on failure) and to restart receives when these functions
are called again. In addition, when an applet execution is finished, we must
restart receives to properly drain the request.

This patch should be backported to 3.3. On older versions, no bug was
reported, so we can wait for a report first.
2026-03-05 15:34:46 +01:00
Christopher Faulet
7fe1a92bb3 BUG/MEDIUM: hlua: Fix end of request detection when retrieving payload
When the Lua HTTP applet was refactored to use its own buffers, a bug was
introduced in the receive() and getline() functions. We rely on the
HTX_FL_EOM flag to detect the end of the request, but nothing prevents
extra calls to these functions once the whole request was consumed. If this
happens, the call will yield waiting for more data with no way to stop it.

To fix the issue, the APPLET_REQ_RECV flag was added to indicate that the
whole request was received.

This patch should fix #3293. It must be backported to 3.3.
2026-03-05 15:34:46 +01:00
Christopher Faulet
a779d0d23a BUG/MINOR: hlua: Properly enable/disable line receives from HTTP applet
From a lua HTTP applet, in the getline() function, we must take care to
disable receives when a line is retrieved and to restart receives when the
function is called again. In addition, when an applet execution is finished,
we must restart receives to properly drain the request.

This patch could help fix #3293. It must be backported to 3.3. On older
versions, no bug was reported, so we can wait for a report first. But in
that case, hlua_applet_http_recv() should also be fixed (on 3.3 it was fixed
during the applets refactoring).
2026-03-05 15:34:46 +01:00
Frederic Lecaille
0a02acecf3 BUG/MEDIUM: qpack: correctly deal with too large decoded numbers
Same fix as this one for hpack:

	7315428615 ("BUG/MEDIUM: hpack: correctly deal with too large decoded numbers")

Indeed, the encoding of integers for QPACK is the same as for HPACK, but
with 64-bit integers.

Must be backported as far as 2.6.
2026-03-05 15:02:02 +01:00
Frederic Lecaille
cdcdc016cc BUG/MINOR: quic: fix OOB read in preferred_address transport parameter
This bug impacts only the QUIC backend, which receives the server
preferred_address transport parameter.

In quic_transport_param_dec_pref_addr(), the boundary check for the
connection ID was inverted and incorrect. This could lead to an
out-of-bounds read during the following memcpy.

This patch fixes the comparison to ensure the buffer has enough input data
for both the CID and the mandatory Stateless Reset Token.

Thank you to Kamil Frankowicz for having reported this.

Must be backported to 3.3.
2026-03-05 15:02:02 +01:00
Frederic Lecaille
54b614d2b5 BUG/MINOR: qpack: fix 1-byte OOB read in qpack_decode_fs_pfx()
In qpack_decode_fs_pfx(), if the first qpack_get_varint() call
consumes the entire buffer, the code would perform a 1-byte
out-of-bounds read when accessing the sign bit via **raw.

This patch adds an explicit length check at the beginning of
qpack_get_varint(), which systematically secures all other callers
against empty inputs. It also adds a necessary check before the
second varint call in qpack_decode_fs_pfx() to ensure data is still
available before dereferencing the pointer to extract the sign bit,
returning QPACK_RET_TRUNCATED if the buffer is exhausted.

Thank you to Kamil Frankowicz for having reported this.

Must be backported as far as 2.6.
2026-03-05 15:02:02 +01:00
Frederic Lecaille
e38b86e72c BUG/MAJOR: qpack: unchecked length passed to huffman decoder
A call to the huffman decoder function (huff_dec()) is made from
qpack_decode_fs() without checking the buffer length passed to it, leading
to an OOB read which can crash the process.

Thank you to Kamil Frankowicz for having reported this.

Must be backported as far as 2.6.
2026-03-05 15:02:02 +01:00
Willy Tarreau
7315428615 BUG/MEDIUM: hpack: correctly deal with too large decoded numbers
The varint hpack decoder supports unbounded numbers but returns 32-bit
results. This means that truncation may happen on some field
lengths or indexes that would be emitted as quantities that do not fit
in a 32-bit number. The final value will also depend on how the left
shift operation behaves on the target architecture (e.g. whether bits
are lost or used modulo 31). This could lead to a desynchronization of
the HPACK stream decoding compared to what an external observer would
see (e.g. from a network traffic capture). However, there isn't any
impact between streams, HPACK is performed at the connection level,
not at the stream level, so no stream may try to leverage this
limitation to have any effect on another one.

For the fix, instead of adding checks everywhere in the loop and for
the final stage, let's rewrite the decoder to compare the read value
to a max value that is shifted by 7 bits for every 7 bits read. This
allows a sender to continue to emit zeroes for higher bits without
being blocked, while detecting that a received value would overflow.
The loop is now simpler as it deals both with values with the higher
bit set and the final ones, and stops once the final value was recorded.

A test on non-zero before performing the shift was added to please
ubsan, though in practice zero shifted by any quantity remains zero.
But the test is cheap so that's OK.

Thanks to Guillaume Meunier, Head of Vulnerability Operations Center
France at Orange Cyberdefense, for reporting this bug.

This should be backported to all stable versions.
2026-03-05 14:33:21 +01:00
Amaury Denoyelle
b1441c6440 MINOR: quic: use server cache for ALPN on BE side
Some checks failed
Contrib / build (push) Has been cancelled
alpine/musl / gcc (push) Has been cancelled
VTest / Generate Build Matrix (push) Has been cancelled
Windows / Windows, gcc, all features (push) Has been cancelled
VTest / (push) Has been cancelled
On the backend side, QUIC MUX may be started preemptively before the
ALPN negotiation. This is useful notably for 0-RTT implementation.

However, this was a source of crashes: the ALPN was expected to be
retrieved from the server cache, but the QUIC MUX still used the ALPN
from the transport layer. This could cause a crash, especially when
several connections run in parallel, as the server cache is shared among
threads.

Thanks to the previous patch which reworks QUIC MUX init, this issue can
now be fixed. Indeed, if conn_get_alpn() is not successful, the MUX can
look at the server cache again to use the expected value.

Note that this could still prevent the MUX from working as expected if
the server cache is reset between connect_server() and MUX init. Thus,
the ultimate solution would be to copy the cached ALPN into the
connection. This problem is not specific to QUIC though, and must be
fixed in a separate patch.
2026-03-03 16:23:03 +01:00
Amaury Denoyelle
940e1820f6 MEDIUM: quic/mux-quic: adjust app-ops install
This patch reworks the installation of app-ops layer by QUIC MUX.
Previously, app_ops field was stored directly into the quic_conn
structure. Then the MUX reused it directly during its qmux_init().

This patch removes app_ops field from quic_conn and replaces it with a
copy of the negotiated ALPN. By using quic_alpn_to_app_ops(), it ensures
it remains compatible with a known application layer.

On the MUX layer, qcc_install_app_ops() now uses the standard
conn_get_alpn() to retrieve the ALPN from the transport layer. This is
done via the newly defined <get_alpn> QUIC xprt callback.

This new architecture should be cleaner as it better highlights the
responsibility of each layer in the ALPN/app negotiation.
2026-03-03 16:22:57 +01:00
Amaury Denoyelle
9c7cf1c684 MINOR: mux-quic: add function for ALPN to app-ops conversion
Extract the conversion from ALPN to qcc_app_ops type from quic_conn
source file into QUIC MUX. The newly created function is named
quic_alpn_to_app_ops(). This will serve as a central point to identify
which ALPNs are currently supported in our QUIC stack.

This patch is purely a small refactoring. It will be useful for the next
one, which reworks the MUX app-ops layer init. Notably, the current
cleanup allows removing the H3/hq-interop headers from the quic_conn
source file.
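The central mapping can be pictured as a small lookup like the one below. Only the function name quic_alpn_to_app_ops() and the h3/hq-interop tokens come from the commit; the qcc_app_ops struct here is a stand-in, not haproxy's actual definition.

```c
#include <string.h>
#include <stddef.h>

/* Stand-in for haproxy's application-ops descriptor */
struct qcc_app_ops { const char *name; };

static const struct qcc_app_ops h3_ops = { "h3" };
static const struct qcc_app_ops hq_interop_ops = { "hq-interop" };

/* Map a negotiated ALPN token to its application ops, or NULL when
 * the protocol is not supported by the QUIC stack. A single central
 * table makes the supported ALPNs easy to audit.
 */
static const struct qcc_app_ops *quic_alpn_to_app_ops(const char *alpn,
                                                      size_t len)
{
    if (len == 2 && memcmp(alpn, "h3", 2) == 0)
        return &h3_ops;
    if (len == 10 && memcmp(alpn, "hq-interop", 10) == 0)
        return &hq_interop_ops;
    return NULL;    /* unsupported application protocol */
}
```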
2026-03-03 16:20:16 +01:00
Amaury Denoyelle
4120faf289 MINOR: quic/h3: reorganize stream reject after MUX closure
The QUIC MUX layer is closed before its transport counterpart. It may
then be necessary to reject any new streams opened by the remote peer.
However, this operation depends on the application protocol.

Previously, a function qc_h3_request_reject() was directly implemented
in the quic_conn source file, for use when HTTP/3 had been negotiated.
However, this solution was not extensible and broke layering.

This patch introduces a proper separation with a <strm_reject> callback
defined in the quic_conn structure. When set, it is used to preemptively
close any new stream. The QUIC MUX is responsible for setting it just
before its closure.

No functional change. This patch is purely a refactoring with a better
architecture design. In particular, H3-specific code is now completely
removed from the transport layer.
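The callback separation can be sketched like this. Only the <strm_reject> field name comes from the commit; the struct layout, function names, and return convention are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Transport-level connection keeps an app-agnostic hook. The MUX
 * installs it just before closing; the transport then uses it to
 * reject streams opened afterwards by the peer.
 */
struct quic_conn {
    /* set by the MUX just before its closure, NULL otherwise */
    int (*strm_reject)(struct quic_conn *qc, uint64_t stream_id);
    int rejected;   /* demo counter for the example */
};

/* app-specific reject, e.g. what an H3 layer would install */
static int h3_strm_reject(struct quic_conn *qc, uint64_t id)
{
    (void)id;
    qc->rejected++;
    return 0;       /* stream refused */
}

/* transport side: called when the peer opens a new stream */
static int qc_handle_new_stream(struct quic_conn *qc, uint64_t id)
{
    if (qc->strm_reject)
        return qc->strm_reject(qc, id); /* MUX gone: reject */
    return 1;                           /* accept: MUX handles it */
}
```

The transport never needs to know which application protocol was negotiated; it only invokes whatever hook the MUX left behind.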
2026-03-03 16:19:13 +01:00
Amaury Denoyelle
58830990d0 MINOR: quic: use signed char type for ALPN manipulation
In most of the haproxy code, the ALPN is handled as a signed char
pointer. In the QUIC code, however, it is manipulated as unsigned.

Unify this by using the signed type in the QUIC code as well. This
allows removing a bunch of unnecessary casts.
2026-03-03 16:11:58 +01:00
Christopher Faulet
13c3445163 BUG/MEDIUM: stream: Handle TASK_WOKEN_RES as a stream event
The conversion of TASK_WOKEN_RES to a stream event was missing. Among other
things, this wakeup reason is used when a stream is dequeued. So it was
possible to skip the connection establishment if the stream was also woken
up for a timer reason. When this happened, the stream was blocked till the
queue timeout expiration.

Converting TASK_WOKEN_RES to STRM_EVT_RES fixes the issue.

This patch should fix the issue #3290. It must be backported as far as 3.2.
2026-03-03 15:16:10 +01:00
Amaury Denoyelle
f41e684e9a BUG/MINOR: hlua: fix return with push nil on proxy check
hlua_check_proxy() may now return NULL if the target proxy instance has
been flagged for deletion. Thus, proxy methods have been adjusted and
may push nil to report such a case.

This patch fixes these error paths. When nil is pushed, 1 must be
returned instead of 0. This represents the count of pushed values on the
stack which can be retrieved by the caller.

No need to backport.
2026-03-03 08:45:27 +01:00
Amaury Denoyelle
e07a75c764 REGTESTS: complete "del backend" with unnamed defaults ref free
Complete delete backend regtests by checking deletion of a proxy with a
reference on an unnamed defaults instance. This operation is sensitive
as the defaults refcount is decremented, and when the last backend is
removed, the defaults instance is also freed.
2026-03-02 14:15:53 +01:00
Amaury Denoyelle
2f5030c847 REGTESTS: add a test on "del backend"
Add a reg-test for the "del backend" CLI command. First, checks are
performed to ensure a backend cannot be deleted if not in the expected
state.

Then, a "del backend" success is tested. Stats are dumped to ensure the
backend instance is indeed removed.
2026-03-02 14:14:05 +01:00
Amaury Denoyelle
712055f2f8 MEDIUM: proxy: implement backend deletion
This patch finalizes "del backend" handler by implementing the proper
proxy deletion.

After ensuring backend deletion can be performed, several steps are
executed. First, any watcher elements are updated to point to the next
proxy instance. The backend is then removed from the ID and name global
trees and is finally detached from proxies_list.

Once the backend instance is removed from proxies_list, the backend
cannot be found by new elements. Thread isolation is lifted and
proxy_drop() is called, which will purge the proxy if its refcount is
null. Thanks to recently introduced PROXIES_DEL_LOCK, proxy_drop() is
thread safe.
2026-03-02 14:14:05 +01:00
Amaury Denoyelle
6145f52d9c MINOR: proxy: use atomic ops for default proxy refcount
Default proxy refcount <def_ref> is used to account for references held
on a default proxy instance by standard proxies. Currently, this is necessary
when a default proxy defines TCP/HTTP rules or a tcpcheck ruleset.

Convert every access to <def_ref> so that atomic operations are now
used. Currently, this is not strictly needed as default proxy
references are only manipulated at init or deinit in single-thread mode.
However, when dynamic backend deletion is implemented, <def_ref>
will also be decremented at runtime.
2026-03-02 14:14:05 +01:00
Amaury Denoyelle
f64aa036d8 MEDIUM: proxy: add lock for global accesses during default free
This patch is similar to the previous one, but this time it deals with
functions related to defaults proxy instances. The PROXIES_DEL_LOCK
lock is used to protect accesses to the global collections.

This patch will be necessary to implement dynamic backend deletion, even
if defaults won't be used as the direct target of a "del backend" CLI
command. However, a backend may hold a reference on a default instance.
When the backend is freed, this reference is released, which can in turn
cause the freeing of the default proxy instance. All of this will occur
at runtime, outside of thread isolation.
2026-03-02 14:10:46 +01:00
Amaury Denoyelle
f58b2698ce MEDIUM: proxy: add lock for global accesses during proxy free
Define a new lock with label PROXIES_DEL_LOCK. Its purpose is to protect
operations performed on global lists or trees while a proxy is freed.

Currently, this lock is unneeded as proxies are only freed on
single-thread init or deinit. However, with the incoming dynamic backend
deletion, this operation will be also performed at runtime, outside of
thread isolation.
2026-03-02 14:09:25 +01:00
Amaury Denoyelle
a7d1c59a92 MINOR: proxy: add comment for defaults_px_ref/unref_all()
Write documentation for the functions related to default proxy instances.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
98c8c5e16e MINOR: cli: implement wait on be-removable
Implement the be-removable argument to the CLI "wait" command. This is
implemented via a be_check_for_deletion() invocation, also used by the
"del backend" handler.

The objective is to test whether a backend instance can be removed. If
this is not the case, the command may return immediately, if the target
proxy is incompatible with dynamic removal or if a user action is
required. Otherwise, the command will wait until the temporary
restriction is lifted.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
5ddfbd4b03 MINOR: server: mark backend removal as forbidden if QUIC was used
Currently, quic_conn instances on the backend side may access their parent proxy
instance during their lifetime. In particular, this is the case for
counters update, with <prx_counters> field directly referencing a proxy
memory zone.

As such, this prevents safe backend removal. One solution would be to
check if the upper connection instance is still alive, as a proxy cannot
be removed if connections are still active. However, this would
completely prevent proxy counters update via
quic_conn_prx_cntrs_update(), as this is performed on quic_conn release.

Another solution would be to use a refcount, or a dedicated counter
which accounts for QUIC connections on a backend instance. However, the
refcount is currently only used for short-term references, and it could
also have a negative impact on performance.

Thus, the simplest solution for now is to disable a backend removal if a
QUIC server is/was used in it. This is considered acceptable for now as
QUIC on the backend side is experimental.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
053887cc98 MINOR: proxy: prevent backend deletion if server still exists in it
Ensure a backend instance cannot be removed if there are still servers
in it. This is checked via be_check_for_deletion() so that "del backend"
cannot be executed. The only solution is to use "del server" to remove
the server instances first.

This check only covers servers not yet targeted via "del server". For
deleted servers not yet purged (due to their refcount), the proxy
refcount is incremented, but this does not block "del backend"
invocation.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
7f725f0754 MINOR: proxy: prevent deletion of backend referenced by config elements
Define a new proxy flag PR_FL_NON_PURGEABLE. This is used to mark every
proxy instance explicitly referenced in the config. Such instances
cannot be deleted at runtime.

Static use_backend/default_backend rules are handled in
proxy_finalize(). Also, sample expression proxy references are protected
via smp_resolve_args().

Note that this last case also incidentally protects any proxies
referenced via a CLI "set var" expression. This should not be necessary,
as in this case the variable value is instantly resolved, so the proxy
reference is not needed anymore. This also affects dynamic servers.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
7bf3020952 MINOR: proxy: prevent backend removal when unsupported
Prevent removal of a backend which relies on features not compatible
with dynamic backends. This is the case if either dispatch or
transparent option is used, or if a stick-table is declared.

These limitations are similar to the "add backend" ones.
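The kind of checks performed by be_check_for_deletion() across these commits can be sketched as follows. Only the function name, the PR_FL_NON_PURGEABLE flag, and the listed conditions (config references, stick-table, remaining servers) come from the commits; the struct layout, flag value, and return convention are illustrative.

```c
#include <stddef.h>

#define PR_FL_NON_PURGEABLE 0x1  /* illustrative value */

/* Stand-in for haproxy's struct proxy, reduced to what the checks need */
struct proxy {
    unsigned int flags;
    int nb_servers;        /* servers still attached */
    int has_stick_table;   /* stand-in for a declared stick-table */
};

/* Return NULL if the backend is removable, else a human-readable
 * reason explaining why "del backend" must be refused.
 */
static const char *be_check_for_deletion(const struct proxy *px)
{
    if (px->flags & PR_FL_NON_PURGEABLE)
        return "backend is referenced by the configuration";
    if (px->has_stick_table)
        return "backend declares a stick-table";
    if (px->nb_servers)
        return "backend still contains servers; use \"del server\" first";
    return NULL;    /* deletable */
}
```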
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
ad1e00b2ac MINOR: lua: handle proxy refcount
Implement proxy refcount for Lua proxy class. This is similar to the
server class.

In summary, proxy_take() is used to increment refcount when a Lua proxy
is instantiated. proxy_drop() is called via Lua garbage collector. To
ensure a deleted backend is released asap, hlua_check_proxy() now
returns NULL if PR_FL_DELETED is set.

This approach is directly dependent on Lua GC execution. As such, it
probably suffers from the same limitations as the ones already described
in the previous commit. With the current patch, "del backend" is not
directly impacted though. However, the final proxy deinit may happen
after a long period of time, which could increase memory pressure.

One final observation regarding deinit: it is necessary to delay a
BUG_ON() which checks that the defaults proxies list is empty. It must
now be executed after the Lua deinit (called via post_deinit_list). This
should guarantee that all proxies and their defaults refcounts are null.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
f521c2ce2d MINOR: server: take proxy refcount when deleting a server
When a server is deleted via "del server", increment refcount of its
parent backend. This is necessary as the server is not referenced
anymore in the backend, but can still access it via its own <proxy>
member. Thus, backend removal must not happen until the complete purge
of the server.

The proxy refcount is released in srv_drop() if the flag SRV_F_DELETED
is set, which indicates that "del server" was used. This operation is
performed after the complete release of the server instance to ensure no
access will be performed on the proxy via itself. The refcount must not
be decremented if a server is freed without a "del server" invocation.

Another solution could be for servers to always increment the refcount.
However, refcount usage in haproxy is currently limited, so the current
approach is preferred. It also means that if the refcount is still
incremented, it may indicate that some servers are not completely purged
yet.

Note that this patch may cause issues if "del backend" is used in
parallel with Lua scripts referencing servers. Currently, any server
referenced by Lua must be released by the garbage collector before it
can be finally freed. However, it appears that in some cases the GC does
not run for several minutes; at least this has been observed with Lua
version 5.4.8. In the end, this can result in "del backend" commands
blocking indefinitely.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
ee1f0527c6 MINOR: proxy: rename default refcount to avoid confusion
Rename the proxy conf <refcount> to <def_ref>. This field is only used
for defaults proxy instances. The objective is to avoid confusion with
the newly introduced <refcount> field used for dynamic backends.

As an optimization, it could be possible to remove <def_ref> and only
use <refcount> also for defaults proxies usage. However for now the
simplest solution is implemented.

This patch does not bring any functional change.
2026-03-02 14:07:40 +01:00
Amaury Denoyelle
f3127df74d MINOR: proxy: add refcount to proxies
Implement refcount notion into proxy structure. The objective is to be
able to increment refcount on proxy to prevent its deletion temporarily.
This is similar to the server refcount: "del backend" is not blocked
and will remove the targeted instance from the global proxies_list.
However, the final free operation is delayed until the refcount is null.

As stated above, the API is similar to servers. Proxies are initialized
with a refcount of 1. Refcount can be incremented via proxy_take(). When
no longer useful, refcount is decremented via proxy_drop() which
replaces the older free_proxy(). Deinit is only performed once refcount
is null.

This commit also defines flag PR_FL_DELETED. It is set when a proxy
instance has been removed via a "del backend" CLI command. This should
serve as indication to modules which may still have a refcount on the
target proxy so that they can release it as soon as possible.

Note that this new refcount is completely ignored for default proxy
instances. For them, proxy_take() is a pure noop, and the free is
immediately performed on the first proxy_drop() invocation.
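The take/drop pattern described above can be sketched with C11 atomics (haproxy uses its own HA_ATOMIC_* macros). The proxy_take()/proxy_drop() names come from the commit; the struct and the <freed> flag standing in for the actual free are illustrative.

```c
#include <stdatomic.h>

/* Minimal sketch of a refcounted proxy: created with refcount 1,
 * incremented by holders via proxy_take(), and actually released
 * only when the last proxy_drop() brings the count to zero.
 */
struct proxy {
    atomic_uint refcount;   /* initialized to 1 at creation */
    int freed;              /* demo flag standing in for the free */
};

static void proxy_take(struct proxy *px)
{
    atomic_fetch_add(&px->refcount, 1);
}

static void proxy_drop(struct proxy *px)
{
    /* the holder that observes the 1 -> 0 transition frees */
    if (atomic_fetch_sub(&px->refcount, 1) == 1)
        px->freed = 1;      /* real code would deinit and free */
}
```

This is what lets "del backend" unlink the proxy immediately while deferring the final free until modules such as Lua release their last reference.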
2026-03-02 10:44:59 +01:00
Amaury Denoyelle
ebbdfc5915 MINOR: lua: use watcher for proxies iterator
Ensure proxy iteration via Lua functions is safe thanks to a new
watcher member. The principle is similar to the one already used for
server iteration.
2026-03-02 10:36:21 +01:00
Amaury Denoyelle
dd1990a97a MINOR: promex: use watcher to iterate over backend instances
Ensure proxy iteration in the promex applet is safe thanks to a new
watcher member. The principle is similar to the one already used for
server iteration.

Note that ctx.p[0] is not updated anymore at the end of the function, as
this is automatically done via the watcher itself.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
20376c54e2 MINOR: stats: protect proxy iteration via watcher
Define a new <px_watch> watcher member in stats applet context. It is
used to register the applet on a proxy when iterating over the proxies
list. <obj1> is automatically updated via the watcher interaction.
Watcher is first initialized prior to stats_dump_proxies() invocation.

This guarantees that stats dump is safe even if applet yields and a
backend is removed in parallel.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
4bcfc09acf MINOR: proxy: define proxy watcher member
Define a new member watcher_list in proxy. It will be used to register
modules which iterate over the proxies list. This will ensure that the
operation is safe even if a backend is removed in parallel.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
08623228a1 MINOR: proxy: define a basic "del backend" CLI
Add "del backend" handler which is restricted to admin level. Along with
it, a new function be_check_for_deletion() is used to test if the
backend is removable.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
d166894fef MINOR: server: refactor srv_detach()
Correct documentation for srv_detach() which previously stated that this
function could be called for a server even if not stored in its proxy
list. In fact there is a BUG_ON() which detects this case.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
dd55f2246e MINOR: proxy: convert proxy flags to uint
The proxy flags member was of type char. This will soon not be
sufficient, as new flags will be defined. As such, convert the flags
member to the unsigned int type.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
78549c66c5 BUG/MINOR: proxy: add dynamic backend into ID tree
Add missing proxy_index_id() call in "add backend" handler. This step is
responsible for storing the newly created proxy instance in the
used_proxy_id global tree.

No need to backport.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
5000f0b2ef BUG/MINOR: promex: fix server iteration when last server is deleted
Server iteration via promex is now resilient to server runtime deletion
thanks to the watcher mechanism. However, the watcher was not correctly
initialized, which could cause duplicate metrics reporting.

This issue happens when the promex dump yields while manipulating the
last server of a proxy. If this server is removed in parallel, the <sv>
pointer will be set to NULL when promex resumes. Instead of switching to
another proxy, the code would reuse the same one and iterate again on
the same server list.

To fix this issue, <sv> pointer must not be reinitialized just after a
resumption point. Instead, this is now performed before
promex_dump_srv_metrics(), or just after switching to another proxy
instance. Thus, on resumption, if promex_dump_srv_metrics() is started
with <sv> as NULL, it means that the server was deleted and the end of
the current proxy list is reached, hence iteration is restarted on the
next proxy instance.

Note that ctx.p[1] does not need to be manually updated at the end of
promex_dump_srv_metrics() as srv_watch already does that.

This patch must be backported up to 3.0.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
00a106059e MINOR: promex: test applet resume in stress mode
Implement a stress mode with forced yield for the promex applet each time a
metric is displayed. This is implemented by returning 0 in
promex_dump_ts() each time the output buffer is not empty.

To test this, haproxy must be compiled with DEBUG_STRESS and the
following configuration must be used:

  global
    stress-level 1
2026-02-26 18:24:36 +01:00
Willy Tarreau
9db62d408a BUG/MINOR: call EXTRA_COUNTERS_FREE() before srv_free_params() in srv_drop()
As seen with the last changes to counters allocation, the move of the
counters storage to the thread group as operated in commit 04a9f86a85
("MEDIUM: counters: add a dedicated storage for extra_counters in various
structs") causes some random errors when using ASAN, because the extra
counters are freed in srv_drop() after calling srv_free_params(), which
is responsible for freeing the per-thread group storage.

For the proxies however it's OK because free calls are made before the
call to deinit_proxy() which frees the per_tgrp area.

No backport is needed, this is purely 3.4-dev.
2026-02-26 17:24:59 +01:00
Willy Tarreau
b604064980 MEDIUM: counters: make EXTRA_COUNTERS_GET() consider tgid
Now we store and retrieve only counters for the current tgid when more
than one is supported. This allows to significantly reduce contention
on shared stats. The haterm utility saw its performance increase from
4.9 to 5.8M req/s in H1, and 6.0 to 7.6M for H2, both with 5 groups of
16 threads, showing that we don't necessarily need insane amounts of
groups.
2026-02-26 17:03:53 +01:00
Willy Tarreau
9019a5db93 MEDIUM: counters: return aggregate extra counters in ->fill_stats()
Now thanks to new macro EXTRA_COUNTERS_AGGR() we can iterate over all
thread groups storages when returning the data for a given metric. This
remains convenient and mostly transparent. The caller continues to pass
the pointer to the metric in the first group, and offsets are calculated
for all other groups and data summed. For now all groups except the
first one contain only zeroes but reported values are nevertheless
correct.
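The layout these commits build can be pictured as below: one counters area per thread group on the writer side, and an aggregate read that sums the same field across all groups, as EXTRA_COUNTERS_AGGR() does by walking offsets from the base storage. Everything here (struct, array, function names) is an illustrative reduction, not the real macros.

```c
#include <stdint.h>

#define NB_TGROUPS 4    /* illustrative group count */

struct my_counters { uint64_t requests; };

/* one storage area per thread group, <step> bytes apart in reality */
static struct my_counters storage[NB_TGROUPS];

/* writer side: each thread group only touches its own area,
 * which removes contention on the shared counters */
static void count_request(int tgid)
{
    storage[tgid].requests++;   /* real code uses atomic ops */
}

/* reader side: aggregate one field across all group storages */
static uint64_t read_requests(void)
{
    uint64_t total = 0;
    for (int g = 0; g < NB_TGROUPS; g++)
        total += storage[g].requests;
    return total;
}
```

The reported value stays correct even while only some groups have ever written, since untouched areas contribute zeroes.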
2026-02-26 17:03:53 +01:00
Willy Tarreau
de0eddf512 MINOR: counters: add EXTRA_COUNTERS_BASE() to retrieve extra_counters base storage
The goal is to always retrieve the storage address of the first thread
group for the given module. This will be used to iterate over all thread
groups. For now it returns the same value as EXTRA_COUNTERS_GET().
2026-02-26 17:03:53 +01:00
Willy Tarreau
a60e1fcf7f MEDIUM: counters: store the number of thread groups accessing extra_counters
In order to be able to properly allocate all storage and retrieve data
from there, we'll need to know how many thread groups are supposed to
access it. Let's store the number of thread groups at init time. If the
tgrp_step is zero, there's always only one tg though.

Now EXTRA_COUNTERS_ALLOC() takes this number of thread groups in argument
and stores it in the structure. It also allocates as many areas as needed,
incrementing the datap pointer by the step for each of them.

EXTRA_COUNTERS_FREE() uses this info to free all allocated areas.

EXTRA_COUNTERS_INIT() initializes all allocated areas, this is used
elsewhere to clear/preset counters, e.g. in proxy_stats_clear_counters().
It involves a memcpy() call for each array, which is normally preset to
something empty but might also be used to preset certain non-scalar
fields such as an instance name.
2026-02-26 17:03:53 +01:00
Willy Tarreau
7ac47910a2 MINOR: counters: store a tgroup step for extra_counters to access multiple tgroups
We'll need to permit any user to update its own tgroup's extra counters
instead of the global ones. For this we now store the per-tgroup step
between two consecutive data storages, for when they're stored in a
tgroup array. When shared (e.g. resolvers or listeners), we just store
zero to indicate that it doesn't scale with tgroups. For now only the
registration was handled, it's not used yet.
2026-02-26 17:03:53 +01:00
Willy Tarreau
04a9f86a85 MEDIUM: counters: add a dedicated storage for extra_counters in various structs
Servers, proxies, listeners and resolvers all use extra_counters. We'll
need to move the storage to per-tgroup for those where it matters. Now
we're relying on an external storage, and the data member of the struct
was replaced with a pointer to that pointer to data called datap. When
the counters are registered, these datap are set to point to relevant
locations. In the case of proxies and servers, it points to the first
tgrp's storage. For listeners and resolvers, it points to a local
storage. The rationale here is that listeners are limited to a single
group anyway, and that resolvers have a low enough load so that we do
not care about contention there.

Nothing should change for the user at this point.
2026-02-26 17:03:47 +01:00
Willy Tarreau
8dd22a62a4 CLEANUP: counters: only retrieve zeroes for unallocated extra_counters
Since version 2.4 with commit 7f8f6cb926 ("BUG/MEDIUM: stats: prevent
crash if counters not alloc with dummy one") we can afford to always
update extra_counters because we know they're always either allocated
or linked to a dedicated trash. However, the ->fill_stats() callbacks
continue to access such values, making it technically possible to
retrieve random counters from this trash, which is not really clean.
Let's implement an explicit test in the ->fill_stats() functions to
only return 0 for the metric when not allocated like this. It's much
cleaner because it guarantees that we're returning an empty counter
in this case rather than random values.

The situation currently happens for dummy servers like the ones used
in Lua proxies as well as those used by rings (e.g. used for logging
or traces). Normally, none of the objects retrieved via stats or
Prometheus is concerned by this unallocated extra_counters situation,
so this is more about a cleanup than a real fix.
2026-02-26 08:24:03 +01:00
Willy Tarreau
95a9f472d2 MEDIUM: counters: change the fill_stats() API to pass the module and extra_counters
We'll soon need to iterate over thread groups in the fill_stats() functions,
so let's first pass the extra_counters and stats_module pointers to the
fill_stats functions. They now call EXTRA_COUNTERS_GET() themselves with
these elements in order to retrieve the required pointer. Nothing else
changed, and it's getting even a bit more transparent for callers.

This doesn't change anything visible however.
2026-02-26 08:24:03 +01:00
Willy Tarreau
56fc12d6fa CLEANUP: stats: drop stats.h / stats-t.h where not needed
A number of C files include stats.h or stats-t.h, many of which were
just to access the counters. Now those which really need counters rely
on counters.h or counters-t.h, which already reduces the amount of
preprocessed code to be built (~3000 lines or about 0.05%).
2026-02-26 08:24:03 +01:00
Willy Tarreau
2b463e9b1f REORG: stats/counters: move extra_counters to counters not stats
It was always difficult to find extra_counters when the rest of the
counters are now in counters-t.h. Let's move the types to counters-t.h
and the macros to counters.h. Stats include them since they're used
there. But some users could be cleaned from the stats definitions now.
2026-02-26 08:24:03 +01:00
Willy Tarreau
9910af6117 CLEANUP: quic-stats: include counters from quic_stats
There's something a bit awkward in the way stats counters are inherited
through the QUIC modules: quic_conn-t includes quic_stats-t.h, which
declares quic_stats_module as extern from a type that's not known from
this file. And anyway externs should not be exported from type definitions
since they're not part of the ABI itself.

This commit moves the declaration to quic_stats.h which now takes care
to include stats-t.h to get the definition of struct stats_module. The
few users who used to learn it through quic_conn-t.h now include it
explicitly. As a bonus this reduces the number of preprocessed lines
by 5000 (~0.1%).

By the way, it looks like struct stats_module could benefit from being
moved off stats-t.h since it's only used at places where the rest of
the stats is not needed. Maybe something to consider for a future
cleanup.
2026-02-26 08:24:03 +01:00
Willy Tarreau
fb5e280e0d CLEANUP: tree-wide: drop a few useless null-checks before free()
We only support platforms where free(NULL) is a NOP so that
null checks are useless before free(). Let's drop them to keep
the code clean. There were a few in cfgparse-global, flt_trace,
ssl_sock and stats.
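The pattern being cleaned up is the classic redundant guard; per the C standard, free(NULL) does nothing, so the check adds noise only. A minimal before/after sketch (the release() helper is invented for the example):

```c
#include <stdlib.h>

/* free(*p) is safe even when *p is NULL, so no null-check is
 * needed before it; the cleanup drops exactly such guards. */
static void release(char **p)
{
    /* before the cleanup: if (*p) free(*p); */
    free(*p);
    *p = NULL;   /* avoid dangling pointer after release */
}
```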
2026-02-26 08:24:03 +01:00
Willy Tarreau
709c3be845 BUG/MINOR: server: adjust initialization order for dynamic servers
It appears that in cli_parse_add_server(), we're calling srv_alloc_lb()
and stats_allocate_proxy_counters_internal() before srv_preinit() which
allocates the thread groups. LB algos can make use of the per_tgrp part
which is initialized by srv_preinit(). Fortunately for now no algo uses
both tgrp and ->server_init() so this explains why this remained
unnoticed to date. Also, extra counters will soon require per_tgrp to
already be initialized. So let's move these between srv_preinit() and
srv_postinit(). It's possible that other parts will have to be moved
in between.

This could be backported to recent versions for the sake of safety but
it looks like the current code cannot tell the difference.
2026-02-26 08:24:03 +01:00
Willy Tarreau
44932b6c41 BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream
Some stream parsing errors that do not affect the connection result in
the parsed block not being transferred from the rx buffer to the channel
and not being reported upstream in rcv_buf(), causing the stconn to time
out. Let's detect this condition, and propagate term flags anyway since
no more progress will be made otherwise.

This should be backported at least till 3.2, probably even 2.8.
2026-02-26 00:30:42 +01:00
Willy Tarreau
e67e36c9eb MINOR: mux-h2: add a new setting, "tune.h2.log-errors" to tweak error logging
The H2 mux currently logs whenever some decoding fails. Most of the errors
happen at the connection level, but some are even at the stream level,
meaning that multiple logs can be emitted for a given connection, which
can quickly consume resources for little value. This new setting allows
tweaking this and deciding to log only errors that affect the
connection, or even none at all.

This should be backported at least as far as 3.2.
2026-02-25 22:43:40 +01:00
Willy Tarreau
cad6e0b3da MINOR: mux-h2: also count glitches on invalid trailers
Two cases were not causing glitches to be incremented:
  - invalid trailers
  - trailers on closed streams

This patch addresses this. It could be backported, at least to 3.2.
2026-02-25 22:03:16 +01:00
Frederic Lecaille
5af42fa342 CLEANUP: ssl: remove outdated comments
ssl_sock_srv_try_reuse_sess() was modified by this commit to no longer
fail (it now returns void), but the related comments remained:

  BUG/MINOR: quic: missing app ops init during backend 0-RTT sessions

This patch cleans them up.
2026-02-25 11:25:05 +01:00
Frederic Lecaille
89c75b0777 BUG/MINOR: quic: missing app ops init during backend 0-RTT sessions
The QUIC mux requires "application operations" (app ops), which are a list
of callbacks associated with the application level (e.g., h3, h0.9) and
derived from the ALPN. For 0-RTT, when the session cache cannot be reused
before activation, the current code fails to reach the initialization of
these app ops, causing the mux to crash during its initialization.

To fix this, this patch restores the behavior of
ssl_sock_srv_try_reuse_sess(), whose purpose was to reuse sessions stored
in the session cache regardless of whether 0-RTT was enabled, prior to
this commit:

  MEDIUM: quic-be: modify ssl_sock_srv_try_reuse_sess() to reuse backend
  sessions (0-RTT)

With this patch, this function now does only one thing: attempt to reuse a
session, and that's it!

This patch allows ignoring whether a session was successfully reused from
the cache or not. This directly fixes the issue where app ops
initialization was skipped upon a session cache reuse failure. From a
functional standpoint, starting a mux without reusing the session cache
has no negative impact; the mux will start, but with no early data to
send.

Finally, there is the case where the ALPN is reset when the backend is
stopped. It is critical to continue locking read access to the ALPN to
secure shared access, which this patch does. It is indeed possible for the
server to be stopped between the call to connect_server() and
quic_reuse_srv_params(). But this cannot prevent the mux from starting
without app ops. This is why a 'TODO' section was added, as a reminder that
a race condition regarding the ALPN reset still needs to be fixed.

Must be backported to 3.3
2026-02-25 11:13:52 +01:00
Olivier Houchard
84837b6e70 BUG/MEDIUM: cpu-topo: Distribute CPUs fairly across groups
Make sure CPUs are distributed fairly across groups when the number
of groups to generate is not a divisor of the number of CPUs; otherwise
we may end up with a few groups that have no CPU bound to them.
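The idea can be sketched with plain arithmetic (an illustrative helper, not the actual cpu-topo code): the first `ncpu % ngrp` groups get one extra CPU, so group sizes never differ by more than one and no group is left empty as long as there are at least as many CPUs as groups.

```c
#include <assert.h>

/* Illustrative sketch only, not the actual cpu-topo code: give each of
 * the <ngrp> groups ncpu/ngrp CPUs, plus one extra CPU to the first
 * ncpu %% ngrp groups, so the counts never differ by more than one and
 * no group is left without a CPU (as long as ngrp <= ncpu). */
static int cpus_in_group(int ncpu, int ngrp, int grp)
{
	return ncpu / ngrp + (grp < ncpu % ngrp);
}
```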

This was introduced in 3.4-dev2 with commit 56fd0c1a5c ("MEDIUM: cpu-topo:
Add an optional directive for per-group affinity"). No backport is
needed unless this commit is backported.
2026-02-24 08:17:16 +01:00
Frederic Lecaille
ca5332a9c3 BUG/MINOR: haterm: cannot reset default "haterm" mode
When "mode haterm" was set in a "defaults" section, it could not be
overridden in subsequent sections using the "mode" keyword. This is because
the proxy stream instantiation callback was not being reset to the
default stream_new() value.

This could break the stats URI with a configuration such as:

    defaults
        mode haterm
        # ...

    frontend stats
        bind :8181
        mode http
        stats uri /

This patch ensures the ->stream_new_from_sc() proxy callback is reset
to stream_new() when the "mode" keyword is parsed for any mode other
than "haterm".

No need to backport.
2026-02-23 17:57:19 +01:00
Maxime Henrion
a9dc8e2587 MINOR: quic: add a new metric for ncbuf failures
This counts the number of times we failed to add data to the ncbuf
buffer because of the gap size limit.
2026-02-23 17:47:45 +01:00
Amaury Denoyelle
05d73aa81c MINOR: proxy: improve code when checking server name conflicts
During proxy_finalize(), a lookup is performed over the servers-by-name
tree to detect any collision. Only the first conflict for each server
instance is reported to avoid a combinatorial explosion with too many
alerts shown.

Previously, this was written using a for loop that performed no actual
iteration. Replace it with a simple if statement, which is cleaner.

This should fix github issue #3276.
2026-02-23 16:28:41 +01:00
Amaury Denoyelle
c528824094 MINOR: ncbmbuf: improve itbmap_next() code
itbmap_next() advances an iterator over a ncbmbuf buffer storage. When
reaching the end of the buffer, <b> field is set to NULL, and the caller
is expected to stop working with the iterator.

Complete this part to ensure that the itbmap type is fully initialized in
case a null iterator value is returned. This is not strictly required
given the above description, but it is better to avoid any possible
future mistake.

This should fix coverity issue from github #3273.

This could be backported up to 2.8.
2026-02-23 16:28:41 +01:00
Willy Tarreau
868dd3e88b MINOR: traces: always mark trace_source as thread-aligned
Some perf profiles occasionally show that reading the trace source's
state can take some time, which is not expected at all. It just happens
that the trace_source is not cache-aligned, so depending on linkage it
may share a cache line with a more active variable, thereby inducing a
slowdown for all threads trying to read the variable.

Let's always mark it aligned to avoid this. For now the problem was not
observed again.
2026-02-23 16:22:59 +01:00
Christopher Faulet
c2b5446292 BUG/MEDIUM: spoe: Acquire context buffer in applet before consuming a frame
Changes brought to support large buffers revealed a bug in the SPOE applet
when a frame is copied in the SPOE context buffer. A b_xfer() was performed
without allocating the SPOE context buffer. This is not expected. As stated in
the function documentation, the caller is responsible for ensuring there is
enough space in the destination buffer. So first of all, it must ensure this
buffer was allocated.

With recent changes, we are able to hit a BUG_ON() because the swap is no
longer possible if the source and destination buffer sizes are not the same.

This patch should fix the issue #3286. It could be backported as far as 3.1.
2026-02-23 15:47:25 +01:00
Willy Tarreau
bbd8492c22 CLEANUP: cfgparse: accept-invalid-http-* does not support "no"/"defaults"
Some options support neither "no" nor "defaults", and they're placed after
the check for their absence. However, "accept-invalid-http-request" and
"accept-invalid-http-response" still used to check for the flags that
come with these prefixes, and Coverity noticed this was dead code in
github issue #3272. Let's just drop the test.

No backport needed as it's just dead code.
2026-02-23 15:44:40 +01:00
Ilia Shipitsin
bf363a7135 CI: remove redundant "halog" compilation
Since 6499c0a0d5, halog is being built
in the vtest workflow, so there is no need to build it twice.
2026-02-23 11:11:26 +01:00
Ilia Shipitsin
c44d6c6c71 CI: use the latest docker for QUIC Interop
The quic-interop runner uses features available in Docker v28.1,
while the GitHub runner includes v28.0.

Let's make sure to set up the latest available version.
2026-02-23 11:11:20 +01:00
Frederic Lecaille
dfa8907a3d CLEANUP: ssl: Remove a useless variable from ssl_gen_x509()
This function was recently created by moving code from acme_gen_tmp_x509()
(in acme.c) to ssl_gencrt.c (ssl_gen_x509()). The <ctmp> variable was
initialized and then freed without ever being used. This was already the
case in the original acme_gen_tmp_x509() function.

This patch removes these useless statements.

Reported in GH #3284
2026-02-23 10:47:15 +01:00
Frederic Lecaille
bb3304c6af CLEANUP: haterm: avoid static analyzer warnings about rand() use
Avoid warnings such as this one from Coverity:

CID 1645121: (#1 of 1): Calling risky function (DC.WEAK_CRYPTO)
dont_call: random should not be used for security-related applications,
because linear congruential algorithms are too easy to break.

Reported in GH #3283 and #3285
2026-02-23 10:39:59 +01:00
Frederic Lecaille
a5a053e612 CLEANUP: haterm: remove unreachable labels in hstream_add_data()
This function does not return a status; it never fails.
Remove some useless dead code.
2026-02-23 10:13:25 +01:00
Mia Kanashi
5aa30847ae BUG/MINOR: acme: fix incorrect number of arguments allowed in config
Fix the incorrect number of arguments allowed in the config for the
"challenge" and "account-key" keywords.
2026-02-23 09:46:34 +01:00
Hyeonggeun Oh
ca5c07b677 MINOR: ssl: clarify error reporting for unsupported keywords
This patch changes the registration of the following keywords to be
unconditional:
  - ssl-dh-param-file
  - ssl-engine
  - ssl-propquery, ssl-provider, ssl-provider-path
  - ssl-default-bind-curves, ssl-default-server-curves
  - ssl-default-bind-sigalgs, ssl-default-server-sigalgs
  - ssl-default-bind-client-sigalgs, ssl-default-server-client-sigalgs

Instead of excluding them at compile time via #ifdef guards in the keyword
registration table, their parsing functions now check feature availability
at runtime and return a descriptive error when the feature is missing.

For features controlled by the SSL library (providers, curves, sigalgs,
DH), the error message includes the actual OpenSSL version string via
OpenSSL_version(OPENSSL_VERSION), so users can immediately identify which
library they are running rather than seeing cryptic internal macro names.
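The pattern can be sketched as below. This is an illustration of the approach, not the actual haproxy parser: the function name and error buffer are stand-ins, `supported` replaces the former compile-time #ifdef guard, and `lib_version` stands in for `OpenSSL_version(OPENSSL_VERSION)`.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the runtime-check pattern: the keyword is always registered,
 * and its parser reports a descriptive error, including the library
 * version string, when the underlying feature is missing. */
static int parse_ssl_kw(const char *kw, int supported,
                        const char *lib_version, char *err, size_t errlen)
{
	if (!supported) {
		snprintf(err, errlen,
		         "'%s' is not supported by your SSL library (%s).",
		         kw, lib_version);
		return -1;
	}
	return 0;
}
```

The user-facing benefit is that the error names the running library instead of an internal macro.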

For ssl-dh-param-file, the message also includes "(no DH support)" as a
hint, since OPENSSL_NO_DH can be set either by an OpenSSL build or by
HAProxy itself in certain configurations.

For ssl-engine, which depends on a HAProxy build-time flag (USE_ENGINE),
the message retains the flag name as it is more actionable for the user.

This addresses issue https://github.com/haproxy/haproxy/issues/3246.
2026-02-23 09:40:18 +01:00
William Lallemand
8d54cda0af BUG/MINOR: acme: wrong labels logic always memprintf errmsg
In acme_req_finalize(), acme_req_challenge(), acme_req_neworder(),
acme_req_account(), and acme_post_as_get(), the success path
unconditionally calls memprintf(errmsg, ...).

This may result in a leak of errmsg.

Additionally, acme_res_chkorder(), acme_res_finalize(), acme_res_auth(),
and acme_res_neworder() had unused 'out:' labels that were removed.

Must be backported as far as 3.2.
2026-02-22 22:30:38 +01:00
William Lallemand
e63722fed4 BUG/MINOR: acme: acme_ctx_destroy() leaks auth->dns
365a696 ("MINOR: acme: emit a log for DNS-01 challenge response")
introduced the auth->dns member, which is allocated with istdup(). But this
member is never freed; instead, auth->token was freed twice by mistake.

Must be backported to 3.2.
2026-02-21 16:10:29 +01:00
Amaury Denoyelle
0f95e73032 BUG/MINOR: quic/h3: display QUIC/H3 backend module on HTML stats
QUIC is now implemented on the backend side. Complete definitions for
QUIC/H3 stats module to add STATS_PX_CAP_BE capability.

This change is necessary to display QUIC/H3 counters on backend lines
for HTML stats page.

This should be backported up to 3.3.
2026-02-20 14:08:27 +01:00
Amaury Denoyelle
5f26cf162c MINOR: quic: add BUG_ON() on half_open_conn counter access from BE
half_open_conn is a proxy counter used to account for quic_conn in
half-open state: this represents a connection whose address is not yet
validated (handshake successful, or via token validation).

This counter only makes sense for the frontend side. Currently, the code is
safe as access is only performed if quic_conn is not yet flagged with
QUIC_FL_CONN_PEER_VALIDATED_ADDR, which is always set for backend
connections.

To better reflect this, add a BUG_ON() when half_open_conn is
incremented/decremented to ensure this never occurs for backend
connections.
2026-02-20 14:08:27 +01:00
Amaury Denoyelle
b8cb8e1a65 BUG/MINOR: quic: fix counters used on BE side
quic_conn is initialized with a pointer to its proxy counters. These
counters are then updated during the connection lifetime.

The counters pointer was incorrect for backend quic_conn, as it always
referenced frontend counters. For pure backends, no stats would be
updated. For listen instances, this resulted in incorrect stats
reporting.

Fix this by correctly setting the proxy counters based on the connection side.

This must be backported up to 3.3.
2026-02-20 14:08:27 +01:00
Frederic Lecaille
db360d466b BUG/MINOR: haterm: missing allocation check in copy_argv()
This is a very minor bug with a very low probability of occurring.
However, it could be flagged by a static analyzer or result in a small
contribution, which is always time-consuming for very little gain.
2026-02-20 12:12:10 +01:00
Frederic Lecaille
92581043fb MINOR: haterm: add long options for QUIC and TCP "bind" settings
Add the --quic-bind-opts and --tcp-bind-opts long options to append
settings to all QUIC and TCP bind lines. This requires modifying the argv
parser to first process these new options, ensuring they are available
during the second argv pass to be added to each relevant "bind" line.
2026-02-20 12:00:34 +01:00
Frederic Lecaille
8927426f78 MINOR: haterm: provide -b and -c options (RSA key size, ECDSA curves)
Add -b and -c options to the haterm argv parser. Use -b to specify the RSA
private key size (in bits) and -c to define the ECDSA certificate curves.
These self-signed certificates are required for haterm SSL bindings.
2026-02-20 10:43:54 +01:00
Amaury Denoyelle
f71b2f4338 BUG/MINOR: server: enable no-check-sni-auto for dynamic servers
Allow the server keyword "no-check-sni-auto" for dynamic servers. This may
be necessary for users who do not want to benefit from auto SNI for
checks.

Keyword "check-sni-auto" is still deactivated for dynamic servers, for
the same reason as "sni-auto" (cf the previous patch for a complete
explanation).

This must be backported up to 3.3.
2026-02-20 09:02:47 +01:00
Amaury Denoyelle
de5fc2f515 BUG/MINOR: server: set auto SNI for dynamic servers
Auto SNI configuration is configured during check config validity.
However, nothing was implemented for dynamic servers.

Fix this by implementing auto SNI configuration during "add server" CLI
handler. The auto SNI configuration code is moved into a dedicated function
srv_configure_auto_sni(), called for both static and dynamic servers.

Along with this, allow the keyword "no-sni-auto" on dynamic servers, so
that this process can be deactivated if desired. Note that "sni-auto"
remains unavailable as it only makes sense with default-servers, which
are never used for dynamic server creation.

This must be backported up to 3.3.
2026-02-20 09:02:47 +01:00
Amaury Denoyelle
2b0fc33114 BUG/MINOR: proxy: detect strdup error on server auto SNI
There was no check on the result of the strdup() used to set up auto SNI on
a server instance during the config validity check. In case of failure, the
error would be silently ignored, as the following server_parse_exprs()
does nothing when the <sni_expr> server field is NULL. Hence, no SNI would
be used on the server, without any error nor warning reported.

Fix this by adding a check on the strdup() return value. On error, ERR_ABORT
is reported along with an alert; parsing should be interrupted as soon
as possible.

This must be backported up to 3.3. Note that the related code in this
case is present in cfgparse.c source file.
2026-02-20 09:02:47 +01:00
William Lallemand
f5a182c7e7 CLEANUP: acme: remove duplicate includes
Remove a bunch of duplicate includes in acme.c.
2026-02-19 23:57:20 +01:00
Willy Tarreau
028940725a [RELEASE] Released version 3.4-dev5
Released version 3.4-dev5 with the following main changes :
    - DOC: internals: addd mworker V3 internals
    - BUG/MINOR: threads: Initialize maxthrpertgroup earlier.
    - BUG/MEDIUM: threads: Differ checking the max threads per group number
    - BUG/MINOR: startup: fix allocation error message of progname string
    - BUG/MINOR: startup: handle a possible strdup() failure
    - MINOR: cfgparse: validate defaults proxies separately
    - MINOR: cfgparse: move proxy post-init in a dedicated function
    - MINOR: proxy: refactor proxy inheritance of a defaults section
    - MINOR: proxy: refactor mode parsing
    - MINOR: backend: add function to check support for dynamic servers
    - MINOR: proxy: define "add backend" handler
    - MINOR: proxy: parse mode on dynamic backend creation
    - MINOR: proxy: parse guid on dynamic backend creation
    - MINOR: proxy: check default proxy compatibility on "add backend"
    - MEDIUM: proxy: implement dynamic backend creation
    - MINOR: proxy: assign dynamic proxy ID
    - REGTESTS: add dynamic backend creation test
    - BUG/MINOR: proxy: fix clang build error on "add backend" handler
    - BUG/MINOR: proxy: fix null dereference in "add backend" handler
    - MINOR: net_helper: extend the ip.fp output with an option presence mask
    - BUG/MINOR: proxy: fix default ALPN bind settings
    - CLEANUP: lb-chash: free lb_nodes from chash's deinit(), not global
    - BUG/MEDIUM: lb-chash: always properly initialize lb_nodes with dynamic servers
    - CLEANUP: haproxy: fix bad line wrapping in run_poll_loop()
    - MINOR: activity: support setting/clearing lock/memory watching for task profiling
    - MEDIUM: activity: apply and use new finegrained task profiling settings
    - MINOR: activity: allow to switch per-task lock/memory profiling at runtime
    - MINOR: startup: Add the SSL lib verify directory in haproxy -vv
    - BUG/MINOR: ssl: SSL_CERT_DIR environment variable doesn't affect haproxy
    - CLEANUP: initcall: adjust comments to INITCALL{0,1} macros
    - DOC: proxy-proto: underline the packed attribute for struct pp2_tlv_ssl
    - MINOR: queues: Check minconn first in srv_dynamic_maxconn()
    - MINOR: servers: Call process_srv_queue() without lock when possible
    - BUG/MINOR: quic: ensure handshake speed up is only run once per conn
    - BUG/MAJOR: quic: reject invalid token
    - BUG/MAJOR: quic: fix parsing frame type
    - MINOR: ssl: Missing '\n' in error message
    - MINOR: jwt: Convert an RSA JWK into an EVP_PKEY
    - MINOR: jwt: Add new jwt_decrypt_jwk converter
    - REGTESTS: jwt: Add new "jwt_decrypt_jwk" tests
    - MINOR: startup: Add HAVE_WORKING_TCP_MD5SIG in haproxy -vv
    - MINOR: startup: sort the feature list in haproxy -vv
    - MINOR: startup: show the list of detected features at runtime with haproxy -vv
    - SCRIPTS: build-vtest: allow to set a TMPDIR and a DESTDIR
    - MINOR: filters: rework RESUME_FILTER_* macros as inline functions
    - MINOR: filters: rework filter iteration for channel related callback functions
    - MEDIUM: filters: use per-channel filter list when relevant
    - DEV: gdb: add a utility to find the post-mortem address from a core
    - BUG/MINOR: deviceatlas: add missing return on error in config parsers
    - BUG/MINOR: deviceatlas: add NULL checks on strdup() results in config parsers
    - BUG/MEDIUM: deviceatlas: fix resource leaks on init error paths
    - BUG/MINOR: deviceatlas: fix off-by-one in da_haproxy_conv()
    - BUG/MINOR: deviceatlas: fix cookie vlen using wrong length after extraction
    - BUG/MINOR: deviceatlas: fix double-checked locking race in checkinst
    - BUG/MINOR: deviceatlas: fix resource leak on hot-reload compile failure
    - BUG/MINOR: deviceatlas: fix deinit to only finalize when initialized
    - BUG/MINOR: deviceatlas: set cache_size on hot-reloaded atlas instance
    - MINOR: deviceatlas: check getproptype return and remove pprop indirection
    - MINOR: deviceatlas: increase DA_MAX_HEADERS and header buffer sizes
    - MINOR: deviceatlas: define header_evidence_entry in dummy library header
    - MINOR: deviceatlas: precompute maxhdrlen to skip oversized headers early
    - CLEANUP: deviceatlas: add unlikely hints and minor code tidying
    - DEV: gdb: use unsigned longs to display pools memory usage
    - BUG/MINOR: ssl: lack crtlist_dup_ssl_conf() declaration
    - BUG/MINOR: ssl: double-free on error path w/ ssl-f-use parser
    - BUG/MINOR: ssl: fix leak in ssl-f-use parser upon error
    - BUG/MINOR: ssl: clarify ssl-f-use errors in post-section parsing
    - BUG/MINOR: ssl: error with ssl-f-use when no "crt"
    - MEDIUM: backend: make "balance random" consider tg local req rate when loads are equal
    - BUG/MAJOR: Revert "MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS"
    - BUG/MEDIUM: h3: reject frontend CONNECT as currently not implemented
    - MINOR: mux-quic: add BUG_ON_STRESS() when draining data on closed stream
    - REGTESTS: fix quoting in feature cmd which prevents test execution
    - BUG/MEDIUM: mux-h2/quic: Stop sending via fast-forward if stream is closed
    - BUG/MEDIUM: mux-h1: Stop sending vi fast-forward for unexpected states
    - BUG/MEDIUM: applet: Fix test on shut flags for legacy applets (v2)
    - DEV: term-events: Fix hanshake events decoding
    - BUG/MINOR: flt-trace: Properly compute length of the first DATA block
    - MINOR: flt-trace: Add an option to limit the amount of data forwarded
    - CLEANUP: compression: Remove unused static buffers
    - BUG/MEDIUM: shctx: Use the next block when data exactly filled a block
    - BUG/MINOR: http-ana: Stop to wait for body on client error/abort
    - MINOR: stconn: Add missing SC_FL_NO_FASTFWD flag in sc_show_flags
    - REORG: stconn: Move functions related to channel buffers to sc_strm.h
    - BUG/MEDIUM: jwe: fix timing side-channel and dead code in JWE decryption
    - MINOR: tree-wide: Use the buffer size instead of global setting when possible
    - MINOR: buffers: Swap buffers of same size only
    - BUG/MINOR: config: Check buffer pool creation for failures
    - MEDIUM: cache: Don't rely on a chunk to store messages payload
    - MEDIUM: stream: Limit number of synchronous send per stream wakeup
    - MEDIUM: compression: Be sure to never compress more than a chunk at once
    - MEDIUM: mux-h1/mux-h2/mux-fcgi/h3: Disable 0-copy for buffers of different size
    - MEDIUM: applet: Disable 0-copy for buffers of different size
    - MINOR: h1-htx: Disable 0-copy for buffers of different size
    - MEDIUM: stream: Offer buffers of default size only
    - BUG/MEDIUM: htx: Fix function used to change part of a block value when defrag
    - MEDIUM: htx: Refactor transfer of htx blocks to merge DATA blocks if possible
    - MEDIUM: htx: Refactor htx defragmentation to merge data blocks
    - MEDIUM: htx: Improve detection of fragmented/unordered HTX messages
    - MINOR: http-ana: Do a defrag on unaligned HTX message when waiting for payload
    - MINOR: http-fetch: Use pointer to HTX DATA block when retrieving HTX body
    - MEDIUM: dynbuf: Add a pool for large buffers with a configurable size
    - MEDIUM: chunk: Add support for large chunks
    - MEDIUM: stconn: Properly handle large buffers during a receive
    - MEDIUM: sample: Get chunks with a size dependent on input data when necessary
    - MEDIUM: http-fetch: Be able to use large chunks when necessary
    - MINPR: htx: Get large chunk if necessary to perform a defrag
    - MEDIUM: http-ana: Use a large buffer if necessary when waiting for body
    - MINOR: dynbuf: Add helpers to know if a buffer is a default or a large buffer
    - MINOR: config: reject configs using HTTP with large bufsize >= 256 MB
    - CI: do not use ghcr.io for Quic Interop workflows
    - BUG/MEDIUM: ssl: SSL backend sessions used after free
    - CI: vtest: move the vtest2 URL to vinyl-cache.org
    - CI: github: disable windows.yml by default on unofficials repo
    - MEDIUM: Add connect/queue/tarpit timeouts to set-timeout
    - CLEANUP: mux-h1: Remove unneeded null check
    - DOC: remove openssl no-deprecated CI image
    - BUG/MINOR: acme: fix X509_NAME leak when X509_set_issuer_name() fails
    - BUG/MINOR: backend: check delay MUX before conn_prepare()
    - OPTIM: backend: reduce contention when checking MUX init with ALPN
    - DOC: configuration: add the ACME wiki page link
    - MINOR: ssl/ckch: Move EVP_PKEY and cert code generation from acme
    - MINOR: ssl/ckch: certificates generation from "load" "crt-store" directive
    - MINOR: trace: add definitions for haterm streams
    - MINOR: init: allow a fileless init mode
    - MEDIUM: init: allow the redefinition of argv[] parsing function
    - MINOR: stconn: stream instantiation from proxy callback
    - MINOR: haterm: add haterm HTTP server
    - MINOR: haterm: new "haterm" utility
    - MINOR: haterm: increase thread-local pool size
    - BUG/MEDIUM: stats-file: fix shm-stats-file recover when all process slots are full
    - BUG/MINOR: stats-file: manipulate shm-stats-file heartbeat using unsigned int
    - BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file
    - CI: github: only enable OS X on development branches
2026-02-19 16:26:52 +01:00
William Lallemand
41a71aec3d CI: github: only enable OS X on development branches
Don't use the macOS job on maintenance branches. It's mainly used for
development and portability checks, but we don't actively support
macOS on stable branches.
2026-02-19 16:22:42 +01:00
Aurelien DARRAGON
09bf116242 BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file
When leveraging shm-stats-file, global_now_ms and global_now_ns are stored
(and thus shared) inside the shared map, so that all co-processes share
the same clock source.

Since the global_now_{ns,ms} clocks are derived from now_ns, and given
that now_ns is a monotonic clock (hence inconsistent from one host to
another or reset after reboot) special care must be taken to detect
situations where the clock stored in the shared map is inconsistent
with the one from the local process during startup, and cannot be
relied upon anymore. A common situation where the current implementation
fails is resuming from a shared file after reboot: the global_now_ns stored
in the shm-stats-file will be greater than the local now_ns after reboot,
and applying the shared offset doesn't help since it was only relevant to
processes prior to rebooting. Haproxy's clock code doesn't expect that
(once the now offset is applied) global_now_ns > now_ns, and it creates an
ambiguous situation where the clock computations (both haproxy-oriented
and shm-stats-file-oriented) are broken.

To fix the issue, when we detect that the clock stored in the shm is
off by more than SHM_STATS_FILE_HEARTBEAT_TIMEOUT (60s) from the
local now_ns, a situation that is not supposed to happen in a normal
environment on the host, we assume that the shm file was previously used
on a different system (or that the current host rebooted).

In this case, we perform a manual adjustment of the now offset so that
the monotonic clock from the current host is consistent again with the
global_now_ns stored in the file. Doing so we can ensure that clock-
dependent objects (such as freq_counters) stored within the map will keep
working as if we just (re)started where we left off when the last process
stopped updating the map.

Normally it is not expected that we update the now offset stored in the
map once the map was already created (because of concurrent accesses to
the file when multiple processes are attached to it), but in this specific
case, we know we are the first process on this host to start working
(again) on the file, thus we update the offset as if we had created the
file ourselves, while keeping the existing content.
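The detection and offset fixup can be sketched with plain arithmetic (an illustrative helper, not the actual stats-file code; names are hypothetical and the 60s constant mirrors the timeout described above):

```c
#include <assert.h>
#include <stdint.h>

#define HEARTBEAT_TIMEOUT_NS 60000000000LL /* 60s, as described above */

/* Illustrative sketch: if the clock stored in the shm is off by more
 * than the heartbeat timeout from the local clock, return 1 and compute
 * a new offset such that local_now_ns + *new_offset_ns equals the
 * shared clock, as if we had just created the file ourselves. */
static int shm_clock_needs_fixup(int64_t shared_now_ns, int64_t local_now_ns,
                                 int64_t *new_offset_ns)
{
	int64_t diff = shared_now_ns - local_now_ns;

	if (diff > HEARTBEAT_TIMEOUT_NS || diff < -HEARTBEAT_TIMEOUT_NS) {
		*new_offset_ns = diff;
		return 1;
	}
	return 0;
}
```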

It should be backported in 3.3
2026-02-19 16:14:02 +01:00
Aurelien DARRAGON
2b7849fd02 BUG/MINOR: stats-file: manipulate shm-stats-file heartbeat using unsigned int
The shm-stats-file heartbeat is derived from now_ms with an extra delay added
to it, thus it should be handled using the same type as now_ms.

Until now, we used to handle the heartbeat using a signed integer. This was
not found to cause severe harm, but it could result in improper handling,
for instance due to early wrapping because of signedness, so let's fix
that before it becomes a real issue.
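The wrap-safety argument can be illustrated with plain C arithmetic (a generic example, not the stats-file code itself): with unsigned operands, the difference `now - heartbeat` stays correct even after a now_ms-style 32-bit tick wraps past zero.

```c
#include <assert.h>

/* Generic illustration: an unsigned difference is wrap-safe, so a
 * heartbeat taken just before the 32-bit tick wrapped still compares
 * correctly against a "now" sampled just after the wrap. */
static int heartbeat_expired(unsigned int now_ms, unsigned int hb_ms,
                             unsigned int timeout_ms)
{
	return now_ms - hb_ms > timeout_ms;
}
```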

It should be backported in 3.3
2026-02-19 16:13:55 +01:00
Aurelien DARRAGON
04a4d242c9 BUG/MEDIUM: stats-file: fix shm-stats-file recover when all process slots are full
Amaury reported that when the following warning is reported by haproxy:

  [WARNING]  (296347) : config: failed to get shm stats file slot for 'haproxy.stats', all slots are occupied

haproxy would segfault right after during clock update operation.

The reason for the warning being emitted is not the object of this commit
(all shm-stats-file slots occupied by simultaneous co-processes), but since
it is intended that haproxy be able to keep working despite that
warning (ignoring the use of shm-stats-file), we should fix the crash.

The crash is caused by the fact that we detach from the shared memory while
the global_now_ns and global_now_ms clock pointers still point to the shared
memory. Instead we should revert to using our local clock instead before
detaching from the map.

It should be backported in 3.3
2026-02-19 16:13:48 +01:00
Willy Tarreau
0bb686a72d MINOR: haterm: increase thread-local pool size
QUIC uses many objects and the default pool size causes a lot of
thrashing at the current request rate, taking ~12% CPU in pools.
Let's increase it to 3MB, which allows us to reach around 11M
req/s on a 80-core machine.
2026-02-19 15:45:01 +01:00
Frederic Lecaille
b007b7aa04 MINOR: haterm: new "haterm" utility
haterm_init.c is added to implement haproxy_init_args(), which overrides
the one defined by haproxy.c. This way, the haterm program uses its own argv[]
parsing function. It generates its own configuration in memory that is
parsed during boot and executed by the common code.
2026-02-19 15:45:01 +01:00
Frederic Lecaille
c9d47804d1 MINOR: haterm: add haterm HTTP server
Contrary to haproxy, httpterm does not support all the HTTP protocols.
Furthermore, it has become easier to handle inbound/outbound
connections/streams since the rework done at the conn_stream level.

This patch implements the httpterm HTTP server services inside haproxy. To
do so, it proceeds the same way as the TCP checks, which use only one
stream connector, but on the frontend side.

The makefile is modified to handle haterm.c in addition to all the C
files for haproxy, in order to build the new haterm program into haproxy.
The haterm server also instantiates a haterm stream (hstream struct)
attached to a stream connector for each incoming connection without a
backend stream connector. This is the role of sc_new_from_endp(), called
by the muxes to instantiate streams/hstreams.

As for stream_new(), hstream_new() instantiates a task named
process_hstream() (see haterm.c) which has the same role as
process_stream() but for haterm streams.

haterm into haproxy takes advantage of the HTTP muxes and HTX API to
support all the HTTP protocols supported by haproxy.
2026-02-19 15:10:37 +01:00
Frederic Lecaille
2bf091e9da MINOR: stconn: stream instantiation from proxy callback
Add a function pointer to proxies, as the ->stream_new_from_sc proxy
struct member, to instantiate a stream from a connection, as is done by
all the muxes when they call sc_new_from_endp(). The default value for
this pointer is obviously stream_new(), which is exported by this patch.
2026-02-19 14:46:49 +01:00
Frederic Lecaille
1dc20a630a MEDIUM: init: allow the redefinition of argv[] parsing function
This patch allows the argv[] parsing function to be redefined from
other C modules. This is done by extracting the function which really
parses the argv[] array to implement haproxy_init_args(). This function
is declared as a weak symbol which may be overloaded by other C modules.

The same goes for copy_argv(), which checks/cleans up/modifies the argv
array. One may want this function to be redefined. This is the case when
other C modules do not handle the same command line options. Copying
such an argv[] would lead to conflicts with the original haproxy argv[]
during the copy.
2026-02-19 14:46:49 +01:00
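The weak-symbol mechanism described above can be sketched in a few lines of self-contained C. The name init_args() is an illustrative stand-in, not haproxy's actual haproxy_init_args():

```c
#include <assert.h>
#include <stdio.h>

/* Default argv[] parser, declared as a weak symbol: if another object
 * file linked into the program provides a strong (non-weak) definition
 * of init_args(), the linker silently picks that one instead. This is
 * how a haterm-like module can supply its own parsing without touching
 * the common code. */
__attribute__((weak)) int init_args(int argc, char **argv)
{
    (void)argv;
    printf("default parser, %d arg(s)\n", argc - 1);
    return 0;
}
```

Linking another object that defines a non-weak init_args() makes the linker select the strong definition, with no change to the caller.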
Frederic Lecaille
6013f4baeb MINOR: init: allow a fileless init mode
This patch provides the possibility to initialize haproxy without a
configuration file. This may be identified by the new global and
exported <fileless_mode> and <fileless_cfg> variables, which may be used
to provide a struct cfgfile to haproxy by other means than a physical
file (built in memory).
When enabled, this fileless mode skips all configuration file parsing.
2026-02-19 14:46:49 +01:00
Frederic Lecaille
234ce775c3 MINOR: trace: add definitions for haterm streams
Add definitions for haterm streams as arguments to be used by the TRACE
API. This will be used by the upcoming haterm module, which will have to
handle hstream struct objects (in place of stream struct objects).
2026-02-19 14:46:49 +01:00
Frederic Lecaille
5d3bca4b17 MINOR: ssl/ckch: certificates generation from "load" "crt-store" directive
Add a "generate-dummy" on/off type keyword to the "load" directive to
automatically generate dummy certificates, as is done for ACME by the
ckch_conf_load_pem_or_generate() function, which is called if a "crt"
keyword is also provided for this directive.

Also implement "keytype" to specify the key type used for these
certificates. Only "RSA" or "ECDSA" is accepted. This patch also
implements a "bits" keyword for the "load" directive to specify the
private key size used for RSA. For ECDSA, a new "curves" keyword is also
provided by this patch to specify the curves to be used for ECDSA
private key generation.

ckch_conf_load_pem_or_generate() is modified to use these parameters
provided by "keytype", "bits" and "curves" to generate the private key
with ssl_gen_EVP_PKEY() before generating the X509 certificate calling
ssl_gen_x509().
2026-02-19 14:46:49 +01:00
Frederic Lecaille
36b1fba871 MINOR: ssl/ckch: Move EVP_PKEY and cert code generation from acme
Move acme_EVP_PKEY_gen() implementation to ssl_gencrt.c and rename it to
ssl_EVP_PKEY_gen().  Also extract from acme_gen_tmp_x509() the generic
part to implement ssl_gen_x509() into ssl_gencrt.c.

To generate a self-signed expired certificate, ssl_EVP_PKEY_gen() must
be used to generate the private key. Then, ssl_gen_x509() must be called
with the private key as argument. acme_gen_tmp_x509() is also modified
to call these two functions to generate a temporary certificate, as was
done before modifying this part.

Such an expired self-signed certificate should not be used in the field
but only during testing and development steps.
2026-02-19 14:46:47 +01:00
William Lallemand
b0240bcfaf DOC: configuration: add the ACME wiki page link
Add the ACME page link to the HAProxy github wiki.
2026-02-19 13:53:50 +01:00
Amaury Denoyelle
c71ef2969b OPTIM: backend: reduce contention when checking MUX init with ALPN
In connect_server(), MUX initialization must be delayed if ALPN
negotiation is configured, unless ALPN can already be retrieved via the
server cache.

A readlock is used to consult the server cache. Prior to this patch, it
was always taken even if no ALPN is configured. The lock was thus used
for every new backend connection instantiation.

Rewrite the check so that now the lock is only used if ALPN is
configured. Thus, no lock access is done if SSL is not used or if ALPN
is not defined.

In practice, there will be no performance gain, as the read lock should
never block if ALPN is not configured. However, the code is cleaner as
it better reflects that only access to the server nego_alpn requires the
path_params lock protection.
2026-02-19 11:27:49 +01:00
Amaury Denoyelle
55e9c67381 BUG/MINOR: backend: check delay MUX before conn_prepare()
In connect_server(), when a new connection must be instantiated, MUX
initialization is delayed if an ALPN setting is present on the server
line configuration, as negotiation must be performed to select the
correct MUX. However, this is not the case if the ALPN can already be
retrieved on the server cache.

This check is performed too late, however, and may cause issues with the
QUIC stack. The problem can happen when the server ALPN is not yet set.
In the normal case, quic_conn layer is instantiated and MUX init is
delayed until the handshake completion. When the MUX is finally
instantiated, it reuses without any issue the app_ops from its quic_conn,
which is derived from the negotiated ALPN.

However, there is a race condition if another QUIC connection populates
the server ALPN cache. If this happens after the first quic_conn init
but prior to the MUX delay check, the MUX will thus immediately start in
connect_server(). When app_ops is retrieved from its quic_conn, a crash
occurs in qcc_install_app_ops() as the QUIC handshake is not yet
finalized:

  #0  0x000055e242a66df4 in qcc_install_app_ops (qcc=0x7f127c39da90, app_ops=0x0) at src/mux_quic.c:1697
  1697		if (app_ops->init && !app_ops->init(qcc)) {
  [Current thread is 1 (Thread 0x7f12810f06c0 (LWP 25758))]

To fix this, the MUX delay check is moved up in connect_server(). It is
now performed prior to conn_prepare(), which is responsible for the
quic_conn layer instantiation. Thus, it ensures consistency for the QUIC
stack: MUX init is always delayed if the quic_conn does not itself reuse
the SSL session and ALPN server cache (no quic_reuse_srv_params()).

This must be backported up to 3.3.
2026-02-19 11:22:55 +01:00
David Carlier
194a67600e BUG/MINOR: acme: fix X509_NAME leak when X509_set_issuer_name() fails
In acme_gen_tmp_x509(), if X509_set_issuer_name() fails, the code
jumped to the mkcert_error label without freeing the previously
allocated X509_NAME object. The other error paths after X509_NAME_new()
(X509_NAME_add_entry_by_txt and X509_set_subject_name) already properly
freed the name before jumping to mkcert_error, but this one was missed.

Fix this by freeing name before the goto, consistent with the other
error paths in the same function.

Must be backported as far as 3.3.
2026-02-19 10:40:26 +01:00
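The shape of the fix can be sketched with a self-contained stand-in. The names below are hypothetical substitutes for the X509_NAME calls in acme_gen_tmp_x509(), shown only to illustrate the goto-cleanup pattern the patch restores:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the fixed error path: every failure after
 * the allocation frees "name" before jumping to the shared error
 * label, so no path leaks it. set_issuer() simulates
 * X509_set_issuer_name() failing. */
struct name { int dummy; };

static struct name *name_new(void)            { return calloc(1, sizeof(struct name)); }
static void         name_free(struct name *n) { free(n); }
static int          set_issuer(struct name *n){ (void)n; return 0; /* always fails */ }

static int make_cert(void)
{
    struct name *name = name_new();

    if (!name)
        goto mkcert_error;
    if (!set_issuer(name)) {
        name_free(name);    /* the free the patch adds on this path */
        goto mkcert_error;
    }
    name_free(name);
    return 1;
mkcert_error:
    return 0;
}
```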
William Lallemand
92e3635679 DOC: remove openssl no-deprecated CI image
Remove the "openssl no-deprecated" CI image status, which was removed a
while ago.
2026-02-19 10:39:23 +01:00
Egor Shestakov
6ab86ca14c CLEANUP: mux-h1: Remove unneeded null check
Since 3.1 a task is always created when H1 connections initialize, so
the later null check before task_queue() became unneeded.

Could be backported with 3c09b3432 (BUG/MEDIUM: mux-h1: Fix how timeouts
are applied on H1 connections).
2026-02-19 08:20:37 +01:00
Nenad Merdanovic
5a079d1811 MEDIUM: Add connect/queue/tarpit timeouts to set-timeout
Add the ability to set connect, queue and tarpit timeouts from the
set-timeout action. This is especially useful when using set-dst to
dynamically connect to servers.

This patch also adds the relevant fe_/be_/cur_ sample fetches for these
timeouts.
2026-02-19 08:20:37 +01:00
William Lallemand
c26c721312 CI: github: disable windows.yml by default on unofficials repo
Disable the windows job in repositories that are not in the "haproxy"
organization. It is mostly useful for portability checks during
development and only makes noise during the maintenance cycle.

Must be backported to every branch.
2026-02-18 18:16:21 +01:00
William Lallemand
27e1ec8ca9 CI: vtest: move the vtest2 URL to vinyl-cache.org
The VTest git repository was moved to vinyl-cache.org.

Must be backported to every branch.
2026-02-18 16:29:46 +01:00
Frederic Lecaille
3e6d030ce2 BUG/MEDIUM: ssl: SSL backend sessions used after free
This bug impacts only the backends. The cached sessions could be used
after being freed because of a missing write lock in
ssl_sock_handle_hs_error() when freeing such objects. This issue could
rarely be reproduced, and only with QUIC, with difficulty (random CRYPTO
data corruptions and instrumented code).

Must be backported as far as 2.6.
2026-02-18 15:37:13 +01:00
Ilia Shipitsin
dfe1de4335 CI: do not use ghcr.io for Quic Interop workflows
Due to some (yet unknown) changes in ghcr.io, we are not able to pull
images from it anymore. Let's temporarily switch to "local only" image
storage.

no functional change
2026-02-18 15:35:18 +01:00
Christopher Faulet
cfa30dea4e MINOR: config: reject configs using HTTP with large bufsize >= 256 MB
The bufsize was already limited to 256 MB because of Lua and HTX
limitations. So the same limit is set on large buffers.
2026-02-18 13:26:21 +01:00
Christopher Faulet
a324616cdb MINOR: dynbuf: Add helpers to know if a buffer is a default or a large buffer
b_is_default() and b_is_large() can now be used to know if a buffer is a
default buffer or a large one. _b_free() now relies on it.

These functions are also used when possible (stream_free(),
stream_release_buffers() and http_wait_for_msg_body()).
2026-02-18 13:26:21 +01:00
Christopher Faulet
5737fc9518 MEDIUM: http-ana: Use a large buffer if necessary when waiting for body
Thanks to previous patches, it is now possible to allocate a large buffer to
store the message payload in the context of the "wait-for-body" action. To
do so, the "use-large-buffer" option must be set.

It means it is now no longer necessary to increase the regular buffer size
to be able to get message payloads of some requests or responses.
2026-02-18 13:26:21 +01:00
Christopher Faulet
9ecd0011c1 MINOR: htx: Get large chunk if necessary to perform a defrag
The function performing defragmentation of an HTX message was updated to
get a chunk according to the HTX message size.
2026-02-18 13:26:21 +01:00
Christopher Faulet
4c6ca0b471 MEDIUM: http-fetch: Be able to use large chunks when necessary
The function used to fetch params was updated to get a chunk according
to the parameter size. The same was also done when the message body is
retrieved.
2026-02-18 13:26:21 +01:00
Christopher Faulet
dfc4085413 MEDIUM: sample: Get chunks with a size dependent on input data when necessary
The function used to duplicate a sample was updated to support large
buffers. In addition, several converters were also reviewed to deal with
large buffers. For instance, base64 encoding and decoding must use
chunks of the same size as the sample. For some of them, a retry is
performed to enlarge the chunk if possible.

TODO: Review reg_sub, concat and add_item to get larger chunk if necessary
2026-02-18 13:26:21 +01:00
Christopher Faulet
bd25c63067 MEDIUM: stconn: Properly handle large buffers during a receive
While large buffers are still unused internally, functions receiving
data from endpoints (connections or applets) were updated to block the
receives when channels are using a large buffer and the data forwarding
was started.

The goal of this patch is to be able to flush large buffers at the end
of the analysis stage, to return to regular buffers as soon as possible.
2026-02-18 13:26:21 +01:00
Christopher Faulet
ce912271db MEDIUM: chunk: Add support for large chunks
Because there is now a memory pool for large buffers, we must also add the
support for large chunks. So, if large buffers are configured, a dedicated
memory pool is created to allocate large chunks. alloc_large_trash_chunk()
must be used to allocate a large chunk. alloc_trash_chunk_sz() can be used to
allocate a chunk with the best size. However free_trash_chunk() remains the
only way to release a chunk, regular or large.

In addition, large trash buffers are also created, using the same mechanism
as for regular trash buffers. So three thread-local trash buffers are
created. get_large_trash_chunk() must be used to get a large trash buffer.
And get_trash_chunk_sz() may be used to get a trash buffer with the best
size.
2026-02-18 13:26:21 +01:00
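The "best size" selection that alloc_trash_chunk_sz() provides can be sketched with illustrative constants. The helper name and both sizes below are hypothetical, not haproxy's actual tuning:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative defaults: a regular 16 kB chunk and a 1 MB large chunk.
 * Real values come from tune.bufsize and tune.bufsize.large. */
#define BUFSIZE       16384
#define BUFSIZE_LARGE (1024 * 1024)

/* Pick the smallest pool that satisfies the request: the regular pool
 * when the needed size fits, the large pool otherwise, and 0 when the
 * request cannot be satisfied at all. */
static size_t best_chunk_size(size_t needed)
{
    if (needed <= BUFSIZE)
        return BUFSIZE;          /* regular trash chunk */
    if (needed <= BUFSIZE_LARGE)
        return BUFSIZE_LARGE;    /* large trash chunk   */
    return 0;
}
```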
Christopher Faulet
d89ec33a34 MEDIUM: dynbuf: Add a pool for large buffers with a configurable size
Add the support for large buffers. A dedicated memory pool is added.
The size of these buffers must be explicitly configured by setting the
"tune.bufsize.large" directive. If it is not set, the pool is not
created. In addition, if the size for large buffers is the same as for
regular buffers, the feature is automatically disabled.

For now, large buffers remain unused.
2026-02-18 13:26:21 +01:00
Christopher Faulet
ee309bafcf MINOR: http-fetch: Use pointer to HTX DATA block when retrieving HTX body
In sample fetch functions retrieving the message payload (req.body,
res.body...), instead of copying the payload into a trash buffer, we now
directly return a pointer to the payload in the HTX message. To do so,
we must be sure there is only one HTX DATA block. Thanks to previous
patches, it is possible. However, we must take care to perform a
defragmentation if necessary.
2026-02-18 13:26:21 +01:00
Christopher Faulet
f559c202fb MINOR: http-ana: Do a defrag on unaligned HTX message when waiting for payload
When we are waiting for the request or response payload, it is usually
because the payload will be analyzed in a way or another. So, perform a
defrag if necessary. This should ease payload analysis.
2026-02-18 13:26:21 +01:00
Christopher Faulet
4f27a72d19 MEDIUM: htx: Improve detection of fragmented/unordered HTX messages
First, an HTX flag was added to know when blocks are unordered. It may
happen when a header is added while part of the payload was already
received, or when the start-line is replaced by a new one. In these
cases, the block indexes are in the right order but not the block
payloads. Knowing a message is unordered can be useful to trigger a
defragmentation, mainly to be able to append data properly for instance.

Then, detection of fragmented messages was improved, especially when a
header or a start-line is replaced by a new one.

Finally, when data are added in a message and cannot be appended into the
previous DATA block because the message is not aligned, a defragmentation is
performed to realign the message and append data.
2026-02-18 13:26:21 +01:00
Christopher Faulet
0c6f2207fc MEDIUM: htx: Refactor htx defragmentation to merge data blocks
When an HTX message is defragmented, the HTX DATA blocks are now merged into
one block. Just like the previous commit, this will help all payload
analysis, if any. However, there is an exception when the reference on a
DATA block must be preserved, via the <blk> parameter. In that case, this
DATA block is not merged with the previous block.
2026-02-18 13:26:21 +01:00
Christopher Faulet
783db96ccb MEDIUM: htx: Refactor transfer of htx blocks to merge DATA blocks if possible
When htx_xfer_blks() is called and when HTX DATA blocks are copied, we now
merge them in one block. This will help all payload analysis, if any.
2026-02-18 13:26:21 +01:00
Christopher Faulet
a8887e55a0 BUG/MEDIUM: htx: Fix function used to change part of a block value when defrag
htx_replace_blk_value() is buggy when a defrag is performed. It only
happens on data expansion. But in that case, because a defragmentation
is performed, the blocks' data are moved and the old data of the updated
block are no longer accessible.

To fix the bug, we now use a chunk to temporarily copy the new data of the
block. This way we can safely perform the HTX defragmentation and then
recopy the data from the chunk to the HTX message.

It is theoretically possible to hit this bug but concretely it is pretty hard.

This patch should be backported to all stable versions.
2026-02-18 13:26:21 +01:00
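The hazard and its fix can be illustrated with plain buffers. This is a minimal sketch under simplifying assumptions: the "defrag" is modeled as a memmove and the trash chunk as a malloc'd scratch buffer; the real code operates on HTX blocks:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* When the replacement value may alias storage that the defrag is
 * about to shift, it must be saved to a temporary chunk first, then
 * recopied afterwards; writing through the buffer before the move
 * would read already-moved (lost) data. */
static void grow_with_defrag(char *buf, size_t used,
                             const char *val, size_t vlen)
{
    char *tmp = malloc(vlen);

    assert(tmp);
    memcpy(tmp, val, vlen);           /* 1. copy value to a chunk   */
    memmove(buf + vlen, buf, used);   /* 2. "defrag": shift old data */
    memcpy(buf, tmp, vlen);           /* 3. recopy from the chunk   */
    free(tmp);
}
```

Calling it with val pointing inside buf (the aliasing case) still yields the expected result.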
Christopher Faulet
e62e8de5a7 MEDIUM: stream: Offer buffers of default size only
Check the channel buffers' size on release before trying to offer them
to waiting entities. Only normal buffers must be considered. This will
be mandatory when large buffer support on channels is added.
2026-02-18 13:26:21 +01:00
Christopher Faulet
bc586b4138 MINOR: h1-htx: Disable 0-copy for buffers of different size
When a message payload is parsed, it is possible to swap buffers. We
must only take care that both buffers have the same size. This will be
mandatory when large buffer support on channels is added.
2026-02-18 13:26:21 +01:00
Christopher Faulet
8b27dfdfb0 MEDIUM: applet: Disable 0-copy for buffers of different size
Just like the previous commit, we must take care to never swap buffers
of different sizes when data are exchanged between an applet and an SC.
It will be mandatory when large buffer support on channels is added.
2026-02-18 13:26:21 +01:00
Christopher Faulet
36282ae348 MEDIUM: mux-h1/mux-h2/mux-fcgi/h3: Disable 0-copy for buffers of different size
Today, it is useless to check the buffer sizes before performing a
0-copy in muxes when data are sent, but it will be mandatory when large
buffer support on channels is added. Indeed, muxes will still rely on
normal buffers, so we must take care to never swap buffers of different
sizes.
2026-02-18 13:26:21 +01:00
Christopher Faulet
f82ace414b MEDIUM: compression: Be sure to never compress more than a chunk at once
When the compression is performed, a trash chunk is used. So be sure to
never compress more data than the trash size. Otherwise the compression
could fail. Today, this cannot happen. But with the large buffers
support on channels, it could be an issue.

Note that this part should be reviewed to evaluate if we should use a larger
chunk too to perform the compression, maybe via an option.
2026-02-18 13:26:21 +01:00
Christopher Faulet
53b7150357 MEDIUM: stream: Limit number of synchronous send per stream wakeup
It is not a bug fix, because there is no way to hit the issue for now.
But there is nothing preventing a loop of synchronous sends in
process_stream(). Indeed, when a synchronous send is successfully
performed, we restart the SCs evaluation and at the end another
synchronous send is attempted. So with an endpoint consuming data bit by
bit or with a filter forwarding a few bytes at each call, it is possible
to loop for a while in process_stream().

Because it is not expected, we now limit the number of synchronous sends
per wakeup to two calls. In a nominal case, it should never be more.
This commit is mandatory to be able to handle large buffers on channels.

There is no reason to backport this commit except if the large buffers
support on channels is backported.
2026-02-18 13:26:21 +01:00
Christopher Faulet
5965a6e1d2 MEDIUM: cache: Don't rely on a chunk to store messages payload
When the response payload is stored in the cache, we can avoid using a
trash chunk as a temporary storage area before copying everything into
the cache in one call. Instead we can directly write each HTX block in
the cache, one by one. It should not be an issue because, most of the
time, there is only one DATA block.

This commit depends on "BUG/MEDIUM: shctx: Use the next block when data
exactly filled a block".
2026-02-18 13:26:20 +01:00
Christopher Faulet
9ad9def126 BUG/MINOR: config: Check buffer pool creation for failures
The call to init_buffer() during the worker startup may fail. In that case,
an error message is displayed but the error was not properly handled. So
let's add the proper check and exit on error.
2026-02-18 13:26:20 +01:00
Christopher Faulet
fae478dae5 MINOR: buffers: Swap buffers of same size only
In b_xfer(), we now take care to swap buffers of the same size only. For
now, it is always the case. But that will change.
2026-02-18 13:26:20 +01:00
Christopher Faulet
6bf450b7fe MINOR: tree-wide: Use the buffer size instead of global setting when possible
At many places, we rely on the global.tune.bufsize value instead of
using the buffer size. For now, it is not a problem. But if we want to
be able to deal with buffers of different sizes, it is good to reduce
dependencies on the global value as far as possible. Most of the time,
we can use the b_size() or c_size() functions. The main change is
performed on the error snapshot, where the buffer size was added into
the error_snapshot structure.
2026-02-18 13:26:20 +01:00
David Carlier
fc89ff76c7 BUG/MEDIUM: jwe: fix timing side-channel and dead code in JWE decryption
Fix two issues in JWE token processing:

- Replace memcmp() with CRYPTO_memcmp() for authentication tag
  verification in build_and_check_tag() to prevent timing
  side-channel attacks. Also add a tag length validation check
  before the comparison to avoid potential buffer over-read when
  the decoded tag length doesn't match the expected HMAC half.

- Remove unreachable break statement after JWE_ALG_A256GCMKW case
  in decrypt_cek_aesgcmkw().
2026-02-18 10:46:32 +01:00
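The principle of the tag check can be sketched as follows. This is a stand-in written only to illustrate why the comparison must not return early; the real fix uses OpenSSL's CRYPTO_memcmp(), and tag_matches() is a hypothetical name:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Constant-time tag verification sketch: the length is validated
 * first (the buffer over-read guard from the patch), then every byte
 * is always inspected, so the run time does not leak the position of
 * the first mismatch, unlike memcmp() which stops early. */
static int tag_matches(const uint8_t *tag, size_t taglen,
                       const uint8_t *exp, size_t explen)
{
    uint8_t diff = 0;
    size_t i;

    if (taglen != explen)
        return 0;               /* wrong length: reject, no compare */
    for (i = 0; i < taglen; i++)
        diff |= tag[i] ^ exp[i];
    return diff == 0;           /* 1 only if all bytes match */
}
```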
Christopher Faulet
806c8c830d REORG: stconn: Move functions related to channel buffers to sc_strm.h
sc_have_buff(), sc_need_buff(), sc_have_room() and sc_need_room() are
related to the channel's buffer. So we can move them to the sc_strm.h
header file. In addition, this will be mandatory for the next commit.
2026-02-18 09:44:16 +01:00
Christopher Faulet
1b1a0b3bae MINOR: stconn: Add missing SC_FL_NO_FASTFWD flag in sc_show_flags
The SC_FL_NO_FASTFWD flag was not listed in the sc_show_flags()
function. Let's add it.

This patch could be backported as far as 3.0.
2026-02-18 09:44:16 +01:00
Christopher Faulet
0aca25f725 BUG/MINOR: http-ana: Stop to wait for body on client error/abort
During the message analysis, we must take care to stop waiting for the
message body if an error was reported on the client side or an abort was
detected with abort-on-close configured (which is the default now).

The bug was introduced when the "wait-for-body" action was added. Only the
producer state was tested. So, when we were waiting for the request payload,
there was no issue. But when we were waiting for the response payload,
an error or abort on the client side was not considered.

This patch should be backported to all stable versions.
2026-02-18 09:44:16 +01:00
Christopher Faulet
9e17087aeb BUG/MEDIUM: shctx: Use the next block when data exactly filled a block
When the hot list was removed in 3.0, a regression was introduced.
Theoretically, it is possible to overwrite data in a block when new data are
appended. It happens when data are copied. If the data size is a multiple of
the block size, all data are copied and the last used block is full. But
instead of saving a reference on the next block as the restart point for the
next copies, we keep a reference on the last full one. On the next read, we
reuse this block and old data are crushed. To hit the bug, no new blocks
should be reserved between the two data copy attempts.

Concretely, for now, it seems not possible to hit the bug. But with a
block size set to 1024, if more than 1024 bytes are reserved, with a
first copy of 1024 bytes and a second one with the remaining data, data
in the first block will be crushed.

So to fix the bug, the reference of the last block used to write data (which
is in fact the next one to use to perform the next copy) is only updated
when a block is full. In that case the next block is used.

This patch should be backported as far as 3.0 after a period of observation.
2026-02-18 09:44:15 +01:00
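The boundary condition can be reduced to a one-liner of illustrative arithmetic, using the 1024-byte block size from the commit message. This is not the actual shctx code, just the invariant the fix restores:

```c
#include <assert.h>

#define BLK_SIZE 1024

/* Index of the block the next copy must start in, after "written"
 * bytes. With written == 1024 the correct answer is block 1; the bug
 * amounted to keeping block 0 (the last, exactly full, block) as the
 * restart point, so the next copy crushed its data. */
static unsigned int next_write_blk(unsigned int written)
{
    return written / BLK_SIZE;  /* exactly full block => advance */
}
```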
Christopher Faulet
b248b1c021 CLEANUP: compression: Remove unused static buffers
Since the legacy HTTP code was removed, the global and thread-local
buffers, tmpbuf and zbuf, are no longer used. So let's remove them.

This could be backported, theoretically to all supported versions. But
at least it could be good to do so as far as 3.2, as it saves 2 buffers
per thread.
2026-02-18 09:44:15 +01:00
Christopher Faulet
829002d459 MINOR: flt-trace: Add an option to limit the amount of data forwarded
"max-fwd <max>" option can now be used to limit the maximum amount of data
forwarded at a time by the filter. It can be handy for tests.
2026-02-18 09:44:15 +01:00
Christopher Faulet
92307b5fec BUG/MINOR: flt-trace: Properly compute length of the first DATA block
This bug is quite old. When the length of the first DATA block is computed,
the offset is used instead of the block length minus the offset. It is only
used with random forwarding and there is a test just after to prevent any
issue, so there is no effect.

It could be backported to all stable versions.
2026-02-18 09:44:15 +01:00
Christopher Faulet
ccb075fa1b DEV: term-events: Fix handshake events decoding
Handshake events were not properly decoded. Only send errors were
decoded as expected; other events were reported with a '-'. It is now
fixed.

This patch could be backported as far as 3.2.
2026-02-18 09:44:15 +01:00
Christopher Faulet
1b7843f1c1 BUG/MEDIUM: applet: Fix test on shut flags for legacy applets (v2)
The previous fix was wrong. When shut flags are tested for legacy
applets, to know if the I/O handler can be called or not, we must be
sure that the shuts for reads and for writes are both set before
skipping the applet I/O handler.

This bug introduced regressions, at least for the peer applet and for
the DNS applet.

This patch must be backported with abc1947e1 ("BUG/MEDIUM: applet: Fix test
on shut flags for legacy applets"), so as far as 3.0.
2026-02-18 09:44:15 +01:00
Christopher Faulet
8e0c2599b6 BUG/MEDIUM: mux-h1: Stop sending via fast-forward for unexpected states
If a producer tries to send data via the fast-forward mechanism while the
message is in an unexpected state from the consumer point of view, the
fast-forward is now disabled. Concretely, we now take care that the message
is in its data/tunnel stage to proceed in h1_nego_ff().

By disabling fast-forward in that case, we will automatically fall back on
the regular sending path and be able to handle the error in h1_snd_buf().

This patch should be backported as far as 3.0
2026-02-18 09:44:15 +01:00
Christopher Faulet
cda056b9f4 BUG/MEDIUM: mux-h2/quic: Stop sending via fast-forward if stream is closed
It is illegal to send data if the stream is already closed. The case is
properly handled when data are sent via snd_buf(), by draining the data.
But it was still possible to process these data via nego_ff().

So, in this patch, both for the H2 and QUIC multiplexers, the fast-forward
is disabled if the stream is closed and nothing is performed. Doing so, we
will automatically fall back on the regular sending path and be able to
drain data in snd_buf().

Thanks to Mike Walker for his investigation on the subject.

This patch should be backported as far as 3.0.
2026-02-18 09:44:09 +01:00
Amaury Denoyelle
2f94f61c31 REGTESTS: fix quoting in feature cmd which prevents test execution
Remove extra quote in feature cmd used to test SSL compatibility with
set_ssl_cafile QUIC regtest. Due to this syntax error, the test was
never executed.

No need to backport.
2026-02-17 18:31:29 +01:00
Amaury Denoyelle
18a78956cb MINOR: mux-quic: add BUG_ON_STRESS() when draining data on closed stream
Add a BUG_ON_STRESS() to be able to detect if data draining is performed
due to early stream closure.
2026-02-17 18:18:44 +01:00
Amaury Denoyelle
4c275c7d17 BUG/MEDIUM: h3: reject frontend CONNECT as currently not implemented
HTTP/3 CONNECT transcoding is not properly implemented on the frontend
side. Neither the application tunnel mode nor extended CONNECT is
currently functional.

Clarify this situation by rejecting any CONNECT attempts on the frontend
side. The stream is thus now closed via a RESET_STREAM with error code
REQUEST_REJECTED.

This should be backported to all stable versions.
2026-02-17 18:18:44 +01:00
Amaury Denoyelle
f3003d1508 BUG/MAJOR: Revert "MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS"
This reverts commit 235e8f1afd.

Prior to the above commit, snd_buf callback for QUIC MUX was able to
deal with data even after stream closure. The excess was simply
discarded, as no STREAM frame can be emitted after FIN/RESET_STREAM.
This code was later removed and replaced by a BUG_ON() to ensure snd_buf
is never called after stream closure.

However, this approach is too strict. Indeed, there is nothing in the
haproxy stream architecture which forbids this scheduling, in part
because QUIC MUX is the sole responsible of the stream closure. As such,
it is preferable to revert to the old code to prevent any triggering of
a BUG_ON() failure.

Note that nego_ff does not implement data draining if called after
stream closure. This will be done in a future patch.

Thanks to Mike Walker for his investigation on the subject.

This must be backported up to 2.8.
2026-02-17 18:18:44 +01:00
Aurelien DARRAGON
747ff09818 MEDIUM: backend: make "balance random" consider tg local req rate when loads are equal
This is a follow up to b6bdb2553 ("MEDIUM: backend: make "balance random"
consider req rate when loads are equal")

In the above patch, we used the global sess_per_sec metric to choose which
server we should be using. But the original intent was to use the per
thread group statistic.

No backport needed, the previous patch already improved the situation in
3.3, so let's not take the risk of breaking that.
2026-02-17 09:51:46 +01:00
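The selection rule described above amounts to a two-level comparison, sketched here with plain integers. The helper is hypothetical; the real code works on server structs and per-thread-group statistics:

```c
#include <assert.h>

/* Pick between two candidate servers: the lower load wins; on equal
 * loads, the lower (thread-group local) request rate wins. Returns 0
 * for the first candidate, 1 for the second. */
static int pick_random_candidate(unsigned int load_a, unsigned int rate_a,
                                 unsigned int load_b, unsigned int rate_b)
{
    if (load_a != load_b)
        return load_a < load_b ? 0 : 1;
    return rate_a <= rate_b ? 0 : 1;
}
```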
William Lallemand
1274c21a42 BUG/MINOR: ssl: error with ssl-f-use when no "crt"
ssl-f-use lines try to load a crt file, but the "crt" keyword is not
mandatory. That could lead to crtlist_load_crt() being called with a
NULL path, and trying to do a stat.

In this particular case we don't need to try anything and it's better to
exit with an actual error.

Must be backported as far as 3.2.
2026-02-16 18:41:40 +01:00
William Lallemand
0016d45a9c BUG/MINOR: ssl: clarify ssl-f-use errors in post-section parsing
crtlist_load_crt() in post_section_frontend_crt_init() won't give
details about the line being parsed; this should be done by the caller.

Modify post_section_frontend_crt_init() to output the right error format.

Must be backported to 3.2.
2026-02-16 18:41:08 +01:00
William Lallemand
e0d1cdff6a BUG/MINOR: ssl: fix leak in ssl-f-use parser upon error
cfg_crt_node->filename is leaked on the error path in the ssl-f-use
configuration parser.

Could be backported as far as 3.2.
2026-02-16 16:04:35 +01:00
William Lallemand
86df0e206e BUG/MINOR: ssl: double-free on error path w/ ssl-f-use parser
In post_section_frontend_crt_init(), the crt_entry is populated with the
ssl_conf from the cfg_crt_node. On the error path, the crt_list is
completely freed, including the ssl_conf structure. But the ssl_conf
structure was already freed when freeing the cfg_crt_node.

Fix the issue by doing a crtlist_dup_ssl_conf(n->ssl_conf) in the
crtlist_entry instead of an assignment.

Fix issue #3268.

Needs to be backported as far as 3.2. The previous patch, which adds the
crtlist_dup_ssl_conf() declaration, is needed.
2026-02-16 16:04:35 +01:00
William Lallemand
df8e05815c BUG/MINOR: ssl: lack crtlist_dup_ssl_conf() declaration
Add the missing crtlist_dup_ssl_conf() declaration in ssl_crt-list.h.

Could be backported if needed.
2026-02-16 16:04:35 +01:00
Willy Tarreau
1d2490c5ae DEV: gdb: use unsigned longs to display pools memory usage
The pools memory usage calculation was done using ints by default, making
it harder to identify large ones. Let's switch to unsigned long for the
size calculations.
2026-02-16 11:07:23 +01:00
David Carlier
cb63e899d9 CLEANUP: deviceatlas: add unlikely hints and minor code tidying
Add unlikely() hints on error paths in init, conv and fetch functions.
Remove unnecessary zero-initialization of local buffers that are
always written before use. Fix indentation in da_haproxy_checkinst()
and remove unused loop variable initialization.
2026-02-14 15:49:00 +01:00
David Carlier
076ec9443c MINOR: deviceatlas: precompute maxhdrlen to skip oversized headers early
Precompute the maximum header name length from the atlas evidence
headers at init and hot-reload time. Use it in da_haproxy_fetch() to
skip headers early that cannot match any known DeviceAtlas evidence
header, avoiding unnecessary string copies and comparisons.
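
The idea can be sketched generically; the header table, names and sizes
below are made up for illustration (the real list comes from the atlas
header_priorities at init and hot-reload time):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical evidence header table standing in for the atlas data. */
static const char *evidence_hdrs[] = { "user-agent", "accept-language" };
static size_t maxhdrlen;

/* Done once at init: remember the longest known evidence header name. */
void precompute_maxhdrlen(void)
{
    maxhdrlen = 0;
    for (size_t i = 0; i < sizeof(evidence_hdrs) / sizeof(*evidence_hdrs); i++) {
        size_t l = strlen(evidence_hdrs[i]);
        if (l > maxhdrlen)
            maxhdrlen = l;
    }
}

/* Hot path: a header name longer than any known evidence name cannot
 * match, so it can be skipped before any copy or comparison. */
int may_match_evidence(const char *name, size_t len)
{
    (void)name; /* only the length gate is shown here */
    return len <= maxhdrlen;
}
```

A full comparison is only needed for names that pass the length gate.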
2026-02-14 15:49:00 +01:00
David Carlier
f5d03bbe13 MINOR: deviceatlas: define header_evidence_entry in dummy library header
Add the struct header_evidence_entry definition to the dummy dac.h
to accommodate the ongoing deviceatlas module update which now
iterates over atlas header_priorities to precompute maxhdrlen.
The struct was already referenced by struct da_atlas but lacked
a definition in the dummy header.
2026-02-14 15:49:00 +01:00
David Carlier
23aeb72798 MINOR: deviceatlas: increase DA_MAX_HEADERS and header buffer sizes
Increase DA_MAX_HEADERS from 24 to 32 and hbuf from 24 to 64 to
accommodate current DeviceAtlas data files which may use more headers
and longer header names.
2026-02-14 14:47:22 +01:00
David Carlier
8031bf6e03 MINOR: deviceatlas: check getproptype return and remove pprop indirection
Check the return value of da_atlas_getproptype() and skip the property
on failure instead of using an uninitialized proptype. Also remove the
unnecessary pprop pointer indirection, using prop directly.
2026-02-14 14:47:22 +01:00
David Carlier
0fad24b5da BUG/MINOR: deviceatlas: set cache_size on hot-reloaded atlas instance
When hot-reloading the atlas in da_haproxy_checkinst(), the configured
cache_size was not applied to the new instance, causing it to use the
default value.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
1d1daff7c4 BUG/MINOR: deviceatlas: fix deinit to only finalize when initialized
da_fini() was called unconditionally in deinit_deviceatlas() even when
da_init() was never called. Move it inside the daset check. Also remove
the erroneous shm_unlink() call which could affect the dadwsch shared
memory used by the scheduling process.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
d8ff676592 BUG/MINOR: deviceatlas: fix resource leak on hot-reload compile failure
In da_haproxy_checkinst(), when da_atlas_compile() failed, the cnew
buffer was leaked. Add a free(cnew) in the else branch.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
ea3b1bb866 BUG/MINOR: deviceatlas: fix double-checked locking race in checkinst
In da_haproxy_checkinst(), base[0] was checked before acquiring the
lock but not re-checked after. Another thread could have already
processed the reload between the initial check and the lock
acquisition, leading to a race condition.
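
The fixed pattern is classic double-checked locking; a minimal generic
sketch (names hypothetical, not haproxy's actual code):

```c
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t reload_lock = PTHREAD_MUTEX_INITIALIZER;
static int reload_count;        /* stands for the expensive atlas rebuild */

/* <slot> stands for base[0]: NULL until the reload has been processed. */
void maybe_reload(const char **slot)
{
    if (*slot != NULL)          /* cheap unlocked fast path */
        return;

    pthread_mutex_lock(&reload_lock);
    /* Re-check under the lock: another thread may have completed the
     * reload between the unlocked check and the lock acquisition. */
    if (*slot == NULL) {
        reload_count++;
        *slot = "loaded";
    }
    pthread_mutex_unlock(&reload_lock);
}

int get_reload_count(void) { return reload_count; }
```

Without the second check, two racing threads could both perform the
reload after passing the first test.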

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
734a139c52 BUG/MINOR: deviceatlas: fix cookie vlen using wrong length after extraction
In da_haproxy_fetch(), vlen was set from v.len (the raw header value
length) instead of the truncated copy length. Also the cookie-specific
vlen calculation used an incorrect subtraction instead of the actual
extracted cookie value length (pl) returned by
http_extract_cookie_value().

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
7098b4f93a BUG/MINOR: deviceatlas: fix off-by-one in da_haproxy_conv()
The user-agent string copy had an off-by-one error: the buffer size
limit did not account for the null terminator, and the memcpy length
used i-1 which truncated the last character of the user-agent string.
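
The two mistakes can be illustrated on a generic bounded copy; the
buffer size and function names below are invented for the sketch:

```c
#include <string.h>

#define UA_BUF_SIZE 8   /* small for illustration; the real buffer is larger */

/* Buggy variant: the limit ignores the NUL terminator, and the memcpy
 * length of i-1 drops the last character of the source string. */
size_t copy_ua_buggy(char *dst, const char *src, size_t len)
{
    size_t i = (len > UA_BUF_SIZE) ? UA_BUF_SIZE : len;

    memcpy(dst, src, i - 1);
    dst[i - 1] = '\0';
    return i - 1;
}

/* Fixed variant: reserve one byte for the terminator and copy the full
 * truncated length. */
size_t copy_ua_fixed(char *dst, const char *src, size_t len)
{
    size_t i = (len > UA_BUF_SIZE - 1) ? UA_BUF_SIZE - 1 : len;

    memcpy(dst, src, i);
    dst[i] = '\0';
    return i;
}
```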

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
d8f219b380 BUG/MEDIUM: deviceatlas: fix resource leaks on init error paths
When da_atlas_compile() or da_atlas_open() failed in init_deviceatlas(),
atlasimgptr was leaked and da_fini() was never called. Also add a NULL
check on strdup() for the default cookie name with proper cleanup of
the atlas and image pointer on failure.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
6342705cee BUG/MINOR: deviceatlas: add NULL checks on strdup() results in config parsers
Add missing NULL checks after strdup() for the json file path in
da_json_file() and the cookie name in da_properties_cookie().

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
2d6e9e15cd BUG/MINOR: deviceatlas: add missing return on error in config parsers
da_log_level() and da_cache_size() were missing a return -1 on error,
causing fall-through to the normal return 0 path when invalid values
were provided.
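
A hypothetical parser with the same shape shows the fall-through; the
function name and validation rules are made up for the sketch:

```c
#include <stdlib.h>

/* Without the early return, an invalid value would be reported but
 * parsing would fall through to the success path and return 0 anyway. */
int parse_cache_size(const char *arg, long *out)
{
    char *end;
    long v = strtol(arg, &end, 10);

    if (*end != '\0' || v < 0)
        return -1;   /* the missing "return -1" the fix adds */

    *out = v;
    return 0;
}
```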

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
Willy Tarreau
5689605c8e DEV: gdb: add a utility to find the post-mortem address from a core
More and more often, core dumps retrieved on systems that build with
-fPIE by default are becoming unusable. Even functions and global
symbols get relocated and gdb cannot figure out their final position.
Ironically the post_mortem struct lying in its own section that was
meant to ease its finding is not exempt from this problem.

The only remaining way is to inspect the core to search for the
post-mortem magic, figure out its offset in the file and look up the
corresponding virtual address with objdump. This is quite a hassle.

This patch implements a simple utility that opens a 64-bit core dump,
scans the program headers looking for a data segment which contains
the post-mortem magic, and prints it on stdout. It also places the
"pm_init" command alone on its own line to ease copy-pasting into the
gdb console. With this, at least the other commands in this directory
work again and allow inspecting the program's state. E.g.:

  $ ./getpm core.57612
  Found post-mortem magic in segment 5:
    Core File Offset: 0xfc600 (0xd5000 + 0x27600)
    Runtime VAddr:    0x5613e52b6600 (0x5613e528f000 + 0x27600)
    Segment Size:     0x28000

  In gdb, copy-paste this line:

     pm_init 0x5613e52b6600

It's worth noting that the program has so few dependencies that it even
builds with nolibc, allowing a static executable to be uploaded into
containers being debugged that lack development tools and compilers.
The build procedure is indicated in the source code.
2026-02-14 14:46:33 +01:00
Aurelien DARRAGON
d71e2e73ea MEDIUM: filters: use per-channel filter list when relevant
In the historical implementation, all filter related information was
stored at the stream level (using a struct strm_flt * context), and
filter iteration was also performed at the stream level.

We identified that this was not ideal and would make the implementation
of future filters more complex, since filter ordering must be handled
differently during request and response handling (for decompression,
for instance).

To make such a thing possible, in this commit we migrate some
channel-specific filter contexts into the channel directly (request or
response), and we implement 2 additional filter lists, one on the
request channel and another on the response channel. The historical
stream filter list is kept as-is because in some contexts only the
stream is available and we have to iterate over all filters. But for
functions where we are only interested in request-side or response-side
filters, we now use the dedicated channel filter lists instead.

The only overhead is that the "struct filter" was expanded by two
"struct list".

For now, no change of behavior is expected.
2026-02-13 12:24:13 +01:00
Aurelien DARRAGON
bb6cfbe754 MINOR: filters: rework filter iteration for channel related callback functions
Multiple channel related functions share the same construction: they use
list_for_each_entry() to work on a given filter from the stream+channel
combination. In future commits we will try to use the filter list from
the dedicated channel list instead of the stream one, thus in this patch
we need, as a prerequisite, to implement and use the flt_list_{start,next}
API to iterate over the filter list, giving the API the responsibility to
iterate over the correct list depending on the context, while the calling
function remains free to use the iteration construction it needs. This
way we will be able to easily change the way we iterate over the filter
list without duplicating the code for requests and responses.
2026-02-13 12:24:07 +01:00
Aurelien DARRAGON
e88b219331 MINOR: filters: rework RESUME_FILTER_* macros as inline functions
There is no need to have those helpers defined as macros, and since it
is not mandatory, code maintenance is much easier using functions, so
let's switch to function definitions.

Also, we change the way we iterate over the list so that the calling
function now has a pseudo API to get and iterate over filter pointers
while keeping control of how it implements the iterating logic.

One benefit of this is that we will also be able to switch between lists
depending on the channel type, which is a prerequisite for an upcoming
rework that splits the filter list over request and response channels
(a commit will follow).

No change of behavior is expected.
2026-02-13 12:24:00 +01:00
William Lallemand
f47b800ac3 SCRIPTS: build-vtest: allow to set a TMPDIR and a DESTDIR
Implement a way to set a destination directory using DESTDIR, and a tmp
directory using TMPDIR.

By default:

- DESTDIR is ../vtest like it was done previously
- TMPDIR  is mktemp -d

Only the vtest binary is copied in DESTDIR.

Example:

	TMPDIR=/dev/shm/ DESTDIR=/home/user/.local/bin/ ./scripts/build-vtest.sh
2026-02-12 18:48:09 +01:00
William Lallemand
d13164e105 MINOR: startup: show the list of detected features at runtime with haproxy -vv
Features prefixed by "HAVE_WORKING_" in the haproxy -vv feature list
are features that are detected at runtime.

This patch splits these features onto another line in haproxy -vv. This
line is named "Detected feature list".
2026-02-12 18:02:19 +01:00
William Lallemand
b90b312a50 MINOR: startup: sort the feature list in haproxy -vv
The feature list in haproxy -vv is partly generated from the Makefile
using the USE_* keywords, but it's also possible to add keywords in the
feature list using hap_register_feature(), which adds the keyword at the
end of the list. When doing so, the list is not correctly sorted anymore.

This patch fixes the problem by splitting the string using an array of
ist and applying a qsort() on it.
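
The real patch splits the string into an array of ist; the same idea
can be sketched with plain C strings and a qsort() comparison callback:

```c
#include <stdlib.h>
#include <string.h>

/* Comparison callback for qsort() over an array of C strings. */
static int feat_cmp(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

/* Sort the <n> feature tokens in place so registered keywords end up
 * in alphabetical order alongside the Makefile-generated ones. */
void sort_features(const char **feats, size_t n)
{
    qsort((void *)feats, n, sizeof(*feats), feat_cmp);
}
```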
2026-02-12 18:02:19 +01:00
William Lallemand
1592ed9854 MINOR: startup: Add HAVE_WORKING_TCP_MD5SIG in haproxy -vv
The TCP_MD5SIG ifdef is not enough to check if the feature is usable.
The code might compile but the OS could prevent its use.

This patch tries to use the TCP_MD5SIG setsockopt before adding
HAVE_WORKING_TCP_MD5SIG to the feature list, so it prevents reg-tests
from starting if the OS can't run it.
2026-02-12 18:02:19 +01:00
Remi Tricot-Le Breton
f9b3319f48 REGTESTS: jwt: Add new "jwt_decrypt_jwk" tests
Test the new "jwt_decrypt_jwk" converter that takes a JWK as argument,
either as a string or in a variable.
Only "RSA" and "oct" types are managed for now.
2026-02-12 16:31:38 +01:00
Remi Tricot-Le Breton
aad212954f MINOR: jwt: Add new jwt_decrypt_jwk converter
This converter takes a private key in the JWK format (RFC7517) that can
be provided as a string or via a variable.
The only keys managed for now are of type 'RSA' or 'oct'.
2026-02-12 16:31:27 +01:00
Remi Tricot-Le Breton
b26f0cc45a MINOR: jwt: Convert an RSA JWK into an EVP_PKEY
Add helper functions that convert a JWK (JSON representation of an RSA
private key) into an EVP_PKEY (containing the private key).
Those functions are not used yet, they will be used in the upcoming
'jwt_decrypt_jwk' converter.
2026-02-12 16:31:12 +01:00
Remi Tricot-Le Breton
b3a44158fb MINOR: ssl: Missing '\n' in error message
Fix missing '\n' in error message raised when trying to load a password
protected private key.
2026-02-12 16:29:01 +01:00
Amaury Denoyelle
8e16fd2cf1 BUG/MAJOR: quic: fix parsing frame type
QUIC frame type is encoded as a varint. Initially, haproxy parsed it as
a single byte, which was enough to cover frames defined in RFC9000.

The code has been extended recently to support multi-byte encoded
values, in anticipation of QUIC frame extension support. However, there
was no check on the varint format. On an invalid varint, the frame type
kept its zero initial value and was erroneously interpreted as a
PADDING frame. Thus the rest of the packet was incorrectly handled,
with various resulting effects, including infinite loops and/or
crashes.

This patch fixes this by checking the return value of quic_dec_int(). If
varint cannot be parsed, the connection is immediately closed.
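
haproxy's actual check is on quic_dec_int()'s return value; the general
shape of a QUIC varint decode with a truncation check (RFC 9000 §16)
looks like this minimal sketch (function name invented):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal QUIC variable-length integer decoder (RFC 9000, section 16).
 * Returns the number of bytes consumed, or 0 if the buffer is too short
 * to hold the encoded value; the caller must treat 0 as an error and
 * must not fall back to a default frame type. */
size_t quic_varint_dec(const uint8_t *buf, size_t len, uint64_t *out)
{
    size_t n;
    uint64_t v;

    if (len == 0)
        return 0;
    n = (size_t)1 << (buf[0] >> 6);   /* 1, 2, 4 or 8 bytes */
    if (len < n)
        return 0;                     /* truncated varint: reject */
    v = buf[0] & 0x3f;
    for (size_t i = 1; i < n; i++)
        v = (v << 8) | buf[i];
    *out = v;
    return n;
}
```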

This issue is assigned to CVE-2026-26080 report.

This must be backported up to 3.2.

Reported-by: Asim Viladi Oglu Manizada <manizada@pm.me>
2026-02-12 09:09:44 +01:00
Amaury Denoyelle
4aa974f949 BUG/MAJOR: quic: reject invalid token
Token parsing code on INITIAL packet for the NEW_TOKEN format is not
robust enough and may even crash on some rare malformed packets.

This patch fixes this by adding a check on the expected length of the
received token. The packet is now rejected if the token does not match
QUIC_TOKEN_LEN. This check is legitimate as haproxy should only parse
tokens emitted by itself.

This issue has been introduced with the implementation of NEW_TOKEN
tokens parsing required for 0-RTT support.

This issue is assigned to CVE-2026-26081 report.

This must be backported up to 3.0.

Reported-by: Asim Viladi Oglu Manizada <manizada@pm.me>
2026-02-12 09:09:44 +01:00
Amaury Denoyelle
d80f0143c9 BUG/MINOR: quic: ensure handshake speed up is only run once per conn
When a duplicated CRYPTO frame is received during handshake, a server
may consider that there was a packet loss and immediately retransmit its
pending CRYPTO data without having to wait for PTO expiration. However,
RFC 9002 indicates that this should only be performed at most once per
connection to avoid excessive packet transmission.

QUIC connection is flagged with QUIC_FL_CONN_HANDSHAKE_SPEED_UP to mark
that a fast retransmit has been performed. However, during the
refactoring of CRYPTO handling with the storage conversion from ncbuf
to ncbmbuf, the check on the flag was accidentally removed. The faulty
patch is the following one:

  commit f50425c021
  MINOR: quic: remove received CRYPTO temporary tree storage

This patch adds again the check on QUIC_FL_CONN_HANDSHAKE_SPEED_UP
before initiating fast retransmit. This ensures this is only performed
once per connection.

This must be backported up to 3.3.
2026-02-12 09:09:44 +01:00
Olivier Houchard
b65df062be MINOR: servers: Call process_srv_queue() without lock when possible
In server_warmup(), call process_srv_queue() only once we released the
server lock, as we don't need it.
2026-02-12 02:19:38 +01:00
Olivier Houchard
a8f50cff7e MINOR: queues: Check minconn first in srv_dynamic_maxconn()
In srv_dynamic_maxconn(), we'll decide that the max number of connection
is the server's maxconn if 1) the proxy's number of connection is over
fullconn, or if minconn was not set.
Check if minconn is not set first, as it will be true most of the time,
and as the proxy's "beconn" variable is in a busy cache line, it can be
costly to access it, while minconn/maxconn is in a cache line that
should very rarely change.
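
A sketch of the reordering (simplified linear scaling; struct and
function names are illustrative, not the actual haproxy code):

```c
/* Evaluate the cheap, rarely-changing condition first so the shared
 * <beconn> counter (a contended cache line) is only read when minconn
 * is actually configured. */
struct srv_s { unsigned minconn, maxconn; };
struct px_s  { unsigned beconn, fullconn; };

unsigned dyn_maxconn(const struct px_s *px, const struct srv_s *sv)
{
    /* !sv->minconn checked first: the short-circuit avoids touching
     * px->beconn in the common case */
    if (!sv->minconn || px->beconn >= px->fullconn)
        return sv->maxconn;

    /* otherwise scale between minconn and maxconn with the load */
    return sv->minconn +
           (sv->maxconn - sv->minconn) * px->beconn / px->fullconn;
}
```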
2026-02-12 02:18:59 +01:00
Willy Tarreau
c622ed23c8 DOC: proxy-proto: underline the packed attribute for struct pp2_tlv_ssl
Oto Valek rightfully reported in issue #3262 that the proxy-protocol
doc makes no mention of the packed attribute on struct pp2_tlv_ssl,
which is mandatory since fields are not type-aligned in it. Let's
add it in the definition and make an explicit mention about it to
save implementers from wasting their time trying to debug this.

It can be backported.
2026-02-11 14:46:55 +01:00
Egor Shestakov
f5f9c008b1 CLEANUP: initcall: adjust comments to INITCALL{0,1} macros
The comments were likely copy-pasted without being adjusted, which is
misleading regarding the number of arguments.

No backport needed.
2026-02-11 09:10:56 +01:00
William Lallemand
ea92b0ef01 BUG/MINOR: ssl: SSL_CERT_DIR environment variable doesn't affect haproxy
The documentation of @system-ca specifies that one can overwrite the
value provided by the SSL Library using SSL_CERT_DIR.

However it seems like X509_get_default_cert_dir() is not affected by
this environment variable, and X509_get_default_cert_dir_env() needs to
be used in order to get the variable name and fetch the value manually.

This could be backported in every stable branches. Note that older
branches don't have the memprintf in ssl_sock.c.
2026-02-10 21:34:45 +01:00
William Lallemand
2ac0d12790 MINOR: startup: Add the SSL lib verify directory in haproxy -vv
SSL libraries built manually might lack the right
X509_get_default_cert_dir() value.

The common way to fix the problem is to build openssl with
./configure --openssldir=/etc/ssl/

In order to verify this setting, output it with haproxy -vv.
2026-02-10 21:06:38 +01:00
Willy Tarreau
c724693b95 MINOR: activity: allow to switch per-task lock/memory profiling at runtime
Given that we already have "set profiling task", it's easy to allow
enabling/disabling the lock and/or memory profiling at run time.
However, the change will only be applied the next time task profiling
is switched from off/auto to on.

The patch is very minor and is best viewed with git show -b because it
indents a whole block that moves into an "if" clause.

This can be backported to 3.3 along with the two previous patches.
2026-02-10 17:53:01 +01:00
Willy Tarreau
e2631ee5f7 MEDIUM: activity: apply and use new finegrained task profiling settings
In continuity of previous patch, this one makes use of the new profiling
flags. For this, based on the global "profiling" setting, when switching
profiling on, we set or clear two flags on the thread context,
TH_FL_TASK_PROFILING_L and TH_FL_TASK_PROFILING_M to indicate whether
lock profiling and/or malloc profiling are desired when profiling is
enabled. These flags are checked along with TH_FL_TASK_PROFILING to
decide when to collect time around a lock or a malloc. And by default
we're back to the behavior of 3.2 in that neither lock nor malloc times
are collected anymore.

This is sufficient to see the CPU usage spent in the VDSO to significantly
drop from 22% to 2.2% on a highly loaded system.

This should be backported to 3.3 along with the previous patch.
2026-02-10 17:52:59 +01:00
Willy Tarreau
a7b2353cb3 MINOR: activity: support setting/clearing lock/memory watching for task profiling
Damien Claisse reported in issue #3257 a performance regression between
3.2 and 3.3 when task profiling is enabled, more precisely in relation
with the following patches that were merged:

  98cc815e3e ("MINOR: activity: collect time spent with a lock held for each task")
  503084643f ("MINOR: activity: collect time spent waiting on a lock for each task")
  9d8c2a888b ("MINOR: activity: collect CPU time spent on memory allocations for each task")

The issue mostly comes from the first patches. What happens is that the
local time is taken when entering and leaving each lock, which costs a
lot on a contended system. The problem here is the lack of fine-grained
settings for lock and malloc profiling.

This patch introduces a better approach. The task profiler goes back to
its default behavior in on/auto modes, but the configuration now accepts
new extra options "lock", "no-lock", "memory", "no-memory" to precisely
indicate other timers to watch for each task when profiling turns on.

This is achieved by setting two new flags HA_PROF_TASKS_LOCK and
HA_PROF_TASKS_MEM in the global "profiling" variable.

This patch only parses the new values and assigns them to the global
variable from the config file for now. The doc was updated.
2026-02-10 17:47:02 +01:00
Willy Tarreau
3b45beb465 CLEANUP: haproxy: fix bad line wrapping in run_poll_loop()
Commit 3674afe8a0 ("BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING
and clr FL_NOTIFIED") accidentally left a strange-looking line wrapping
making one think of an editing mistake, let's fix it and keep it on a
single line given that even indented wrapping is almost as large.

This can be backported with the fix above till 2.8 to keep the patch
context consistent between versions.
2026-02-10 14:11:42 +01:00
Willy Tarreau
64c5d45a26 BUG/MEDIUM: lb-chash: always properly initialize lb_nodes with dynamic servers
An issue was introduced in 3.0 with commit faa8c3e024 ("MEDIUM: lb-chash:
Deterministic node hashes based on server address"): the initialization
of the new server_key field and of the lb_nodes entries was not updated
for servers added at run time with "add server": server_key remains zero
and the key used in lb_node remains the one depending only on the
server's ID.

This will cause trouble when adding new servers with consistent hashing,
because the hash-key will be ignored until the server's weight changes
and the key difference is detected, leading to its recalculation.

This is essentially caused by the poorly placed lb_nodes initialization
that is specific to lb-chash and had to be replicated in the code dealing
with server addition.

This commit solves the problem by adding a new ->server_init() function
in the lbprm proxy struct, that is called by the server addition code.
This also allows to abandon the complex check for LB algos that was
placed there for that purpose. For now only lb-chash provides such a
function, and calls it as well during initial setup. This way newly
added servers always use the correct key now.

While it should also theoretically have had an impact on servers added
with the "random" algorithm, it's unlikely that the difference between
proper server keys and those based on their ID could have had any visible
effect.

This patch should be backported as far as 3.0. The backport may be eased
by a preliminary backport of previous commit "CLEANUP: lb-chash: free
lb_nodes from chash's deinit(), not global", though this is not strictly
necessary if context is manually adjusted.
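
The callback pattern can be sketched generically; struct layout, names
and the placeholder hash below are illustrative, not haproxy's actual
definitions:

```c
#include <stddef.h>

/* The LB strategy exposes an optional per-server init callback that the
 * generic "add server" path calls, so algorithm-specific setup (like
 * chash key computation) is no longer duplicated there. */
struct server;

struct lbprm {
    int (*server_init)(struct server *sv);   /* may be NULL */
};

struct server {
    unsigned key;   /* stands for server_key */
    int id;
};

/* chash-specific initialization: compute the node key. */
static int chash_server_init(struct server *sv)
{
    sv->key = (unsigned)sv->id * 2654435761u;   /* placeholder hash */
    return 0;
}

/* Generic server-addition path: invoke the hook when provided. */
int add_server(struct lbprm *lb, struct server *sv)
{
    if (lb->server_init && lb->server_init(sv) < 0)
        return -1;
    return 0;
}
```

Algorithms without per-server setup simply leave the pointer NULL.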
2026-02-10 07:22:54 +01:00
Willy Tarreau
62239539bf CLEANUP: lb-chash: free lb_nodes from chash's deinit(), not global
There's an ambiguity about the ownership of lb_nodes in chash: it's
allocated by chash but freed by the server code in srv_free_params()
from srv_drop() upon deinit. Let's move this free() call to a
chash-specific function which will own the responsibility for doing
this instead. Note that the .server_deinit() callback is properly
called both when a proxy is taken down and on server deletion.
2026-02-10 07:20:50 +01:00
Amaury Denoyelle
91a5b67b25 BUG/MINOR: proxy: fix default ALPN bind settings
For "add backend" implementation, postparsing code in
check_config_validity() from cfgparse.c has been extracted into a new
dedicated function named proxy_finalize() in proxy.c.

This has caused an unexpected compilation issue, as in the latter file
the TLSEXT_TYPE_application_layer_protocol_negotiation macro may be
undefined, in particular when building without QUIC support. Thus, code
related to default ALPN on binds is discarded after the preprocessing
stage.

Fix this by including openssl-compat header file into proxy source file.
This should be sufficient to ensure SSL related defines are properly
included.

This should fix recent issues on SSL regtests.

No need to backport.
2026-02-09 13:52:25 +01:00
Willy Tarreau
ecffaa6d5a MINOR: net_helper: extend the ip.fp output with an option presence mask
Emeric suggested that it's sometimes convenient to instantly know if a
client has advertised support for window scaling or timestamps for
example. While the info is present in the TCP options output, it's hard
to extract since it respects the options order.

So here we're extending the 56-bit fingerprint with 8 extra bits that
indicate the presence of options 2..8, and any option above 9 for the
last bit. In practice this is sufficient since higher options are not
commonly used. Also TCP option 5 is normally not sent on the SYN (SACK,
only SACK_perm is sent), and echo options 6 & 7 are no longer used
(replaced with timestamps). These fields might be repurposed in the
future if some more meaningful options are to be mapped (e.g. MPTCP,
TFO cookie, auth).
2026-02-09 09:18:04 +01:00
Amaury Denoyelle
a1db464c3e BUG/MINOR: proxy: fix null dereference in "add backend" handler
When a backend is created at runtime, the new proxy instance is inserted
at the end of proxies_list. This operation is buggy if this list is
empty: the code causes a null dereference which will lead to a crash.
It also triggers the following compilation warning:

  CC      src/proxy.o
src/proxy.c: In function 'cli_parse_add_backend':
src/proxy.c:4933:36: warning: null pointer dereference [-Wnull-dereference]
 4933 |                 proxies_list->next = px;
      |                 ~~~~~~~~~~~~~~~~~~~^~~~

This patch fixes this issue. Note that in reality it cannot occur at
this moment as proxies_list cannot be empty (haproxy requires at least
one frontend to start, and the list also always contains internal
proxies).
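
A tail-append that handles the empty list can be sketched with a
pointer-to-pointer walk (a generic singly-linked list, not haproxy's
actual proxy structure):

```c
#include <stddef.h>

struct proxy {
    struct proxy *next;
};

/* Append <px> at the tail of the list whose head pointer is <*head>.
 * Walking with a pointer-to-pointer makes the empty-list case fall out
 * naturally, instead of dereferencing the head unconditionally as the
 * original code did. */
void proxy_append(struct proxy **head, struct proxy *px)
{
    struct proxy **node = head;

    while (*node)
        node = &(*node)->next;
    *node = px;
    px->next = NULL;
}
```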

No need to backport.
2026-02-06 21:35:12 +01:00
Amaury Denoyelle
5dff6e439d BUG/MINOR: proxy: fix clang build error on "add backend" handler
This patch fixes the following compilation error:
  src/proxy.c:4954:12: error: format string is not a string literal
        (potentially insecure) [-Werror,-Wformat-security]
   4954 |         ha_notice(msg);
        |                   ^~~
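
The classic fix is to pass the untrusted string as an argument to a
literal "%s" format. A minimal sketch using snprintf() (function names
invented; haproxy's ha_notice() behaves like a printf-family function):

```c
#include <stdio.h>

/* Unsafe: if <msg> contains '%' directives they are interpreted as
 * format specifiers, which clang flags with -Wformat-security. */
int notice_unsafe(char *dst, size_t sz, const char *msg)
{
    return snprintf(dst, sz, msg);
}

/* Safe: the untrusted string is an argument to a literal format. */
int notice_safe(char *dst, size_t sz, const char *msg)
{
    return snprintf(dst, sz, "%s", msg);
}
```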

No need to backport.
2026-02-06 21:17:18 +01:00
Amaury Denoyelle
d7cdd2c7f4 REGTESTS: add dynamic backend creation test
Add a new regtests to validate backend creation at runtime. A server is
then added and requests made to validate the newly created instance
before (with force-be-switch) and after publishing.
2026-02-06 17:28:27 +01:00
Amaury Denoyelle
5753c14e84 MINOR: proxy: assign dynamic proxy ID
Implement proxy ID generation for dynamic backends. This is performed
through the already existing proxy_get_next_id() function.

As an optimization, the lookup will be performed starting from a global
variable <dynpx_next_id>. It is initialized to the greatest ID assigned
after parsing, and updated each time a backend instance is created. When
backend deletion is implemented, it could be lowered to the newly
available slot.
2026-02-06 17:28:27 +01:00
Amaury Denoyelle
3115eb82a6 MEDIUM: proxy: implement dynamic backend creation
Implement the required operations for "add backend" handler. This
requires a new proxy allocation, settings copy from the specified
default instance and proxy config finalization. All handlers registered
via REGISTER_POST_PROXY_CHECK() are also called on the newly created
instance.

If no errors were encountered, the newly created proxy is finally
attached to the proxies list.
2026-02-06 17:28:27 +01:00
Amaury Denoyelle
07195a1af4 MINOR: proxy: check default proxy compatibility on "add backend"
This commit completes the "add backend" handler with some checks
performed on the specified default proxy instance. These are additional
checks, specific to dynamic backends, on top of the already existing
inheritance rules.

For now, a default proxy is considered not compatible if it is not in
TCP or HTTP mode. Also, a default proxy is rejected if it references HTTP
errors. This limitation may be lifted in the future, when HTTP errors
are partially reworked.
2026-02-06 17:28:26 +01:00
Amaury Denoyelle
a603811aac MINOR: proxy: parse guid on dynamic backend creation
Define an extra optional GUID argument for the "add backend" command.
This can be useful as it is not possible to define it via a default proxy
instance.
2026-02-06 17:28:04 +01:00
Amaury Denoyelle
e152913327 MINOR: proxy: parse mode on dynamic backend creation
Add an optional "mode" argument to the "add backend" CLI command. This
argument allows specifying whether the backend is in TCP or HTTP mode.

The argument is mandatory, unless the inherited default proxy already
explicitly specifies the mode. To differentiate whether TCP mode is
implicit or explicit, a new proxy flag PR_FL_DEF_EXPLICIT_MODE is
defined. It is set for every defaults instance which explicitly defines
its mode.
2026-02-06 17:27:50 +01:00
Amaury Denoyelle
7ac5088c50 MINOR: proxy: define "add backend" handler
Define a basic CLI handler for "add backend".

For now, this handler only parses the name argument and
returns an error if a duplicate already exists. It runs under thread
isolation, to guarantee thread safety during the proxy creation.

This feature is considered in development. The CLI command requires
experimental-mode to be set.
2026-02-06 17:26:55 +01:00
Amaury Denoyelle
817003aa31 MINOR: backend: add function to check support for dynamic servers
Move the backend compatibility checks performed during 'add server' into
a dedicated function be_supports_dynamic_srv(). This should simplify the
addition of future restrictions.

This function will be reused when implementing backend creation at
runtime.
2026-02-06 14:35:19 +01:00
Amaury Denoyelle
dc6cf224dd MINOR: proxy: refactor mode parsing
Define a new utility function str_to_proxy_mode() which is able to
convert a string into the corresponding proxy mode if possible. This new
function is used for parsing the "mode" proxy configuration keyword.

This patch will be reused for dynamic backend implementation, in order
to parse a similar "mode" argument via a CLI handler.
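A minimal sketch of what such a helper can look like (hypothetical names
and return values; the real str_to_proxy_mode() prototype in HAProxy may
differ):

```c
#include <string.h>

/* Illustrative proxy modes; HAProxy's real enum differs. */
enum px_mode { PX_MODE_UNKNOWN = 0, PX_MODE_TCP, PX_MODE_HTTP };

/* Convert a "mode" keyword argument to the corresponding proxy mode,
 * returning PX_MODE_UNKNOWN when the string is not recognized.
 */
static enum px_mode str_to_mode(const char *s)
{
        if (strcmp(s, "tcp") == 0)
                return PX_MODE_TCP;
        if (strcmp(s, "http") == 0)
                return PX_MODE_HTTP;
        return PX_MODE_UNKNOWN;
}
```

Sharing one converter between the config parser and the CLI handler
guarantees both accept exactly the same keywords.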
2026-02-06 14:35:18 +01:00
Amaury Denoyelle
87ea407cce MINOR: proxy: refactor proxy inheritance of a defaults section
If a proxy references a defaults instance, some checks must be
performed to ensure that inheritance will be compatible. The refcount of
the defaults instance may also be incremented if some settings cannot be
copied. This operation is performed when parsing a new proxy or defaults
section which references a defaults section, either implicitly or
explicitly.

This patch extracts this code into a dedicated function named
proxy_ref_defaults(). This in turn may call defaults_px_ref()
(previously called proxy_ref_defaults()) to increment its refcount.

The objective of this patch is to be able to reuse defaults inheritance
validation for dynamic backends created at runtime, outside of the
parsing code.
2026-02-06 14:35:18 +01:00
Amaury Denoyelle
a8bc83bea5 MINOR: cfgparse: move proxy post-init in a dedicated function
A lot of proxy initialization code is delayed to the post-parsing stage,
as it depends on the configuration being fully parsed. This is performed
via a loop on proxies_list.

Extract this code in a dedicated function proxy_finalize(). This patch
will be useful for dynamic backends creation.

Note that for the moment the code has been extracted as-is. With each
new feature, some init code was added there, and this has become a giant
loop with no real ordering. A future patch may provide some cleanup in
order to reorganize this.
2026-02-06 14:35:18 +01:00
Amaury Denoyelle
2c8ad11b73 MINOR: cfgparse: validate defaults proxies separately
Default proxies validation occurs during post-parsing. The objective is
to report any tcp/http-rules which could not behave as expected.

Previously, this was performed while looping over the standard proxies
list, when such a proxy references a default instance. This was enough as
only referenced named proxies were kept after parsing. However, this is
not the case anymore in the context of dynamic backend creation at
runtime.

As such, this patch now performs validation on every named defaults
outside of the standard proxies list loop. This should not cause any
behavior difference, as defaults are validated without using the proxy
which relies on it.

Along with this change, the PR_FL_READY proxy flag is now removed. Its
usage was only really needed for defaults, to avoid validating the same
instance multiple times. With the validation of defaults in their own
loop, it is now redundant.
2026-02-06 14:35:18 +01:00
Egor Shestakov
2a07dc9c24 BUG/MINOR: startup: handle a possible strdup() failure
Fix an unhandled strdup() failure when initializing global.log_tag.

The bug was introduced with the UAF fix for the global progname pointer
in 351ae5dbe, so it must be backported as far as 3.1.
2026-02-06 10:50:31 +01:00
Egor Shestakov
9dd7cf769e BUG/MINOR: startup: fix allocation error message of progname string
Initially, when init_early was introduced, the progname string was a
local variable used for temporary storage of log_tag. Now it's global and
sufficiently detached from log_tag. Thus, in the past the message could
report that the log_tag allocation failed, but that is no longer
accurate.

Must be backported to versions where the progname string became global,
that is since v3.1-dev9-96-g49772c55e.
2026-02-06 10:50:31 +01:00
Olivier Houchard
bf7a2808fc BUG/MEDIUM: threads: Defer checking the max threads per group number
Defer checking the max threads per group number until we're done
parsing the configuration file, as it may be set after a "thread-groups"
directive. Otherwise the default value of 64 will be used, even if there
is a max-threads-per-group directive.

This should be backported to 3.3.
2026-02-06 03:01:50 +01:00
Olivier Houchard
9766211cf0 BUG/MINOR: threads: Initialize maxthrpertgroup earlier.
Give global.maxthrpertgroup its default value at global creation,
instead of later when we're trying to detect the thread count.
It is used when verifying the configuration file validity, and if it was
not set in the config file, in a few corner cases, the value of 0 would
be used, which would then reject perfectly fine configuration files.

This should be backported to 3.3.
2026-02-06 03:01:36 +01:00
William Lallemand
9e023ae930 DOC: internals: add mworker V3 internals
Document the mworker V3 implementation introduced in HAProxy 3.1.
It explains the rationale behind moving configuration parsing out of the
master process to improve robustness.

Could be backported to 3.1.
2026-02-04 16:39:44 +01:00
Willy Tarreau
68e9fb73fd [RELEASE] Released version 3.4-dev4
Released version 3.4-dev4 with the following main changes :
    - BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
    - BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
    - BUG/MINOR: promex: Detach promex from the server on error dump its metrics dump
    - BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formating the start line
    - BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
    - BUG/MINOR: config: check capture pool creations for failures
    - BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
    - MEDIUM: pools: better check for size rounding overflow on registration
    - DOC: reg-tests: update VTest upstream link in the starting guide
    - BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
    - BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
    - MINOR: ssl: display libssl errors on private key loading
    - BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
    - MINOR: ssl: allow to disable certificate compression
    - BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
    - DOC: config: mention some possible TLS versions restrictions for kTLS
    - OPTIM: server: move queueslength in server struct
    - OPTIM: proxy: separate queues fields from served
    - OPTIM: server: get rid of the last use of _ha_barrier_full()
    - DOC: config: mention that idle connection sharing is per thread-group
    - MEDIUM: h1: strictly verify quoting in chunk extensions
    - BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
    - BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
    - MEDIUM: ssl: remove connection from msg callback args
    - MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
    - REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
    - DOC: internals: cleanup few typos in master-worker documentation
    - BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
    - MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
    - MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
    - BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
    - BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
    - DOC: config: mention the limitation on server id range for consistent hash
    - MEDIUM: backend: make "balance random" consider req rate when loads are equal
    - BUG/MINOR: config: Fix setting of alt_proto
2026-02-04 14:59:47 +01:00
Aperence
143f5a5c0d BUG/MINOR: config: Fix setting of alt_proto
This patch fixes the bug presented in issue #3254
(https://github.com/haproxy/haproxy/issues/3254), which
occurred on FreeBSD when using a stream socket in a
nameserver section. The bug was caused by an incorrect
reset of alt_proto for a stream socket when the default
socket is created as a datagram socket. This patch fixes
it by doing a late assignment to alt_proto when
a datagram socket is requested, leaving only the modification
of alt_proto done by MPTCP. Additional documentation
has also been added to clarify the use of the alt_proto
variable.
2026-02-04 14:54:20 +01:00
Willy Tarreau
b6bdb2553b MEDIUM: backend: make "balance random" consider req rate when loads are equal
As reported by Damien Claisse and Cédric Paillet, the "random" LB
algorithm can become particularly unfair with large numbers of servers
having few connections. It's indeed fairly common to see many servers
with zero connections in a thousand-server farm, and in this case
the P2C algo, consisting of checking the servers' loads, doesn't help at
all and is basically similar to random(1). In this case, we only rely
on the distribution of server IDs in the random space to pick the best
server, but it's possible to observe huge discrepancies.

An attempt to model the problem clearly shows that with 1600 servers
with weight 10, for 1 million requests, the lowest loaded ones will
take 300 req while the most loaded ones will get 780, with most of
the values between 520 and 700.

In addition, only the 28 lower bits of server IDs are used for
the key calculation, which means that node keys are more deterministic.
Setting random keys in the lowest 28 bits only packs values better,
with the min around 530 and the max around 710, and values mostly between
550 and 680.

This can only be compensated by increasing weights and draws, without
being a perfect fix either. At 4 draws, the min is around 560 and the
max around 670, with most values between 590 and 650.

This patch takes another approach to this problem: when servers are
tied regarding their loads, instead of arbitrarily taking the second one,
we now compare their current request rates, which is updated all the
time and smoothed over one second, and we pick the server with the
lowest request rate. Now with 2 draws, the curve is mostly flat, with
the min at 580 and the max at 628, and almost all values between 611
and 625. And 4 draws exclusively gives values from 614 to 624.

Other points will need to be addressed separately (bits of server ID,
maybe refine the hash algorithm), but these ones would affect how
caches are selected, and cannot be changed without an extra option.
For random however we can perform a change without impacting anyone.

This should be backported, probably only to 3.3 since it's where the
"random" algo became the default.
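The tie-break described above can be sketched like this (illustrative
structures and field names only, not the actual HAProxy code):

```c
/* Per-server stats considered by one draw: current load and the
 * smoothed request rate over the last second (illustrative fields).
 */
struct srv_stat {
        unsigned load;     /* current number of connections served */
        unsigned req_rate; /* smoothed requests per second */
};

/* Between two randomly drawn candidates, prefer the lower load; when
 * the loads are tied, prefer the lower request rate instead of
 * arbitrarily keeping one of the two.
 */
static const struct srv_stat *p2c_pick(const struct srv_stat *a,
                                       const struct srv_stat *b)
{
        if (a->load != b->load)
                return (a->load < b->load) ? a : b;
        return (a->req_rate <= b->req_rate) ? a : b;
}
```

Because the request rate is updated continuously, it discriminates
between servers that all show zero connections, which is exactly the
case where comparing loads degenerates to random(1).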
2026-02-04 14:54:16 +01:00
Willy Tarreau
3edf600859 DOC: config: mention the limitation on server id range for consistent hash
When using "hash-type consistent", we default to using the server's ID
as the insertion key. However, that key is scaled to avoid collisions
when inserting multiple slots for a server (16 per weight unit), and
that scaling loses the 4 topmost bits of the ID, so the only effective
range of IDs is 1..268435456, and anything above will provide the same
hashing keys again.

Let's mention this in the documentation, and also remind that it can
affect "balance random". This can be backported to all versions.
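The collision can be illustrated with the scaling factor alone (assumed
here to be a plain multiplication by 16 in 32-bit arithmetic; the real
key derivation has more steps):

```c
#include <stdint.h>

/* Scaling a 32-bit server ID by 16 (one slot per 1/16 of a weight
 * unit) shifts out the 4 topmost bits, so two IDs differing only by a
 * multiple of 2^28 yield the same base key.
 */
static uint32_t scale_id(uint32_t id)
{
        return id * 16; /* equivalent to id << 4, truncated to 32 bits */
}
```

So ID 1 and ID 1 + 2^28 (268435457) produce identical keys, which is
the 1..268435456 effective range mentioned above.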
2026-02-04 14:54:16 +01:00
Willy Tarreau
cddeea58cd BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
The per-cpu capacity of a cluster was taken into account since 3.2 with
commit 6c88e27cf4 ("MEDIUM: cpu-topo: change "performance" to consider
per-core capacity").

In cpu_policy_performance() and cpu_policy_efficiency(), we're trying
to figure which cores have more capacity than others by comparing their
cluster's average capacity. However, contrary to what the comment says,
we're not averaging per core but per CPU, which makes a difference for
CPUs mixing SMT with non-SMT cores on the same SoC, such as Intel's 14th
gen CPUs. Indeed, on a machine where cpufreq is not enabled, all CPUs
can be reported with a capacity of 1024, resulting in a big cluster of
16*1024, and 4 small clusters of 4*1024 each, giving an average of 1024
per CPU, making it impossible to distinguish one from the other. In this
situation, both "cpu-policy performance" and "cpu-policy efficiency"
enable all cores.

But this is wrong: what needs to be taken into account in the divide is
the number of *cores*, not *CPUs*, as that is what allows distinguishing
big from little clusters. This was not noticeable on the ARM machines the
commit above aimed at fixing because there, the number of CPUs equals the
number of cores. And on an x86 machine with cpufreq enabled, the
frequencies continue to help spotting which ones are big/little.

By using nb_cores instead of nb_cpus in the comparison and in the avg_capa
compare function, it properly works again on x86 without affecting other
machines with 1 CPU per core.

This can be backported to 3.2.
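The effect can be checked with the numbers from the example above,
assuming the 16-CPU big cluster is made of 8 SMT cores (a toy
calculation, not the HAProxy code):

```c
/* Average capacity of a cluster: summed capacity divided either by the
 * CPU count (old, buggy behavior) or by the core count (fixed).
 */
static unsigned avg_capa(unsigned total_capa, unsigned divider)
{
        return total_capa / divider;
}
```

Per CPU, the 16*1024 big cluster and a 4*1024 little cluster both
average 1024 and cannot be told apart; per core, the big cluster
averages 2048 and clearly stands out.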
2026-02-04 08:49:18 +01:00
Olivier Houchard
3674afe8a0 BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
When we're about to enter polling, atomically set TH_FL_SLEEPING and
remove TH_FL_NOTIFIED, instead of doing it in sequence. Otherwise,
another thread may see that both the TH_FL_SLEEPING and the
TH_FL_NOTIFIED bits are set, and not wake up the thread when it should
be doing so.
This prevents a bug where a thread is sleeping while it should be
handling a new connection, which can happen if there are very few
incoming connections. This is easy to reproduce when using only two
threads and injecting with only one connection: the connection may then
never be handled.

This should be backported up to 2.8.
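Done atomically, the two flag updates collapse into a single CAS loop,
along these lines (C11 atomics; flag values and names are illustrative,
not HAProxy's internals):

```c
#include <stdatomic.h>

#define FL_SLEEPING 0x1u
#define FL_NOTIFIED 0x2u

/* Atomically set FL_SLEEPING and clear FL_NOTIFIED in one step, so no
 * other thread can ever observe both bits set at the same time.
 */
static void set_sleeping_clr_notified(_Atomic unsigned *flags)
{
        unsigned old = atomic_load(flags);

        while (!atomic_compare_exchange_weak(flags, &old,
                                             (old | FL_SLEEPING) & ~FL_NOTIFIED))
                ; /* 'old' is refreshed on failure, retry */
}
```

The intermediate state where SLEEPING is set while NOTIFIED is still
set simply no longer exists, which is what removes the lost-wakeup
window.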
2026-02-04 07:13:06 +01:00
Hyeonggeun Oh
2527d9dcd1 MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
This patch adds a new 'post-80' option that sets the
CLIENT_PLUGIN_AUTH (0x00080000) capability flag
and explicitly specifies mysql_native_password as
the authentication plugin in the handshake response.

This patch also adds documentation for the post-80 option
supporting MySQL 8.x, which uses the new default auth
plugin caching_sha2_password.

MySQL 8.0 changed the default authentication plugin from
mysql_native_password to caching_sha2_password.
The current mysql-check implementation only supports pre-41
and post-41 client auth protocols, which lack the CLIENT_PLUGIN_AUTH
capability flag. When HAProxy sends a post-41 authentication
packet to a MySQL 8.x server, the server responds with error 1251:
"Client does not support authentication protocol requested by server".

The new client capabilities for post-80 are:
- CLIENT_PROTOCOL_41 (0x00000200)
- CLIENT_SECURE_CONNECTION (0x00008000)
- CLIENT_PLUGIN_AUTH (0x00080000)

Usage example:
backend mysql_servers
	option mysql-check user haproxy post-80
	server db1 192.168.1.10:3306 check

The health check user must be created with mysql_native_password:
CREATE USER 'haproxy'@'%' IDENTIFIED WITH mysql_native_password BY '';

This addresses https://github.com/haproxy/haproxy/issues/2934.
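For reference, the capability flags listed above OR together into a
single capability word (values copied from the commit message):

```c
/* MySQL client capability flags used by the post-80 check, as listed
 * in the commit message above.
 */
#define CLIENT_PROTOCOL_41       0x00000200u
#define CLIENT_SECURE_CONNECTION 0x00008000u
#define CLIENT_PLUGIN_AUTH       0x00080000u

/* Combined capability word sent in the handshake response. */
#define POST_80_CAPS (CLIENT_PROTOCOL_41 | CLIENT_SECURE_CONNECTION | \
                      CLIENT_PLUGIN_AUTH)
```

CLIENT_PLUGIN_AUTH is the bit missing from the post-41 mode, and its
absence is what triggers server error 1251.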
2026-02-03 07:36:53 +01:00
Olivier Houchard
f26562bcb7 MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
Commit fa094d0b61 changed the msg callback
args, but forgot to fix quic_tls_msg_callback() accordingly, so do that,
and remove the unused struct connection parameter.
2026-02-03 04:05:34 +01:00
Christopher Faulet
abc1947e19 BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
A regression was introduced in the commit 0ea601127 ("BUG/MAJOR: applet: Don't
call I/O handler if the applet was shut"). The test on shut flags for legacy
applets is inverted.

It should be harmless on 3.4 and 3.3 because all applets were converted.
But this fix is mandatory for 3.2 and older.

The patch must be backported as far as 3.0 with the commit above.
2026-01-30 09:55:18 +01:00
Egor Shestakov
02e6375017 DOC: internals: cleanup few typos in master-worker documentation
s/mecanism/mechanism
s/got ride/got rid
s/traditionnal/traditional

One typo is a confusion between master and worker that results in a
semantic mistake in the sentence:

"...the master will emit an "exit-on-failure" error and will kill every
workers with a SIGTERM and exits with the same error code than the
failed [-master-]{+worker+}..."

Should be backported as far as 3.1.
2026-01-29 18:44:40 +01:00
William Lallemand
da728aa0f6 REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
OpenSSL 4.0 changed the way it stores objects in X509_STORE structures
and no longer allows iterating over objects in insertion order.

This means the order of the objects is not the same before and after
OpenSSL 4.0, and the reg-tests need to handle both cases.
2026-01-29 17:08:45 +01:00
William Lallemand
23e8ed6ea6 MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
OpenSSL 4.0 is deprecating X509_STORE_get0_objects().

Every occurrence of X509_STORE_get0_objects() was first replaced by
X509_STORE_get1_objects().
This changes the ref count of the STACK_OF(X509_OBJECT) everywhere, and
requires calling sk_X509_OBJECT_pop_free(objs, X509_OBJECT_free) each time.

X509_STORE_get1_objects() is not available in AWS-LC, OpenSSL < 3.2,
LibreSSL and WolfSSL, so we need to still be compatible with get0.
To achieve this, two macros were added: X509_STORE_getX_objects() and
sk_X509_OBJECT_popX_free(), these macros will use either the get0 or the
get1 macro depending on their availability. In the case of get0,
sk_X509_OBJECT_popX_free() will just do nothing instead of trying to
free.

Don't backport this unless compatibility with OpenSSL 4.0 is really
needed, as it changes all the refcounts.
2026-01-29 17:08:41 +01:00
Amaury Denoyelle
fa094d0b61 MEDIUM: ssl: remove connection from msg callback args
SSL msg callbacks are used for notification about sent/received SSL
messages. Such callbacks are registered via
ssl_sock_register_msg_callback().

Prior to this patch, the connection was passed as the first argument of
these callbacks. However, most of them do not use it. Worse, this may
lead to confusion as the connection can be NULL in a QUIC context.

This patch cleans this by removing connection argument. As an
alternative, connection can be retrieved in callbacks if needed using
ssl_sock_get_conn() but the code must be ready to deal with potential
NULL instances. As an example, heartbeat parsing callback has been
adjusted in this manner.
2026-01-29 11:14:09 +01:00
Amaury Denoyelle
869a997a68 BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
With the QUIC backend implementation, SSL code has been adjusted in
several places when accessing the connection instance. Indeed, with QUIC,
the SSL context is tied to the quic_conn, and code may be executed before
or after connection instantiation. For example, on the frontend side, the
connection is only created after QUIC handshake completion.

The following patch tried to fix unsafe accesses to connection. In
particular, msg callbacks are not called anymore if connection is NULL.

  fab7da0fd0
  BUG/MEDIUM: quic-be/ssl_sock: TLS callback called without connection

However, most msg callbacks do not need to use the connection instance.
The only occurrence where it is accessed is for heartbeat message
parsing, which is the only case of crash solved. The above fix is too
restrictive as it completely prevents execution of these callbacks when
the connection is unset. This breaks several features with QUIC, such as
SSL key logging or samples based on ClientHello capture.

The current patch reverts the above one, thus restoring invocation
of msg callbacks for QUIC during the whole low-level connection
lifetime. This requires a small adjustment in the heartbeat parsing
callback to prevent access to a NULL connection.

The issue on ClientHello capture was mentioned in github issue #2495.

This must be backported up to 3.3.
2026-01-29 11:14:09 +01:00
Willy Tarreau
48d9c90ff2 BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
The help message for "ktls" mentions "expose-experimental-directive"
without the final 's', which is particularly annoying when copy-pasting
the directive from the error message directly into the config.

This should be backported to 3.3.
2026-01-29 11:07:55 +01:00
Willy Tarreau
35d63cc3c7 MEDIUM: h1: strictly verify quoting in chunk extensions
As reported by Ben Kallus in the following thread:

   https://www.mail-archive.com/haproxy@formilux.org/msg46471.html

there exist some agents which mistakenly accept CRLF inside quoted
chunk extensions, making it possible to fool them by injecting one
extra chunk they won't see for example, or making them miss the end
of the body depending on how it's done. Haproxy, like most other
agents nowadays, doesn't care at all about chunk extensions and just
drops them, in agreement with the spec.

However, as discussed, since chunk extensions are basically never used
except for attacks, and since the cost of just matching quote pairs and
checking backslash-escape consistency remains relatively
low, it can make sense to add such a check to abort the message parsing
when this situation is encountered. Note that it has to be done in two
places, because there is a fast path and a slow path for chunk parsing.

Also note that it *will* cause transfers using improperly formatted chunk
extensions to fail, but since these are really not used, and that the
likelihood of them being used but improperly quoted certainly is much
lower than the risk of crossing a broken parser on the client's request
path or on the server's response path, we consider the risk as
acceptable. The test is not subject to the configurable parser exceptions
and it's very unlikely that it will ever be needed.

Since this is done in 3.4 which will be LTS, this patch will have to be
backported to 3.3 so that any unlikely trouble gets a chance to be
detected before users upgrade to 3.4.

Thanks to Ben for the discussion, and to Rajat Raghav for sparking it
in the first place even though the original report was mistaken.

Cc: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Cc: Rajat Raghav <xclow3n@gmail.com>
Cc: Christopher Faulet <cfaulet@haproxy.com>
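The kind of check being added can be sketched as a tiny scanner over the
extension bytes (illustrative only; HAProxy's actual check is integrated
into its fast and slow chunk-parsing paths):

```c
/* Return 1 if the chunk-extension string has consistent quoting: every
 * opening quote is closed, backslash escapes inside quotes are followed
 * by a byte, and no CR/LF appears inside a quoted string.
 */
static int chunk_ext_quotes_ok(const char *s)
{
        int in_quotes = 0;

        for (; *s; s++) {
                if (in_quotes && *s == '\\') {
                        if (!*++s)
                                return 0; /* dangling backslash */
                }
                else if (*s == '"')
                        in_quotes = !in_quotes;
                else if (in_quotes && (*s == '\r' || *s == '\n'))
                        return 0; /* CR/LF forbidden inside quotes */
        }
        return !in_quotes; /* unterminated quote is an error */
}
```

Rejecting a CRLF inside a quoted extension is what prevents a lenient
downstream parser from being fooled into seeing an extra chunk.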
2026-01-28 18:54:23 +01:00
Willy Tarreau
bb36836d76 DOC: config: mention that idle connection sharing is per thread-group
There's already a tunable "tune.idle-pool.shared" allowing one to enable
or disable idle connection sharing between threads. However the doc does not
mention that these connections are only shared between threads of the same
thread group, since 2.7 with commit 15c5500b6e ("MEDIUM: conn: make
conn_backend_get always scan the same group"). Let's clarify this and
also give a hint about "max-threads-per-group" which can be helpful for
machines with unified caches.
2026-01-28 17:18:50 +01:00
Willy Tarreau
a79a67b52f OPTIM: server: get rid of the last use of _ha_barrier_full()
The code in srv_add_to_idle_list() has its roots in 2.0 with commit
9ea5d361ae ("MEDIUM: servers: Reorganize the way idle connections are
cleaned."). In that era we didn't yet have the current set of atomic
load/store operations and we used to perform loads using volatile casts
after a barrier. It turns out that this function has kept this scheme
over the years, resulting in a big mfence stalling all the pipeline
in the function:

       |     static __inline void
       |     __ha_barrier_full(void)
       |     {
       |     __asm __volatile("mfence" ::: "memory");
 27.08 |       mfence
       |     if ((volatile void *)srv->idle_node.node.leaf_p == NULL) {
  0.84 |       cmpq    $0x0,0x158(%r15)
  0.74 |       je      35f
       |     return 1;

Switching these for a pair of atomic loads got rid of this and brought
0.5 to 3% extra performance depending on the tests due to variations
elsewhere, but it has never been below 0.5%. Note that the second load
doesn't need to be atomic since it's protected by the lock, but it's
cleaner from an API and code review perspective. That's also why it's
relaxed.

This was the last user of _ha_barrier_full(), let's try not to
reintroduce it now!
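A minimal sketch of the change, under the assumption that the check is a
simple NULL test on a shared pointer (types and names are illustrative):

```c
#include <stdatomic.h>
#include <stddef.h>

struct idle_node {
        _Atomic(void *) leaf_p; /* non-NULL when linked in the idle tree */
};

/* Old style: full barrier (mfence) followed by a volatile read. New
 * style: a single atomic load, with no mfence stalling the pipeline.
 */
static int node_is_unlinked(struct idle_node *n)
{
        return atomic_load_explicit(&n->leaf_p, memory_order_acquire) == NULL;
}
```

On x86, an acquire load compiles to a plain mov, so the cost of the
check drops from a serializing fence to an ordinary cached read.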
2026-01-28 16:07:27 +00:00
Willy Tarreau
a9df6947b4 OPTIM: proxy: separate queues fields from served
There's still a lot of contention when accessing the backend's
totpend and queueslength for every request in may_dequeue_tasks(),
even when queues are not used. This only happens because they're stored
in the same cache line as be->beconn, which is being written by other
threads:

  0.01 |        call       sess_change_server
  0.02 |        mov        0x188(%r15),%esi     ## s->queueslength
       |      if (may_dequeue_tasks(srv, s->be))
  0.00 |        mov        0xa8(%r12),%rax
  0.00 |        mov        -0x50(%rbp),%r11d
  0.00 |        mov        -0x60(%rbp),%r10
  0.00 |        test       %esi,%esi
       |        jne        3349
  0.01 |        mov        0xa00(%rax),%ecx     ## p->queueslength
  8.26 |        test       %ecx,%ecx
  4.08 |        je         288d

This patch moves queueslength and totpend to their own cache line,
thus adding 64 bytes to the struct proxy, but gaining 3.6% of RPS
on a 64-core EPYC thanks to the elimination of this false sharing.

process_stream() goes down from 3.88% to 3.26% in perf top, with
the next top users being inc/dec (s->served) and be->beconn.
2026-01-28 16:07:27 +00:00
Willy Tarreau
3ca2a83fc0 OPTIM: server: move queueslength in server struct
This field is shared by all threads and must be in the shared area
instead, because where it's placed, it slows down access to other
fields of the struct by false sharing. Just moving this field gives
a steady 2% gain on the request rate (1.93 to 1.96 Mrps) on a 64-core
EPYC.
2026-01-28 16:07:27 +00:00
Willy Tarreau
cb3fd012cd DOC: config: mention some possible TLS versions restrictions for kTLS
It took me one hour of trial and error to figure out that kTLS and
splicing were not being used only for reasons of TLS version, and that
switching to TLS v1.2 solved the issue. Thus, let's mention it in the doc
so that others find it more easily in the future.

This should be backported to 3.3.
2026-01-28 10:45:21 +01:00
William Lallemand
bbab0ac4d0 BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
tune.ssl.certificate-compression expects 'auto', not 'on'.

Could be backported if the previous patch is backported.
2026-01-27 16:25:11 +01:00
William Lallemand
6995fe60c3 MINOR: ssl: allow to disable certificate compression
This option allows disabling certificate compression (RFC 8879)
when using OpenSSL >= 3.2.0.

This feature is known to permit denial of service by causing extra
memory allocations of approximately 22MiB and extra CPU work per
connection with OpenSSL versions affected by CVE-2025-66199.
( https://openssl-library.org/news/vulnerabilities/index.html#CVE-2025-66199 )

Setting this to "off" mitigates the problem.

Must be backported to every stable branches.
2026-01-27 16:10:41 +01:00
Christopher Faulet
0ea601127e BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
In 3.0, it was stated that an applet could not be woken up after it was
shut down, so the corresponding test in the applets' I/O handler was
removed. However, it seems it may happen, especially when outgoing data
are blocked on the opposite side. But it is really unexpected because the
"release" callback function was already called and the appctx context was
most probably released.

Strangely, it was never detected by any applet till now. But the
Prometheus exporter was never updated and was still testing the shutdown.
When it was refactored to use the new applet API in 3.3, the test was
removed, and this introduced a regression leading to a crash because a
server object could be corrupted. The conditions to hit the bug are not
really clear however.

So, now, to avoid any issue with all other applets, the test is performed in
task_process_applet(). The I/O handler is no longer called if the applet is
already shut.

The same is performed for applets still relying on the old API.

An amazing thanks to @idl0r for his invaluable help on this issue !

This patch should fix the issue #3244. It should first be backported to 3.3
and then slowly as far as 3.0.
2026-01-27 16:00:23 +01:00
William Lallemand
0ebef67132 MINOR: ssl: display libssl errors on private key loading
Display a more precise error message from the libssl when a private key
can't be loaded correctly.
2026-01-26 14:19:19 +01:00
Remi Tricot-Le Breton
9b1faee4c9 BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
The SSL passphrase callback function was only called when loading
private keys from a dedicated file (separate from the corresponding
certificate) but not when both the certificate and the key were in the
same file.
We can now load them properly, regardless of how they are provided.
A flag had to be added in the 'passphrase_cb_data' structure because in
the 'ssl_sock_load_pem_into_ckch' function, when calling
'PEM_read_bio_PrivateKey', there might be no private key in the PEM file,
which would mean that the callback never gets called (and cannot set the
'passphrase_idx' to -1).

This patch can be backported to 3.3.
2026-01-26 14:09:13 +01:00
Remi Tricot-Le Breton
d2ccc19fde BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
Some error paths in 'ssl_sock_passwd_cb' (allocation failures) did not
set the 'passphrase_idx' to -1, which is how the caller knows not to
call the callback again, so in some memory contention contexts we could
end up calling the callback 'infinitely' (or until memory is finally
available).

This patch must be backported to 3.3.
2026-01-26 14:08:50 +01:00
Egor Shestakov
f4cd1e74ba DOC: reg-tests: update VTest upstream link in the starting guide
The starting guide to regression testing had an outdated VTest link.

Could be backported as far as 2.6.
2026-01-26 13:56:13 +01:00
Willy Tarreau
1a3252e956 MEDIUM: pools: better check for size rounding overflow on registration
Certain object sizes cannot be controlled at declaration time because
the resulting object size may be slightly extended (tag, caller),
aligned and rounded up, or even doubled depending on pool settings
(e.g. if backup is used).

This patch addresses this by enlarging the type in the pool registration
to 64-bit so that no info is lost from the declaration, and extra checks
for overflows can be performed during registration after the various
rounding steps. This allows catching issues such as these and reporting a
suitable error:

  global
      tune.http.logurilen 2147483647

  frontend
      capture request header name len 2147483647
      http-request capture src len 2147483647
      tcp-request content capture src len 2147483647
2026-01-26 11:54:14 +01:00
Willy Tarreau
e9e4821db5 BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
Since 3.3 with commit 945aa0ea82 ("MINOR: initcalls: Add a new initcall
stage, STG_INIT_2"), stkt_late_init() calls stkt_create_stk_ctr_pool()
but doesn't check its return value, so if the pool creation fails, the
process still starts, which is not correct. This patch adds a check for
the return value to make sure we fail to start in this case. This was
not an issue before 3.3 because the function was called as a post-check
handler which did check for errors in the returned values.
2026-01-26 11:45:49 +01:00
Willy Tarreau
4e7c07736a BUG/MINOR: config: check capture pool creations for failures
A few capture pools can fail in case of too large values for example.
These include the req_uri, capture, and caphdr pools, and may be triggered
with "tune.http.logurilen 2147483647" in the global section, or one of
these in a frontend:

  capture request header name len 2147483647
  http-request capture src len 2147483647
  tcp-request content capture src len 2147483647

These seem to be the only occurrences where create_pool()'s return value
is assigned without being checked, so let's add the proper check for
errors there. This can be backported as a hardening measure though the
risks and impacts are extremely low.
2026-01-26 11:45:49 +01:00
Christopher Faulet
c267d24f57 BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
The condition to report support for the HAVE_TCP_MD5SIG feature was
inverted. This is only an issue for the reg-test related to this feature.

This patch must be backported to 3.3.
2026-01-23 11:40:54 +01:00
Christopher Faulet
a3e9a04435 BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formatting the start line
UNUSED blocks were not properly handled when the H1 multiplexer was
formatting the start line of a request or a response. UNUSED blocks were
ignored but not removed from the HTX message, so the mux could loop
infinitely on such a block.

It could be seen as a major issue, but in fact it happens only in a very
specific case of the response processing (at least I think so): the server
must send an interim message (a 100-continue for instance) together with
the final response. HAProxy must receive both at the same time, and the
final response must be intercepted (via a http-response return action for
instance). In that case, the interim message is forwarded and the server's
final response is removed and replaced by a proxy error message.

Now UNUSED htx blocks are properly skipped and removed.

This patch must be backported as far as 3.0.
2026-01-23 11:40:54 +01:00
Christopher Faulet
be68ecc37d BUG/MINOR: promex: Detach promex from the server on error during its metrics dump
If an error occurs during the dump of a metric for a server, we must take
care to detach promex from the watcher list for this server. This must be
performed explicitly because on error, the applet state (st1) is changed, so
it is not possible to detach it during the applet release stage.

This patch must be backported with b4f64c0ab ("BUG/MEDIUM: promex: server
iteration may rely on stale server") as far as 3.0. On older versions, 2.8
and 2.6, the watcher_detach() line must be replaced by "srv_drop(ctx->p[1])".
2026-01-23 11:40:54 +01:00
Aurelien DARRAGON
a66b4881d7 BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
We frequently use lua_pcall() to provide safe alternative functions
(encapsulated helpers) that prevent the process from crashing in case
of Lua error when Lua is executed from an unsafe environment.

However, some of those safe helpers don't handle errors properly. In case
of error, the Lua API will always put an error object on top of the stack
as stated in the documentation. This error object can be used to retrieve
more info about the error. But in some cases when we ignore it, we should
still consume it to prevent the stack from being altered with an extra
object when returning from the helper function.

It should be backported to all stable versions. If the patch doesn't apply
automatically, all that's needed is to check for lua_pcall() in hlua.c
and, for cases other than 'LUA_OK', make sure that the error object is
popped from the stack before the function returns.
2026-01-23 11:23:37 +01:00
Aurelien DARRAGON
9e9083d0e2 BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
Since commit 365ee28 ("BUG/MINOR: hlua: prevent LJMP in hlua_traceback()")
we now use lua_pcall() to protect sensitive parts of hlua_traceback()
function, and this to prevent Lua from crashing the process in case of
unexpected Lua error.

This is still relevant, but an error was made, as lua_pcall() was given
the nresult argument '1' while the _hlua_traceback() internal function
doesn't push any result on the stack. Because of this, the Lua API still
tries to push a garbage object on top of the stack before returning. This
may cause functions that leverage hlua_traceback() in the middle of stack
manipulation to end up with a corrupted stack when continuing after the
hlua_traceback().

There doesn't seem to be many places where this could be a problem, as
this was discovered using the reproducer documented in f535d3e
("BUG/MEDIUM: debug: only dump Lua state when panicking"). Indeed, when
hlua_traceback() was used from the signal handler while the thread was
previously executing Lua, when returning to Lua after the handler the
Lua stack would be corrupted.

To fix the issue, we make explicit the fact that the _hlua_traceback()
function doesn't push anything on the stack and returns 0; thus lua_pcall()
is given a 'nresult' argument of 0 to prevent anything from being pushed
after the execution, preserving the original stack state.

This should be backported to all stable versions (because 365ee28 was
backported there).
2026-01-23 11:23:31 +01:00
187 changed files with 9535 additions and 3531 deletions

.github/matrix.py

@@ -275,11 +275,8 @@ def main(ref_name):
             }
         )
-        # macOS
-        if "haproxy-" not in ref_name:
+        # macOS on dev branches
+        if "haproxy-" in ref_name:
+            os = "macos-13"  # stable branch
+        else:
             os = "macos-26"  # development branch
             TARGET = "osx"


@@ -11,9 +11,6 @@ jobs:
     runs-on: ubuntu-latest
     steps:
     - uses: actions/checkout@v5
-    - name: Compile admin/halog/halog
-      run: |
-        make admin/halog/halog
     - name: Compile dev/flags/flags
      run: |
        make dev/flags/flags


@@ -11,7 +11,7 @@ on:
 jobs:
-  build:
+  combined-build-and-run:
     runs-on: ubuntu-24.04
     if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
     permissions:
@@ -21,84 +21,47 @@ jobs:
     steps:
      - uses: actions/checkout@v5
-     - name: Log in to the Container registry
-       uses: docker/login-action@v3
-       with:
-         registry: ghcr.io
-         username: ${{ github.actor }}
-         password: ${{ secrets.GITHUB_TOKEN }}
-     - name: Build and push Docker image
+     - name: Update Docker to the latest
+       uses: docker/setup-docker-action@v4
+     - name: Build Docker image
        id: push
-       uses: docker/build-push-action@v5
+       uses: docker/build-push-action@v6
        with:
         context: https://github.com/haproxytech/haproxy-qns.git
-        push: true
+        platforms: linux/amd64
        build-args: |
          SSLLIB=AWS-LC
-        tags: ghcr.io/${{ github.repository }}:aws-lc
+        tags: local:aws-lc
-     - name: Cleanup registry
-       uses: actions/delete-package-versions@v5
-       with:
-         owner: ${{ github.repository_owner }}
-         package-name: 'haproxy'
-         package-type: container
-         min-versions-to-keep: 1
-         delete-only-untagged-versions: 'true'
-  run:
-    needs: build
-    strategy:
-      matrix:
-        suite: [
-          { client: chrome, tests: "http3" },
-          { client: picoquic, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" },
-          { client: quic-go, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" },
-          { client: ngtcp2, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" }
-        ]
-      fail-fast: false
-    name: ${{ matrix.suite.client }}
-    runs-on: ubuntu-24.04
-    if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
-    steps:
-     - uses: actions/checkout@v5
-     - name: Log in to the Container registry
-       uses: docker/login-action@v3
-       with:
-         registry: ghcr.io
-         username: ${{ github.actor }}
-         password: ${{ secrets.GITHUB_TOKEN }}
      - name: Install tshark
        run: |
          sudo apt-get update
          sudo apt-get -y install tshark
-     - name: Pull image
-       run: |
-         docker pull ghcr.io/${{ github.repository }}:aws-lc
      - name: Run
        run: |
          git clone https://github.com/quic-interop/quic-interop-runner
          cd quic-interop-runner
          pip install -r requirements.txt --break-system-packages
-         python run.py -j result.json -l logs -r haproxy=ghcr.io/${{ github.repository }}:aws-lc -t ${{ matrix.suite.tests }} -c ${{ matrix.suite.client }} -s haproxy
+         python run.py -j result.json -l logs-chrome -r haproxy=local:aws-lc -t "http3" -c chrome -s haproxy
+         python run.py -j result.json -l logs-picoquic -r haproxy=local:aws-lc -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" -c picoquic -s haproxy
+         python run.py -j result.json -l logs-quic-go -r haproxy=local:aws-lc -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" -c quic-go -s haproxy
+         python run.py -j result.json -l logs-ngtcp2 -r haproxy=local:aws-lc -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" -c ngtcp2 -s haproxy
      - name: Delete succeeded logs
        if: failure()
        run: |
-         cd quic-interop-runner/logs/haproxy_${{ matrix.suite.client }}
-         cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf
+         for client in chrome picoquic quic-go ngtcp2; do
+           pushd quic-interop-runner/logs-${client}/haproxy_${client}
+           cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf
+           popd
+         done
      - name: Logs upload
        if: failure()
        uses: actions/upload-artifact@v4
        with:
-         name: logs-${{ matrix.suite.client }}
+         name: logs
-         path: quic-interop-runner/logs/
+         path: quic-interop-runner/logs*/
          retention-days: 6


@@ -11,7 +11,7 @@ on:
 jobs:
-  build:
+  combined-build-and-run:
     runs-on: ubuntu-24.04
     if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
     permissions:
@@ -21,82 +21,45 @@ jobs:
     steps:
      - uses: actions/checkout@v5
-     - name: Log in to the Container registry
-       uses: docker/login-action@v3
-       with:
-         registry: ghcr.io
-         username: ${{ github.actor }}
-         password: ${{ secrets.GITHUB_TOKEN }}
-     - name: Build and push Docker image
+     - name: Update Docker to the latest
+       uses: docker/setup-docker-action@v4
+     - name: Build Docker image
        id: push
-       uses: docker/build-push-action@v5
+       uses: docker/build-push-action@v6
        with:
         context: https://github.com/haproxytech/haproxy-qns.git
-        push: true
+        platforms: linux/amd64
        build-args: |
          SSLLIB=LibreSSL
-        tags: ghcr.io/${{ github.repository }}:libressl
+        tags: local:libressl
-     - name: Cleanup registry
-       uses: actions/delete-package-versions@v5
-       with:
-         owner: ${{ github.repository_owner }}
-         package-name: 'haproxy'
-         package-type: container
-         min-versions-to-keep: 1
-         delete-only-untagged-versions: 'true'
-  run:
-    needs: build
-    strategy:
-      matrix:
-        suite: [
-          { client: picoquic, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,v2" },
-          { client: quic-go, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,transferloss,transfercorruption,v2" }
-        ]
-      fail-fast: false
-    name: ${{ matrix.suite.client }}
-    runs-on: ubuntu-24.04
-    if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
-    steps:
-     - uses: actions/checkout@v5
-     - name: Log in to the Container registry
-       uses: docker/login-action@v3
-       with:
-         registry: ghcr.io
-         username: ${{ github.actor }}
-         password: ${{ secrets.GITHUB_TOKEN }}
      - name: Install tshark
        run: |
          sudo apt-get update
          sudo apt-get -y install tshark
-     - name: Pull image
-       run: |
-         docker pull ghcr.io/${{ github.repository }}:libressl
      - name: Run
        run: |
          git clone https://github.com/quic-interop/quic-interop-runner
          cd quic-interop-runner
          pip install -r requirements.txt --break-system-packages
-         python run.py -j result.json -l logs -r haproxy=ghcr.io/${{ github.repository }}:libressl -t ${{ matrix.suite.tests }} -c ${{ matrix.suite.client }} -s haproxy
+         python run.py -j result.json -l logs-picoquic -r haproxy=local:libressl -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,v2" -c picoquic -s haproxy
+         python run.py -j result.json -l logs-quic-go -r haproxy=local:libressl -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,transferloss,transfercorruption,v2" -c quic-go -s haproxy
      - name: Delete succeeded logs
        if: failure()
        run: |
-         cd quic-interop-runner/logs/haproxy_${{ matrix.suite.client }}
-         cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf
+         for client in picoquic quic-go; do
+           pushd quic-interop-runner/logs-${client}/haproxy_${client}
+           cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf
+           popd
+         done
      - name: Logs upload
        if: failure()
        uses: actions/upload-artifact@v4
        with:
-         name: logs-${{ matrix.suite.client }}
+         name: logs
-         path: quic-interop-runner/logs/
+         path: quic-interop-runner/logs*/
          retention-days: 6


@@ -18,6 +18,7 @@ jobs:
   msys2:
     name: ${{ matrix.name }}
     runs-on: ${{ matrix.os }}
+    if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
    defaults:
      run:
        shell: msys2 {0}

CHANGELOG

@@ -1,6 +1,291 @@
 ChangeLog :
 ===========
2026/03/05 : 3.4-dev6
- CLEANUP: acme: remove duplicate includes
- BUG/MINOR: proxy: detect strdup error on server auto SNI
- BUG/MINOR: server: set auto SNI for dynamic servers
- BUG/MINOR: server: enable no-check-sni-auto for dynamic servers
- MINOR: haterm: provide -b and -c options (RSA key size, ECDSA curves)
- MINOR: haterm: add long options for QUIC and TCP "bind" settings
- BUG/MINOR: haterm: missing allocation check in copy_argv()
- BUG/MINOR: quic: fix counters used on BE side
- MINOR: quic: add BUG_ON() on half_open_conn counter access from BE
- BUG/MINOR: quic/h3: display QUIC/H3 backend module on HTML stats
- BUG/MINOR: acme: acme_ctx_destroy() leaks auth->dns
- BUG/MINOR: acme: wrong labels logic always memprintf errmsg
- MINOR: ssl: clarify error reporting for unsupported keywords
- BUG/MINOR: acme: fix incorrect number of arguments allowed in config
- CLEANUP: haterm: remove unreachable labels hstream_add_data()
- CLEANUP: haterm: avoid static analyzer warnings about rand() use
- CLEANUP: ssl: Remove a useless variable from ssl_gen_x509()
- CI: use the latest docker for QUIC Interop
- CI: remove redundant "halog" compilation
    - CLEANUP: cfgparse: accept-invalid-http-* does not support "no"/"defaults"
- BUG/MEDIUM: spoe: Acquire context buffer in applet before consuming a frame
- MINOR: traces: always mark trace_source as thread-aligned
- MINOR: ncbmbuf: improve itbmap_next() code
- MINOR: proxy: improve code when checking server name conflicts
- MINOR: quic: add a new metric for ncbuf failures
- BUG/MINOR: haterm: cannot reset default "haterm" mode
- BUG/MEDIUM: cpu-topo: Distribute CPUs fairly across groups
- BUG/MINOR: quic: missing app ops init during backend 0-RTT sessions
- CLEANUP: ssl: remove outdated comments
- MINOR: mux-h2: also count glitches on invalid trailers
- MINOR: mux-h2: add a new setting, "tune.h2.log-errors" to tweak error logging
- BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream
- BUG/MINOR: server: adjust initialization order for dynamic servers
- CLEANUP: tree-wide: drop a few useless null-checks before free()
- CLEANUP: quic-stats: include counters from quic_stats
- REORG: stats/counters: move extra_counters to counters not stats
- CLEANUP: stats: drop stats.h / stats-t.h where not needed
- MEDIUM: counters: change the fill_stats() API to pass the module and extra_counters
- CLEANUP: counters: only retrieve zeroes for unallocated extra_counters
- MEDIUM: counters: add a dedicated storage for extra_counters in various structs
- MINOR: counters: store a tgroup step for extra_counters to access multiple tgroups
- MEDIUM: counters: store the number of thread groups accessing extra_counters
- MINOR: counters: add EXTRA_COUNTERS_BASE() to retrieve extra_counters base storage
- MEDIUM: counters: return aggregate extra counters in ->fill_stats()
- MEDIUM: counters: make EXTRA_COUNTERS_GET() consider tgid
- BUG/MINOR: call EXTRA_COUNTERS_FREE() before srv_free_params() in srv_drop()
- MINOR: promex: test applet resume in stress mode
- BUG/MINOR: promex: fix server iteration when last server is deleted
- BUG/MINOR: proxy: add dynamic backend into ID tree
- MINOR: proxy: convert proxy flags to uint
- MINOR: server: refactor srv_detach()
- MINOR: proxy: define a basic "del backend" CLI
- MINOR: proxy: define proxy watcher member
- MINOR: stats: protect proxy iteration via watcher
- MINOR: promex: use watcher to iterate over backend instances
- MINOR: lua: use watcher for proxies iterator
- MINOR: proxy: add refcount to proxies
- MINOR: proxy: rename default refcount to avoid confusion
- MINOR: server: take proxy refcount when deleting a server
- MINOR: lua: handle proxy refcount
- MINOR: proxy: prevent backend removal when unsupported
- MINOR: proxy: prevent deletion of backend referenced by config elements
- MINOR: proxy: prevent backend deletion if server still exists in it
- MINOR: server: mark backend removal as forbidden if QUIC was used
- MINOR: cli: implement wait on be-removable
- MINOR: proxy: add comment for defaults_px_ref/unref_all()
- MEDIUM: proxy: add lock for global accesses during proxy free
- MEDIUM: proxy: add lock for global accesses during default free
- MINOR: proxy: use atomic ops for default proxy refcount
- MEDIUM: proxy: implement backend deletion
- REGTESTS: add a test on "del backend"
- REGTESTS: complete "del backend" with unnamed defaults ref free
- BUG/MINOR: hlua: fix return with push nil on proxy check
- BUG/MEDIUM: stream: Handle TASK_WOKEN_RES as a stream event
- MINOR: quic: use signed char type for ALPN manipulation
- MINOR: quic/h3: reorganize stream reject after MUX closure
- MINOR: mux-quic: add function for ALPN to app-ops conversion
- MEDIUM: quic/mux-quic: adjust app-ops install
- MINOR: quic: use server cache for ALPN on BE side
- BUG/MEDIUM: hpack: correctly deal with too large decoded numbers
- BUG/MAJOR: qpack: unchecked length passed to huffman decoder
- BUG/MINOR: qpack: fix 1-byte OOB read in qpack_decode_fs_pfx()
- BUG/MINOR: quic: fix OOB read in preferred_address transport parameter
- BUG/MEDIUM: qpack: correctly deal with too large decoded numbers
- BUG/MINOR: hlua: Properly enable/disable line receives from HTTP applet
- BUG/MEDIUM: hlua: Fix end of request detection when retrieving payload
- BUG/MINOR: hlua: Properly enable/disable receives for TCP applets
- MINOR: htx: Add a function to retrieve the HTTP version from a start-line
- MINOR: h1-htx: Reports non-HTTP version via dedicated flags
- BUG/MINOR: h1-htx: Be sure that H1 response version starts by "HTTP/"
- MINOR: http-ana: Save the message version in the http_msg structure
- MEDIUM: http-fetch: Rework how HTTP message version is retrieved
- MEDIUM: http-ana: Use the version of the opposite side for internal messages
- DEBUG: stream: Display the currently running rule in stream dump
    - MINOR: filters: Use filter API as far as possible to break loops on filters
- MINOR: filters: Set last_entity when a filter fails on stream_start callback
- MINOR: stream: Display the currently running filter per channel in stream dump
- DOC: config: Use the right alias for %B
- BUG/MINOR: channel: Increase the stconn bytes_in value in channel_add_input()
- BUG/MINOR: sample: Fix sample to retrieve the number of bytes received and sent
- BUG/MINOR: http-ana: Increment scf bytes_out value if an haproxy error is sent
- BUG/MAJOR: fcgi: Fix param decoding by properly checking its size
- BUG/MAJOR: resolvers: Properly lowered the names found in DNS response
- BUG/MEDIUM: mux-fcgi: Use a safe loop to resume each stream eligible for sending
- MINOR: mux-fcgi: Use a dedicated function to resume streams eligible for sending
- CLEANUP: qpack: simplify length checks in qpack_decode_fs()
- MINOR: counters: Introduce COUNTERS_UPDATE_MAX()
- MINOR: listeners: Update the frequency counters separately when needed
- MINOR: proxies: Update beconn separately
- MINOR: stats: Add an option to disable the calculation of max counters
2026/02/19 : 3.4-dev5
    - DOC: internals: add mworker V3 internals
- BUG/MINOR: threads: Initialize maxthrpertgroup earlier.
- BUG/MEDIUM: threads: Differ checking the max threads per group number
- BUG/MINOR: startup: fix allocation error message of progname string
- BUG/MINOR: startup: handle a possible strdup() failure
- MINOR: cfgparse: validate defaults proxies separately
- MINOR: cfgparse: move proxy post-init in a dedicated function
- MINOR: proxy: refactor proxy inheritance of a defaults section
- MINOR: proxy: refactor mode parsing
- MINOR: backend: add function to check support for dynamic servers
- MINOR: proxy: define "add backend" handler
- MINOR: proxy: parse mode on dynamic backend creation
- MINOR: proxy: parse guid on dynamic backend creation
- MINOR: proxy: check default proxy compatibility on "add backend"
- MEDIUM: proxy: implement dynamic backend creation
- MINOR: proxy: assign dynamic proxy ID
- REGTESTS: add dynamic backend creation test
- BUG/MINOR: proxy: fix clang build error on "add backend" handler
- BUG/MINOR: proxy: fix null dereference in "add backend" handler
- MINOR: net_helper: extend the ip.fp output with an option presence mask
- BUG/MINOR: proxy: fix default ALPN bind settings
- CLEANUP: lb-chash: free lb_nodes from chash's deinit(), not global
- BUG/MEDIUM: lb-chash: always properly initialize lb_nodes with dynamic servers
- CLEANUP: haproxy: fix bad line wrapping in run_poll_loop()
- MINOR: activity: support setting/clearing lock/memory watching for task profiling
- MEDIUM: activity: apply and use new finegrained task profiling settings
- MINOR: activity: allow to switch per-task lock/memory profiling at runtime
- MINOR: startup: Add the SSL lib verify directory in haproxy -vv
- BUG/MINOR: ssl: SSL_CERT_DIR environment variable doesn't affect haproxy
- CLEANUP: initcall: adjust comments to INITCALL{0,1} macros
- DOC: proxy-proto: underline the packed attribute for struct pp2_tlv_ssl
- MINOR: queues: Check minconn first in srv_dynamic_maxconn()
- MINOR: servers: Call process_srv_queue() without lock when possible
- BUG/MINOR: quic: ensure handshake speed up is only run once per conn
- BUG/MAJOR: quic: reject invalid token
- BUG/MAJOR: quic: fix parsing frame type
- MINOR: ssl: Missing '\n' in error message
- MINOR: jwt: Convert an RSA JWK into an EVP_PKEY
- MINOR: jwt: Add new jwt_decrypt_jwk converter
- REGTESTS: jwt: Add new "jwt_decrypt_jwk" tests
- MINOR: startup: Add HAVE_WORKING_TCP_MD5SIG in haproxy -vv
- MINOR: startup: sort the feature list in haproxy -vv
- MINOR: startup: show the list of detected features at runtime with haproxy -vv
- SCRIPTS: build-vtest: allow to set a TMPDIR and a DESTDIR
- MINOR: filters: rework RESUME_FILTER_* macros as inline functions
- MINOR: filters: rework filter iteration for channel related callback functions
- MEDIUM: filters: use per-channel filter list when relevant
- DEV: gdb: add a utility to find the post-mortem address from a core
- BUG/MINOR: deviceatlas: add missing return on error in config parsers
- BUG/MINOR: deviceatlas: add NULL checks on strdup() results in config parsers
- BUG/MEDIUM: deviceatlas: fix resource leaks on init error paths
- BUG/MINOR: deviceatlas: fix off-by-one in da_haproxy_conv()
- BUG/MINOR: deviceatlas: fix cookie vlen using wrong length after extraction
- BUG/MINOR: deviceatlas: fix double-checked locking race in checkinst
- BUG/MINOR: deviceatlas: fix resource leak on hot-reload compile failure
- BUG/MINOR: deviceatlas: fix deinit to only finalize when initialized
- BUG/MINOR: deviceatlas: set cache_size on hot-reloaded atlas instance
- MINOR: deviceatlas: check getproptype return and remove pprop indirection
- MINOR: deviceatlas: increase DA_MAX_HEADERS and header buffer sizes
- MINOR: deviceatlas: define header_evidence_entry in dummy library header
- MINOR: deviceatlas: precompute maxhdrlen to skip oversized headers early
- CLEANUP: deviceatlas: add unlikely hints and minor code tidying
- DEV: gdb: use unsigned longs to display pools memory usage
- BUG/MINOR: ssl: lack crtlist_dup_ssl_conf() declaration
- BUG/MINOR: ssl: double-free on error path w/ ssl-f-use parser
- BUG/MINOR: ssl: fix leak in ssl-f-use parser upon error
- BUG/MINOR: ssl: clarify ssl-f-use errors in post-section parsing
- BUG/MINOR: ssl: error with ssl-f-use when no "crt"
- MEDIUM: backend: make "balance random" consider tg local req rate when loads are equal
- BUG/MAJOR: Revert "MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS"
- BUG/MEDIUM: h3: reject frontend CONNECT as currently not implemented
- MINOR: mux-quic: add BUG_ON_STRESS() when draining data on closed stream
- REGTESTS: fix quoting in feature cmd which prevents test execution
- BUG/MEDIUM: mux-h2/quic: Stop sending via fast-forward if stream is closed
    - BUG/MEDIUM: mux-h1: Stop sending via fast-forward for unexpected states
- BUG/MEDIUM: applet: Fix test on shut flags for legacy applets (v2)
    - DEV: term-events: Fix handshake events decoding
- BUG/MINOR: flt-trace: Properly compute length of the first DATA block
- MINOR: flt-trace: Add an option to limit the amount of data forwarded
- CLEANUP: compression: Remove unused static buffers
- BUG/MEDIUM: shctx: Use the next block when data exactly filled a block
- BUG/MINOR: http-ana: Stop to wait for body on client error/abort
- MINOR: stconn: Add missing SC_FL_NO_FASTFWD flag in sc_show_flags
- REORG: stconn: Move functions related to channel buffers to sc_strm.h
- BUG/MEDIUM: jwe: fix timing side-channel and dead code in JWE decryption
- MINOR: tree-wide: Use the buffer size instead of global setting when possible
- MINOR: buffers: Swap buffers of same size only
- BUG/MINOR: config: Check buffer pool creation for failures
- MEDIUM: cache: Don't rely on a chunk to store messages payload
- MEDIUM: stream: Limit number of synchronous send per stream wakeup
- MEDIUM: compression: Be sure to never compress more than a chunk at once
- MEDIUM: mux-h1/mux-h2/mux-fcgi/h3: Disable 0-copy for buffers of different size
- MEDIUM: applet: Disable 0-copy for buffers of different size
- MINOR: h1-htx: Disable 0-copy for buffers of different size
- MEDIUM: stream: Offer buffers of default size only
- BUG/MEDIUM: htx: Fix function used to change part of a block value when defrag
- MEDIUM: htx: Refactor transfer of htx blocks to merge DATA blocks if possible
- MEDIUM: htx: Refactor htx defragmentation to merge data blocks
- MEDIUM: htx: Improve detection of fragmented/unordered HTX messages
- MINOR: http-ana: Do a defrag on unaligned HTX message when waiting for payload
- MINOR: http-fetch: Use pointer to HTX DATA block when retrieving HTX body
- MEDIUM: dynbuf: Add a pool for large buffers with a configurable size
- MEDIUM: chunk: Add support for large chunks
- MEDIUM: stconn: Properly handle large buffers during a receive
- MEDIUM: sample: Get chunks with a size dependent on input data when necessary
- MEDIUM: http-fetch: Be able to use large chunks when necessary
- MINPR: htx: Get large chunk if necessary to perform a defrag
- MEDIUM: http-ana: Use a large buffer if necessary when waiting for body
- MINOR: dynbuf: Add helpers to know if a buffer is a default or a large buffer
- MINOR: config: reject configs using HTTP with large bufsize >= 256 MB
- CI: do not use ghcr.io for Quic Interop workflows
- BUG/MEDIUM: ssl: SSL backend sessions used after free
- CI: vtest: move the vtest2 URL to vinyl-cache.org
- CI: github: disable windows.yml by default on unofficials repo
- MEDIUM: Add connect/queue/tarpit timeouts to set-timeout
- CLEANUP: mux-h1: Remove unneeded null check
- DOC: remove openssl no-deprecated CI image
- BUG/MINOR: acme: fix X509_NAME leak when X509_set_issuer_name() fails
- BUG/MINOR: backend: check delay MUX before conn_prepare()
- OPTIM: backend: reduce contention when checking MUX init with ALPN
- DOC: configuration: add the ACME wiki page link
- MINOR: ssl/ckch: Move EVP_PKEY and cert code generation from acme
- MINOR: ssl/ckch: certificates generation from "load" "crt-store" directive
- MINOR: trace: add definitions for haterm streams
- MINOR: init: allow a fileless init mode
- MEDIUM: init: allow the redefinition of argv[] parsing function
- MINOR: stconn: stream instantiation from proxy callback
- MINOR: haterm: add haterm HTTP server
- MINOR: haterm: new "haterm" utility
- MINOR: haterm: increase thread-local pool size
- BUG/MEDIUM: stats-file: fix shm-stats-file recover when all process slots are full
- BUG/MINOR: stats-file: manipulate shm-stats-file heartbeat using unsigned int
- BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file
- CI: github: only enable OS X on development branches
2026/02/04 : 3.4-dev4
- BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
- BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
- BUG/MINOR: promex: Detach promex from the server on error dump its metrics dump
- BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formating the start line
- BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
- BUG/MINOR: config: check capture pool creations for failures
- BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
- MEDIUM: pools: better check for size rounding overflow on registration
- DOC: reg-tests: update VTest upstream link in the starting guide
- BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
- BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
- MINOR: ssl: display libssl errors on private key loading
- BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
- MINOR: ssl: allow to disable certificate compression
- BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
- DOC: config: mention some possible TLS versions restrictions for kTLS
- OPTIM: server: move queueslength in server struct
- OPTIM: proxy: separate queues fields from served
- OPTIM: server: get rid of the last use of _ha_barrier_full()
- DOC: config: mention that idle connection sharing is per thread-group
- MEDIUM: h1: strictly verify quoting in chunk extensions
- BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
- BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
- MEDIUM: ssl: remove connection from msg callback args
- MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
- REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
- DOC: internals: cleanup few typos in master-worker documentation
- BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
- MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
- MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
- BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
- BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
- DOC: config: mention the limitation on server id range for consistent hash
- MEDIUM: backend: make "balance random" consider req rate when loads are equal
- BUG/MINOR: config: Fix setting of alt_proto
2026/01/22 : 3.4-dev3
- BUILD: ssl: strchr definition changed in C23
- BUILD: tools: memchr definition changed in C23


@@ -956,6 +956,7 @@ endif # obsolete targets
 endif # TARGET
 OBJS =
+HATERM_OBJS =
 ifneq ($(EXTRA_OBJS),)
 OBJS += $(EXTRA_OBJS)
@@ -1003,12 +1004,14 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
         src/http_acl.o src/dict.o src/dgram.o src/pipe.o \
         src/hpack-huff.o src/hpack-enc.o src/ebtree.o src/hash.o \
         src/httpclient_cli.o src/version.o src/ncbmbuf.o src/ech.o \
-        src/cfgparse-peers.o
+        src/cfgparse-peers.o src/haterm.o
 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
 endif
+HATERM_OBJS += $(OBJS) src/haterm_init.o
 # Used only for forced dependency checking. May be cleared during development.
 INCLUDES = $(wildcard include/*/*.h)
 DEP = $(INCLUDES) .build_opts
@@ -1040,7 +1043,7 @@ IGNORE_OPTS=help install install-man install-doc install-bin \
         uninstall clean tags cscope tar git-tar version update-version \
         opts reg-tests reg-tests-help unit-tests admin/halog/halog dev/flags/flags \
         dev/haring/haring dev/ncpu/ncpu dev/poll/poll dev/tcploop/tcploop \
-        dev/term_events/term_events
+        dev/term_events/term_events dev/gdb/pm-from-core
 ifneq ($(TARGET),)
 ifeq ($(filter $(firstword $(MAKECMDGOALS)),$(IGNORE_OPTS)),)
@@ -1056,6 +1059,9 @@ endif # non-empty target
 haproxy: $(OPTIONS_OBJS) $(OBJS)
 	$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+haterm: $(OPTIONS_OBJS) $(HATERM_OBJS)
+	$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
 objsize: haproxy
 	$(Q)objdump -t $^|grep ' g '|grep -F '.text'|awk '{print $$5 FS $$6}'|sort
@@ -1071,6 +1077,9 @@ admin/dyncookie/dyncookie: admin/dyncookie/dyncookie.o
 dev/flags/flags: dev/flags/flags.o
 	$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+dev/gdb/pm-from-core: dev/gdb/pm-from-core.o
+	$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
 dev/haring/haring: dev/haring/haring.o
 	$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
@@ -1169,7 +1178,7 @@ distclean: clean
 	$(Q)rm -f admin/dyncookie/dyncookie
 	$(Q)rm -f dev/haring/haring dev/ncpu/ncpu{,.so} dev/poll/poll dev/tcploop/tcploop
 	$(Q)rm -f dev/hpack/decode dev/hpack/gen-enc dev/hpack/gen-rht
-	$(Q)rm -f dev/qpack/decode
+	$(Q)rm -f dev/qpack/decode dev/gdb/pm-from-core
 tags:
 	$(Q)find src include \( -name '*.c' -o -name '*.h' \) -print0 | \


@@ -2,7 +2,6 @@
 [![alpine/musl](https://github.com/haproxy/haproxy/actions/workflows/musl.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/musl.yml)
 [![AWS-LC](https://github.com/haproxy/haproxy/actions/workflows/aws-lc.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/aws-lc.yml)
-[![openssl no-deprecated](https://github.com/haproxy/haproxy/actions/workflows/openssl-nodeprecated.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/openssl-nodeprecated.yml)
 [![Illumos](https://github.com/haproxy/haproxy/actions/workflows/illumos.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/illumos.yml)
 [![NetBSD](https://github.com/haproxy/haproxy/actions/workflows/netbsd.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/netbsd.yml)
 [![FreeBSD](https://api.cirrus-ci.com/github/haproxy/haproxy.svg?task=FreeBSD)](https://cirrus-ci.com/github/haproxy/haproxy/)


@@ -1,2 +1,2 @@
 $Format:%ci$
-2026/01/22
+2026/03/05


@@ -1 +1 @@
-3.4-dev3
+3.4-dev6


@@ -31,6 +31,7 @@ static struct {
 	da_atlas_t atlas;
 	da_evidence_id_t useragentid;
 	da_severity_t loglevel;
+	size_t maxhdrlen;
 	char separator;
 	unsigned char daset:1;
 } global_deviceatlas = {
@@ -42,6 +43,7 @@ static struct {
 	.atlasmap = NULL,
 	.atlasfd = -1,
 	.useragentid = 0,
+	.maxhdrlen = 0,
 	.daset = 0,
 	.separator = '|',
 };
@@ -57,6 +59,10 @@ static int da_json_file(char **args, int section_type, struct proxy *curpx,
 		return -1;
 	}
 	global_deviceatlas.jsonpath = strdup(args[1]);
+	if (unlikely(global_deviceatlas.jsonpath == NULL)) {
+		memprintf(err, "deviceatlas json file : out of memory.\n");
+		return -1;
+	}
 	return 0;
 }
@@ -73,6 +79,7 @@ static int da_log_level(char **args, int section_type, struct proxy *curpx,
 	loglevel = atol(args[1]);
 	if (loglevel < 0 || loglevel > 3) {
 		memprintf(err, "deviceatlas log level : expects a log level between 0 and 3, %s given.\n", args[1]);
+		return -1;
 	} else {
 		global_deviceatlas.loglevel = (da_severity_t)loglevel;
 	}
@@ -101,6 +108,10 @@ static int da_properties_cookie(char **args, int section_type, struct proxy *cur
 		return -1;
 	} else {
 		global_deviceatlas.cookiename = strdup(args[1]);
+		if (unlikely(global_deviceatlas.cookiename == NULL)) {
+			memprintf(err, "deviceatlas cookie name : out of memory.\n");
+			return -1;
+		}
 	}
 	global_deviceatlas.cookienamelen = strlen(global_deviceatlas.cookiename);
 	return 0;
@@ -119,6 +130,7 @@ static int da_cache_size(char **args, int section_type, struct proxy *curpx,
 	cachesize = atol(args[1]);
 	if (cachesize < 0 || cachesize > DA_CACHE_MAX) {
 		memprintf(err, "deviceatlas cache size : expects a cache size between 0 and %d, %s given.\n", DA_CACHE_MAX, args[1]);
+		return -1;
 	} else {
 #ifdef APINOCACHE
 		fprintf(stdout, "deviceatlas cache size : no-op, its support is disabled.\n");
@@ -165,7 +177,7 @@ static int init_deviceatlas(void)
 	da_status_t status;
 	jsonp = fopen(global_deviceatlas.jsonpath, "r");
-	if (jsonp == 0) {
+	if (unlikely(jsonp == 0)) {
 		ha_alert("deviceatlas : '%s' json file has invalid path or is not readable.\n",
 			 global_deviceatlas.jsonpath);
 		err_code |= ERR_ALERT | ERR_FATAL;
@@ -177,9 +189,11 @@ static int init_deviceatlas(void)
 	status = da_atlas_compile(jsonp, da_haproxy_read, da_haproxy_seek,
 				  &global_deviceatlas.atlasimgptr, &atlasimglen);
 	fclose(jsonp);
-	if (status != DA_OK) {
+	if (unlikely(status != DA_OK)) {
 		ha_alert("deviceatlas : '%s' json file is invalid.\n",
 			 global_deviceatlas.jsonpath);
+		free(global_deviceatlas.atlasimgptr);
+		da_fini();
 		err_code |= ERR_ALERT | ERR_FATAL;
 		goto out;
 	}
@@ -187,8 +201,10 @@ static int init_deviceatlas(void)
 	status = da_atlas_open(&global_deviceatlas.atlas, extraprops,
 			       global_deviceatlas.atlasimgptr, atlasimglen);
-	if (status != DA_OK) {
+	if (unlikely(status != DA_OK)) {
 		ha_alert("deviceatlas : data could not be compiled.\n");
+		free(global_deviceatlas.atlasimgptr);
+		da_fini();
 		err_code |= ERR_ALERT | ERR_FATAL;
 		goto out;
 	}
@@ -197,11 +213,28 @@ static int init_deviceatlas(void)
 	if (global_deviceatlas.cookiename == 0) {
 		global_deviceatlas.cookiename = strdup(DA_COOKIENAME_DEFAULT);
+		if (unlikely(global_deviceatlas.cookiename == NULL)) {
+			ha_alert("deviceatlas : out of memory.\n");
+			da_atlas_close(&global_deviceatlas.atlas);
+			free(global_deviceatlas.atlasimgptr);
+			da_fini();
+			err_code |= ERR_ALERT | ERR_FATAL;
+			goto out;
+		}
 		global_deviceatlas.cookienamelen = strlen(global_deviceatlas.cookiename);
 	}
 	global_deviceatlas.useragentid = da_atlas_header_evidence_id(&global_deviceatlas.atlas,
 								     "user-agent");
+	{
+		size_t hi;
+		global_deviceatlas.maxhdrlen = 16;
+		for (hi = 0; hi < global_deviceatlas.atlas.header_evidence_count; hi++) {
+			size_t nl = strlen(global_deviceatlas.atlas.header_priorities[hi].name);
+			if (nl > global_deviceatlas.maxhdrlen)
+				global_deviceatlas.maxhdrlen = nl;
+		}
+	}
 	if ((global_deviceatlas.atlasfd = shm_open(ATLASMAPNM, O_RDWR, 0660)) != -1) {
 		global_deviceatlas.atlasmap = mmap(NULL, ATLASTOKSZ, PROT_READ | PROT_WRITE, MAP_SHARED, global_deviceatlas.atlasfd, 0);
 		if (global_deviceatlas.atlasmap == MAP_FAILED) {
@@ -231,15 +264,13 @@ static void deinit_deviceatlas(void)
 		free(global_deviceatlas.cookiename);
 		da_atlas_close(&global_deviceatlas.atlas);
 		free(global_deviceatlas.atlasimgptr);
-		da_fini();
 	}
 	if (global_deviceatlas.atlasfd != -1) {
 		munmap(global_deviceatlas.atlasmap, ATLASTOKSZ);
 		close(global_deviceatlas.atlasfd);
-		shm_unlink(ATLASMAPNM);
 	}
+	da_fini();
 }
 static void da_haproxy_checkinst(void)
@@ -258,6 +289,10 @@ static void da_haproxy_checkinst(void)
 	da_property_decl_t extraprops[1] = {{NULL, 0}};
 #ifdef USE_THREAD
 	HA_SPIN_LOCK(OTHER_LOCK, &dadwsch_lock);
+	if (base[0] == 0) {
+		HA_SPIN_UNLOCK(OTHER_LOCK, &dadwsch_lock);
+		return;
+	}
 #endif
 	strlcpy2(atlasp, base + sizeof(char), sizeof(atlasp));
 	jsonp = fopen(atlasp, "r");
@@ -275,10 +310,20 @@ static void da_haproxy_checkinst(void)
 	fclose(jsonp);
 	if (status == DA_OK) {
 		if (da_atlas_open(&inst, extraprops, cnew, atlassz) == DA_OK) {
+			inst.config.cache_size = global_deviceatlas.cachesize;
 			da_atlas_close(&global_deviceatlas.atlas);
 			free(global_deviceatlas.atlasimgptr);
 			global_deviceatlas.atlasimgptr = cnew;
 			global_deviceatlas.atlas = inst;
+			{
+				size_t hi;
+				global_deviceatlas.maxhdrlen = 16;
+				for (hi = 0; hi < inst.header_evidence_count; hi++) {
+					size_t nl = strlen(inst.header_priorities[hi].name);
+					if (nl > global_deviceatlas.maxhdrlen)
+						global_deviceatlas.maxhdrlen = nl;
+				}
+			}
 			base[0] = 0;
 			ha_notice("deviceatlas : new instance, data file date `%s`.\n",
 				  da_getdatacreationiso8601(&global_deviceatlas.atlas));
@@ -286,6 +331,8 @@ static void da_haproxy_checkinst(void)
 			ha_alert("deviceatlas : instance update failed.\n");
 			free(cnew);
 		}
+	} else {
+		free(cnew);
 	}
 #ifdef USE_THREAD
 	HA_SPIN_UNLOCK(OTHER_LOCK, &dadwsch_lock);
@@ -297,7 +344,7 @@ static void da_haproxy_checkinst(void)
 static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_t *devinfo)
 {
 	struct buffer *tmp;
-	da_propid_t prop, *pprop;
+	da_propid_t prop;
 	da_status_t status;
 	da_type_t proptype;
 	const char *propname;
@@ -317,13 +364,15 @@ static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_
 			chunk_appendf(tmp, "%c", global_deviceatlas.separator);
 			continue;
 		}
-		pprop = &prop;
-		da_atlas_getproptype(&global_deviceatlas.atlas, *pprop, &proptype);
+		if (unlikely(da_atlas_getproptype(&global_deviceatlas.atlas, prop, &proptype) != DA_OK)) {
+			chunk_appendf(tmp, "%c", global_deviceatlas.separator);
+			continue;
+		}
 		switch (proptype) {
 		case DA_TYPE_BOOLEAN: {
 			bool val;
-			status = da_getpropboolean(devinfo, *pprop, &val);
+			status = da_getpropboolean(devinfo, prop, &val);
 			if (status == DA_OK) {
 				chunk_appendf(tmp, "%d", val);
 			}
@@ -332,7 +381,7 @@ static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_
 		case DA_TYPE_INTEGER:
 		case DA_TYPE_NUMBER: {
 			long val;
-			status = da_getpropinteger(devinfo, *pprop, &val);
+			status = da_getpropinteger(devinfo, prop, &val);
 			if (status == DA_OK) {
 				chunk_appendf(tmp, "%ld", val);
 			}
@@ -340,7 +389,7 @@ static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_
 		}
 		case DA_TYPE_STRING: {
 			const char *val;
-			status = da_getpropstring(devinfo, *pprop, &val);
+			status = da_getpropstring(devinfo, prop, &val);
 			if (status == DA_OK) {
 				chunk_appendf(tmp, "%s", val);
 			}
@@ -371,29 +420,26 @@ static int da_haproxy_conv(const struct arg *args, struct sample *smp, void *pri
 {
 	da_deviceinfo_t devinfo;
 	da_status_t status;
-	const char *useragent;
-	char useragentbuf[1024] = { 0 };
+	char useragentbuf[1024];
 	int i;
-	if (global_deviceatlas.daset == 0 || smp->data.u.str.data == 0) {
+	if (unlikely(global_deviceatlas.daset == 0) || smp->data.u.str.data == 0) {
 		return 1;
 	}
 	da_haproxy_checkinst();
-	i = smp->data.u.str.data > sizeof(useragentbuf) ? sizeof(useragentbuf) : smp->data.u.str.data;
-	memcpy(useragentbuf, smp->data.u.str.area, i - 1);
-	useragentbuf[i - 1] = 0;
-	useragent = (const char *)useragentbuf;
+	i = smp->data.u.str.data > sizeof(useragentbuf) - 1 ? sizeof(useragentbuf) - 1 : smp->data.u.str.data;
+	memcpy(useragentbuf, smp->data.u.str.area, i);
+	useragentbuf[i] = 0;
 	status = da_search(&global_deviceatlas.atlas, &devinfo,
-			   global_deviceatlas.useragentid, useragent, 0);
+			   global_deviceatlas.useragentid, useragentbuf, 0);
 	return status != DA_OK ? 0 : da_haproxy(args, smp, &devinfo);
 }
-#define DA_MAX_HEADERS 24
+#define DA_MAX_HEADERS 32
 static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const char *kw, void *private)
 {
@@ -403,10 +449,10 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
 	struct channel *chn;
 	struct htx *htx;
 	struct htx_blk *blk;
-	char vbuf[DA_MAX_HEADERS][1024] = {{ 0 }};
+	char vbuf[DA_MAX_HEADERS][1024];
 	int i, nbh = 0;
-	if (global_deviceatlas.daset == 0) {
+	if (unlikely(global_deviceatlas.daset == 0)) {
 		return 0;
 	}
@@ -414,18 +460,17 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
 	chn = (smp->strm ? &smp->strm->req : NULL);
 	htx = smp_prefetch_htx(smp, chn, NULL, 1);
-	if (!htx)
+	if (unlikely(!htx))
 		return 0;
-	i = 0;
 	for (blk = htx_get_first_blk(htx); nbh < DA_MAX_HEADERS && blk; blk = htx_get_next_blk(htx, blk)) {
 		size_t vlen;
 		char *pval;
 		da_evidence_id_t evid;
 		enum htx_blk_type type;
 		struct ist n, v;
-		char hbuf[24] = { 0 };
-		char tval[1024] = { 0 };
+		char hbuf[64];
+		char tval[1024];
 		type = htx_get_blk_type(blk);
@@ -438,20 +483,18 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
 			continue;
 		}
-		/* The HTTP headers used by the DeviceAtlas API are not longer */
-		if (n.len >= sizeof(hbuf)) {
+		if (n.len > global_deviceatlas.maxhdrlen || n.len >= sizeof(hbuf)) {
 			continue;
 		}
 		memcpy(hbuf, n.ptr, n.len);
 		hbuf[n.len] = 0;
-		pval = v.ptr;
-		vlen = v.len;
 		evid = -1;
 		i = v.len > sizeof(tval) - 1 ? sizeof(tval) - 1 : v.len;
 		memcpy(tval, v.ptr, i);
 		tval[i] = 0;
 		pval = tval;
+		vlen = i;
 		if (strcasecmp(hbuf, "Accept-Language") == 0) {
 			evid = da_atlas_accept_language_evidence_id(&global_deviceatlas.atlas);
@@ -469,7 +512,7 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
 				continue;
 			}
-			vlen -= global_deviceatlas.cookienamelen - 1;
+			vlen = pl;
 			pval = p;
 			evid = da_atlas_clientprop_evidence_id(&global_deviceatlas.atlas);
 		} else {


@@ -141,6 +141,11 @@ enum {
 	DA_INITIAL_MEMORY_ESTIMATE = 1024 * 1024 * 14
 };
+struct header_evidence_entry {
+	const char *name;
+	da_evidence_id_t id;
+};
 struct da_config {
 	unsigned int cache_size;
 	unsigned int __reserved[15]; /* enough reserved keywords for future use */


@@ -36,6 +36,7 @@
 #include <haproxy/stats.h>
 #include <haproxy/stconn.h>
 #include <haproxy/stream.h>
+#include <haproxy/stress.h>
 #include <haproxy/task.h>
 #include <haproxy/tools.h>
 #include <haproxy/version.h>
@@ -82,6 +83,7 @@ struct promex_ctx {
 	unsigned field_num;       /* current field number (ST_I_PX_* etc) */
 	unsigned mod_field_num;   /* first field number of the current module (ST_I_PX_* etc) */
 	int obj_state;            /* current state among PROMEX_{FRONT|BACK|SRV|LI}_STATE_* */
+	struct watcher px_watch;  /* watcher to automatically update next pointer */
 	struct watcher srv_watch; /* watcher to automatically update next pointer */
 	struct list modules;      /* list of promex modules to export */
 	struct eb_root filters;   /* list of filters to apply on metrics name */
@@ -347,6 +349,10 @@ static int promex_dump_ts(struct appctx *appctx, struct ist prefix,
 	istcat(&n, prefix, PROMEX_MAX_NAME_LEN);
 	istcat(&n, name, PROMEX_MAX_NAME_LEN);
+	/* In stress mode, force yielding on each metric. */
+	if (STRESS_RUN1(istlen(*out), 0))
+		goto full;
 	if ((ctx->flags & PROMEX_FL_METRIC_HDR) &&
 	    !promex_dump_ts_header(n, desc, type, out, max))
 		goto full;
@@ -626,8 +632,6 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
 	}
 	list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
-		void *counters;
 		if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_FE))
 			continue;
@@ -664,8 +668,7 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
 			if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_FE))
 				goto next_px2;
-			counters = EXTRA_COUNTERS_GET(px->extra_counters_fe, mod);
-			if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num))
+			if (!mod->fill_stats(mod, px->extra_counters_fe, stats + ctx->field_num, &ctx->mod_field_num))
 				return -1;
 			val = stats[ctx->field_num + ctx->mod_field_num];
@@ -817,8 +820,6 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
 	}
 	list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
-		void *counters;
 		if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_LI))
 			continue;
@@ -864,8 +865,7 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
 				labels[lb_idx+1].name = ist("mod");
 				labels[lb_idx+1].value = ist2(mod->name, strlen(mod->name));
-				counters = EXTRA_COUNTERS_GET(li->extra_counters, mod);
-				if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num))
+				if (!mod->fill_stats(mod, li->extra_counters, stats + ctx->field_num, &ctx->mod_field_num))
 					return -1;
 				val = stats[ctx->field_num + ctx->mod_field_num];
@@ -941,9 +941,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
 		if (promex_filter_metric(appctx, prefix, name))
 			continue;
-		if (!px)
-			px = proxies_list;
 		while (px) {
 			struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
 			unsigned int srv_state_count[PROMEX_SRV_STATE_COUNT] = { 0 };
@@ -1098,9 +1095,16 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
 					   &val, labels, &out, max))
 				goto full;
 		  next_px:
-			px = px->next;
+			px = watcher_next(&ctx->px_watch, px->next);
 		}
+		watcher_detach(&ctx->px_watch);
 		ctx->flags |= PROMEX_FL_METRIC_HDR;
+		/* Prepare a new iteration for the next stat column.
+		 * Update ctx.p[0] via watcher.
+		 */
+		watcher_attach(&ctx->px_watch, proxies_list);
+		px = proxies_list;
 	}
 	/* Skip extra counters */
@@ -1113,8 +1117,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
 	}
 	list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
-		void *counters;
 		if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_BE))
 			continue;
@@ -1125,9 +1127,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
 		if (promex_filter_metric(appctx, prefix, name))
 			continue;
-		if (!px)
-			px = proxies_list;
 		while (px) {
 			struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
 			struct promex_metric metric;
@@ -1151,8 +1150,7 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
 			if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE))
 				goto next_px2;
-			counters = EXTRA_COUNTERS_GET(px->extra_counters_be, mod);
-			if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num))
+			if (!mod->fill_stats(mod, px->extra_counters_be, stats + ctx->field_num, &ctx->mod_field_num))
 				return -1;
 			val = stats[ctx->field_num + ctx->mod_field_num];
@@ -1163,25 +1161,39 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
 				goto full;
 		  next_px2:
-			px = px->next;
+			px = watcher_next(&ctx->px_watch, px->next);
 		}
+		watcher_detach(&ctx->px_watch);
 		ctx->flags |= PROMEX_FL_METRIC_HDR;
+		/* Prepare a new iteration for the next stat column.
+		 * Update ctx.p[0] via watcher.
+		 */
+		watcher_attach(&ctx->px_watch, proxies_list);
+		px = proxies_list;
 		}
 		ctx->field_num += mod->stats_count;
 		ctx->mod_field_num = 0;
 	}
-	px = NULL;
-	mod = NULL;
   end:
+	if (ret) {
+		watcher_detach(&ctx->px_watch);
+		mod = NULL;
+	}
 	if (out.len) {
-		if (!htx_add_data_atonce(htx, out))
+		if (!htx_add_data_atonce(htx, out)) {
+			watcher_detach(&ctx->px_watch);
 			return -1; /* Unexpected and unrecoverable error */
+		}
 	}
-	/* Save pointers (0=current proxy, 1=current stats module) of the current context */
-	ctx->p[0] = px;
+	/* Save pointers of the current context for dump resumption :
+	 * 0=current proxy, 1=current stats module
+	 * Note that p[0] is already automatically updated via px_watch.
+	 */
 	ctx->p[1] = mod;
 	return ret;
   full:
@@ -1223,9 +1235,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
 		if (promex_filter_metric(appctx, prefix, name))
 			continue;
-		if (!px)
-			px = proxies_list;
 		while (px) {
 			struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
enum promex_mt_type type; enum promex_mt_type type;
@ -1245,17 +1254,12 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE)) if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE))
goto next_px; goto next_px;
if (!sv) {
watcher_attach(&ctx->srv_watch, px->srv);
sv = px->srv;
}
while (sv) { while (sv) {
labels[lb_idx].name = ist("server"); labels[lb_idx].name = ist("server");
labels[lb_idx].value = ist2(sv->id, strlen(sv->id)); labels[lb_idx].value = ist2(sv->id, strlen(sv->id));
if (!stats_fill_sv_line(px, sv, 0, stats, ST_I_PX_MAX, &(ctx->field_num))) if (!stats_fill_sv_line(px, sv, 0, stats, ST_I_PX_MAX, &(ctx->field_num)))
return -1; goto error;
if ((ctx->flags & PROMEX_FL_NO_MAINT_SRV) && (sv->cur_admin & SRV_ADMF_MAINT)) if ((ctx->flags & PROMEX_FL_NO_MAINT_SRV) && (sv->cur_admin & SRV_ADMF_MAINT))
goto next_sv; goto next_sv;
@ -1405,9 +1409,25 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
next_px: next_px:
watcher_detach(&ctx->srv_watch); watcher_detach(&ctx->srv_watch);
px = px->next; px = watcher_next(&ctx->px_watch, px->next);
if (px) {
/* Update ctx.p[1] via watcher. */
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
} }
}
watcher_detach(&ctx->px_watch);
ctx->flags |= PROMEX_FL_METRIC_HDR; ctx->flags |= PROMEX_FL_METRIC_HDR;
/* Prepare a new iteration for the next stat column.
* Update ctx.p[0]/p[1] via px_watch/srv_watch.
*/
watcher_attach(&ctx->px_watch, proxies_list);
px = proxies_list;
if (likely(px)) {
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
}
} }
/* Skip extra counters */ /* Skip extra counters */
@ -1420,8 +1440,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
} }
list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) { list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
void *counters;
if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_SRV)) if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_SRV))
continue; continue;
@ -1432,9 +1450,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if (promex_filter_metric(appctx, prefix, name)) if (promex_filter_metric(appctx, prefix, name))
continue; continue;
if (!px)
px = proxies_list;
while (px) { while (px) {
struct promex_label labels[PROMEX_MAX_LABELS-1] = {}; struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
struct promex_metric metric; struct promex_metric metric;
@ -1455,11 +1470,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE)) if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE))
goto next_px2; goto next_px2;
if (!sv) {
watcher_attach(&ctx->srv_watch, px->srv);
sv = px->srv;
}
while (sv) { while (sv) {
labels[lb_idx].name = ist("server"); labels[lb_idx].name = ist("server");
labels[lb_idx].value = ist2(sv->id, strlen(sv->id)); labels[lb_idx].value = ist2(sv->id, strlen(sv->id));
@ -1471,9 +1481,8 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
goto next_sv2; goto next_sv2;
counters = EXTRA_COUNTERS_GET(sv->extra_counters, mod); if (!mod->fill_stats(mod, sv->extra_counters, stats + ctx->field_num, &ctx->mod_field_num))
if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num)) goto error;
return -1;
val = stats[ctx->field_num + ctx->mod_field_num]; val = stats[ctx->field_num + ctx->mod_field_num];
metric.type = ((val.type == FN_GAUGE) ? PROMEX_MT_GAUGE : PROMEX_MT_COUNTER); metric.type = ((val.type == FN_GAUGE) ? PROMEX_MT_GAUGE : PROMEX_MT_COUNTER);
@ -1488,33 +1497,57 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
next_px2: next_px2:
watcher_detach(&ctx->srv_watch); watcher_detach(&ctx->srv_watch);
px = px->next; px = watcher_next(&ctx->px_watch, px->next);
if (px) {
/* Update ctx.p[1] via watcher. */
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
} }
}
watcher_detach(&ctx->px_watch);
ctx->flags |= PROMEX_FL_METRIC_HDR; ctx->flags |= PROMEX_FL_METRIC_HDR;
/* Prepare a new iteration for the next stat column.
* Update ctx.p[0]/p[1] via px_watch/srv_watch.
*/
watcher_attach(&ctx->px_watch, proxies_list);
px = proxies_list;
if (likely(px)) {
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
}
} }
ctx->field_num += mod->stats_count; ctx->field_num += mod->stats_count;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
} }
px = NULL;
sv = NULL;
mod = NULL;
end: end:
if (ret) {
watcher_detach(&ctx->px_watch);
watcher_detach(&ctx->srv_watch);
mod = NULL;
}
if (out.len) { if (out.len) {
if (!htx_add_data_atonce(htx, out)) if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */ return -1; /* Unexpected and unrecoverable error */
} }
/* Save pointers (0=current proxy, 1=current server, 2=current stats module) of the current context */ /* Save pointers of the current context for dump resumption :
ctx->p[0] = px; * 0=current proxy, 1=current server, 2=current stats module
ctx->p[1] = sv; * Note that p[0]/p[1] are already automatically updated via px_watch/srv_watch.
*/
ctx->p[2] = mod; ctx->p[2] = mod;
return ret; return ret;
full: full:
ret = 0; ret = 0;
goto end; goto end;
error:
watcher_detach(&ctx->px_watch);
watcher_detach(&ctx->srv_watch);
return -1;
} }
/* Dump metrics of module <mod>. It returns 1 on success, 0 if <out> is full and /* Dump metrics of module <mod>. It returns 1 on success, 0 if <out> is full and
@ -1735,6 +1768,11 @@ static int promex_dump_metrics(struct appctx *appctx, struct htx *htx)
ctx->field_num = ST_I_PX_PXNAME; ctx->field_num = ST_I_PX_PXNAME;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
appctx->st1 = PROMEX_DUMPER_BACK; appctx->st1 = PROMEX_DUMPER_BACK;
if (ctx->flags & PROMEX_FL_SCOPE_BACK) {
/* Update ctx.p[0] via watcher. */
watcher_attach(&ctx->px_watch, proxies_list);
}
__fallthrough; __fallthrough;
case PROMEX_DUMPER_BACK: case PROMEX_DUMPER_BACK:
@ -1752,6 +1790,15 @@ static int promex_dump_metrics(struct appctx *appctx, struct htx *htx)
ctx->field_num = ST_I_PX_PXNAME; ctx->field_num = ST_I_PX_PXNAME;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
appctx->st1 = PROMEX_DUMPER_SRV; appctx->st1 = PROMEX_DUMPER_SRV;
if (ctx->flags & PROMEX_FL_SCOPE_SERVER) {
/* Update ctx.p[0] via watcher. */
watcher_attach(&ctx->px_watch, proxies_list);
if (likely(proxies_list)) {
/* Update ctx.p[1] via watcher. */
watcher_attach(&ctx->srv_watch, proxies_list->srv);
}
}
__fallthrough; __fallthrough;
case PROMEX_DUMPER_SRV: case PROMEX_DUMPER_SRV:
@ -2029,6 +2076,7 @@ static int promex_appctx_init(struct appctx *appctx)
LIST_INIT(&ctx->modules); LIST_INIT(&ctx->modules);
ctx->filters = EB_ROOT; ctx->filters = EB_ROOT;
appctx->st0 = PROMEX_ST_INIT; appctx->st0 = PROMEX_ST_INIT;
watcher_init(&ctx->px_watch, &ctx->p[0], offsetof(struct proxy, watcher_list));
watcher_init(&ctx->srv_watch, &ctx->p[1], offsetof(struct server, watcher_list)); watcher_init(&ctx->srv_watch, &ctx->p[1], offsetof(struct server, watcher_list));
return 0; return 0;
} }
@ -2043,6 +2091,11 @@ static void promex_appctx_release(struct appctx *appctx)
struct promex_metric_filter *flt; struct promex_metric_filter *flt;
struct eb32_node *node, *next; struct eb32_node *node, *next;
if (appctx->st1 == PROMEX_DUMPER_BACK ||
appctx->st1 == PROMEX_DUMPER_SRV) {
watcher_detach(&ctx->px_watch);
}
if (appctx->st1 == PROMEX_DUMPER_SRV) if (appctx->st1 == PROMEX_DUMPER_SRV)
watcher_detach(&ctx->srv_watch); watcher_detach(&ctx->srv_watch);

@@ -1,11 +1,10 @@
-#!/bin/bash
+#!/bin/sh

 set -e

 export VERBOSE=1
 export TIMEOUT=90
-export MASTER_SOCKET=${MASTER_SOCKET:-/var/run/haproxy-master.sock}
-export RET=
+export MASTER_SOCKET="${MASTER_SOCKET:-/var/run/haproxy-master.sock}"

 alert() {
     if [ "$VERBOSE" -ge "1" ]; then
@@ -15,32 +14,38 @@ alert() {

 reload() {
-    while read -r line; do
-        if [ "$line" = "Success=0" ]; then
-            RET=1
-        elif [ "$line" = "Success=1" ]; then
-            RET=0
-        elif [ "$line" = "Another reload is still in progress." ]; then
-            alert "$line"
-        elif [ "$line" = "--" ]; then
-            continue;
-        else
-            if [ "$RET" = 1 ] && [ "$VERBOSE" = "2" ]; then
-                echo "$line" >&2
-            elif [ "$VERBOSE" = "3" ]; then
-                echo "$line" >&2
-            fi
-        fi
-    done < <(echo "reload" | socat -t"${TIMEOUT}" "${MASTER_SOCKET}" -)
-    if [ -z "$RET" ]; then
-        alert "Couldn't finish the reload before the timeout (${TIMEOUT})."
-        return 1
-    fi
-    return "$RET"
+    if [ -S "$MASTER_SOCKET" ]; then
+        socat_addr="UNIX-CONNECT:${MASTER_SOCKET}"
+    else
+        case "$MASTER_SOCKET" in
+        *:[0-9]*)
+            socat_addr="TCP:${MASTER_SOCKET}"
+            ;;
+        *)
+            alert "Invalid master socket address '${MASTER_SOCKET}': expected a UNIX socket file or <host>:<port>"
+            return 1
+            ;;
+        esac
+    fi
+    echo "reload" | socat -t"${TIMEOUT}" "$socat_addr" - | {
+        read -r status || { alert "No status received (connection error or timeout after ${TIMEOUT}s)."; exit 1; }
+        case "$status" in
+        "Success=1") ret=0 ;;
+        "Success=0") ret=1 ;;
+        *) alert "Unexpected response: '$status'"; exit 1 ;;
+        esac
+        read -r _ # consume "--"
+        if [ "$VERBOSE" -ge 3 ] || { [ "$ret" = 1 ] && [ "$VERBOSE" -ge 2 ]; }; then
+            cat >&2
+        else
+            cat >/dev/null
+        fi
+        exit "$ret"
+    }
 }

 usage() {
@@ -52,12 +57,12 @@ usage() {
     echo "  EXPERIMENTAL script!"
     echo ""
     echo "Options:"
-    echo "  -S, --master-socket <path>  Use the master socket at <path> (default: ${MASTER_SOCKET})"
+    echo "  -S, --master-socket <addr>  Unix socket path or <host>:<port> (default: ${MASTER_SOCKET})"
     echo "  -d, --debug                 Debug mode, set -x"
     echo "  -t, --timeout               Timeout (socat -t) (default: ${TIMEOUT})"
     echo "  -s, --silent                Silent mode (no output)"
     echo "  -v, --verbose               Verbose output (output from haproxy on failure)"
-    echo "  -vv                         Even more verbose output (output from haproxy on success and failure)"
+    echo "  -vv --verbose=all           Very verbose output (output from haproxy on success and failure)"
     echo "  -h, --help                  This help"
     echo ""
     echo "Examples:"
@@ -84,7 +89,7 @@ main() {
             VERBOSE=2
             shift
             ;;
-        -vv|--verbose)
+        -vv|--verbose=all)
             VERBOSE=3
             shift
             ;;
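The key portability point in this conversion: bash's `done < <(producer)` keeps the `while read` loop in the current shell, so variables like `RET` survive it; a POSIX pipe runs the reader in a subshell, so the result has to escape through the exit status instead. A minimal standalone sketch of that pattern (with an invented `fake_master` function standing in for `socat` talking to the master socket):

```sh
#!/bin/sh
# Stand-in for: echo "reload" | socat ... (illustrative only)
fake_master() {
	printf 'Success=1\n--\nsome haproxy output\n'
}

fake_master | {
	read -r status || exit 1         # first line carries the result
	[ "$status" = "Success=1" ] || exit 1
	read -r _                        # consume the "--" separator
	cat >/dev/null                   # drain (or forward) the remaining output
	exit 0                           # only the exit status leaves the subshell
}
echo "reload rc=$?"
```

Assigning `ret=` inside the braces and reading it after the pipeline would not work in POSIX sh, which is exactly why the rewritten `reload()` ends the command group with `exit "$ret"`.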

dev/gdb/pm-from-core.c (new file, 141 lines)

@ -0,0 +1,141 @@
/*
* Find the post-mortem offset from a core dump
*
* Copyright (C) 2026 Willy Tarreau <w@1wt.eu>
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/* Note: builds with no option under glibc, and can be built as a minimal
* uploadable static executable using nolibc as well:
gcc -o pm-from-core -nostdinc -nostdlib -s -Os -static -fno-ident \
-fno-exceptions -fno-asynchronous-unwind-tables -fno-unwind-tables \
-Wl,--gc-sections,--orphan-handling=discard,-znoseparate-code \
-I /path/to/nolibc-sysroot/include pm-from-core.c
*/
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/stat.h>
#include <elf.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#if defined(__GLIBC__)
# define my_memmem memmem
#else
void *my_memmem(const void *haystack, size_t haystacklen,
const void *needle, size_t needlelen)
{
while (haystacklen >= needlelen) {
if (!memcmp(haystack, needle, needlelen))
return (void*)haystack;
haystack++;
haystacklen--;
}
return NULL;
}
#endif
#define MAGIC "POST-MORTEM STARTS HERE+7654321\0"
int main(int argc, char **argv)
{
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
struct stat st;
uint8_t *mem;
int i, fd;
if (argc < 2) {
printf("Usage: %s <core_file>\n", argv[0]);
exit(1);
}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open()");
		exit(1);
	}
/* Let's just map the core dump as an ELF header */
fstat(fd, &st);
mem = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
if (mem == MAP_FAILED) {
perror("mmap()");
exit(1);
}
/* get the program headers */
ehdr = (Elf64_Ehdr *)mem;
/* check that it's really a core. Should be "\x7fELF" */
if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
fprintf(stderr, "ELF magic not found.\n");
exit(1);
}
if (ehdr->e_ident[EI_CLASS] != ELFCLASS64) {
fprintf(stderr, "Only 64-bit ELF supported.\n");
exit(1);
}
if (ehdr->e_type != ET_CORE) {
fprintf(stderr, "ELF type %d, not a core dump.\n", ehdr->e_type);
exit(1);
}
/* OK we can safely go with program headers */
phdr = (Elf64_Phdr *)(mem + ehdr->e_phoff);
for (i = 0; i < ehdr->e_phnum; i++) {
uint64_t size = phdr[i].p_filesz;
uint64_t offset = phdr[i].p_offset;
uint64_t vaddr = phdr[i].p_vaddr;
uint64_t found_ofs;
uint8_t *found;
if (phdr[i].p_type != PT_LOAD)
continue;
//printf("Scanning segment %d...\n", ehdr->e_phnum);
//printf("\r%-5d: off=%lx va=%lx sz=%lx ", i, (long)offset, (long)vaddr, (long)size);
if (!size)
continue;
if (size >= 1048576) // don't scan large segments
continue;
found = my_memmem(mem + offset, size, MAGIC, sizeof(MAGIC) - 1);
if (!found)
continue;
found_ofs = found - (mem + offset);
printf("Found post-mortem magic in segment %d:\n", i);
printf(" Core File Offset: 0x%lx (0x%lx + 0x%lx)\n", offset + found_ofs, offset, found_ofs);
printf(" Runtime VAddr: 0x%lx (0x%lx + 0x%lx)\n", vaddr + found_ofs, vaddr, found_ofs);
printf(" Segment Size: 0x%lx\n", size);
printf("\nIn gdb, copy-paste this line:\n\n pm_init 0x%lx\n\n", vaddr + found_ofs);
return 0;
}
//printf("\r%75s\n", "\r");
printf("post-mortem magic not found\n");
return 1;
}
@@ -14,8 +14,8 @@ define pools_dump
             set $idx=$idx + 1
         end
-        set $mem = $total * $e->size
-        printf "list=%#lx pool_head=%p name=%s size=%u alloc=%u used=%u mem=%u\n", $p, $e, $e->name, $e->size, $total, $used, $mem
+        set $mem = (unsigned long)$total * $e->size
+        printf "list=%#lx pool_head=%p name=%s size=%u alloc=%u used=%u mem=%lu\n", $p, $e, $e->name, $e->size, $total, $used, $mem
         set $p = *(void **)$p
     end
 end
@@ -30,8 +30,8 @@ static const char *tevt_fd_types[16] = {
 };

 static const char *tevt_hs_types[16] = {
-	[ 0] = "-",       [ 1] = "-",       [ 2] = "-",       [ 3] = "rcv_err",
-	[ 4] = "snd_err", [ 5] = "-",       [ 6] = "-",       [ 7] = "-",
+	[ 0] = "-",       [ 1] = "-",       [ 2] = "-",       [ 3] = "-",
+	[ 4] = "snd_err", [ 5] = "truncated_shutr", [ 6] = "truncated_rcv_err", [ 7] = "-",
 	[ 8] = "-",       [ 9] = "-",       [10] = "-",       [11] = "-",
 	[12] = "-",       [13] = "-",       [14] = "-",       [15] = "-",
 };
@@ -3,7 +3,7 @@
                              Configuration Manual
                             ----------------------
                                  version 3.4
-                                 2026/01/22
+                                 2026/03/05

 This document covers the configuration language as implemented in the version
@@ -1869,6 +1869,7 @@ The following keywords are supported in the "global" section :
    - tune.buffers.limit
    - tune.buffers.reserve
    - tune.bufsize
+   - tune.bufsize.large
    - tune.bufsize.small
    - tune.comp.maxlevel
    - tune.defaults.purge
@@ -1983,6 +1984,7 @@ The following keywords are supported in the "global" section :
    - tune.ssl.cachesize
    - tune.ssl.capture-buffer-size
    - tune.ssl.capture-cipherlist-size (deprecated)
+   - tune.ssl.certificate-compression
    - tune.ssl.default-dh-param
    - tune.ssl.force-private-cache
    - tune.ssl.hard-maxrecord
@@ -3602,6 +3604,11 @@ ssl-skip-self-issued-ca
   certificates. It's useless for BoringSSL, .issuer is ignored because ocsp
   bits does not need it. Requires at least OpenSSL 1.0.2.

+stats calculate-max-counters [on|off]
+  Activates or deactivates the calculation of stats max counters. If you
+  don't need them, deactivating them may increase performance a bit.
+  The default is on.
+
 stats maxconn <connections>
   By default, the stats socket is limited to 10 concurrent connections. It is
   possible to change this value with "stats maxconn".
@@ -4004,7 +4011,7 @@ profiling.memory { on | off }
   use in production. The same may be achieved at run time on the CLI using the
   "set profiling memory" command, please consult the management manual.

-profiling.tasks { auto | on | off }
+profiling.tasks { auto | on | off | lock | no-lock | memory | no-memory }*
   Enables ('on') or disables ('off') per-task CPU profiling. When set to 'auto'
   the profiling automatically turns on a thread when it starts to suffer from
   an average latency of 1000 microseconds or higher as reported in the
@@ -4015,6 +4022,18 @@ profiling.tasks { auto | on | off }
   systems, containers, or virtual machines, or when the system swaps (which
   must absolutely never happen on a load balancer).

+  When task profiling is enabled, HAProxy can also collect the time each task
+  spends with a lock held or waiting for a lock, as well as the time spent
+  waiting for a memory allocation to succeed in case of a pool cache miss. This
+  can sometimes help understand certain causes of latency. For this, the extra
+  keywords "lock" (to enable lock time collection), "no-lock" (to disable it),
+  "memory" (to enable memory allocation time collection) or "no-memory" (to
+  disable it) may additionally be passed. By default they are not enabled since
+  they can have a non-negligible CPU impact on highly loaded systems (3-10%).
+  Note that the overhead is only taken when profiling is effectively running,
+  so that when running in "auto" mode, it will only appear when HAProxy decides
+  to turn it on.
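As an illustrative sketch of the extra keywords described above (not a recommendation, and assuming the combined syntax shown in the heading):

```
global
    # auto-enable profiling on overloaded threads, and also
    # measure time spent holding/waiting for locks
    profiling.tasks auto lock
```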
   CPU profiling per task can be very convenient to report where the time is
   spent and which requests have what effect on which other request. Enabling
   it will typically affect the overall's performance by less than 1%, thus it
@@ -4105,6 +4124,17 @@ tune.bufsize <size>
   value set using this parameter will automatically be rounded up to the next
   multiple of 8 on 32-bit machines and 16 on 64-bit machines.

+tune.bufsize.large <size>
+  Sets the size in bytes for large buffers. By default, support for large
+  buffers is not enabled, it must explicitly be enabled by setting this value.
+  These buffers are designed to be used in some specific contexts where more
+  data must be buffered without changing the size of regular buffers. The
+  large buffers are not implicitly used.
+
+  Note that when large buffers are configured, three special large buffers will
+  be allocated for each thread during startup for internal usage.
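A minimal sketch of opting in to large buffers (the size shown is invented for illustration, not a recommended value):

```
global
    # enable large buffers; regular tune.bufsize is unaffected
    tune.bufsize.large 65536
```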
 tune.bufsize.small <size>
   Sets the size in bytes for small buffers. The default value is 1024.
@@ -4453,6 +4483,16 @@ tune.h2.initial-window-size <number>
   specific settings tune.h2.fe.initial-window-size and
   tune.h2.be.initial-window-size.

+tune.h2.log-errors { none | connection | stream }
+  Sets the level of errors in the H2 demultiplexer that will generate a log.
+  The default is "stream", which means that any decoding error encountered in
+  the demultiplexer will lead to the emission of a log. The "connection" value
+  indicates that only errors that result in invalidating the connection will
+  produce a log. Finally, "none" indicates that no decoding error will produce
+  any log. It is recommended to set at least "connection" in order to detect
+  protocol anomalies, even if this means temporarily switching to "none" during
+  difficult periods.
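A sketch of the quieter setting described above (illustrative only):

```
global
    # log only H2 decoding errors severe enough to invalidate the connection
    tune.h2.log-errors connection
```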
 tune.h2.max-concurrent-streams <number>
   Sets the default HTTP/2 maximum number of concurrent streams per connection
   (i.e. the number of outstanding requests on a single connection). This value
@@ -4526,7 +4566,11 @@ tune.idle-pool.shared { on | off }
   disabling this option without setting a conservative value on "pool-low-conn"
   for all servers relying on connection reuse to achieve a high performance
   level, otherwise connections might be closed very often as the thread count
-  increases.
+  increases. Note that in any case, connections are only shared between threads
+  of the same thread group. This means that systems with many NUMA nodes may
+  show slightly more persistent connections while machines with unified caches
+  and many CPU cores per node may experience higher CPU usage. In the latter
+  case, the "max-thread-per-group" tunable may be used to improve the behavior.

 tune.idletimer <timeout>
   Sets the duration after which HAProxy will consider that an empty buffer is
@@ -5310,6 +5354,22 @@ tune.ssl.capture-cipherlist-size <number> (deprecated)
   formats. If the value is 0 (default value) the capture is disabled,
   otherwise a buffer is allocated for each SSL/TLS connection.

+tune.ssl.certificate-compression { auto | off }
+  This setting allows to configure the certificate compression support which is
+  an extension (RFC 8879) to TLS 1.3.
+
+  When set to "auto" it uses the default value of the TLS library.
+
+  With "off" it tries to explicitly disable the support of the feature.
+  HAProxy won't try to send compressed certificates anymore nor accept
+  compressed certificates.
+
+  Configures both backend and frontend sides.
+
+  This keyword is supported by OpenSSL >= 3.2.0.
+
+  The default value is auto.
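As a sketch, explicitly opting out of the RFC 8879 extension described above (illustrative only; requires OpenSSL >= 3.2.0 per the text):

```
global
    # neither send nor accept compressed certificates
    tune.ssl.certificate-compression off
```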
 tune.ssl.default-dh-param <number>
   Sets the maximum size of the Diffie-Hellman parameters used for generating
   the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The
@@ -6262,8 +6322,16 @@ balance url_param <param> [check_post]
              will take away N-1 of the highest loaded servers at the
              expense of performance. With very high values, the algorithm
              will converge towards the leastconn's result but much slower.
+             In addition, for large server farms with very low loads (or
+             perfect balance), comparing loads will often lead to a tie,
+             so in case of equal loads between all measured servers, their
+             request rate over the last second is compared, which allows
+             to better balance server usage over time in the same spirit
+             as roundrobin does, and smooth consistent hash unfairness.
              The default value is 2, which generally shows very good
-             distribution and performance. This algorithm is also known as
+             distribution and performance. For large farms with low loads
+             (less than a few requests per second per server), it may help
+             to raise it to 3 or even 4. This algorithm is also known as
              the Power of Two Random Choices and is described here :
              http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
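Following the advice added above for large, lightly loaded farms, raising the draw count could look like this (backend name and addresses are invented for illustration):

```
backend app
    # draw 3 candidates per request instead of the default 2
    balance random(3)
    server s1 192.0.2.1:80 check
    server s2 192.0.2.2:80 check
```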
@@ -6867,12 +6935,16 @@ compression offload

   See also : "compression type", "compression algo", "compression direction"

-compression direction <direction>
+compression direction <direction> (deprecated)
   Makes haproxy able to compress both requests and responses.
   Valid values are "request", to compress only requests, "response", to
   compress only responses, or "both", when you want to compress both.
   The default value is "response".

+  This directive is only relevant when legacy "filter compression" was
+  enabled, as with explicit comp-req and comp-res filters compression
+  direction is redundant.
+
   May be used in the following contexts: http

   See also : "compression type", "compression algo", "compression offload"
@@ -9254,6 +9326,9 @@ mode { tcp|http|log|spop }
               processing and switching will be possible. This is the mode which
               brings HAProxy most of its value.

+    haterm    The frontend will work in haterm HTTP benchmark mode. This is
+              not supported by backends. See doc/haterm.txt for details.
+
     log       When used in a backend section, it will turn the backend into a
               log backend. Such backend can be used as a log destination for
               any "log" directive by using the "backend@<name>" syntax. Log
@@ -10800,7 +10875,7 @@ no option logasap
   logging.

-option mysql-check [ user <username> [ { post-41 | pre-41 } ] ]
+option mysql-check [ user <username> [ { post-41 | pre-41 | post-80 } ] ]
   Use MySQL health checks for server testing

   May be used in the following contexts: tcp
@@ -10813,6 +10888,12 @@ option mysql-check [ user <username> [ { post-41 | pre-41 | post-80 } ] ]
               server.
     post-41   Send post v4.1 client compatible checks (the default)
     pre-41    Send pre v4.1 client compatible checks
+    post-80   Send post v8.0 client compatible checks with CLIENT_PLUGIN_AUTH
+              capability set and mysql_native_password as the authentication
+              plugin. Use this option when connecting to MySQL 8.0+ servers
+              where the health check user is created with mysql_native_password
+              authentication. Example:
+              CREATE USER 'haproxy'@'%' IDENTIFIED WITH mysql_native_password BY '';

   If you specify a username, the check consists of sending two MySQL packets,
   one Client Authentication packet, and one QUIT packet, to correctly close
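A sketch of wiring the new keyword into a backend (backend name, user and address invented for illustration; the user must exist as shown in the CREATE USER example above):

```
backend mysql_pool
    mode tcp
    option mysql-check user haproxy post-80
    server db1 192.0.2.10:3306 check
```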
@@ -16298,20 +16379,24 @@ set-status <status> [reason <str>]
http-response set-status 503 reason "Slow Down".

set-timeout { client | connect | queue | server | tarpit | tunnel } { <timeout> | <expr> }
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
- | - | - | - | - | X | X | -

This action overrides the specified "client", "connect", "queue", "server",
"tarpit" or "tunnel" timeout for the current stream only. Changing one timeout
does not influence any other timeouts, even if they are inherited from each
other during configuration parsing (see last example). The timeout can be
specified in milliseconds or with any other unit if the number is suffixed by
the unit as explained at the top of this document. It is also possible to
write an expression which must return a number interpreted as a timeout in
milliseconds.

Note that the connect, queue, server and tunnel timeouts are only relevant on
the backend side and thus this rule is only available for the proxies with
backend capabilities. Likewise, client timeout is only relevant for frontend
side. Tarpit timeout is available to both sides. The timeout value must be
non-null to obtain the expected results.
Example:
http-request set-timeout tunnel 5s
@@ -16321,6 +16406,18 @@ set-timeout { client | server | tunnel } { <timeout> | <expr> }
http-response set-timeout tunnel 5s
http-response set-timeout server res.hdr(X-Refresh-Seconds),mul(1000)
Example:
defaults
# This will set both tarpit and queue timeout to 5s as they are not
# defined
timeout connect 5s
timeout client 30s
timeout server 30s
listen foo
# This will only change the connect timeout to 10s without affecting
# queue or tarpit timeouts
http-request set-timeout connect 10s
set-tos <tos> (deprecated)
This is an alias for "set-fc-tos" (which should be used instead).
@@ -16544,7 +16641,7 @@ use-service <service-name>
http-request use-service prometheus-exporter if { path /metrics }

wait-for-body time <time> [ at-least <bytes> ] [use-large-buffer]
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
- | - | - | - | - | X | X | -
@@ -16562,6 +16659,10 @@ wait-for-body time <time> [ at-least <bytes> ]
happens first, this timeout will not occur even if the full body has
not yet been received.
"use-large-buffer" option may be set to allocate a large buffer if the regular
one is too small to store the message body. To be used, "tune.bufsize.large"
global option must be defined.
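Example (a sketch; the 128k large buffer size and the listener are arbitrary
choices for illustration):

    global
        tune.bufsize.large 131072

    frontend www
        bind :80
        http-request wait-for-body time 1s at-least 64k use-large-buffer if METH_POST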
This action may be used as a replacement for "option http-buffer-request".

Arguments :
@@ -16576,7 +16677,7 @@ wait-for-body time <time> [ at-least <bytes> ]
Example:
http-request wait-for-body time 1s at-least 1k if METH_POST

See also : "option http-buffer-request" and "tune.bufsize.large"

wait-for-handshake
@@ -17203,7 +17304,9 @@ interface <interface>
ktls <on|off> [ EXPERIMENTAL ]
Enables or disables ktls for those sockets. If enabled, kTLS will be used
if the kernel supports it and the cipher is compatible. This is only
available on Linux kernel 4.17 and above. Please note that some network
drivers and/or TLS stacks might restrict kTLS usage to TLS v1.2 only. See
also "force-tlsv12".
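Example (a sketch with a hypothetical certificate path; "force-tlsv12" works
around stacks limited to TLS v1.2):

    frontend www
        bind :443 ssl crt /etc/haproxy/certs/site.pem ktls on force-tlsv12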
label <label>
Sets an optional label for these sockets. It could be used to group sockets by
@@ -18306,7 +18409,10 @@ hash-key <key>
id The node keys will be derived from the server's numeric
identifier as set from "id" or which defaults to its position
in the server list. This is the default. Note that only the 28
lowest bits of the ID will be used (i.e. (id % 268435456)), so
it is better to only use values between 1 and this value to
avoid overlap.

addr The node keys will be derived from the server's address, when
available, or else fall back on "id".
@@ -18318,7 +18424,9 @@ hash-key <key>
HAProxy processes are balancing traffic to the same set of servers. If the
server order of each process is different (because, for example, DNS records
were resolved in different orders) then this will allow each independent
HAProxy process to agree on routing decisions. Note: "balance random" also
uses "hash-type consistent", and the quality of the distribution will depend
on the quality of the keys.
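For instance, independent processes can be made to agree on routing by hashing
the server addresses with a consistent hash (a sketch; addresses are
illustrative):

    backend app
        balance source
        hash-type consistent
        hash-key addr
        server s1 192.0.2.11:80
        server s2 192.0.2.12:80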
id <value>
May be used in the following contexts: tcp, http, log
@@ -18463,9 +18571,10 @@ See also: "option tcp-check", "option httpchk"
ktls <on|off> [ EXPERIMENTAL ]
May be used in the following contexts: tcp, http, log, peers, ring

Enables or disables ktls for those sockets. If enabled, kTLS will be used if
the kernel supports it and the cipher is compatible. This is only available
on Linux 4.17 and above. Please note that some network drivers and/or TLS
stacks might restrict kTLS usage to TLS v1.2 only. See also "force-tlsv12".
log-bufsize <bufsize>
May be used in the following contexts: log
@@ -20552,6 +20661,7 @@ ip.ver binary integer
ipmask(mask4[,mask6]) address address
json([input-code]) string string
json_query(json_path[,output_type]) string _outtype_
jwt_decrypt_jwk(<jwk>) string binary
jwt_decrypt_cert(<cert>) string binary
jwt_decrypt_secret(<secret>) string binary
jwt_header_query([json_path[,output_type]]) string string
@@ -21196,9 +21306,9 @@ ip.fp([<mode>])
can be used to distinguish between multiple apparently identical hosts. The
real-world use case is to refine the identification of misbehaving hosts
behind a shared IP address to avoid blocking legitimate users when only one
is misbehaving and needs to be blocked. The converter builds an 8-byte minimum
binary block based on the input. The bytes of the fingerprint are arranged
like this:
- byte 0: IP TOS field (see ip.tos)
- byte 1:
- bit 7: IPv6 (1) / IPv4 (0)
@@ -21213,10 +21323,13 @@ ip.fp([<mode>])
- bits 3..0: TCP window scaling + 1 (1..15) / 0 (no WS advertised)
- byte 3..4: tcp.win
- byte 5..6: tcp.options.mss, or zero if absent
- byte 7: 1 bit per present TCP option, with options 2 to 8 being mapped to
bits 0..6 respectively, and bit 7 indicating the presence of any
option from 9 to 255.
The <mode> argument permits to append more information to the fingerprint. By
default, when the <mode> argument is not set or is zero, the fingerprint is
solely made of the 8 bytes described above. If <mode> is specified as another
value, it then corresponds to the sum of the following values, and the
respective components will be concatenated to the fingerprint, in the order
below:
@@ -21226,7 +21339,7 @@ ip.fp([<mode>])
- 4: the source IP address is appended to the fingerprint, which adds
4 bytes for IPv4 and 16 for IPv6.

Example: make a 13..25 bytes fingerprint using the base FP, the TTL and the
source address (1+4=5):
frontend test
@@ -21402,6 +21515,44 @@ jwt_decrypt_cert(<cert>)
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_cert("/foo/bar.pem")]
jwt_decrypt_jwk(<jwk>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted thanks to the provided JSON Web Key (RFC7517).
The <jwk> parameter must be a valid JWK of type 'oct' or 'RSA' ('kty' field
of the JSON key) that can be provided either as a string or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used to decode tokens that have a symmetric-type
algorithm ("alg" field of the JOSE header) among the following: A128KW,
A192KW, A256KW, A128GCMKW, A192GCMKW, A256GCMKW, dir. In this case, we expect
the provided JWK to be of the 'oct' type. Please note that the A128KW and
A192KW algorithms are not available on AWS-LC and decryption will not work.
This converter also manages tokens that have an algorithm ("alg" field of
the JOSE header) among the following: RSA1_5, RSA-OAEP or RSA-OAEP-256. In
such a case an 'RSA' type JWK representing a private key must be provided.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Because of the way quotes, commas and double quotes are treated in the
configuration, the contents of the JWK must be properly escaped for this
converter to work properly (see section 2.2 for more information).
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_jwk(\'{\"kty\":\"oct\",\"k\":\"wAsgsg\"}\')]
# or via a variable
http-request set-var(txn.bearer) http_auth_bearer
http-request set-var(txn.jwk) str(\'{\"kty\":\"oct\",\"k\":\"Q-NFLlghQ\"}\')
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_jwk(txn.jwk)]
jwt_decrypt_secret(<secret>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
@@ -23715,12 +23866,18 @@ bc_src_port integer
bc_srv_queue integer
be_id integer
be_name string
be_connect_timeout integer
be_queue_timeout integer
be_server_timeout integer
be_tarpit_timeout integer
be_tunnel_timeout integer
bytes_in integer
bytes_out integer
cur_connect_timeout integer
cur_client_timeout integer
cur_queue_timeout integer
cur_server_timeout integer
cur_tarpit_timeout integer
cur_tunnel_timeout integer
dst ip
dst_conn integer
@@ -23754,6 +23911,7 @@ fc_src ip
fc_src_is_local boolean
fc_src_port integer
fc_unacked integer
fe_tarpit_timeout integer
fe_client_timeout integer
fe_defbe string
fe_id integer
@@ -24048,6 +24206,11 @@ be_id : integer
used in a frontend and no backend was used, it returns the current
frontend's id. It can also be used in a tcp-check or an http-check ruleset.
be_connect_timeout : integer
Returns the configuration value in millisecond for the connect timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_connect_timeout".
be_name : string
Returns a string containing the current backend's name. It can be used in
frontends with responses to check which backend processed the request. If
@@ -24055,11 +24218,21 @@ be_name : string
frontend's name. It can also be used in a tcp-check or an http-check
ruleset.
be_queue_timeout : integer
Returns the configuration value in millisecond for the queue timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_queue_timeout".
be_server_timeout : integer
Returns the configuration value in millisecond for the server timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_server_timeout".
be_tarpit_timeout : integer
Returns the configuration value in millisecond for the tarpit timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_tarpit_timeout".
be_tunnel_timeout : integer
Returns the configuration value in millisecond for the tunnel timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
@@ -24071,16 +24244,32 @@ bytes_in : integer
bytes_out : integer
See "res.bytes_in".
cur_connect_timeout : integer
Returns the currently applied connect timeout in millisecond for the stream.
In the default case, this will be equal to be_connect_timeout unless a
"set-timeout" rule has been applied. See also "be_connect_timeout".
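These fetches can for instance be appended to logs for debugging (a sketch;
the proxy and log-format field names are arbitrary):

    listen app
        bind :8080
        timeout connect 5s
        log-format "%ci:%cp cto=%[cur_connect_timeout] sto=%[cur_server_timeout]"
        server s1 192.0.2.30:80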
cur_client_timeout : integer
Returns the currently applied client timeout in millisecond for the stream.
In the default case, this will be equal to fe_client_timeout unless a
"set-timeout" rule has been applied. See also "fe_client_timeout".
cur_queue_timeout : integer
Returns the currently applied queue timeout in millisecond for the stream.
In the default case, this will be equal to be_queue_timeout unless a
"set-timeout" rule has been applied. See also "be_queue_timeout".
cur_server_timeout : integer
Returns the currently applied server timeout in millisecond for the stream.
In the default case, this will be equal to be_server_timeout unless a
"set-timeout" rule has been applied. See also "be_server_timeout".
cur_tarpit_timeout : integer
Returns the currently applied tarpit timeout in millisecond for the stream.
In the default case, this will be equal to fe_tarpit_timeout/be_tarpit_timeout
unless a "set-timeout" rule has been applied. See also "fe_tarpit_timeout"
and "be_tarpit_timeout".
cur_tunnel_timeout : integer
Returns the currently applied tunnel timeout in millisecond for the stream.
In the default case, this will be equal to be_tunnel_timeout unless a
@@ -24465,6 +24654,10 @@ fe_name : string
backends to check from which frontend it was called, or to stick all users
coming via a same frontend to the same server.
fe_tarpit_timeout : integer
Returns the configuration value in millisecond for the tarpit timeout of the
current frontend. This timeout can be overwritten by a "set-timeout" rule.
req.bytes_in : integer
This returns the number of bytes received from the client. The value
corresponds to what was received by HAProxy, including some headers and some
@@ -26723,9 +26916,10 @@ capture.req.uri : string
allocated.

capture.req.ver : string
This extracts the request's HTTP version and returns it with the format
"HTTP/<major>.<minor>". It can be used in both request, response, and logs
because it relies on persistent information. If the request version is not
valid, this sample fetch fails.
capture.res.hdr(<idx>) : string
This extracts the content of the header captured by the "capture response
@@ -26734,9 +26928,10 @@ capture.res.hdr(<idx>) : string
See also: "capture response header"

capture.res.ver : string
This extracts the response's HTTP version and returns it with the format
"HTTP/<major>.<minor>". It can be used in logs because it relies on
persistent information. If the response version is not valid, this sample
fetch fails.
cookie([<name>]) : string (deprecated)
This extracts the last occurrence of the cookie name <name> on a "Cookie"
@@ -27087,16 +27282,14 @@ req.timer.tq : integer
req.ver : string
req_ver : string (deprecated)
Returns the version string from the HTTP request, with the format
"<major>.<minor>". This can be useful for ACL. Some predefined ACL already
check for common versions. It can be used in both request, response, and
logs because it relies on persistent information. If the request version is
not valid, this sample fetch fails.

Common values are "1.0", "1.1", "2.0" or "3.0".
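Example (a sketch; the backend name is hypothetical):

    acl is_http2 req.ver 2.0
    use_backend h2_pool if is_http2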
ACL derivatives :
req.ver : exact string match
@@ -27300,8 +27493,9 @@ res.timer.hdr : integer
res.ver : string
resp_ver : string (deprecated)
Returns the version string from the HTTP response, with the format
"<major>.<minor>". This can be useful for logs, but is mostly there for
ACL. If the response version is not valid, this sample fetch fails.

It may be used in tcp-check based expect rules.
@@ -28561,7 +28755,7 @@ Please refer to the table below for currently defined aliases :
| Others |
+---+------+------------------------------------------------------+---------+
| | %B | bytes_read (from server to client) | numeric |
| | | %[res.bytes_in] | |
+---+------+------------------------------------------------------+---------+
| H | %CC | captured_request_cookie | string |
+---+------+------------------------------------------------------+---------+
@@ -29681,7 +29875,7 @@ See also : "filter"
9.1. Trace
----------

filter trace [name <name>] [random-forwarding] [max-fwd <max>] [hexdump]
Arguments:
<name> is an arbitrary name that will be reported in
@@ -29694,6 +29888,11 @@ filter trace [name <name>] [random-forwarding] [hexdump]
data. With this parameter, it only forwards a random
amount of the parsed data.
<max> is the maximum amount of data that can be forwarded at
a time. "max-fwd" option can be combined with the
random forwarding. <max> must be a positive integer.
0 means there is no limit.
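Example (a hypothetical listener capping each forwarding operation at 1 kB
while randomizing the forwarded amount):

    listen test
        bind :8080
        filter trace name TRC max-fwd 1024 random-forwarding
        server s1 192.0.2.20:80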
<hexdump> dumps all forwarded data to the server and the client.

This filter can be used as a base to develop new filters. It defines all
@@ -29709,7 +29908,21 @@ a server by adding some latencies in the processing.
9.2. HTTP compression
---------------------
filter comp-req
Enables a filter that explicitly tries to compress HTTP requests according to
"compression" settings. Implicitly sets "compression direction request".
filter comp-res
Enables a filter that explicitly tries to compress HTTP responses according to
"compression" settings. Implicitly sets "compression direction response".
filter compression (deprecated)
Alias for backward compatibility purposes that is functionally equivalent to
enabling both "comp-req" and "comp-res" filters. "compression" keyword must be
used to configure appropriate behavior:
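For instance, both directions may be enabled explicitly (a sketch; the
"compression" settings shown are illustrative and the backend is
hypothetical):

    frontend www
        bind :80
        filter comp-req
        filter comp-res
        compression algo gzip
        compression type text/html text/plain
        default_backend app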
The HTTP compression has been moved in a filter in HAProxy 1.7. "compression"
keyword must still be used to enable and configure the HTTP compression. And
@@ -31684,8 +31897,8 @@ ocsp-update [ off | on ]
jwt [ off | on ]
Allow for this certificate to be used for JWT validation or decryption via
the "jwt_verify_cert", "jwt_decrypt_cert" or "jwt_decrypt" converters when
set to 'on'. Its value defaults to 'off'.

When set to 'on' for a given certificate, the CLI command "del ssl cert" will
not work. In order to be deleted, a certificate must not be used, either for
@@ -31694,6 +31907,30 @@ jwt [ off | on ]
This option can be changed during runtime via the "add ssl jwt" and "del ssl
jwt" CLI commands. See also "show ssl jwt" CLI command.
generate-dummy [ off | on ]
Allow the generation of a private key and its self-signed certificate at
parsing time when set to 'on'. This may be useful if one does not have a
certificate at disposal during testing phase for instance. In this case,
"keytype", "bits" and "curves" may be used to customize the private key. When
not used, the default value is 'off'. (also see "keytype", "bits" and
"curves").
keytype [ RSA | ECDSA ]
Allow the selection of the private key type used to generate at parsing time
a self-signed certificate. This is the case if "generate-dummy" is set to 'on'
for this certificate. When not used, the default is 'RSA'.
(also see "generate-dummy").
bits <number>
Configure the number of bits to generate an RSA self-signed certificate when
"generate-dummy" is set to 'on' for this self-signed certificate and "keytype"
is set to 'RSA'. When not used, the default is 2048. (also see
"generate-dummy").
curves <string>
Configure the curves when "generate-dummy" is set to 'on' and "keytype" is
set to 'ECDSA' for this self-signed certificate. The default is 'P-384'.
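As an illustration, and assuming these keywords apply to a crt-store "load"
line, a throwaway ECDSA certificate could be generated at parsing time like
this (file name and curve are arbitrary):

    crt-store
        load crt "dummy.pem" generate-dummy on keytype ECDSA curves P-256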
12.8. ACME
----------
@@ -31706,6 +31943,9 @@ The ACME section allows to configure HAProxy as an ACMEv2 client. This feature
is experimental meaning that "expose-experimental-directives" must be in the
global section so this can be used.
A guide is available on the HAProxy wiki:
https://github.com/haproxy/wiki/wiki/ACME:--native-haproxy
Current limitations as of 3.3:

- The feature is limited to the HTTP-01 or DNS-01 challenges for now. HTTP-01
is completely handled by HAProxy, but DNS-01 needs either the dataplaneAPI or

doc/haterm.txt (new file, 140 lines)
@@ -0,0 +1,140 @@
------
HATerm
------
HAProxy's dummy HTTP
server for benchmarks
1. Background
-------------
HATerm is a dummy HTTP server that leverages the flexible and scalable
architecture of HAProxy to ease benchmarking of HTTP agents in all versions of
HTTP currently supported by HAProxy (HTTP/1, HTTP/2, HTTP/3), and both in clear
and TLS / QUIC. It follows the same principle as its ancestor HTTPTerm [1],
consisting in producing HTTP responses entirely configured by the request
parameters (size, response time, status, etc). It also preserves the spirit of
HTTPTerm, which does not require any configuration beyond an optional listening
address and a port number, though it also supports advanced configurations with
the full spectrum of HAProxy features for specific testing. The goal remains
to make it almost as fast as the original HTTPTerm so that it can become a
de-facto replacement, with a compatible command line and request parameters
that will not change users' habits.
[1] https://github.com/wtarreau/httpterm
2. Compilation
--------------
HATerm may be compiled in the same way as HAProxy but with "haterm" as Makefile
target to provide on the "make" command line as follows:
$ make -j $(nproc) TARGET=linux-glibc haterm
HATerm supports HTTPS/SSL/TCP:
$ make -j $(nproc) TARGET=linux-glibc USE_OPENSSL=1 haterm
It also supports QUIC:
$ make -j $(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 haterm
Technically speaking, it uses the regular HAProxy source and object code with a
different command line parser. As such, all build options supported by HAProxy
also apply to HATerm. See INSTALL for more details about how to compile them.
3. Execution
------------
HATerm is a very easy to use HTTP server with support for all the HTTP
versions. It displays its usage when run without argument or wrong arguments:
$ ./haterm
Usage : haterm -L [<ip>]:<clear port>[:<TCP&QUIC SSL port>] [-L...]* [opts]
where <opts> may be any combination of:
-G <line> : multiple option; append <line> to the "global" section
-F <line> : multiple option; append <line> to the "frontend" section
-T <line> : multiple option; append <line> to the "traces" section
-C : dump the configuration and exit
-D : goes daemon
-b <keysize> : RSA key size in bits (ex: "2048", "4096"...)
-c <curves> : ECDSA curves (ex: "P-256", "P-384"...)
-v : shows version
-d : enable the traces for all http protocols
--quic-bind-opts <opts> : append options to QUIC "bind" lines
--tcp-bind-opts <opts> : append options to TCP "bind" lines
Arguments -G, -F, -T permit to append one or multiple lines at the end of their
respective sections. A tab character ('\t') is prepended at the beginning of
the argument, and a line feed ('\n') is appended at the end. It is also
possible to insert multiple lines at once using escape sequences '\n' and '\t'
inside the string argument.
As HAProxy, HATerm may listen on several TCP/UDP addresses which can be
provided by multiple "-L" options. To be functional, it needs at least one
correct "-L" option to be set.
Examples:
$ ./haterm -L 127.0.0.1:8888 # listen on 127.0.0.1:8888 TCP address
$ ./haterm -L 127.0.0.1:8888:8889 # listen on 127.0.0.1:8888 TCP address,
# 127.0.0.1:8889 SSL/TCP address,
# and 127.0.0.1:8889 QUIC/UDP address
$ ./haterm -L 127.0.0.1:8888:8889 -L [::1]:8888:8889
With USE_QUIC_OPENSSL_COMPAT support, the user must configure a global
section as for HAProxy. HATerm builds its configuration internally in
memory, as HAProxy does from configuration files:
$ ./haterm -L 127.0.0.1:8888:8889
[NOTICE] (1371578) : haproxy version is 3.4-dev4-ba5eab-28
[NOTICE] (1371578) : path to executable is ./haterm
[ALERT] (1371578) : Binding [haterm cfgfile:12] for frontend
___haterm_frontend___: this SSL library does not
support the QUIC protocol. A limited compatibility
layer may be enabled using the "limited-quic" global
option if desired.
Such an alert may be fixed with the "-G" option:
$ ./haterm -L 127.0.0.1:8888:8889 -G "limited-quic"
When the SSL support is not compiled in, the second port is ignored. This is
also the case for the QUIC support.
HATerm adjusts its responses depending on the requests it receives. An empty
query string provides the information about how the URIs are understood by
HATerm:
$ curl http://127.0.0.1:8888/?
HAProxy's dummy HTTP server for benchmarks - version 3.4-dev4.
All integer argument values are in the form [digits]*[kmgr] (r=random(0..1))
The following arguments are supported to override the default objects :
- /?s=<size> return <size> bytes.
E.g. /?s=20k
- /?r=<retcode> present <retcode> as the HTTP return code.
E.g. /?r=404
- /?c=<cache> set the return as not cacheable if <1.
E.g. /?c=0
- /?A=<req-after> drain the request body after sending the response.
E.g. /?A=1
- /?C=<close> force the response to use close if >0.
E.g. /?C=1
- /?K=<keep-alive> force the response to use keep-alive if >0.
E.g. /?K=1
- /?t=<time> wait <time> milliseconds before responding.
E.g. /?t=500
- /?k=<enable> Enable transfer encoding chunked with only one chunk
if >0.
- /?R=<enable> Enable sending random data if >0.
Note that those arguments may be combined on one line, separated by any of the
delimiters [&?,;/]:
- GET /?s=20k&c=1&t=700&K=30r HTTP/1.0
- GET /?r=500?s=0?c=0?t=1000 HTTP/1.0


@ -11,7 +11,7 @@ default init, this was controversial but fedora and archlinux already uses it.
At this time HAProxy still had a multi-process model, and the way haproxy
was working was incompatible with the daemon mode.

Systemd is compatible with traditional forking services, but somehow HAProxy
is different. To work correctly, systemd needs a main PID, this is the PID of
the process that systemd will supervise.
@ -45,7 +45,7 @@ However the wrapper suffered from several problems:
### mworker V1

HAProxy 1.8 got rid of the wrapper, which was replaced by the master worker
mode. This first version was basically a reintegration of the wrapper features
within HAProxy. HAProxy is launched with the -W flag, reads the configuration
and then forks. In mworker mode, the master is usually launched as a root process,
@ -86,7 +86,7 @@ retrieved automatically.
The master is supervising the workers, when a current worker (not a previous one
from before the reload) is exiting without being asked for a reload, the master
will emit an "exit-on-failure" error and will kill every worker with a SIGTERM
and exit with the same error code as the failed worker, this behavior can be
changed by using the "no exit-on-failure" option in the global section.

While the master is supervising the workers using the wait() function, the
@ -186,8 +186,8 @@ number that can be found in HAPROXY_PROCESSES. With this change the stats socket
in the configuration is less useful and everything can be done from the master
CLI.

With 2.7, the reload mechanism of the master CLI evolved, with previous versions,
this mechanism was asynchronous, so once the `reload` command was received, the
master would reload, the active master CLI connection was closed, and there was
no way to return a status as a response to the `reload` command. To achieve a
synchronous reload, a dedicated sockpair is used, one side uses a master CLI
@ -208,3 +208,38 @@ starts with -st to achieve a hard stop on the previous worker.
Version 3.0 got rid of the libsystemd dependencies for sd_notify() after the
events of xz/openssh, the function is now implemented directly in haproxy in
src/systemd.c.
### mworker V3
This version was implemented with HAProxy 3.1, the goal was to stop parsing and
applying the configuration in the master process.
One of the caveats of the previous implementation was that the parser could take
a lot of time, and the master process would be stuck in the parser instead of
handling its polling loop, signals etc. Some parts of the configuration parsing
could also be less reliable with third-party code (EXTRA_OBJS), it could, for
example, allow opening FDs and not closing them before the reload which
would crash the master after a few reloads.
The startup of the master-worker was reorganized this way:
- the "discovery" mode, which is a lighter configuration parsing step, only
  applies the configuration that needs to be effective for the master process.
  For example, "master-worker", "mworker-max-reloads" and fewer than 20 other
  keywords that are identified by KWF_DISCOVERY in the code. It is really fast
  as it doesn't need all the configuration to be applied in the master process.
- the master will then fork a worker, with a PROC_O_INIT flag. This worker has
a temporary sockpair connected to the master CLI. Once the worker is forked,
the master initializes its configuration and starts its polling loop.
- The newly forked worker will try to parse the configuration, which could
result in a failure (exit 1), or any bad error code. In case of success, the
worker will send a "READY" message to the master CLI then close this FD. At
this step everything is initialized and the worker can enter its polling
loop.
- The master then waits for the worker; it can either:
* receive the READY message over the mCLI, resulting in a successful loading
of haproxy
* receive a SIGCHLD, meaning the worker exited and couldn't load


@ -1725,6 +1725,30 @@ add acl [@<ver>] <acl> <pattern>
This command cannot be used if the reference <acl> is a name also used with
a map. In this case, the "add map" command must be used instead.
add backend <name> from <defproxy> [mode <mode>] [guid <guid>] [ EXPERIMENTAL ]
Instantiate a new backend proxy with the name <name>.
Only TCP or HTTP proxies can be created. All of the settings are inherited
from the <defproxy> default proxy instance. By default, it is mandatory to
specify the backend mode via the argument of the same name, unless <defproxy>
already defines it explicitly. An optional GUID argument may also be
specified.
Servers can be added via the command "add server". The backend is initialized
in the unpublished state. Once considered ready for traffic, use "publish
backend" to expose the newly created instance.
All named default proxies can be used, given that they validate the same
inheritance rules applied during configuration parsing. There are some
exceptions though, for example when the mode is neither TCP nor HTTP. Another
exception is that it is not yet possible to use a default proxy which
references custom HTTP errors, for example via the errorfiles or http-rules
keywords.
This command is restricted and can only be issued on sockets configured for
level "admin". Moreover, this feature is still considered in development so it
also requires experimental mode (see "experimental-mode on").
add map [@<ver>] <map> <key> <value>
add map [@<ver>] <map> <payload>
Add an entry into the map <map> to associate the value <value> to the key
@ -2100,6 +2124,30 @@ del acl <acl> [<key>|#<ref>]
listing the content of the acl. Note that if the reference <acl> is a name and
is shared with a map, the entry will also be deleted in the map.
del backend <name>
Removes the backend proxy with the name <name>.
This operation is only possible for TCP or HTTP proxies. To succeed, the
backend instance must have been first unpublished. Also, all of its servers
must first be removed (via "del server" CLI). Finally, no stream must still
be attached to the backend instance.
There are additional restrictions which prevent backend removal. First, a
backend cannot be removed if it is explicitly referenced by config elements,
for example via a use_backend rule or in sample expressions. Some proxy
options are also incompatible with runtime deletion. Currently, this is the
case when the deprecated "dispatch" or "option transparent" settings are
used. Also, a backend cannot be removed if there is a stick-table declared
in it. Finally, it is impossible for now to remove a backend if QUIC servers
were present in it.
It can be useful to use "wait be-removable" prior to this command to check
for the aforementioned prerequisites. This also provides a method to wait for
the final closure of the streams attached to the target backend.
This command is restricted and can only be issued on sockets configured for
level "admin". Moreover, this feature is still considered in development so it
also requires experimental mode (see "experimental-mode on").
del map <map> [<key>|#<ref>]
Delete all the map entries from the map <map> corresponding to the key <key>.
<map> is the #<id> or the <name> returned by "show map". If the <ref> is used,
@ -2534,7 +2582,8 @@ set maxconn global <maxconn>
delayed until the threshold is reached. A value of zero restores the initial
setting.

set profiling memory { on | off }
set profiling tasks { auto | on | off | lock | no-lock | memory | no-memory }
Enables or disables CPU or memory profiling for the indicated subsystem. This
is equivalent to setting or clearing the "profiling" settings in the "global"
section of the configuration file. Please also see "show profiling". Note
@ -2544,6 +2593,13 @@ set profiling { tasks | memory } { auto | on | off }
on the linux-glibc target), and requires USE_MEMORY_PROFILING to be set at
compile time.

For tasks profiling, it is possible to enable or disable the collection of
per-task lock and memory timings at runtime, but the change is only taken
into account next time the profiler switches from off/auto to on (either
automatically or manually). Thus when using "no-lock" to disable per-task
lock profiling and save CPU cycles, it is recommended to flip the task
profiling off then on to commit the change.
set rate-limit connections global <value>
Change the process-wide connection rate limit, which is set by the global
'maxconnrate' setting. A value of zero disables the limitation. This limit
@ -4494,6 +4550,13 @@ wait { -h | <delay> } [<condition> [<args>...]]
specified condition to be satisfied, to unrecoverably fail, or to remain
unsatisfied for the whole <delay> duration. The supported conditions are:
- be-removable <proxy> : this will wait for the specified proxy backend to be
removable by the "del backend" command. Some conditions will never be
accepted (e.g. backend not yet unpublished or with servers in it) and will
cause the report of a specific error message indicating what condition is
not met. If everything is OK before the delay, a success is returned and
the operation is terminated.
- srv-removable <proxy>/<server> : this will wait for the specified server to
be removable by the "del server" command, i.e. be in maintenance and no
longer have any connection on it (neither active nor idle). Some conditions


@ -627,7 +627,10 @@ For the type PP2_TYPE_SSL, the value is itself a defined like this :
    uint8_t client;
    uint32_t verify;
    struct pp2_tlv sub_tlv[0];
} __attribute__((packed));
Note the "packed" attribute which indicates that each field starts immediately
after the previous one (i.e. without type-specific alignment or padding).
The <verify> field will be zero if the client presented a certificate
and it was successfully verified, and non-zero otherwise.


@ -24,7 +24,7 @@ vtest installation
------------------------

To use vtest you will have to download and compile the recent vtest
sources found at https://github.com/vtest/VTest2.

To compile vtest:


@ -102,7 +102,10 @@ enum act_name {
/* Timeout name valid for a set-timeout rule */
enum act_timeout_name {
    ACT_TIMEOUT_CONNECT,
    ACT_TIMEOUT_SERVER,
    ACT_TIMEOUT_QUEUE,
    ACT_TIMEOUT_TARPIT,
    ACT_TIMEOUT_TUNNEL,
    ACT_TIMEOUT_CLIENT,
};


@ -33,6 +33,8 @@
#define HA_PROF_TASKS_MASK  0x00000003  /* per-task CPU profiling mask */
#define HA_PROF_MEMORY      0x00000004  /* memory profiling */
#define HA_PROF_TASKS_MEM   0x00000008  /* per-task CPU profiling with memory */
#define HA_PROF_TASKS_LOCK  0x00000010  /* per-task CPU profiling with locks */

#ifdef USE_MEMORY_PROFILING


@ -192,6 +192,7 @@ struct lbprm {
    void (*server_requeue)(struct server *); /* function used to place the server where it must be */
    void (*proxy_deinit)(struct proxy *);    /* to be called when we're destroying the proxy */
    void (*server_deinit)(struct server *);  /* to be called when we're destroying the server */
    int (*server_init)(struct server *);     /* initialize a freshly added server (runtime); <0=fail. */
};

#endif /* _HAPROXY_BACKEND_T_H */


@ -69,6 +69,7 @@ int backend_parse_balance(const char **args, char **err, struct proxy *curproxy)
int tcp_persist_rdp_cookie(struct stream *s, struct channel *req, int an_bit);
int be_downtime(struct proxy *px);
int be_supports_dynamic_srv(struct proxy *px, char **msg);
void recount_servers(struct proxy *px);
void update_backend_weight(struct proxy *px);


@ -24,6 +24,7 @@
#include <haproxy/api-t.h>
#include <haproxy/buf-t.h>
#include <haproxy/filters-t.h>
#include <haproxy/show_flags-t.h>

/* The CF_* macros designate Channel Flags, which may be ORed in the bit field
@ -205,6 +206,7 @@ struct channel {
    unsigned char xfer_large;  /* number of consecutive large xfers */
    unsigned char xfer_small;  /* number of consecutive small xfers */
    int analyse_exp;           /* expiration date for current analysers (if set) */
    struct chn_flt flt;        /* current state of filters active on this channel */
};


@ -376,6 +376,7 @@ static inline void channel_add_input(struct channel *chn, unsigned int len)
        c_adv(chn, fwd);
    }

    /* notify that some data was read */
    chn_prod(chn)->bytes_in += len;
    chn->flags |= CF_READ_EVENT;
}
@ -787,8 +788,12 @@ static inline int channel_recv_max(const struct channel *chn)
 */
static inline size_t channel_data_limit(const struct channel *chn)
{
    size_t max;

    if (!c_size(chn))
        return 0;

    max = (c_size(chn) - global.tune.maxrewrite);
    if (IS_HTX_STRM(chn_strm(chn)))
        max -= HTX_BUF_OVERHEAD;
    return max;


@ -32,6 +32,7 @@
extern struct pool_head *pool_head_trash;
extern struct pool_head *pool_head_large_trash;

/* function prototypes */
@ -46,6 +47,9 @@ int chunk_asciiencode(struct buffer *dst, struct buffer *src, char qc);
int chunk_strcmp(const struct buffer *chk, const char *str);
int chunk_strcasecmp(const struct buffer *chk, const char *str);
struct buffer *get_trash_chunk(void);
struct buffer *get_large_trash_chunk(void);
struct buffer *get_trash_chunk_sz(size_t size);
struct buffer *get_larger_trash_chunk(struct buffer *chunk);
int init_trash_buffers(int first);

static inline void chunk_reset(struct buffer *chk)
@ -106,12 +110,53 @@ static forceinline struct buffer *alloc_trash_chunk(void)
    return chunk;
}
/*
 * Allocate a large trash chunk from the reentrant pool. The buffer starts at
 * the end of the chunk. This chunk must be freed using free_trash_chunk(). This
 * call may fail and the caller is responsible for checking that the returned
 * pointer is not NULL.
 */
static forceinline struct buffer *alloc_large_trash_chunk(void)
{
    struct buffer *chunk;

    if (!pool_head_large_trash)
        return NULL;

    chunk = pool_alloc(pool_head_large_trash);
    if (chunk) {
        char *buf = (char *)chunk + sizeof(struct buffer);
        *buf = 0;
        chunk_init(chunk, buf,
                   pool_head_large_trash->size - sizeof(struct buffer));
    }
    return chunk;
}
/*
 * Allocate a trash chunk according to the requested size. This chunk must be
 * freed using free_trash_chunk(). This call may fail and the caller is
 * responsible for checking that the returned pointer is not NULL.
 */
static forceinline struct buffer *alloc_trash_chunk_sz(size_t size)
{
    if (likely(size <= pool_head_trash->size))
        return alloc_trash_chunk();
    else if (pool_head_large_trash && size <= pool_head_large_trash->size)
        return alloc_large_trash_chunk();
    else
        return NULL;
}
/*
 * free a trash chunk allocated by alloc_trash_chunk(). NOP on NULL.
 */
static forceinline void free_trash_chunk(struct buffer *chunk)
{
    if (likely(chunk && chunk->size == pool_head_trash->size - sizeof(struct buffer)))
        pool_free(pool_head_trash, chunk);
    else
        pool_free(pool_head_large_trash, chunk);
}
/* copies chunk <src> into <chk>. Returns 0 in case of failure. */


@ -100,6 +100,7 @@ enum cli_wait_err {
enum cli_wait_cond {
    CLI_WAIT_COND_NONE,      // no condition to wait on
    CLI_WAIT_COND_SRV_UNUSED,// wait for server to become unused
    CLI_WAIT_COND_BE_UNUSED, // wait for backend to become unused
};

struct cli_wait_ctx {


@ -185,6 +185,29 @@ struct be_counters {
    } p; /* protocol-specific stats */
};
/* extra counters that are registered at boot by various modules */
enum counters_type {
    COUNTERS_FE = 0,
    COUNTERS_BE,
    COUNTERS_SV,
    COUNTERS_LI,
    COUNTERS_RSLV,
    COUNTERS_OFF_END /* must always be last */
};

struct extra_counters {
    char **datap;            /* points to pointer to heap containing counters allocated in a linear fashion */
    size_t size;             /* size of allocated data */
    size_t tgrp_step;        /* distance in words between two datap for consecutive tgroups, 0 for single */
    uint nbtgrp;             /* number of thread groups accessing these counters */
    enum counters_type type; /* type of object containing the counters */
};

#define EXTRA_COUNTERS(name) \
    struct extra_counters *name
#endif /* _HAPROXY_COUNTERS_T_H */

/*


@ -26,6 +26,9 @@
#include <haproxy/counters-t.h>
#include <haproxy/guid-t.h>
#include <haproxy/global.h>

extern THREAD_LOCAL void *trash_counters;

int counters_fe_shared_prepare(struct fe_counters_shared *counters, const struct guid_node *guid, char **errmsg);
int counters_be_shared_prepare(struct be_counters_shared *counters, const struct guid_node *guid, char **errmsg);
@ -101,4 +104,106 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
    __ret; \
})
#define COUNTERS_UPDATE_MAX(counter, count) \
do { \
if (!(global.tune.options & GTUNE_NO_MAX_COUNTER)) \
HA_ATOMIC_UPDATE_MAX(counter, count); \
} while (0)
/* Manipulation of extra_counters, for boot-time registrable modules */
/* retrieve the base storage of extra counters (first tgroup if any) */
#define EXTRA_COUNTERS_BASE(counters, mod) \
(likely(counters) ? \
((void *)(*(counters)->datap + (mod)->counters_off[(counters)->type])) : \
(trash_counters))
/* retrieve the pointer to the extra counters storage for module <mod> for the
* current TGID.
*/
#define EXTRA_COUNTERS_GET(counters, mod) \
(likely(counters) ? \
((void *)(counters)->datap[(counters)->tgrp_step * (tgid - 1)] + \
(mod)->counters_off[(counters)->type]) : \
(trash_counters))
#define EXTRA_COUNTERS_REGISTER(counters, ctype, alloc_failed_label, storage, step) \
do { \
typeof(*counters) _ctr; \
_ctr = calloc(1, sizeof(*_ctr)); \
if (!_ctr) \
goto alloc_failed_label; \
_ctr->type = (ctype); \
_ctr->tgrp_step = (step); \
_ctr->datap = (storage); \
*(counters) = _ctr; \
} while (0)
#define EXTRA_COUNTERS_ADD(mod, counters, new_counters, csize) \
do { \
typeof(counters) _ctr = (counters); \
(mod)->counters_off[_ctr->type] = _ctr->size; \
_ctr->size += (csize); \
} while (0)
#define EXTRA_COUNTERS_ALLOC(counters, alloc_failed_label, nbtg) \
do { \
typeof(counters) _ctr = (counters); \
char **datap = _ctr->datap; \
uint tgrp; \
_ctr->nbtgrp = _ctr->tgrp_step ? (nbtg) : 1; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
*datap = malloc((_ctr)->size); \
if (!*_ctr->datap) \
goto alloc_failed_label; \
datap += _ctr->tgrp_step; \
} \
} while (0)
#define EXTRA_COUNTERS_INIT(counters, mod, init_counters, init_counters_size) \
do { \
typeof(counters) _ctr = (counters); \
char **datap = _ctr->datap; \
uint tgrp; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
memcpy(*datap + mod->counters_off[_ctr->type], \
(init_counters), (init_counters_size)); \
datap += _ctr->tgrp_step; \
} \
} while (0)
#define EXTRA_COUNTERS_FREE(counters) \
do { \
typeof(counters) _ctr = (counters); \
if (_ctr) { \
char **datap = _ctr->datap; \
uint tgrp; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
ha_free(datap); \
datap += _ctr->tgrp_step; \
} \
free(_ctr); \
} \
} while (0)
/* aggregate all values of <metricp> over the thread groups handled by
* <counters>. <metricp> MUST correspond to an entry of the first tgrp of
* <counters>. The number of groups and the step are found in <counters>. The
* type of the return value is the same as <metricp>, and must be a scalar so
* that values are summed before being returned.
*/
#define EXTRA_COUNTERS_AGGR(counters, metricp) \
({ \
typeof(counters) _ctr = (counters); \
typeof(metricp) *valp, _ret = 0; \
if (_ctr) { \
size_t ofs = (char *)&metricp - _ctr->datap[0]; \
uint tgrp; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
valp = (typeof(valp))(_ctr->datap[tgrp * (counters)->tgrp_step] + ofs); \
_ret += HA_ATOMIC_LOAD(valp); \
} \
} \
_ret; \
})
#endif /* _HAPROXY_COUNTERS_H */


@ -24,12 +24,12 @@
#include <import/ebtree-t.h>

#include <haproxy/buf-t.h>
#include <haproxy/connection-t.h>
#include <haproxy/counters-t.h>
#include <haproxy/dgram-t.h>
#include <haproxy/dns_ring-t.h>
#include <haproxy/obj_type-t.h>
#include <haproxy/task-t.h>
#include <haproxy/thread.h>
@ -152,6 +152,7 @@ struct dns_nameserver {
    struct dns_stream_server *stream; /* used for tcp dns */

    EXTRA_COUNTERS(extra_counters);
    char *extra_counters_storage;     /* storage used for extra_counters above */
    struct dns_counters *counters;
    struct list list;                 /* nameserver chained list */


@ -36,6 +36,7 @@
#include <haproxy/pool.h>

extern struct pool_head *pool_head_buffer;
extern struct pool_head *pool_head_large_buffer;

int init_buffer(void);
void buffer_dump(FILE *o, struct buffer *b, int from, int to);
@ -53,6 +54,30 @@ static inline int buffer_almost_full(const struct buffer *buf)
    return b_almost_full(buf);
}
/* Return 1 if <sz> is the default buffer size */
static inline int b_is_default_sz(size_t sz)
{
    return (sz == pool_head_buffer->size);
}

/* Return 1 if <sz> is the size of a large buffer (always false if large buffers are not configured) */
static inline int b_is_large_sz(size_t sz)
{
    return (pool_head_large_buffer && sz == pool_head_large_buffer->size);
}

/* Return 1 if <buf> is a default buffer */
static inline int b_is_default(struct buffer *buf)
{
    return b_is_default_sz(b_size(buf));
}

/* Return 1 if <buf> is a large buffer (always 0 if large buffers are not configured) */
static inline int b_is_large(struct buffer *buf)
{
    return b_is_large_sz(b_size(buf));
}
/**************************************************/
/* Functions below are used for buffer allocation */
/**************************************************/
@ -136,13 +161,18 @@ static inline char *__b_get_emergency_buf(void)
#define __b_free(_buf) \
    do { \
        char *area = (_buf)->area; \
        size_t sz = (_buf)->size; \
        \
        /* let's first clear the area to save an occasional "show sess all" \
         * glancing over our shoulder from getting a dangling pointer. \
         */ \
        *(_buf) = BUF_NULL; \
        __ha_barrier_store(); \
        /* if enabled, large buffers are always strictly greater \
         * than the default buffers */ \
        if (unlikely(b_is_large_sz(sz))) \
            pool_free(pool_head_large_buffer, area); \
        else if (th_ctx->emergency_bufs_left < global.tune.reserved_bufs) \
            th_ctx->emergency_bufs[th_ctx->emergency_bufs_left++] = area; \
        else \
            pool_free(pool_head_buffer, area); \


@ -232,22 +232,28 @@ struct filter {
     *                              0: request channel, 1: response channel */
    unsigned int pre_analyzers;  /* bit field indicating analyzers to pre-process */
    unsigned int post_analyzers; /* bit field indicating analyzers to post-process */
    struct list list;            /* Filter list for the stream */
    /* req_list and res_list are exactly equivalent, except the order may differ */
    struct list req_list;        /* Filter list for request channel */
    struct list res_list;        /* Filter list for response channel */
};

/*
 * Structure representing the "global" state of filters attached to a stream.
 * Doesn't hold much information, as the channels themselves hold a chn_flt
 * struct which contains the per-channel members.
 */
struct strm_flt {
    struct list filters;         /* List of filters attached to a stream */
    unsigned short flags;        /* STRM_FL_* */
};

/* structure holding filter state for some members that are channel oriented */
struct chn_flt {
    struct list filters;           /* List of filters attached to a channel */
    struct filter *current;        /* From which filter resume processing, for a specific channel. */
    unsigned char nb_data_filters; /* Number of data filters registered on channel */
    unsigned long long offset;
};

#endif /* _HAPROXY_FILTERS_T_H */


@ -28,7 +28,9 @@
#include <haproxy/stream-t.h> #include <haproxy/stream-t.h>
extern const char *trace_flt_id; extern const char *trace_flt_id;
extern const char *http_comp_flt_id; extern const char *http_comp_req_flt_id;
extern const char *http_comp_res_flt_id;
extern const char *cache_store_flt_id; extern const char *cache_store_flt_id;
extern const char *spoe_filter_id; extern const char *spoe_filter_id;
extern const char *fcgi_flt_id; extern const char *fcgi_flt_id;
@ -40,13 +42,13 @@ extern const char *fcgi_flt_id;
/* Useful macros to access per-channel values. It can be safely used inside /* Useful macros to access per-channel values. It can be safely used inside
* filters. */ * filters. */
#define CHN_IDX(chn) (((chn)->flags & CF_ISRESP) == CF_ISRESP) #define CHN_IDX(chn) (((chn)->flags & CF_ISRESP) == CF_ISRESP)
#define FLT_STRM_OFF(s, chn) (strm_flt(s)->offset[CHN_IDX(chn)]) #define FLT_STRM_OFF(s, chn) (chn->flt.offset)
#define FLT_OFF(flt, chn) ((flt)->offset[CHN_IDX(chn)]) #define FLT_OFF(flt, chn) ((flt)->offset[CHN_IDX(chn)])
#define HAS_FILTERS(strm) ((strm)->strm_flt.flags & STRM_FLT_FL_HAS_FILTERS) #define HAS_FILTERS(strm) ((strm)->strm_flt.flags & STRM_FLT_FL_HAS_FILTERS)
#define HAS_REQ_DATA_FILTERS(strm) ((strm)->strm_flt.nb_req_data_filters != 0) #define HAS_REQ_DATA_FILTERS(strm) ((strm)->req.flt.nb_data_filters != 0)
#define HAS_RSP_DATA_FILTERS(strm) ((strm)->strm_flt.nb_rsp_data_filters != 0) #define HAS_RSP_DATA_FILTERS(strm) ((strm)->res.flt.nb_data_filters != 0)
#define HAS_DATA_FILTERS(strm, chn) (((chn)->flags & CF_ISRESP) ? HAS_RSP_DATA_FILTERS(strm) : HAS_REQ_DATA_FILTERS(strm)) #define HAS_DATA_FILTERS(strm, chn) (((chn)->flags & CF_ISRESP) ? HAS_RSP_DATA_FILTERS(strm) : HAS_REQ_DATA_FILTERS(strm))
#define IS_REQ_DATA_FILTER(flt) ((flt)->flags & FLT_FL_IS_REQ_DATA_FILTER) #define IS_REQ_DATA_FILTER(flt) ((flt)->flags & FLT_FL_IS_REQ_DATA_FILTER)
@ -137,14 +139,11 @@ static inline void
register_data_filter(struct stream *s, struct channel *chn, struct filter *filter) register_data_filter(struct stream *s, struct channel *chn, struct filter *filter)
{ {
if (!IS_DATA_FILTER(filter, chn)) { if (!IS_DATA_FILTER(filter, chn)) {
if (chn->flags & CF_ISRESP) { if (chn->flags & CF_ISRESP)
filter->flags |= FLT_FL_IS_RSP_DATA_FILTER; filter->flags |= FLT_FL_IS_RSP_DATA_FILTER;
strm_flt(s)->nb_rsp_data_filters++; else
}
else {
filter->flags |= FLT_FL_IS_REQ_DATA_FILTER; filter->flags |= FLT_FL_IS_REQ_DATA_FILTER;
strm_flt(s)->nb_req_data_filters++; chn->flt.nb_data_filters++;
}
} }
} }
@ -153,16 +152,64 @@ static inline void
unregister_data_filter(struct stream *s, struct channel *chn, struct filter *filter) unregister_data_filter(struct stream *s, struct channel *chn, struct filter *filter)
{ {
if (IS_DATA_FILTER(filter, chn)) { if (IS_DATA_FILTER(filter, chn)) {
if (chn->flags & CF_ISRESP) { if (chn->flags & CF_ISRESP)
filter->flags &= ~FLT_FL_IS_RSP_DATA_FILTER; filter->flags &= ~FLT_FL_IS_RSP_DATA_FILTER;
strm_flt(s)->nb_rsp_data_filters--; else
filter->flags &= ~FLT_FL_IS_REQ_DATA_FILTER;
chn->flt.nb_data_filters--;
}
}
/*
* flt_list_start() and flt_list_next() can be used to iterate over the list of filters
* for a given <strm> and <chn> combination. It will automatically choose the proper
* list to iterate from depending on the context.
*
* flt_list_start() has to be called exactly once to get the first value from the list;
* to get the following values, use flt_list_next() until NULL is returned.
*
* Example:
*
* struct filter *filter;
*
* for (filter = flt_list_start(stream, channel); filter;
* filter = flt_list_next(stream, channel, filter)) {
* ...
* }
*/
static inline struct filter *flt_list_start(struct stream *strm, struct channel *chn)
{
struct filter *filter;
if (chn->flags & CF_ISRESP) {
filter = LIST_NEXT(&chn->flt.filters, struct filter *, res_list);
if (&filter->res_list == &chn->flt.filters)
filter = NULL; /* empty list */
} }
else { else {
filter->flags &= ~FLT_FL_IS_REQ_DATA_FILTER; filter = LIST_NEXT(&chn->flt.filters, struct filter *, req_list);
strm_flt(s)->nb_req_data_filters--; if (&filter->req_list == &chn->flt.filters)
filter = NULL; /* empty list */
} }
return filter;
}
static inline struct filter *flt_list_next(struct stream *strm, struct channel *chn,
struct filter *filter)
{
if (chn->flags & CF_ISRESP) {
filter = LIST_NEXT(&filter->res_list, struct filter *, res_list);
if (&filter->res_list == &chn->flt.filters)
filter = NULL; /* end of list */
} }
else {
filter = LIST_NEXT(&filter->req_list, struct filter *, req_list);
if (&filter->req_list == &chn->flt.filters)
filter = NULL; /* end of list */
}
return filter;
} }
/* This function must be called when a filter alter payload data. It updates /* This function must be called when a filter alter payload data. It updates
@ -177,7 +224,8 @@ flt_update_offsets(struct filter *filter, struct channel *chn, int len)
struct stream *s = chn_strm(chn); struct stream *s = chn_strm(chn);
struct filter *f; struct filter *f;
list_for_each_entry(f, &strm_flt(s)->filters, list) { for (f = flt_list_start(s, chn); f;
f = flt_list_next(s, chn, f)) {
if (f == filter) if (f == filter)
break; break;
FLT_OFF(f, chn) += len; FLT_OFF(f, chn) += len;


@ -86,6 +86,7 @@
#define GTUNE_LISTENER_MQ_OPT (1<<28) #define GTUNE_LISTENER_MQ_OPT (1<<28)
#define GTUNE_LISTENER_MQ_ANY (GTUNE_LISTENER_MQ_FAIR | GTUNE_LISTENER_MQ_OPT) #define GTUNE_LISTENER_MQ_ANY (GTUNE_LISTENER_MQ_FAIR | GTUNE_LISTENER_MQ_OPT)
#define GTUNE_NO_KTLS (1<<29) #define GTUNE_NO_KTLS (1<<29)
#define GTUNE_NO_MAX_COUNTER (1<<30)
/* subsystem-specific debugging options for tune.debug */ /* subsystem-specific debugging options for tune.debug */
#define GDBG_CPU_AFFINITY (1U<< 0) #define GDBG_CPU_AFFINITY (1U<< 0)
@ -179,6 +180,7 @@ struct global {
uint recv_enough; /* how many input bytes at once are "enough" */ uint recv_enough; /* how many input bytes at once are "enough" */
uint bufsize; /* buffer size in bytes, defaults to BUFSIZE */ uint bufsize; /* buffer size in bytes, defaults to BUFSIZE */
uint bufsize_small;/* small buffer size in bytes */ uint bufsize_small;/* small buffer size in bytes */
uint bufsize_large;/* large buffer size in bytes */
int maxrewrite; /* buffer max rewrite size in bytes, defaults to MAXREWRITE */ int maxrewrite; /* buffer max rewrite size in bytes, defaults to MAXREWRITE */
int reserved_bufs; /* how many buffers can only be allocated for response */ int reserved_bufs; /* how many buffers can only be allocated for response */
int buf_limit; /* if not null, how many total buffers may only be allocated */ int buf_limit; /* if not null, how many total buffers may only be allocated */


@ -24,6 +24,7 @@
#include <haproxy/api-t.h> #include <haproxy/api-t.h>
#include <haproxy/global-t.h> #include <haproxy/global-t.h>
#include <haproxy/cfgparse.h>
extern struct global global; extern struct global global;
extern int pid; /* current process id */ extern int pid; /* current process id */
@ -54,6 +55,8 @@ extern char **old_argv;
extern const char *old_unixsocket; extern const char *old_unixsocket;
extern int daemon_fd[2]; extern int daemon_fd[2];
extern int devnullfd; extern int devnullfd;
extern int fileless_mode;
extern struct cfgfile fileless_cfg;
struct proxy; struct proxy;
struct server; struct server;


@ -99,7 +99,7 @@ enum h1m_state {
#define H1_MF_TE_CHUNKED 0x00010000 // T-E "chunked" #define H1_MF_TE_CHUNKED 0x00010000 // T-E "chunked"
#define H1_MF_TE_OTHER 0x00020000 // T-E other than supported ones found (only "chunked" is supported for now) #define H1_MF_TE_OTHER 0x00020000 // T-E other than supported ones found (only "chunked" is supported for now)
#define H1_MF_UPG_H2C 0x00040000 // "h2c" or "h2" used as upgrade token #define H1_MF_UPG_H2C 0x00040000 // "h2c" or "h2" used as upgrade token
#define H1_MF_NOT_HTTP 0x00080000 // Not an HTTP message (e.g. "RTSP", only possible if invalid messages are accepted)
/* Mask to use to reset H1M flags when we restart headers parsing. /* Mask to use to reset H1M flags when we restart headers parsing.
* *
* WARNING: Don't forget to update it if a new flag must be preserved when * WARNING: Don't forget to update it if a new flag must be preserved when
@ -263,6 +263,8 @@ static inline int h1_parse_chunk_size(const struct buffer *buf, int start, int s
const char *ptr_old = ptr; const char *ptr_old = ptr;
const char *end = b_wrap(buf); const char *end = b_wrap(buf);
uint64_t chunk = 0; uint64_t chunk = 0;
int backslash = 0;
int quote = 0;
stop -= start; // bytes left stop -= start; // bytes left
start = stop; // bytes to transfer start = stop; // bytes to transfer
@ -327,13 +329,37 @@ static inline int h1_parse_chunk_size(const struct buffer *buf, int start, int s
if (--stop == 0) if (--stop == 0)
return 0; return 0;
while (!HTTP_IS_CRLF(*ptr)) { /* The loop seeks the first CRLF or non-tab CTL char
* and stops there. If a backslash/quote is active,
* it's an error. If none, we assume it's the CRLF
* and go back to the top of the loop checking for
* CR then LF. This way CTLs, lone LF, etc. are handled
* in the fallback path. This helps protect remote
* peers against their own possibly non-compliant
* chunk-ext parsers, which could mistakenly skip a
* quoted CRLF. Chunk extensions are not used anyway,
* except in attacks.
*/
while (!HTTP_IS_CTL(*ptr) || HTTP_IS_SPHT(*ptr)) {
if (backslash)
backslash = 0; // escaped char
else if (*ptr == '\\' && quote)
backslash = 1;
else if (*ptr == '\\') // backslash not permitted outside quotes
goto error;
else if (*ptr == '"') // begin/end of quoted-pair
quote = !quote;
if (++ptr >= end) if (++ptr >= end)
ptr = b_orig(buf); ptr = b_orig(buf);
if (--stop == 0) if (--stop == 0)
return 0; return 0;
} }
/* we have a CRLF now, loop above */
/* mismatched quotes / backslashes end here */
if (quote || backslash)
goto error;
/* CTLs (CRLF) fall to the common check */
continue; continue;
} }
else else
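The quote/backslash tracking in the new loop can be isolated into a tiny validator. This is a hedged sketch, not the actual parser (which walks a possibly wrapping buffer): it applies the same rules, namely that a backslash is only valid inside a quoted-string, the character after a backslash is consumed blindly, and quote or backslash state still open at the end of the extension is an error.

```c
#include <assert.h>

/* Sketch of the chunk-ext sanity rules, applied to a flat NUL-terminated
 * string instead of the real wrapping buffer. Returns 1 if <ext> is
 * acceptable, 0 otherwise. */
static int chunk_ext_ok(const char *ext)
{
    int quote = 0, backslash = 0;

    for (; *ext; ext++) {
        if (backslash)
            backslash = 0;              /* escaped char, accept blindly */
        else if (*ext == '\\' && quote)
            backslash = 1;              /* start of a quoted-pair */
        else if (*ext == '\\')
            return 0;                   /* backslash outside quotes */
        else if (*ext == '"')
            quote = !quote;             /* begin/end of quoted-string */
    }
    return !quote && !backslash;        /* open state at the end is an error */
}
```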


@ -222,6 +222,7 @@ struct hlua_proxy_list {
}; };
struct hlua_proxy_list_iterator_context { struct hlua_proxy_list_iterator_context {
struct watcher px_watch; /* watcher to automatically update next pointer on backend deletion */
struct proxy *next; struct proxy *next;
char capabilities; char capabilities;
}; };

View file

@ -0,0 +1,36 @@
#ifndef _HAPROXY_HSTREAM_T_H
#define _HAPROXY_HSTREAM_T_H
#include <haproxy/dynbuf-t.h>
#include <haproxy/http-t.h>
#include <haproxy/obj_type-t.h>
/* hastream stream */
struct hstream {
enum obj_type obj_type;
struct session *sess;
struct stconn *sc;
struct task *task;
struct buffer req;
struct buffer res;
unsigned long long to_write; /* #of response data bytes to write after headers */
struct buffer_wait buf_wait; /* Wait list for buffer allocation */
int flags;
int ka; /* .0: keep-alive .1: forced .2: http/1.1, .3: was_reused */
int req_cache;
unsigned long long req_size; /* values passed in the URI to override the server's */
unsigned long long req_body; /* remaining body to be consumed from the request */
int req_code;
int res_wait; /* time to wait before replying in ms */
int res_time;
int req_chunked;
int req_random;
int req_after_res; /* Drain the request body after having sent the response */
enum http_meth_t req_meth;
};
#endif /* _HAPROXY_HSTREAM_T_H */

include/haproxy/hstream.h (new file, 12 lines)

@ -0,0 +1,12 @@
#ifndef _HAPROXY_HSTREAM_H
#define _HAPROXY_HSTREAM_H
#include <haproxy/cfgparse.h>
#include <haproxy/hstream-t.h>
struct task *sc_hstream_io_cb(struct task *t, void *ctx, unsigned int state);
int hstream_wake(struct stconn *sc);
void hstream_shutdown(struct stconn *sc);
void *hstream_new(struct session *sess, struct stconn *sc, struct buffer *input);
#endif /* _HAPROXY_HSTREAM_H */


@ -228,7 +228,8 @@ enum h1_state {
*/ */
struct http_msg { struct http_msg {
enum h1_state msg_state; /* where we are in the current message parsing */ enum h1_state msg_state; /* where we are in the current message parsing */
/* 3 bytes unused here */ unsigned char vsn; /* HTTP version, 4 bits per digit */
/* 2 bytes unused here */
unsigned int flags; /* flags describing the message (HTTP version, ...) */ unsigned int flags; /* flags describing the message (HTTP version, ...) */
struct channel *chn; /* pointer to the channel transporting the message */ struct channel *chn; /* pointer to the channel transporting the message */
}; };


@ -49,7 +49,7 @@ int http_req_replace_stline(int action, const char *replace, int len,
int http_res_set_status(unsigned int status, struct ist reason, struct stream *s); int http_res_set_status(unsigned int status, struct ist reason, struct stream *s);
void http_check_request_for_cacheability(struct stream *s, struct channel *req); void http_check_request_for_cacheability(struct stream *s, struct channel *req);
void http_check_response_for_cacheability(struct stream *s, struct channel *res); void http_check_response_for_cacheability(struct stream *s, struct channel *res);
enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn, unsigned int time, unsigned int bytes); enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn, unsigned int time, unsigned int bytes, unsigned int large_buffer);
void http_perform_server_redirect(struct stream *s, struct stconn *sc); void http_perform_server_redirect(struct stream *s, struct stconn *sc);
void http_server_error(struct stream *s, struct stconn *sc, int err, int finst, struct http_reply *msg); void http_server_error(struct stream *s, struct stconn *sc, int err, int finst, struct http_reply *msg);
void http_reply_and_close(struct stream *s, short status, struct http_reply *msg); void http_reply_and_close(struct stream *s, short status, struct http_reply *msg);


@ -141,6 +141,7 @@
#define HTX_SL_F_NORMALIZED_URI 0x00000800 /* The received URI is normalized (an implicit absolute-uri form) */ #define HTX_SL_F_NORMALIZED_URI 0x00000800 /* The received URI is normalized (an implicit absolute-uri form) */
#define HTX_SL_F_CONN_UPG 0x00001000 /* The message contains "connection: upgrade" header */ #define HTX_SL_F_CONN_UPG 0x00001000 /* The message contains "connection: upgrade" header */
#define HTX_SL_F_BODYLESS_RESP 0x00002000 /* The response to this message is bodyless (only for request) */ #define HTX_SL_F_BODYLESS_RESP 0x00002000 /* The response to this message is bodyless (only for request) */
#define HTX_SL_F_NOT_HTTP 0x00004000 /* Not an HTTP message (e.g. "RTSP", only possible if invalid messages are accepted) */
/* This function is used to report flags in debugging tools. Please reflect /* This function is used to report flags in debugging tools. Please reflect
* below any single-bit flag addition above in the same order via the * below any single-bit flag addition above in the same order via the
@ -177,7 +178,7 @@ static forceinline char *hsl_show_flags(char *buf, size_t len, const char *delim
#define HTX_FL_PARSING_ERROR 0x00000001 /* Set when a parsing error occurred */ #define HTX_FL_PARSING_ERROR 0x00000001 /* Set when a parsing error occurred */
#define HTX_FL_PROCESSING_ERROR 0x00000002 /* Set when a processing error occurred */ #define HTX_FL_PROCESSING_ERROR 0x00000002 /* Set when a processing error occurred */
#define HTX_FL_FRAGMENTED 0x00000004 /* Set when the HTX buffer is fragmented */ #define HTX_FL_FRAGMENTED 0x00000004 /* Set when the HTX buffer is fragmented */
/* 0x00000008 unused */ #define HTX_FL_UNORDERED 0x00000008 /* Set when the HTX blocks are not ordered */
#define HTX_FL_EOM 0x00000010 /* Set when end-of-message is reached from the HTTP point of view #define HTX_FL_EOM 0x00000010 /* Set when end-of-message is reached from the HTTP point of view
* (at worst, only the EOM block is missing) * (at worst, only the EOM block is missing)
*/ */
@ -192,7 +193,7 @@ static forceinline char *htx_show_flags(char *buf, size_t len, const char *delim
_(0); _(0);
/* flags */ /* flags */
_(HTX_FL_PARSING_ERROR, _(HTX_FL_PROCESSING_ERROR, _(HTX_FL_PARSING_ERROR, _(HTX_FL_PROCESSING_ERROR,
_(HTX_FL_FRAGMENTED, _(HTX_FL_EOM)))); _(HTX_FL_FRAGMENTED, _(HTX_FL_UNORDERED, _(HTX_FL_EOM)))));
/* epilogue */ /* epilogue */
_(~0U); _(~0U);
return buf; return buf;


@ -98,6 +98,11 @@ static inline struct ist htx_sl_p3(const struct htx_sl *sl)
return ist2(HTX_SL_P3_PTR(sl), HTX_SL_P3_LEN(sl)); return ist2(HTX_SL_P3_PTR(sl), HTX_SL_P3_LEN(sl));
} }
static inline struct ist htx_sl_vsn(const struct htx_sl *sl)
{
return ((sl->flags & HTX_SL_F_IS_RESP) ? htx_sl_p1(sl) : htx_sl_p3(sl));
}
static inline struct ist htx_sl_req_meth(const struct htx_sl *sl) static inline struct ist htx_sl_req_meth(const struct htx_sl *sl)
{ {
return htx_sl_p1(sl); return htx_sl_p1(sl);
@ -474,11 +479,12 @@ static inline struct htx_sl *htx_add_stline(struct htx *htx, enum htx_blk_type t
static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist name, static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist name,
const struct ist value) const struct ist value)
{ {
struct htx_blk *blk; struct htx_blk *blk, *tailblk;
if (name.len > 255 || value.len > 1048575) if (name.len > 255 || value.len > 1048575)
return NULL; return NULL;
tailblk = htx_get_tail_blk(htx);
blk = htx_add_blk(htx, HTX_BLK_HDR, name.len + value.len); blk = htx_add_blk(htx, HTX_BLK_HDR, name.len + value.len);
if (!blk) if (!blk)
return NULL; return NULL;
@ -486,6 +492,8 @@ static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist n
blk->info += (value.len << 8) + name.len; blk->info += (value.len << 8) + name.len;
ist2bin_lc(htx_get_blk_ptr(htx, blk), name); ist2bin_lc(htx_get_blk_ptr(htx, blk), name);
memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len); memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len);
if (tailblk && htx_get_blk_type(tailblk) >= HTX_BLK_EOH)
htx->flags |= HTX_FL_UNORDERED;
return blk; return blk;
} }
@ -495,11 +503,12 @@ static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist n
static inline struct htx_blk *htx_add_trailer(struct htx *htx, const struct ist name, static inline struct htx_blk *htx_add_trailer(struct htx *htx, const struct ist name,
const struct ist value) const struct ist value)
{ {
struct htx_blk *blk; struct htx_blk *blk, *tailblk;
if (name.len > 255 || value.len > 1048575) if (name.len > 255 || value.len > 1048575)
return NULL; return NULL;
tailblk = htx_get_tail_blk(htx);
blk = htx_add_blk(htx, HTX_BLK_TLR, name.len + value.len); blk = htx_add_blk(htx, HTX_BLK_TLR, name.len + value.len);
if (!blk) if (!blk)
return NULL; return NULL;
@ -507,6 +516,8 @@ static inline struct htx_blk *htx_add_trailer(struct htx *htx, const struct ist
blk->info += (value.len << 8) + name.len; blk->info += (value.len << 8) + name.len;
ist2bin_lc(htx_get_blk_ptr(htx, blk), name); ist2bin_lc(htx_get_blk_ptr(htx, blk), name);
memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len); memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len);
if (tailblk && htx_get_blk_type(tailblk) >= HTX_BLK_EOT)
htx->flags |= HTX_FL_UNORDERED;
return blk; return blk;
} }
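The HTX_FL_UNORDERED logic in htx_add_header()/htx_add_trailer() boils down to: if the current tail block already sits at or past the section terminator (EOH for headers, EOT for trailers), the newly appended block lands out of order. A toy model of the header case (types and names are illustrative, not the HTX API):

```c
#include <assert.h>

/* Block types listed in their expected in-message order. Appending a header
 * once the tail block is at or past end-of-headers no longer fails; it just
 * tags the message as unordered. */
enum blk_type { BLK_HDR, BLK_EOH, BLK_DATA, BLK_TLR, BLK_EOT };
#define FL_UNORDERED 0x8

struct msg {
    enum blk_type tail;  /* type of the last appended block */
    int has_tail;        /* 0 while the message is still empty */
    unsigned int flags;
};

static void add_header(struct msg *m)
{
    /* same test as the patched htx_add_header(): tail >= EOH means the
     * header is appended after the header section was closed */
    if (m->has_tail && m->tail >= BLK_EOH)
        m->flags |= FL_UNORDERED;
    m->tail = BLK_HDR;   /* the new header becomes the tail block */
    m->has_tail = 1;
}
```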


@ -147,14 +147,14 @@ __attribute__((constructor)) static void __initcb_##linenum() \
#define _DECLARE_INITCALL(...) \ #define _DECLARE_INITCALL(...) \
__DECLARE_INITCALL(__VA_ARGS__) __DECLARE_INITCALL(__VA_ARGS__)
/* This requires that function <function> is called with pointer argument /* This requires that function <function> is called without arguments
* <argument> during init stage <stage> which must be one of init_stage. * during init stage <stage> which must be one of init_stage.
*/ */
#define INITCALL0(stage, function) \ #define INITCALL0(stage, function) \
_DECLARE_INITCALL(stage, __LINE__, function, 0, 0, 0) _DECLARE_INITCALL(stage, __LINE__, function, 0, 0, 0)
/* This requires that function <function> is called with pointer argument /* This requires that function <function> is called with pointer argument
* <argument> during init stage <stage> which must be one of init_stage. * <arg1> during init stage <stage> which must be one of init_stage.
*/ */
#define INITCALL1(stage, function, arg1) \ #define INITCALL1(stage, function, arg1) \
_DECLARE_INITCALL(stage, __LINE__, function, arg1, 0, 0) _DECLARE_INITCALL(stage, __LINE__, function, arg1, 0, 0)


@ -28,13 +28,13 @@
#include <import/ebtree-t.h> #include <import/ebtree-t.h>
#include <haproxy/api-t.h> #include <haproxy/api-t.h>
#include <haproxy/counters-t.h>
#include <haproxy/guid-t.h> #include <haproxy/guid-t.h>
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/quic_cc-t.h> #include <haproxy/quic_cc-t.h>
#include <haproxy/quic_sock-t.h> #include <haproxy/quic_sock-t.h>
#include <haproxy/quic_tp-t.h> #include <haproxy/quic_tp-t.h>
#include <haproxy/receiver-t.h> #include <haproxy/receiver-t.h>
#include <haproxy/stats-t.h>
#include <haproxy/thread.h> #include <haproxy/thread.h>
/* Some pointer types reference below */ /* Some pointer types reference below */
@ -263,6 +263,7 @@ struct listener {
struct li_per_thread *per_thr; /* per-thread fields (one per thread in the group) */ struct li_per_thread *per_thr; /* per-thread fields (one per thread in the group) */
char *extra_counters_storage; /* storage for extra_counters */
EXTRA_COUNTERS(extra_counters); EXTRA_COUNTERS(extra_counters);
}; };


@ -200,6 +200,8 @@ enum qcc_app_ops_close_side {
/* QUIC application layer operations */ /* QUIC application layer operations */
struct qcc_app_ops { struct qcc_app_ops {
const char *alpn;
/* Initialize <qcc> connection app context. */ /* Initialize <qcc> connection app context. */
int (*init)(struct qcc *qcc); int (*init)(struct qcc *qcc);
/* Finish connection initialization if prelude required. */ /* Finish connection initialization if prelude required. */
@ -232,6 +234,9 @@ struct qcc_app_ops {
void (*inc_err_cnt)(void *ctx, int err_code); void (*inc_err_cnt)(void *ctx, int err_code);
/* Set QCC error code as suspicious activity has been detected. */ /* Set QCC error code as suspicious activity has been detected. */
void (*report_susp)(void *ctx); void (*report_susp)(void *ctx);
/* Free function to close a stream after MUX layer shutdown. */
int (*strm_reject)(struct list *out, uint64_t id);
}; };
#endif /* USE_QUIC */ #endif /* USE_QUIC */


@ -12,6 +12,9 @@
#include <haproxy/mux_quic-t.h> #include <haproxy/mux_quic-t.h>
#include <haproxy/stconn.h> #include <haproxy/stconn.h>
#include <haproxy/h3.h>
#include <haproxy/hq_interop.h>
#define qcc_report_glitch(qcc, inc, ...) ({ \ #define qcc_report_glitch(qcc, inc, ...) ({ \
COUNT_GLITCH(__VA_ARGS__); \ COUNT_GLITCH(__VA_ARGS__); \
_qcc_report_glitch(qcc, inc); \ _qcc_report_glitch(qcc, inc); \
@ -88,7 +91,7 @@ static inline char *qcs_st_to_str(enum qcs_state st)
} }
} }
int qcc_install_app_ops(struct qcc *qcc, const struct qcc_app_ops *app_ops); int qcc_install_app_ops(struct qcc *qcc);
/* Register <qcs> stream for http-request timeout. If the stream is not yet /* Register <qcs> stream for http-request timeout. If the stream is not yet
* attached in the configured delay, qcc timeout task will be triggered. This * attached in the configured delay, qcc timeout task will be triggered. This
@ -115,6 +118,16 @@ void qcc_show_quic(struct qcc *qcc);
void qcc_wakeup(struct qcc *qcc); void qcc_wakeup(struct qcc *qcc);
static inline const struct qcc_app_ops *quic_alpn_to_app_ops(const char *alpn, int alpn_len)
{
if (alpn_len >= 2 && memcmp(alpn, "h3", 2) == 0)
return &h3_ops;
else if (alpn_len >= 10 && memcmp(alpn, "hq-interop", 10) == 0)
return &hq_interop_ops;
return NULL;
}
#endif /* USE_QUIC */ #endif /* USE_QUIC */
#endif /* _HAPROXY_MUX_QUIC_H */ #endif /* _HAPROXY_MUX_QUIC_H */
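quic_alpn_to_app_ops() above matches the length-prefixed ALPN token with memcmp() rather than strcmp(), since ALPN tokens are not NUL-terminated. A stand-alone sketch of the same dispatch, with strings standing in for the real h3_ops/hq_interop_ops structures:

```c
#include <assert.h>
#include <string.h>

/* Mirror of the dispatch: compare the first bytes of the length-prefixed
 * token against the known protocol names, NULL for anything else. */
static const char *alpn_to_app(const char *alpn, int alpn_len)
{
    if (alpn_len >= 2 && memcmp(alpn, "h3", 2) == 0)
        return "h3";
    else if (alpn_len >= 10 && memcmp(alpn, "hq-interop", 10) == 0)
        return "hq-interop";
    return NULL;   /* unknown ALPN: no app layer selected */
}
```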


@ -46,6 +46,7 @@ enum obj_type {
#ifdef USE_QUIC #ifdef USE_QUIC
OBJ_TYPE_DGRAM, /* object is a struct quic_dgram */ OBJ_TYPE_DGRAM, /* object is a struct quic_dgram */
#endif #endif
OBJ_TYPE_HATERM, /* object is a struct hstream */
OBJ_TYPE_ENTRIES /* last one : number of entries */ OBJ_TYPE_ENTRIES /* last one : number of entries */
} __attribute__((packed)) ; } __attribute__((packed)) ;


@ -26,6 +26,7 @@
#include <haproxy/applet-t.h> #include <haproxy/applet-t.h>
#include <haproxy/check-t.h> #include <haproxy/check-t.h>
#include <haproxy/connection-t.h> #include <haproxy/connection-t.h>
#include <haproxy/hstream-t.h>
#include <haproxy/listener-t.h> #include <haproxy/listener-t.h>
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/pool.h> #include <haproxy/pool.h>
@ -189,6 +190,19 @@ static inline struct check *objt_check(enum obj_type *t)
return __objt_check(t); return __objt_check(t);
} }
static inline struct hstream *__objt_hstream(enum obj_type *t)
{
return container_of(t, struct hstream, obj_type);
}
static inline struct hstream *objt_hstream(enum obj_type *t)
{
if (!t || *t != OBJ_TYPE_HATERM)
return NULL;
return __objt_hstream(t);
}
#ifdef USE_QUIC #ifdef USE_QUIC
static inline struct quic_dgram *__objt_dgram(enum obj_type *t) static inline struct quic_dgram *__objt_dgram(enum obj_type *t)
{ {


@ -380,6 +380,14 @@ static inline unsigned long ERR_peek_error_func(const char **func)
#endif #endif
#if (HA_OPENSSL_VERSION_NUMBER >= 0x40000000L) && !defined(OPENSSL_IS_AWSLC) && !defined(LIBRESSL_VERSION_NUMBER) && !defined(USE_OPENSSL_WOLFSSL)
# define X509_STORE_getX_objects(x) X509_STORE_get1_objects(x)
# define sk_X509_OBJECT_popX_free(x, y) sk_X509_OBJECT_pop_free(x,y)
#else
# define X509_STORE_getX_objects(x) X509_STORE_get0_objects(x)
# define sk_X509_OBJECT_popX_free(x, y) ({})
#endif
#if (HA_OPENSSL_VERSION_NUMBER >= 0x1010000fL) || (defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER >= 0x2070200fL) #if (HA_OPENSSL_VERSION_NUMBER >= 0x1010000fL) || (defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER >= 0x2070200fL)
#define __OPENSSL_110_CONST__ const #define __OPENSSL_110_CONST__ const
#else #else


@ -72,8 +72,8 @@ struct pool_registration {
struct list list; /* link element */ struct list list; /* link element */
const char *name; /* name of the pool */ const char *name; /* name of the pool */
const char *file; /* where the pool is declared */ const char *file; /* where the pool is declared */
ullong size; /* expected object size */
unsigned int line; /* line in the file where the pool is declared, 0 if none */ unsigned int line; /* line in the file where the pool is declared, 0 if none */
unsigned int size; /* expected object size */
unsigned int flags; /* MEM_F_* */ unsigned int flags; /* MEM_F_* */
unsigned int type_align; /* type-imposed alignment; 0=unspecified */ unsigned int type_align; /* type-imposed alignment; 0=unspecified */
unsigned int align; /* expected alignment; 0=unspecified */ unsigned int align; /* expected alignment; 0=unspecified */


@ -183,7 +183,7 @@ unsigned long long pool_total_allocated(void);
unsigned long long pool_total_used(void); unsigned long long pool_total_used(void);
void pool_flush(struct pool_head *pool); void pool_flush(struct pool_head *pool);
void pool_gc(struct pool_head *pool_ctx); void pool_gc(struct pool_head *pool_ctx);
struct pool_head *create_pool_with_loc(const char *name, unsigned int size, unsigned int align, struct pool_head *create_pool_with_loc(const char *name, ullong size, unsigned int align,
unsigned int flags, const char *file, unsigned int line); unsigned int flags, const char *file, unsigned int line);
struct pool_head *create_pool_from_reg(const char *name, struct pool_registration *reg); struct pool_head *create_pool_from_reg(const char *name, struct pool_registration *reg);
void create_pool_callback(struct pool_head **ptr, char *name, struct pool_registration *reg); void create_pool_callback(struct pool_head **ptr, char *name, struct pool_registration *reg);


@ -38,7 +38,6 @@
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/queue-t.h> #include <haproxy/queue-t.h>
#include <haproxy/server-t.h> #include <haproxy/server-t.h>
#include <haproxy/stats-t.h>
#include <haproxy/tcpcheck-t.h> #include <haproxy/tcpcheck-t.h>
#include <haproxy/thread-t.h> #include <haproxy/thread-t.h>
#include <haproxy/tools-t.h> #include <haproxy/tools-t.h>
@ -240,14 +239,16 @@ enum PR_SRV_STATE_FILE {
#define PR_RE_JUNK_REQUEST 0x00020000 /* We received an incomplete or garbage response */ #define PR_RE_JUNK_REQUEST 0x00020000 /* We received an incomplete or garbage response */
/* Proxy flags */ /* Proxy flags */
#define PR_FL_DISABLED 0x01 /* The proxy was disabled in the configuration (not at runtime) */ #define PR_FL_DISABLED 0x00000001 /* The proxy was disabled in the configuration (not at runtime) */
#define PR_FL_STOPPED 0x02 /* The proxy was stopped */ #define PR_FL_STOPPED 0x00000002 /* The proxy was stopped */
#define PR_FL_READY 0x04 /* The proxy is ready to be used (initialized and configured) */ #define PR_FL_DEF_EXPLICIT_MODE 0x00000004 /* Proxy mode is explicitly defined - only used for defaults instance */
#define PR_FL_EXPLICIT_REF 0x08 /* The default proxy is explicitly referenced by another proxy */ #define PR_FL_EXPLICIT_REF 0x00000008 /* The default proxy is explicitly referenced by another proxy */
#define PR_FL_IMPLICIT_REF 0x10 /* The default proxy is implicitly referenced by another proxy */ #define PR_FL_IMPLICIT_REF 0x00000010 /* The default proxy is implicitly referenced by another proxy */
#define PR_FL_PAUSED 0x20 /* The proxy was paused at run time (reversible) */ #define PR_FL_PAUSED 0x00000020 /* The proxy was paused at run time (reversible) */
#define PR_FL_CHECKED 0x40 /* The proxy configuration was fully checked (including postparsing checks) */ #define PR_FL_CHECKED 0x00000040 /* The proxy configuration was fully checked (including postparsing checks) */
#define PR_FL_BE_UNPUBLISHED 0x80 /* The proxy cannot be targeted by content switching rules */ #define PR_FL_BE_UNPUBLISHED 0x00000080 /* The proxy cannot be targeted by content switching rules */
#define PR_FL_DELETED 0x00000100 /* Proxy has been deleted and must be manipulated with care */
#define PR_FL_NON_PURGEABLE 0x00000200 /* Proxy referenced by config elements which prevent its runtime removal. */
struct stream; struct stream;
@ -293,7 +294,8 @@ struct error_snapshot {
struct server *srv; /* server associated with the error (or NULL) */ struct server *srv; /* server associated with the error (or NULL) */
/* @64 */ /* @64 */
unsigned int ev_id; /* event number (counter incremented for each capture) */ unsigned int ev_id; /* event number (counter incremented for each capture) */
/* @68: 4 bytes hole here */ unsigned int buf_size; /* buffer size */
struct sockaddr_storage src; /* client's address */ struct sockaddr_storage src; /* client's address */
/**** protocol-specific part ****/ /**** protocol-specific part ****/
@ -305,13 +307,16 @@ struct error_snapshot {
struct proxy_per_tgroup { struct proxy_per_tgroup {
struct queue queue; struct queue queue;
struct lbprm_per_tgrp lbprm; struct lbprm_per_tgrp lbprm;
char *extra_counters_fe_storage; /* storage for extra_counters_fe */
char *extra_counters_be_storage; /* storage for extra_counters_be */
} THREAD_ALIGNED(); } THREAD_ALIGNED();
struct proxy { struct proxy {
enum obj_type obj_type; /* object type == OBJ_TYPE_PROXY */ enum obj_type obj_type; /* object type == OBJ_TYPE_PROXY */
char flags; /* bit field PR_FL_* */
enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP, ... */ enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP, ... */
char cap; /* supported capabilities (PR_CAP_*) */ char cap; /* supported capabilities (PR_CAP_*) */
/* 1 byte hole */
unsigned int flags; /* bit field PR_FL_* */
int to_log; /* things to be logged (LW_*), special value LW_LOGSTEPS == follow log-steps */ int to_log; /* things to be logged (LW_*), special value LW_LOGSTEPS == follow log-steps */
unsigned long last_change; /* internal use only: last time the proxy state was changed */ unsigned long last_change; /* internal use only: last time the proxy state was changed */
@ -412,6 +417,7 @@ struct proxy {
int redispatch_after; /* number of retries before redispatch */ int redispatch_after; /* number of retries before redispatch */
unsigned down_time; /* total time the proxy was down */ unsigned down_time; /* total time the proxy was down */
int (*accept)(struct stream *s); /* application layer's accept() */ int (*accept)(struct stream *s); /* application layer's accept() */
void *(*stream_new_from_sc)(struct session *sess, struct stconn *sc, struct buffer *in); /* stream instantiation callback for mux stream connector */
struct conn_src conn_src; /* connection source settings */ struct conn_src conn_src; /* connection source settings */
enum obj_type *default_target; /* default target to use for accepted streams or NULL */ enum obj_type *default_target; /* default target to use for accepted streams or NULL */
struct proxy *next; struct proxy *next;
@ -474,7 +480,7 @@ struct proxy {
struct log_steps log_steps; /* bitfield of log origins where log should be generated during request handling */ struct log_steps log_steps; /* bitfield of log origins where log should be generated during request handling */
const char *file_prev; /* file of the previous instance found with the same name, or NULL */ const char *file_prev; /* file of the previous instance found with the same name, or NULL */
int line_prev; /* line of the previous instance found with the same name, or 0 */ int line_prev; /* line of the previous instance found with the same name, or 0 */
unsigned int refcount; /* refcount on this proxy (only used for default proxy for now) */ unsigned int def_ref; /* default proxy only refcount */
} conf; /* config information */ } conf; /* config information */
struct http_ext *http_ext; /* http ext options */ struct http_ext *http_ext; /* http ext options */
struct ceb_root *used_server_addr; /* list of server addresses in use */ struct ceb_root *used_server_addr; /* list of server addresses in use */
@ -503,15 +509,23 @@ struct proxy {
struct list filter_configs; /* list of the filters that are declared on this proxy */ struct list filter_configs; /* list of the filters that are declared on this proxy */
struct guid_node guid; /* GUID global tree node */ struct guid_node guid; /* GUID global tree node */
struct mt_list watcher_list; /* list of elems which currently references this proxy instance (currently only used with backends) */
uint refcount; /* refcount to keep proxy from being deleted during runtime */
EXTRA_COUNTERS(extra_counters_fe); EXTRA_COUNTERS(extra_counters_fe);
EXTRA_COUNTERS(extra_counters_be); EXTRA_COUNTERS(extra_counters_be);
THREAD_ALIGN(); THREAD_ALIGN();
unsigned int queueslength; /* Sum of the length of each queue */ /* these ones change all the time */
int served; /* # of active sessions currently being served */ int served; /* # of active sessions currently being served */
int totpend; /* total number of pending connections on this instance (for stats) */
unsigned int feconn, beconn; /* # of active frontend and backends streams */ unsigned int feconn, beconn; /* # of active frontend and backends streams */
THREAD_ALIGN();
/* these ones are only changed when queues are involved, but checked
* all the time.
*/
unsigned int queueslength; /* Sum of the length of each queue */
int totpend; /* total number of pending connections on this instance (for stats) */
}; };
struct switching_rule { struct switching_rule {

View file
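The flag renumbering in this hunk goes hand in hand with moving `flags` in `struct proxy` from `char` to `unsigned int`: once PR_FL_DELETED needs bit 8 (0x00000100), an 8-bit field would silently truncate it. A minimal standalone sketch of that truncation (toy structs, not HAProxy's `struct proxy`; `unsigned char` is used for deterministic behavior where the original used plain `char`):

```c
#include <assert.h>

/* Values mirror the PR_FL_* defines above; the structs are illustrative. */
#define PR_FL_DELETED 0x00000100 /* needs bit 8: does not fit in 8 bits */

struct old_px { unsigned char flags; }; /* 8-bit field, as before the patch */
struct new_px { unsigned int flags; };  /* 32-bit field, as after the patch */

static int old_has_deleted(struct old_px *p)
{
	p->flags |= PR_FL_DELETED;      /* bit 8 is truncated away on store */
	return (p->flags & PR_FL_DELETED) != 0;
}

static int new_has_deleted(struct new_px *p)
{
	p->flags |= PR_FL_DELETED;      /* bit 8 is kept */
	return (p->flags & PR_FL_DELETED) != 0;
}
```

With the old 8-bit field the flag is lost as soon as it is set, which is why every PR_FL_* constant is rewritten to an explicit 32-bit value here.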

@@ -26,6 +26,7 @@

 #include <haproxy/api.h>
 #include <haproxy/applet-t.h>
+#include <haproxy/counters.h>
 #include <haproxy/freq_ctr.h>
 #include <haproxy/list.h>
 #include <haproxy/listener-t.h>
@@ -41,6 +42,8 @@ extern unsigned int error_snapshot_id;  /* global ID assigned to each error then
 extern struct ceb_root *proxy_by_name;  /* tree of proxies sorted by name */
 extern struct list defaults_list;       /* all defaults proxies list */

+extern unsigned int dynpx_next_id;
+
 extern const struct cfg_opt cfg_opts[];
 extern const struct cfg_opt cfg_opts2[];
 extern const struct cfg_opt cfg_opts3[];
@@ -56,9 +59,10 @@ void stop_proxy(struct proxy *p);
 int stream_set_backend(struct stream *s, struct proxy *be);

 void deinit_proxy(struct proxy *p);
-void free_proxy(struct proxy *p);
+void proxy_drop(struct proxy *p);
 const char *proxy_cap_str(int cap);
 const char *proxy_mode_str(int mode);
+enum pr_mode str_to_proxy_mode(const char *mode);
 const char *proxy_find_best_option(const char *word, const char **extra);
 uint proxy_get_next_id(uint from);
 void proxy_store_name(struct proxy *px);
@@ -74,12 +78,12 @@ void defaults_px_destroy_all_unref(void);
 void defaults_px_detach(struct proxy *px);
 void defaults_px_ref_all(void);
 void defaults_px_unref_all(void);
-int proxy_ref_defaults(struct proxy *px, struct proxy *defpx, char **errmsg);
+void proxy_ref_defaults(struct proxy *px, struct proxy *defpx);
 void proxy_unref_defaults(struct proxy *px);
 int setup_new_proxy(struct proxy *px, const char *name, unsigned int cap, char **errmsg);
 struct proxy *alloc_new_proxy(const char *name, unsigned int cap,
                               char **errmsg);
+void proxy_take(struct proxy *px);
 struct proxy *parse_new_proxy(const char *name, unsigned int cap,
                               const char *file, int linenum,
                               const struct proxy *defproxy);
@@ -97,6 +101,9 @@ int resolve_stick_rule(struct proxy *curproxy, struct sticking_rule *mrule);
 void free_stick_rules(struct list *rules);
 void free_server_rules(struct list *srules);
 int proxy_init_per_thr(struct proxy *px);
+int proxy_finalize(struct proxy *px, int *err_code);
+
+int be_check_for_deletion(const char *bename, struct proxy **pb, const char **pm);

 /*
  * This function returns a string containing the type of the proxy in a format
@@ -175,7 +182,7 @@ static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
 	}
 	if (l && l->counters && l->counters->shared.tg)
 		_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
-	HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
+	COUNTERS_UPDATE_MAX(&fe->fe_counters.cps_max,
	                     update_freq_ctr(&fe->fe_counters._conn_per_sec, 1));
 }
@@ -188,7 +195,7 @@ static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
 	}
 	if (l && l->counters && l->counters->shared.tg)
 		_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
-	HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max,
+	COUNTERS_UPDATE_MAX(&fe->fe_counters.sps_max,
	                     update_freq_ctr(&fe->fe_counters._sess_per_sec, 1));
 }
@@ -215,7 +222,7 @@ static inline void proxy_inc_be_ctr(struct proxy *be)
 		_HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess);
 		update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
 	}
-	HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max,
+	COUNTERS_UPDATE_MAX(&be->be_counters.sps_max,
	                     update_freq_ctr(&be->be_counters._sess_per_sec, 1));
 }
@@ -235,7 +242,7 @@ static inline void proxy_inc_fe_req_ctr(struct listener *l, struct proxy *fe,
 	}
 	if (l && l->counters && l->counters->shared.tg)
 		_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
-	HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
+	COUNTERS_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
	                     update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1));
 }

View file
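The hunks above replace HA_ATOMIC_UPDATE_MAX with COUNTERS_UPDATE_MAX around the per-second rate maxima. The underlying "store only if greater" pattern is the classic atomic update-max CAS loop; a standalone C11 sketch of that pattern (the macro names and any shared-counter handling in COUNTERS_UPDATE_MAX are HAProxy specifics not reproduced here):

```c
#include <assert.h>
#include <stdatomic.h>

/* Atomically raise *max to v if v is greater; concurrent callers can race,
 * so the compare-and-swap retries with the freshly observed value. */
static unsigned int update_max(_Atomic unsigned int *max, unsigned int v)
{
	unsigned int old = atomic_load(max);

	while (v > old && !atomic_compare_exchange_weak(max, &old, v))
		;	/* on failure, old was reloaded by the CAS */
	return atomic_load(max);
}
```

The loop terminates either because the stored maximum is already at least `v`, or because this caller successfully published `v`.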

@@ -13,6 +13,7 @@ int qcs_http_handle_standalone_fin(struct qcs *qcs);
 size_t qcs_http_snd_buf(struct qcs *qcs, struct buffer *buf, size_t count,
                         char *fin);
+size_t qcs_http_reset_buf(struct qcs *qcs, struct buffer *buf, size_t count);

 #endif /* USE_QUIC */

View file

@@ -38,7 +38,6 @@
 #include <haproxy/quic_cc-t.h>
 #include <haproxy/quic_frame-t.h>
 #include <haproxy/quic_openssl_compat-t.h>
-#include <haproxy/quic_stats-t.h>
 #include <haproxy/quic_tls-t.h>
 #include <haproxy/quic_tp-t.h>
 #include <haproxy/show_flags-t.h>
@@ -401,6 +400,8 @@ struct quic_conn {
 	struct eb_root streams_by_id; /* qc_stream_desc tree */

+	const char *alpn;
+
 	/* MUX */
 	struct qcc *qcc;
 	struct task *timer_task;
@@ -409,7 +410,9 @@ struct quic_conn {
 	/* Handshake expiration date */
 	unsigned int hs_expire;

-	const struct qcc_app_ops *app_ops;
+	/* Callback to close any stream after MUX closure - set by the MUX itself */
+	int (*strm_reject)(struct list *out, uint64_t stream_id);
 	/* Proxy counters */
 	struct quic_counters *prx_counters;

View file

@@ -30,6 +30,7 @@
 #include <import/eb64tree.h>
 #include <import/ebmbtree.h>

+#include <haproxy/counters.h>
 #include <haproxy/chunk.h>
 #include <haproxy/dynbuf.h>
 #include <haproxy/ncbmbuf.h>
@@ -83,7 +84,7 @@ void qc_check_close_on_released_mux(struct quic_conn *qc);
 int quic_stateless_reset_token_cpy(unsigned char *pos, size_t len,
                                    const unsigned char *salt, size_t saltlen);
 int quic_reuse_srv_params(struct quic_conn *qc,
-                          const unsigned char *alpn,
+                          const char *alpn,
                           const struct quic_early_transport_params *etps);

 /* Returns true if <qc> is used on the backed side (as a client). */
@@ -193,13 +194,17 @@ static inline void *qc_counters(enum obj_type *o, const struct stats_module *m)
 	p = l ? l->bind_conf->frontend :
 	    s ? s->proxy : NULL;

-	return p ? EXTRA_COUNTERS_GET(p->extra_counters_fe, m) : NULL;
+	if (l && p)
+		return EXTRA_COUNTERS_GET(p->extra_counters_fe, m);
+	else if (s && p)
+		return EXTRA_COUNTERS_GET(p->extra_counters_be, m);
+
+	return NULL;
 }

 void chunk_frm_appendf(struct buffer *buf, const struct quic_frame *frm);
 void quic_set_connection_close(struct quic_conn *qc, const struct quic_err err);
 void quic_set_tls_alert(struct quic_conn *qc, int alert);
-int quic_set_app_ops(struct quic_conn *qc, const unsigned char *alpn, size_t alpn_len);
+int qc_register_alpn(struct quic_conn *qc, const char *alpn, int alpn_len);
 int qc_check_dcid(struct quic_conn *qc, unsigned char *dcid, size_t dcid_len);
 void qc_notify_err(struct quic_conn *qc);

View file
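The qc_counters() change above stops assuming a frontend: it now returns the frontend extra counters when the connection hangs off a listener, the backend ones when it hangs off a server, and NULL otherwise. A toy model of that dispatch (the real function resolves `l`, `s` and `p` from obj_type pointers; here their presence is passed directly, and the enum is illustrative):

```c
#include <assert.h>

/* Which extra-counters bank qc_counters() would pick, modeled as an enum. */
enum ctr_side { CTR_NONE, CTR_FRONTEND, CTR_BACKEND };

static enum ctr_side pick_counters(int has_listener, int has_server, int has_proxy)
{
	if (has_listener && has_proxy)
		return CTR_FRONTEND;	/* listener side: extra_counters_fe */
	else if (has_server && has_proxy)
		return CTR_BACKEND;	/* server side: extra_counters_be */
	return CTR_NONE;		/* no proxy resolved: NULL in the original */
}
```

The old code collapsed the last two cases into "frontend or NULL", which was wrong once QUIC connections can sit on the backend side.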

@@ -20,8 +20,7 @@
 #define QUIC_OPENSSL_COMPAT_CLIENT_APPLICATION "CLIENT_TRAFFIC_SECRET_0"
 #define QUIC_OPENSSL_COMPAT_SERVER_APPLICATION "SERVER_TRAFFIC_SECRET_0"

-void quic_tls_compat_msg_callback(struct connection *conn,
-                                  int write_p, int version, int content_type,
+void quic_tls_compat_msg_callback(int write_p, int version, int content_type,
                                   const void *buf, size_t len, SSL *ssl);
 int quic_tls_compat_init(struct bind_conf *bind_conf, SSL_CTX *ctx);
 void quic_tls_compat_keylog_callback(const SSL *ssl, const char *line);

View file

@@ -6,8 +6,6 @@
 #error "Must define USE_OPENSSL"
 #endif

-extern struct stats_module quic_stats_module;
-
 enum {
 	QUIC_ST_RXBUF_FULL,
 	QUIC_ST_DROPPED_PACKET,
@@ -52,6 +50,7 @@ enum {
 	QUIC_ST_STREAM_DATA_BLOCKED,
 	QUIC_ST_STREAMS_BLOCKED_BIDI,
 	QUIC_ST_STREAMS_BLOCKED_UNI,
+	QUIC_ST_NCBUF_GAP_LIMIT,

 	QUIC_STATS_COUNT /* must be the last */
 };
@@ -99,6 +98,7 @@ struct quic_counters {
 	long long stream_data_blocked;  /* total number of times STREAM_DATA_BLOCKED frame was received */
 	long long streams_blocked_bidi; /* total number of times STREAMS_BLOCKED_BIDI frame was received */
 	long long streams_blocked_uni;  /* total number of times STREAMS_BLOCKED_UNI frame was received */
+	long long ncbuf_gap_limit;      /* total number of times we failed to add data to ncbuf due to gap size limit */
 };

 #endif /* USE_QUIC */

View file

@@ -7,7 +7,9 @@
 #endif

 #include <haproxy/quic_stats-t.h>
+#include <haproxy/stats-t.h>
+
+extern struct stats_module quic_stats_module;

 void quic_stats_transp_err_count_inc(struct quic_counters *ctrs, int error_code);

 #endif /* USE_QUIC */

View file

@@ -27,7 +27,6 @@
 #include <haproxy/connection-t.h>
 #include <haproxy/dns-t.h>
 #include <haproxy/obj_type-t.h>
-#include <haproxy/stats-t.h>
 #include <haproxy/task-t.h>
 #include <haproxy/thread.h>

View file

@@ -38,7 +38,7 @@ void sc_update_tx(struct stconn *sc);

 struct task *sc_conn_io_cb(struct task *t, void *ctx, unsigned int state);
 int sc_conn_sync_recv(struct stconn *sc);
-void sc_conn_sync_send(struct stconn *sc);
+int sc_conn_sync_send(struct stconn *sc);
 int sc_applet_sync_recv(struct stconn *sc);
 void sc_applet_sync_send(struct stconn *sc);
@@ -74,6 +74,70 @@ static inline struct buffer *sc_ob(const struct stconn *sc)
 {
 	return &sc_oc(sc)->buf;
 }
+
+/* The application layer tells the stream connector that it just got the input
+ * buffer it was waiting for. A read activity is reported. The SC_FL_HAVE_BUFF
+ * flag is set and held until sc_used_buff() is called to indicate it was
+ * used.
+ */
+static inline void sc_have_buff(struct stconn *sc)
+{
+	if (sc->flags & SC_FL_NEED_BUFF) {
+		sc->flags &= ~SC_FL_NEED_BUFF;
+		sc->flags |= SC_FL_HAVE_BUFF;
+		sc_ep_report_read_activity(sc);
+	}
+}
+
+/* The stream connector failed to get an input buffer and is waiting for it.
+ * It indicates a willingness to deliver data to the buffer that will have to
+ * be retried. As such, callers will often automatically clear SE_FL_HAVE_NO_DATA
+ * to be called again as soon as SC_FL_NEED_BUFF is cleared.
+ */
+static inline void sc_need_buff(struct stconn *sc)
+{
+	sc->flags |= SC_FL_NEED_BUFF;
+}
+
+/* The stream connector indicates that it has successfully allocated the buffer
+ * it was previously waiting for so it drops the SC_FL_HAVE_BUFF bit.
+ */
+static inline void sc_used_buff(struct stconn *sc)
+{
+	sc->flags &= ~SC_FL_HAVE_BUFF;
+}
+
+/* Tell a stream connector some room was made in the input buffer and any
+ * failed attempt to inject data into it may be tried again. This is usually
+ * called after a successful transfer of buffer contents to the other side.
+ * A read activity is reported.
+ */
+static inline void sc_have_room(struct stconn *sc)
+{
+	if (sc->flags & SC_FL_NEED_ROOM) {
+		sc->flags &= ~SC_FL_NEED_ROOM;
+		sc->room_needed = 0;
+		sc_ep_report_read_activity(sc);
+	}
+}
+
+/* The stream connector announces it failed to put data into the input buffer
+ * by lack of room. Since it indicates a willingness to deliver data to the
+ * buffer that will have to be retried. Usually the caller will also clear
+ * SE_FL_HAVE_NO_DATA to be called again as soon as SC_FL_NEED_ROOM is cleared.
+ *
+ * The caller is responsible to specify the amount of free space required to
+ * progress. It must take care to not exceed the buffer size.
+ */
+static inline void sc_need_room(struct stconn *sc, ssize_t room_needed)
+{
+	sc->flags |= SC_FL_NEED_ROOM;
+	BUG_ON_HOT(room_needed > (ssize_t)c_size(sc_ic(sc)));
+	sc->room_needed = room_needed;
+}

 /* returns the stream's task associated to this stream connector */
 static inline struct task *sc_strm_task(const struct stconn *sc)
 {
@@ -344,10 +408,15 @@ static inline int sc_sync_recv(struct stconn *sc)
 /* Perform a synchronous send using the right version, depending the endpoint is
  * a connection or an applet.
  */
-static inline void sc_sync_send(struct stconn *sc)
+static inline int sc_sync_send(struct stconn *sc, unsigned cnt)
 {
-	if (sc_ep_test(sc, SE_FL_T_MUX))
-		sc_conn_sync_send(sc);
+	if (!sc_ep_test(sc, SE_FL_T_MUX))
+		return 0;
+
+	if (cnt >= 2 && co_data(sc_oc(sc))) {
+		task_wakeup(__sc_strm(sc)->task, TASK_WOKEN_MSG);
+		return 0;
+	}
+	return sc_conn_sync_send(sc);
 }

 /* Combines both sc_update_rx() and sc_update_tx() at once */

View file
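The sc_need_buff()/sc_have_buff()/sc_used_buff() helpers added above implement a small handshake: NEED_BUFF records a failed buffer allocation, HAVE_BUFF records the grant (and only answers an outstanding request) and is held until the buffer is actually used. A standalone sketch of that flag protocol, with stand-in flag values and a counter in place of sc_ep_report_read_activity():

```c
#include <assert.h>

/* Stand-in flag values; HAProxy's SC_FL_* constants differ. */
#define FL_NEED_BUFF 0x01
#define FL_HAVE_BUFF 0x02

static int read_activity_reports; /* stands in for sc_ep_report_read_activity() */

struct toy_sc { unsigned int flags; };

/* Record a failed allocation, like sc_need_buff(). */
static void toy_need_buff(struct toy_sc *sc)
{
	sc->flags |= FL_NEED_BUFF;
}

/* Grant the buffer, but only if one was requested, like sc_have_buff(). */
static void toy_have_buff(struct toy_sc *sc)
{
	if (sc->flags & FL_NEED_BUFF) {
		sc->flags &= ~FL_NEED_BUFF;
		sc->flags |= FL_HAVE_BUFF;
		read_activity_reports++;
	}
}

/* Consume the grant, like sc_used_buff(). */
static void toy_used_buff(struct toy_sc *sc)
{
	sc->flags &= ~FL_HAVE_BUFF;
}
```

Guarding the grant on NEED_BUFF is what keeps spurious "buffer available" notifications from reporting read activity when nobody was waiting.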

@@ -286,6 +286,7 @@ struct srv_per_tgroup {
 	struct queue queue;			/* pending connections */
 	struct server *server;			/* pointer to the corresponding server */
 	struct eb32_node lb_node;		/* node used for tree-based load balancing */
+	char *extra_counters_storage;		/* storage for extra_counters */
 	struct server *next_full;		/* next server in the temporary full list */
 	unsigned int last_other_tgrp_served;	/* Last other tgrp we dequeued from */
 	unsigned int self_served;		/* Number of connection we dequeued from our own queue */
@@ -383,7 +384,6 @@ struct server {
 	unsigned next_eweight;			/* next pending eweight to commit */
 	unsigned cumulative_weight;		/* weight of servers prior to this one in the same group, for chash balancing */
 	int maxqueue;				/* maximum number of pending connections allowed */
-	unsigned int queueslength;		/* Sum of the length of each queue */
 	int shard;				/* shard (in peers protocol context only) */
 	int log_bufsize;			/* implicit ring bufsize (for log server only - in log backend) */
@@ -406,6 +406,7 @@ struct server {
 	unsigned int max_used_conns;		/* Max number of used connections (the counter is reset at each connection purges */
 	unsigned int est_need_conns;		/* Estimate on the number of needed connections (max of curr and previous max_used) */
 	unsigned int curr_sess_idle_conns;	/* Current number of idle connections attached to a session instead of idle/safe trees. */
+	unsigned int queueslength;		/* Sum of the length of each queue */

 	/* elements only used during boot, do not perturb and plug the hole */
 	struct guid_node guid;			/* GUID global tree node */

View file

@@ -29,6 +29,7 @@
 #include <haproxy/api.h>
 #include <haproxy/applet-t.h>
 #include <haproxy/arg-t.h>
+#include <haproxy/counters.h>
 #include <haproxy/freq_ctr.h>
 #include <haproxy/proxy-t.h>
 #include <haproxy/resolvers-t.h>
@@ -55,6 +56,7 @@ int srv_update_addr(struct server *s, void *ip, int ip_sin_family, struct server
 struct sample_expr *_parse_srv_expr(char *expr, struct arg_list *args_px,
                                     const char *file, int linenum, char **err);
 int server_parse_exprs(struct server *srv, struct proxy *px, char **err);
+int srv_configure_auto_sni(struct server *srv, int *err_code, char **err);
 int server_set_inetaddr(struct server *s, const struct server_inetaddr *inetaddr, struct server_inetaddr_updater updater, struct buffer *msg);
 int server_set_inetaddr_warn(struct server *s, const struct server_inetaddr *inetaddr, struct server_inetaddr_updater updater);
 void server_get_inetaddr(struct server *s, struct server_inetaddr *inetaddr);
@@ -211,7 +213,7 @@ static inline void srv_inc_sess_ctr(struct server *s)
 		_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess);
 		update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1);
 	}
-	HA_ATOMIC_UPDATE_MAX(&s->counters.sps_max,
+	COUNTERS_UPDATE_MAX(&s->counters.sps_max,
	                     update_freq_ctr(&s->counters._sess_per_sec, 1));
 }
@@ -349,28 +351,26 @@ static inline int srv_is_transparent(const struct server *srv)
 	       (srv->flags & SRV_F_MAPPORTS);
 }

-/* Detach server from proxy list. It is supported to call this
- * even if the server is not yet in the list
- * Must be called under thread isolation or when it is safe to assume
- * that the parent proxy doesn't is not skimming through the server list
+/* Detach <srv> server from its parent proxy list.
+ *
+ * Must be called under thread isolation.
  */
 static inline void srv_detach(struct server *srv)
 {
 	struct proxy *px = srv->proxy;
-
-	if (px->srv == srv)
-		px->srv = srv->next;
-	else {
 	struct server *prev;

+	if (px->srv == srv) {
+		px->srv = srv->next;
+	}
+	else {
 		for (prev = px->srv; prev && prev->next != srv; prev = prev->next)
 			;
-		BUG_ON(!prev); /* Server instance not found in proxy list ? */
+		BUG_ON(!prev);
 		prev->next = srv->next;
 	}
-	/* reset the proxy's ready_srv if it was this one */
+	/* Reset the proxy's ready_srv if it was this one. */
 	HA_ATOMIC_CAS(&px->ready_srv, &srv, NULL);
 }

View file
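srv_detach() above is a plain singly-linked-list unlink: pop the head if the server is first, otherwise walk to its predecessor and splice it out. A minimal standalone model of that logic (struct and function names are illustrative, not HAProxy's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy versions of the proxy/server chaining used by srv_detach(). */
struct srv { struct srv *next; };
struct px  { struct srv *head; };

static void detach(struct px *px, struct srv *srv)
{
	struct srv *prev;

	if (px->head == srv) {
		px->head = srv->next;	/* server was the list head */
	}
	else {
		/* walk to the predecessor of srv */
		for (prev = px->head; prev && prev->next != srv; prev = prev->next)
			;
		assert(prev);		/* BUG_ON(!prev) in the original */
		prev->next = srv->next;	/* splice srv out */
	}
}
```

This also shows why the real function demands thread isolation: a concurrent reader walking `px->srv` could observe the list mid-splice.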

@@ -73,6 +73,14 @@ struct ckch_conf {
 		char *id;
 		char **domains;
 	} acme;
+	struct {
+		struct {
+			char *type;	/* "RSA" or "ECSDA" */
+			int bits;	/* bits for RSA */
+			char *curves;	/* NID of curves for ECDSA */
+		} key;
+		int on;
+	} gencrt;
 };
/* /*

View file

@@ -80,6 +80,7 @@ void ssl_store_delete_cafile_entry(struct cafile_entry *ca_e);
 int ssl_store_load_ca_from_buf(struct cafile_entry *ca_e, char *cert_buf, int append);
 int ssl_store_load_locations_file(char *path, int create_if_none, enum cafile_type type);
 int __ssl_store_load_locations_file(char *path, int create_if_none, enum cafile_type type, int shuterror);
+const char *ha_default_cert_dir();

 extern struct cert_exts cert_exts[];
 extern int (*ssl_commit_crlfile_cb)(const char *path, X509_STORE *ctx, char **err);

View file

@@ -28,6 +28,7 @@

 /* crt-list entry functions */
 void ssl_sock_free_ssl_conf(struct ssl_bind_conf *conf);
+struct ssl_bind_conf *crtlist_dup_ssl_conf(struct ssl_bind_conf *src);
 char **crtlist_dup_filters(char **args, int fcount);
 void crtlist_free_filters(char **args);
 void crtlist_entry_free(struct crtlist_entry *entry);

View file

@@ -32,6 +32,8 @@ int ssl_sock_set_generated_cert(SSL_CTX *ctx, unsigned int key, struct bind_conf
 unsigned int ssl_sock_generated_cert_key(const void *data, size_t len);
 int ssl_sock_gencert_load_ca(struct bind_conf *bind_conf);
 void ssl_sock_gencert_free_ca(struct bind_conf *bind_conf);
+EVP_PKEY *ssl_gen_EVP_PKEY(int keytype, int curves, int bits, char **errmsg);
+X509 *ssl_gen_x509(EVP_PKEY *pkey);

 #endif /* USE_OPENSSL */
 #endif /* _HAPROXY_SSL_GENCERT_H */

View file

@@ -194,7 +194,7 @@ struct issuer_chain {

 struct connection;

-typedef void (*ssl_sock_msg_callback_func)(struct connection *conn,
+typedef void (*ssl_sock_msg_callback_func)(
                                            int write_p, int version, int content_type,
                                            const void *buf, size_t len, SSL *ssl);
@@ -338,6 +338,8 @@ struct global_ssl {
 	int renegotiate;	/* Renegotiate mode (SSL_RENEGOTIATE_ flag) */
 	char **passphrase_cmd;
 	int passphrase_cmd_args_cnt;
+	unsigned int certificate_compression:1; /* allow to explicitely disable certificate compression */
 };

 /* The order here matters for picking a default context,
@@ -361,6 +363,7 @@ struct passphrase_cb_data {
 	const char *path;
 	struct ckch_data *ckch_data;
 	int passphrase_idx;
+	int callback_called;
 };

 #endif /* USE_OPENSSL */

View file

@@ -25,12 +25,13 @@

 #include <haproxy/connection.h>
+#include <haproxy/counters.h>
 #include <haproxy/openssl-compat.h>
 #include <haproxy/pool-t.h>
 #include <haproxy/proxy-t.h>
 #include <haproxy/quic_conn-t.h>
 #include <haproxy/ssl_sock-t.h>
-#include <haproxy/stats.h>
+#include <haproxy/stats-t.h>
 #include <haproxy/thread.h>

 extern struct list tlskeys_reference;
@@ -73,7 +74,7 @@ int ssl_sock_get_alpn(const struct connection *conn, void *xprt_ctx,
                       const char **str, int *len);
 int ssl_bio_and_sess_init(struct connection *conn, SSL_CTX *ssl_ctx,
                           SSL **ssl, BIO **bio, BIO_METHOD *bio_meth, void *ctx);
-int ssl_sock_srv_try_reuse_sess(struct ssl_sock_ctx *ctx, struct server *srv);
+void ssl_sock_srv_try_reuse_sess(struct ssl_sock_ctx *ctx, struct server *srv);
 const char *ssl_sock_get_sni(struct connection *conn);
 const char *ssl_sock_get_cert_sig(struct connection *conn);
 const char *ssl_sock_get_cipher_name(struct connection *conn);


@@ -47,7 +47,7 @@ struct shm_stats_file_hdr {
 */
 struct {
 pid_t pid;
-int heartbeat; // last activity of this process + heartbeat timeout, in ticks
+uint heartbeat; // last activity of this process + heartbeat timeout, in ticks
 } slots[64];
 int objects; /* actual number of objects stored in the shm */
 int objects_slots; /* total available objects slots unless map is resized */


@@ -25,6 +25,7 @@
 #include <import/ebtree-t.h>
 #include <haproxy/api-t.h>
 #include <haproxy/buf-t.h>
+#include <haproxy/counters-t.h>
 /* Flags for applet.ctx.stats.flags */
 #define STAT_F_FMT_HTML 0x00000001 /* dump the stats in HTML format */
@@ -515,23 +516,13 @@ struct field {
 } u;
 };
-enum counters_type {
-COUNTERS_FE = 0,
-COUNTERS_BE,
-COUNTERS_SV,
-COUNTERS_LI,
-COUNTERS_RSLV,
-COUNTERS_OFF_END
-};
 /* Entity used to generate statistics on an HAProxy component */
 struct stats_module {
 struct list list;
 const char *name;
-/* functor used to generate the stats module using counters provided through data parameter */
-int (*fill_stats)(void *data, struct field *, unsigned int *);
+/* function used to generate the stats module using counters provided through data parameter */
+int (*fill_stats)(struct stats_module *, struct extra_counters *, struct field *, unsigned int *);
 struct stat_col *stats; /* statistics provided by the module */
 void *counters; /* initial values of allocated counters */
@@ -543,12 +534,6 @@ struct stats_module {
 char clearable; /* reset on a clear counters */
 };
-struct extra_counters {
-char *data; /* heap containing counters allocated in a linear fashion */
-size_t size; /* size of allocated data */
-enum counters_type type; /* type of object containing the counters */
-};
 /* stats_domain is used in a flag as a 1 byte field */
 enum stats_domain {
 STATS_DOMAIN_PROXY = 0,
@@ -593,58 +578,9 @@ struct show_stat_ctx {
 int iid, type, sid; /* proxy id, type and service id if bounding of stats is enabled */
 int st_code; /* the status code returned by an action */
 struct buffer chunk; /* temporary buffer which holds a single-line output */
-struct watcher px_watch; /* watcher to automatically update obj1 on backend deletion */
 struct watcher srv_watch; /* watcher to automatically update obj2 on server deletion */
 enum stat_state state; /* phase of output production */
 };
-extern THREAD_LOCAL void *trash_counters;
-#define EXTRA_COUNTERS(name) \
-struct extra_counters *name
-#define EXTRA_COUNTERS_GET(counters, mod) \
-(likely(counters) ? \
-((void *)((counters)->data + (mod)->counters_off[(counters)->type])) : \
-(trash_counters))
-#define EXTRA_COUNTERS_REGISTER(counters, ctype, alloc_failed_label) \
-do { \
-typeof(*counters) _ctr; \
-_ctr = calloc(1, sizeof(*_ctr)); \
-if (!_ctr) \
-goto alloc_failed_label; \
-_ctr->type = (ctype); \
-*(counters) = _ctr; \
-} while (0)
-#define EXTRA_COUNTERS_ADD(mod, counters, new_counters, csize) \
-do { \
-typeof(counters) _ctr = (counters); \
-(mod)->counters_off[_ctr->type] = _ctr->size; \
-_ctr->size += (csize); \
-} while (0)
-#define EXTRA_COUNTERS_ALLOC(counters, alloc_failed_label) \
-do { \
-typeof(counters) _ctr = (counters); \
-_ctr->data = malloc((_ctr)->size); \
-if (!_ctr->data) \
-goto alloc_failed_label; \
-} while (0)
-#define EXTRA_COUNTERS_INIT(counters, mod, init_counters, init_counters_size) \
-do { \
-typeof(counters) _ctr = (counters); \
-memcpy(_ctr->data + mod->counters_off[_ctr->type], \
-(init_counters), (init_counters_size)); \
-} while (0)
-#define EXTRA_COUNTERS_FREE(counters) \
-do { \
-if (counters) { \
-free((counters)->data); \
-free(counters); \
-} \
-} while (0)
 #endif /* _HAPROXY_STATS_T_H */


@@ -24,6 +24,7 @@
 #define _HAPROXY_STATS_H
 #include <haproxy/api.h>
+#include <haproxy/counters.h>
 #include <haproxy/listener-t.h>
 #include <haproxy/stats-t.h>
 #include <haproxy/tools-t.h>
@@ -167,7 +168,8 @@ static inline enum stats_domain_px_cap stats_px_get_cap(uint32_t domain)
 }
 int stats_allocate_proxy_counters_internal(struct extra_counters **counters,
-int type, int px_cap);
+int type, int px_cap,
+char **storage, size_t step);
 int stats_allocate_proxy_counters(struct proxy *px);
 void stats_register_module(struct stats_module *m);


@@ -224,7 +224,7 @@ static forceinline char *sc_show_flags(char *buf, size_t len, const char *delim,
 _(SC_FL_NEED_BUFF, _(SC_FL_NEED_ROOM,
 _(SC_FL_RCV_ONCE, _(SC_FL_SND_ASAP, _(SC_FL_SND_NEVERWAIT, _(SC_FL_SND_EXP_MORE,
 _(SC_FL_ABRT_WANTED, _(SC_FL_SHUT_WANTED, _(SC_FL_ABRT_DONE, _(SC_FL_SHUT_DONE,
-_(SC_FL_EOS, _(SC_FL_HAVE_BUFF))))))))))))))))))));
+_(SC_FL_EOS, _(SC_FL_HAVE_BUFF, _(SC_FL_NO_FASTFWD)))))))))))))))))))));
 /* epilogue */
 _(~0U);
 return buf;


@@ -24,6 +24,7 @@
 #include <haproxy/api.h>
 #include <haproxy/connection.h>
+#include <haproxy/hstream-t.h>
 #include <haproxy/htx-t.h>
 #include <haproxy/obj_type.h>
 #include <haproxy/stconn-t.h>
@@ -45,10 +46,12 @@ void se_shutdown(struct sedesc *sedesc, enum se_shut_mode mode);
 struct stconn *sc_new_from_endp(struct sedesc *sedesc, struct session *sess, struct buffer *input);
 struct stconn *sc_new_from_strm(struct stream *strm, unsigned int flags);
 struct stconn *sc_new_from_check(struct check *check, unsigned int flags);
+struct stconn *sc_new_from_haterm(struct sedesc *sd, struct session *sess, struct buffer *input);
 void sc_free(struct stconn *sc);
 int sc_attach_mux(struct stconn *sc, void *target, void *ctx);
 int sc_attach_strm(struct stconn *sc, struct stream *strm);
+int sc_attach_hstream(struct stconn *sc, struct hstream *hs);
 void sc_destroy(struct stconn *sc);
 int sc_reset_endp(struct stconn *sc);
@@ -331,6 +334,21 @@ static inline struct check *sc_check(const struct stconn *sc)
 return NULL;
 }
+/* Returns the haterm stream from a sc if the application is a
+ * haterm stream. Otherwise NULL is returned. __sc_hstream() returns the haterm
+ * stream without any control while sc_hstream() check the application type.
+ */
+static inline struct hstream *__sc_hstream(const struct stconn *sc)
+{
+return __objt_hstream(sc->app);
+}
+static inline struct hstream *sc_hstream(const struct stconn *sc)
+{
+if (obj_type(sc->app) == OBJ_TYPE_HATERM)
+return __objt_hstream(sc->app);
+return NULL;
+}
 /* Returns the name of the application layer's name for the stconn,
 * or "NONE" when none is attached.
 */
@@ -397,67 +415,6 @@ static inline void se_need_remote_conn(struct sedesc *se)
 se_fl_set(se, SE_FL_APPLET_NEED_CONN);
 }
-/* The application layer tells the stream connector that it just got the input
- * buffer it was waiting for. A read activity is reported. The SC_FL_HAVE_BUFF
- * flag is set and held until sc_used_buff() is called to indicate it was
- * used.
- */
-static inline void sc_have_buff(struct stconn *sc)
-{
-if (sc->flags & SC_FL_NEED_BUFF) {
-sc->flags &= ~SC_FL_NEED_BUFF;
-sc->flags |= SC_FL_HAVE_BUFF;
-sc_ep_report_read_activity(sc);
-}
-}
-/* The stream connector failed to get an input buffer and is waiting for it.
- * It indicates a willingness to deliver data to the buffer that will have to
- * be retried. As such, callers will often automatically clear SE_FL_HAVE_NO_DATA
- * to be called again as soon as SC_FL_NEED_BUFF is cleared.
- */
-static inline void sc_need_buff(struct stconn *sc)
-{
-sc->flags |= SC_FL_NEED_BUFF;
-}
-/* The stream connector indicates that it has successfully allocated the buffer
- * it was previously waiting for so it drops the SC_FL_HAVE_BUFF bit.
- */
-static inline void sc_used_buff(struct stconn *sc)
-{
-sc->flags &= ~SC_FL_HAVE_BUFF;
-}
-/* Tell a stream connector some room was made in the input buffer and any
- * failed attempt to inject data into it may be tried again. This is usually
- * called after a successful transfer of buffer contents to the other side.
- * A read activity is reported.
- */
-static inline void sc_have_room(struct stconn *sc)
-{
-if (sc->flags & SC_FL_NEED_ROOM) {
-sc->flags &= ~SC_FL_NEED_ROOM;
-sc->room_needed = 0;
-sc_ep_report_read_activity(sc);
-}
-}
-/* The stream connector announces it failed to put data into the input buffer
- * by lack of room. Since it indicates a willingness to deliver data to the
- * buffer that will have to be retried. Usually the caller will also clear
- * SE_FL_HAVE_NO_DATA to be called again as soon as SC_FL_NEED_ROOM is cleared.
- *
- * The caller is responsible to specified the amount of free space required to
- * progress. It must take care to not exceed the buffer size.
- */
-static inline void sc_need_room(struct stconn *sc, ssize_t room_needed)
-{
-sc->flags |= SC_FL_NEED_ROOM;
-BUG_ON_HOT(room_needed > (ssize_t)global.tune.bufsize);
-sc->room_needed = room_needed;
-}
 /* The stream endpoint indicates that it's ready to consume data from the
 * stream's output buffer. Report a send activity if the SE is unblocked.
 */
@@ -568,7 +525,7 @@ static inline size_t se_done_ff(struct sedesc *se)
 }
 }
 }
+se->sc->bytes_out += ret;
 return ret;
 }


@@ -180,6 +180,7 @@ enum {
 STRM_EVT_SHUT_SRV_DOWN = 0x00000004, /* Must be shut because the selected server became available */
 STRM_EVT_SHUT_SRV_UP = 0x00000008, /* Must be shut because a preferred server became available */
 STRM_EVT_KILLED = 0x00000010, /* Must be shut for external reason */
+STRM_EVT_RES = 0x00000020, /* A requested resource is available (a buffer, a conn_slot...) */
 };
 /* This function is used to report flags in debugging tools. Please reflect
@@ -320,7 +321,10 @@ struct stream {
 struct list *current_rule_list; /* this is used to store the current executed rule list. */
 void *current_rule; /* this is used to store the current rule to be resumed. */
 int rules_exp; /* expiration date for current rules execution */
-int tunnel_timeout;
+int tunnel_timeout; /* per-stream tunnel timeout, set by set-timeout action */
+int connect_timeout; /* per-stream connect timeout, set by set-timeout action */
+int queue_timeout; /* per-stream queue timeout, set by set-timeout action */
+int tarpit_timeout; /* per-stream tarpit timeout, set by set-timeout action */
 struct {
 void *ptr; /* Pointer on the entity (def: NULL) */


@@ -59,7 +59,7 @@ extern struct pool_head *pool_head_uniqueid;
 extern struct data_cb sess_conn_cb;
-struct stream *stream_new(struct session *sess, struct stconn *sc, struct buffer *input);
+void *stream_new(struct session *sess, struct stconn *sc, struct buffer *input);
 void stream_free(struct stream *s);
 int stream_upgrade_from_sc(struct stconn *sc, struct buffer *input);
 int stream_set_http_mode(struct stream *s, const struct mux_proto_list *mux_proto);
@@ -412,6 +412,7 @@ static inline void stream_shutdown(struct stream *s, int why)
 static inline unsigned int stream_map_task_state(unsigned int state)
 {
 return ((state & TASK_WOKEN_TIMER) ? STRM_EVT_TIMER : 0) |
+((state & TASK_WOKEN_RES) ? STRM_EVT_RES : 0) |
 ((state & TASK_WOKEN_MSG) ? STRM_EVT_MSG : 0) |
 ((state & TASK_F_UEVT1) ? STRM_EVT_SHUT_SRV_DOWN : 0) |
 ((state & TASK_F_UEVT3) ? STRM_EVT_SHUT_SRV_UP : 0) |


@@ -217,6 +217,7 @@ enum lock_label {
 QC_CID_LOCK,
 CACHE_LOCK,
 GUID_LOCK,
+PROXIES_DEL_LOCK,
 OTHER_LOCK,
 /* WT: make sure never to use these ones outside of development,
 * we need them for lock profiling!


@@ -362,15 +362,19 @@ static inline unsigned long thread_isolated()
 extern uint64_t now_mono_time(void); \
 if (_LK_ != _LK_UN) { \
 th_ctx->lock_level += bal; \
-if (unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) \
+if (unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
+(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) \
 lock_start = now_mono_time(); \
 } \
 (void)(expr); \
 if (_LK_ == _LK_UN) { \
 th_ctx->lock_level += bal; \
-if (th_ctx->lock_level == 0 && unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) \
+if (th_ctx->lock_level == 0 &&\
+unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
+(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) \
 th_ctx->locked_total += now_mono_time() - th_ctx->lock_start_date; \
-} else if (unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) { \
+} else if (unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
+(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) { \
 uint64_t now = now_mono_time(); \
 if (lock_start) \
 th_ctx->lock_wait_total += now - lock_start; \
@@ -384,7 +388,8 @@
 typeof(expr) _expr = (expr); \
 if (_expr == 0) { \
 th_ctx->lock_level += bal; \
-if (unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) { \
+if (unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
+(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) { \
 if (_LK_ == _LK_UN && th_ctx->lock_level == 0) \
 th_ctx->locked_total += now_mono_time() - th_ctx->lock_start_date; \
 else if (_LK_ != _LK_UN && th_ctx->lock_level == 1) \


@@ -69,6 +69,8 @@ enum {
 #define TH_FL_IN_DBG_HANDLER 0x00000100 /* thread currently in the debug signal handler */
 #define TH_FL_IN_WDT_HANDLER 0x00000200 /* thread currently in the wdt signal handler */
 #define TH_FL_IN_ANY_HANDLER 0x00000380 /* mask to test if the thread is in any signal handler */
+#define TH_FL_TASK_PROFILING_L 0x00000400 /* task profiling in locks (also requires TASK_PROFILING) */
+#define TH_FL_TASK_PROFILING_M 0x00000800 /* task profiling in mem alloc (also requires TASK_PROFILING) */
 /* we have 4 buffer-wait queues, in highest to lowest emergency order */
 #define DYNBUF_NBQ 4


@@ -45,6 +45,7 @@
 #define TRC_ARG_CHK (1 << 3)
 #define TRC_ARG_QCON (1 << 4)
 #define TRC_ARG_APPCTX (1 << 5)
+#define TRC_ARG_HSTRM (1 << 6)
 #define TRC_ARG1_PRIV (TRC_ARG_PRIV << 0)
 #define TRC_ARG1_CONN (TRC_ARG_CONN << 0)
@@ -53,6 +54,7 @@
 #define TRC_ARG1_CHK (TRC_ARG_CHK << 0)
 #define TRC_ARG1_QCON (TRC_ARG_QCON << 0)
 #define TRC_ARG1_APPCTX (TRC_ARG_APPCTX << 0)
+#define TRC_ARG1_HSTRM (TRC_ARG_HSTRM << 0)
 #define TRC_ARG2_PRIV (TRC_ARG_PRIV << 8)
 #define TRC_ARG2_CONN (TRC_ARG_CONN << 8)
@@ -61,6 +63,7 @@
 #define TRC_ARG2_CHK (TRC_ARG_CHK << 8)
 #define TRC_ARG2_QCON (TRC_ARG_QCON << 8)
 #define TRC_ARG2_APPCTX (TRC_ARG_APPCTX << 8)
+#define TRC_ARG2_HSTRM (TRC_ARG_HSTRM << 8)
 #define TRC_ARG3_PRIV (TRC_ARG_PRIV << 16)
 #define TRC_ARG3_CONN (TRC_ARG_CONN << 16)
@@ -69,6 +72,7 @@
 #define TRC_ARG3_CHK (TRC_ARG_CHK << 16)
 #define TRC_ARG3_QCON (TRC_ARG_QCON << 16)
 #define TRC_ARG3_APPCTX (TRC_ARG_APPCTX << 16)
+#define TRC_ARG3_HSTRM (TRC_ARG_HSTRM << 16)
 #define TRC_ARG4_PRIV (TRC_ARG_PRIV << 24)
 #define TRC_ARG4_CONN (TRC_ARG_CONN << 24)
@@ -77,6 +81,7 @@
 #define TRC_ARG4_CHK (TRC_ARG_CHK << 24)
 #define TRC_ARG4_QCON (TRC_ARG_QCON << 24)
 #define TRC_ARG4_APPCTX (TRC_ARG_APPCTX << 24)
+#define TRC_ARG4_HSTRM (TRC_ARG_HSTRM << 24)
 /* usable to detect the presence of any arg of the desired type */
 #define TRC_ARGS_CONN (TRC_ARG_CONN * 0x01010101U)
@@ -85,6 +90,7 @@
 #define TRC_ARGS_CHK (TRC_ARG_CHK * 0x01010101U)
 #define TRC_ARGS_QCON (TRC_ARG_QCON * 0x01010101U)
 #define TRC_ARGS_APPCTX (TRC_ARG_APPCTX * 0x01010101U)
+#define TRC_ARGS_HSTRM (TRC_ARG_HSTRM * 0x01010101U)

 enum trace_state {
@@ -148,6 +154,7 @@ struct trace_ctx {
 const struct check *check;
 const struct quic_conn *qc;
 const struct appctx *appctx;
+const struct hstream *hs;
 };
 /* Regarding the verbosity, if <decoding> is not NULL, it must point to a NULL-
@@ -185,7 +192,7 @@ struct trace_source {
 const void *lockon_ptr; // what to lockon when lockon is set
 const struct trace_source *follow; // other trace source's tracker to follow
 int cmdline; // true if source was activated via -dt command line args
-};
+} THREAD_ALIGNED();
 #endif /* _HAPROXY_TRACE_T_H */


@@ -1,6 +1,6 @@
 varnishtest "Test the support for tcp-md5sig option (linux only)"
-feature cmd "$HAPROXY_PROGRAM -cc 'feature(HAVE_TCP_MD5SIG)'"
+feature cmd "$HAPROXY_PROGRAM -cc 'feature(HAVE_WORKING_TCP_MD5SIG)'"
 feature ignore_unknown_macro
 haproxy h1 -conf {


@@ -1,8 +1,9 @@
 #REGTEST_TYPE=devel
-# This reg-test checks the behaviour of the jwt_decrypt_secret and
-# jwt_decrypt_cert converters that decode a JSON Web Encryption (JWE) token,
-# checks its signature and decrypt its content (RFC 7516).
+# This reg-test checks the behaviour of the jwt_decrypt_secret,
+# jwt_decrypt_cert and jwt_decrypt_jwk converters that decode a JSON Web
+# Encryption (JWE) token, check its signature and decrypt its content (RFC
+# 7516).
 # The tokens have two tiers of encryption, one that is used to encrypt a secret
 # ("alg" field of the JOSE header) and this secret is then used to
 # encrypt/decrypt the data contained in the token ("enc" field of the JOSE
@@ -13,12 +14,12 @@
 # have a hardcoded "AWS-LC UNMANAGED" value put in the response header instead
 # of the decrypted contents.
-varnishtest "Test the 'jwt_decrypt' functionalities"
+varnishtest "Test the 'jwt_decrypt_jwk' functionalities"
 feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(3.4-dev2)'"
 feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL) && openssl_version_atleast(1.1.1)'"
 feature ignore_unknown_macro
-server s1 -repeat 10 {
+server s1 -repeat 20 {
 rxreq
 txresp
 } -start
@@ -57,6 +58,7 @@ haproxy h1 -conf {
 use_backend secret_based_alg if { path_beg /secret }
 use_backend pem_based_alg if { path_beg /pem }
+use_backend jwk if { path_beg /jwk }
 default_backend dflt
@@ -85,6 +87,21 @@ haproxy h1 -conf {
 http-after-response set-header X-Decrypted %[var(txn.decrypted)]
 server s1 ${s1_addr}:${s1_port}
+backend jwk
+http-request set-var(txn.jwe) http_auth_bearer
+http-request set-var(txn.jwk) req.fhdr(X-JWK)
+http-request set-var(txn.decrypted) var(txn.jwe),jwt_decrypt_jwk(txn.jwk)
+.if ssllib_name_startswith(AWS-LC)
+acl aws_unmanaged var(txn.jwe),jwt_header_query('$.alg') -m str "A128KW"
+http-request set-var(txn.decrypted) str("AWS-LC UNMANAGED") if aws_unmanaged
+.endif
+http-after-response set-header X-Decrypted %[var(txn.decrypted)]
+server s1 ${s1_addr}:${s1_port}
 backend dflt
 server s1 ${s1_addr}:${s1_port}
@@ -102,6 +119,10 @@ client c1_1 -connect ${h1_mainfe_sock} {
 txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-Secret: ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
 rxresp
 expect resp.http.x-decrypted == "Setec Astronomy"
+txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-JWK: {\"kty\":\"oct\", \"k\":\"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg\"}"
+rxresp
+expect resp.http.x-decrypted == "Setec Astronomy"
 } -run
@@ -113,6 +134,10 @@ client c1_2 -connect ${h1_mainfe_sock} {
 txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8v" -hdr "X-Secret: ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
 rxresp
 expect resp.http.x-decrypted == ""
+txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8v" -hdr "X-JWK: {\"kty\":\"oct\", \"k\":\"ZMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg\"}"
+rxresp
+expect resp.http.x-decrypted == ""
 } -run
@@ -124,6 +149,10 @@ client c1_3 -connect ${h1_mainfe_sock} {
 txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8w" -hdr "X-Secret: zMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg"
 rxresp
 expect resp.http.x-decrypted == ""
+txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiZGlyIiwgImVuYyI6ICJBMjU2R0NNIn0..hxCk0nP4aVNpgfb7.inlyAZtUzDCTpD_9iuWx.Pyu90cmgkXenMIVu9RUp8v" -hdr "X-JWK: {\"kty\":\"oct\", \"k\":\"zMpktzGq1g6_r4fKVdnx9OaYr4HjxPjIs7l7SwAsgsg\"}"
+rxresp
+expect resp.http.x-decrypted == ""
 } -run
@@ -134,6 +163,11 @@ client c2_1 -connect ${h1_mainfe_sock} {
 txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiAiQTEyOEtXIiwgImVuYyI6ICJBMTI4Q0JDLUhTMjU2In0.AaOyP1zNjsywJOoQ941JJWT4LQIDlpy3UibM_48HrsoCJ5ENpQhfbQ.h2ZBUiy9ofvcDZOwV2iVJA.K0FhK6ri44ZWmtFUtJRpiZSeT8feKX5grFpU8xG5026bGXAdZADO4ZkQ8DRvSEE9DwNIlK6cIEoSavm12gSzQVXajz3MWv5U6VbK5gPFCeFjJfMPmdQ9THIi-hapcueSxYz2rkcGxo3iP3ixE_bww8UB_XlQvnokhFxtf8NushMkjef4RDrW5vQu4j_qPbqG334msDKmFi8Klprs6JktrADeEJ0bPGN80NKEWp7XPcCbfmcwYe-9z_tPw_KJcQhLpQevfPLfVI4WjPgPxYNGw03qKYnLD7oTjr9qCrQmzUVXutlhxfpD3UQr11SJu8q19Ug82bON-GRd2CjpSrErQq42dd0_mWjG9iDqjqpYFBK9DV_qawy2dxFbfIcCsnb6ewifjoJLiFg2OT7-YdTaC7kqaXeE1JpA-OtMXN72FUDrnQ8r9ifj_VpMNvBf_36dbOCT-cGwIOI8Pf6HH2smXULhtBv9q-qO2zyScpmliqZDXUqmvQ8rxi-xYI2hijV80jo14teZgIotWsZE2FrMPJTkegDmh4cG5UzoUsQxzPhXqHvkss6Hv7h-_fmvXvXY1AZ8T8bL1qM4bS8mKpewmGtjmU6S220tL60ieT2QL0vmTFlJkOE8uFreWlPnxNKBix_zj4Smhg1zS_sl7GoXhp5Q_QY3MOMM5-gCAALY0crqLLWtHswElVOiJSyd64T9HFyXm7Rleqq2kLXmTvDhOR6lzMnA0rcGP7lQGYlLZgFiicsMY722XlKI3v1-cJYvj2RZMPe1ijBLFFTqyPeCBkbsDC3XCpWhMByNHSHKN3t-NJmQBIC-89ZeOMU-WBtqrDDi_CMnaz9mwkyt3P7ja_fVskc4KKBBlMVYDZ3DJeJw3Kg9Pie0XlqHkD6W1vyAWjOM2z76Rh_3553dLAH1HxNRwidLjq3SvoaX3TOU5O2_omFGPBek7QdzhNBGLgv6Zlul_XxZq9UGiVo1jrnkd40_vAZQRL6NyMxGBEij_b8F_wDMz5njrL-a0c2Y5mMno-q8gmM4sFKI1BS5HsrUAw.PFFSFlDslALnebAdaqS_MA" -hdr "X-Secret: 3921VrO5TrLvPQ-NFLlghQ"
 rxresp
 expect resp.http.x-decrypted ~ "(Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo\\. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt\\. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem\\. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur\\? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur\\?|AWS-LC UNMANAGED)"
+txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiQTEyOEtXIiwgImVuYyI6ICJBMTI4Q0JDLUhTMjU2In0.AaOyP1zNjsywJOoQ941JJWT4LQIDlpy3UibM_48HrsoCJ5ENpQhfbQ.h2ZBUiy9ofvcDZOwV2iVJA.K0FhK6ri44ZWmtFUtJRpiZSeT8feKX5grFpU8xG5026bGXAdZADO4ZkQ8DRvSEE9DwNIlK6cIEoSavm12gSzQVXajz3MWv5U6VbK5gPFCeFjJfMPmdQ9THIi-hapcueSxYz2rkcGxo3iP3ixE_bww8UB_XlQvnokhFxtf8NushMkjef4RDrW5vQu4j_qPbqG334msDKmFi8Klprs6JktrADeEJ0bPGN80NKEWp7XPcCbfmcwYe-9z_tPw_KJcQhLpQevfPLfVI4WjPgPxYNGw03qKYnLD7oTjr9qCrQmzUVXutlhxfpD3UQr11SJu8q19Ug82bON-GRd2CjpSrErQq42dd0_mWjG9iDqjqpYFBK9DV_qawy2dxFbfIcCsnb6ewifjoJLiFg2OT7-YdTaC7kqaXeE1JpA-OtMXN72FUDrnQ8r9ifj_VpMNvBf_36dbOCT-cGwIOI8Pf6HH2smXULhtBv9q-qO2zyScpmliqZDXUqmvQ8rxi-xYI2hijV80jo14teZgIotWsZE2FrMPJTkegDmh4cG5UzoUsQxzPhXqHvkss6Hv7h-_fmvXvXY1AZ8T8bL1qM4bS8mKpewmGtjmU6S220tL60ieT2QL0vmTFlJkOE8uFreWlPnxNKBix_zj4Smhg1zS_sl7GoXhp5Q_QY3MOMM5-gCAALY0crqLLWtHswElVOiJSyd64T9HFyXm7Rleqq2kLXmTvDhOR6lzMnA0rcGP7lQGYlLZgFiicsMY722XlKI3v1-cJYvj2RZMPe1ijBLFFTqyPeCBkbsDC3XCpWhMByNHSHKN3t-NJmQBIC-89ZeOMU-WBtqrDDi_CMnaz9mwkyt3P7ja_fVskc4KKBBlMVYDZ3DJeJw3Kg9Pie0XlqHkD6W1vyAWjOM2z76Rh_3553dLAH1HxNRwidLjq3SvoaX3TOU5O2_omFGPBek7QdzhNBGLgv6Zlul_XxZq9UGiVo1jrnkd40_vAZQRL6NyMxGBEij_b8F_wDMz5njrL-a0c2Y5mMno-q8gmM4sFKI1BS5HsrUAw.PFFSFlDslALnebAdaqS_MA" -hdr "X-JWK: {\"kty\":\"oct\", \"k\":\"3921VrO5TrLvPQ-NFLlghQ\"}"
+rxresp
+expect resp.http.x-decrypted ~ "(Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo\\. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt\\. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem\\. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur\\? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur\\?|AWS-LC UNMANAGED)"
} -run
@@ -178,6 +212,10 @@ client c5 -connect ${h1_mainfe_sock} {
txreq -url "/secret" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMTkyQ0JDLUhTMzg0IiwiaXYiOiJzVE81QjlPRXFuaUhCX3dYIiwidGFnIjoid2M1ZnRpYUFnNGNOR1JkZzNWQ3FXdyJ9.2zqnM9zeNU-eAMp5h2uFJyxbHHKsZs9YAYKzOcIF3d3Q9uq1TMQAvqOIuXw3kU9o.hh5aObIoIMR6Ke0rXm6V1A.R7U-4OlqOR6f2C1b3nI5bFqZBIGNBgza7FfoPEgrQT8.asJCzUAHCuxS7o8Ut4ENfaY5RluLB35F" -hdr "X-Secret: vprpatiNyI-biJY57qr8Gg4--4Rycgb2G5yO1_myYAw"
rxresp
expect resp.http.x-decrypted == "My Encrypted message"
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMTkyQ0JDLUhTMzg0IiwiaXYiOiJzVE81QjlPRXFuaUhCX3dYIiwidGFnIjoid2M1ZnRpYUFnNGNOR1JkZzNWQ3FXdyJ9.2zqnM9zeNU-eAMp5h2uFJyxbHHKsZs9YAYKzOcIF3d3Q9uq1TMQAvqOIuXw3kU9o.hh5aObIoIMR6Ke0rXm6V1A.R7U-4OlqOR6f2C1b3nI5bFqZBIGNBgza7FfoPEgrQT8.asJCzUAHCuxS7o8Ut4ENfaY5RluLB35F" -hdr "X-JWK: {\"k\":\"vprpatiNyI-biJY57qr8Gg4--4Rycgb2G5yO1_myYAw\",\"kty\":\"oct\"}"
rxresp
expect resp.http.x-decrypted == "My Encrypted message"
} -run
@@ -187,6 +225,12 @@ client c6 -connect ${h1_mainfe_sock} {
txreq -url "/pem" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBMV81IiwgImVuYyI6ICJBMjU2R0NNIn0.ew8AbprGcd_J73-CZPIsE1YonD9rtcL7VCuOOuVkrpS_9UzA9_kMh1yw20u-b5rKJAhmFMCQPXl44ro6IzOeHu8E2X_NlPEnQfyNVQ4R1HB_E9sSk5BLxOH3aHkVUh0I-e2eDDj-pdI3OrdjZtnZEBeQ7tpMcoBEbn1VGg7Pmw4qtdS-0qnDSs-PttU-cejjgPUNLRU8UdoRVC9uJKacJms110QugDuFuMYTTSU2nbIYh0deCMRAuKGWt0Ii6EMYW2JaJ7JfXag59Ar1uylQPyEVrocnOsDuB9xnp2jd796qCPdKxBK9yKUnwjal4SQpYbutr40QzG1S4MsKaUorLg.0el2ruY0mm2s7LUR.X5RI6dF06Y_dbAr8meb-6SG5enj5noto9nzgQU5HDrYdiUofPptIf6E-FikKUM9QR4pY9SyphqbPYeAN1ZYVxBrR8tUf4Do2kw1biuuRAmuIyytpmxwvY946T3ctu1Zw3Ymwe-jWXX08EngzssvzFOGT66gkdufrTkC45Fkr0RBOmWa5OVVg_VR6LwcivtQMmlArlrwbaDmmLqt_2p7afT0UksEz4loq0sskw-p7GbhB2lpzXoDnijdHrQkftRbVCiDbK4-qGr7IRFb0YOHvyVFr-kmDoJv2Zsg_rPKV1LkYmPJUbVDo9T3RAcLinlKPK4ZPC_2bWj3M9BvfOq1HeuyVWzX2Cb1mHFdxXFGqaLPfsE0VOfn0GqL7oHVbuczYYw2eKdmiw5LEMwuuJEdYDE9IIFEe8oRB4hNZ0XMYB6oqqZejD0Fh6nqlj5QUrTYpTSE-3LkgK2zRJ0oZFXZyHCB426bmViuE0mXF7twkQep09g0U35-jFBZcSYBDvZZL1t5d_YEQ0QtO0mEeEpGb0Pvk_EsSMFib7NxClz4_rdtwWCFuM4uFOS5vrQMiMqi_TadhLxrugRFhJpsibuScCiJ7eNDrUvwSWEwv1U593MUX3guDq_ONOo_49EOJSyRJtQCNC6FW6GLWSz9TCo6g5LCnXt-pqwu0Iymr7ZTQ3MTsdq2G55JM2e6SdG43iET8r235hynmXHKPUYHlSjsC2AEAY_pGDO0akIhf4wDVIM5rytn-rjQf-29ZJp05g6KPe-EaN1C-X7aBGhgAEgnX-iaXXbotpGeKRTNj2jAG1UrkYi6BGHxluiXJ8jH_LjHuxKyzIObqK8p28ePDKRL-jyNTrvGW2uorgb_u7HGmWYIWLTI7obnZ5vw3MbkjcwEd4bX5JXUj2rRsUWMlZSSFVO9Wgf7MBvcLsyF0Yqun3p0bi__edmcqNF_uuYZT-8jkUlMborqIDDCYYqIolgi5R1Bmut-gFYq6xyfEncxOi50xmYon50UulVnAH-up_RELGtCjmAivaJb8.upVY733IMAT8YbMab2PZnw" -hdr "X-PEM: ${testdir}/rsa1_5.pem"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBMV81IiwgImVuYyI6ICJBMjU2R0NNIn0.ew8AbprGcd_J73-CZPIsE1YonD9rtcL7VCuOOuVkrpS_9UzA9_kMh1yw20u-b5rKJAhmFMCQPXl44ro6IzOeHu8E2X_NlPEnQfyNVQ4R1HB_E9sSk5BLxOH3aHkVUh0I-e2eDDj-pdI3OrdjZtnZEBeQ7tpMcoBEbn1VGg7Pmw4qtdS-0qnDSs-PttU-cejjgPUNLRU8UdoRVC9uJKacJms110QugDuFuMYTTSU2nbIYh0deCMRAuKGWt0Ii6EMYW2JaJ7JfXag59Ar1uylQPyEVrocnOsDuB9xnp2jd796qCPdKxBK9yKUnwjal4SQpYbutr40QzG1S4MsKaUorLg.0el2ruY0mm2s7LUR.X5RI6dF06Y_dbAr8meb-6SG5enj5noto9nzgQU5HDrYdiUofPptIf6E-FikKUM9QR4pY9SyphqbPYeAN1ZYVxBrR8tUf4Do2kw1biuuRAmuIyytpmxwvY946T3ctu1Zw3Ymwe-jWXX08EngzssvzFOGT66gkdufrTkC45Fkr0RBOmWa5OVVg_VR6LwcivtQMmlArlrwbaDmmLqt_2p7afT0UksEz4loq0sskw-p7GbhB2lpzXoDnijdHrQkftRbVCiDbK4-qGr7IRFb0YOHvyVFr-kmDoJv2Zsg_rPKV1LkYmPJUbVDo9T3RAcLinlKPK4ZPC_2bWj3M9BvfOq1HeuyVWzX2Cb1mHFdxXFGqaLPfsE0VOfn0GqL7oHVbuczYYw2eKdmiw5LEMwuuJEdYDE9IIFEe8oRB4hNZ0XMYB6oqqZejD0Fh6nqlj5QUrTYpTSE-3LkgK2zRJ0oZFXZyHCB426bmViuE0mXF7twkQep09g0U35-jFBZcSYBDvZZL1t5d_YEQ0QtO0mEeEpGb0Pvk_EsSMFib7NxClz4_rdtwWCFuM4uFOS5vrQMiMqi_TadhLxrugRFhJpsibuScCiJ7eNDrUvwSWEwv1U593MUX3guDq_ONOo_49EOJSyRJtQCNC6FW6GLWSz9TCo6g5LCnXt-pqwu0Iymr7ZTQ3MTsdq2G55JM2e6SdG43iET8r235hynmXHKPUYHlSjsC2AEAY_pGDO0akIhf4wDVIM5rytn-rjQf-29ZJp05g6KPe-EaN1C-X7aBGhgAEgnX-iaXXbotpGeKRTNj2jAG1UrkYi6BGHxluiXJ8jH_LjHuxKyzIObqK8p28ePDKRL-jyNTrvGW2uorgb_u7HGmWYIWLTI7obnZ5vw3MbkjcwEd4bX5JXUj2rRsUWMlZSSFVO9Wgf7MBvcLsyF0Yqun3p0bi__edmcqNF_uuYZT-8jkUlMborqIDDCYYqIolgi5R1Bmut-gFYq6xyfEncxOi50xmYon50UulVnAH-up_RELGtCjmAivaJb8.upVY733IMAT8YbMab2PZnw" \
-hdr "X-JWK: { \"kty\": \"RSA\", \"e\": \"AQAB\", \"n\": \"wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w\", \"kid\": \"ff3c5c96-392e-46ef-a839-6ff16027af78\", \"d\": \"b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ\", \"p\": \"8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0\", \"q\": \"zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M\", \"dp\": \"1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE\", \"dq\": \"kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM\", \"qi\": \"j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5Vbsg_PWYpFSR7re6FRbF9EYOM7F2-HRv1idxKCWoyQfBqk\" }"
rxresp
expect resp.http.x-decrypted == "Sed ut perspiciatis unde omnis iste natus error sit voluptatem doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. porro quisquam est, qui dolorem ipsum quia dolor sit amet, adipisci velit, sed quia non numquam eius modi tempora incidunt ut dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in voluptate velit esse quam nihil molestiae consequatur, vel illum qui eum fugiat quo voluptas nulla pariatur?"
} -run
@@ -199,3 +243,22 @@ client c7 -connect ${h1_mainfe_sock} {
} -run
# Test 'jwt_decrypt_jwk' error cases
client c8 -connect ${h1_mainfe_sock} {
# Invalid 'oct' JWK
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMTkyQ0JDLUhTMzg0IiwiaXYiOiJzVE81QjlPRXFuaUhCX3dYIiwidGFnIjoid2M1ZnRpYUFnNGNOR1JkZzNWQ3FXdyJ9.2zqnM9zeNU-eAMp5h2uFJyxbHHKsZs9YAYKzOcIF3d3Q9uq1TMQAvqOIuXw3kU9o.hh5aObIoIMR6Ke0rXm6V1A.R7U-4OlqOR6f2C1b3nI5bFqZBIGNBgza7FfoPEgrQT8.asJCzUAHCuxS7o8Ut4ENfaY5RluLB35F" -hdr "X-JWK: {\"k\":\"invalid\",\"kty\":\"oct\"}"
rxresp
expect resp.http.x-decrypted == ""
# Wrong JWK type
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiJBMjU2R0NNS1ciLCJlbmMiOiJBMTkyQ0JDLUhTMzg0IiwiaXYiOiJzVE81QjlPRXFuaUhCX3dYIiwidGFnIjoid2M1ZnRpYUFnNGNOR1JkZzNWQ3FXdyJ9.2zqnM9zeNU-eAMp5h2uFJyxbHHKsZs9YAYKzOcIF3d3Q9uq1TMQAvqOIuXw3kU9o.hh5aObIoIMR6Ke0rXm6V1A.R7U-4OlqOR6f2C1b3nI5bFqZBIGNBgza7FfoPEgrQT8.asJCzUAHCuxS7o8Ut4ENfaY5RluLB35F" -hdr "X-JWK: {\"k\":\"invalid\",\"kty\":\"RSA\"}"
rxresp
expect resp.http.x-decrypted == ""
# Invalid 'RSA' JWK (truncated 'qi')
txreq -url "/jwk" -hdr "Authorization: Bearer eyJhbGciOiAiUlNBMV81IiwgImVuYyI6ICJBMjU2R0NNIn0.ew8AbprGcd_J73-CZPIsE1YonD9rtcL7VCuOOuVkrpS_9UzA9_kMh1yw20u-b5rKJAhmFMCQPXl44ro6IzOeHu8E2X_NlPEnQfyNVQ4R1HB_E9sSk5BLxOH3aHkVUh0I-e2eDDj-pdI3OrdjZtnZEBeQ7tpMcoBEbn1VGg7Pmw4qtdS-0qnDSs-PttU-cejjgPUNLRU8UdoRVC9uJKacJms110QugDuFuMYTTSU2nbIYh0deCMRAuKGWt0Ii6EMYW2JaJ7JfXag59Ar1uylQPyEVrocnOsDuB9xnp2jd796qCPdKxBK9yKUnwjal4SQpYbutr40QzG1S4MsKaUorLg.0el2ruY0mm2s7LUR.X5RI6dF06Y_dbAr8meb-6SG5enj5noto9nzgQU5HDrYdiUofPptIf6E-FikKUM9QR4pY9SyphqbPYeAN1ZYVxBrR8tUf4Do2kw1biuuRAmuIyytpmxwvY946T3ctu1Zw3Ymwe-jWXX08EngzssvzFOGT66gkdufrTkC45Fkr0RBOmWa5OVVg_VR6LwcivtQMmlArlrwbaDmmLqt_2p7afT0UksEz4loq0sskw-p7GbhB2lpzXoDnijdHrQkftRbVCiDbK4-qGr7IRFb0YOHvyVFr-kmDoJv2Zsg_rPKV1LkYmPJUbVDo9T3RAcLinlKPK4ZPC_2bWj3M9BvfOq1HeuyVWzX2Cb1mHFdxXFGqaLPfsE0VOfn0GqL7oHVbuczYYw2eKdmiw5LEMwuuJEdYDE9IIFEe8oRB4hNZ0XMYB6oqqZejD0Fh6nqlj5QUrTYpTSE-3LkgK2zRJ0oZFXZyHCB426bmViuE0mXF7twkQep09g0U35-jFBZcSYBDvZZL1t5d_YEQ0QtO0mEeEpGb0Pvk_EsSMFib7NxClz4_rdtwWCFuM4uFOS5vrQMiMqi_TadhLxrugRFhJpsibuScCiJ7eNDrUvwSWEwv1U593MUX3guDq_ONOo_49EOJSyRJtQCNC6FW6GLWSz9TCo6g5LCnXt-pqwu0Iymr7ZTQ3MTsdq2G55JM2e6SdG43iET8r235hynmXHKPUYHlSjsC2AEAY_pGDO0akIhf4wDVIM5rytn-rjQf-29ZJp05g6KPe-EaN1C-X7aBGhgAEgnX-iaXXbotpGeKRTNj2jAG1UrkYi6BGHxluiXJ8jH_LjHuxKyzIObqK8p28ePDKRL-jyNTrvGW2uorgb_u7HGmWYIWLTI7obnZ5vw3MbkjcwEd4bX5JXUj2rRsUWMlZSSFVO9Wgf7MBvcLsyF0Yqun3p0bi__edmcqNF_uuYZT-8jkUlMborqIDDCYYqIolgi5R1Bmut-gFYq6xyfEncxOi50xmYon50UulVnAH-up_RELGtCjmAivaJb8.upVY733IMAT8YbMab2PZnw" \
-hdr "X-JWK: { \"kty\": \"RSA\", \"e\": \"AQAB\", \"n\": \"wsqJbopx18NQFYLYOq4ZeMSE89yGiEankUpf25yV8QqroKUGrASj_OeqTWUjwPGKTN1vGFFuHYxiJeAUQH2qQPmg9Oqk6-ATBEKn9COKYniQ5459UxCwmZA2RL6ufhrNyq0JF3GfXkjLDBfhU9zJJEOhknsA0L_c-X4AI3d_NbFdMqxNe1V_UWAlLcbKdwO6iC9fAvwUmDQxgy6R0DC1CMouQpenMRcALaSHar1cm4K-syoNobv3HEuqgZ3s6-hOOSqauqAO0GUozPpaIA7OeruyRl5sTWT0r-iz39bchID2bIKtcqLiFcSYPLBcxmsaQCqRlGhmv6stjTCLV1yT9w\", \"kid\": \"ff3c5c96-392e-46ef-a839-6ff16027af78\", \"d\": \"b9hXfQ8lOtw8mX1dpqPcoElGhbczz_-xq2znCXQpbBPSZBUddZvchRSH5pSSKPEHlgb3CSGIdpLqsBCv0C_XmCM9ViN8uqsYgDO9uCLIDK5plWttbkqA_EufvW03R9UgIKWmOL3W4g4t-C2mBb8aByaGGVNjLnlb6i186uBsPGkvaeLHbQcRQKAvhOUTeNiyiiCbUGJwCm4avMiZrsz1r81Y1Z5izo0ERxdZymxM3FRZ9vjTB-6DtitvTXXnaAm1JTu6TIpj38u2mnNLkGMbflOpgelMNKBZVxSmfobIbFN8CHVc1UqLK2ElsZ9RCQANgkMHlMkOMj-XT0wHa3VBUQ\", \"p\": \"8mgriveKJAp1S7SHqirQAfZafxVuAK_A2QBYPsAUhikfBOvN0HtZjgurPXSJSdgR8KbWV7ZjdJM_eOivIb_XiuAaUdIOXbLRet7t9a_NJtmX9iybhoa9VOJFMBq_rbnbbte2kq0-FnXmv3cukbC2LaEw3aEcDgyURLCgWFqt7M0\", \"q\": \"zbbTv5421GowOfKVEuVoA35CEWgl8mdasnEZac2LWxMwKExikKU5LLacLQlcOt7A6n1ZGUC2wyH8mstO5tV34Eug3fnNrbnxFUEE_ZB_njs_rtZnwz57AoUXOXVnd194seIZF9PjdzZcuwXwXbrZ2RSVW8if_ZH5OVYEM1EsA9M\", \"dp\": \"1BaIYmIKn1X3InGlcSFcNRtSOnaJdFhRpotCqkRssKUx2qBlxs7ln_5dqLtZkx5VM_UE_GE7yzc6BZOwBxtOftdsr8HVh-14ksSR9rAGEsO2zVBiEuW4qZf_aQM-ScWfU--wcczZ0dT-Ou8P87Bk9K9fjcn0PeaLoz3WTPepzNE\", \"dq\": \"kYw2u4_UmWvcXVOeV_VKJ5aQZkJ6_sxTpodRBMPyQmkMHKcW4eKU1mcJju_deqWadw5jGPPpm5yTXm5UkAwfOeookoWpGa7CvVf4kPNI6Aphn3GBjunJHNpPuU6w-wvomGsxd-NqQDGNYKHuFFMcyXO_zWXglQdP_1o1tJ1M-BM\", \"qi\": \"j94Ens784M8zsfwWoJhYq9prcSZOGgNbtFWQZO8HP8pcNM9ls7YA4snTtAS_B4peWWFAFZ0LSKPCxAvJnrq69ocmEKEk7ss1Jo062f9pLTQ6cnhMjev3IqLocIFt5\" }"
rxresp
expect resp.http.x-decrypted == ""
} -run

@@ -0,0 +1,84 @@
varnishtest "Add backend via cli"
feature ignore_unknown_macro
haproxy hsrv -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${fe}"
http-request return status 200
} -start
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${feS}"
force-be-switch if { req.hdr("x-admin") "1" }
use_backend %[req.hdr(x-be)]
defaults def
defaults def_http
mode http
} -start
client c1 -connect ${h1_feS_sock} {
txreq -hdr "x-be: be"
rxresp
expect resp.status == 503
} -run
haproxy h1 -cli {
# non existent backend
send "experimental-mode on; add backend be from def"
expect ~ "Mode is required"
send "experimental-mode on; add backend be from def_http"
expect ~ "New backend registered."
send "add server be/srv ${hsrv_fe_addr}:${hsrv_fe_port}"
expect ~ "New server registered."
send "enable server be/srv"
expect ~ ".*"
}
client c1 -connect ${h1_feS_sock} {
txreq -hdr "x-be: be"
rxresp
expect resp.status == 503
txreq -hdr "x-be: be" -hdr "x-admin: 1"
rxresp
expect resp.status == 200
} -run
haproxy h1 -cli {
send "publish backend be"
expect ~ "Backend published."
}
client c1 -connect ${h1_feS_sock} {
txreq -hdr "x-be: be"
rxresp
expect resp.status == 200
} -run

@@ -0,0 +1,88 @@
varnishtest "Delete backend via cli"
feature ignore_unknown_macro
haproxy h1 -conf {
global
.if feature(THREAD)
thread-groups 1
.endif
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
frontend fe
bind "fd@${feS}"
use_backend be_ref
listen li
bind "fd@${feli}"
backend be_ref
backend be
server s1 ${s1_addr}:${s1_port} disabled
# Defaults with tcp-check rules in it
# Currently this is the only case of runtime ref on an unnamed default
defaults
mode http
timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
timeout client "${HAPROXY_TEST_TIMEOUT-5s}"
timeout server "${HAPROXY_TEST_TIMEOUT-5s}"
option httpchk GET / HTTP/1.1
backend be_unnamed_def_ref
backend be_unnamed_def_ref2
} -start
haproxy h1 -cli {
send "experimental-mode on; del backend other"
expect ~ "No such backend."
send "experimental-mode on; del backend li"
expect ~ "Cannot delete a listen section."
send "experimental-mode on; del backend be_ref"
expect ~ "This proxy cannot be removed at runtime due to other configuration elements pointing to it."
send "show stat be 2 -1"
expect ~ "be,BACKEND,"
send "experimental-mode on; del backend be"
expect ~ "Backend must be unpublished prior to its deletion."
send "unpublish backend be;"
expect ~ ".*"
send "experimental-mode on; del backend be"
expect ~ "Only a backend without server can be deleted."
send "del server be/s1"
expect ~ ".*"
send "experimental-mode on; del backend be"
expect ~ "Backend deleted."
send "show stat be 2 -1"
expect !~ "be,BACKEND,"
}
haproxy h1 -cli {
send "show stat be_unnamed_def_ref 2 -1"
expect ~ "be_unnamed_def_ref,BACKEND,"
send "unpublish backend be_unnamed_def_ref;"
expect ~ ".*"
send "experimental-mode on; del backend be_unnamed_def_ref"
expect ~ "Backend deleted."
send "show stat be_unnamed_def_ref 2 -1"
expect !~ "be_unnamed_def_ref,BACKEND,"
send "unpublish backend be_unnamed_def_ref2;"
expect ~ ".*"
send "experimental-mode on; del backend be_unnamed_def_ref2"
expect ~ "Backend deleted."
}

@@ -15,7 +15,7 @@
# - Check that you have socat
varnishtest "Test the 'set ssl ca-file' feature of the CLI"
-feature cmd "$HAPROXY_PROGRAM -cc 'feature(QUIC) && !feature(QUIC_OPENSSL_COMPAT) && !feature(OPENSSL_WOLFSSL)' && !feature(OPENSSL_AWSLC)'"
+feature cmd "$HAPROXY_PROGRAM -cc 'feature(QUIC) && !feature(QUIC_OPENSSL_COMPAT) && !feature(OPENSSL_WOLFSSL) && !feature(OPENSSL_AWSLC)'"
feature cmd "command -v socat"

@@ -51,7 +51,7 @@ haproxy h1 -cli {
# invalid load-balancing algo
send "add server other/s1 ${s1_addr}:${s1_port}"
-expect ~ "Backend must use a dynamic load balancing to support dynamic servers."
+expect ~ "backend 'other' uses a non dynamic load balancing method"
# invalid mux proto
send "add server other2/s1 ${s1_addr}:${s1_port} proto h2"

@@ -145,7 +145,7 @@ haproxy h1 -cli {
send "show ssl ca-file ${testdir}/certs/set_cafile_interCA1.crt:2"
expect !~ ".*SHA1 FingerPrint: 4FFF535278883264693CEA72C4FAD13F995D0098"
send "show ssl ca-file ${testdir}/certs/set_cafile_interCA1.crt:2"
-expect ~ ".*SHA1 FingerPrint: 3D3D1D10AD74A8135F05A818E10E5FA91433954D"
+expect ~ ".*SHA1 FingerPrint: 3D3D1D10AD74A8135F05A818E10E5FA91433954D|5F8DAE4B2099A09F9BDDAFD7E9D900F0CE49977C"
}
client c1 -connect ${h1_clearverifiedlst_sock} {

@@ -86,9 +86,7 @@ haproxy h1 -cli {
expect ~ "\\*${testdir}/certs/interCA2_crl_empty.pem"
send "show ssl crl-file \\*${testdir}/certs/interCA2_crl_empty.pem"
-expect ~ "Revoked Certificates:"
+expect ~ "Revoked Certificates:\n.*Serial Number: 1008"
-send "show ssl crl-file \\*${testdir}/certs/interCA2_crl_empty.pem:1"
-expect ~ "Serial Number: 1008"
}
# This connection should still succeed since the transaction was not committed

@@ -1,10 +1,12 @@
#!/bin/sh
+DESTDIR=${DESTDIR:-${PWD}/../vtest/}
+TMPDIR=${TMPDIR:-$(mktemp -d)}
set -eux
-curl -fsSL https://github.com/vtest/VTest2/archive/main.tar.gz -o VTest.tar.gz
+curl -fsSL "https://code.vinyl-cache.org/vtest/VTest2/archive/main.tar.gz" -o "${TMPDIR}/VTest.tar.gz"
-mkdir ../vtest
+mkdir -p "${TMPDIR}/vtest"
-tar xvf VTest.tar.gz -C ../vtest --strip-components=1
+tar xvf ${TMPDIR}/VTest.tar.gz -C "${TMPDIR}/vtest" --strip-components=1
# Special flags due to: https://github.com/vtest/VTest/issues/12
# Note: do not use "make -C ../vtest", otherwise MAKEFLAGS contains "w"
@@ -13,7 +15,7 @@ tar xvf VTest.tar.gz -C ../vtest --strip-components=1
# MFLAGS works on BSD but misses variable definitions on GNU Make.
# Better just avoid the -C and do the cd ourselves then.
-cd ../vtest
+cd "${TMPDIR}/vtest"
set +e
CPUS=${CPUS:-$(nproc 2>/dev/null)}
@@ -28,3 +30,6 @@ if test -f /opt/homebrew/include/pcre2.h; then
else
make -j${CPUS} FLAGS="-O2 -s -Wall"
fi
+mkdir -p "${DESTDIR}"
+cp "${TMPDIR}/vtest/vtest" "${DESTDIR}"

@@ -14,11 +14,6 @@
#include <haproxy/acme-t.h>
-#include <haproxy/cli.h>
-#include <haproxy/cfgparse.h>
-#include <haproxy/errors.h>
-#include <haproxy/jws.h>
#include <haproxy/base64.h>
#include <haproxy/cfgparse.h>
#include <haproxy/cli.h>
@@ -30,6 +25,7 @@
#include <haproxy/pattern.h>
#include <haproxy/sink.h>
#include <haproxy/ssl_ckch.h>
+#include <haproxy/ssl_gencert.h>
#include <haproxy/ssl_sock.h>
#include <haproxy/ssl_utils.h>
#include <haproxy/tools.h>
@@ -157,7 +153,6 @@ enum acme_ret {
ACME_RET_FAIL = 2
};
-static EVP_PKEY *acme_EVP_PKEY_gen(int keytype, int curves, int bits, char **errmsg);
static int acme_start_task(struct ckch_store *store, char **errmsg);
static struct task *acme_scheduler(struct task *task, void *context, unsigned int state);
@@ -398,7 +393,7 @@ static int cfg_parse_acme_kws(char **args, int section_type, struct proxy *curpx
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
-if (alertif_too_many_args(2, file, linenum, args, &err_code))
+if (alertif_too_many_args(1, file, linenum, args, &err_code))
goto out;
ha_free(&cur_acme->account.file);
@@ -415,7 +410,7 @@ static int cfg_parse_acme_kws(char **args, int section_type, struct proxy *curpx
goto out;
}
-if (alertif_too_many_args(2, file, linenum, args, &err_code))
+if (alertif_too_many_args(1, file, linenum, args, &err_code))
goto out;
ha_free(&cur_acme->challenge);
@@ -698,7 +693,7 @@ static int cfg_postsection_acme()
} else {
ha_notice("acme: generate account key '%s' for acme section '%s'.\n", path, cur_acme->name);
-if ((key = acme_EVP_PKEY_gen(cur_acme->key.type, cur_acme->key.curves, cur_acme->key.bits, &errmsg)) == NULL) {
+if ((key = ssl_gen_EVP_PKEY(cur_acme->key.type, cur_acme->key.curves, cur_acme->key.bits, &errmsg)) == NULL) {
ha_alert("acme: %s\n", errmsg);
goto out;
}
@@ -883,7 +878,7 @@ static void acme_ctx_destroy(struct acme_ctx *ctx)
istfree(&auth->auth);
istfree(&auth->chall);
istfree(&auth->token);
-istfree(&auth->token);
+istfree(&auth->dns);
next = auth->next;
free(auth);
auth = next;
@@ -1306,7 +1301,6 @@ int acme_res_chkorder(struct task *task, struct acme_ctx *ctx, char **errmsg)
goto error;
};
-out:
ret = 0;
error:
@@ -1359,11 +1353,11 @@ int acme_req_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
if (acme_http_req(task, ctx, ctx->finalize, HTTP_METH_POST, hdrs, ist2(req_out->area, req_out->data)))
goto error;
ret = 0;
+goto out;
error:
memprintf(errmsg, "couldn't request the finalize URL");
+out:
free_trash_chunk(req_in);
free_trash_chunk(req_out);
free_trash_chunk(csr);
@@ -1415,7 +1409,7 @@ int acme_res_finalize(struct task *task, struct acme_ctx *ctx, char **errmsg)
memprintf(errmsg, "invalid HTTP status code %d when getting Finalize URL", hc->res.status);
goto error;
}
-out:
ret = 0;
error:
@@ -1458,9 +1452,10 @@ int acme_req_challenge(struct task *task, struct acme_ctx *ctx, struct acme_auth
goto error;
ret = 0;
+goto out;
error:
memprintf(errmsg, "couldn't generate the Challenge request");
+out:
free_trash_chunk(req_in);
free_trash_chunk(req_out);
@@ -1576,6 +1571,8 @@ int acme_post_as_get(struct task *task, struct acme_ctx *ctx, struct ist url, ch
ret = 0;
+goto end;
error_jws:
memprintf(errmsg, "couldn't generate the JWS token: %s", errmsg ? *errmsg : "");
goto end;
@@ -1764,7 +1761,6 @@ int acme_res_auth(struct task *task, struct acme_ctx *ctx, struct acme_auth *aut
break;
}
-out:
ret = 0;
error:
@@ -1816,9 +1812,10 @@ int acme_req_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
goto error;
ret = 0;
+goto out;
error:
memprintf(errmsg, "couldn't generate the newOrder request");
+out:
free_trash_chunk(req_in);
free_trash_chunk(req_out);
@@ -1926,7 +1923,6 @@ int acme_res_neworder(struct task *task, struct acme_ctx *ctx, char **errmsg)
goto error;
}
-out:
ret = 0;
error:
@@ -1978,9 +1974,10 @@ int acme_req_account(struct task *task, struct acme_ctx *ctx, int newaccount, ch
goto error;
ret = 0;
+goto out;
error:
memprintf(errmsg, "couldn't generate the newAccount request");
+out:
free_trash_chunk(req_in);
free_trash_chunk(req_out);
@@ -2587,45 +2584,6 @@ error:
}
/* Return a new Generated private key of type <keytype> with <bits> and <curves> */
static EVP_PKEY *acme_EVP_PKEY_gen(int keytype, int curves, int bits, char **errmsg)
{
EVP_PKEY_CTX *pkey_ctx = NULL;
EVP_PKEY *pkey = NULL;
if ((pkey_ctx = EVP_PKEY_CTX_new_id(keytype, NULL)) == NULL) {
memprintf(errmsg, "%sCan't generate a private key.\n", errmsg && *errmsg ? *errmsg : "");
goto err;
}
if (EVP_PKEY_keygen_init(pkey_ctx) <= 0) {
memprintf(errmsg, "%sCan't generate a private key.\n", errmsg && *errmsg ? *errmsg : "");
goto err;
}
if (keytype == EVP_PKEY_EC) {
if (EVP_PKEY_CTX_set_ec_paramgen_curve_nid(pkey_ctx, curves) <= 0) {
memprintf(errmsg, "%sCan't set the curves on the new private key.\n", errmsg && *errmsg ? *errmsg : "");
goto err;
}
} else if (keytype == EVP_PKEY_RSA) {
if (EVP_PKEY_CTX_set_rsa_keygen_bits(pkey_ctx, bits) <= 0) {
memprintf(errmsg, "%sCan't set the bits on the new private key.\n", errmsg && *errmsg ? *errmsg : "");
goto err;
}
}
if (EVP_PKEY_keygen(pkey_ctx, &pkey) <= 0) {
memprintf(errmsg, "%sCan't generate a private key.\n", errmsg && *errmsg ? *errmsg : "");
goto err;
}
err:
EVP_PKEY_CTX_free(pkey_ctx);
return pkey;
}
/*
* Generate a temporary expired X509 or reuse the one generated.
* Use tmp_pkey to generate
@@ -2634,81 +2592,18 @@ err:
*/
X509 *acme_gen_tmp_x509()
{
X509 *newcrt = NULL;
X509_NAME *name;
const EVP_MD *digest = NULL;
CONF *ctmp = NULL;
int key_type;
EVP_PKEY *pkey = tmp_pkey;
if (tmp_x509) {
X509_up_ref(tmp_x509);
return tmp_x509;
}
if (!tmp_pkey)
goto mkcert_error;
/* Create the certificate */
if (!(newcrt = X509_new()))
goto mkcert_error;
/* Set version number for the certificate (X509v3) and the serial
* number */
if (X509_set_version(newcrt, 2L) != 1)
goto mkcert_error;
/* Generate an expired certificate */
if (!X509_gmtime_adj(X509_getm_notBefore(newcrt), (long)-60*60*48) ||
!X509_gmtime_adj(X509_getm_notAfter(newcrt),(long)-60*60*24))
goto mkcert_error;
/* set public key in the certificate */
if (X509_set_pubkey(newcrt, pkey) != 1)
goto mkcert_error;
if ((name = X509_NAME_new()) == NULL)
goto mkcert_error;
/* Set the subject name using the servername but the CN */
if (X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC, (unsigned char *)"expired",
-1, -1, 0) != 1) {
X509_NAME_free(name);
goto mkcert_error;
}
if (X509_set_subject_name(newcrt, name) != 1) {
X509_NAME_free(name);
goto mkcert_error;
}
/* Set issuer name as itself */
if (X509_set_issuer_name(newcrt, name) != 1)
goto mkcert_error;
X509_NAME_free(name);
/* Autosign the certificate with the private key */
key_type = EVP_PKEY_base_id(pkey);
if (key_type == EVP_PKEY_RSA)
digest = EVP_sha256();
else if (key_type == EVP_PKEY_EC)
digest = EVP_sha256();
else
goto mkcert_error;
if (!(X509_sign(newcrt, pkey, digest)))
goto mkcert_error;
tmp_x509 = newcrt;
return newcrt;
mkcert_error:
if (ctmp) NCONF_free(ctmp);
if (newcrt) X509_free(newcrt);
return NULL; return NULL;
} tmp_x509 = ssl_gen_x509(tmp_pkey);
return tmp_x509;
}
/* /*
* Generate a temporary RSA2048 pkey or reuse the one generated. * Generate a temporary RSA2048 pkey or reuse the one generated.
@ -2723,7 +2618,7 @@ EVP_PKEY *acme_gen_tmp_pkey()
return tmp_pkey; return tmp_pkey;
} }
tmp_pkey = acme_EVP_PKEY_gen(EVP_PKEY_RSA, 0, 2048, NULL); tmp_pkey = ssl_gen_EVP_PKEY(EVP_PKEY_RSA, 0, 2048, NULL);
return tmp_pkey; return tmp_pkey;
} }
@ -2782,7 +2677,7 @@ static int acme_start_task(struct ckch_store *store, char **errmsg)
ctx->retries = ACME_RETRY; ctx->retries = ACME_RETRY;
if (!cfg->reuse_key) { if (!cfg->reuse_key) {
if ((pkey = acme_EVP_PKEY_gen(cfg->key.type, cfg->key.curves, cfg->key.bits, errmsg)) == NULL) if ((pkey = ssl_gen_EVP_PKEY(cfg->key.type, cfg->key.curves, cfg->key.bits, errmsg)) == NULL)
goto err; goto err;
EVP_PKEY_free(newstore->data->key); EVP_PKEY_free(newstore->data->key);
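The refactored acme_gen_tmp_x509()/acme_gen_tmp_pkey() above keep the lazy "generate once, up-ref on reuse" shape while delegating the actual generation to the shared ssl_gen_* helpers. A minimal stand-alone sketch of that caching pattern, with an illustrative refcounted struct standing in for the OpenSSL object (names are not HAProxy's):

```c
#include <assert.h>
#include <stdlib.h>

/* stand-in for an OpenSSL object carrying its own reference count */
struct obj {
	int refcnt;
};

static struct obj *cached;	/* plays the role of tmp_x509 / tmp_pkey */

static struct obj *obj_new(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (o)
		o->refcnt = 1;
	return o;
}

static void obj_up_ref(struct obj *o)
{
	o->refcnt++;
}

/* lazily create the shared object, bump the refcount on reuse:
 * same shape as acme_gen_tmp_x509() after the refactoring.
 */
static struct obj *get_tmp_obj(void)
{
	if (cached) {
		obj_up_ref(cached);
		return cached;
	}
	cached = obj_new();
	return cached;
}
```

Every caller gets its own reference and can release it independently; only the first call pays the generation cost.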

@@ -188,13 +188,30 @@ int cfg_parse_rule_set_timeout(const char **args, int idx, struct act_rule *rule
 	const char *res;
 	const char *timeout_name = args[idx++];
-	if (strcmp(timeout_name, "server") == 0) {
+	if (strcmp(timeout_name, "connect") == 0) {
+		if (!(px->cap & PR_CAP_BE)) {
+			memprintf(err, "'%s' has no backend capability", px->id);
+			return -1;
+		}
+		rule->arg.timeout.type = ACT_TIMEOUT_CONNECT;
+	}
+	else if (strcmp(timeout_name, "server") == 0) {
 		if (!(px->cap & PR_CAP_BE)) {
 			memprintf(err, "'%s' has no backend capability", px->id);
 			return -1;
 		}
 		rule->arg.timeout.type = ACT_TIMEOUT_SERVER;
 	}
+	else if (strcmp(timeout_name, "queue") == 0) {
+		if (!(px->cap & PR_CAP_BE)) {
+			memprintf(err, "'%s' has no backend capability", px->id);
+			return -1;
+		}
+		rule->arg.timeout.type = ACT_TIMEOUT_QUEUE;
+	}
+	else if (strcmp(timeout_name, "tarpit") == 0) {
+		rule->arg.timeout.type = ACT_TIMEOUT_TARPIT;
+	}
 	else if (strcmp(timeout_name, "tunnel") == 0) {
 		if (!(px->cap & PR_CAP_BE)) {
 			memprintf(err, "'%s' has no backend capability", px->id);

@@ -211,7 +228,7 @@ int cfg_parse_rule_set_timeout(const char **args, int idx, struct act_rule *rule
 	}
 	else {
 		memprintf(err,
-			  "'set-timeout' rule supports 'server'/'tunnel'/'client' (got '%s')",
+			  "'set-timeout' rule supports 'client'/'connect'/'queue'/'server'/'tarpit'/'tunnel' (got '%s')",
 			  timeout_name);
 		return -1;
 	}
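The parser above adds 'connect', 'queue' and 'tarpit' to the per-stream set-timeout action. A hypothetical configuration sketch (backend and server names invented, header/path conditions purely illustrative) of how these new variants could be exercised:

```
backend app
    mode http
    timeout connect 5s
    timeout queue 10s

    # shorten the connect timeout for requests flagged as interactive
    http-request set-timeout connect 2s if { hdr(x-interactive) -m found }

    # let slow batch clients wait longer in the queue
    http-request set-timeout queue 30s if { path_beg /batch }

    server s1 192.0.2.10:80
```

Both rules override, for the current stream only, the static values from the `timeout connect`/`timeout queue` directives.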

@@ -659,8 +659,20 @@ void activity_count_runtime(uint32_t run_time)
 	if (!(_HA_ATOMIC_LOAD(&th_ctx->flags) & TH_FL_TASK_PROFILING)) {
 		if (unlikely((profiling & HA_PROF_TASKS_MASK) == HA_PROF_TASKS_ON ||
 		             ((profiling & HA_PROF_TASKS_MASK) == HA_PROF_TASKS_AON &&
-		             swrate_avg(run_time, TIME_STATS_SAMPLES) >= up)))
+		             swrate_avg(run_time, TIME_STATS_SAMPLES) >= up))) {
+			if (profiling & HA_PROF_TASKS_LOCK)
+				_HA_ATOMIC_OR(&th_ctx->flags, TH_FL_TASK_PROFILING_L);
+			else
+				_HA_ATOMIC_AND(&th_ctx->flags, ~TH_FL_TASK_PROFILING_L);
+			if (profiling & HA_PROF_TASKS_MEM)
+				_HA_ATOMIC_OR(&th_ctx->flags, TH_FL_TASK_PROFILING_M);
+			else
+				_HA_ATOMIC_AND(&th_ctx->flags, ~TH_FL_TASK_PROFILING_M);
 			_HA_ATOMIC_OR(&th_ctx->flags, TH_FL_TASK_PROFILING);
+		}
 	} else {
 		if (unlikely((profiling & HA_PROF_TASKS_MASK) == HA_PROF_TASKS_OFF ||
 		             ((profiling & HA_PROF_TASKS_MASK) == HA_PROF_TASKS_AOFF &&

@@ -692,26 +704,41 @@ static int cfg_parse_prof_memory(char **args, int section_type, struct proxy *cu
 }
 #endif // USE_MEMORY_PROFILING
-/* config parser for global "profiling.tasks", accepts "on" or "off" */
+/* config parser for global "profiling.tasks", accepts "on", "off", "auto",
+ * "lock", "no-lock", "memory", "no-memory".
+ */
 static int cfg_parse_prof_tasks(char **args, int section_type, struct proxy *curpx,
                                 const struct proxy *defpx, const char *file, int line,
                                 char **err)
 {
-	if (too_many_args(1, args, err, NULL))
-		return -1;
-	if (strcmp(args[1], "on") == 0) {
+	int arg;
+	for (arg = 1; *args[arg]; arg++) {
+		if (strcmp(args[arg], "on") == 0) {
 			profiling = (profiling & ~HA_PROF_TASKS_MASK) | HA_PROF_TASKS_ON;
 			HA_ATOMIC_STORE(&prof_task_start_ns, now_ns);
 		}
-	else if (strcmp(args[1], "auto") == 0) {
+		else if (strcmp(args[arg], "auto") == 0) {
 			profiling = (profiling & ~HA_PROF_TASKS_MASK) | HA_PROF_TASKS_AOFF;
 			HA_ATOMIC_STORE(&prof_task_start_ns, now_ns);
 		}
-	else if (strcmp(args[1], "off") == 0)
+		else if (strcmp(args[arg], "off") == 0)
 			profiling = (profiling & ~HA_PROF_TASKS_MASK) | HA_PROF_TASKS_OFF;
-	else {
-		memprintf(err, "'%s' expects either 'on', 'auto', or 'off' but got '%s'.", args[0], args[1]);
+		else if (strcmp(args[arg], "lock") == 0)
+			profiling |= HA_PROF_TASKS_LOCK;
+		else if (strcmp(args[arg], "no-lock") == 0)
+			profiling &= ~HA_PROF_TASKS_LOCK;
+		else if (strcmp(args[arg], "memory") == 0)
+			profiling |= HA_PROF_TASKS_MEM;
+		else if (strcmp(args[arg], "no-memory") == 0)
+			profiling &= ~HA_PROF_TASKS_MEM;
+		else
+			break;
+	}
+	/* either no arg or invalid arg */
+	if (arg == 1 || *args[arg]) {
+		memprintf(err, "'%s' expects a combination of either 'on', 'auto', 'off', 'lock', 'no-lock', 'memory', or 'no-memory', but got '%s'.", args[0], args[arg]);
 		return -1;
 	}
 	return 0;

@@ -720,6 +747,8 @@ static int cfg_parse_prof_tasks(char **args, int section_type, struct proxy *cur
 /* parse a "set profiling" command. It always returns 1. */
 static int cli_parse_set_profiling(char **args, char *payload, struct appctx *appctx, void *private)
 {
+	int arg;
 	if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
 		return 1;

@@ -765,7 +794,8 @@ static int cli_parse_set_profiling(char **args, char *payload, struct appctx *ap
 	if (strcmp(args[2], "tasks") != 0)
 		return cli_err(appctx, "Expects either 'tasks' or 'memory'.\n");
-	if (strcmp(args[3], "on") == 0) {
+	for (arg = 3; *args[arg]; arg++) {
+		if (strcmp(args[arg], "on") == 0) {
 			unsigned int old = profiling;
 			int i;

@@ -787,7 +817,7 @@ static int cli_parse_set_profiling(char **args, char *payload, struct appctx *ap
 				HA_ATOMIC_STORE(&sched_activity[i].caller, NULL);
 			}
 		}
-	else if (strcmp(args[3], "auto") == 0) {
+		else if (strcmp(args[arg], "auto") == 0) {
 			unsigned int old = profiling;
 			unsigned int new;

@@ -801,7 +831,7 @@ static int cli_parse_set_profiling(char **args, char *payload, struct appctx *ap
 			HA_ATOMIC_STORE(&prof_task_start_ns, now_ns);
 			HA_ATOMIC_STORE(&prof_task_stop_ns, 0);
 		}
-	else if (strcmp(args[3], "off") == 0) {
+		else if (strcmp(args[arg], "off") == 0) {
 			unsigned int old = profiling;
 			while (!_HA_ATOMIC_CAS(&profiling, &old, (old & ~HA_PROF_TASKS_MASK) | HA_PROF_TASKS_OFF))
 				;

@@ -809,8 +839,21 @@ static int cli_parse_set_profiling(char **args, char *payload, struct appctx *ap
 			if (HA_ATOMIC_LOAD(&prof_task_start_ns))
 				HA_ATOMIC_STORE(&prof_task_stop_ns, now_ns);
 		}
+		else if (strcmp(args[arg], "lock") == 0)
+			HA_ATOMIC_OR(&profiling, HA_PROF_TASKS_LOCK);
+		else if (strcmp(args[arg], "no-lock") == 0)
+			HA_ATOMIC_AND(&profiling, ~HA_PROF_TASKS_LOCK);
+		else if (strcmp(args[arg], "memory") == 0)
+			HA_ATOMIC_OR(&profiling, HA_PROF_TASKS_MEM);
+		else if (strcmp(args[arg], "no-memory") == 0)
+			HA_ATOMIC_AND(&profiling, ~HA_PROF_TASKS_MEM);
 		else
-		return cli_err(appctx, "Expects 'on', 'auto', or 'off'.\n");
+			break; // unknown arg
+	}
+	/* either no arg or invalid one */
+	if (arg == 3 || *args[arg])
+		return cli_err(appctx, "Expects a combination of either 'on', 'auto', 'off', 'lock', 'no-lock', 'memory' or 'no-memory'.\n");
 	return 1;
 }
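Both parsers above switch from a single-keyword check to a loop that accumulates flag bits and stops at the first unknown word, then rejects the whole line if nothing was consumed or something was left over. A stand-alone sketch of that parsing pattern, using illustrative flag names rather than HAProxy's real HA_PROF_* constants:

```c
#include <assert.h>
#include <string.h>

#define PROF_ON   0x1
#define PROF_LOCK 0x2
#define PROF_MEM  0x4

/* Walk a list of keywords the way cfg_parse_prof_tasks() now does:
 * accumulate flag bits, break on the first unknown word, then reject
 * the line if no argument at all (or an invalid one) was seen.
 * The args array uses empty strings as terminators, like HAProxy's.
 */
static int parse_prof_args(const char **args, unsigned int *profiling)
{
	int arg;

	for (arg = 0; args[arg] && *args[arg]; arg++) {
		if (strcmp(args[arg], "on") == 0)
			*profiling |= PROF_ON;
		else if (strcmp(args[arg], "off") == 0)
			*profiling &= ~PROF_ON;
		else if (strcmp(args[arg], "lock") == 0)
			*profiling |= PROF_LOCK;
		else if (strcmp(args[arg], "no-lock") == 0)
			*profiling &= ~PROF_LOCK;
		else if (strcmp(args[arg], "memory") == 0)
			*profiling |= PROF_MEM;
		else if (strcmp(args[arg], "no-memory") == 0)
			*profiling &= ~PROF_MEM;
		else
			break;	/* unknown keyword */
	}
	/* error if no argument at all, or if we stopped before the end */
	if (arg == 0 || (args[arg] && *args[arg]))
		return -1;
	return 0;
}
```

Because later keywords simply set or clear bits over earlier ones, "on lock no-lock" is accepted and ends with the lock bit cleared, which matches the last-one-wins behavior of the loop.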

@@ -505,7 +505,7 @@ size_t appctx_htx_rcv_buf(struct appctx *appctx, struct buffer *buf, size_t coun
 	ret = appctx_htx->data;
 	buf_htx = htx_from_buf(buf);
-	if (htx_is_empty(buf_htx) && htx_used_space(appctx_htx) <= count) {
+	if (b_size(&appctx->outbuf) == b_size(buf) && htx_is_empty(buf_htx) && htx_used_space(appctx_htx) <= count) {
 		htx_to_buf(buf_htx, buf);
 		htx_to_buf(appctx_htx, &appctx->outbuf);
 		b_xfer(buf, &appctx->outbuf, b_data(&appctx->outbuf));

@@ -848,6 +848,11 @@ struct task *task_run_applet(struct task *t, void *context, unsigned int state)
 	input = applet_output_data(app);
 	output = co_data(oc);
+	/* Don't call I/O handler if the applet was shut (release callback was
+	 * already called)
+	 */
+	if (!se_fl_test(app->sedesc, SE_FL_SHR) || !se_fl_test(app->sedesc, SE_FL_SHW))
 		app->applet->fct(app);
 	TRACE_POINT(APPLET_EV_PROCESS, app);

@@ -945,6 +950,10 @@ struct task *task_process_applet(struct task *t, void *context, unsigned int sta
 	applet_need_more_data(app);
 	applet_have_no_more_data(app);
+	/* Don't call I/O handler if the applet was shut (release callback was
+	 * already called)
+	 */
+	if (!applet_fl_test(app, APPCTX_FL_SHUTDOWN))
 		app->applet->fct(app);
 	TRACE_POINT(APPLET_EV_PROCESS, app);
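The applet fix above guards the I/O handler call behind a shutdown check, because once the release callback has run, invoking the handler again would touch freed state. A simplified stand-alone model of that guard (struct, flag, and function names are illustrative, not HAProxy's):

```c
#include <assert.h>

#define FL_SHUTDOWN 0x1

/* toy applet: counts I/O handler invocations */
struct applet_sim {
	unsigned int flags;
	int io_calls;
};

static void io_handler(struct applet_sim *a)
{
	a->io_calls++;	/* would touch applet state in real life */
}

/* scheduler entry point: skip the handler once the applet was shut */
static void run_applet(struct applet_sim *a)
{
	/* don't call the I/O handler if the release callback already ran */
	if (!(a->flags & FL_SHUTDOWN))
		io_handler(a);
}

/* release callback: marks the applet as shut */
static void release_applet(struct applet_sim *a)
{
	a->flags |= FL_SHUTDOWN;
}
```

Wake-ups arriving after release become harmless no-ops instead of use-after-free candidates.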

@@ -59,6 +59,7 @@
 #include <haproxy/task.h>
 #include <haproxy/ticks.h>
 #include <haproxy/time.h>
+#include <haproxy/tools.h>
 #include <haproxy/trace.h>
 #define TRACE_SOURCE &trace_strm

@@ -576,9 +577,20 @@ struct server *get_server_rnd(struct stream *s, const struct server *avoid)
 		/* compare the new server to the previous best choice and pick
 		 * the one with the least currently served requests.
 		 */
-		if (prev && prev != curr &&
-		    curr->served * prev->cur_eweight > prev->served * curr->cur_eweight)
+		if (prev && prev != curr) {
+			uint64_t wcurr = (uint64_t)curr->served * prev->cur_eweight;
+			uint64_t wprev = (uint64_t)prev->served * curr->cur_eweight;
+			if (wcurr > wprev)
 				curr = prev;
+			else if (wcurr == wprev && curr->counters.shared.tg && prev->counters.shared.tg) {
+				/* same load: pick the lowest weighted request rate */
+				wcurr = read_freq_ctr_period_estimate(&curr->counters.shared.tg[tgid - 1]->sess_per_sec, MS_TO_TICKS(1000));
+				wprev = read_freq_ctr_period_estimate(&prev->counters.shared.tg[tgid - 1]->sess_per_sec, MS_TO_TICKS(1000));
+				if (wprev * curr->cur_eweight < wcurr * prev->cur_eweight)
+					curr = prev;
+			}
+		}
 	} while (--draws > 0);
 	/* if the selected server is full, pretend we have none so that we reach

@@ -2047,6 +2059,26 @@ int connect_server(struct stream *s)
 				srv_conn->sni_hash = ssl_sock_sni_hash(sni);
 			}
 		}
+#if defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
+		/* Delay mux initialization if SSL and ALPN/NPN is set
+		 * and server cache is not yet populated. Note that in
+		 * TCP mode this check is ignored as only mux-pt is
+		 * available.
+		 *
+		 * This check must be performed before conn_prepare()
+		 * to ensure consistency accross the whole stack, in
+		 * particular for QUIC between quic-conn and mux layer.
+		 */
+		if (IS_HTX_STRM(s) && srv->use_ssl &&
+		    (srv->ssl_ctx.alpn_str || srv->ssl_ctx.npn_str)) {
+			HA_RWLOCK_RDLOCK(SERVER_LOCK, &srv->path_params.param_lock);
+			if (srv->path_params.nego_alpn[0] == 0)
+				may_start_mux_now = 0;
+			HA_RWLOCK_RDUNLOCK(SERVER_LOCK, &srv->path_params.param_lock);
+		}
+#endif /* TLSEXT_TYPE_application_layer_protocol_negotiation */
 #endif /* USE_OPENSSL */
 		if (conn_prepare(srv_conn, proto, srv->xprt)) {

@@ -2079,21 +2111,6 @@ int connect_server(struct stream *s)
 		}
 		srv_conn->ctx = s->scb;
-#if defined(USE_OPENSSL) && defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
-		/* Delay mux initialization if SSL and ALPN/NPN is set. Note
-		 * that this is skipped in TCP mode as we only want mux-pt
-		 * anyway.
-		 */
-		if (srv) {
-			HA_RWLOCK_RDLOCK(SERVER_LOCK, &srv->path_params.param_lock);
-			if (IS_HTX_STRM(s) && srv->use_ssl &&
-			    (srv->ssl_ctx.alpn_str || srv->ssl_ctx.npn_str) &&
-			    srv->path_params.nego_alpn[0] == 0)
-				may_start_mux_now = 0;
-			HA_RWLOCK_RDUNLOCK(SERVER_LOCK, &srv->path_params.param_lock);
-		}
-#endif
 		/* process the case where the server requires the PROXY protocol to be sent */
 		srv_conn->send_proxy_ofs = 0;

@@ -2187,7 +2204,7 @@ int connect_server(struct stream *s)
 	 */
 	if (may_start_mux_now) {
 		const struct mux_ops *alt_mux =
-			likely(!(s->flags & SF_WEBSOCKET)) ? NULL : srv_get_ws_proto(srv);
+			likely(!(s->flags & SF_WEBSOCKET) || !srv) ? NULL : srv_get_ws_proto(srv);
 		if (conn_install_mux_be(srv_conn, s->scb, s->sess, alt_mux) < 0) {
 			conn_full_close(srv_conn);
 			return SF_ERR_INTERNAL;

@@ -2242,14 +2259,14 @@ int connect_server(struct stream *s)
 #endif
 	/* set connect timeout */
-	s->conn_exp = tick_add_ifset(now_ms, s->be->timeout.connect);
+	s->conn_exp = tick_add_ifset(now_ms, s->connect_timeout);
 	if (srv) {
 		int count;
 		s->flags |= SF_CURR_SESS;
 		count = _HA_ATOMIC_ADD_FETCH(&srv->cur_sess, 1);
-		HA_ATOMIC_UPDATE_MAX(&srv->counters.cur_sess_max, count);
+		COUNTERS_UPDATE_MAX(&srv->counters.cur_sess_max, count);
 		if (s->be->lbprm.server_take_conn)
 			s->be->lbprm.server_take_conn(srv);
 	}

@@ -2365,7 +2382,7 @@ int srv_redispatch_connect(struct stream *s)
 		return 1;
 	case SRV_STATUS_QUEUED:
-		s->conn_exp = tick_add_ifset(now_ms, s->be->timeout.queue);
+		s->conn_exp = tick_add_ifset(now_ms, s->queue_timeout);
 		s->scb->state = SC_ST_QUE;
 		/* handle the unlikely event where we added to the server's

@@ -3044,6 +3061,27 @@ int be_downtime(struct proxy *px) {
 	return ns_to_sec(now_ns) - px->last_change + px->down_time;
 }
+/* Checks if <px> backend supports the addition of servers at runtime. Either a
+ * backend or a defaults proxy are supported. If proxy is incompatible, <msg>
+ * will be allocated to contain a textual explaination.
+ */
+int be_supports_dynamic_srv(struct proxy *px, char **msg)
+{
+	if (px->lbprm.algo && !(px->lbprm.algo & BE_LB_PROP_DYN)) {
+		memprintf(msg, "%s '%s' uses a non dynamic load balancing method",
+		          proxy_cap_str(px->cap), px->id);
+		return 0;
+	}
+	if (px->mode == PR_MODE_SYSLOG) {
+		memprintf(msg, "%s '%s' uses mode log",
+		          proxy_cap_str(px->cap), px->id);
+		return 0;
+	}
+	return 1;
+}
 /*
  * This function returns a string containing the balancing
  * mode of the proxy in a format suitable for stats.

@@ -3708,6 +3746,42 @@ smp_fetch_srv_uweight(const struct arg *args, struct sample *smp, const char *kw
 	return 1;
 }
+static int
+smp_fetch_be_connect_timeout(const struct arg *args, struct sample *smp, const char *km, void *private)
+{
+	struct proxy *px = NULL;
+	if (smp->strm)
+		px = smp->strm->be;
+	else if (obj_type(smp->sess->origin) == OBJ_TYPE_CHECK)
+		px = __objt_check(smp->sess->origin)->proxy;
+	if (!px)
+		return 0;
+	smp->flags = SMP_F_VOL_TXN;
+	smp->data.type = SMP_T_SINT;
+	smp->data.u.sint = TICKS_TO_MS(px->timeout.connect);
+	return 1;
+}
+static int
+smp_fetch_be_queue_timeout(const struct arg *args, struct sample *smp, const char *km, void *private)
+{
+	struct proxy *px = NULL;
+	if (smp->strm)
+		px = smp->strm->be;
+	else if (obj_type(smp->sess->origin) == OBJ_TYPE_CHECK)
+		px = __objt_check(smp->sess->origin)->proxy;
+	if (!px)
+		return 0;
+	smp->flags = SMP_F_VOL_TXN;
+	smp->data.type = SMP_T_SINT;
+	smp->data.u.sint = TICKS_TO_MS(px->timeout.queue);
+	return 1;
+}
 static int
 smp_fetch_be_server_timeout(const struct arg *args, struct sample *smp, const char *km, void *private)
 {

@@ -3726,6 +3800,24 @@ smp_fetch_be_server_timeout(const struct arg *args, struct sample *smp, const ch
 	return 1;
 }
+static int
+smp_fetch_be_tarpit_timeout(const struct arg *args, struct sample *smp, const char *km, void *private)
+{
+	struct proxy *px = NULL;
+	if (smp->strm)
+		px = smp->strm->be;
+	else if (obj_type(smp->sess->origin) == OBJ_TYPE_CHECK)
+		px = __objt_check(smp->sess->origin)->proxy;
+	if (!px)
+		return 0;
+	smp->flags = SMP_F_VOL_TXN;
+	smp->data.type = SMP_T_SINT;
+	smp->data.u.sint = TICKS_TO_MS(px->timeout.tarpit);
+	return 1;
+}
 static int
 smp_fetch_be_tunnel_timeout(const struct arg *args, struct sample *smp, const char *km, void *private)
 {

@@ -3826,8 +3918,11 @@ static struct sample_fetch_kw_list smp_kws = {ILH, {
 	{ "be_conn_free",       smp_fetch_be_conn_free,       ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
 	{ "be_id",              smp_fetch_be_id,              0,          NULL, SMP_T_SINT, SMP_USE_BKEND, },
 	{ "be_name",            smp_fetch_be_name,            0,          NULL, SMP_T_STR,  SMP_USE_BKEND, },
+	{ "be_connect_timeout", smp_fetch_be_connect_timeout, 0,          NULL, SMP_T_SINT, SMP_USE_BKEND, },
+	{ "be_queue_timeout",   smp_fetch_be_queue_timeout,   0,          NULL, SMP_T_SINT, SMP_USE_BKEND, },
 	{ "be_server_timeout",  smp_fetch_be_server_timeout,  0,          NULL, SMP_T_SINT, SMP_USE_BKEND, },
 	{ "be_sess_rate",       smp_fetch_be_sess_rate,       ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+	{ "be_tarpit_timeout",  smp_fetch_be_tarpit_timeout,  0,          NULL, SMP_T_SINT, SMP_USE_BKEND, },
 	{ "be_tunnel_timeout",  smp_fetch_be_tunnel_timeout,  0,          NULL, SMP_T_SINT, SMP_USE_BKEND, },
 	{ "connslots",          smp_fetch_connslots,          ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
 	{ "nbsrv",              smp_fetch_nbsrv,              ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
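The get_server_rnd() change above compares two served/weight ratios by cross multiplication after widening to uint64_t, so the products cannot wrap when both counters carry large 32-bit values. A minimal sketch of that comparison (function and parameter names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Is served1/eweight1 > served2/eweight2, without dividing?
 * Cross-multiply in 64 bits: the 32x32-bit products fit in uint64_t,
 * whereas computing them in 32-bit arithmetic could silently wrap and
 * invert the comparison.
 */
static int first_is_heavier(uint32_t served1, uint32_t eweight1,
                            uint32_t served2, uint32_t eweight2)
{
	uint64_t w1 = (uint64_t)served1 * eweight2;
	uint64_t w2 = (uint64_t)served2 * eweight1;

	return w1 > w2;
}
```

Avoiding the division also sidesteps any divide-by-zero concern for a zero weight.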

@@ -348,7 +348,7 @@ size_t b_xfer(struct buffer *dst, struct buffer *src, size_t count)
 	if (ret > count)
 		ret = count;
-	else if (!b_data(dst)) {
+	else if (!b_data(dst) && b_size(dst) == b_size(src)) {
 		/* zero copy is possible by just swapping buffers */
 		struct buffer tmp = *dst;
 		*dst = *src;