In many places, we rely on the global.tune.bufsize value instead of using
the actual buffer size. For now, it is not a problem. But if we want to be
able to deal with buffers of different sizes, it is better to reduce
dependencies on the global value as much as possible. Most of the time, we
can use the b_size() or c_size() functions. The main change is performed on
the error snapshot, where the buffer size is now stored in the
error_snapshot structure.
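As a minimal sketch of the idea (the computation below is illustrative),
sizes should be derived from the buffer or channel itself rather than from
the tunable:

    /* before: implicitly assumes every buffer is global.tune.bufsize bytes */
    room = global.tune.bufsize - global.tune.maxrewrite;

    /* after: ask the buffer (or the channel) for its own size */
    room = b_size(buf) - global.tune.maxrewrite; /* or c_size(chn) on a channel */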
Fix two issues in JWE token processing:
- Replace memcmp() with CRYPTO_memcmp() for authentication tag
verification in build_and_check_tag() to prevent timing
side-channel attacks. Also add a tag length validation check
before the comparison to avoid a potential buffer over-read when
the decoded tag length doesn't match the expected HMAC half
(a short sketch of this check is shown after this list).
- Remove unreachable break statement after JWE_ALG_A256GCMKW case
in decrypt_cek_aesgcmkw().
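A hedged sketch of the resulting verification (only CRYPTO_memcmp() and
build_and_check_tag() come from the patch; the variable names are
illustrative):

    /* reject early if the decoded tag does not have the expected length,
     * then compare in constant time to avoid a timing oracle
     */
    if (tag_len != expected_len)
        return 0;
    if (CRYPTO_memcmp(tag, expected_tag, expected_len) != 0)
        return 0;
    return 1;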
During the message analysis, we must take care to stop waiting for the
message body if an error was reported on the client side, or if an abort
was detected while abort-on-close is configured (which is now the default).
The bug was introduced when the "wait-for-body" action was added. Only the
producer state was tested. So when we were waiting for the request payload,
there was no issue. But when we were waiting for the response payload, an
error or abort on the client side was not taken into account.
This patch should be backported to all stable versions.
When the hot list was removed in 3.0, a regression was introduced.
Theoretically, it is possible to overwrite data in a block when new data
are appended. It happens when data are copied. If the data size is a
multiple of the block size, all data are copied and the last used block is
full. But instead of saving a reference to the next block as the restart
point for the next copies, we keep a reference to the last full one. On the
next read, we reuse this block and the old data are overwritten. To hit the
bug, no new blocks should be reserved between the two data copy attempts.
Concretely, for now, it does not seem possible to hit the bug. But with a
block size set to 1024, if more than 1024 bytes are reserved, with a first
copy of 1024 bytes and a second one with the remaining data, data in the
first block will be overwritten.
So to fix the bug, the reference to the last block used to write data
(which is in fact the next one to use for the next copy) is only updated
when a block is full. In that case the next block is used.
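A rough sketch of the fixed logic (all names here are illustrative, not the
real structures):

    copy = MIN(to_copy, blk_size - blk->fill);
    memcpy(blk->area + blk->fill, data, copy);
    blk->fill += copy;

    /* only move the restart point once the current block is completely
     * full, so the next copy appends after the existing data instead of
     * rewriting this block from its start
     */
    if (blk->fill == blk_size)
        last_blk = blk->next;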
This patch should be backported as far as 3.0 after a period of observation.
Since the legacy HTTP code was removed, the global and thread-local buffers,
tmpbuf and zbuf, are no longer used. So let's remove them.
This could theoretically be backported to all supported versions. But it
would at least be good to do so as far as 3.2, as it saves 2 buffers per
thread.
This bug is quite old. When the length of the first DATA block is computed,
the offset is used instead of the block length minus the offset. It is only
used with random forwarding, and there is a test just after that prevents
any issue, so there is no visible effect.
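For clarity, the intended computation looks like this (variable names are
illustrative):

    /* length of the first DATA block, counted from the offset */
    len = blk_len - offset;    /* and not: len = offset; */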
It could be backported to all stable versions.
The previous fix was wrong. When shut flags are tested for legacy applets,
to know whether the I/O handler can be called or not, we must be sure that
shutdowns for reads and for writes are both set before skipping the applet
I/O handler.
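A hedged sketch of the corrected condition (the flag names below are
illustrative, not the actual applet flags):

    /* skip the I/O handler only when BOTH directions are shut; if a single
     * side is still open, the applet may have remaining work to do
     */
    if ((flags & (APP_SHUT_RD | APP_SHUT_WR)) == (APP_SHUT_RD | APP_SHUT_WR))
        return;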
This bug introduced a regression, at least for the peer applet and for the
DNS applet.
This patch must be backported with abc1947e1 ("BUG/MEDIUM: applet: Fix test
on shut flags for legacy applets"), so as far as 3.0.
If a producer tries to send data via the fast-forward mechanism while the
message is in an unexpected state from the consumer's point of view, the
fast-forward is now disabled. Concretely, we now make sure the message is
in its data/tunnel stage before proceeding in h1_nego_ff().
By disabling fast-forward in that case, we will automatically fall back on
the regular sending path and be able to handle the error in h1_snd_buf().
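A hedged sketch of the added guard, assuming the usual H1 parser state
names H1_MSG_DATA/H1_MSG_TUNNEL (the return convention is simplified):

    /* refuse fast-forwarding unless the message reached its payload stage;
     * the caller then falls back on the regular snd_buf() path where the
     * error can be handled
     */
    if (h1m->state != H1_MSG_DATA && h1m->state != H1_MSG_TUNNEL)
        return 0;   /* fast-forward not usable right now */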
This patch should be backported as far as 3.0.
It is illegal to send data when the stream is already closed. The case is
properly handled when data are sent via snd_buf(), by draining the data.
But it was still possible to process such data via nego_ff().
So, in this patch, for both the H2 and QUIC multiplexers, the fast-forward
is disabled when the stream is closed, and nothing is performed in that
case. This way, we automatically fall back on the regular sending path and
can drain data in snd_buf().
Thanks to Mike Walker for his investigation on the subject.
This patch should be backported as far as 3.0.
HTTP/3 CONNECT transcoding is not properly implemented on the frontend
side. Neither the application tunnel mode nor the extended CONNECT
mechanism is currently functional.
Clarify this situation by rejecting any CONNECT attempt on the frontend
side. The stream is thus now closed via a RESET_STREAM with error code
REQUEST_REJECTED.
This should be backported to all stable versions.
This reverts commit 235e8f1afd.
Prior to the above commit, snd_buf callback for QUIC MUX was able to
deal with data even after stream closure. The excess was simply
discarded, as no STREAM frame can be emitted after FIN/RESET_STREAM.
This code was later removed and replaced by a BUG_ON() to ensure snd_buf
is never called after stream closure.
However, this approach is too strict. Indeed, there is nothing in the
haproxy stream architecture which forbids this scheduling, in part because
the QUIC MUX is solely responsible for the stream closure. As such, it is
preferable to revert to the old code to prevent any BUG_ON() failure from
triggering.
Note that nego_ff does not implement data draining if called after
stream closure. This will be done in a future patch.
Thanks to Mike Walker for his investigation on the subject.
This must be backported up to 2.8.
This is a follow-up to b6bdb2553 ("MEDIUM: backend: make "balance random"
consider req rate when loads are equal").
In the above patch, we used the global sess_per_sec metric to choose which
server we should be using. But the original intent was to use the
per-thread-group statistic.
No backport needed, the previous patch already improved the situation in
3.3, so let's not take the risk of breaking that.
ssl-f-use lines try to load a crt file, but the "crt" keyword is not
mandatory. That could lead to crtlist_load_crt() being called with a
NULL path, and to attempting a stat() on it.
In this particular case we don't need to try anything and it's better to
exit with an actual error.
Must be backported as far as 3.2.
crtlist_load_crt() in post_section_frontend_crt_init() won't give details
about the line being parsed; this should be done by the caller.
Modify post_section_frontend_crt_init() to output the right error format.
Must be backported to 3.2.
In post_section_frontend_crt_init(), the crt_entry is populated with the
ssl_conf from the cfg_crt_node. On the error path, the crt_list is
completely freed, including the ssl_conf structure. But the ssl_conf
structure had already been freed when the cfg_crt_node was freed.
Fix the issue by storing a crtlist_dup_ssl_conf(n->ssl_conf) copy in the
crtlist_entry instead of a plain assignment.
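A minimal sketch of the change (variable names are illustrative):

    /* before: the entry shared the node's ssl_conf pointer, which then got
     * freed twice (once with the cfg_crt_node, once with the crtlist)
     */
    entry->ssl_conf = n->ssl_conf;

    /* after: give the entry its own copy */
    entry->ssl_conf = crtlist_dup_ssl_conf(n->ssl_conf);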
Fix issue #3268.
Needs to be backported as far as 3.2. The previous patch, which adds the
crtlist_dup_ssl_conf() declaration, is needed.
In the historical implementation, all filter-related information was stored
at the stream level (using a struct strm_flt context), and filter iteration
was also performed at the stream level.
We identified that this was not ideal and would make the implementation of
future filters more complex, since filter ordering must be handled in a
different order during request and response handling, for decompression
for instance.
To make such a thing possible, in this commit we migrate some
channel-specific filter contexts into the channel itself (request or
response), and we implement 2 additional filter lists, one on the request
channel and another on the response channel. The historical stream filter
list is kept as-is because in some contexts only the stream is available
and we have to iterate over all filters. But for functions that are only
interested in request-side or response-side filters, we now use the
dedicated per-channel filter lists instead.
The only overhead is that "struct filter" was expanded by two
"struct list" members.
For now, no change of behavior is expected.
Multiple channel-related functions share the same construction: they use
list_for_each_entry() to work on a given filter from the stream+channel
combination. In future commits we will try to use the filter list from the
dedicated channel instead of the stream one. Thus, as a prerequisite, this
patch implements and uses the flt_list_{start,next} API to iterate over a
filter list, giving the API the responsibility of picking the correct list
depending on the context, while the calling function remains free to use
whatever iteration construct it needs. This way we will be able to easily
change the way we iterate over filter lists without duplicating the code
for requests and responses.
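A hedged usage sketch (only the flt_list_{start,next} names come from the
patch; the exact signatures are assumptions):

    struct filter *flt;

    /* walk the filters relevant to this channel; the helpers pick the
     * proper underlying list (stream-wide or per-channel)
     */
    for (flt = flt_list_start(s, chn); flt; flt = flt_list_next(s, chn, flt)) {
        /* ... per-filter processing ... */
    }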
There is no need to have those helpers defined as macros, and since it is
not mandatory, code maintenance is much easier using functions, so let's
switch to function definitions.
Also, we change the way we iterate over the list so that the calling
function now has a pseudo API to get and iterate over filter pointers while
keeping control of how the iterating logic is implemented.
One benefit of this is that we will also be able to switch between lists
depending on the channel type, which is a prerequisite for the upcoming
rework that splits the filter list over request and response channels
(commit will follow).
No change of behavior is expected.
Features prefixed by "HAVE_WORKING_" in the haproxy -vv feature list are
features that are detected at runtime.
This patch moves these features to a separate line in haproxy -vv. This
line is named "Detected feature list".
The feature list in haproxy -vv is partly generated from the Makefile
using the USE_* keywords, but it's also possible to add keywords to the
feature list using hap_register_feature(), which adds the keyword at the
end of the list. When doing so, the list is not correctly sorted anymore.
This patch fixes the problem by splitting the string into an array of
ist and applying qsort() on it.
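A simplified sketch of the sorting step, using plain C strings instead of
haproxy's ist type (words/nb_words are illustrative names):

    #include <stdlib.h>
    #include <string.h>

    static int cmp_feat(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    /* split the space-separated feature string into <words>, then sort */
    qsort(words, nb_words, sizeof(*words), cmp_feat);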
The TCP_MD5SIG ifdef is not enough to check whether the feature is usable:
the code might compile but the OS could still prevent using it.
This patch tries the TCP_MD5SIG setsockopt before adding
HAVE_WORKING_TCP_MD5SIG to the feature list, so reg-tests are not started
if the OS can't run it.
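A hedged sketch of such a runtime probe (not the actual haproxy code; error
handling is simplified):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int tcp_md5sig_works(void)
    {
        struct tcp_md5sig sig;
        struct sockaddr_in sa;
        int fd, ret;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 0;

        memset(&sig, 0, sizeof(sig));
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        memcpy(&sig.tcpm_addr, &sa, sizeof(sa));
        sig.tcpm_keylen = 4;
        memcpy(sig.tcpm_key, "test", 4);

        /* if the kernel actually supports the option, the call succeeds;
         * otherwise it typically fails with ENOPROTOOPT
         */
        ret = setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG, &sig, sizeof(sig));
        close(fd);
        return ret == 0;
    }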
This converter takes a private key in the JWK format (RFC7517) that can
be provided as a string or via a variable.
The only keys managed for now are of type 'RSA' or 'oct'.
Add helper functions that convert a JWK (the JSON representation of an RSA
private key) into an EVP_PKEY (containing the private key).
Those functions are not used yet; they will be used in the upcoming
'jwt_decrypt_jwk' converter.
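A hedged illustration of how such a conversion can be done with OpenSSL 3.x
(this is not the haproxy helper itself; the JWK members are assumed to be
already base64url-decoded into BIGNUMs):

    #include <openssl/core_names.h>
    #include <openssl/evp.h>
    #include <openssl/param_build.h>

    static EVP_PKEY *rsa_key_from_jwk_parts(BIGNUM *n, BIGNUM *e, BIGNUM *d)
    {
        OSSL_PARAM_BLD *bld = OSSL_PARAM_BLD_new();
        OSSL_PARAM *params = NULL;
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_from_name(NULL, "RSA", NULL);
        EVP_PKEY *pkey = NULL;

        if (!bld || !ctx)
            goto out;

        /* "n", "e" and "d" map to the JWK members of the same name */
        OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_RSA_N, n);
        OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_RSA_E, e);
        OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_RSA_D, d);
        params = OSSL_PARAM_BLD_to_param(bld);

        if (params && EVP_PKEY_fromdata_init(ctx) > 0)
            EVP_PKEY_fromdata(ctx, &pkey, EVP_PKEY_KEYPAIR, params);
    out:
        OSSL_PARAM_free(params);
        OSSL_PARAM_BLD_free(bld);
        EVP_PKEY_CTX_free(ctx);
        return pkey;
    }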
The QUIC frame type is encoded as a varint. Initially, haproxy parsed it as
a single byte, which was enough to cover the frames defined in RFC9000.
The code was recently extended to support multi-byte encoded values, in
anticipation of QUIC frame extension support. However, there was no check
on the varint format: an unparsable frame type was erroneously interpreted
as a PADDING frame, since that is the initial value. Thus the rest of the
packet was incorrectly handled, with various resulting effects, including
infinite loops and/or crashes.
This patch fixes this by checking the return value of quic_dec_int(). If
the varint cannot be parsed, the connection is immediately closed.
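For reference, a self-contained sketch of the generic varint decoding and
the kind of format check involved (this follows RFC 9000, it is not
haproxy's quic_dec_int() itself):

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a QUIC variable-length integer. Returns the number of bytes
     * consumed, or 0 if the buffer is too short; the caller must then treat
     * the frame type as invalid instead of defaulting to PADDING.
     */
    static size_t quic_varint_dec(const unsigned char *buf,
                                  const unsigned char *end, uint64_t *val)
    {
        size_t len;

        if (buf >= end)
            return 0;
        len = (size_t)1 << (*buf >> 6); /* 2 high bits: 1, 2, 4 or 8 bytes */
        if ((size_t)(end - buf) < len)
            return 0;
        *val = *buf & 0x3f;
        for (size_t i = 1; i < len; i++)
            *val = (*val << 8) | buf[i];
        return len;
    }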
This issue is assigned to CVE-2026-26080 report.
This must be backported up to 3.2.
Reported-by: Asim Viladi Oglu Manizada <manizada@pm.me>
The token parsing code for the NEW_TOKEN format on INITIAL packets is not
robust enough and may even crash on some rare malformed packets.
This patch fixes this by adding a check on the expected length of the
received token. The packet is now rejected if the token length does not
match QUIC_TOKEN_LEN. This check is legitimate as haproxy should only parse
tokens emitted by itself.
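A minimal sketch of the added check (only QUIC_TOKEN_LEN comes from the
patch; the surrounding names are illustrative):

    /* haproxy only emits NEW_TOKEN tokens of a fixed size, so any other
     * length cannot be one of ours: drop the packet before decoding
     */
    if (token_len != QUIC_TOKEN_LEN)
        goto reject;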
This issue was introduced with the implementation of NEW_TOKEN token
parsing, required for 0-RTT support.
This issue is assigned to CVE-2026-26081 report.
This must be backported up to 3.0.
Reported-by: Asim Viladi Oglu Manizada <manizada@pm.me>
When a duplicated CRYPTO frame is received during handshake, a server
may consider that there was a packet loss and immediately retransmit its
pending CRYPTO data without having to wait for PTO expiration. However,
RFC 9002 indicates that this should only be performed at most once per
connection to avoid excessive packet transmission.
The QUIC connection is flagged with QUIC_FL_CONN_HANDSHAKE_SPEED_UP to mark
that a fast retransmit has been performed. However, during the refactoring
of CRYPTO handling, with the storage conversion from ncbuf to ncbmbuf, the
check on the flag was accidentally removed. The faulty patch is the
following one:
commit f50425c021
MINOR: quic: remove received CRYPTO temporary tree storage
This patch adds back the check on QUIC_FL_CONN_HANDSHAKE_SPEED_UP before
initiating a fast retransmit. This ensures it is only performed once per
connection.
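A hedged sketch of the restored guard (only the flag name comes from the
patch; the connection field access is illustrative):

    /* perform the handshake fast retransmit at most once per connection */
    if (!(qc->flags & QUIC_FL_CONN_HANDSHAKE_SPEED_UP)) {
        qc->flags |= QUIC_FL_CONN_HANDSHAKE_SPEED_UP;
        /* ... schedule immediate retransmission of pending CRYPTO data ... */
    }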
This must be backported up to 3.3.
In srv_dynamic_maxconn(), we decide that the max number of connections is
the server's maxconn if 1) the proxy's number of connections is over
fullconn, or 2) minconn was not set.
Check whether minconn is not set first, as it will be true most of the
time; the proxy's "beconn" variable sits in a busy cache line and can be
costly to access, while minconn/maxconn is in a cache line that should
very rarely change.
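A hedged sketch of the reordered test (field names are illustrative of the
layout described above):

    /* check the cheap, rarely-changing field first; only read the hot
     * px->beconn counter when minconn is actually configured
     */
    if (!srv->minconn || px->beconn >= px->fullconn)
        max = srv->maxconn;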
The documentation of @system-ca specifies that one can override the value
provided by the SSL library using SSL_CERT_DIR.
However, it seems that X509_get_default_cert_dir() is not affected by this
environment variable; X509_get_default_cert_dir_env() needs to be used to
get the variable name, and the value must then be read manually.
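A hedged sketch of how the variable can be honoured manually (these two
OpenSSL functions exist; error handling omitted):

    #include <openssl/x509.h>
    #include <stdlib.h>

    const char *dir = getenv(X509_get_default_cert_dir_env()); /* "SSL_CERT_DIR" */
    if (!dir)
        dir = X509_get_default_cert_dir(); /* built-in default */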
This could be backported to all stable branches. Note that older branches
don't have the memprintf in ssl_sock.c.
SSL libraries built manually might lack the right
X509_get_default_cert_dir() value.
The common way to fix the problem is to build openssl with
./configure --openssldir=/etc/ssl/
In order to verify this setting, output it with haproxy -vv.
Given that we already have "set profiling task", it's easy to allow
enabling/disabling lock and/or memory profiling at run time. However, the
change will only be applied the next time task profiling is switched from
off/auto to on.
The patch is very minor and is best viewed with git show -b because it
indents a whole block that moves into an "if" clause.
This can be backported to 3.3 along with the two previous patches.
In continuity with the previous patch, this one makes use of the new
profiling flags. Based on the global "profiling" setting, when switching
profiling on, we set or clear two flags on the thread context,
TH_FL_TASK_PROFILING_L and TH_FL_TASK_PROFILING_M, to indicate whether
lock profiling and/or malloc profiling are desired when profiling is
enabled. These flags are checked along with TH_FL_TASK_PROFILING to
decide when to collect time around a lock or a malloc. And by default
we're back to the behavior of 3.2, in that neither lock nor malloc times
are collected anymore.
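A hedged sketch of the resulting test around a lock (the flag names come
from the patch; the surrounding code is illustrative):

    /* collect lock timings only when task profiling is on AND lock
     * profiling was explicitly enabled
     */
    if ((th_flags & (TH_FL_TASK_PROFILING | TH_FL_TASK_PROFILING_L)) ==
        (TH_FL_TASK_PROFILING | TH_FL_TASK_PROFILING_L))
        lock_start = now_mono_time();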
This is sufficient for the CPU usage spent in the VDSO to drop
significantly, from 22% to 2.2%, on a highly loaded system.
This should be backported to 3.3 along with the previous patch.
Damien Claisse reported in issue #3257 a performance regression between
3.2 and 3.3 when task profiling is enabled, more precisely related to the
following patches being merged:
98cc815e3e ("MINOR: activity: collect time spent with a lock held for each task")
503084643f ("MINOR: activity: collect time spent waiting on a lock for each task")
9d8c2a888b ("MINOR: activity: collect CPU time spent on memory allocations for each task")
The issue mostly comes from the first patches. What happens is that the
local time is taken when entering and leaving each lock, which costs a
lot on a contended system. The problem here is the lack of fine-grained
settings for lock and malloc profiling.
This patch introduces a better approach. The task profiler goes back to
its default behavior in on/auto modes, but the configuration now accepts
new extra options "lock", "no-lock", "memory", "no-memory" to precisely
indicate other timers to watch for each task when profiling turns on.
This is achieved by setting two new flags HA_PROF_TASKS_LOCK and
HA_PROF_TASKS_MEM in the global "profiling" variable.
This patch only parses the new values and assigns them to the global
variable from the config file for now. The doc was updated.
Commit 3674afe8a0 ("BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING
and clr FL_NOTIFIED") accidentally left a strange-looking line wrapping
that makes one think of an editing mistake. Let's fix it and keep it on a
single line, given that even indented wrapping would be almost as large.
This can be backported with the fix above till 2.8 to keep the patch
context consistent between versions.
An issue was introduced in 3.0 with commit faa8c3e024 ("MEDIUM: lb-chash:
Deterministic node hashes based on server address"): the initialization of
the new server_key field and of the lb_nodes entries was not updated for
servers added at run time with "add server": server_key remains zero and
the key used in lb_node remains the one that depends only on the server's
ID.
This will cause trouble when adding new servers with consistent hashing,
because the hash-key will be ignored until the server's weight changes
and the key difference is detected, leading to its recalculation.
This is essentially caused by the poorly placed lb_nodes initialization
that is specific to lb-chash and had to be replicated in the code dealing
with server addition.
This commit solves the problem by adding a new ->server_init() function
in the lbprm proxy struct, which is called by the server addition code.
This also allows us to drop the complex check on LB algos that was placed
there for that purpose. For now only lb-chash provides such a function,
and it also calls it during initial setup. This way newly added servers
always use the correct key.
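A hedged sketch of the mechanism (the callback name comes from the patch;
the struct layout and call site are simplified):

    struct lbprm {
        /* ... */
        void (*server_init)(struct server *srv); /* per-algorithm server setup */
    };

    /* in the "add server" handling, once the server is created: */
    if (px->lbprm.server_init)
        px->lbprm.server_init(srv);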
While it should also theoretically have had an impact on servers added
with the "random" algorithm, it's unlikely that the difference between
proper server keys and those based on their ID could have had any visible
effect.
This patch should be backported as far as 3.0. The backport may be eased
by a preliminary backport of previous commit "CLEANUP: lb-chash: free
lb_nodes from chash's deinit(), not global", though this is not strictly
necessary if context is manually adjusted.
There's an ambiguity about the ownership of lb_nodes in chash: it's
allocated by chash but freed by the server code in srv_free_params(),
called from srv_drop() upon deinit. Let's move this free() call to a
chash-specific function which will own the responsibility for it instead.
Note that the .server_deinit() callback is properly called both when a
proxy is taken down and on server deletion.
For "add backend" implementation, postparsing code in
check_config_validity() from cfgparse.c has been extracted in a new
dedicated function named proxy_finalize() into proxy.c.
This has caused unexpected compilation issue as in the latter file
TLSEXT_TYPE_application_layer_protocol_negotiation macro may be
undefined, in particular when building without QUIC support. Thus, code
related to default ALPN on binds is discarded after the preprocessing
stage.
Fix this by including openssl-compat header file into proxy source file.
This should be sufficient to ensure SSL related defines are properly
included.
This should fix recent issues on SSL regtests.
No need to backport.
Emeric suggested that it's sometimes convenient to instantly know if a
client has advertised support for window scaling or timestamps for
example. While the info is present in the TCP options output, it's hard
to extract since it respects the options order.
So here we're extending the 56-bit fingerprint with 8 extra bits that
indicate the presence of options 2..8, and any option above 9 for the
last bit. In practice this is sufficient since higher options are not
commonly used. Also TCP option 5 is normally not sent on the SYN (SACK,
only SACK_perm is sent), and echo options 6 & 7 are no longer used
(replaced with timestamps). These fields might be repurposed in the
future if some more meaningful options are to be mapped (e.g. MPTCP,
TFO cookie, auth).
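A hedged sketch of how the extra presence bits could be accumulated while
scanning the SYN's options (purely illustrative variable names):

    uint8_t opt_bits = 0;

    /* one bit per low-numbered option kind found in the SYN; any higher
     * option shares the last bit
     */
    if (kind >= 2 && kind <= 8)
        opt_bits |= 1U << (kind - 2);
    else if (kind > 8)
        opt_bits |= 1U << 7;

    /* append the 8 presence bits to the existing 56-bit fingerprint */
    fp = (fp << 8) | opt_bits;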
When a backend is created at runtime, the new proxy instance is inserted
at the end of proxies_list. This operation is buggy if this list is
empty: the code causes a null dereference which will lead to a crash.
This shows up as the following compiler warning:
CC src/proxy.o
src/proxy.c: In function 'cli_parse_add_backend':
src/proxy.c:4933:36: warning: null pointer dereference [-Wnull-dereference]
4933 | proxies_list->next = px;
| ~~~~~~~~~~~~~~~~~~~^~~~
This patch fixes this issue. Note that in reality it cannot occur at
this moment as proxies_list cannot be empty (haproxy requires at least
one frontend to start, and the list also always contains internal
proxies).
No need to backport.
This patch fixes the following compilation error:
src/proxy.c:4954:12: error: format string is not a string literal
(potentially insecure) [-Werror,-Wformat-security]
4954 | ha_notice(msg);
| ^~~
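The usual fix for this class of warning is to pass the message through an
explicit format string; presumably something along these lines:

    ha_notice("%s", msg);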
No need to backport.
Implement proxy ID generation for dynamic backends. This is performed
through the already existing proxy_get_next_id() function.
As an optimization, the lookup is performed starting from a global
variable <dynpx_next_id>. It is initialized to the greatest ID assigned
after parsing, and updated each time a backend instance is created. When
backend deletion is implemented, it could be lowered to the newly
available slot.
Implement the required operations for the "add backend" handler. This
requires a new proxy allocation, a copy of the settings from the specified
default instance, and proxy config finalization. All handlers registered
via REGISTER_POST_PROXY_CHECK() are also called on the newly created
instance.
If no error was encountered, the newly created proxy is finally attached
to the proxies list.
This commit completes the "add backend" handler with some checks performed
on the specified default proxy instance. These are additional checks,
outside of the already existing inheritance rules, specific to dynamic
backends.
For now, a default proxy is considered incompatible if it is not in TCP
or HTTP mode. Also, a default proxy is rejected if it references HTTP
errors. This limitation may be lifted in the future, once HTTP errors
are partially reworked.
Add an optional "mode" argument to the "add backend" CLI command. This
argument allows specifying whether the backend is in TCP or HTTP mode.
This argument is mandatory unless the inherited default proxy already
explicitly specifies the mode. To differentiate whether TCP mode is
implicit or explicit, a new proxy flag, PR_FL_DEF_EXPLICIT_MODE, is
defined. It is set for every defaults instance which explicitly defined
its mode.