Commit graph

26087 commits

Author SHA1 Message Date
Amaury Denoyelle
848e0cd052 MINOR: proxy: simplify defaults proxies list storage
Defaults proxy instances are stored in a global name tree. When there
is a name conflict and the older entry cannot simply be discarded
because it is already referenced, it is instead removed from the name
tree and inserted into the orphaned list.

The purpose of the orphaned list was to guarantee that any remaining
unreferenced defaults are purged either on postparsing or on deinit.
However, this is in fact completely useless. Indeed, on postparsing,
orphaned entries are always referenced. On deinit, defaults are already
freed along with the cleanup of all frontend/backend instances, thanks
to their refcounting.

This patch streamlines the code by removing the orphaned list. Instead,
each defaults section is inserted into a new global defaults_list for
its whole lifetime. This is not strictly necessary, but it ensures that
defaults instances can still be accessed easily in the future if
needed, even when not present in the name tree. On deinit, a BUG_ON()
is added to ensure that defaults_list is indeed emptied.

Another benefit of this patch is to simplify the defaults deletion
procedure. The orphaned singly-linked list is replaced by a proper
doubly-linked list implementation, so a single LIST_DELETE() is now
enough. This will be notably useful as defaults may be removed at
runtime in the future, if backend deletion at runtime is implemented.
2026-01-22 17:57:09 +01:00
Amaury Denoyelle
434e979046 MINOR: proxy: refactor defaults proxies API
This patch renames functions which deal with defaults sections. A
common "defaults_px_" prefix is defined. This serves as a marker to
identify functions which can only be used on proxies with the defaults
capability. New BUG_ON() checks are added to enforce this.

Also, the older proxy_unref_or_destroy_defaults() is renamed to
defaults_px_detach().
2026-01-22 17:55:47 +01:00
Amaury Denoyelle
6c0ea1fe73 MINOR: proxy: remove proxy_preset_defaults()
The purpose of proxy_preset_defaults() has evolved over time.
Originally, it was only used to initialize defaults proxy instances.
It was later extended so that all proxies use it. Its objective is to
initialize settings to common default values.

To avoid the confusion, this function is now removed. Its content is
integrated directly into init_new_proxy().
2026-01-22 16:20:25 +01:00
Willy Tarreau
f535d3e031 BUG/MEDIUM: debug: only dump Lua state when panicking
For a long time, we've tried to show the Lua state and backtrace when
dumping threads so as to be able to figure out if (and which) Lua code was
misbehaving, e.g. by performing expensive library calls. Since 3.1 with
commit 365ee28510 ("BUG/MINOR: hlua: prevent LJMP in hlua_traceback()"),
it appears that the approach is more fragile (though that fix addressed
a real issue about out-of-memory), and it's possible to occasionally
observe crashes or CPU loops with "show threads" while running Lua
heavily. While users of "show threads" are rare, the watchdog warnings,
which were also enabled in 3.1, also trigger these issues, which is
even more of a concern.

This patch goes the simple way to address this for now: since the purpose
of the Lua backtrace was to help locate Lua call places upon a panic,
let's only call the backtrace on panic but not in other situations. After
a panic we obviously don't care that the Lua stack might be corrupted
since it's never going to be resumed anyway. This may be relaxed in the
future if a solution is found to reliably produce harmless Lua backtraces.

The commit above was backported to all stable branches, so this patch
will be needed everywhere. However, TAINTED_PANIC only appeared in 2.8,
and given the rarity of this bug before 3.1, it's probably not needed
to make any extra effort to go beyond 2.8.

It's easy enough to test whether a version is subject to this issue,
by running the following Lua code:

  local function stress(txn)
          for _, backend in pairs(core.backends) do
                  for _, server in pairs(backend.servers) do
                          local stats = server:get_stats()
                  end
          end
  end

  core.register_fetches("stress", stress)

in the following config file:

  global
        stats socket /tmp/haproxy.stat level admin mode 666
        tune.lua.bool-sample-conversion normal
        lua-load-per-thread "stress.lua"

  listen stress
        bind :8001
        mode http
        timeout client 5s
        timeout server 5s
        timeout connect 5s
        http-request return status 200 content-type text/plain lf-string %[lua.stress()]
        server s1 127.0.0.1:8000

and stressing port 8001 with 100+ connections requesting / in a loop,
then issuing "show threads" on the CLI using socat in a loop as well.
Normally it instantly segfaults (sometimes during the first "show").
2026-01-22 15:47:42 +01:00
Amaury Denoyelle
ac877a25dd BUG/MINOR: proxy: fix deinit crash on defaults with duplicate name
A defaults proxy instance may be moved into the orphaned list when it
is replaced by a newer section with the same name. It is attached via
the <next> member as a singly-linked list entry. However, the proxy
free code does not clear this <next> attach point.

This causes a crash on deinit if the orphaned list is not empty. First,
all frontend/backend instances are freed. This triggers the release of
every referenced defaults instance as its refcount reaches zero, but
the orphaned list is not cleaned up. A loop is then performed on the
orphaned list via proxy_destroy_all_unref_defaults(). This causes a
segfault due to accesses to already freed entries.

To fix this, this patch extends proxy_destroy_defaults(). If the
orphaned list is not empty, a loop is performed to remove a possible
entry for the currently released defaults instance. This ensures that
the loop over the orphaned list cannot access already freed entries.

This bug is pretty rare as it requires duplicate names in defaults
sections, and also settings which force defaults referencing, such as
TCP/HTTP rules. It can be reproduced with the following minimal config:

defaults def
    http-request return status 200
frontend fe
    bind :20080
defaults def

Note that in fact the orphaned list looping is not strictly necessary,
as defaults instances are automatically removed via refcounting. That
will be the purpose of a future patch. However, to limit the risk of
regression on stable releases during backport, this patch uses the more
direct approach for now.

This must be backported up to 3.1.
2026-01-22 15:40:01 +01:00
William Lallemand
b3f7d43248 SCRIPTS: build: enable symbols in AWS-LC builds
The Release target of AWS-LC does not enable symbols, which makes
debugging and using gdb difficult.

Change the build type to RelWithDebInfo.
2026-01-22 15:25:24 +01:00
William Lallemand
21b192e799 REGTESTS: ssl: fix generate-certificates w/ LibreSSL
Since commit eb5279b15 ("BUG/MEDIUM: ssl: fix generate-certificates
option when SNI greater than 64bytes") the LibreSSL job does not seem to
work anymore.

Indeed the reg-test was modified to add an SNI longer than 64 bytes,
without any concern for the DNS standard, which allows only 63 bytes
per label.

LibreSSL is stricter than the other libraries about that, and checks
that the SNI is compliant with the DNS RFC in the
tlsext_sni_is_valid_hostname() function
https://github.com/libressl/openbsd/blob/OPENBSD_7_8/src/lib/libssl/ssl_tlsext.c#L710

This patch fixes the issue by splitting the SNI into a second label, so
that it still exceeds 64 bytes while each label remains DNS-compliant.

Must be backported with eb5279b15 to every stable branch.
2026-01-21 16:50:16 +01:00
Amaury Denoyelle
c7004be964 BUG/MEDIUM: mux-quic: prevent BUG_ON() on aborted uni stream close
When a QCS instance is fully closed on qcs_close_remote() invocation,
it is moved into purg_list for later cleanup. This reuses the <el_send>
list element, so a BUG_ON() ensures that the QCS is not already present
in send_list.

This code is safe for bidirectional streams, as the local channel is
only closed after FIN or RESET_STREAM emission completes, so such a QCS
won't be present in the send_list on full closure.

However, things are different for remote uni streams. As such streams
do not have any local channel, qcs_close_remote() will always proceed
to full closure. Most of the time this is fine, but the aforementioned
BUG_ON() could be triggered if emission is required on a remote uni
stream: this only happens after a read was aborted and a STOP_SENDING
frame is prepared.

Fix this by adding an extra operation in qcs_close_remote(): on full
close, STOP_SENDING is cancelled if it was prepared and the QCS
instance is removed from send_list. This is safe as STOP_SENDING is
unnecessary after the remote channel is closed. This operation is
performed before the purg_list insertion, which prevents the BUG_ON()
crash.

This patch must be backported up to 3.1.
2026-01-21 14:01:12 +01:00
William Lallemand
eb5279b154 BUG/MEDIUM: ssl: fix generate-certificates option when SNI greater than 64bytes
The problem is that the certificate is generated with a CN greater than
64 bytes when the SNI is too long, which is not supposed to be
supported, and ends up with a handshake failure.

The patch fixes the issue by not adding a CN when the SNI is longer
than 64 bytes. Indeed this field is not mandatory anymore and was
deprecated more than 20 years ago. The SAN DNS entry is enough in this
case.

Must be backported to every stable branch.
2026-01-21 10:45:22 +01:00
William Lallemand
fbc98ebcda BUG/MEDIUM: ssl: fix error path on generate-certificates
It was reported by Przemyslaw Bromber that using the
"generate-certificates" option combined with AWS-LC would crash HAProxy
when a request is made with an SNI longer than 64 bytes.

The problem is that the certificate is generated with a CN greater than
64 bytes, which results in ssl_sock_do_create_cert() returning NULL.
This NULL value is then passed to SSL_set_SSL_CTX().

With OpenSSL, passing a NULL SSL_CTX does not seem to be an issue as it
would just ignore it.

With AWS-LC, passing a NULL seems to crash the function. This was
reported upstream to AWS-LC and fixed by patch 7487ad1dcd8
https://github.com/aws/aws-lc/pull/2946.

This must be backported to every branch.
2026-01-21 10:45:22 +01:00
Hyeonggeun Oh
2d8d2b4247 DOC: vars: document dump_all_vars() sample fetch
Add documentation for the dump_all_vars() sample fetch function in the
configuration manual. This function was introduced in the previous commit
to dump all variables in a given scope with optional prefix filtering.

The documentation includes:
- Function signature and return type
- Description of output format
- Explanation of scope and prefix arguments
- Usage examples for common scenarios

This completes the implementation of GitHub issue #1623.
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
9f766b2056 MINOR: vars: implement dump_all_vars() sample fetch
This patch implements dump_all_vars([scope],[prefix]) sample fetch
function that dumps all variables in a given scope, optionally
filtered by name prefix.

Output format: var1=value1, var2=value2, ...
- String values are quoted and escaped (", \\, \r, \n, \b, \0)
- All sample types are supported via sample_convert()
- Scope can be: sess, txn, req, res, proc
- Prefix filtering is optional

Example usage:
  http-request return string %[dump_all_vars(txn)]
  http-request return string %[dump_all_vars(txn,user)]

This addresses GitHub issue #1623.
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
95e8483b35 MINOR: vars: store variable names for runtime access
Currently, variable names are only used during parsing and are not
stored at runtime. This makes it impossible to iterate through
variables and retrieve their names.

This patch adds infrastructure to store variable names:
- Add 'name' and 'name_len' fields to var_desc structure
- Add 'name' field to var structure
- Add VDF_NAME_ALLOCATED flag to track memory ownership
- Store names in vars_fill_desc(), var_set(), vars_check_arg(),
  and parse_store()
- Free names in var_clear() and release_store_rule()
- Add ARGT_VAR handling in release_sample_arg() to free the
  allocated name when the flag is set

This prepares the ground for implementing dump_all_vars() in the
next commit.

Tested with:
  - ASAN-enabled build on Linux (TARGET=linux-glibc USE_OPENSSL=1
    ARCH_FLAGS="-g -fsanitize=address")
  - Regression tests: reg-tests/sample_fetches/vars.vtc
  - Regression tests: reg-tests/startup/default_rules.vtc
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
25564b6075 MINOR: tools: add chunk_escape_string() helper function
This function takes a string and appends it to a buffer in a format
compatible with most languages (double-quoted, with special characters
escaped). It handles standard escape sequences like \n, \r, \", \\.

This generic utility is designed to be used for logging or debugging
purposes where arbitrary string data needs to be safely emitted without
breaking the output format. It will be primarily used by the upcoming
dump_all_vars() sample fetch to dump variable contents safely.
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
7e85391a9e REORG: cfgparse: move peers parsing to cfgparse-peers.c
This patch moves the peers section parsing code from src/cfgparse.c to
a dedicated src/cfgparse-peers.c file. This separation improves code
organization and prepares for further refactoring of the "peers"
keyword registration system.

No functional changes in this patch - the code is moved as-is with only
the necessary adjustments for compilation (adding the SPDX header and
updating the Makefile for the build).

This is the first patch in a series to address issue #3221, which
reports that "peers" section keywords are not displayed with -dKall.
2026-01-20 17:17:37 +01:00
Egor Shestakov
44c491ae6b DOC: fix mismatched quotes typos around words in the documentation files
s/"no'/"no"
s/'private"/"private"
s/"flt'/"flt"

There isn't a definite convention, but people usually prefer to
highlight something important with quotation marks. For example, it's
convenient to find keywords in a text when they are quoted; mismatched
quotes make this harder.

No backport needed.
2026-01-20 08:15:41 +01:00
Egor Shestakov
0c3b212aab DOC: fix typos in the documentation files
This fixes several obvious typos in the documentation:

s/elvoved/evolved
s/performend/performed
s/importnat/important
s/sharedd/shared
s/eveyone/everyone

No backport needed.
2026-01-20 08:15:28 +01:00
Simon Ser
6f5def3cbd DOC: proxy-protocol: Add SSL client certificate TLV
Add the PP2_SUBTYPE_SSL_CLIENT_CERT code point reservation in the
proxy protocol specification. This is useful in cases where the
backend needs to perform mTLS authentication, but the rules for
certificate validation are backend-specific (e.g. database of
allowed certificate hashes).

This is left optional to leave it up to the frontend configuration
to dictate whether to forward raw certificate data.

Support for this new TLV has been added in tlstunnel:
https://codeberg.org/emersion/tlstunnel/pulls/33
2026-01-20 08:11:19 +01:00
Aurelien DARRAGON
9156d5f775 BUG/MEDIUM: log: parsing log-forward options may result in segfault
As reported by GH user @HiggsTeilchen on #3250, the use of "option
dont-parse-log" may result in a segmentation fault when parsing the
configuration. In fact, "option assume-rfc6587-ntf" is also affected.

The reason behind this is that cfg_parse_log_forward() leverages the
cfg_parse_listen_match_option() function to check for generic proxy
options that are relevant in the PR_MODE_SYSLOG context. And while it
is not documented, this function assumes that the currently evaluated
proxy is stored in the global variable 'curproxy', which
cfg_parse_log_forward() doesn't do.

cfg_parse_listen_match_option() uses curproxy to check that the
currently evaluated proxy's capabilities are compatible with the
option. So if a proxy with the frontend capability was defined earlier
in the config, parsing would succeed; if curproxy points to a proxy
without the frontend capability (ie: a backend), a warning would be
emitted saying that the option would be ignored, while it is perfectly
valid for the log-forward proxy; and if no proxy was defined earlier in
the config, a segfault would be triggered.

To fix the issue, we explicitly make the "curproxy" global variable
point to the log-forward proxy being parsed in cfg_parse_log_forward()
before leveraging cfg_parse_listen_match_option() to check for
compatible options.
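
For reference, here is a minimal sketch of a configuration that, per
the description above, could trigger the crash before this fix,
assuming no frontend is defined before the log-forward section
(addresses and section name are illustrative):

  log-forward syslog-fwd
      dgram-bind 127.0.0.1:1514
      option dont-parse-log
      log 127.0.0.1:5514 local0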

It must be backported with 834e9af8 ("MINOR: log: add options eval for
log-forward"), which was introduced in 3.2 precisely.
2026-01-19 16:53:00 +01:00
William Lallemand
14e890d85e SCRIPTS: build-ssl: fix quictls build for 1.1.1 versions
The quictls build function was no longer using the right make target to
build older versions: "make all" must be used instead of "make
build_sw" for 1.1.1.
2026-01-19 14:45:47 +01:00
William Lallemand
818b32addc SCRIPTS: build-ssl: clone the quictls branch directly
Allow cloning the quictls branch directly, using the value from
QUICTLS_VERSION.
2026-01-19 14:45:47 +01:00
Aurelien DARRAGON
b4f64c0abf BUG/MEDIUM: promex: server iteration may rely on stale server
When performing a promex dump, even though we hold a reference on the
server across a yield (ie: buffer full), the refcount mechanism only
guarantees that the server pointer will still be valid upon resumption,
not that its content will be consistent. As such, sv->next may be
garbage upon resumption. Instead, we must rely on the watcher mechanism
to iterate over the server list when resumption is involved, like we
already do for the stats and lua handlers.

It must be backported anywhere 071ae8ce3 ("BUG/MEDIUM: stats/server:
use watcher to track server during stats dump") was (up to 2.8 it
seems).
2026-01-19 14:24:11 +01:00
Aurelien DARRAGON
d38b918da1 BUG/MINOR: server: ensure server is detached from proxy list before being freed
There remained some cases (on error paths) where a server could be
freed while still attached to the parent proxy server list. In 3.3 this
can be problematic because new_server() automatically adds the server
to the parent proxy list.

The bug is insignificant because it is on error paths during init, and
haproxy often exits right after. But let's fix it to ensure no UAF or
undefined behavior occurs because of that.

This patch depends on ("MINOR: cli: use srv_drop() when server was created using new_server()")

It must be backported in 3.3 with the above mentioned patch.
2026-01-19 14:24:04 +01:00
Aurelien DARRAGON
12dc9325a7 MINOR: cli: use srv_drop() when server was created using new_server()
Now that new_server() is becoming more and more complex, we need to
take care that servers created using new_server() are released using
the corresponding release function srv_drop(), which takes care of
properly de-initing the server and its members.
2026-01-19 14:23:58 +01:00
William Lallemand
eebb448f49 CI: github: fix vtest.yml with "not quictls"
Previous patch 0a4642 ("CI: github: define the right quictls version in
each jobs") didn't use the right syntax for string matching.
2026-01-19 13:22:10 +01:00
William Lallemand
0a464215c5 CI: github: define the right quictls version in each jobs
openssl+quictls is not maintained anymore (quictls/openssl); however,
we still need to test openssl+quictls 1.1.1. Other openssl+quictls
branches don't need to be tested.

The quictls hard fork is tested in the 'quictls' job, which uses the
'main' branch of the quictls/quictls repository.
2026-01-19 11:45:57 +01:00
William Lallemand
b8e91f619a SCRIPTS: build-ssl: use QUICTLS_VERSION instead of QUICTLS=yes
Quictls has multiple versions and the CI is not able to test a specific
one because the script uses the default git branch.

This patch allows checking out the right tag or branch.
2026-01-19 11:43:24 +01:00
Ilia Shipitsin
bd8d70413e CI: github: switch monthly Fedora Rawhide build to OpenSSL
QuicTLS builds are already run on push, and the openssl+quictls
patchset is not maintained anymore. The patch switches from
openssl+quictls to the native openssl of Fedora.

Fedora Rawhide builds are mainly useful to test the latest gcc and clang
versions as well as default options of the distribution.

The patch also contains a workaround to re-enable legacy algorithms
which are still tested on the CI.
2026-01-19 10:56:48 +01:00
William Lallemand
90c5618ed5 MEDIUM: systemd: implement directory loading
Redhat-based systems already use a CFGDIR variable to load
configuration files from a directory; this patch implements the same
feature.

It now requires that /etc/haproxy/conf.d exists or the service won't be
able to start.
2026-01-16 09:55:33 +01:00
Egor Shestakov
a3ee35cbfc REORG/MINOR: cfgparse: eliminate code duplication by lshift_args()
There were similar parts of code in the "no" and "default" prefix
keyword handling. This duplication already caused a bug once.

No backport needed.
2026-01-16 09:09:24 +01:00
Egor Shestakov
447d73dc99 BUG/MINOR: cfgparse: fix "default" prefix parsing
Fix the left shift of args when the "default" prefix matches. The cause
of the bug was the absence of zeroing of the rightmost element during
the shift. The same bug was fixed for the "no" prefix by commit
0f99e3497, but was missed for "default".

The shift of ("default", "option", "dontlog-normal")
    produced ("option", "dontlog-normal", "dontlog-normal")
  instead of ("option", "dontlog-normal", "")

As an example, a valid config line:
    default option dontlog-normal

caused a parse error:
[ALERT]    (32914) : config : parsing [bug-default-prefix.cfg:22] : 'option dontlog-normal' cannot handle unexpected argument 'dontlog-normal'.

The patch should be backported to all stable versions, since the
absence of zeroing was introduced with the "default" keyword.
2026-01-16 09:09:19 +01:00
Remi Tricot-Le Breton
362ff2628f REGTESTS: jwe: Fix tests of algorithms not supported by AWS-LC
Many tests use the A128KW algorithm, which is not supported by AWS-LC,
but instead of removing those tests we just use a hardcoded value set
by default in this case.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
aba18bac71 MINOR: jwe: Some algorithms not supported by AWS-LC
AWS-LC does not have EVP_aes_128_wrap or EVP_aes_192_wrap so the A128KW
and A192KW algorithms will not be supported for JWE token decryption.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
39da1845fc DOC: jwe: Add doc for jwt_decrypt converters
Add doc for jwt_decrypt_secret and jwt_decrypt_cert converters.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
4b73a3ed29 REGTESTS: jwe: Add jwt_decrypt_secret and jwt_decrypt_cert tests
Test the new jwt_decrypt converters.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
e3a782adb5 MINOR: jwe: Add new jwt_decrypt_cert converter
This converter checks the validity of a JWE token that uses an
asymmetric "alg" algorithm (RSA) and decrypts its content. In such a
case, we must provide the path to an already loaded certificate and
private key that has the "jwt" option set to "on".
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
416b87d5db MINOR: jwe: Add new jwt_decrypt_secret converter
This converter checks the validity of a JWE token that uses a symmetric
"alg" algorithm and decrypts its content. In such a case, only a secret
is required as parameter in order to decrypt the token.
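
A minimal usage sketch, assuming the secret is passed as the
converter's only argument (header name, variable and secret are
illustrative):

  http-request set-var(txn.jwe_payload) req.hdr(x-jwe-token),jwt_decrypt_secret(mysecret)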
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
2b45b7bf4f REGTESTS: ssl: Add tests for new aes cbc converters
This test mimics what was already done for the aes_gcm converters. Some
data is encrypted and then directly decrypted, and we ensure that the
original data is recovered unchanged.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
c431034037 MINOR: ssl: Add new aes_cbc_enc/_dec converters
These converters allow encrypting or decrypting data with AES in Cipher
Block Chaining mode. They work the same way as the already existing
aes_gcm_enc/_dec ones, apart from the AEAD tag notion, which is not
supported in CBC mode.
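
A minimal sketch mirroring the aes_gcm usage, assuming the arguments
are the key size in bits, a variable holding the IV and a variable
holding the key, without the trailing AEAD tag argument (variable and
header names are illustrative; check the converter documentation for
the exact argument semantics):

  http-request set-var(txn.enc) req.hdr(x-data),aes_cbc_enc(128,txn.iv,txn.key)
  http-request set-var(txn.dec) var(txn.enc),aes_cbc_dec(128,txn.iv,txn.key)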
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
f0e64de753 MINOR: ssl: Factorize AES GCM data processing
The parameter parsing and processing and the actual crypto part of the
aes_gcm converter are interleaved. This patch puts the crypto parts in a
dedicated function for better reuse in the upcoming JWE processing.
2026-01-15 10:56:27 +01:00
Amaury Denoyelle
6870551a57 MEDIUM: proxy: force traffic on unpublished/disabled backends
A recent patch has introduced a new state for proxies: unpublished
backends. Such backends won't be eligible for traffic, thus
use_backend/default_backend rules which target them won't match and
content switching rules processing will continue.

This patch defines a new frontend keyword 'force-be-switch'. This
keyword allows ignoring the unpublished or disabled state. Thus,
use_backend/default_backend will match even if the target backend is
unpublished or disabled. This is useful to be able to test a backend
instance before exposing it outside.

This new keyword is converted into a persist rule of the new type
PERSIST_TYPE_BE_SWITCH, stored in the proxy's persist_rules list
member. This is the only persist rule applicable to the frontend side.
Prior to this commit, pure frontends' persist_rules lists were always
empty.

This new feature requires an adjustment in process_switching_rules().
Now, when a use_backend/default_backend rule matches a non-eligible
backend, the frontend's persist_rules are inspected to detect if a
force-be-switch rule is present, so that the backend may still be
selected.
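
A minimal configuration sketch, assuming the keyword is used standalone
in the frontend section (an optional ACL condition may also be
supported; addresses and names are illustrative):

  frontend fe
      bind :8080
      mode http
      # allow switching to be_new even while it is unpublished/disabled
      force-be-switch
      use_backend be_new if { path_beg /new }
      default_backend be_prod

  backend be_new
      mode http
      server s1 192.0.2.10:80

  backend be_prod
      mode http
      server s1 192.0.2.20:80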
2026-01-15 09:08:19 +01:00
Amaury Denoyelle
16f035d555 MINOR: cfgparse: adapt warnif_cond_conflicts() error output
The utility function warnif_cond_conflicts() is used when parsing an
ACL. Previously, the function directly called ha_warning() to report an
error. Change the function so that it now takes the error message as an
argument. The caller can then output it as wanted.

This change is necessary to use the function when parsing a keyword
registered via cfg_kw_list. The next patch will reuse it.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
82907d5621 MINOR: stats: report BE unpublished status
A previous patch defines a new proxy status: unpublished backends. This
patch extends this by changing the proxy status reported in the stats.
If unpublished is set, an extra "(UNPUB)" is added to the field.

The HTML stats page is also slightly updated. If a backend is up but
unpublished, its status will be reported in orange.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
797ec6ede5 MEDIUM: proxy: implement publish/unpublish backend CLI
Define a new set of CLI commands: "publish backend <be>" and
"unpublish backend <be>". The objective is to be able to change the
status of a backend to unpublished. Such a backend is considered
ineligible for traffic: this allows skipping use_backend rules which
target it.

Note that contrary to disabled/stopped proxies, an unpublished backend
still has server checks running on it.

Internally, a new proxy flag PR_FL_BE_UNPUBLISHED is defined. The CLI
command handlers for "publish backend" and "unpublish backend" are
executed under thread isolation. This guarantees that the flag can
safely be set or removed in the CLI handlers, and read during
content-switching processing.
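
For example, assuming a stats socket at /tmp/haproxy.stat and a backend
named be_new, the commands could be issued as:

  $ echo "unpublish backend be_new" | socat stdio /tmp/haproxy.stat
  $ echo "publish backend be_new" | socat stdio /tmp/haproxy.stat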
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
21fb0a3f58 MEDIUM: proxy: do not select a backend if disabled
A proxy can be marked as disabled using the keyword with the same name.
The doc mentions that it won't process any traffic. However, this is not
really the case for backends as they may still be selected via switching
rules during stream processing.

In fact, currently access to a disabled backend proceeds up to
assign_server(). However, no eligible server is found at this stage,
resulting in a connection closure or an HTTP 503, which is expected. So
in the end, servers in disabled backends won't receive any traffic. But
this is only because post-parsing steps are not performed on such
backends. Thus, this can be considered functional, but only via side
effects.

This patch clarifies the handling of disabled backends, so that they
are never selected via switching rules. Now, process_switching_rules()
will ignore disabled backends and continue rule evaluation.

As this is a behavior change, this patch is labelled as medium. The
documentation for use_backend is updated accordingly.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2d26d353ce REGTESTS: add test on backend switching rules selection
Create a new test to ensure that switching rules selection works
properly. Currently, this checks that dynamic backend switching works
as expected. If a matching rule resolves to a non-existing backend, the
default backend is used instead.

This regtest should be useful as switching rules will be extended in a
future set of patches to add new capabilities to backends, linked to
dynamic backend support.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
12975c5c37 MEDIUM: stream: refactor switching-rules processing
This commit rewrites the process_switching_rules() function. The
objective is to simplify backend selection so that a single unified
stream_set_backend() call is kept, both for the regular and the default
backend cases.

This patch will be useful to add new capabilities on backends, in the
context of dynamic backend support implementation.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2f6aab9211 BUG/MINOR: proxy: free persist_rules
The force-persist proxy keyword is converted into a persist_rule,
stored in the proxy's persist_rules list member. Each new rule is
dynamically allocated during parsing.

This commit fixes a memory leak on deinit due to a missing free of the
persist_rules list entries. This is done by modifying deinit_proxy().
Each rule in the list is freed, along with its associated ACL
condition.

This can be backported to every stable version.
2026-01-15 09:08:18 +01:00
Olivier Houchard
a209c35f30 MEDIUM: thread: Turn the group mask in thread set into a group counter
If we want to be able to have more than 64 thread groups, we can no
longer use thread group masks stored as longs. One remaining place
where this is done is in struct thread_set. However, it is not really
used as a mask anywhere; all we want is a thread group counter, so
convert that mask into a counter.
2026-01-15 05:24:53 +01:00
Olivier Houchard
6249698840 BUG/MEDIUM: queues: Fix arithmetic when feeling non_empty_tgids
Fix the arithmetic when pre-filling non_empty_tgids when we still have
more than 32/64 thread groups left: to get the right index, we of
course have to divide the number of thread groups by the number of bits
in a long.
This bug was introduced by commit 7e1fed4b7a, but hopefully was not hit
because it requires having at least as many thread groups as there are
bits in a long, which is impossible on 64-bit machines, as MAX_TGROUPS
is still 32.
2026-01-15 04:28:04 +01:00