Compare commits


324 commits

Author SHA1 Message Date
Amaury Denoyelle
e07a75c764 REGTESTS: complete "del backend" with unnamed defaults ref free
Complete the "del backend" regtests by checking the deletion of a proxy
holding a reference on an unnamed defaults instance. This operation is
sensitive as the defaults refcount is decremented, and when the last
backend is removed, the defaults instance is also freed.
2026-03-02 14:15:53 +01:00
Amaury Denoyelle
2f5030c847 REGTESTS: add a test on "del backend"
Add a reg-test for the "del backend" CLI command. First, checks are
performed to ensure a backend cannot be deleted if not in the expected
state.

Then, a "del backend" success is tested. Stats are dumped to ensure the
backend instance is indeed removed.
2026-03-02 14:14:05 +01:00
Amaury Denoyelle
712055f2f8 MEDIUM: proxy: implement backend deletion
This patch finalizes the "del backend" handler by implementing the
proper proxy deletion.

After ensuring backend deletion can be performed, several steps are
executed. First, any watcher elements are updated to point to the next
proxy instance. The backend is then removed from the global ID and name
trees and is finally detached from proxies_list.

Once the backend instance is removed from proxies_list, it can no
longer be looked up by new users. Thread isolation is lifted and
proxy_drop() is called, which purges the proxy if its refcount is null.
Thanks to the recently introduced PROXIES_DEL_LOCK, proxy_drop() is
thread safe.
2026-03-02 14:14:05 +01:00
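
For illustration, the sequence above roughly maps to the following
self-contained C sketch; all types and helpers here are simplified
stand-ins rather than haproxy's actual code, and the watcher/tree
updates are reduced to comments:

    #include <stdlib.h>

    struct proxy {
        struct proxy *next;        /* link in the global proxies list */
        unsigned int refcount;
    };

    static struct proxy *proxies_list;

    static void thread_isolate(void) { /* stand-in: pause all other threads */ }
    static void thread_release(void) { /* stand-in: let threads run again */ }

    static void proxy_drop(struct proxy *px)
    {
        if (--px->refcount == 0)   /* purge only once nobody references it */
            free(px);
    }

    static void del_backend(struct proxy *px)
    {
        struct proxy **prev;

        thread_isolate();
        /* update watchers to point to px->next and remove px from the
         * global ID/name trees (omitted here), then unlink it from the
         * list so that it cannot be found anymore */
        for (prev = &proxies_list; *prev; prev = &(*prev)->next) {
            if (*prev == px) {
                *prev = px->next;
                break;
            }
        }
        thread_release();
        proxy_drop(px);            /* frees now, or later if still referenced */
    }
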
Amaury Denoyelle
6145f52d9c MINOR: proxy: use atomic ops for default proxy refcount
The default proxy refcount <def_ref> is used to account for references
held on a default proxy instance by standard proxies. Currently, this is
necessary when a default proxy defines TCP/HTTP rules or a tcpcheck
ruleset.

Convert every access to <def_ref> to use atomic operations. Currently,
this is not strictly needed as defaults proxy references are only
manipulated at init or deinit in single-thread mode. However, once
dynamic backend deletion is implemented, <def_ref> will also be
decremented at runtime.
2026-03-02 14:14:05 +01:00
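
As an illustration only, the change amounts to replacing plain
increments/decrements of the reference counter with atomic ones; a
simplified C11 model (field and function names are illustrative, not
haproxy's API):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct proxy {
        _Atomic unsigned int def_ref;   /* references held by regular proxies */
    };

    /* take a reference on a defaults proxy */
    static void defaults_take(struct proxy *defpx)
    {
        atomic_fetch_add_explicit(&defpx->def_ref, 1, memory_order_relaxed);
    }

    /* release a reference; free once the last one is gone */
    static void defaults_drop(struct proxy *defpx)
    {
        if (atomic_fetch_sub_explicit(&defpx->def_ref, 1,
                                      memory_order_acq_rel) == 1)
            free(defpx);
    }
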
Amaury Denoyelle
f64aa036d8 MEDIUM: proxy: add lock for global accesses during default free
This patch is similar to the previous one, but this time it deals with
functions related to defaults proxy instances. The PROXIES_DEL_LOCK lock
is used to protect accesses to global collections.

This patch will be necessary to implement dynamic backend deletion, even
if defaults won't be used as direct targets of a "del backend" CLI
command. However, a backend may hold a reference on a defaults instance.
When the backend is freed, this reference is released, which can in turn
cause the freeing of the defaults proxy instance. All of this will occur
at runtime, outside of thread isolation.
2026-03-02 14:10:46 +01:00
Amaury Denoyelle
f58b2698ce MEDIUM: proxy: add lock for global accesses during proxy free
Define a new lock with label PROXIES_DEL_LOCK. Its purpose is to protect
operations performed on global lists or trees while a proxy is freed.

Currently, this lock is unneeded as proxies are only freed during
single-threaded init or deinit. However, with the upcoming dynamic
backend deletion, this operation will also be performed at runtime,
outside of thread isolation.
2026-03-02 14:09:25 +01:00
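
A rough pthread-based sketch of the idea, with stand-in names and a
simple list in place of haproxy's global trees: the free path takes the
lock around the unlink, and any concurrent walker of the same collection
would take it too in this simplified model:

    #include <pthread.h>
    #include <stdlib.h>

    struct proxy {
        struct proxy *next_by_name;     /* link in a global by-name registry */
    };

    static struct proxy *proxies_by_name;
    static pthread_mutex_t proxies_del_lock = PTHREAD_MUTEX_INITIALIZER;

    /* free path: may now run at runtime, concurrently with other threads
     * walking the global registry, hence the lock around the unlink */
    static void proxy_free(struct proxy *px)
    {
        struct proxy **prev;

        pthread_mutex_lock(&proxies_del_lock);
        for (prev = &proxies_by_name; *prev; prev = &(*prev)->next_by_name) {
            if (*prev == px) {
                *prev = px->next_by_name;
                break;
            }
        }
        pthread_mutex_unlock(&proxies_del_lock);
        free(px);
    }
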
Amaury Denoyelle
a7d1c59a92 MINOR: proxy: add comment for defaults_px_ref/unref_all()
Write documentation for the functions related to defaults proxy instances.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
98c8c5e16e MINOR: cli: implement wait on be-removable
Implement the be-removable argument for the CLI "wait" command. This
relies on be_check_for_deletion(), which is also used by the "del
backend" handler.

The objective is to test whether a backend instance can be removed. If
this is not the case, the command may return immediately if the target
proxy is incompatible with dynamic removal or if a user action is
required. Otherwise, the command will wait until the temporary
restriction is lifted.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
5ddfbd4b03 MINOR: server: mark backend removal as forbidden if QUIC was used
Currently, a quic_conn on the backend side may access its parent proxy
instance during its lifetime. In particular, this is the case for
counters updates, with the <prx_counters> field directly referencing a
proxy memory zone.

As such, this prevents safe backend removal. One solution would be to
check if the upper connection instance is still alive, as a proxy cannot
be removed if connections are still active. However, this would
completely prevent proxy counter updates via
quic_conn_prx_cntrs_update(), as this is performed on quic_conn release.

Another solution would be to use the refcount, or a dedicated counter
which accounts for QUIC connections on a backend instance. However, the
refcount is currently only used for short-term references, and it could
also have a negative impact on performance.

Thus, the simplest solution for now is to forbid removal of a backend if
a QUIC server is or was used in it. This is considered acceptable for
now as QUIC on the backend side is experimental.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
053887cc98 MINOR: proxy: prevent backend deletion if server still exists in it
Ensure a backend instance cannot be removed if there are still servers
in it. This is checked via be_check_for_deletion() so that "del backend"
cannot be executed. The only solution is to first use "del server" to
remove the server instances.

This check only covers servers not yet targeted via "del server". For
deleted servers not yet purged (due to their refcount), the proxy
refcount is incremented but this does not block "del backend"
invocation.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
7f725f0754 MINOR: proxy: prevent deletion of backend referenced by config elements
Define a new proxy flag PR_FL_NON_PURGEABLE. This is used to mark every
proxy instance explicitly referenced in the config. Such instances
cannot be deleted at runtime.

Static use_backend/default_backend rules are handled in
proxy_finalize(). Also, sample expression proxy references are protected
via smp_resolve_args().

Note that this last case also incidentally protects any proxy
referenced via a CLI "set var" expression. This should not be necessary,
as in that case the variable value is resolved immediately, so the proxy
reference is not needed anymore. This also affects dynamic servers.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
7bf3020952 MINOR: proxy: prevent backend removal when unsupported
Prevent removal of a backend which relies on features not compatible
with dynamic backends. This is the case if either the dispatch or the
transparent option is used, or if a stick-table is declared.

These limitations are similar to the "add backend" ones.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
ad1e00b2ac MINOR: lua: handle proxy refcount
Implement proxy refcount for Lua proxy class. This is similar to the
server class.

In summary, proxy_take() is used to increment refcount when a Lua proxy
is instantiated. proxy_drop() is called via Lua garbage collector. To
ensure a deleted backend is released asap, hlua_check_proxy() now
returns NULL if PR_FL_DELETED is set.

This approach directly depends on Lua GC execution. As such, it
probably suffers from the same limitations as the ones already described
in the previous commit. With the current patch, "del backend" is not
directly impacted though. However, the final proxy deinit may happen
after a long period of time, which could increase memory pressure.

One final observation regarding deinit: it is necessary to delay a
BUG_ON() which checks that the defaults proxies list is empty. It must
now be executed after Lua deinit (called via post_deinit_list). This
should guarantee that all proxies and their defaults refcounts are null.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
f521c2ce2d MINOR: server: take proxy refcount when deleting a server
When a server is deleted via "del server", increment the refcount of its
parent backend. This is necessary as the server is not referenced
anymore in the backend, but can still access it via its own <proxy>
member. Thus, backend removal must not happen until the complete purge
of the server.

The proxy refcount is released in srv_drop() if the flag SRV_F_DELETED
is set, which indicates that "del server" was used. This operation is
performed after the complete release of the server instance to ensure no
access will be performed on the proxy via the server itself. The
refcount must not be decremented if a server is freed without
"del server" invocation.

Another solution could be for servers to always increment the refcount.
However, refcount usage in haproxy is currently limited, so the current
approach is preferred. It also means that if the refcount remains
incremented, it may indicate that some servers were not completely
purged themselves.

Note that this patch may cause issues if "del backend" is used in
parallel with Lua scripts referencing servers. Currently, any server
referenced by Lua must be released by its garbage collector before it
can finally be freed. However, it appears that in some cases the GC does
not run for several minutes; at least this has been observed with Lua
version 5.4.8. In the end, this can result in "del backend" commands
blocking indefinitely.
2026-03-02 14:08:30 +01:00
Amaury Denoyelle
ee1f0527c6 MINOR: proxy: rename default refcount to avoid confusion
Rename proxy conf <refcount> to <def_ref>. This field only serves for
defaults proxy instances. The objective is to avoid confusion with the
newly introduced <refcount> field used for dynamic backends.

As an optimization, it could be possible to remove <def_ref> and only
use <refcount> also for defaults proxies usage. However for now the
simplest solution is implemented.

This patch does not bring any functional change.
2026-03-02 14:07:40 +01:00
Amaury Denoyelle
f3127df74d MINOR: proxy: add refcount to proxies
Implement a refcount in the proxy structure. The objective is to be
able to increment the refcount on a proxy to temporarily prevent its
deletion. This is similar to the server refcount: "del backend" is not
blocked and will remove the targeted instance from the global
proxies_list. However, the final free operation is delayed until the
refcount is null.

As stated above, the API is similar to the servers one. Proxies are
initialized with a refcount of 1. The refcount can be incremented via
proxy_take(). When no longer needed, it is decremented via proxy_drop(),
which replaces the older free_proxy(). Deinit is only performed once the
refcount is null.

This commit also defines flag PR_FL_DELETED. It is set when a proxy
instance has been removed via a "del backend" CLI command. This should
serve as indication to modules which may still have a refcount on the
target proxy so that they can release it as soon as possible.

Note that this new refcount is completely ignored for defaults proxy
instances. For them, proxy_take() is a pure no-op, and the free is
performed immediately on the first proxy_drop() invocation.
2026-03-02 10:44:59 +01:00
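
A minimal sketch of the refcount lifecycle described above (simplified,
non-atomic model; names mirror the commit message but the structure is
not haproxy's):

    #include <stdlib.h>

    #define PR_FL_DELETED 0x01   /* instance removed via "del backend" */

    struct proxy {
        unsigned int flags;
        unsigned int refcount;   /* starts at 1 when the proxy is created */
    };

    static struct proxy *proxy_take(struct proxy *px)
    {
        px->refcount++;          /* hold a reference to delay the final free */
        return px;
    }

    static void proxy_drop(struct proxy *px)
    {
        if (--px->refcount == 0) /* replaces free_proxy(): deinit on last ref */
            free(px);
    }

A module still holding a reference is expected to notice PR_FL_DELETED
and drop that reference as soon as possible.
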
Amaury Denoyelle
ebbdfc5915 MINOR: lua: use watcher for proxies iterator
Ensure that proxy iteration via Lua functions is safe, using a new
watcher member. The principle is similar to the one already used for
server iteration.
2026-03-02 10:36:21 +01:00
Amaury Denoyelle
dd1990a97a MINOR: promex: use watcher to iterate over backend instances
Ensure that proxy iteration in the promex applet is safe, using a new
watcher member. The principle is similar to the one already used for
server iteration.

Note that ctx.p[0] is no longer updated at the end of the function, as
this is automatically done via the watcher itself.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
20376c54e2 MINOR: stats: protect proxy iteration via watcher
Define a new <px_watch> watcher member in the stats applet context. It
is used to register the applet on a proxy when iterating over the
proxies list. <obj1> is automatically updated via the watcher
interaction. The watcher is first initialized prior to the
stats_dump_proxies() invocation.

This guarantees that stats dump is safe even if applet yields and a
backend is removed in parallel.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
4bcfc09acf MINOR: proxy: define proxy watcher member
Define a new member watcher_list in proxy. It will be used to register
modules which iterate over the proxies list. This will ensure that the
operation is safe even if a backend is removed in parallel.
2026-02-27 10:28:24 +01:00
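
Conceptually, a watcher lets an iterator register itself on the element
it is currently visiting, so that a concurrent deletion can advance the
iterator instead of leaving it dangling. A self-contained sketch of the
concept (types and names are illustrative, not haproxy's watcher API):

    #include <stddef.h>

    struct elem {
        struct elem *next;
        struct iter_watch *watchers;   /* iterators parked on this element */
    };

    struct iter_watch {
        struct iter_watch *next;
        struct elem **pos;             /* the iterator's position to fix up */
    };

    /* park an iterator on the element it is currently visiting */
    static void iter_watch_attach(struct elem *e, struct iter_watch *w,
                                  struct elem **pos)
    {
        w->pos = pos;
        w->next = e->watchers;
        e->watchers = w;
    }

    /* on deletion, move every parked iterator forward so none dangles */
    static void elem_delete_fixup(struct elem *e)
    {
        struct iter_watch *w;

        for (w = e->watchers; w; w = w->next)
            *w->pos = e->next;
        e->watchers = NULL;
    }
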
Amaury Denoyelle
08623228a1 MINOR: proxy: define a basic "del backend" CLI
Add "del backend" handler which is restricted to admin level. Along with
it, a new function be_check_for_deletion() is used to test if the
backend is removable.
2026-02-27 10:28:24 +01:00
Amaury Denoyelle
d166894fef MINOR: server: refactor srv_detach()
Correct documentation for srv_detach() which previously stated that this
function could be called for a server even if not stored in its proxy
list. In fact there is a BUG_ON() which detects this case.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
dd55f2246e MINOR: proxy: convert proxy flags to uint
The proxy flags member was of type char. This will soon not be
sufficient as new flags will be defined. As such, convert the flags
member to an unsigned int.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
78549c66c5 BUG/MINOR: proxy: add dynamic backend into ID tree
Add the missing proxy_index_id() call in the "add backend" handler.
This step is responsible for storing the newly created proxy instance
in the used_proxy_id global tree.

No need to backport.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
5000f0b2ef BUG/MINOR: promex: fix server iteration when last server is deleted
Server iteration via promex is now resilient to runtime server deletion
thanks to the watcher mechanism. However, the watcher was not correctly
initialized, which could cause duplicate metrics reporting.

This issue happens when the promex dump yields while manipulating the
last server of a proxy. If this server is removed in parallel, the <sv>
pointer will be set to NULL when promex resumes. Instead of switching to
another proxy, the code would reuse the same one and iterate again over
the same server list.

To fix this issue, the <sv> pointer must not be reinitialized just after
a resumption point. Instead, this is now performed before
promex_dump_srv_metrics(), or just after switching to another proxy
instance. Thus, on resumption, if promex_dump_srv_metrics() is entered
with <sv> set to NULL, it means that the server was deleted and the end
of the current proxy's server list was reached, hence iteration restarts
on the next proxy instance.

Note that ctx.p[1] does not need to be manually updated at the end of
promex_dump_srv_metrics() as srv_watch already does that.

This patch must be backported up to 3.0.
2026-02-26 18:24:36 +01:00
Amaury Denoyelle
00a106059e MINOR: promex: test applet resume in stress mode
Implement a stress mode that forces a yield of the promex applet each
time a metric is displayed. This is implemented by returning 0 in
promex_dump_ts() each time the output buffer is not empty.

To test this, haproxy must be compiled with DEBUG_STRESS and the
following configuration must be used:

  global
    stress-level 1
2026-02-26 18:24:36 +01:00
Willy Tarreau
9db62d408a BUG/MINOR: call EXTRA_COUNTERS_FREE() before srv_free_params() in srv_drop()
As seen with the last changes to counters allocation, the move of the
counters storage to the thread group as operated in commit 04a9f86a85
("MEDIUM: counters: add a dedicated storage for extra_counters in various
structs") causes some random errors when using ASAN, because the extra
counters are freed in srv_drop() after calling srv_free_params(), which
is responsible for freeing the per-thread group storage.

For the proxies however it's OK because free calls are made before the
call to deinit_proxy() which frees the per_tgrp area.

No backport is needed, this is purely 3.4-dev.
2026-02-26 17:24:59 +01:00
Willy Tarreau
b604064980 MEDIUM: counters: make EXTRA_COUNTERS_GET() consider tgid
Now we store and retrieve only counters for the current tgid when more
than one is supported. This significantly reduces contention on shared
stats. The haterm utility saw its performance increase from
4.9 to 5.8M req/s in H1, and 6.0 to 7.6M for H2, both with 5 groups of
16 threads, showing that we don't necessarily need insane amounts of
groups.
2026-02-26 17:03:53 +01:00
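
A simplified model of the per-thread-group layout this relies on: one
counter area per group, laid out contiguously, with each thread updating
only its own group's area (illustrative code, not the actual
EXTRA_COUNTERS_* macros):

    #include <stddef.h>

    struct extra_counters {
        char *data;             /* base of the first thread group's area */
        size_t tgrp_step;       /* distance between two groups' areas, 0 if shared */
        unsigned int nbtgrp;    /* number of thread groups using this storage */
    };

    /* return the counter area of thread group <tgid> (1-based); with a
     * shared storage (step == 0) every group gets the same area */
    static void *counters_of_tgrp(const struct extra_counters *c, unsigned int tgid)
    {
        return c->data + (size_t)(tgid - 1) * c->tgrp_step;
    }
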
Willy Tarreau
9019a5db93 MEDIUM: counters: return aggregate extra counters in ->fill_stats()
Now thanks to new macro EXTRA_COUNTERS_AGGR() we can iterate over all
thread groups storages when returning the data for a given metric. This
remains convenient and mostly transparent. The caller continues to pass
the pointer to the metric in the first group, and offsets are calculated
for all other groups and data summed. For now all groups except the
first one contain only zeroes but reported values are nevertheless
correct.
2026-02-26 17:03:53 +01:00
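
Aggregation then walks all groups' areas at a fixed offset and sums the
values; a rough sketch under the same simplified layout as above (again
illustrative, not the real EXTRA_COUNTERS_AGGR() macro):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct extra_counters {
        char *data;             /* base of the first thread group's area */
        size_t tgrp_step;       /* distance between two groups' areas */
        unsigned int nbtgrp;    /* number of thread groups */
    };

    /* sum one 64-bit metric across all thread groups; <first> points to
     * the metric inside the first group's area, as described above */
    static uint64_t counter_aggregate(const struct extra_counters *c,
                                      const uint64_t *first)
    {
        size_t off = (const char *)first - c->data;
        uint64_t total = 0, v;
        unsigned int grp;

        for (grp = 0; grp < c->nbtgrp; grp++) {
            memcpy(&v, c->data + grp * c->tgrp_step + off, sizeof(v));
            total += v;
        }
        return total;
    }
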
Willy Tarreau
de0eddf512 MINOR: counters: add EXTRA_COUNTERS_BASE() to retrieve extra_counters base storage
The goal is to always retrieve the storage address of the first thread
group for the given module. This will be used to iterate over all thread
groups. For now it returns the same value as EXTRA_COUNTERS_GET().
2026-02-26 17:03:53 +01:00
Willy Tarreau
a60e1fcf7f MEDIUM: counters: store the number of thread groups accessing extra_counters
In order to be able to properly allocate all storage and retrieve data
from there, we'll need to know how many thread groups are supposed to
access it. Let's store the number of thread groups at init time. If the
tgrp_step is zero, there's always only one tg though.

Now EXTRA_COUNTERS_ALLOC() takes this number of thread groups in argument
and stores it in the structure. It also allocates as many areas as needed,
incrementing the datap pointer by the step for each of them.

EXTRA_COUNTERS_FREE() uses this info to free all allocated areas.

EXTRA_COUNTERS_INIT() initializes all allocated areas, this is used
elsewhere to clear/preset counters, e.g. in proxy_stats_clear_counters().
It involves a memcpy() call for each array, which is normally preset to
something empty but might also be used to preset certain non-scalar
fields such as an instance name.
2026-02-26 17:03:53 +01:00
Willy Tarreau
7ac47910a2 MINOR: counters: store a tgroup step for extra_counters to access multiple tgroups
We'll need to permit any user to update its own tgroup's extra counters
instead of the global ones. For this we now store the per-tgroup step
between two consecutive data storages, for when they're stored in a
tgroup array. When shared (e.g. resolvers or listeners), we just store
zero to indicate that it doesn't scale with tgroups. For now only the
registration was handled, it's not used yet.
2026-02-26 17:03:53 +01:00
Willy Tarreau
04a9f86a85 MEDIUM: counters: add a dedicated storage for extra_counters in various structs
Servers, proxies, listeners and resolvers all use extra_counters. We'll
need to move the storage to per-tgroup for those where it matters. Now
we're relying on an external storage, and the data member of the struct
was replaced with a pointer to that pointer to data called datap. When
the counters are registered, these datap are set to point to relevant
locations. In the case of proxies and servers, it points to the first
tgrp's storage. For listeners and resolvers, it points to a local
storage. The rationale here is that listeners are limited to a single
group anyway, and that resolvers have a low enough load so that we do
not care about contention there.

Nothing should change for the user at this point.
2026-02-26 17:03:47 +01:00
Willy Tarreau
8dd22a62a4 CLEANUP: counters: only retrieve zeroes for unallocated extra_counters
Since version 2.4 with commit 7f8f6cb926 ("BUG/MEDIUM: stats: prevent
crash if counters not alloc with dummy one") we can afford to always
update extra_counters because we know they're always either allocated
or linked to a dedicated trash. However, the ->fill_stats() callbacks
continue to access such values, making it technically possible to
retrieve random counters from this trash, which is not really clean.
Let's implement an explicit test in the ->fill_stats() functions to
only return 0 for the metric when not allocated like this. It's much
cleaner because it guarantees that we're returning an empty counter
in this case rather than random values.

The situation currently happens for dummy servers like the ones used
in Lua proxies as well as those used by rings (e.g. used for logging
or traces). Normally, none of the objects retrieved via stats or
Prometheus is concerned by this unallocated extra_counters situation,
so this is more about a cleanup than a real fix.
2026-02-26 08:24:03 +01:00
Willy Tarreau
95a9f472d2 MEDIUM: counters: change the fill_stats() API to pass the module and extra_counters
We'll soon need to iterate over thread groups in the fill_stats() functions,
so let's first pass the extra_counters and stats_module pointers to the
fill_stats functions. They now call EXTRA_COUNTERS_GET() themselves with
these elements in order to retrieve the required pointer. Nothing else
changed, and it's getting even a bit more transparent for callers.

This doesn't change anything visible however.
2026-02-26 08:24:03 +01:00
Willy Tarreau
56fc12d6fa CLEANUP: stats: drop stats.h / stats-t.h where not needed
A number of C files include stats.h or stats-t.h, many of which were
just to access the counters. Now those which really need counters rely
on counters.h or counters-t.h, which already reduces the amount of
preprocessed code to be built (~3000 lines or about 0.05%).
2026-02-26 08:24:03 +01:00
Willy Tarreau
2b463e9b1f REORG: stats/counters: move extra_counters to counters not stats
It was always difficult to find extra_counters when the rest of the
counters are now in counters-t.h. Let's move the types to counters-t.h
and the macros to counters.h. Stats include them since they're used
there. But some users could be cleaned from the stats definitions now.
2026-02-26 08:24:03 +01:00
Willy Tarreau
9910af6117 CLEANUP: quic-stats: include counters from quic_stats
There's something a bit awkward in the way stats counters are inherited
through the QUIC modules: quic_conn-t includes quic_stats-t.h, which
declares quic_stats_module as extern from a type that's not known from
this file. And anyway externs should not be exported from type definitions
since they're not part of the ABI itself.

This commit moves the declaration to quic_stats.h which now takes care
to include stats-t.h to get the definition of struct stats_module. The
few users who used to learn it through quic_conn-t.h now include it
explicitly. As a bonus this reduces the number of preprocessed lines
by 5000 (~0.1%).

By the way, it looks like struct stats_module could benefit from being
moved off stats-t.h since it's only used at places where the rest of
the stats is not needed. Maybe something to consider for a future
cleanup.
2026-02-26 08:24:03 +01:00
Willy Tarreau
fb5e280e0d CLEANUP: tree-wide: drop a few useless null-checks before free()
We only support platforms where free(NULL) is a NOP so that
null checks are useless before free(). Let's drop them to keep
the code clean. There were a few in cfgparse-global, flt_trace,
ssl_sock and stats.
2026-02-26 08:24:03 +01:00
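
In other words, patterns like the first function below are reduced to
the second one, since free(NULL) is a no-op on the supported platforms:

    #include <stdlib.h>

    /* before: redundant null check */
    static void drop_old(char *ptr)
    {
        if (ptr)
            free(ptr);
    }

    /* after: free(NULL) is a no-op, so the check is dropped */
    static void drop_new(char *ptr)
    {
        free(ptr);
    }
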
Willy Tarreau
709c3be845 BUG/MINOR: server: adjust initialization order for dynamic servers
It appears that in cli_parse_add_server(), we're calling srv_alloc_lb()
and stats_allocate_proxy_counters_internal() before srv_preinit() which
allocates the thread groups. LB algos can make use of the per_tgrp part
which is initialized by srv_preinit(). Fortunately for now no algo uses
both tgrp and ->server_init() so this explains why this remained
unnoticed to date. Also, extra counters will soon require per_tgrp to
already be initialized. So let's move these between srv_preinit() and
srv_postinit(). It's possible that other parts will have to be moved
in between.

This could be backported to recent versions for the sake of safety but
it looks like the current code cannot tell the difference.
2026-02-26 08:24:03 +01:00
Willy Tarreau
44932b6c41 BUG/MEDIUM: mux-h2: make sure to always report pending errors to the stream
Some stream parsing errors that do not affect the connection result in
the parsed block not being transferred from the rx buffer to the channel
and not being reported upstream in rcv_buf(), causing the stconn to time
out. Let's detect this condition, and propagate term flags anyway since
no more progress will be made otherwise.

This should be backported at least till 3.2, probably even 2.8.
2026-02-26 00:30:42 +01:00
Willy Tarreau
e67e36c9eb MINOR: mux-h2: add a new setting, "tune.h2.log-errors" to tweak error logging
The H2 mux currently logs whenever some decoding fails. Most of the errors
happen at the connection level, but some are even at the stream level,
meaning that multiple logs can be emitted for a given connection, which
can quickly use some resource for little value. This new setting allows
to tweak this and decide to only log errors that affect the connection,
or even none at all.

This should be backported at least as far as 3.2.
2026-02-25 22:43:40 +01:00
Willy Tarreau
cad6e0b3da MINOR: mux-h2: also count glitches on invalid trailers
Two cases were not causing glitches to be incremented:
  - invalid trailers
  - trailers on closed streams

This patch addresses this. It could be backported, at least to 3.2.
2026-02-25 22:03:16 +01:00
Frederic Lecaille
5af42fa342 CLEANUP: ssl: remove outdated comments
ssl_sock_srv_try_reuse_sess() was modified by this commit to no longer
fail (it now returns void), but the related comments remained:

  BUG/MINOR: quic: missing app ops init during backend 0-RTT sessions

This patch cleans them up.
2026-02-25 11:25:05 +01:00
Frederic Lecaille
89c75b0777 BUG/MINOR: quic: missing app ops init during backend 0-RTT sessions
The QUIC mux requires "application operations" (app ops), which are a list
of callbacks associated with the application level (i.e., h3, h0.9) and
derived from the ALPN. For 0-RTT, when the session cache cannot be reused
before activation, the current code fails to reach the initialization of
these app ops, causing the mux to crash during its initialization.

To fix this, this patch restores the behavior of
ssl_sock_srv_try_reuse_sess(), whose purpose was to reuse sessions stored
in the session cache regardless of whether 0-RTT was enabled, prior to
this commit:

  MEDIUM: quic-be: modify ssl_sock_srv_try_reuse_sess() to reuse backend
  sessions (0-RTT)

With this patch, this function now does only one thing: attempt to reuse a
session, and that's it!

This patch allows ignoring whether a session was successfully reused from
the cache or not. This directly fixes the issue where app ops
initialization was skipped upon a session cache reuse failure. From a
functional standpoint, starting a mux without reusing the session cache
has no negative impact; the mux will start, but with no early data to
send.

Finally, there is the case where the ALPN is reset when the backend is
stopped. It is critical to continue locking read access to the ALPN to
secure shared access, which this patch does. It is indeed possible for the
server to be stopped between the call to connect_server() and
quic_reuse_srv_params(). But this cannot prevent the mux to start
without app ops. This is why a 'TODO' section was added, as a reminder that a
race condition regarding the ALPN reset still needs to be fixed.

Must be backported to 3.3
2026-02-25 11:13:52 +01:00
Olivier Houchard
84837b6e70 BUG/MEDIUM: cpu-topo: Distribute CPUs fairly across groups
Make sure CPUs are distributed fairly across groups, in case the number
of groups to generate is not a divisor of the number of CPUs, otherwise
we may end up with a few groups that will have no CPU bound to them.

This was introduced in 3.4-dev2 with commit 56fd0c1a5c ("MEDIUM: cpu-topo:
Add an optional directive for per-group affinity"). No backport is
needed unless this commit is backported.
2026-02-24 08:17:16 +01:00
Frederic Lecaille
ca5332a9c3 BUG/MINOR: haterm: cannot reset default "haterm" mode
When "mode haterm" was set in a "defaults" section, it could not be
overridden in subsequent sections using the "mode" keyword. This is because
the proxy stream instantiation callback was not being reset to the
default stream_new() value.

This could break the stats URI with a configuration such as:

    defaults
        mode haterm
        # ...

    frontend stats
        bind :8181
        mode http
        stats uri /

This patch ensures the ->stream_new_from_sc() proxy callback is reset
to stream_new() when the "mode" keyword is parsed for any mode other
than "haterm".

No need to backport.
2026-02-23 17:57:19 +01:00
Maxime Henrion
a9dc8e2587 MINOR: quic: add a new metric for ncbuf failures
This counts the number of times we failed to add data to the ncbuf
buffer because of the gap size limit.
2026-02-23 17:47:45 +01:00
Amaury Denoyelle
05d73aa81c MINOR: proxy: improve code when checking server name conflicts
During proxy_finalize(), a lookup is performed over the servers-by-name
tree to detect any collision. Only the first conflict for each server
instance is reported, to avoid a combinatorial explosion with too many
alerts shown.

Previously, this was written using a for loop without any iteration
step. Replace it with a simple if statement, which is cleaner.

This should fix github issue #3276.
2026-02-23 16:28:41 +01:00
Amaury Denoyelle
c528824094 MINOR: ncbmbuf: improve itbmap_next() code
itbmap_next() advances an iterator over a ncbmbuf buffer storage. When
reaching the end of the buffer, the <b> field is set to NULL, and the
caller is expected to stop working with the iterator.

Complete this part to ensure that the itbmap type is fully initialized
when a null iterator value is returned. This is not strictly required
given the above description, but it is better to avoid any possible
future mistake.

This should fix coverity issue from github #3273.

This could be backported up to 2.8.
2026-02-23 16:28:41 +01:00
Willy Tarreau
868dd3e88b MINOR: traces: always mark trace_source as thread-aligned
Some perf profiles occasionally show that reading the trace source's
state can take some time, which is not expected at all. It just happens
that the trace_source is not cache-aligned so depending on linkage, it
may share a cache line with a more active variable, thereby inducing a
slowdown for all threads trying to read the variable.

Let's always mark it aligned to avoid this. For now the problem was not
observed again.
2026-02-23 16:22:59 +01:00
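
The fix boils down to forcing cache-line alignment on the object so it
cannot share a line with an unrelated hot variable; a generic
illustration, assuming 64-byte cache lines and a stand-in structure:

    /* Without explicit alignment, <ts> may end up sharing a 64-byte cache
     * line with a frequently written variable, so every read of <ts> pays
     * for that variable's invalidations (false sharing). */
    struct trace_source_like {             /* stand-in, not the real struct */
        unsigned int state;
    };

    /* force the object onto its own cache line (assumes 64-byte lines) */
    static struct trace_source_like ts __attribute__((aligned(64)));
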
Christopher Faulet
c2b5446292 BUG/MEDIUM: spoe: Acquire context buffer in applet before consuming a frame
Changes brought to support large buffers revealed a bug in the SPOE applet
when a frame is copied in the SPOE context buffer. A b_xfer() was performed
without allocating the SPOE context buffer. This is not expected. As stated
in the function documentation, the caller is responsible for ensuring there
is enough space in the destination buffer. So, first of all, it must ensure
this buffer was allocated.

With recent changes, we are able to hit a BUG_ON() because the swap is no
longer possible if the source and destination buffer sizes are not the same.

This patch should fix the issue #3286. It could be backported as far as 3.1.
2026-02-23 15:47:25 +01:00
Willy Tarreau
bbd8492c22 CLEANUP: cfgparse: accept-invalid-http-* does not support "no"/"defaults"
Some options do not support "no" nor "defaults" and they're placed after
the check for their absence. However, "accept-invalid-http-request" and
"accept-invalid-http-response" still used to check for the flags that
come with these prefixes, but Coverity noticed this was dead code in
github issue #3272. Let's just drop the test.

No backport needed as it's just dead code.
2026-02-23 15:44:40 +01:00
Ilia Shipitsin
bf363a7135 CI: remove redundant "halog" compilation
Since 6499c0a0d5, halog is built in the vtest workflow, so there is no
need to build it twice.
2026-02-23 11:11:26 +01:00
Ilia Shipitsin
c44d6c6c71 CI: use the latest docker for QUIC Interop
The quic-interop runner uses features available in Docker v28.1 while
the GitHub runner includes v28.0.

Let's make sure to set up the latest available version.
2026-02-23 11:11:20 +01:00
Frederic Lecaille
dfa8907a3d CLEANUP: ssl: Remove a useless variable from ssl_gen_x509()
This function was recently created by moving code from acme_gen_tmp_x509()
(in acme.c) to ssl_gencrt.c (ssl_gen_x509()). The <ctmp> variable was
initialized and then freed without ever being used. This was already the
case in the original acme_gen_tmp_x509() function.

This patch removes these useless statements.

Reported in GH #3284
2026-02-23 10:47:15 +01:00
Frederic Lecaille
bb3304c6af CLEANUP: haterm: avoid static analyzer warnings about rand() use
Avoid warnings such as the following from Coverity:

CID 1645121: (#1 of 1): Calling risky function (DC.WEAK_CRYPTO)
dont_call: random should not be used for security-related applications,
because linear congruential algorithms are too easy to break.

Reported in GH #3283 and #3285
2026-02-23 10:39:59 +01:00
Frederic Lecaille
a5a053e612 CLEANUP: haterm: remove unreachable labels hstream_add_data()
This function does not return a status. It never fails.
Remove some useless dead code.
2026-02-23 10:13:25 +01:00
Mia Kanashi
5aa30847ae BUG/MINOR: acme: fix incorrect number of arguments allowed in config
Fix the incorrect number of arguments allowed in the config for the
keywords "challenge" and "account-key".
2026-02-23 09:46:34 +01:00
Hyeonggeun Oh
ca5c07b677 MINOR: ssl: clarify error reporting for unsupported keywords
This patch changes the registration of the following keywords to be
unconditional:
  - ssl-dh-param-file
  - ssl-engine
  - ssl-propquery, ssl-provider, ssl-provider-path
  - ssl-default-bind-curves, ssl-default-server-curves
  - ssl-default-bind-sigalgs, ssl-default-server-sigalgs
  - ssl-default-bind-client-sigalgs, ssl-default-server-client-sigalgs

Instead of excluding them at compile time via #ifdef guards in the keyword
registration table, their parsing functions now check feature availability
at runtime and return a descriptive error when the feature is missing.

For features controlled by the SSL library (providers, curves, sigalgs,
DH), the error message includes the actual OpenSSL version string via
OpenSSL_version(OPENSSL_VERSION), so users can immediately identify which
library they are running rather than seeing cryptic internal macro names.

For ssl-dh-param-file, the message also includes "(no DH support)" as a
hint, since OPENSSL_NO_DH can be set either by an OpenSSL build or by
HAProxy itself in certain configurations.

For ssl-engine, which depends on a HAProxy build-time flag (USE_ENGINE),
the message retains the flag name as it is more actionable for the user.

This addresses issue https://github.com/haproxy/haproxy/issues/3246.
2026-02-23 09:40:18 +01:00
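
Schematically, the pattern is: always register the keyword, and let its
parser decide whether the feature exists, returning a message that names
the running library. A hedged sketch follows; the parser prototype and
error reporting below are simplified stand-ins for haproxy's keyword
API, while OpenSSL_version(OPENSSL_VERSION) is the real OpenSSL accessor
mentioned above:

    #include <stdio.h>
    #include <openssl/crypto.h>    /* OpenSSL_version(), OPENSSL_VERSION */

    /* hypothetical parser for "ssl-dh-param-file": the keyword is always
     * registered; availability is checked here instead of hiding the
     * keyword behind #ifdef in the registration table */
    static int parse_ssl_dh_param_file(const char *arg, char *err, size_t errlen)
    {
    #ifndef OPENSSL_NO_DH
        (void)arg;                 /* normal parsing of <arg> would go here */
        return 0;
    #else
        (void)arg;
        snprintf(err, errlen,
                 "'ssl-dh-param-file' is not supported by %s (no DH support)",
                 OpenSSL_version(OPENSSL_VERSION));
        return -1;
    #endif
    }
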
William Lallemand
8d54cda0af BUG/MINOR: acme: wrong labels logic always memprintf errmsg
In acme_req_finalize(), acme_req_challenge(), acme_req_neworder(),
acme_req_account(), and acme_post_as_get(), the success path always
calls unconditionally memprintf(errmsg, ...).

This may result in a leak of errmsg.

Additionally, acme_res_chkorder(), acme_res_finalize(), acme_res_auth(),
and acme_res_neworder() had unused 'out:' labels that were removed.

Must be backported as far as 3.2.
2026-02-22 22:30:38 +01:00
William Lallemand
e63722fed4 BUG/MINOR: acme: acme_ctx_destroy() leaks auth->dns
365a696 ("MINOR: acme: emit a log for DNS-01 challenge response")
introduced the auth->dns member, which is allocated with istdup(). But
this member is never freed; instead, auth->token was freed twice by
mistake.

Must be backported to 3.2.
2026-02-21 16:10:29 +01:00
Amaury Denoyelle
0f95e73032 BUG/MINOR: quic/h3: display QUIC/H3 backend module on HTML stats
QUIC is now implemented on the backend side. Complete definitions for
QUIC/H3 stats module to add STATS_PX_CAP_BE capability.

This change is necessary to display QUIC/H3 counters on backend lines
for HTML stats page.

This should be backported up to 3.3.
2026-02-20 14:08:27 +01:00
Amaury Denoyelle
5f26cf162c MINOR: quic: add BUG_ON() on half_open_conn counter access from BE
half_open_conn is a proxy counter used to account for quic_conn
instances in the half-open state: this represents a connection whose
address is not yet validated (via a successful handshake, or via token
validation).

This counter only makes sense for the frontend side. Currently, the code
is safe as the access is only performed if the quic_conn is not yet
flagged with QUIC_FL_CONN_PEER_VALIDATED_ADDR, which is always set for
backend connections.

To better reflect this, add a BUG_ON() when half_open_conn is
incremented/decremented to ensure this never occurs for backend
connections.
2026-02-20 14:08:27 +01:00
Amaury Denoyelle
b8cb8e1a65 BUG/MINOR: quic: fix counters used on BE side
quic_conn is initialized with a pointer to its proxy counters. These
counters are then updated during the connection lifetime.

The counters pointer was incorrect for backend quic_conn instances, as
it always referenced the frontend counters. For a pure backend, no stats
would be updated. For listen instances, this resulted in incorrect stats
reporting.

Fix this by correctly setting the proxy counters based on the connection
side.

This must be backported up to 3.3.
2026-02-20 14:08:27 +01:00
Frederic Lecaille
db360d466b BUG/MINOR: haterm: missing allocation check in copy_argv()
This is a very minor bug with a very low probability of occurring.
However, it could be flagged by a static analyzer or result in a small
contribution, which is always time-consuming for very little gain.
2026-02-20 12:12:10 +01:00
Frederic Lecaille
92581043fb MINOR: haterm: add long options for QUIC and TCP "bind" settings
Add the --quic-bind-opts and --tcp-bind-opts long options to append
settings to all QUIC and TCP bind lines. This requires modifying the argv
parser to first process these new options, ensuring they are available
during the second argv pass to be added to each relevant "bind" line.
2026-02-20 12:00:34 +01:00
Frederic Lecaille
8927426f78 MINOR: haterm: provide -b and -c options (RSA key size, ECDSA curves)
Add -b and -c options to the haterm argv parser. Use -b to specify the RSA
private key size (in bits) and -c to define the ECDSA certificate curves.
These self-signed certificates are required for haterm SSL bindings.
2026-02-20 10:43:54 +01:00
Amaury Denoyelle
f71b2f4338 BUG/MINOR: server: enable no-check-sni-auto for dynamic servers
Allow the server keyword "no-check-sni-auto" for dynamic servers. This
may be necessary for users who do not want to benefit from auto SNI for
checks.

Keyword "check-sni-auto" is still deactivated for dynamic servers, for
the same reason as "sni-auto" (cf the previous patch for a complete
explanation).

This must be backported up to 3.3.
2026-02-20 09:02:47 +01:00
Amaury Denoyelle
de5fc2f515 BUG/MINOR: server: set auto SNI for dynamic servers
Auto SNI is configured during the config validity check. However,
nothing was implemented for dynamic servers.

Fix this by implementing auto SNI configuration in the "add server" CLI
handler. The auto SNI configuration code is moved into a dedicated
function, srv_configure_auto_sni(), called for both static and dynamic
servers.

Along with this, allow the keyword "no-sni-auto" on dynamic servers, so
that this process can be deactivated if desired. Note that "sni-auto"
remains unavailable as it only makes sense with default-server entries,
which are never used for dynamic server creation.

This must be backported up to 3.3.
2026-02-20 09:02:47 +01:00
Amaury Denoyelle
2b0fc33114 BUG/MINOR: proxy: detect strdup error on server auto SNI
There was no check on the result of the strdup() used to set up auto
SNI on a server instance during the config validity check. In case of
failure, the error would be silently ignored, as the following
server_parse_exprs() does nothing when the <sni_expr> server field is
NULL. Hence, no SNI would be used on the server, without any error or
warning being reported.

Fix this by adding a check on the strdup() return value. On error,
ERR_ABORT is reported along with an alert, so parsing is interrupted as
soon as possible.

This must be backported up to 3.3. Note that the related code in this
case is present in cfgparse.c source file.
2026-02-20 09:02:47 +01:00
William Lallemand
f5a182c7e7 CLEANUP: acme: remove duplicate includes
Remove a bunch of duplicate includes in acme.c.
2026-02-19 23:57:20 +01:00
Willy Tarreau
028940725a [RELEASE] Released version 3.4-dev5
Released version 3.4-dev5 with the following main changes :
    - DOC: internals: addd mworker V3 internals
    - BUG/MINOR: threads: Initialize maxthrpertgroup earlier.
    - BUG/MEDIUM: threads: Differ checking the max threads per group number
    - BUG/MINOR: startup: fix allocation error message of progname string
    - BUG/MINOR: startup: handle a possible strdup() failure
    - MINOR: cfgparse: validate defaults proxies separately
    - MINOR: cfgparse: move proxy post-init in a dedicated function
    - MINOR: proxy: refactor proxy inheritance of a defaults section
    - MINOR: proxy: refactor mode parsing
    - MINOR: backend: add function to check support for dynamic servers
    - MINOR: proxy: define "add backend" handler
    - MINOR: proxy: parse mode on dynamic backend creation
    - MINOR: proxy: parse guid on dynamic backend creation
    - MINOR: proxy: check default proxy compatibility on "add backend"
    - MEDIUM: proxy: implement dynamic backend creation
    - MINOR: proxy: assign dynamic proxy ID
    - REGTESTS: add dynamic backend creation test
    - BUG/MINOR: proxy: fix clang build error on "add backend" handler
    - BUG/MINOR: proxy: fix null dereference in "add backend" handler
    - MINOR: net_helper: extend the ip.fp output with an option presence mask
    - BUG/MINOR: proxy: fix default ALPN bind settings
    - CLEANUP: lb-chash: free lb_nodes from chash's deinit(), not global
    - BUG/MEDIUM: lb-chash: always properly initialize lb_nodes with dynamic servers
    - CLEANUP: haproxy: fix bad line wrapping in run_poll_loop()
    - MINOR: activity: support setting/clearing lock/memory watching for task profiling
    - MEDIUM: activity: apply and use new finegrained task profiling settings
    - MINOR: activity: allow to switch per-task lock/memory profiling at runtime
    - MINOR: startup: Add the SSL lib verify directory in haproxy -vv
    - BUG/MINOR: ssl: SSL_CERT_DIR environment variable doesn't affect haproxy
    - CLEANUP: initcall: adjust comments to INITCALL{0,1} macros
    - DOC: proxy-proto: underline the packed attribute for struct pp2_tlv_ssl
    - MINOR: queues: Check minconn first in srv_dynamic_maxconn()
    - MINOR: servers: Call process_srv_queue() without lock when possible
    - BUG/MINOR: quic: ensure handshake speed up is only run once per conn
    - BUG/MAJOR: quic: reject invalid token
    - BUG/MAJOR: quic: fix parsing frame type
    - MINOR: ssl: Missing '\n' in error message
    - MINOR: jwt: Convert an RSA JWK into an EVP_PKEY
    - MINOR: jwt: Add new jwt_decrypt_jwk converter
    - REGTESTS: jwt: Add new "jwt_decrypt_jwk" tests
    - MINOR: startup: Add HAVE_WORKING_TCP_MD5SIG in haproxy -vv
    - MINOR: startup: sort the feature list in haproxy -vv
    - MINOR: startup: show the list of detected features at runtime with haproxy -vv
    - SCRIPTS: build-vtest: allow to set a TMPDIR and a DESTDIR
    - MINOR: filters: rework RESUME_FILTER_* macros as inline functions
    - MINOR: filters: rework filter iteration for channel related callback functions
    - MEDIUM: filters: use per-channel filter list when relevant
    - DEV: gdb: add a utility to find the post-mortem address from a core
    - BUG/MINOR: deviceatlas: add missing return on error in config parsers
    - BUG/MINOR: deviceatlas: add NULL checks on strdup() results in config parsers
    - BUG/MEDIUM: deviceatlas: fix resource leaks on init error paths
    - BUG/MINOR: deviceatlas: fix off-by-one in da_haproxy_conv()
    - BUG/MINOR: deviceatlas: fix cookie vlen using wrong length after extraction
    - BUG/MINOR: deviceatlas: fix double-checked locking race in checkinst
    - BUG/MINOR: deviceatlas: fix resource leak on hot-reload compile failure
    - BUG/MINOR: deviceatlas: fix deinit to only finalize when initialized
    - BUG/MINOR: deviceatlas: set cache_size on hot-reloaded atlas instance
    - MINOR: deviceatlas: check getproptype return and remove pprop indirection
    - MINOR: deviceatlas: increase DA_MAX_HEADERS and header buffer sizes
    - MINOR: deviceatlas: define header_evidence_entry in dummy library header
    - MINOR: deviceatlas: precompute maxhdrlen to skip oversized headers early
    - CLEANUP: deviceatlas: add unlikely hints and minor code tidying
    - DEV: gdb: use unsigned longs to display pools memory usage
    - BUG/MINOR: ssl: lack crtlist_dup_ssl_conf() declaration
    - BUG/MINOR: ssl: double-free on error path w/ ssl-f-use parser
    - BUG/MINOR: ssl: fix leak in ssl-f-use parser upon error
    - BUG/MINOR: ssl: clarify ssl-f-use errors in post-section parsing
    - BUG/MINOR: ssl: error with ssl-f-use when no "crt"
    - MEDIUM: backend: make "balance random" consider tg local req rate when loads are equal
    - BUG/MAJOR: Revert "MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS"
    - BUG/MEDIUM: h3: reject frontend CONNECT as currently not implemented
    - MINOR: mux-quic: add BUG_ON_STRESS() when draining data on closed stream
    - REGTESTS: fix quoting in feature cmd which prevents test execution
    - BUG/MEDIUM: mux-h2/quic: Stop sending via fast-forward if stream is closed
    - BUG/MEDIUM: mux-h1: Stop sending vi fast-forward for unexpected states
    - BUG/MEDIUM: applet: Fix test on shut flags for legacy applets (v2)
    - DEV: term-events: Fix hanshake events decoding
    - BUG/MINOR: flt-trace: Properly compute length of the first DATA block
    - MINOR: flt-trace: Add an option to limit the amount of data forwarded
    - CLEANUP: compression: Remove unused static buffers
    - BUG/MEDIUM: shctx: Use the next block when data exactly filled a block
    - BUG/MINOR: http-ana: Stop to wait for body on client error/abort
    - MINOR: stconn: Add missing SC_FL_NO_FASTFWD flag in sc_show_flags
    - REORG: stconn: Move functions related to channel buffers to sc_strm.h
    - BUG/MEDIUM: jwe: fix timing side-channel and dead code in JWE decryption
    - MINOR: tree-wide: Use the buffer size instead of global setting when possible
    - MINOR: buffers: Swap buffers of same size only
    - BUG/MINOR: config: Check buffer pool creation for failures
    - MEDIUM: cache: Don't rely on a chunk to store messages payload
    - MEDIUM: stream: Limit number of synchronous send per stream wakeup
    - MEDIUM: compression: Be sure to never compress more than a chunk at once
    - MEDIUM: mux-h1/mux-h2/mux-fcgi/h3: Disable 0-copy for buffers of different size
    - MEDIUM: applet: Disable 0-copy for buffers of different size
    - MINOR: h1-htx: Disable 0-copy for buffers of different size
    - MEDIUM: stream: Offer buffers of default size only
    - BUG/MEDIUM: htx: Fix function used to change part of a block value when defrag
    - MEDIUM: htx: Refactor transfer of htx blocks to merge DATA blocks if possible
    - MEDIUM: htx: Refactor htx defragmentation to merge data blocks
    - MEDIUM: htx: Improve detection of fragmented/unordered HTX messages
    - MINOR: http-ana: Do a defrag on unaligned HTX message when waiting for payload
    - MINOR: http-fetch: Use pointer to HTX DATA block when retrieving HTX body
    - MEDIUM: dynbuf: Add a pool for large buffers with a configurable size
    - MEDIUM: chunk: Add support for large chunks
    - MEDIUM: stconn: Properly handle large buffers during a receive
    - MEDIUM: sample: Get chunks with a size dependent on input data when necessary
    - MEDIUM: http-fetch: Be able to use large chunks when necessary
    - MINPR: htx: Get large chunk if necessary to perform a defrag
    - MEDIUM: http-ana: Use a large buffer if necessary when waiting for body
    - MINOR: dynbuf: Add helpers to know if a buffer is a default or a large buffer
    - MINOR: config: reject configs using HTTP with large bufsize >= 256 MB
    - CI: do not use ghcr.io for Quic Interop workflows
    - BUG/MEDIUM: ssl: SSL backend sessions used after free
    - CI: vtest: move the vtest2 URL to vinyl-cache.org
    - CI: github: disable windows.yml by default on unofficials repo
    - MEDIUM: Add connect/queue/tarpit timeouts to set-timeout
    - CLEANUP: mux-h1: Remove unneeded null check
    - DOC: remove openssl no-deprecated CI image
    - BUG/MINOR: acme: fix X509_NAME leak when X509_set_issuer_name() fails
    - BUG/MINOR: backend: check delay MUX before conn_prepare()
    - OPTIM: backend: reduce contention when checking MUX init with ALPN
    - DOC: configuration: add the ACME wiki page link
    - MINOR: ssl/ckch: Move EVP_PKEY and cert code generation from acme
    - MINOR: ssl/ckch: certificates generation from "load" "crt-store" directive
    - MINOR: trace: add definitions for haterm streams
    - MINOR: init: allow a fileless init mode
    - MEDIUM: init: allow the redefinition of argv[] parsing function
    - MINOR: stconn: stream instantiation from proxy callback
    - MINOR: haterm: add haterm HTTP server
    - MINOR: haterm: new "haterm" utility
    - MINOR: haterm: increase thread-local pool size
    - BUG/MEDIUM: stats-file: fix shm-stats-file recover when all process slots are full
    - BUG/MINOR: stats-file: manipulate shm-stats-file heartbeat using unsigned int
    - BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file
    - CI: github: only enable OS X on development branches
2026-02-19 16:26:52 +01:00
William Lallemand
41a71aec3d CI: github: only enable OS X on development branches
Don't use the macOS job on maintenance branches: it's mainly used for
development and portability checks, and we don't actively support macOS
on stable branches.
2026-02-19 16:22:42 +01:00
Aurelien DARRAGON
09bf116242 BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file
When leveraging shm-stats-file, global_now_ms and global_now_ns are stored
(and thus shared) inside the shared map, so that all co-processes share
the same clock source.

Since the global_now_{ns,ms} clocks are derived from now_ns, and given
that now_ns is a monotonic clock (hence inconsistent from one host to
another or reset after reboot) special care must be taken to detect
situations where the clock stored in the shared map is inconsistent
with the one from the local process during startup, and cannot be
relied upon anymore. A common situation where the current implementation
fails is resuming from a shared file after reboot: the global_now_ns stored
in the shm-stats-file will be greater than the local now_ns after reboot,
and applying the shared offset doesn't help since it was only relevant to
processes prior to rebooting. Haproxy's clock code doesn't expect that
(once the now offset is applied) global_now_ns > now_ns, and it creates
an ambiguous situation where the clock computations (both haproxy-oriented
and shm-stats-file-oriented) are broken.

To fix the issue, when we detect that the clock stored in the shm is
off by more than SHM_STATS_FILE_HEARTBEAT_TIMEOUT (60s) from the
local now_ns, since this situation is not supposed to happen in normal
environment on the host, we assume that the shm file was previously used
on a different system (or that the current host rebooted).

In this case, we perform a manual adjustment of the now offset so that
the monotonic clock from the current host is consistent again with the
global_now_ns stored in the file. Doing so we can ensure that clock-
dependent objects (such as freq_counters) stored within the map will keep
working as if we just (re)started where we left off when the last process
stopped updating the map.

Normally it is not expected that we update the now offset stored in the
map once the map was already created (because of concurrent accesses to
the file when multiple processes are attached to it), but in this specific
case, we know we are the first process on this host to start working
(again) on the file, thus we update the offset as if we had created the
file ourselves, while keeping the existing content.

It should be backported in 3.3
2026-02-19 16:14:02 +01:00
Aurelien DARRAGON
2b7849fd02 BUG/MINOR: stats-file: manipulate shm-stats-file heartbeat using unsigned int
The shm-stats-file heartbeat is derived from now_ms with an extra time
added to it, thus it should be handled using the same type as now_ms.

Until now, we used to handle the heartbeat using a signed integer. This
was not found to cause severe harm, but it could result in improper
handling, for instance due to early wrapping because of signedness, so
let's fix that before it becomes a real issue.

It should be backported in 3.3
2026-02-19 16:13:55 +01:00
Aurelien DARRAGON
04a4d242c9 BUG/MEDIUM: stats-file: fix shm-stats-file recover when all process slots are full
Amaury reported that when the following warning is reported by haproxy:

  [WARNING]  (296347) : config: failed to get shm stats file slot for 'haproxy.stats', all slots are occupied

haproxy would segfault right after during clock update operation.

The reason for the warning being emitted is not the object of this commit
(all shm-stats-file slots occupied by simultaneous co-processes), but since
it is intended that haproxy is able to keep working despite that warning
(ignoring the use of the shm-stats-file), we should fix the crash.

The crash is caused by the fact that we detach from the shared memory while
the global_now_ns and global_now_ms clock pointers still point to the shared
memory. Instead, we should revert to using our local clock before
detaching from the map.

It should be backported to 3.3.
2026-02-19 16:13:48 +01:00
Willy Tarreau
0bb686a72d MINOR: haterm: increase thread-local pool size
QUIC uses many objects and the default pool size causes a lot of
thrashing at the current request rate, taking ~12% CPU in pools.
Let's increase it to 3MB, which allows us to reach around 11M
req/s on a 80-core machine.
2026-02-19 15:45:01 +01:00
Frederic Lecaille
b007b7aa04 MINOR: haterm: new "haterm" utility
haterm_init.c is added to implement haproxy_init_args() which overloads
the one defined by haproxy.c. This way, the haterm program uses its own argv[]
parsing function. It generates its own configuration in memory that is
parsed during boot and executed by the common code.
2026-02-19 15:45:01 +01:00
Frederic Lecaille
c9d47804d1 MINOR: haterm: add haterm HTTP server
Contrary to haproxy, httpterm does not support all the HTTP protocols.
Furthermore, it has become easier to handle inbound/outbound
connections / streams since the rework done at conn_stream level.

This patch implements httpterm's HTTP server services inside haproxy. To do
so, it proceeds the same way as the TCP checks, which use only one
stream connector, but on the frontend side.

The makefile is modified to handle haterm.c in addition to all the C
files of haproxy, so that the new haterm program is built into haproxy. The
haterm server also instantiates a haterm stream (hstream struct) attached to a
stream connector for each incoming connection without a backend stream
connector. This is the role of sc_new_from_endp(), called by the muxes to
instantiate streams/hstreams.

As for stream_new(), hstream_new() instantiates a task named
process_hstream() (see haterm.c) which has the same role as
process_stream() but for haterm streams.

haterm into haproxy takes advantage of the HTTP muxes and HTX API to
support all the HTTP protocols supported by haproxy.
2026-02-19 15:10:37 +01:00
Frederic Lecaille
2bf091e9da MINOR: stconn: stream instantiation from proxy callback
Add a function pointer to proxies, the ->stream_new_from_sc proxy
struct member, to instantiate a stream from a connection as is done by
all the muxes when they call sc_new_from_endp(). The default value for
this pointer is obviously stream_new(), which is exported by this patch.
2026-02-19 14:46:49 +01:00
Frederic Lecaille
1dc20a630a MEDIUM: init: allow the redefinition of argv[] parsing function
This patch allows the argv[] parsing function to be redefined from
other C modules. This is done by extracting the function which really
parses the argv[] array to implement haproxy_init_args(). This function
is declared as a weak symbol which may be overloaded by other C modules.

Same thing for copy_argv(), which checks/cleans up/modifies the argv array.
One may want this function to be redefined. This is the case when other
C modules do not handle the same command line options. Copying such an
argv[] would lead to conflicts with the original haproxy argv[] during
the copy.
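A short illustration of the weak-symbol pattern described above (the exact
prototype of haproxy_init_args() is an assumption here):

  /* haproxy.c: default implementation, overridable */
  __attribute__((weak)) void haproxy_init_args(int argc, char **argv)
  {
          /* regular haproxy argv[] parsing */
  }

  /* haterm_init.c: strong definition, takes precedence at link time */
  void haproxy_init_args(int argc, char **argv)
  {
          /* haterm-specific argv[] parsing and in-memory config setup */
  }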
2026-02-19 14:46:49 +01:00
Frederic Lecaille
6013f4baeb MINOR: init: allow a fileless init mode
This patch provides the possibility to initialize haproxy without a
configuration file. This mode is identified by the new global and exported
<fileless_mode> and <fileless_cfg> variables, which may be used to
provide a struct cfgfile to haproxy by other means than a physical
file (built in memory).
When enabled, this fileless mode skips all the configuration file
parsing.
2026-02-19 14:46:49 +01:00
Frederic Lecaille
234ce775c3 MINOR: trace: add definitions for haterm streams
Add definitions for haterm streams as arguments to be used by the TRACE API.
They will be used by the upcoming haterm module, which will have to handle
hstream struct objects (in place of stream struct objects).
2026-02-19 14:46:49 +01:00
Frederic Lecaille
5d3bca4b17 MINOR: ssl/ckch: certificates generation from "load" "crt-store" directive
Add a "generate-dummy" on/off keyword to the "load" directive to
automatically generate dummy certificates, as is done for ACME from the
ckch_conf_load_pem_or_generate() function, which is called if a "crt"
keyword is also provided for this directive.

Also implement "keytype" to specify the key type used for these
certificates.  Only "RSA" or "ECDSA" is accepted. This patch also
implements "bits" keyword for the "load" directive to specify the
private key size used for RSA. For ECDSA, a new "curves" keyword is also
provided by this patch to specify the curves to be used for the ECDSA
private keys generation.

ckch_conf_load_pem_or_generate() is modified to use these parameters
provided by "keytype", "bits" and "curves" to generate the private key
with ssl_gen_EVP_PKEY() before generating the X509 certificate calling
ssl_gen_x509().
2026-02-19 14:46:49 +01:00
Frederic Lecaille
36b1fba871 MINOR: ssl/ckch: Move EVP_PKEY and cert code generation from acme
Move acme_EVP_PKEY_gen() implementation to ssl_gencrt.c and rename it to
ssl_EVP_PKEY_gen().  Also extract from acme_gen_tmp_x509() the generic
part to implement ssl_gen_x509() into ssl_gencrt.c.

To generate a self-signed expired certificate ssl_EVP_PKEY_gen() must be
used to generate the private key. Then, ssl_gen_x509() must be called
with the private key as argument.  acme_gen_tmp_x509() is also modified
to call these two functions to generate a temporary certificate, as was
done before modifying this part.

Such an expired self-signed certificate should not be used in the field
but only during testing and development.
2026-02-19 14:46:47 +01:00
William Lallemand
b0240bcfaf DOC: configuration: add the ACME wiki page link
Add the ACME page link to the HAProxy github wiki.
2026-02-19 13:53:50 +01:00
Amaury Denoyelle
c71ef2969b OPTIM: backend: reduce contention when checking MUX init with ALPN
In connect_server(), MUX initialization must be delayed if ALPN
negotiation is configured, unless ALPN can already be retrieved via the
server cache.

A readlock is used to consult the server cache. Prior to this patch, it
was always taken even if no ALPN is configured. The lock was thus used
for every new backend connection instantiation.

Rewrite the check so that now the lock is only used if ALPN is
configured. Thus, no lock access is done if SSL is not used or if ALPN
is not defined.

In practice, there will be no performance gain, as the read lock should
never block if ALPN is not configured. However, the code is cleaner as
it better reflects that only the access to the server's nego_alpn requires
the path_params lock protection.
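The reordering can be pictured roughly as follows (variable names are
illustrative, not the actual code):

  /* before: the read lock was taken for every outgoing connection */
  HA_RWLOCK_RDLOCK(SERVER_LOCK, &srv->lock);
  delay_mux = alpn_configured && !alpn_found_in_cache;
  HA_RWLOCK_RDUNLOCK(SERVER_LOCK, &srv->lock);

  /* after: no lock at all when SSL/ALPN is not configured */
  delay_mux = 0;
  if (alpn_configured) {
          HA_RWLOCK_RDLOCK(SERVER_LOCK, &srv->lock);
          delay_mux = !alpn_found_in_cache;
          HA_RWLOCK_RDUNLOCK(SERVER_LOCK, &srv->lock);
  }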
2026-02-19 11:27:49 +01:00
Amaury Denoyelle
55e9c67381 BUG/MINOR: backend: check delay MUX before conn_prepare()
In connect_server(), when a new connection must be instantiated, MUX
initialization is delayed if an ALPN setting is present on the server
line configuration, as negotiation must be performed to select the
correct MUX. However, this is not the case if the ALPN can already be
retrieved on the server cache.

However, this check is performed too late and may cause issues with the
QUIC stack. The problem can happen when the server ALPN is not yet set.
In the normal case, the quic_conn layer is instantiated and MUX init is
delayed until handshake completion. When the MUX is finally
instantiated, it reuses without any issue the app_ops from its quic_conn,
which is derived from the negotiated ALPN.

However, there is a race condition if another QUIC connection populates
the server ALPN cache. If this happens after the first quic_conn init
but prior to the MUX delay check, the MUX will thus immediately start in
connect_server(). When app_ops is retrieved from its quic_conn, a crash
occurs in qcc_install_app_ops() as the QUIC handshake is not yet
finalized :

  #0  0x000055e242a66df4 in qcc_install_app_ops (qcc=0x7f127c39da90, app_ops=0x0) at src/mux_quic.c:1697
  1697		if (app_ops->init && !app_ops->init(qcc)) {
  [Current thread is 1 (Thread 0x7f12810f06c0 (LWP 25758))]

To fix this, the MUX delay check is moved up in connect_server(). It is now
performed prior to conn_prepare(), which is responsible for the quic_conn
layer instantiation. Thus, it ensures consistency for the QUIC stack:
MUX init is always delayed if the quic_conn does not itself reuse the
SSL session and ALPN server cache (no quic_reuse_srv_params()).

This must be backported up to 3.3.
2026-02-19 11:22:55 +01:00
David Carlier
194a67600e BUG/MINOR: acme: fix X509_NAME leak when X509_set_issuer_name() fails
In acme_gen_tmp_x509(), if X509_set_issuer_name() fails, the code
jumped to the mkcert_error label without freeing the previously
allocated X509_NAME object. The other error paths after X509_NAME_new()
(X509_NAME_add_entry_by_txt and X509_set_subject_name) already properly
freed the name before jumping to mkcert_error, but this one was missed.

Fix this by freeing name before the goto, consistent with the other
error paths in the same function.

Must be backported as far as 3.3.
2026-02-19 10:40:26 +01:00
William Lallemand
92e3635679 DOC: remove openssl no-deprecated CI image
Remove the "openssl no-deprecated" CI image status, which was removed a while
ago.
2026-02-19 10:39:23 +01:00
Egor Shestakov
6ab86ca14c CLEANUP: mux-h1: Remove unneeded null check
Since 3.1 a task is always created when H1 connections initialize, so
the later null check before task_queue() became unneeded.

Could be backported with 3c09b3432 (BUG/MEDIUM: mux-h1: Fix how timeouts
are applied on H1 connections).
2026-02-19 08:20:37 +01:00
Nenad Merdanovic
5a079d1811 MEDIUM: Add connect/queue/tarpit timeouts to set-timeout
Add the ability to set connect, queue and tarpit timeouts from the
set-timeout action. This is especially useful when using set-dst to
dynamically connect to servers.

This patch also adds the relevant fe_/be_/cur_ sample fetches for these
timeouts.
2026-02-19 08:20:37 +01:00
William Lallemand
c26c721312 CI: github: disable windows.yml by default on unofficials repo
Disable the windows job for repositories that are not in
the "haproxy" organization. This job is mostly used for portability checks
during development and only makes noise during the maintenance cycle.

Must be backported to every branch.
2026-02-18 18:16:21 +01:00
William Lallemand
27e1ec8ca9 CI: vtest: move the vtest2 URL to vinyl-cache.org
The VTest git repository was moved to vinyl-cache.org.

Must be backported to every branch.
2026-02-18 16:29:46 +01:00
Frederic Lecaille
3e6d030ce2 BUG/MEDIUM: ssl: SSL backend sessions used after free
This bug impacts only the backends. The cached sessions could be used after
being freed because of a missing write lock in ssl_sock_handle_hs_error() when
freeing such objects. This issue could only rarely be reproduced, and only with
QUIC, with difficulty (random CRYPTO data corruptions and instrumented code).

Must be backported as far as 2.6.
2026-02-18 15:37:13 +01:00
Ilia Shipitsin
dfe1de4335 CI: do not use ghcr.io for Quic Interop workflows
Due to some (yet unknown) changes in ghcr.io we are not able to pull
images from it anymore. Let's temporarily switch to "local only" image
storage.

no functional change
2026-02-18 15:35:18 +01:00
Christopher Faulet
cfa30dea4e MINOR: config: reject configs using HTTP with large bufsize >= 256 MB
The bufsize was already limited to 256 MB because of Lua and HTX
limitations. So the same limit is set on large buffers.
2026-02-18 13:26:21 +01:00
Christopher Faulet
a324616cdb MINOR: dynbuf: Add helpers to know if a buffer is a default or a large buffer
b_is_default() and b_is_large() can now be used to know if a buffer is a
default buffer or a large one. _b_free() now relies on them.

These functions are also used when possible (stream_free(),
stream_release_buffers() and http_wait_for_msg_body()).
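A plausible shape for these helpers, as a sketch only (the field holding the
large size, written here as global.tune.bufsize_large, is an assumption):

  static inline int b_is_default(const struct buffer *b)
  {
          return b->size == global.tune.bufsize;
  }

  static inline int b_is_large(const struct buffer *b)
  {
          return global.tune.bufsize_large &&
                 b->size == global.tune.bufsize_large;
  }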
2026-02-18 13:26:21 +01:00
Christopher Faulet
5737fc9518 MEDIUM: http-ana: Use a large buffer if necessary when waiting for body
Thanks to previous patches, it is now possible to allocate a large buffer to
store the message payload in the context of the "wait-for-body" action. To
do so, "use-large-buffer" option must be set.

It means now it is no longer necessary to increase the regular buffer size
to be able to get message payloads of some requests or responses.
2026-02-18 13:26:21 +01:00
Christopher Faulet
9ecd0011c1 MINOR: htx: Get large chunk if necessary to perform a defrag
The function performing the defragmentation of an HTX message was updated to
get a chunk according to the HTX message size.
2026-02-18 13:26:21 +01:00
Christopher Faulet
4c6ca0b471 MEDIUM: http-fetch: Be able to use large chunks when necessary
The function used to fetch parameters was updated to get a chunk according to
the parameter size. The same was also done when the message body is
retrieved.
2026-02-18 13:26:21 +01:00
Christopher Faulet
dfc4085413 MEDIUM: sample: Get chunks with a size dependent on input data when necessary
The function used to duplicate a sample was updated to support large buffers. In
addition, several converters were also reviewed to deal with large buffers. For
instance, base64 encoding and decoding must use chunks of the same size as the
sample. For some of them, a retry is performed to enlarge the chunk if possible.

TODO: Review reg_sub, concat and add_item to get larger chunk if necessary
2026-02-18 13:26:21 +01:00
Christopher Faulet
bd25c63067 MEDIUM: stconn: Properly handle large buffers during a receive
While large buffers are still unused internally, functions receiving data
from endpoints (connections or applets) were updated to block the receives
when channels are using a large buffer and the data forwarding was started.

The goal of this patch is to be able to flush large buffers at the end of
the analysis stage and return to regular buffers asap.
2026-02-18 13:26:21 +01:00
Christopher Faulet
ce912271db MEDIUM: chunk: Add support for large chunks
Because there is now a memory pool for large buffers, we must also add the
support for large chunks. So, if large buffers are configured, a dedicated
memory pool is created to allocate large chunks. alloc_large_trash_chunk()
must be used to allocate a large chunk. alloc_trash_chunk_sz() can be used to
allocate a chunk with the best size. However free_trash_chunk() remains the
only way to release a chunk, regular or large.

In addition, large trash buffers are also created, using the same mechanism
as for regular trash buffers. So three thread-local trash buffers are
created. get_large_trash_chunk() must be used to get a large trash buffer.
And get_trash_chunk_sz() may be used to get a trash buffer with the best
size.
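A hedged usage sketch of the API described above (the exact prototypes are
assumptions based on the names given in this message):

  struct buffer *chk;

  /* pick a chunk whose size best matches the amount of data to handle */
  chk = alloc_trash_chunk_sz(needed_size);
  if (!chk)
          return 0;

  /* ... fill and use the chunk ... */

  /* a single release path, whether the chunk was regular or large */
  free_trash_chunk(chk);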
2026-02-18 13:26:21 +01:00
Christopher Faulet
d89ec33a34 MEDIUM: dynbuf: Add a pool for large buffers with a configurable size
Add support for large buffers. A dedicated memory pool is added. The size
of these buffers must be explicitly configured by setting the
"tune.bufsize.large" directive. If it is not set, the pool is not
created. In addition, if the size for large buffers is the same as for
regular buffers, the feature is automatically disabled.

For now, large buffers remain unused.
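A rough sketch of the conditional pool creation (pool and field names here
are illustrative, not necessarily the ones used in the patch):

  if (global.tune.bufsize_large &&
      global.tune.bufsize_large != global.tune.bufsize) {
          pool_head_buffer_large = create_pool("buffer-large",
                                               global.tune.bufsize_large,
                                               MEM_F_SHARED);
          if (!pool_head_buffer_large)
                  return -1;
  }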
2026-02-18 13:26:21 +01:00
Christopher Faulet
ee309bafcf MINOR: http-fetch: Use pointer to HTX DATA block when retrieving HTX body
In sample fetch functions retrieving the message payload (req.body,
res.body...), instead of copying the payload into a trash buffer, we now
directly return a pointer to the payload in the HTX message. To do so, we must
be sure there is only one HTX DATA block. Thanks to previous patches, this is
possible. However, we must take care to perform a defragmentation if
necessary.
2026-02-18 13:26:21 +01:00
Christopher Faulet
f559c202fb MINOR: http-ana: Do a defrag on unaligned HTX message when waiting for payload
When we are waiting for the request or response payload, it is usually
because the payload will be analyzed one way or another. So, perform a
defrag if necessary. This should ease payload analysis.
2026-02-18 13:26:21 +01:00
Christopher Faulet
4f27a72d19 MEDIUM: htx: Improve detection of fragmented/unordered HTX messages
First, an HTX flag was added to know when blocks are unordered. It may
happen when a header is added while part of the payload was already received
or when the start-line is replaced by a new one. In these cases, the block
indexes are in the right order but the blocks' payloads are not. Knowing a message
is unordered can be useful to trigger a defragmentation, mainly to be able
to append data properly for instance.

Then, detection of fragmented messages was improved, especially when a
header or a start-line is replaced by a new one.

Finally, when data are added in a message and cannot be appended into the
previous DATA block because the message is not aligned, a defragmentation is
performed to realign the message and append data.
2026-02-18 13:26:21 +01:00
Christopher Faulet
0c6f2207fc MEDIUM: htx: Refactor htx defragmentation to merge data blocks
When an HTX message is defragmented, the HTX DATA blocks are now merged into
one block. Just like the previous commit, this will help all payload
analysis, if any. However, there is an exception when the reference on a
DATA block must be preserved, via the <blk> parameter. In that case, this
DATA block is not merged with previous block.
2026-02-18 13:26:21 +01:00
Christopher Faulet
783db96ccb MEDIUM: htx: Refactor transfer of htx blocks to merge DATA blocks if possible
When htx_xfer_blks() is called and when HTX DATA blocks are copied, we now
merge them in one block. This will help all payload analysis, if any.
2026-02-18 13:26:21 +01:00
Christopher Faulet
a8887e55a0 BUG/MEDIUM: htx: Fix function used to change part of a block value when defrag
htx_replace_blk_value() is buggy when a defrag is performed. It only happens
on data expansion. But in that case, because a defragmentation is performed,
the blocks data are moved and old data of the updated block are no longer
accessible.

To fix the bug, we now use a chunk to temporarily copy the new data of the
block. This way we can safely perform the HTX defragmentation and then
recopy the data from the chunk to the HTX message.

It is theoretically possible to hit this bug but concretely it is pretty hard.

This patch should be backported to all stable versions.
2026-02-18 13:26:21 +01:00
Christopher Faulet
e62e8de5a7 MEDIUM: stream: Offer buffers of default size only
Check the channel buffers' size on release before trying to offer them to
waiting entities. Only normal buffers must be considered. This will be
mandatory when large buffers support on channels is added.
2026-02-18 13:26:21 +01:00
Christopher Faulet
bc586b4138 MINOR: h1-htx: Disable 0-copy for buffers of different size
When a message payload is parsed, it is possible to swap buffers. We must
only take care that both buffers have the same size. This will be mandatory
when large buffers support on channels is added.
2026-02-18 13:26:21 +01:00
Christopher Faulet
8b27dfdfb0 MEDIUM: applet: Disable 0-copy for buffers of different size
Just like the previous commit, we must take care to never swap buffers of
different sizes when data are exchanged between an applet and a SC. This will
be mandatory when large buffers support on channels is added.
2026-02-18 13:26:21 +01:00
Christopher Faulet
36282ae348 MEDIUM: mux-h1/mux-h2/mux-fcgi/h3: Disable 0-copy for buffers of different size
Today, it is useless to check the buffers' size before performing a 0-copy in
muxes when data are sent, but it will be mandatory when large buffers
support on channels is added. Indeed, muxes will still rely on normal
buffers, so we must take care to never swap buffers of different sizes.
2026-02-18 13:26:21 +01:00
Christopher Faulet
f82ace414b MEDIUM: compression: Be sure to never compress more than a chunk at once
When the compression is performed, a trash chunk is used. So be sure to
never compress more data than the trash size. Otherwise the compression
could fail. Today, this cannot happen. But with the large buffers support on
channels, it could be an issue.

Note that this part should be reviewed to evaluate if we should use a larger
chunk too to perform the compression, maybe via an option.
2026-02-18 13:26:21 +01:00
Christopher Faulet
53b7150357 MEDIUM: stream: Limit number of synchronous send per stream wakeup
It is not a bug fix, because there is no way to hit the issue for now. But
there is nothing preventing a loop of synchronous sends in process_stream().
Indeed, when a synchronous send is successfully performed, we restart the
SCs evaluation and at the end another synchronous send is attempted. So with
an endpoint consuming data bit by bit or with a filter forwarding a few bytes
at each call, it is possible to loop for a while in process_stream().

Because it is not expected, we now limit the number of synchronous sends per
wakeup to two calls. In a nominal case, it should never be more. This commit
is mandatory to be able to handle large buffers on channels.

There is no reason to backport this commit unless the large buffers
support on channels is backported.
2026-02-18 13:26:21 +01:00
Christopher Faulet
5965a6e1d2 MEDIUM: cache: Don't rely on a chunk to store messages payload
When the response payload is stored in the cache, we can avoid using a
trash chunk as a temporary space area before copying everything into the cache
in one call. Instead we can directly write each HTX block into the cache, one
by one. It should not be an issue because, most of the time, there is only one
DATA block.

This commit depends on "BUG/MEDIUM: shctx: Use the next block when data
exactly filled a block".
2026-02-18 13:26:20 +01:00
Christopher Faulet
9ad9def126 BUG/MINOR: config: Check buffer pool creation for failures
The call to init_buffer() during the worker startup may fail. In that case,
an error message is displayed but the error was not properly handled. So
let's add the proper check and exit on error.
2026-02-18 13:26:20 +01:00
Christopher Faulet
fae478dae5 MINOR: buffers: Swap buffers of same size only
In b_xfer(), we now take care to swap buffers of the same size only. For
now, it is always the case. But that will change.
2026-02-18 13:26:20 +01:00
Christopher Faulet
6bf450b7fe MINOR: tree-wide: Use the buffer size instead of global setting when possible
At many places, we rely on the global.tune.bufsize value instead of using the
buffer size. For now, it is not a problem. But if we want to be able to deal with
buffers of different sizes, it is good to reduce as far as possible the dependencies
on the global value. Most of the time, we can use the b_size() or c_size()
functions. The main change is performed on the error snapshot, where the buffer
size was added into the error_snapshot structure.
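The substitution pattern is simply the following (illustrative example, not
an actual hunk from the patch):

  /* before: relies on the global setting */
  if (b_data(buf) >= global.tune.bufsize)
          full = 1;

  /* after: uses the buffer's own size */
  if (b_data(buf) >= b_size(buf))
          full = 1;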
2026-02-18 13:26:20 +01:00
David Carlier
fc89ff76c7 BUG/MEDIUM: jwe: fix timing side-channel and dead code in JWE decryption
Fix two issues in JWE token processing:

- Replace memcmp() with CRYPTO_memcmp() for authentication tag
  verification in build_and_check_tag() to prevent timing
  side-channel attacks. Also add a tag length validation check
  before the comparison to avoid potential buffer over-read when
  the decoded tag length doesn't match the expected HMAC half.

- Remove unreachable break statement after JWE_ALG_A256GCMKW case
  in decrypt_cek_aesgcmkw().
2026-02-18 10:46:32 +01:00
Christopher Faulet
806c8c830d REORG: stconn: Move functions related to channel buffers to sc_strm.h
sc_have_buff(), sc_need_buff(), sc_have_room() and sc_need_room() are
related to the buffer's channel. So we can move them in sc_strm.h header
file. In addition, this will be mandatory for the next commit.
2026-02-18 09:44:16 +01:00
Christopher Faulet
1b1a0b3bae MINOR: stconn: Add missing SC_FL_NO_FASTFWD flag in sc_show_flags
SC_FL_NO_FASTFWD flag was not listed in sc_show_flags() function. Let's do
so.

This patch could be backported as far as 3.0.
2026-02-18 09:44:16 +01:00
Christopher Faulet
0aca25f725 BUG/MINOR: http-ana: Stop to wait for body on client error/abort
During the message analysis, we must take care to stop waiting for the message
body if an error was reported on the client side or an abort was detected with
abort-on-close configured (now the default).

The bug was introduced when the "wait-for-body" action was added. Only the
producer state was tested. So, when we were waiting for the request payload,
there was no issue. But when we were waiting for the response payload, an error
or abort on the client side was not considered.

This patch should be backported to all stable versions.
2026-02-18 09:44:16 +01:00
Christopher Faulet
9e17087aeb BUG/MEDIUM: shctx: Use the next block when data exactly filled a block
When the hot list was removed in 3.0, a regression was introduced.
Theoretically, it is possible to overwrite data in a block when new data are
appended. It happens when data are copied. If the data size is a multiple of
the block size, all data are copied and the last used block is full. But
instead of saving a reference on the next block as the restart point for the
next copies, we keep a reference on the last full one. On the next read, we
reuse this block and the old data are overwritten. To hit the bug, no new
blocks should be reserved between the two data copy attempts.

Concretely, for now, it seems not possible to hit the bug. But with a block
size set to 1024, if more than 1024 bytes are reserved, with a first copy of
1024 bytes and a second one with the remaining data, data in the first block
will be overwritten.

So to fix the bug, the reference of the last block used to write data (which
is in fact the next one to use to perform the next copy) is only updated
when a block is full. In that case the next block is used.

This patch should be backported as far as 3.0 after a period of observation.
2026-02-18 09:44:15 +01:00
Christopher Faulet
b248b1c021 CLEANUP: compression: Remove unused static buffers
Since the legacy HTTP code was removed, the global and thread-local buffers,
tmpbuf and zbuf, are no longer used. So let's remove them.

This could be backported, theoretically to all supported versions. But at
least it could be good to do so as far as 3.2, as it saves 2 buffers
per-thread.
2026-02-18 09:44:15 +01:00
Christopher Faulet
829002d459 MINOR: flt-trace: Add an option to limit the amount of data forwarded
"max-fwd <max>" option can now be used to limit the maximum amount of data
forwarded at a time by the filter. It could be handy to make tests.
2026-02-18 09:44:15 +01:00
Christopher Faulet
92307b5fec BUG/MINOR: flt-trace: Properly compute length of the first DATA block
This bug is quite old. When the length of the first DATA block is computed,
the offset is used instead of the block length minus the offset. It is only
used with random forwarding and there is a test just after to prevent any
issue, so there is no effect.

It could be backported to all stable versions.
2026-02-18 09:44:15 +01:00
Christopher Faulet
ccb075fa1b DEV: term-events: Fix handshake events decoding
Handshake events were not properly decoded. Only send errors were decoded
as expected, other events were reported with a '-'. This is now fixed.

This patch could be backported as far as 3.2.
2026-02-18 09:44:15 +01:00
Christopher Faulet
1b7843f1c1 BUG/MEDIUM: applet: Fix test on shut flags for legacy applets (v2)
The previous fix was wrong. When shut flags are tested for legacy applets,
to know if the I/O handler can be called or not, we must be sure shut for
reads and for writes are both set to skip the applet I/O handler.

This bug introduced regression, at least for the peer applet and for the DNS
applet.

This patch must be backported with abc1947e1 ("BUG/MEDIUM: applet: Fix test
on shut flags for legacy applets"), so as far as 3.0.
2026-02-18 09:44:15 +01:00
Christopher Faulet
8e0c2599b6 BUG/MEDIUM: mux-h1: Stop sending via fast-forward for unexpected states
If a producer tries to send data via the fast-forward mechanism while the
message is in an unexpected state from the consumer point of view, the
fast-forward is now disabled. Concretely, we now take care that the message
is in its data/tunnel stage to proceed in h1_nego_ff().

By disabling fast-forward in that case, we will automatically fall back on
the regular sending path and be able to handle the error in h1_snd_buf().

This patch should be backported as far as 3.0
2026-02-18 09:44:15 +01:00
Christopher Faulet
cda056b9f4 BUG/MEDIUM: mux-h2/quic: Stop sending via fast-forward if stream is closed
It is illegal to send data if the stream is already closed. The case is
properly handled when data are sent via snd_buf(), by draining the data. But
it was still possible to process these data via nego_ff().

So, in this patch, both for the H2 and QUIC multiplexers, the fast-forward
is disabled if the stream is closed and nothing is performed. Doing so, we
will automatically fall back on the regular sending path and be able to
drain data in snd_buf().

Thanks to Mike Walker for his investigation on the subject.

This patch should be backported as far as 3.0.
2026-02-18 09:44:09 +01:00
Amaury Denoyelle
2f94f61c31 REGTESTS: fix quoting in feature cmd which prevents test execution
Remove extra quote in feature cmd used to test SSL compatibility with
set_ssl_cafile QUIC regtest. Due to this syntax error, the test was
never executed.

No need to backport.
2026-02-17 18:31:29 +01:00
Amaury Denoyelle
18a78956cb MINOR: mux-quic: add BUG_ON_STRESS() when draining data on closed stream
Add a BUG_ON_STRESS() to be able to detect if data draining is performed
due to early stream closure.
2026-02-17 18:18:44 +01:00
Amaury Denoyelle
4c275c7d17 BUG/MEDIUM: h3: reject frontend CONNECT as currently not implemented
HTTP/3 CONNECT transcoding is not properly implemented on the frontend
side. Neither the application tunnel mode nor extended CONNECT are
currently functional.

Clarify this situation by rejecting any CONNECT attempt on the frontend
side. The stream is thus now closed via a RESET_STREAM with error code
REQUEST_REJECTED.

This should be backported to every stable version.
2026-02-17 18:18:44 +01:00
Amaury Denoyelle
f3003d1508 BUG/MAJOR: Revert "MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS"
This reverts commit 235e8f1afd.

Prior to the above commit, snd_buf callback for QUIC MUX was able to
deal with data even after stream closure. The excess was simply
discarded, as no STREAM frame can be emitted after FIN/RESET_STREAM.
This code was later removed and replaced by a BUG_ON() to ensure snd_buf
is never called after stream closure.

However, this approach is too strict. Indeed, there is nothing in the
haproxy stream architecture which forbids this scheduling, in part
because QUIC MUX is the sole responsible of the stream closure. As such,
it is preferable to revert to the old code to prevent any triggering of
a BUG_ON() failure.

Note that nego_ff does not implement data draining if called after
stream closure. This will be done in a future patch.

Thanks to Mike Walker for his investigation on the subject.

This must be backported up to 2.8.
2026-02-17 18:18:44 +01:00
Aurelien DARRAGON
747ff09818 MEDIUM: backend: make "balance random" consider tg local req rate when loads are equal
This is a follow up to b6bdb2553 ("MEDIUM: backend: make "balance random"
consider req rate when loads are equal")

In the above patch, we used the global sess_per_sec metric to choose which
server we should be using. But the original intent was to use the per
thread group statistic.

No backport needed, the previous patch already improved the situation in
3.3, so let's not take the risk of breaking that.
2026-02-17 09:51:46 +01:00
William Lallemand
1274c21a42 BUG/MINOR: ssl: error with ssl-f-use when no "crt"
ssl-f-use lines try to load a crt file, but the "crt" keyword is not
mandatory. That could lead to crtlist_load_crt() being called with a
NULL path, and trying to do a stat.

In this particular case we don't need to try anything and it's better to
leave with an actual error.

Must be backported as far as 3.2.
2026-02-16 18:41:40 +01:00
William Lallemand
0016d45a9c BUG/MINOR: ssl: clarify ssl-f-use errors in post-section parsing
crtlist_load_crt() in post_section_frontend_crt_init() won't give
details about the line being parsed; this should be done by the caller.

Modify post_section_frontend_crt_init() to output the right error format.

Must be backported to 3.2.
2026-02-16 18:41:08 +01:00
William Lallemand
e0d1cdff6a BUG/MINOR: ssl: fix leak in ssl-f-use parser upon error
cfg_crt_node->filename is leaked on the error path in the ssl-f-use
configuration parser.

Could be backported as far as 3.2
2026-02-16 16:04:35 +01:00
William Lallemand
86df0e206e BUG/MINOR: ssl: double-free on error path w/ ssl-f-use parser
In post_section_frontend_crt_init(), the crt_entry is populated with the
ssl_conf from the cfg_crt_node. On the error path, the crt_list is
completely freed, including the ssl_conf structure. But the ssl_conf
structure was already freed when freeing the cfg_crt_node.

Fix the issue by doing a crtlist_dup_ssl_conf(n->ssl_conf) in the
crtlist_entry instead of an assignment.

Fix issue #3268.

Needs to be backported as far as 3.2. The previous patch, which adds the
crtlist_dup_ssl_conf() declaration is needed.
2026-02-16 16:04:35 +01:00
William Lallemand
df8e05815c BUG/MINOR: ssl: lack crtlist_dup_ssl_conf() declaration
Add the missing crtlist_dup_ssl_conf() declaration in ssl_crt-list.h.

Could be backported if needed.
2026-02-16 16:04:35 +01:00
Willy Tarreau
1d2490c5ae DEV: gdb: use unsigned longs to display pools memory usage
The pools memory usage calculation was done using ints by default, making
it harder to identify large ones. Let's switch to unsigned long for the
size calculations.
2026-02-16 11:07:23 +01:00
David Carlier
cb63e899d9 CLEANUP: deviceatlas: add unlikely hints and minor code tidying
Add unlikely() hints on error paths in init, conv and fetch functions.
Remove unnecessary zero-initialization of local buffers that are
always written before use. Fix indentation in da_haproxy_checkinst()
and remove unused loop variable initialization.
2026-02-14 15:49:00 +01:00
David Carlier
076ec9443c MINOR: deviceatlas: precompute maxhdrlen to skip oversized headers early
Precompute the maximum header name length from the atlas evidence
headers at init and hot-reload time. Use it in da_haproxy_fetch() to
skip headers early that cannot match any known DeviceAtlas evidence
header, avoiding unnecessary string copies and comparisons.
2026-02-14 15:49:00 +01:00
David Carlier
f5d03bbe13 MINOR: deviceatlas: define header_evidence_entry in dummy library header
Add the struct header_evidence_entry definition to the dummy dac.h
to accommodate the ongoing deviceatlas module update which now
iterates over atlas header_priorities to precompute maxhdrlen.
The struct was already referenced by struct da_atlas but lacked
a definition in the dummy header.
2026-02-14 15:49:00 +01:00
David Carlier
23aeb72798 MINOR: deviceatlas: increase DA_MAX_HEADERS and header buffer sizes
Increase DA_MAX_HEADERS from 24 to 32 and hbuf from 24 to 64 to
accommodate current DeviceAtlas data files which may use more headers
and longer header names.
2026-02-14 14:47:22 +01:00
David Carlier
8031bf6e03 MINOR: deviceatlas: check getproptype return and remove pprop indirection
Check the return value of da_atlas_getproptype() and skip the property
on failure instead of using an uninitialized proptype. Also remove the
unnecessary pprop pointer indirection, using prop directly.
2026-02-14 14:47:22 +01:00
David Carlier
0fad24b5da BUG/MINOR: deviceatlas: set cache_size on hot-reloaded atlas instance
When hot-reloading the atlas in da_haproxy_checkinst(), the configured
cache_size was not applied to the new instance, causing it to use the
default value.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
1d1daff7c4 BUG/MINOR: deviceatlas: fix deinit to only finalize when initialized
da_fini() was called unconditionally in deinit_deviceatlas() even when
da_init() was never called. Move it inside the daset check. Also remove
the erroneous shm_unlink() call which could affect the dadwsch shared
memory used by the scheduling process.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
d8ff676592 BUG/MINOR: deviceatlas: fix resource leak on hot-reload compile failure
In da_haproxy_checkinst(), when da_atlas_compile() failed, the cnew
buffer was leaked. Add a free(cnew) in the else branch.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
ea3b1bb866 BUG/MINOR: deviceatlas: fix double-checked locking race in checkinst
In da_haproxy_checkinst(), base[0] was checked before acquiring the
lock but not re-checked after. Another thread could have already
processed the reload between the initial check and the lock
acquisition, leading to a race condition.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
734a139c52 BUG/MINOR: deviceatlas: fix cookie vlen using wrong length after extraction
In da_haproxy_fetch(), vlen was set from v.len (the raw header value
length) instead of the truncated copy length. Also the cookie-specific
vlen calculation used an incorrect subtraction instead of the actual
extracted cookie value length (pl) returned by
http_extract_cookie_value().

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
7098b4f93a BUG/MINOR: deviceatlas: fix off-by-one in da_haproxy_conv()
The user-agent string copy had an off-by-one error: the buffer size
limit did not account for the null terminator, and the memcpy length
used i-1 which truncated the last character of the user-agent string.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
d8f219b380 BUG/MEDIUM: deviceatlas: fix resource leaks on init error paths
When da_atlas_compile() or da_atlas_open() failed in init_deviceatlas(),
atlasimgptr was leaked and da_fini() was never called. Also add a NULL
check on strdup() for the default cookie name with proper cleanup of
the atlas and image pointer on failure.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
6342705cee BUG/MINOR: deviceatlas: add NULL checks on strdup() results in config parsers
Add missing NULL checks after strdup() for the json file path in
da_json_file() and the cookie name in da_properties_cookie().

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
David Carlier
2d6e9e15cd BUG/MINOR: deviceatlas: add missing return on error in config parsers
da_log_level() and da_cache_size() were missing a return -1 on error,
causing fall-through to the normal return 0 path when invalid values
were provided.

This should be backported to lower branches.
2026-02-14 14:47:22 +01:00
Willy Tarreau
5689605c8e DEV: gdb: add a utility to find the post-mortem address from a core
More and more often, core dumps retrieved on systems that build with
-fPIE by default are becoming unexploitable. Even functions and global
symbols get relocated and gdb cannot figure their final position.
Ironically the post_mortem struct lying in its own section that was
meant to ease its finding is not exempt from this problem.

The only remaining way is to inspect the core to search for the
post-mortem magic, figure its offset from the file and look up the
corresponding virtual address with objdump. This is quite a hassle.

This patch implements a simple utility that opens a 64-bit core dump,
scans the program headers looking for a data segment which contains
the post-mortem magic, and prints it on stdout. It also places the
"pm_init" command alone on its own line to ease copy-pasting into the
gdb console. With this, at least the other commands in this directory
work again and allow to inspect the program's state. E.g:

  $ ./getpm core.57612
  Found post-mortem magic in segment 5:
    Core File Offset: 0xfc600 (0xd5000 + 0x27600)
    Runtime VAddr:    0x5613e52b6600 (0x5613e528f000 + 0x27600)
    Segment Size:     0x28000

  In gdb, copy-paste this line:

     pm_init 0x5613e52b6600

It's worth noting that the program has so few dependencies that it even
builds with nolibc, allowing a static executable to be uploaded into containers
being debugged that lack development tools and compilers. The build
procedure is indicated in the source code.
2026-02-14 14:46:33 +01:00
Aurelien DARRAGON
d71e2e73ea MEDIUM: filters: use per-channel filter list when relevant
In the historical implementation, all filter-related information was stored
at the stream level (using the struct strm_flt * context), and filter
iteration was performed at the stream level as well.

We identified that this was not ideal and would make the implementation of
future filters more complex, since filter ordering should be handled
differently during request and response handling, for decompression
for instance.

To make such a thing possible, in this commit we migrate some channel-
specific filter contexts into the channel directly (request or response),
and we implement 2 additional filter lists, one on the request channel
and another on the response channel. The historical stream filter list
is kept as-is because in some contexts only the stream is available and
we have to iterate over all filters. But for functions where we are only
interested in request-side or response-side filters, we now use the dedicated
channel filter lists instead.

The only overhead is that the "struct filter" was expanded by two "struct
list".

For now, no change of behavior is expected.
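The shape of the change can be sketched as follows (member names are
illustrative; only the two extra struct list elements are stated by this
message):

  struct filter {
          /* ... existing members ... */
          struct list list;        /* historical attachment to the stream's list */
          struct list chn_list[2]; /* new: per-channel (request/response) lists */
  };

  struct channel {
          /* ... existing members ... */
          struct list filters;     /* new per-channel filter list */
  };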
2026-02-13 12:24:13 +01:00
Aurelien DARRAGON
bb6cfbe754 MINOR: filters: rework filter iteration for channel related callback functions
Multiple channel-related functions have the same construction: they use
list_for_each_entry() to work on a given filter from the stream+channel
combination. In future commits we will try to use the filter list from the
dedicated channel list instead of the stream one, thus in this patch we
need, as a prerequisite, to implement and use the flt_list_{start,next} API
to iterate over the filter list, giving the API the responsibility to iterate
over the correct list depending on the context, while the calling function
remains free to use the iteration construction it needs. This way we will
be able to easily change the way we iterate over the filter list without
duplicating the code for requests and responses.
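A sketch of what such an iteration could look like with the new API (the
exact signatures of flt_list_start()/flt_list_next() are assumptions):

  struct filter *filter;

  for (filter = flt_list_start(s, chn);
       filter;
       filter = flt_list_next(s, chn, filter)) {
          /* per-filter work for this stream+channel combination */
  }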
2026-02-13 12:24:07 +01:00
Aurelien DARRAGON
e88b219331 MINOR: filters: rework RESUME_FILTER_* macros as inline functions
There is no need to have those helpers defined as macros, and since it
is not mandatory, code maintenance is much easier using functions,
so let's switch to function definitions.

Also, we change the way we iterate over the list so that the calling
function now has a pseudo API to get and iterate over filter pointers
while keeping control on how they implement the iterating logic.

One benefit of this is that we will also be able to switch between lists
depending on the channel type, which is a prerequisite for the upcoming
rework that splits the filter list over request and response channels
(commit will follow).

No change of behavior is expected.
2026-02-13 12:24:00 +01:00
William Lallemand
f47b800ac3 SCRIPTS: build-vtest: allow to set a TMPDIR and a DESTDIR
Implement a way to set a destination directory using DESTDIR, and a tmp
directory using TMPDIR.

By default:

- DESTDIR is ../vtest like it was done previously
- TMPDIR  is mktemp -d

Only the vtest binary is copied in DESTDIR.

Example:

	TMPDIR=/dev/shm/ DESTDIR=/home/user/.local/bin/ ./scripts/build-vtest.sh
2026-02-12 18:48:09 +01:00
William Lallemand
d13164e105 MINOR: startup: show the list of detected features at runtime with haproxy -vv
Features prefixed by "HAVE_WORKING_" in the haproxy -vv feature list
are features that are detected at runtime.

This patch splits these features on another line in haproxy -vv. This
line is named "Detected feature list".
2026-02-12 18:02:19 +01:00
William Lallemand
b90b312a50 MINOR: startup: sort the feature list in haproxy -vv
The feature list in haproxy -vv is partly generated from the Makefile
using the USE_* keywords, but it's also possible to add keywords to the
feature list using hap_register_feature(), which adds the keyword at the
end of the list. When doing so, the list is not correctly sorted anymore.

This patch fixes the problem by splitting the string using an array of
ist and applying a qsort() on it.
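A minimal sketch of the sorting step (the helper name and the storage are
assumptions; istdiff() is assumed here as the ist comparison helper):

  static int cmp_feat(const void *a, const void *b)
  {
          const struct ist *ia = a, *ib = b;

          return istdiff(*ia, *ib);
  }

  /* split the feature string into <feats>/<nb_feats>, then: */
  qsort(feats, nb_feats, sizeof(*feats), cmp_feat);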
2026-02-12 18:02:19 +01:00
William Lallemand
1592ed9854 MINOR: startup: Add HAVE_WORKING_TCP_MD5SIG in haproxy -vv
The TCP_MD5SIG ifdef is not enough to check if the feature is usable.
The code might compile but the OS could prevent its use.

This patch tries to use the TCP_MD5SIG setsockopt before adding
HAVE_WORKING_TCP_MD5SIG to the feature list, so reg-tests that need it
are not started if the OS can't run it.
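A rough illustration of such a runtime probe, as a sketch only (the real
check may differ in the address and key it uses):

  struct tcp_md5sig md5 = { };
  struct sockaddr_in *sin = (struct sockaddr_in *)&md5.tcpm_addr;
  int fd = socket(AF_INET, SOCK_STREAM, 0);

  sin->sin_family = AF_INET;
  sin->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  md5.tcpm_keylen = 4;
  memcpy(md5.tcpm_key, "test", 4);

  if (fd >= 0) {
          if (setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG, &md5, sizeof(md5)) == 0)
                  hap_register_feature("HAVE_WORKING_TCP_MD5SIG");
          close(fd);
  }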
2026-02-12 18:02:19 +01:00
Remi Tricot-Le Breton
f9b3319f48 REGTESTS: jwt: Add new "jwt_decrypt_jwk" tests
Test the new "jwt_decrypt_jwk" converter that takes a JWK as argument,
either as a string or in a variable.
Only "RSA" and "oct" types are managed for now.
2026-02-12 16:31:38 +01:00
Remi Tricot-Le Breton
aad212954f MINOR: jwt: Add new jwt_decrypt_jwk converter
This converter takes a private key in the JWK format (RFC 7517) that can
be provided as a string or via a variable.
The only keys managed for now are of type 'RSA' or 'oct'.
2026-02-12 16:31:27 +01:00
Remi Tricot-Le Breton
b26f0cc45a MINOR: jwt: Convert an RSA JWK into an EVP_PKEY
Add helper functions that convert a JWK (JSON representation of an RSA
private key) into an EVP_PKEY (containing the private key).
Those functions are not used yet, they will be used in the upcoming
'jwt_decrypt_jwk' converter.
2026-02-12 16:31:12 +01:00
Remi Tricot-Le Breton
b3a44158fb MINOR: ssl: Missing '\n' in error message
Fix missing '\n' in error message raised when trying to load a password
protected private key.
2026-02-12 16:29:01 +01:00
Amaury Denoyelle
8e16fd2cf1 BUG/MAJOR: quic: fix parsing frame type
QUIC frame type is encoded as a varint. Initially, haproxy parsed it as
a single byte, which was enough to cover frames defined in RFC9000.

The code has been extended recently to support multi-byte encoded
values, in anticipation of QUIC frame extensions support. However, there
was no check on the varint format. A malformed value is erroneously
interpreted as a PADDING frame, as this serves as the initial value. Thus the
rest of the packet is incorrectly handled, with various resulting effects,
including infinite loops and/or crashes.

This patch fixes this by checking the return value of quic_dec_int(). If
varint cannot be parsed, the connection is immediately closed.
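The added check boils down to something like this (a sketch; only
quic_dec_int() and the PADDING interpretation come from the message above):

  uint64_t ftype = 0; /* 0 == PADDING, the previous silent fallback */

  if (!quic_dec_int(&ftype, &pos, end)) {
          /* malformed varint: do not fall through as PADDING,
           * close the connection instead */
          goto err;
  }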

This issue is assigned to CVE-2026-26080 report.

This must be backported up to 3.2.

Reported-by: Asim Viladi Oglu Manizada <manizada@pm.me>
2026-02-12 09:09:44 +01:00
Amaury Denoyelle
4aa974f949 BUG/MAJOR: quic: reject invalid token
Token parsing code on INITIAL packet for the NEW_TOKEN format is not
robust enough and may even crash on some rare malformed packets.

This patch fixes this by adding a check on the expected length of the
received token. The packet is now rejected if the token does not match
QUIC_TOKEN_LEN. This check is legitimate as haproxy should only parse
tokens emitted by itself.
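In effect, the added guard is along these lines (a sketch; QUIC_TOKEN_LEN is
the constant named above, the surrounding names are illustrative):

  if (token_len != QUIC_TOKEN_LEN) {
          /* we only ever emit fixed-size NEW_TOKEN tokens, so anything
           * else cannot come from this process: reject the packet */
          goto drop;
  }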

This issue has been introduced with the implementation of NEW_TOKEN
tokens parsing required for 0-RTT support.

This issue is assigned to CVE-2026-26081 report.

This must be backported up to 3.0.

Reported-by: Asim Viladi Oglu Manizada <manizada@pm.me>
2026-02-12 09:09:44 +01:00
Amaury Denoyelle
d80f0143c9 BUG/MINOR: quic: ensure handshake speed up is only run once per conn
When a duplicated CRYPTO frame is received during handshake, a server
may consider that there was a packet loss and immediately retransmit its
pending CRYPTO data without having to wait for PTO expiration. However,
RFC 9002 indicates that this should only be performed at most once per
connection to avoid excessive packet transmission.

QUIC connection is flagged with QUIC_FL_CONN_HANDSHAKE_SPEED_UP to mark
that a fast retransmit has been performed. However, during the
refactoring on CRYPTO handling with the storage conversion from ncbuf to
ncbmbuf, the check on the flag was accidentely removed. The faulty patch
is the following one :

  commit f50425c021
  MINOR: quic: remove received CRYPTO temporary tree storage

This patch adds back the check on QUIC_FL_CONN_HANDSHAKE_SPEED_UP
before initiating a fast retransmit. This ensures this is only performed
once per connection.
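Schematically, the restored logic looks like this (a sketch; only the flag
name comes from the message above):

  if (duplicated_crypto_frame &&
      !(qc->flags & QUIC_FL_CONN_HANDSHAKE_SPEED_UP)) {
          qc->flags |= QUIC_FL_CONN_HANDSHAKE_SPEED_UP;
          /* one-shot fast retransmit of pending CRYPTO data,
           * instead of waiting for the PTO timer */
  }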

This must be backported up to 3.3.
2026-02-12 09:09:44 +01:00
Olivier Houchard
b65df062be MINOR: servers: Call process_srv_queue() without lock when possible
In server_warmup(), call process_srv_queue() only once we released the
server lock, as we don't need it.
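Roughly (a sketch, the exact lock macro and label may differ):

  HA_SPIN_UNLOCK(SERVER_LOCK, &s->lock);

  /* the queue can be processed without holding the server lock */
  process_srv_queue(s);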
2026-02-12 02:19:38 +01:00
Olivier Houchard
a8f50cff7e MINOR: queues: Check minconn first in srv_dynamic_maxconn()
In srv_dynamic_maxconn(), we'll decide that the max number of connections
is the server's maxconn if 1) the proxy's number of connections is over
fullconn, or 2) minconn was not set.
Check if minconn is not set first, as it will be true most of the time,
and as the proxy's "beconn" variable is in a busy cache line, it can be
costly to access it, while minconn/maxconn is in a cache line that
should very rarely change.
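The reordering amounts to evaluating the cheap condition first so that
short-circuit evaluation skips the costly read (a sketch; the actual
expression in the function is more involved):

  /* before */
  if (s->proxy->beconn >= s->proxy->fullconn || !s->minconn)
          max = s->maxconn;

  /* after: minconn sits in a mostly read-only cache line, test it first */
  if (!s->minconn || s->proxy->beconn >= s->proxy->fullconn)
          max = s->maxconn;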
2026-02-12 02:18:59 +01:00
Willy Tarreau
c622ed23c8 DOC: proxy-proto: underline the packed attribute for struct pp2_tlv_ssl
Oto Valek rightfully reported in issue #3262 that the proxy-protocol
doc makes no mention of the packed attribute on struct pp2_tlv_ssl,
which is mandatory since fields are not type-aligned in it. Let's
add it to the definition and make an explicit mention of it to
save implementers from wasting their time trying to debug this.

It can be backported.
2026-02-11 14:46:55 +01:00
Egor Shestakov
f5f9c008b1 CLEANUP: initcall: adjust comments to INITCALL{0,1} macros
It is likely that copy-pasted text was not initially adjusted in the
comments, which is misleading regarding the number of arguments.

No backport needed.
2026-02-11 09:10:56 +01:00
William Lallemand
ea92b0ef01 BUG/MINOR: ssl: SSL_CERT_DIR environment variable doesn't affect haproxy
The documentation of @system-ca specifies that one can overwrite the
value provided by the SSL Library using SSL_CERT_DIR.

However it seems like X509_get_default_cert_dir() is not affected by
this environment variable, and X509_get_default_cert_dir_env() needs to
be used in order to get the variable name and read the value manually.

This could be backported in every stable branches. Note that older
branches don't have the memprintf in ssl_sock.c.
2026-02-10 21:34:45 +01:00
William Lallemand
2ac0d12790 MINOR: startup: Add the SSL lib verify directory in haproxy -vv
SSL libraries built manually might lack the right
X509_get_default_cert_dir() value.

The common way to fix the problem is to build openssl with
./configure --openssldir=/etc/ssl/

In order to verify this setting, output it with haproxy -vv.
2026-02-10 21:06:38 +01:00
Willy Tarreau
c724693b95 MINOR: activity: allow to switch per-task lock/memory profiling at runtime
Given that we already have "set profiling task", it's easy to permit
enabling/disabling the lock and/or memory profiling at run time. However, the
change will only be applied the next time task profiling is switched
from off/auto to on.

The patch is very minor and is best viewed with git show -b because it
indents a whole block that moves in a "if" clause.

This can be backported to 3.3 along with the two previous patches.
2026-02-10 17:53:01 +01:00
Willy Tarreau
e2631ee5f7 MEDIUM: activity: apply and use new finegrained task profiling settings
In continuity with the previous patch, this one makes use of the new profiling
flags. For this, based on the global "profiling" setting, when switching
profiling on, we set or clear two flags on the thread context,
TH_FL_TASK_PROFILING_L and TH_FL_TASK_PROFILING_M to indicate whether
lock profiling and/or malloc profiling are desired when profiling is
enabled. These flags are checked along with TH_FL_TASK_PROFILING to
decide when to collect time around a lock or a malloc. And by default
we're back to the behavior of 3.2 in that neither lock nor malloc times
are collected anymore.

This is sufficient to see the CPU usage spent in the VDSO to significantly
drop from 22% to 2.2% on a highly loaded system.

This should be backported to 3.3 along with the previous patch.
2026-02-10 17:52:59 +01:00
Willy Tarreau
a7b2353cb3 MINOR: activity: support setting/clearing lock/memory watching for task profiling
Damien Claisse reported in issue #3257 a performance regression between
3.2 and 3.3 when task profiling is enabled, more precisely after the
following patches were merged:

  98cc815e3e ("MINOR: activity: collect time spent with a lock held for each task")
  503084643f ("MINOR: activity: collect time spent waiting on a lock for each task")
  9d8c2a888b ("MINOR: activity: collect CPU time spent on memory allocations for each task")

The issue mostly comes from the first patches. What happens is that the
local time is taken when entering and leaving each lock, which costs a
lot on a contended system. The problem here is the lack of finegrained
settings for lock and malloc profiling.

This patch introduces a better approach. The task profiler goes back to
its default behavior in on/auto modes, but the configuration now accepts
new extra options "lock", "no-lock", "memory", "no-memory" to precisely
indicate other timers to watch for each task when profiling turns on.

This is achieved by setting two new flags HA_PROF_TASKS_LOCK and
HA_PROF_TASKS_MEM in the global "profiling" variable.

This patch only parses the new values and assigns them to the global
variable from the config file for now. The doc was updated.
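A hypothetical sketch of the token-to-flag mapping; only the option names
and the HA_PROF_TASKS_LOCK/HA_PROF_TASKS_MEM flags come from this commit,
the parsing context and variable names are illustrative:

  if (strcmp(arg, "lock") == 0)
          profiling |= HA_PROF_TASKS_LOCK;
  else if (strcmp(arg, "no-lock") == 0)
          profiling &= ~HA_PROF_TASKS_LOCK;
  else if (strcmp(arg, "memory") == 0)
          profiling |= HA_PROF_TASKS_MEM;
  else if (strcmp(arg, "no-memory") == 0)
          profiling &= ~HA_PROF_TASKS_MEM;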
2026-02-10 17:47:02 +01:00
Willy Tarreau
3b45beb465 CLEANUP: haproxy: fix bad line wrapping in run_poll_loop()
Commit 3674afe8a0 ("BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING
and clr FL_NOTIFIED") accidentally left a strange-looking line wrapping
making one think of an editing mistake, let's fix it and keep it on a
single line given that even indented wrapping is almost as large.

This can be backported with the fix above till 2.8 to keep the patch
context consistent between versions.
2026-02-10 14:11:42 +01:00
Willy Tarreau
64c5d45a26 BUG/MEDIUM: lb-chash: always properly initialize lb_nodes with dynamic servers
An issue was introduced in 3.0 with commit faa8c3e024 ("MEDIUM: lb-chash:
Deterministic node hashes based on server address"): the new server_key
field and lb_nodes entries initialization were not updated for servers
added at run time with "add server": server_key remains zero and the key
used in lb_node remains the one depending only on the server's ID.

This will cause trouble when adding new servers with consistent hashing,
because the hash-key will be ignored until the server's weight changes
and the key difference is detected, leading to its recalculation.

This is essentially caused by the poorly placed lb_nodes initialization
that is specific to lb-chash and had to be replicated in the code dealing
with server addition.

This commit solves the problem by adding a new ->server_init() function
in the lbprm proxy struct, that is called by the server addition code.
This also allows to abandon the complex check for LB algos that was
placed there for that purpose. For now only lb-chash provides such a
function, and calls it as well during initial setup. This way newly
added servers always use the correct key now.
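Schematically, the server-addition path now only needs something like the
following (the ->server_init() member is named in this commit; the call-site
shape and error convention are assumptions):

  /* let the LB algorithm prepare its per-server data (lb_nodes and
   * server_key for chash), both at config time and on "add server" */
  if (px->lbprm.server_init && px->lbprm.server_init(srv) < 0)
          goto fail;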

While it should also theoretically have had an impact on servers added
with the "random" algorithm, it's unlikely that the difference between
proper server keys and those based on their ID could have had any visible
effect.

This patch should be backported as far as 3.0. The backport may be eased
by a preliminary backport of previous commit "CLEANUP: lb-chash: free
lb_nodes from chash's deinit(), not global", though this is not strictly
necessary if context is manually adjusted.
2026-02-10 07:22:54 +01:00
Willy Tarreau
62239539bf CLEANUP: lb-chash: free lb_nodes from chash's deinit(), not global
There's an ambiguity in the ownership of lb_nodes in chash: it's allocated
by chash but freed by the server code in srv_free_params() from srv_drop()
upon deinit. Let's move this free() call to a chash-specific function
which will own the responsibility for doing this instead. Note that
the .server_deinit() callback is properly called both on proxy being
taken down and on server deletion.
2026-02-10 07:20:50 +01:00
Amaury Denoyelle
91a5b67b25 BUG/MINOR: proxy: fix default ALPN bind settings
For "add backend" implementation, postparsing code in
check_config_validity() from cfgparse.c has been extracted in a new
dedicated function named proxy_finalize() into proxy.c.

This has caused an unexpected compilation issue as in the latter file
the TLSEXT_TYPE_application_layer_protocol_negotiation macro may be
undefined, in particular when building without QUIC support. Thus, code
related to default ALPN on binds is discarded after the preprocessing
stage.

Fix this by including openssl-compat header file into proxy source file.
This should be sufficient to ensure SSL related defines are properly
included.

This should fix recent issues on SSL regtests.

No need to backport.
2026-02-09 13:52:25 +01:00
Willy Tarreau
ecffaa6d5a MINOR: net_helper: extend the ip.fp output with an option presence mask
Emeric suggested that it's sometimes convenient to instantly know if a
client has advertised support for window scaling or timestamps for
example. While the info is present in the TCP options output, it's hard
to extract since it respects the options order.

So here we're extending the 56-bit fingerprint with 8 extra bits that
indicate the presence of options 2..8, and any option above 9 for the
last bit. In practice this is sufficient since higher options are not
commonly used. Also TCP option 5 is normally not sent on the SYN (SACK,
only SACK_perm is sent), and echo options 6 & 7 are no longer used
(replaced with timestamps). These fields might be repurposed in the
future if some more meaningful options are to be mapped (e.g. MPTCP,
TFO cookie, auth).
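A rough sketch of how such a presence mask could be built (the exact bit
assignment in the patch may differ; this is only an illustration):

  /* bits 0..6 flag the presence of TCP option kinds 2..8, bit 7 flags any
   * higher option kind */
  static unsigned char tcp_opt_presence_mask(const unsigned char *kinds, int nb)
  {
          unsigned char mask = 0;
          int i;

          for (i = 0; i < nb; i++) {
                  if (kinds[i] >= 2 && kinds[i] <= 8)
                          mask |= 1 << (kinds[i] - 2);
                  else if (kinds[i] > 8)
                          mask |= 1 << 7;
          }
          return mask;
  }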
2026-02-09 09:18:04 +01:00
Amaury Denoyelle
a1db464c3e BUG/MINOR: proxy: fix null dereference in "add backend" handler
When a backend is created at runtime, the new proxy instance is inserted
at the end of proxies_list. This operation is buggy if the list is
empty: the code causes a null dereference which will lead to a crash.
This is reported by the following compiler diagnostic:

  CC      src/proxy.o
src/proxy.c: In function 'cli_parse_add_backend':
src/proxy.c:4933:36: warning: null pointer dereference [-Wnull-dereference]
 4933 |                 proxies_list->next = px;
      |                 ~~~~~~~~~~~~~~~~~~~^~~~

This patch fixes this issue. Note that in reality it cannot occur at
this moment as proxies_list cannot be empty (haproxy requires at least
one frontend to start, and the list also always contains internal
proxies).
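A simplified sketch of an append that tolerates an empty list (illustrative
only, not the exact patch):

  struct proxy *last = proxies_list;

  px->next = NULL;
  if (!last) {
          proxies_list = px;
  } else {
          while (last->next)
                  last = last->next;
          last->next = px;
  }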

No need to backport.
2026-02-06 21:35:12 +01:00
Amaury Denoyelle
5dff6e439d BUG/MINOR: proxy: fix clang build error on "add backend" handler
This patch fixes the following compilation error:
  src/proxy.c:4954:12: error: format string is not a string literal
        (potentially insecure) [-Werror,-Wformat-security]
   4954 |         ha_notice(msg);
        |                   ^~~
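The usual fix for this class of warning is to pass the dynamic string
through a literal format, along the lines of:

  ha_notice("%s", msg);   /* instead of ha_notice(msg); */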

No need to backport.
2026-02-06 21:17:18 +01:00
Amaury Denoyelle
d7cdd2c7f4 REGTESTS: add dynamic backend creation test
Add a new regtest to validate backend creation at runtime. A server is
then added and requests made to validate the newly created instance
before (with force-be-switch) and after publishing.
2026-02-06 17:28:27 +01:00
Amaury Denoyelle
5753c14e84 MINOR: proxy: assign dynamic proxy ID
Implement proxy ID generation for dynamic backends. This is performed
through the already existing function proxy_get_next_id().

As an optimization, the lookup will be performed starting from a global
variable <dynpx_next_id>. It is initialized to the greatest ID assigned
after parsing, and updated each time a backend instance is created. When
backend deletion is implemented, it could be lowered to the newly
available slot.
2026-02-06 17:28:27 +01:00
Amaury Denoyelle
3115eb82a6 MEDIUM: proxy: implement dynamic backend creation
Implement the required operations for "add backend" handler. This
requires a new proxy allocation, settings copy from the specified
default instance and proxy config finalization. All handlers registered
via REGISTER_POST_PROXY_CHECK() are also called on the newly created
instance.

If no error was encountered, the newly created proxy is finally
attached to the proxies list.
2026-02-06 17:28:27 +01:00
Amaury Denoyelle
07195a1af4 MINOR: proxy: check default proxy compatibility on "add backend"
This commit completes the "add backend" handler with some checks performed
on the specified default proxy instance. These are additional checks,
specific to dynamic backends, outside of the already existing inheritance
rules.

For now, a default proxy is considered not compatible if it is not in
TCP/HTTP mode. Also, a default proxy is rejected if it references HTTP
errors. This limitation may be lifted in the future, when HTTP errors
are partially reworked.
2026-02-06 17:28:26 +01:00
Amaury Denoyelle
a603811aac MINOR: proxy: parse guid on dynamic backend creation
Defines an extra optional GUID argument for "add backend" command. This
can be useful as it is not possible to define it via a default proxy
instance.
2026-02-06 17:28:04 +01:00
Amaury Denoyelle
e152913327 MINOR: proxy: parse mode on dynamic backend creation
Add an optional "mode" argument to "add backend" CLI command. This
argument allows to specify if the backend is in TCP or HTTP mode.

By default, it is mandatory, unless the inherited default proxy already
explicitely specifies the mode. To differentiate if TCP mode is implicit
or explicit, a new proxy flag PR_FL_DEF_EXPLICIT_MODE is defined. It is
set for every defaults instances which explicitely defined their mode.
2026-02-06 17:27:50 +01:00
Amaury Denoyelle
7ac5088c50 MINOR: proxy: define "add backend" handler
Define a basic CLI handler for "add backend".

For now, this handler only parses the name argument and returns an error
if a duplicate already exists. It runs under thread isolation, to
guarantee thread safety during the proxy creation.

This feature is considered in development. The CLI command requires
experimental-mode to be set.
2026-02-06 17:26:55 +01:00
Amaury Denoyelle
817003aa31 MINOR: backend: add function to check support for dynamic servers
Move the backend compatibility checks performed during 'add server' into a
dedicated function be_supports_dynamic_srv(). This should simplify the
addition of future restrictions.

This function will be reused when implementing backend creation at
runtime.
2026-02-06 14:35:19 +01:00
Amaury Denoyelle
dc6cf224dd MINOR: proxy: refactor mode parsing
Define a new utility function str_to_proxy_mode() which is able to
convert a string into the corresponding proxy mode if possible. This new
function is used for the parsing of "mode" configuration proxy keyword.
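A minimal sketch of what such a helper can look like (only the function name
and the PR_MODE_* values come from the tree; the signature and return
convention shown here are assumptions):

  static int str_to_proxy_mode(const char *str)
  {
          if (strcmp(str, "tcp") == 0)
                  return PR_MODE_TCP;
          if (strcmp(str, "http") == 0)
                  return PR_MODE_HTTP;
          return -1;  /* unknown mode string */
  }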

This patch will be reused for dynamic backend implementation, in order
to parse a similar "mode" argument via a CLI handler.
2026-02-06 14:35:18 +01:00
Amaury Denoyelle
87ea407cce MINOR: proxy: refactor proxy inheritance of a defaults section
If a proxy is referencing a defaults instance, some checks must be
performed to ensure that inheritance will be compatible. Refcount of the
defaults instance may also be incremented if some settings cannot be
copied. This operation is performed when parsing a new proxy or defaults
section which references a defaults, either implicitly or explicitly.

This patch extracts this code into a dedicated function named
proxy_ref_defaults(). This in turn may call defaults_px_ref()
(previously called proxy_ref_defaults()) to increment its refcount.

The objective of this patch is to be able to reuse defaults inheritance
validation for dynamic backends created at runtime, outside of the
parsing code.
2026-02-06 14:35:18 +01:00
Amaury Denoyelle
a8bc83bea5 MINOR: cfgparse: move proxy post-init in a dedicated function
A lot of proxy initialization code is delayed to the post-parsing stage,
as it depends on the configuration being fully parsed. This is performed via a
loop on proxies_list.

Extract this code in a dedicated function proxy_finalize(). This patch
will be useful for dynamic backends creation.

Note that for the moment the code has been extracted as-is. With each
new feature, some init code was added there, and this has become a giant
loop with no real ordering. A future patch may provide some cleanup in
order to reorganize this.
2026-02-06 14:35:18 +01:00
Amaury Denoyelle
2c8ad11b73 MINOR: cfgparse: validate defaults proxies separately
Default proxies validation occurs during post-parsing. The objective is
to report any tcp/http-rules which could not behave as expected.

Previously, this was performed while looping over the standard proxies
list, when such a proxy references a defaults instance. This was enough as
only referenced named defaults were kept after parsing. However, this is
not the case anymore in the context of dynamic backends creation at
runtime.

As such, this patch now performs validation on every named defaults
outside of the standard proxies list loop. This should not cause any
behavior difference, as defaults are validated without using the proxy
which relies on it.

Along with this change, the PR_FL_READY proxy flag is now removed. Its usage
was only really needed for defaults, to avoid validating the same instance
multiple times. With the validation of defaults in their own loop, it is
now redundant.
2026-02-06 14:35:18 +01:00
Egor Shestakov
2a07dc9c24 BUG/MINOR: startup: handle a possible strdup() failure
Fix an unhandled strdup() failure when initializing global.log_tag.

The bug was introduced with the UAF fix for the global progname pointer
in 351ae5dbe. So it must be backported as far as 3.1.
2026-02-06 10:50:31 +01:00
Egor Shestakov
9dd7cf769e BUG/MINOR: startup: fix allocation error message of progname string
Initially, when init_early was introduced, the progname string was a local
variable used for temporary storage of log_tag. Now it's global and
sufficiently detached from log_tag. Thus, in the past we could report that
the log_tag allocation had failed, but not anymore.

Must be backported since the progname string became global, that is
v3.1-dev9-96-g49772c55e
2026-02-06 10:50:31 +01:00
Olivier Houchard
bf7a2808fc BUG/MEDIUM: threads: Differ checking the max threads per group number
Defer checking the max threads per group number until we're done
parsing the configuration file, as it may be set after a "thread-group"
directive. Otherwise the default value of 64 will be used, even if there
is a max-threads-per-group directive.

This should be backported to 3.3.
2026-02-06 03:01:50 +01:00
Olivier Houchard
9766211cf0 BUG/MINOR: threads: Initialize maxthrpertgroup earlier.
Give global.maxthrpertgroup its default value at global creation,
instead of later when we're trying to detect the thread count.
It is used when verifying the configuration file validity, and if it was
not set in the config file, in a few corner cases, the value of 0 would
be used, which would then reject perfectly fine configuration files.

This should be backported to 3.3.
2026-02-06 03:01:36 +01:00
William Lallemand
9e023ae930 DOC: internals: addd mworker V3 internals
Document the mworker V3 implementation introduced in HAProxy 3.1.
Explains the rationale behind moving configuration parsing out of the
master process to improve robustness.

Could be backported to 3.1.
2026-02-04 16:39:44 +01:00
Willy Tarreau
68e9fb73fd [RELEASE] Released version 3.4-dev4
Released version 3.4-dev4 with the following main changes :
    - BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
    - BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
    - BUG/MINOR: promex: Detach promex from the server on error dump its metrics dump
    - BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formating the start line
    - BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
    - BUG/MINOR: config: check capture pool creations for failures
    - BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
    - MEDIUM: pools: better check for size rounding overflow on registration
    - DOC: reg-tests: update VTest upstream link in the starting guide
    - BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
    - BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
    - MINOR: ssl: display libssl errors on private key loading
    - BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
    - MINOR: ssl: allow to disable certificate compression
    - BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
    - DOC: config: mention some possible TLS versions restrictions for kTLS
    - OPTIM: server: move queueslength in server struct
    - OPTIM: proxy: separate queues fields from served
    - OPTIM: server: get rid of the last use of _ha_barrier_full()
    - DOC: config: mention that idle connection sharing is per thread-group
    - MEDIUM: h1: strictly verify quoting in chunk extensions
    - BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
    - BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
    - MEDIUM: ssl: remove connection from msg callback args
    - MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
    - REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
    - DOC: internals: cleanup few typos in master-worker documentation
    - BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
    - MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
    - MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
    - BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
    - BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
    - DOC: config: mention the limitation on server id range for consistent hash
    - MEDIUM: backend: make "balance random" consider req rate when loads are equal
    - BUG/MINOR: config: Fix setting of alt_proto
2026-02-04 14:59:47 +01:00
Aperence
143f5a5c0d BUG/MINOR: config: Fix setting of alt_proto
This patch fixes the bug presented in issue #3254
(https://github.com/haproxy/haproxy/issues/3254), which
occurred on FreeBSD when using a stream socket in a
nameserver section. This bug occurred due to an incorrect
reset of alt_proto for a stream socket when the default
socket is created as a datagram socket. This patch fixes
the bug by doing a late assignment to alt_proto when
a datagram socket is requested, leaving only the modification
of alt_proto done by MPTCP. Additional documentation has
also been added to clarify the use of the alt_proto
variable.
2026-02-04 14:54:20 +01:00
Willy Tarreau
b6bdb2553b MEDIUM: backend: make "balance random" consider req rate when loads are equal
As reported by Damien Claisse and Cédric Paillet, the "random" LB
algorithm can become particularly unfair with large numbers of servers
having few connections. It's indeed fairly common to see many servers
with zero connection in a thousand-server large farm, and in this case
the P2C algo consisting in checking the servers' loads doesn't help at
all and is basically similar to random(1). In this case, we only rely
on the distribution of server IDs in the random space to pick the best
server, but it's possible to observe huge discrepancies.

An attempt to model the problem clearly shows that with 1600 servers
with weight 10, for 1 million requests, the lowest loaded ones will
take 300 req while the most loaded ones will get 780, with most of
the values between 520 and 700.

In addition, only the lowest 28 bits of server IDs are used for
the key calculation, which means that node keys are more deterministic.
Setting random keys in the lowest 28 bits only better packs values
with min around 530 and max around 710, with values mostly between
550 and 680.

This can only be compensated by increasing weights and draws without
being a perfect fix either. At 4 draws, the min is around 560 and the
max around 670, with most values between 590 and 650.

This patch takes another approach to this problem: when servers are
tied regarding their loads, instead of arbitrarily taking the second one,
we now compare their current request rates, which are updated all the
time and smoothed over one second, and we pick the server with the
lowest request rate. Now with 2 draws, the curve is mostly flat, with
the min at 580 and the max at 628, and almost all values between 611
and 625. And 4 draws exclusively gives values from 614 to 624.
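A schematic view of the tie-break described above (standalone sketch, not
HAProxy's code):

  struct candidate {
          unsigned int load;      /* currently served connections */
          unsigned int req_rate;  /* request rate smoothed over 1s */
  };

  /* power-of-two-choices: compare the two drawn servers on load first and,
   * only when loads are equal, fall back to the smoothed request rate */
  static const struct candidate *pick(const struct candidate *a,
                                      const struct candidate *b)
  {
          if (a->load != b->load)
                  return (a->load < b->load) ? a : b;
          return (a->req_rate <= b->req_rate) ? a : b;
  }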

Other points will need to be addressed separately (bits of server ID,
maybe refine the hash algorithm), but these ones would affect how
caches are selected, and cannot be changed without an extra option.
For random however we can perform a change without impacting anyone.

This should be backported, probably only to 3.3 since it's where the
"random" algo became the default.
2026-02-04 14:54:16 +01:00
Willy Tarreau
3edf600859 DOC: config: mention the limitation on server id range for consistent hash
When using "hash-type consistent", we default to using the server's ID
as the insertion key. However, that key is scaled to avoid collisions
when inserting multiple slots for a server (16 per weight unit), and
that scaling loses the 4 topmost bits of the ID, so the only effective
range of IDs is 1..268435456, and anything above will provide the same
hashing keys again.

Let's mention this in the documentation, and also remind that it can
affect "balance random". This can be backported to all versions.
2026-02-04 14:54:16 +01:00
Willy Tarreau
cddeea58cd BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
The per-cpu capacity of a cluster was taken into account since 3.2 with
commit 6c88e27cf4 ("MEDIUM: cpu-topo: change "performance" to consider
per-core capacity").

In cpu_policy_performance() and cpu_policy_efficiency(), we're trying
to figure which cores have more capacity than others by comparing their
cluster's average capacity. However, contrary to what the comment says,
we're not averaging per core but per cpu, which makes a difference for
CPUs mixing SMT with non-SMT cores on the same SoC, such as intel's 14th
gen CPUs. Indeed, on a machine where cpufreq is not enabled, all CPUs
can be reported with a capacity of 1024, resulting in a big cluster of
16*1024, and 4 small clusters of 4*1024 each, giving an average of 1024
per CPU, making it impossible to distinguish one from the other. In this
situation, both "cpu-policy performance" and "cpu-policy efficiency"
enable all cores.

But this is wrong, what needs to be taken into account in the divide is
the number of *cores*, not *cpus*, that allows to distinguish big from
little clusters. This was not noticeable on the ARM machines the commit
above aimed at fixing because there, the number of CPUs equals the number
of cores. And on an x86 machine with cpu_freq enabled, the frequencies
continue to help spotting which ones are big/little.
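To illustrate with the numbers above (assuming the 16-CPU cluster is made of
8 SMT cores while each little cluster is 4 single-thread cores): averaging per
CPU gives 16*1024/16 = 1024 for the big cluster and 4*1024/4 = 1024 for the
small ones, i.e. indistinguishable, whereas averaging per core gives
16*1024/8 = 2048 versus 4*1024/4 = 1024, which again exposes the big/little
difference.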

By using nb_cores instead of nb_cpus in the comparison and in the avg_capa
compare function, it properly works again on x86 without affecting other
machines with 1 CPU per core.

This can be backported to 3.2.
2026-02-04 08:49:18 +01:00
Olivier Houchard
3674afe8a0 BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
When we're about to enter polling, atomically set TH_FL_SLEEPING and
remove TH_FL_NOTIFIED, instead of doing it in sequence. Otherwise,
another thread may see that both the TH_FL_SLEEPING and the
TH_FL_NOTIFIED bits are set, and not wake up the thread when it should.
This prevents a bug where a thread is sleeping while it should be
handling a new connection, which can happen if there are very few
incoming connections. This is easy to reproduce when using only two
threads and injecting with only one connection: the connection may then
never be handled.
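A generic sketch of the single read-modify-write, using plain C11 atomics
rather than HAProxy's macros and shortened flag names:

  #include <stdatomic.h>

  #define FL_SLEEPING  0x01u
  #define FL_NOTIFIED  0x02u

  static void enter_polling(_Atomic unsigned int *flags)
  {
          unsigned int old = atomic_load(flags);
          unsigned int new;

          /* set SLEEPING and clear NOTIFIED in one atomic step so another
           * thread can never observe both bits set at the same time */
          do {
                  new = (old | FL_SLEEPING) & ~FL_NOTIFIED;
          } while (!atomic_compare_exchange_weak(flags, &old, new));
  }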

This should be backported up to 2.8.
2026-02-04 07:13:06 +01:00
Hyeonggeun Oh
2527d9dcd1 MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
This patch adds a new 'post-80' option that sets the
CLIENT_PLUGIN_AUTH (0x00080000) capability flag
and explicitly specifies mysql_native_password as
the authentication plugin in the handshake response.

This patch also adds documentation for the post-80 option
supporting MySQL 8.x, which uses the new default auth
plugin caching_sha2_password.

MySQL 8.0 changed the default authentication plugin from
mysql_native_password to caching_sha2_password.
The current mysql-check implementation only supports pre-41
and post-41 client auth protocols, which lack the CLIENT_PLUGIN_AUTH
capability flag. When HAProxy sends a post-41 authentication
packet to a MySQL 8.x server, the server responds with error 1251:
"Client does not support authentication protocol requested by server".

The new client capabilities for post-80 are:
- CLIENT_PROTOCOL_41 (0x00000200)
- CLIENT_SECURE_CONNECTION (0x00008000)
- CLIENT_PLUGIN_AUTH (0x00080000)

Usage example:
backend mysql_servers
	option mysql-check user haproxy post-80
	server db1 192.168.1.10:3306 check

The health check user must be created with mysql_native_password:
CREATE USER 'haproxy'@'%' IDENTIFIED WITH mysql_native_password BY '';

This addresses https://github.com/haproxy/haproxy/issues/2934.
2026-02-03 07:36:53 +01:00
Olivier Houchard
f26562bcb7 MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
Commit fa094d0b61 changed the msg callback
args, but forgot to fix quic_tls_msg_callback() accordingly, so do that,
and remove the unused struct connection parameter.
2026-02-03 04:05:34 +01:00
Christopher Faulet
abc1947e19 BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
A regression was introduced in the commit 0ea601127 ("BUG/MAJOR: applet: Don't
call I/O handler if the applet was shut"). The test on shut flags for legacy
applets is inverted.

It should be harmless on 3.4 and 3.3 because all applets were converted. But
this fix is mandatory for 3.2 and older.

The patch must be backported as far as 3.0 with the commit above.
2026-01-30 09:55:18 +01:00
Egor Shestakov
02e6375017 DOC: internals: cleanup few typos in master-worker documentation
s/mecanism/mechanism
s/got ride/got rid
s/traditionnal/traditional

One typo is a confusion between master and worker that results in a
semantic mistake in the sentence:

"...the master will emit an "exit-on-failure" error and will kill every
workers with a SIGTERM and exits with the same error code than the
failed [-master-]{+worker+}..."

Should be backported as far as 3.1.
2026-01-29 18:44:40 +01:00
William Lallemand
da728aa0f6 REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
OpenSSL 4.0 changed the way it stores objects in X509_STORE structures
and no longer allows iterating over objects in insertion order.

This means that the order of the objects is not the same before and after
OpenSSL 4.0, and the reg-tests need to handle both cases.
2026-01-29 17:08:45 +01:00
William Lallemand
23e8ed6ea6 MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
OpenSSL 4.0 is deprecating X509_STORE_get0_objects().

Every occurrence of X509_STORE_get0_objects() was first replaced by
X509_STORE_get1_objects().
This changes the ref count of the STACK_OF(X509_OBJECT) everywhere, and
requires freeing it with sk_X509_OBJECT_pop_free(objs, X509_OBJECT_free)
each time.

X509_STORE_get1_objects() is not available in AWS-LC, OpenSSL < 3.2,
LibreSSL and WolfSSL, so we need to still be compatible with get0.
To achieve this, 2 macros were added, X509_STORE_getX_objects() and
sk_X509_OBJECT_popX_free(); these macros will use either the get0 or the
get1 variant depending on their availability. In the case of get0,
sk_X509_OBJECT_popX_free() will just do nothing instead of trying to
free.
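A hedged sketch of these compatibility macros (the getX/popX names come from
this commit; the availability test shown here is only an assumption about how
the detection could be expressed):

  #if defined(HAVE_X509_STORE_GET1_OBJECTS)   /* assumed detection macro */
  #define X509_STORE_getX_objects(st)       X509_STORE_get1_objects(st)
  #define sk_X509_OBJECT_popX_free(sk, fn)  sk_X509_OBJECT_pop_free((sk), (fn))
  #else
  #define X509_STORE_getX_objects(st)       X509_STORE_get0_objects(st)
  /* get0 does not take a reference, so there is nothing to free */
  #define sk_X509_OBJECT_popX_free(sk, fn)  do { } while (0)
  #endif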

Don't backport that unless really needed if we want to be compatible
with OpenSSL 4.0. It changes all the refcounts.
2026-01-29 17:08:41 +01:00
Amaury Denoyelle
fa094d0b61 MEDIUM: ssl: remove connection from msg callback args
SSL msg callbacks are used for notification about sent/received SSL
messages. Such callbacks are registered via
ssl_sock_register_msg_callback().

Prior to this patch, the connection was passed as the first argument of these
callbacks. However, most of them do not use it. Worse, this may lead to
confusion as the connection can be NULL in a QUIC context.

This patch cleans this by removing connection argument. As an
alternative, connection can be retrieved in callbacks if needed using
ssl_sock_get_conn() but the code must be ready to deal with potential
NULL instances. As an example, heartbeat parsing callback has been
adjusted in this manner.
2026-01-29 11:14:09 +01:00
Amaury Denoyelle
869a997a68 BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
With the QUIC backend implementation, SSL code has been adjusted in several
places when accessing the connection instance. Indeed, with QUIC usage, the
SSL context is tied to the quic_conn, and code may be executed before/after
connection instantiation. For example, on the frontend side, the connection
is only created after QUIC handshake completion.

The following patch tried to fix unsafe accesses to connection. In
particular, msg callbacks are not called anymore if connection is NULL.

  fab7da0fd0
  BUG/MEDIUM: quic-be/ssl_sock: TLS callback called without connection

However, most msg callbacks do not need to use the connection instance.
The only occurrence where it is accessed is for heartbeat message
parsing, which is the only case of crash solved. The above fix is too
restrictive as it completely prevents execution of these callbacks when
the connection is unset. This breaks several features with QUIC, such as
SSL key logging or samples based on ClientHello capture.

The current patch reverts the above one. Thus, this restores the invocation
of msg callbacks for QUIC during the whole low-level connection
lifetime. This requires a small adjustment in the heartbeat parsing callback
to prevent access to a NULL connection.

The issue on ClientHello capture was mentioned in github issue #2495.

This must be backported up to 3.3.
2026-01-29 11:14:09 +01:00
Willy Tarreau
48d9c90ff2 BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
The help message for "ktls" mentions "expose-experimental-directive"
without the final 's', which is particularly annoying when copy-pasting
the directive from the error message directly into the config.

This should be backported to 3.3.
2026-01-29 11:07:55 +01:00
Willy Tarreau
35d63cc3c7 MEDIUM: h1: strictly verify quoting in chunk extensions
As reported by Ben Kallus in the following thread:

   https://www.mail-archive.com/haproxy@formilux.org/msg46471.html

there exist some agents which mistakenly accept CRLF inside quoted
chunk extensions, making it possible to fool them by injecting one
extra chunk they won't see for example, or making them miss the end
of the body depending on how it's done. Haproxy, like most other
agents nowadays, doesn't care at all about chunk extensions and just
drops them, in agreement with the spec.

However, as discussed, since chunk extensions are basically never used
except for attacks, and that the cost of just matching quote pairs and
checking backslashed quotes is escape consistency remains relatively
low, it can make sense to add such a check to abort the message parsing
when this situation is encountered. Note that it has to be done at two
places, because there is a fast path and a slow path for chunk parsing.

Also note that it *will* cause transfers using improperly formatted chunk
extensions to fail, but since these are really not used, and that the
likelihood of them being used but improperly quoted certainly is much
lower than the risk of crossing a broken parser on the client's request
path or on the server's response path, we consider the risk as
acceptable. The test is not subject to the configurable parser exceptions
and it's very unlikely that it will ever be needed.

Since this is done in 3.4 which will be LTS, this patch will have to be
backported to 3.3 so that any unlikely trouble gets a chance to be
detected before users upgrade to 3.4.

Thanks to Ben for the discussion, and to Rajat Raghav for sparking it
in the first place even though the original report was mistaken.

Cc: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Cc: Rajat Raghav <xclow3n@gmail.com>
Cc: Christopher Faulet <cfaulet@haproxy.com>
2026-01-28 18:54:23 +01:00
Willy Tarreau
bb36836d76 DOC: config: mention that idle connection sharing is per thread-group
There's already a tunable "tune.idle-pool.shared" allowing to enable or
disable idle connection sharing between threads. However the doc does not
mention that these connections are only shared between threads of the same
thread group, since 2.7 with commit 15c5500b6e ("MEDIUM: conn: make
conn_backend_get always scan the same group"). Let's clarify this and
also give a hint about "max-threads-per-group" which can be helpful for
machines with unified caches.
2026-01-28 17:18:50 +01:00
Willy Tarreau
a79a67b52f OPTIM: server: get rid of the last use of _ha_barrier_full()
The code in srv_add_to_idle_list() has its roots in 2.0 with commit
9ea5d361ae ("MEDIUM: servers: Reorganize the way idle connections are
cleaned."). At this era we didn't yet have the current set of atomic
load/store operations and we used to perform loads using volatile casts
after a barrier. It turns out that this function has kept this schema
over the years, resulting in a big mfence stalling all the pipeline
in the function:

       |     static __inline void
       |     __ha_barrier_full(void)
       |     {
       |     __asm __volatile("mfence" ::: "memory");
 27.08 |       mfence
       |     if ((volatile void *)srv->idle_node.node.leaf_p == NULL) {
  0.84 |       cmpq    $0x0,0x158(%r15)
  0.74 |       je      35f
       |     return 1;

Switching these for a pair of atomic loads got rid of this and brought
0.5 to 3% extra performance depending on the tests due to variations
elsewhere, but it has never been below 0.5%. Note that the second load
doesn't need to be atomic since it's protected by the lock, but it's
cleaner from an API and code review perspective. That's also why it's
relaxed.
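Schematically, the change replaces a full barrier plus a volatile read with a
single atomic load (illustrative, using a GCC builtin instead of HAProxy's
wrappers; the exact memory ordering used in the patch may differ):

  /* before: __ha_barrier_full(); then a volatile read of leaf_p */
  if (__atomic_load_n(&srv->idle_node.node.leaf_p, __ATOMIC_ACQUIRE) == NULL)
          return 1;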

This was the last user of _ha_barrier_full(), let's try not to
reintroduce it now!
2026-01-28 16:07:27 +00:00
Willy Tarreau
a9df6947b4 OPTIM: proxy: separate queues fields from served
There's still a lot of contention when accessing the backend's
totpend and queueslength for every request in may_dequeue_tasks(),
even when queues are not used. This only happens because it's stored
in the same cache line as ->beconn which is being written by other
threads:

  0.01 |        call       sess_change_server
  0.02 |        mov        0x188(%r15),%esi     ## s->queueslength
       |      if (may_dequeue_tasks(srv, s->be))
  0.00 |        mov        0xa8(%r12),%rax
  0.00 |        mov        -0x50(%rbp),%r11d
  0.00 |        mov        -0x60(%rbp),%r10
  0.00 |        test       %esi,%esi
       |        jne        3349
  0.01 |        mov        0xa00(%rax),%ecx     ## p->queueslength
  8.26 |        test       %ecx,%ecx
  4.08 |        je         288d

This patch moves queueslength and totpend to their own cache line,
thus adding 64 bytes to the struct proxy, but gaining 3.6% of RPS
on a 64-core EPYC thanks to the elimination of this false sharing.
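The layout change boils down to something like the following sketch (generic
alignment attribute shown for illustration; the real struct proxy achieves
this through field placement):

  /* keep the queue counters on their own cache line so that reads in
   * may_dequeue_tasks() no longer false-share with hot fields like beconn */
  struct px_queues {
          unsigned int queueslength;
          unsigned int totpend;
  } __attribute__((aligned(64)));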

process_stream() goes down from 3.88% to 3.26% in perf top, with
the next top users being inc/dec (s->served) and be->beconn.
2026-01-28 16:07:27 +00:00
Willy Tarreau
3ca2a83fc0 OPTIM: server: move queueslength in server struct
This field is shared by all threads and must be in the shared area
instead, because where it's placed, it slows down access to other
fields of the struct by false sharing. Just moving this field gives
a steady 2% gain on the request rate (1.93 to 1.96 Mrps) on a 64-core
EPYC.
2026-01-28 16:07:27 +00:00
Willy Tarreau
cb3fd012cd DOC: config: mention some possible TLS versions restrictions for kTLS
It took me one hour of trial and error to figure out that kTLS and splicing
were not used simply because of the TLS version, and that switching to
TLS v1.2 solved the issue. Thus, let's mention it in the doc so that
others find it more easily in the future.

This should be backported to 3.3.
2026-01-28 10:45:21 +01:00
William Lallemand
bbab0ac4d0 BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
tune.ssl-certificate-compression expects 'auto' but not 'on'.

Could be backported if the previous patch is backported.
2026-01-27 16:25:11 +01:00
William Lallemand
6995fe60c3 MINOR: ssl: allow to disable certificate compression
This option allows disabling certificate compression (RFC 8879)
when using OpenSSL >= 3.2.0.

This feature is known to permit some denial of services by causing extra
memory allocations of approximately 22MiB and extra CPU work per
connection with OpenSSL versions affected by CVE-2025-66199.
( https://openssl-library.org/news/vulnerabilities/index.html#CVE-2025-66199 )

Setting this to "off" permits to mitigate the problem.

Must be backported to every stable branches.
2026-01-27 16:10:41 +01:00
Christopher Faulet
0ea601127e BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
In 3.0, it was stated that an applet could not be woken up after it was shut down.
So the corresponding test in the applets I/O handler was removed. However,
it seems it may happen, especially when outgoing data are blocked on the
opposite side. But it is really unexpected because the "release" callback
function was already called and the appctx context was most probably
released.

Strangely, it was never detected by any applet till now. But the Prometheus
exporter was never updated and was still testing the shutdown. But when it
was refactored to use the new applet API in 3.3, the test was removed. And
this introduced a regression leading to a crash because a server object could
be corrupted. Conditions to hit the bug are not really clear however.

So, now, to avoid any issue with all other applets, the test is performed in
task_process_applet(). The I/O handler is no longer called if the applet is
already shut.

The same is performed for applets still relying on the old API.

An amazing thanks to @idl0r for his invaluable help on this issue !

This patch should fix the issue #3244. It should first be backported to 3.3
and then slowly as far as 3.0.
2026-01-27 16:00:23 +01:00
William Lallemand
0ebef67132 MINOR: ssl: display libssl errors on private key loading
Display a more precise error message from the libssl when a private key
can't be loaded correctly.
2026-01-26 14:19:19 +01:00
Remi Tricot-Le Breton
9b1faee4c9 BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
The SSL passphrase callback function was only called when loading
private keys from a dedicated file (separate from the corresponding
certificate) but not when both the certificate and the key were in the
same file.
We can now load them properly, regardless of how they are provided.
A flag had to be added in the 'passphrase_cb_data' structure because in
the 'ssl_sock_load_pem_into_ckch' function, when calling
'PEM_read_bio_PrivateKey' there might be no private key in the PEM file
which would mean that the callback never gets called (and cannot set the
'passphrase_idx' to -1).

This patch can be backported to 3.3.
2026-01-26 14:09:13 +01:00
Remi Tricot-Le Breton
d2ccc19fde BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
Some error paths in 'ssl_sock_passwd_cb' (allocation failures) did not
set the 'passphrase_idx' to -1 which is the way for the caller to know
not to call the callback again so in some memory contention contexts we
could end up calling the callback 'infinitely' (or until memory is
finally available).

This patch must be backported to 3.3.
2026-01-26 14:08:50 +01:00
Egor Shestakov
f4cd1e74ba DOC: reg-tests: update VTest upstream link in the starting guide
The starting guide to regression testing had an outdated VTest link.

Could be backported as far as 2.6.
2026-01-26 13:56:13 +01:00
Willy Tarreau
1a3252e956 MEDIUM: pools: better check for size rounding overflow on registration
Certain object sizes cannot be controlled at declaration time because
the resulting object size may be slightly extended (tag, caller),
aligned and rounded up, or even doubled depending on pool settings
(e.g. if backup is used).

This patch addresses this by enlarging the type in the pool registration
to 64-bit so that no info is lost from the declaration, and extra checks
for overflows can be performed during registration after various rounding
steps. This allows to catch issues such as these ones and to report a
suitable error:

  global
      tune.http.logurilen 2147483647

  frontend
      capture request header name len 2147483647
      http-request capture src len 2147483647
      tcp-request content capture src len 2147483647
2026-01-26 11:54:14 +01:00
Willy Tarreau
e9e4821db5 BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
Since 3.3 with commit 945aa0ea82 ("MINOR: initcalls: Add a new initcall
stage, STG_INIT_2"), stkt_late_init() calls stkt_create_stk_ctr_pool()
but doesn't check its return value, so if the pool creation fails, the
process still starts, which is not correct. This patch adds a check for
the return value to make sure we fail to start in this case. This was
not an issue before 3.3 because the function was called as a post-check
handler which did check for errors in the returned values.
2026-01-26 11:45:49 +01:00
Willy Tarreau
4e7c07736a BUG/MINOR: config: check capture pool creations for failures
A few capture pools can fail in case of too large values for example.
These include the req_uri, capture, and caphdr pools, and may be triggered
with "tune.http.logurilen 2147483647" in the global section, or one of
these in a frontend:

  capture request header name len 2147483647
  http-request capture src len 2147483647
  tcp-request content capture src len 2147483647

These seem to be the only occurrences where create_pool()'s return value
is assigned without being checked, so let's add the proper check for
errors there. This can be backported as a hardening measure though the
risks and impacts are extremely low.
2026-01-26 11:45:49 +01:00
Christopher Faulet
c267d24f57 BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
Condition to report the support for HAVE_TCP_MD5SIG feature was inverted. It
is only an issue for the reg-test related to this feature.

This patch must be backported to 3.3.
2026-01-23 11:40:54 +01:00
Christopher Faulet
a3e9a04435 BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formating the start line
UNUSED blocks were not properly handled when the H1 multiplexer was
formatting the start line of a request or a response. UNUSED was ignored but
not removed from the HTX message. So the mux can loop infinitely on such a
block.

It could be seen as a major issue but in fact it happens only in a very
specific case of the response processing (at least I think so): the server
must send an interim message (a 100-continue for instance) with the final
response. HAProxy must receive both at the same time and the final response
must be intercepted (via an http-response return action for instance). In
that case, the interim message is forwarded and the server's final response
is removed and replaced by a proxy error message.

Now UNUSED htx blocks are properly skipped and removed.

This patch must be backported as far as 3.0.
2026-01-23 11:40:54 +01:00
Christopher Faulet
be68ecc37d BUG/MINOR: promex: Detach promex from the server on error dump its metrics dump
If an error occurs during the dump of a metric for a server, we must take
care to detach promex from the watcher list for this server. It must be
performed explicitly because on error, the applet state (st1) is changed, so
it is not possible to detach it during the applet release stage.

This patch must be backported with b4f64c0ab ("BUG/MEDIUM: promex: server
iteration may rely on stale server") as far as 3.0. On older versions, 2.8
and 2.6, the watcher_detach() line must be changed by "srv_drop(ctx->p[1])".
2026-01-23 11:40:54 +01:00
Aurelien DARRAGON
a66b4881d7 BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
We frequently use lua_pcall() to provide safe alternative functions
(encapsulated helpers) that prevent the process from crashing in case
of Lua error when Lua is executed from an unsafe environment.

However, some of those safe helpers don't handle errors properly. In case
of error, the Lua API will always put an error object on top of the stack
as stated in the documentation. This error object can be used to retrieve
more info about the error. But in some cases when we ignore it, we should
still consume it to prevent the stack from being altered with an extra
object when returning from the helper function.
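The pattern described above is essentially the following (sketch with
illustrative variable names):

  if (lua_pcall(L, nargs, nresults, 0) != LUA_OK)
          lua_pop(L, 1);  /* consume the error object even when ignoring it */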

It should be backported to all stable versions. If the patch doesn't apply
automatically, all that's needed is to check for lua_pcall() in hlua.c
and for other cases than 'LUA_OK', make sure that the error object is popped
from the stack before the function returns.
2026-01-23 11:23:37 +01:00
Aurelien DARRAGON
9e9083d0e2 BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
Since commit 365ee28 ("BUG/MINOR: hlua: prevent LJMP in hlua_traceback()")
we now use lua_pcall() to protect sensitive parts of hlua_traceback()
function, and this to prevent Lua from crashing the process in case of
unexpected Lua error.

This is still relevant, but an error was made, as lua_pcall() was given
a nresults argument of '1' when the _hlua_traceback() internal function
doesn't push any value on the stack. Because of this, it seems the Lua
API still tries to push a garbage object on top of the stack before
returning. This may cause functions that leverage hlua_traceback() in
the middle of stack manipulation to end up having a corrupted stack when
continuing after the hlua_traceback().

There doesn't seem to be many places where this could be a problem, as
this was discovered using the reproducer documented in f535d3e
("BUG/MEDIUM: debug: only dump Lua state when panicking"). Indeed, when
hlua_traceback() was used from the signal handler while the thread was
previously executing Lua, when returning to Lua after the handler the
Lua stack would be corrupted.

To fix the issue, we rely on the fact that the _hlua_traceback()
function doesn't push anything on the stack and returns 0, thus lua_pcall()
is given a 'nresults' argument of 0 to prevent anything from being pushed
after the execution, preserving the original stack state.

This should be backported to all stable versions (because 365ee28 was
backported there)
2026-01-23 11:23:31 +01:00
Willy Tarreau
2eda6e1cbe [RELEASE] Released version 3.4-dev3
Released version 3.4-dev3 with the following main changes :
    - BUILD: ssl: strchr definition changed in C23
    - BUILD: tools: memchr definition changed in C23
    - BUG/MINOR: cfgparse: wrong section name upon error
    - MINOR: cfgparse: Refactor "userlist" parser to print it in -dKall operation
    - BUILD: sockpair: fix build issue on macOS related to variable-length arrays
    - BUG/MINOR: cli/stick-tables: argument to "show table" is optional
    - REGTESTS: ssl: Fix reg-tests curve check
    - CI: github: remove ERR=1 temporarly from the ECH job
    - BUG/MINOR: ech/quic: enable ech configuration also for quic listeners
    - MEDIUM: config: warn if some userlist hashes are too slow
    - MINOR: cfgparse: remove duplicate "force-persist" in common kw list
    - MINOR: sample: also support retrieving fc.timer.handshake without a stream
    - MINOR: tcp-sample: permit retrieving tcp_info from the connection/session stage
    - CLEANUP: connection: Remove outdated note about CO_FL `0x00002000` being unused
    - MINOR: receiver: Dynamically alloc the "members" field of shard_info
    - MINOR: stats: Increase the tgid from 8bits to 16bits
    - BUG/MINOR: stats-file: Use a 16bits variable when loading tgid
    - BUG/MINOR: hlua_fcn: fix broken yield for Patref:add_bulk()
    - BUG/MINOR: hlua_fcn: ensure Patref:add_bulk() is given a table object before using it
    - BUG/MINOR: net_helper: fix IPv6 header length processing
    - MEDIUM: counters: Dynamically allocate per-thread group counters
    - MEDIUM: counters: Remove some extra tests
    - BUG/MEDIUM: threads: Fix binding thread on bind.
    - BUG/MEDIUM: quic: fix ACK ECN frame parsing
    - MEDIUM: counters: mostly revert da813ae4d7
    - BUG/MINOR: http_act: fix deinit performed on uninitialized lf_expr in release_http_map()
    - MINOR: queues: Turn non_empty_tgids into a long array.
    - MINOR: threads: Eliminate all_tgroups_mask.
    - BUG/MEDIUM: queues: Fix arithmetic when feeling non_empty_tgids
    - MEDIUM: thread: Turn the group mask in thread set into a group counter
    - BUG/MINOR: proxy: free persist_rules
    - MEDIUM: stream: refactor switching-rules processing
    - REGTESTS: add test on backend switching rules selection
    - MEDIUM: proxy: do not select a backend if disabled
    - MEDIUM: proxy: implement publish/unpublish backend CLI
    - MINOR: stats: report BE unpublished status
    - MINOR: cfgparse: adapt warnif_cond_conflicts() error output
    - MEDIUM: proxy: force traffic on unpublished/disabled backends
    - MINOR: ssl: Factorize AES GCM data processing
    - MINOR: ssl: Add new aes_cbc_enc/_dec converters
    - REGTESTS: ssl: Add tests for new aes cbc converters
    - MINOR: jwe: Add new jwt_decrypt_secret converter
    - MINOR: jwe: Add new jwt_decrypt_cert converter
    - REGTESTS: jwe: Add jwt_decrypt_secret and jwt_decrypt_cert tests
    - DOC: jwe: Add doc for jwt_decrypt converters
    - MINOR: jwe: Some algorithms not supported by AWS-LC
    - REGTESTS: jwe: Fix tests of algorithms not supported by AWS-LC
    - BUG/MINOR: cfgparse: fix "default" prefix parsing
    - REORG/MINOR: cfgparse: eliminate code duplication by lshift_args()
    - MEDIUM: systemd: implement directory loading
    - CI: github: switch monthly Fedora Rawhide build to OpenSSL
    - SCRIPTS: build-ssl: use QUICTLS_VERSION instead of QUICTLS=yes
    - CI: github: define the right quictls version in each jobs
    - CI: github: fix vtest.yml with "not quictls"
    - MINOR: cli: use srv_drop() when server was created using new_server()
    - BUG/MINOR: server: ensure server is detached from proxy list before being freed
    - BUG/MEDIUM: promex: server iteration may rely on stale server
    - SCRIPTS: build-ssl: clone the quictls branch directly
    - SCRIPTS: build-ssl: fix quictls build for 1.1.1 versions
    - BUG/MEDIUM: log: parsing log-forward options may result in segfault
    - DOC: proxy-protocol: Add SSL client certificate TLV
    - DOC: fix typos in the documentation files
    - DOC: fix mismatched quotes typos around words in the documentation files
    - REORG: cfgparse: move peers parsing to cfgparse-peers.c
    - MINOR: tools: add chunk_escape_string() helper function
    - MINOR: vars: store variable names for runtime access
    - MINOR: vars: implement dump_all_vars() sample fetch
    - DOC: vars: document dump_all_vars() sample fetch
    - BUG/MEDIUM: ssl: fix error path on generate-certificates
    - BUG/MEDIUM: ssl: fix generate-certificates option when SNI greater than 64bytes
    - BUG/MEDIUM: mux-quic: prevent BUG_ON() on aborted uni stream close
    - REGTESTS: ssl: fix generate-certificates w/ LibreSSL
    - SCRIPTS: build: enable symbols in AWS-LC builds
    - BUG/MINOR: proxy: fix deinit crash on defaults with duplicate name
    - BUG/MEDIUM: debug: only dump Lua state when panicking
    - MINOR: proxy: remove proxy_preset_defaults()
    - MINOR: proxy: refactor defaults proxies API
    - MINOR: proxy: simplify defaults proxies list storage
    - MEDIUM: cfgparse: do not store unnamed defaults in name tree
    - MEDIUM: proxy: implement persistent named defaults
2026-01-22 19:02:54 +01:00
Amaury Denoyelle
b52c60d366 MEDIUM: proxy: implement persistent named defaults
This patch changes the handling of named defaults sections. Prior to
this patch, all unreferenced defaults proxies were removed on post
parsing. Now, by default, these sections are kept after postparsing and
only purged on deinit. The objective is to allow reusing them as base
configuration for dynamic backends.

To implement this, the refcount of every still addressable named section is
incremented by one after parsing. This ensures that they won't be
removed even if referencing proxies are removed at runtime. This is done
via the new function proxy_ref_all_defaults().

To ensure defaults instances are still properly removed on deinit, the
inverse operation is performed : refcount is decremented by one on every
defaults sections via proxy_unref_all_defaults().

The original behavior can still be obtained with the new global keyword
tune.defaults.purge. This is useful for users whose configuration contains a
large number of defaults and who are not interested in dynamic backend
creation.
2026-01-22 18:06:42 +01:00
Amaury Denoyelle
116983ad94 MEDIUM: cfgparse: do not store unnamed defaults in name tree
Defaults sections are indexed by their name in the defproxy_by_name tree.
For named sections, there is no duplicate: if two instances have the same
name, the older one is removed from the tree. However, this was not the
case for unnamed defaults, which were all stored unconditionally in
defproxy_by_name.

This commit introduces a new approach for unnamed defaults. Now, these
instances are never inserted in the defproxy_by_name tree. Indeed, this
is not needed as no tree lookup is performed with empty names. This may
optimize slightly config parsing with a huge number of named and unnamed
defaults sections, as the first ones won't fill up the tree needlessly.

However, defproxy_by_name tree is also used to purge unreferenced
defaults instances, both on postparsing and deinit. Thus, a new approach
is needed for unnamed sections cleanup. Now, each time a new defaults is
parsed, if the previous instance is unnamed, it is freed unless if
referenced by a proxy. When config parsing is ended, a similar operation
is performed to ensure the last unnamed defaults section won't stay in
memory. To implement this, last_defproxy static variable is now set to
global. Unnamed sections which cannot be removed due to proxies
referencing proxies will still be removed when such proxies are freed
themselves, at runtime or on deinit.
2026-01-22 17:57:16 +01:00
Amaury Denoyelle
848e0cd052 MINOR: proxy: simplify defaults proxies list storage
Defaults proxy instances are stored in a global name tree. When there is
a name conflict and the older entry cannot simply be discarded because
it is already referenced, the older entry is instead removed from the
name tree and inserted into the orphaned list.

The purpose of the orphaned list was to guarantee that any remaining
unreferenced defaults are purged either on postparsing or deinit.
However, this is in fact completely useless. Indeed, on postparsing,
orphaned entries are always referenced. On deinit, defaults are already
freed along with the cleanup of all frontend/backend instances, thanks
to their refcounting.

This patch streamlines this by removing the orphaned list. Instead, a
defaults section is inserted into a new global defaults_list for its
whole lifetime. This is not strictly necessary but it ensures that
defaults instances can still be accessed easily in the future if needed,
even if not present in the name tree. On deinit, a BUG_ON() is added to
ensure that defaults_list is indeed emptied.

Another benefit of this patch is to simplify the defaults deletion
procedure. The orphaned singly linked list is replaced by a proper
doubly linked list, so a single LIST_DELETE() is now performed. This
will be notably useful as defaults may be removed at runtime in the
future, if backend deletion at runtime is implemented.
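
For illustration, a generic sketch of why a doubly linked list makes
removal a single constant-time operation (generic code, not HAProxy's
actual list.h macros):

  struct list { struct list *n, *p; };

  static void list_delete(struct list *el)
  {
      el->p->n = el->n;     /* predecessor now skips el */
      el->n->p = el->p;     /* successor points back past el */
      el->n = el->p = el;   /* leave el self-linked, safe to free later */
  }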
2026-01-22 17:57:09 +01:00
Amaury Denoyelle
434e979046 MINOR: proxy: refactor defaults proxies API
This patch renames the functions which deal with defaults sections. A
common "defaults_px_" prefix is defined. This serves as a marker to
identify functions which can only be used with proxies having the
defaults capability. New BUG_ON() checks are added to ensure this is
respected.

Also, the older proxy_unref_or_destroy_defaults() is renamed
defaults_px_detach().
2026-01-22 17:55:47 +01:00
Amaury Denoyelle
6c0ea1fe73 MINOR: proxy: remove proxy_preset_defaults()
The purpose of proxy_preset_defaults() has evolved over time.
Originally, it was only used to initialize defaults proxy instances.
Over time, it was extended so that all proxies use it. Its objective is
to initialize settings to common default values.

To remove the confusion, this function is now removed. Its content is
integrated directly into init_new_proxy().
2026-01-22 16:20:25 +01:00
Willy Tarreau
f535d3e031 BUG/MEDIUM: debug: only dump Lua state when panicking
For a long time, we've tried to show the Lua state and backtrace when
dumping threads so as to be able to figure out if (and which) Lua code
was misbehaving, e.g. by performing expensive library calls. Since 3.1
with commit 365ee28510 ("BUG/MINOR: hlua: prevent LJMP in
hlua_traceback()"), it appears that the approach is more fragile (though
that fix addressed a real issue about out-of-memory), and it's possible
to occasionally observe crashes or CPU loops with "show threads" while
running Lua heavily. While users of "show threads" are rare, the
watchdog warnings, which were also enabled in 3.1, also trigger these
issues, which is even more of a concern.

This patch goes the simple way to address this for now: since the purpose
of the Lua backtrace was to help locate Lua call places upon a panic,
let's only call the backtrace on panic but not in other situations. After
a panic we obviously don't care that the Lua stack might be corrupted
since it's never going to be resumed anyway. This may be relaxed in the
future if a solution is found to reliably produce harmless Lua backtraces.

The commit above was backported to all stable branches, so this patch
will be needed everywhere. However, TAINTED_PANIC only appeared in 2.8,
and given the rarity of this bug before 3.1, it's probably not needed
to make any extra effort to go beyond 2.8.

It's easy enough to test a version for being subject to this issue,
by running the following Lua code:

  local function stress(txn)
          for _, backend in pairs(core.backends) do
                  for _, server in pairs(backend.servers) do
                          local stats = server:get_stats()
                  end
          end
  end

  core.register_fetches("stress", stress)

in the following config file:

  global
        stats socket /tmp/haproxy.stat level admin mode 666
        tune.lua.bool-sample-conversion normal
        lua-load-per-thread "stress.lua"

  listen stress
        bind :8001
        mode http
        timeout client 5s
        timeout server 5s
        timeout connect 5s
        http-request return status 200 content-type text/plain lf-string %[lua.stress()]
        server s1 127.0.0.1:8000

and stressing port 8001 with 100+ connections requesting / in loop, then
issuing "show threads" on the CLI using socat in loops as well. Normally
it instantly segfaults (sometimes during the first "show").
2026-01-22 15:47:42 +01:00
Amaury Denoyelle
ac877a25dd BUG/MINOR: proxy: fix deinit crash on defaults with duplicate name
A defaults proxy instance may be moved into the orphaned list when it is
replaced by a newer section with the same name. It is attached via its
<next> member as a singly linked list entry. However, the proxy free
code does not clear this <next> attach point.

This causes a crash on deinit if the orphaned list is not empty. First,
all frontend/backend instances are freed. This triggers the release of
every referenced defaults instance as its refcount reaches zero, but the
orphaned list is not cleaned up. A loop is then conducted on the
orphaned list via proxy_destroy_all_unref_defaults(). This causes a
segfault due to accesses to already freed entries.

To fix this, this patch extends proxy_destroy_defaults(). If the
orphaned list is not empty, a loop is performed to remove any entry for
the defaults instance currently being released. This ensures that the
loop over the orphaned list cannot access already freed entries.

This bug is pretty rare as it requires duplicate names in defaults
sections, and also settings which force defaults referencing, such as
TCP/HTTP rules. It can be reproduced with the following minimal config:

defaults def
    http-request return status 200
frontend fe
    bind :20080
defaults def

Note that in fact looping over the orphaned list is not strictly
necessary, as defaults instances are automatically removed via
refcounting. This will be the purpose of a future patch. However, to
limit the risk of regression on stable releases during backport, this
patch uses the more direct approach for now.

This must be backported up to 3.1.
2026-01-22 15:40:01 +01:00
William Lallemand
b3f7d43248 SCRIPTS: build: enable symbols in AWS-LC builds
The Release target of AWS-LC does not enable debug symbols, which makes
debugging with gdb difficult.

Change the build type to RelWithDebInfo.
2026-01-22 15:25:24 +01:00
William Lallemand
21b192e799 REGTESTS: ssl: fix generate-certificates w/ LibreSSL
Since commit eb5279b15 ("BUG/MEDIUM: ssl: fix generate-certificates
option when SNI greater than 64bytes") the LibreSSL job does not seem to
work anymore.

Indeed, the reg-test was modified to add an SNI longer than 64 bytes,
without regard for the DNS standard, which allows only 63 bytes per
label.

LibreSSL is stricter than the other libraries about that, and checks
that the SNI is compliant with the DNS RFC in the
tlsext_sni_is_valid_hostname() function
https://github.com/libressl/openbsd/blob/OPENBSD_7_8/src/lib/libssl/ssl_tlsext.c#L710

This patch fixes the issue by splitting the SNI into a second label so
that the total still exceeds 64 bytes.

Must be backported with eb5279b15 to every stable branch.
2026-01-21 16:50:16 +01:00
Amaury Denoyelle
c7004be964 BUG/MEDIUM: mux-quic: prevent BUG_ON() on aborted uni stream close
When a QCS instance is fully closed on qcs_close_remote() invocation, it
is moved into purg_list for later cleanup. This reuses the <el_send>
list element, so a BUG_ON() ensures that the QCS is not already present
in send_list.

This code is safe for bidirectional streams, as local channel is only
closed after FIN or RESET_STREAM emission completion, so such QCS won't
be present in the send_list on full closure.

However, things are different for remote uni streams. As such streams do
not have any local channel, qcs_close_remote() will always proceed to
full closure. Most of the time this is fine, but the aforementioned
BUG_ON() could be triggered if emission is required on a remote uni
stream: this only happens after a read was aborted and a STOP_SENDING
frame is prepared.

Fix this by adding an extra operation in qcs_close_remote(): on full
close, STOP_SENDING is cancelled if it was prepared and the QCS instance
is removed from send_list. This is safe as STOP_SENDING is unnecessary
after the remote channel is closed. This operation is performed before
purg_list insertion, which prevents the BUG_ON() crash.

This patch must be backported up to 3.1.
2026-01-21 14:01:12 +01:00
William Lallemand
eb5279b154 BUG/MEDIUM: ssl: fix generate-certificates option when SNI greater than 64bytes
The problem is that the certificate is generated with a CN greater than
64 bytes when the SNI is too long, which is not supposed to be
supported, and ends up with a handshake failure.

The patch fixes the issue by not adding a CN when the SNI is longer than
64 bytes. Indeed, this field is not mandatory anymore and was deprecated
more than 20 years ago. The SAN DNS entry is enough in this case.

Must be backported to every stable branch.
2026-01-21 10:45:22 +01:00
William Lallemand
fbc98ebcda BUG/MEDIUM: ssl: fix error path on generate-certificates
It was reported by Przemyslaw Bromber that using the "generate-certificates"
option combined with AWS-LC would crash HAProxy when a request is done with a
SNI longer than 64 bytes.

The problem is that the certificate is generated with a CN greater than 64
bytes, which results in ssl_sock_do_create_cert() returning NULL. This
NULL value is then passed to SSL_set_SSL_CTX().

With OpenSSL, passing a NULL SSL_CTX does not seem to be an issue as it
would just ignore it.

With AWS-LC, passing a NULL seems to crash the function. This was
reported to upstream AWS-LC and fixed in patch 7487ad1dcd8
https://github.com/aws/aws-lc/pull/2946.

This must be backported to every branch.
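
A minimal sketch of the kind of guard this implies (illustrative only,
not the actual HAProxy fix): never hand a NULL SSL_CTX to
SSL_set_SSL_CTX().

  #include <openssl/ssl.h>

  /* Abort the context switch when certificate generation failed, since some
   * libraries (AWS-LC before the upstream fix) crash on a NULL SSL_CTX. */
  static int switch_generated_ctx(SSL *ssl, SSL_CTX *generated_ctx)
  {
      if (!generated_ctx)
          return -1;
      SSL_set_SSL_CTX(ssl, generated_ctx);
      return 0;
  }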
2026-01-21 10:45:22 +01:00
Hyeonggeun Oh
2d8d2b4247 DOC: vars: document dump_all_vars() sample fetch
Add documentation for the dump_all_vars() sample fetch function in the
configuration manual. This function was introduced in the previous commit
to dump all variables in a given scope with optional prefix filtering.

The documentation includes:
- Function signature and return type
- Description of output format
- Explanation of scope and prefix arguments
- Usage examples for common scenarios

This completes the implementation of GitHub issue #1623.
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
9f766b2056 MINOR: vars: implement dump_all_vars() sample fetch
This patch implements dump_all_vars([scope],[prefix]) sample fetch
function that dumps all variables in a given scope, optionally
filtered by name prefix.

Output format: var1=value1, var2=value2, ...
- String values are quoted and escaped (\", \\, \r, \n, \b, \0)
- All sample types are supported via sample_convert()
- Scope can be: sess, txn, req, res, proc
- Prefix filtering is optional

Example usage:
  http-request return string %[dump_all_vars(txn)]
  http-request return string %[dump_all_vars(txn,user)]

This addresses GitHub issue #1623.
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
95e8483b35 MINOR: vars: store variable names for runtime access
Currently, variable names are only used during parsing and are not
stored at runtime. This makes it impossible to iterate through
variables and retrieve their names.

This patch adds infrastructure to store variable names:
- Add 'name' and 'name_len' fields to var_desc structure
- Add 'name' field to var structure
- Add VDF_NAME_ALLOCATED flag to track memory ownership
- Store names in vars_fill_desc(), var_set(), vars_check_arg(),
  and parse_store()
- Free names in var_clear() and release_store_rule()
- Add ARGT_VAR handling in release_sample_arg() to free the
  allocated name when the flag is set

This prepares the ground for implementing dump_all_vars() in the
next commit.

Tested with:
  - ASAN-enabled build on Linux (TARGET=linux-glibc USE_OPENSSL=1
    ARCH_FLAGS="-g -fsanitize=address")
  - Regression tests: reg-tests/sample_fetches/vars.vtc
  - Regression tests: reg-tests/startup/default_rules.vtc
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
25564b6075 MINOR: tools: add chunk_escape_string() helper function
This function takes a string and appends it to a buffer in a format
compatible with most languages (double-quoted, with special characters
escaped). It handles standard escape sequences like \n, \r, \", \\.

This generic utility is designed to be used for logging or debugging
purposes where arbitrary string data needs to be safely emitted without
breaking the output format. It will be primarily used by the upcoming
dump_all_vars() sample fetch to dump variable contents safely.
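
A hypothetical sketch of the escaping idea (not the actual
chunk_escape_string() implementation), shown against stdio for
simplicity:

  #include <stdio.h>

  /* Emit a double-quoted copy of the input with C-style escapes so arbitrary
   * data can be embedded in logs without breaking the output format. */
  static void escape_string(FILE *out, const char *s)
  {
      fputc('"', out);
      for (; *s; s++) {
          switch (*s) {
          case '"':  fputs("\\\"", out); break;
          case '\\': fputs("\\\\", out); break;
          case '\n': fputs("\\n", out);  break;
          case '\r': fputs("\\r", out);  break;
          default:   fputc(*s, out);     break;
          }
      }
      fputc('"', out);
  }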
2026-01-21 10:44:19 +01:00
Hyeonggeun Oh
7e85391a9e REORG: cfgparse: move peers parsing to cfgparse-peers.c
This patch moves the peers section parsing code from src/cfgparse.c to a
dedicated src/cfgparse-peers.c file. This separation improves code
organization and prepares for further refactoring of the "peers" keyword
registration system.

No functional changes in this patch - the code is moved as-is with only
the necessary adjustments for compilation (adding the SPDX header and
updating the Makefile for the build).

This is the first patch in a series to address issue #3221, which
reports that "peers" section keywords are not displayed with -dKall.
2026-01-20 17:17:37 +01:00
Egor Shestakov
44c491ae6b DOC: fix mismatched quotes typos around words in the documentation files
s/"no'/"no"
s/'private"/"private"
s/"flt'/"flt"

There isn't a definite convention, but people usually prefer to
highlight something important with quotation marks. For example, it's
convenient to find keywords in a text when they are quoted; mismatched
quotes make this harder.

No backport needed.
2026-01-20 08:15:41 +01:00
Egor Shestakov
0c3b212aab DOC: fix typos in the documentation files
This fixes several obvious typos in the documentation:

s/elvoved/evolved
s/performend/performed
s/importnat/important
s/sharedd/shared
s/eveyone/everyone

No backport needed.
2026-01-20 08:15:28 +01:00
Simon Ser
6f5def3cbd DOC: proxy-protocol: Add SSL client certificate TLV
Add the PP2_SUBTYPE_SSL_CLIENT_CERT code point reservation in the
proxy protocol specification. This is useful in cases where the
backend needs to perform mTLS authentication, but the rules for
certificate validation are backend-specific (e.g. database of
allowed certificate hashes).

This is left optional to leave it up to the frontend configuration
to dictate whether to forward raw certificate data.

Support for this new TLV has been added in tlstunnel:
https://codeberg.org/emersion/tlstunnel/pulls/33
2026-01-20 08:11:19 +01:00
Aurelien DARRAGON
9156d5f775 BUG/MEDIUM: log: parsing log-forward options may result in segfault
As reported by GH user @HiggsTeilchen on #3250, the use of "option
dont-parse-log" may result in a segmentation fault when parsing the
configuration. In fact, "option assume-rfc6587-ntf" is also affected.

The reason behind this is that cfg_parse_log_forward() leverages the
cfg_parse_listen_match_option() function to check for generic proxy
options that are relevant in the PR_MODE_SYSLOG context. And while it
is not documented, this function assumes that the currently evaluated
proxy is stored in the global variable 'curproxy', which
cfg_parse_log_forward() doesn't offer.

cfg_parse_listen_match_option() uses curproxy to check whether the
currently evaluated proxy's capabilities are compatible with the option.
So if a proxy with the frontend capability was defined earlier in the
config, parsing would succeed; if curproxy points to a proxy without the
frontend capability (ie: a backend), a warning would be emitted saying
the option is ignored even though it is perfectly valid for the
log-forward proxy; and if no proxy was defined earlier in the config, a
segfault would be triggered.

To fix the issue, we explicitly make the "curproxy" global variable
point to the log-forward proxy being parsed in cfg_parse_log_forward()
before leveraging cfg_parse_listen_match_option() to check for
compatible options.

It must be backported with 834e9af8 ("MINOR: log: add options eval for
log-forward"), which was introduced in 3.2 precisely.
2026-01-19 16:53:00 +01:00
William Lallemand
14e890d85e SCRIPTS: build-ssl: fix quictls build for 1.1.1 versions
The quictls build function no longer used the right make target to build
older versions: "make all" must be used instead of "make build_sw" for
1.1.1.
2026-01-19 14:45:47 +01:00
William Lallemand
818b32addc SCRIPTS: build-ssl: clone the quictls branch directly
Allow cloning the quictls branch directly using the value from
QUICTLS_VERSION.
2026-01-19 14:45:47 +01:00
Aurelien DARRAGON
b4f64c0abf BUG/MEDIUM: promex: server iteration may rely on stale server
When performing a promex dump, even though we hold a reference on the
server across a yield (ie: buffer full), the refcount mechanism only
guarantees that the server pointer will still be valid upon resumption,
not that its content will be consistent. As such, sv->next may be
garbage upon resumption. Instead, we must rely on the watcher mechanism
to iterate over the server list when resumption is involved, like we
already do for the stats and lua handlers.

It must be backported anywhere 071ae8ce3 ("BUG/MEDIUM: stats/server: use
watcher to track server during stats dump") was (up to 2.8 it seems).
2026-01-19 14:24:11 +01:00
Aurelien DARRAGON
d38b918da1 BUG/MINOR: server: ensure server is detached from proxy list before being freed
There remained some cases (on error paths) where a server could be freed
while still attached to the parent proxy server list. In 3.3 this can be
problematic because new_server() automatically adds the server to the
parent proxy list.

The bug is insignificant because it is on error paths during init, and
haproxy often exits right after. But let's fix that to ensure no UAF or
undefined behavior occurs because of it.

This patch depends on ("MINOR: cli: use srv_drop() when server was created using new_server()")

It must be backported in 3.3 with the above mentioned patch.
2026-01-19 14:24:04 +01:00
Aurelien DARRAGON
12dc9325a7 MINOR: cli: use srv_drop() when server was created using new_server()
Now that new_server() is becoming more and more complex, we need to
take care that servers created using new_server() must be released
using the corresponding release function srv_drop() which takes care
of properly de-initing the server and its members.
2026-01-19 14:23:58 +01:00
William Lallemand
eebb448f49 CI: github: fix vtest.yml with "not quictls"
Previous patch 0a4642 ("CI: github: define the right quictls version in
each jobs") didn't use the right syntax for string matching.
2026-01-19 13:22:10 +01:00
William Lallemand
0a464215c5 CI: github: define the right quictls version in each jobs
openssl+quictls is not maintained anymore (quictls/openssl), however we
still need to test openssl+quictls 1.1.1. Other openssl+quictls branches
don't need to be tested.

The quictls hardfork is tested in the 'quictls' job, it uses the
'main' branch in the quictls/quictls repository.
2026-01-19 11:45:57 +01:00
William Lallemand
b8e91f619a SCRIPTS: build-ssl: use QUICTLS_VERSION instead of QUICTLS=yes
Quictls has multiple versions and the CI is not able to test a specific
one because the script uses the default git branch.

This patch allows checking out the right tag or branch.
2026-01-19 11:43:24 +01:00
Ilia Shipitsin
bd8d70413e CI: github: switch monthly Fedora Rawhide build to OpenSSL
QuicTLS builds are already run on push and the openssl+quictls patchset
is not maintained anymore. The patch switches from openssl+quictls to
Fedora's native openssl.

Fedora Rawhide builds are mainly useful to test the latest gcc and clang
versions as well as default options of the distribution.

The patch also contains a workaround to re-enable legacy algorithms
which are still tested on the CI.
2026-01-19 10:56:48 +01:00
William Lallemand
90c5618ed5 MEDIUM: systemd: implement directory loading
Redhat-based systems already use a CFGDIR variable to load configuration
files from a directory; this patch implements the same feature.

It now requires that /etc/haproxy/conf.d exists or the service won't be
able to start.
2026-01-16 09:55:33 +01:00
Egor Shestakov
a3ee35cbfc REORG/MINOR: cfgparse: eliminate code duplication by lshift_args()
There were similar parts of code in the "no" and "default" prefix
keyword handling. This duplication already caused a bug once.

No backport needed.
2026-01-16 09:09:24 +01:00
Egor Shestakov
447d73dc99 BUG/MINOR: cfgparse: fix "default" prefix parsing
Fix the left shift of args when "default" prefix matches. The cause of the
bug was the absence of zeroing of the right element during the shift. The
same bug for "no" prefix was fixed by commit 0f99e3497, but missed for
"default".

The shift of ("default", "option", "dontlog-normal")
    produced ("option", "dontlog-normal", "dontlog-normal")
  instead of ("option", "dontlog-normal", "")

As an example, a valid config line:
    default option dontlog-normal

caused a parse error:
[ALERT]    (32914) : config : parsing [bug-default-prefix.cfg:22] : 'option dontlog-normal' cannot handle unexpected argument 'dontlog-normal'.

The patch should be backported to all stable versions, since the absence
of zeroing was introduced with the "default" keyword.
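
For illustration, a minimal sketch of such a shift (hypothetical helper,
not necessarily matching the actual cfgparse code): the now-unused last
slot must be blanked, otherwise the previous last argument appears
twice.

  #include <stddef.h>

  /* Shift args left by one and blank the freed last slot. */
  static void shift_args_left(const char **args, size_t n)
  {
      for (size_t i = 1; i < n; i++)
          args[i - 1] = args[i];
      args[n - 1] = "";   /* without this, the last word is duplicated */
  }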
2026-01-16 09:09:19 +01:00
Remi Tricot-Le Breton
362ff2628f REGTESTS: jwe: Fix tests of algorithms not supported by AWS-LC
Many tests use the A128KW algorithm which is not supported by AWS-LC but
instead of removing those tests we will just have a hardcoded value set
by default in this case.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
aba18bac71 MINOR: jwe: Some algorithms not supported by AWS-LC
AWS-LC does not have EVP_aes_128_wrap or EVP_aes_192_wrap so the A128KW
and A192KW algorithms will not be supported for JWE token decryption.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
39da1845fc DOC: jwe: Add doc for jwt_decrypt converters
Add doc for jwt_decrypt_secret and jwt_decrypt_cert converters.
2026-01-15 10:56:28 +01:00
Remi Tricot-Le Breton
4b73a3ed29 REGTESTS: jwe: Add jwt_decrypt_secret and jwt_decrypt_cert tests
Test the new jwt_decrypt converters.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
e3a782adb5 MINOR: jwe: Add new jwt_decrypt_cert converter
This converter checks the validity and decrypts the content of a JWE
token that has an asymmetric "alg" algorithm (RSA). In such a case, we
must provide a path to an already loaded certificate and private key
that has the "jwt" option set to "on".
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
416b87d5db MINOR: jwe: Add new jwt_decrypt_secret converter
This converter checks the validity and decrypts the content of a JWE
token that has a symmetric "alg" algorithm. In such a case, we only
require a secret as parameter in order to decrypt the token.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
2b45b7bf4f REGTESTS: ssl: Add tests for new aes cbc converters
This test mimics what was already done for the aes_gcm converters. Some
data is encrypted and directly decrypted and we ensure that the output
was not changed.
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
c431034037 MINOR: ssl: Add new aes_cbc_enc/_dec converters
Those converters allow encrypting or decrypting data with AES in Cipher
Block Chaining mode. They work the same way as the already existing
aes_gcm_enc/_dec ones, apart from the AEAD tag notion which is not
supported in CBC mode.
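
As a reference point, a minimal sketch of AES-128-CBC encryption with
the OpenSSL EVP API, the kind of primitive such converters rely on
(illustrative only, not HAProxy's actual converter code):

  #include <openssl/evp.h>

  /* Encrypt <inlen> bytes from <in> into <out> with AES-128-CBC; returns 0 on
   * success and fills <outlen> (padding may make it larger than <inlen>). */
  static int cbc_encrypt(const unsigned char *key, const unsigned char *iv,
                         const unsigned char *in, int inlen,
                         unsigned char *out, int *outlen)
  {
      EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
      int len = 0, total = 0, ret = -1;

      if (!ctx)
          return -1;
      if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1)
          goto end;
      if (EVP_EncryptUpdate(ctx, out, &len, in, inlen) != 1)
          goto end;
      total = len;
      if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1)
          goto end;
      total += len;
      *outlen = total;
      ret = 0;
  end:
      EVP_CIPHER_CTX_free(ctx);
      return ret;
  }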
2026-01-15 10:56:27 +01:00
Remi Tricot-Le Breton
f0e64de753 MINOR: ssl: Factorize AES GCM data processing
The parameter parsing and processing and the actual crypto part of the
aes_gcm converter are interleaved. This patch puts the crypto parts in a
dedicated function for better reuse in the upcoming JWE processing.
2026-01-15 10:56:27 +01:00
Amaury Denoyelle
6870551a57 MEDIUM: proxy: force traffic on unpublished/disabled backends
A recent patch has introduced a new state for proxies: unpublished
backends. Such backends won't be eligible for traffic, thus
use_backend/default_backend rules which target them won't match and
content switching rules processing will continue.

This patch defines a new frontend keyword 'force-be-switch'. This
keyword allows ignoring the unpublished or disabled state. Thus,
use_backend/default_backend will match even if the target backend is
unpublished or disabled. This is useful to be able to test a backend
instance before exposing it outside.

This new keyword is converted into a persist rule of new type
PERSIST_TYPE_BE_SWITCH, stored in persist_rules list proxy member. This
is the only persist rule applicable to frontend side. Prior to this
commit, pure frontend proxies persist_rules list were always empty.

This new feature requires adjustments in process_switching_rules(). Now,
when a use_backend/default_backend rule matches a non-eligible backend,
frontend persist_rules are inspected to detect whether a force-be-switch
rule is present, so that the backend may still be selected.
2026-01-15 09:08:19 +01:00
Amaury Denoyelle
16f035d555 MINOR: cfgparse: adapt warnif_cond_conflicts() error output
Utility function warnif_cond_conflicts() is used when parsing an ACL.
Previously, the function directly called ha_warning() to report an
error. Change the function so that it now takes the error message as an
argument. The caller can then output it as desired.

This change is necessary to use the function when parsing a keyword
registered as cfg_kw_list. The next patch will reuse it.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
82907d5621 MINOR: stats: report BE unpublished status
A previous patch defines a new proxy status: unpublished backends. This
patch extends this by changing the proxy status reported in stats. If
unpublished is set, an extra "(UNPUB)" is added to the field.

The HTML stats page is also slightly updated. If a backend is up but
unpublished, its status will be reported in orange.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
797ec6ede5 MEDIUM: proxy: implement publish/unpublish backend CLI
Define a new set of CLI commands: publish/unpublish backend <be>. The
objective is to be able to change the status of a backend to
unpublished. Such a backend is considered ineligible for traffic: this
allows skipping use_backend rules which target it.

Note that contrary to disabled/stopped proxies, an unpublished backend
still has server checks running on it.

Internally, a new proxy flag PR_FL_BE_UNPUBLISHED is defined. The CLI
command handlers "publish backend" and "unpublish backend" are executed
under thread isolation. This guarantees that the flag can safely be set
or removed in the CLI handlers, and read during content-switching
processing.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
21fb0a3f58 MEDIUM: proxy: do not select a backend if disabled
A proxy can be marked as disabled using the keyword with the same name.
The doc mentions that it won't process any traffic. However, this is not
really the case for backends as they may still be selected via switching
rules during stream processing.

In fact, currently access to disabled backends will be conducted up to
assign_server(). However, no eligible server is found at this stage,
resulting in a connection closure or an HTTP 503, which is expected. So
in the end, servers in disabled backends won't receive any traffic. But
this is only because post-parsing steps are not performed on such
backends. Thus, this can be considered as functional but only via
side-effects.

This patch clarifies the handling of disabled backends, so that they are
never selected via switching rules. Now, process_switching_rules() will
ignore disabled backends and continue rules evaluation.

As this is a behavior change, this patch is labelled as medium. The
documentation manual for use_backend is updated accordingly.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2d26d353ce REGTESTS: add test on backend switching rules selection
Create a new test to ensure that switching rules selection is fine.
Currently, this checks that dynamic backend switching works as expected.
If a matching rule resolves to a nonexistent backend, the default
backend is used instead.

This regtest should be useful as switching-rules will be extended in a
future set of patches to add new abilities on backends, linked to
dynamic backend support.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
12975c5c37 MEDIUM: stream: refactor switching-rules processing
This commit rewrites process_switching_rules() function. The objective
is to simplify backend selection so that a single unified
stream_set_backend() call is kept, both for regular and default backends
case.

This patch will be useful to add new capabilities on backends, in the
context of dynamic backend support implementation.
2026-01-15 09:08:18 +01:00
Amaury Denoyelle
2f6aab9211 BUG/MINOR: proxy: free persist_rules
force-persist proxy keyword is converted into a persist_rule, stored in
proxy persist_rules list member. Each new rule is dynamically allocated
during parsing.

This commit fixes a memory leak on deinit due to a missing free of the
persist_rules list entries. This is done by modifying deinit_proxy():
each rule in the list is freed, along with its associated ACL condition.

This can be backported to every stable version.
2026-01-15 09:08:18 +01:00
Olivier Houchard
a209c35f30 MEDIUM: thread: Turn the group mask in thread set into a group counter
If we want to be able to have more than 64 thread groups, we can no
longer store thread group masks in a long.
One remaining place where this is done is in struct thread_set. However,
it is not really used as a mask anywhere; all we want is a thread group
counter, so convert that mask to a counter.
2026-01-15 05:24:53 +01:00
Olivier Houchard
6249698840 BUG/MEDIUM: queues: Fix arithmetic when feeling non_empty_tgids
Fix the arithmetic when pre-filling non_empty_tgids when we still have
more than 32/64 thread groups left: to get the right index, we of course
have to divide the number of thread groups by the number of bits in a
long.
This bug was introduced by commit
7e1fed4b7a, but hopefully was not hit
because it requires having at least as many thread groups as there are
bits in a long, which is impossible on 64-bit machines, as MAX_TGROUPS
is still 32.
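
For illustration, a sketch of the index arithmetic involved
(hypothetical helper, not the actual queue code):

  #include <limits.h>

  #define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

  /* Mark thread group <grp> in a bitmap made of longs: the array index is
   * the group number divided by the number of bits in a long. */
  static void mark_tgroup(unsigned long *non_empty_tgids, unsigned int grp)
  {
      non_empty_tgids[grp / BITS_PER_LONG] |= 1UL << (grp % BITS_PER_LONG);
  }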
2026-01-15 04:28:04 +01:00
Olivier Houchard
1397982599 MINOR: threads: Eliminate all_tgroups_mask.
Now that it is unused, eliminate all_tgroups_mask, as we can't use
64-bit masks to represent thread groups if we want to be able to have
more than 64 thread groups.
2026-01-15 03:46:57 +01:00
Olivier Houchard
7e1fed4b7a MINOR: queues: Turn non_empty_tgids into a long array.
In order to be able to have more than 64 thread groups, turn
non_empty_tgids into a long array, so that we have enough bits to
represent every thread group, and manipulate it with the ha_bit_*
functions.
2026-01-15 03:46:57 +01:00
Aurelien DARRAGON
2ec387cdc2 BUG/MINOR: http_act: fix deinit performed on uninitialized lf_expr in release_http_map()
As reported by GH user @Lzq-001 on issue #3245, the config below would
cause haproxy to SEGFAULT after having reported an error:

  frontend 0000000
        http-request set-map %[hdr(0000)0_

The root cause is simple: in parse_http_set_map(), we define the release
function (which is responsible for clearing the lf_expr expressions used
by the action) prior to initializing the expressions, while the release
function assumes the expressions are always initialized.

For all similar actions, we already perform the init prior to setting
the related release function, but this was not the case for
parse_http_set_map(). We fix the bug by initializing the expressions
earlier.

Thanks to @Lzq-001 for having reported the issue and provided a simple
reproducer.

It should be backported to all stable versions. Note that for versions
prior to 3.0, lf_expr_init() should be replaced by LIST_INIT(), see
6810c41 ("MEDIUM: tree-wide: add logformat expressions wrapper")
2026-01-14 20:05:39 +01:00
Olivier Houchard
7f4b053b26 MEDIUM: counters: mostly revert da813ae4d7
Contrary to what was previously believed, there are corner cases where
the counters may not be allocated, and we may want to make them optional
at a later date, so we have to check that those counters are there.
However, just checking that shared.tg is non-NULL is enough; we can then
assume that shared.tg[tgid - 1] has been properly allocated too.
Also modify the various COUNTER_SHARED_* macros to make sure they check
for that too.
2026-01-14 12:39:14 +01:00
Amaury Denoyelle
7aa839296d BUG/MEDIUM: quic: fix ACK ECN frame parsing
ACK frames are either of type 0x02 or 0x03. The latter is an indication
that it contains extra ECN related fields. In haproxy QUIC stack, this
is considered as a different frame type, set to QUIC_FT_ACK_ECN, with
its own set of builder/parser functions.

This patch fixes ACK ECN parsing function. Indeed, the latter suffered
from two issues. First, 'first ACK range' and 'ACK ranges' were
inverted. Then, the three remaining ECN fields were simply ignored by
the parsing function.

This issue can cause desynchronization in the frame parsing code, which
may have various consequences. Most of the time, the connection will be
aborted by haproxy due to invalid frame content being read.

Note that this issue was not detected earlier as most clients do not
enable ECN support if the peer is not able to emit an ACK ECN frame
first, which haproxy currently never sends. Nevertheless, this is not
the case for every client implementation, thus correct ACK ECN parsing
is mandatory for proper QUIC stack support.

Fix this by adjusting quic_parse_ack_ecn_frame() function. The remaining
ECN fields are parsed to ensure correct packet parsing. Currently, they
are not used by the congestion controller.

This must be backported up to 2.6.
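
For reference, a sketch of the field order mandated by RFC 9000 for this
frame, which any parser has to follow (layout only, not HAProxy's actual
decoder):

  /* ACK frame, type 0x03 (ACK with ECN counts), all fields varint-encoded:
   *
   *   Largest Acknowledged
   *   ACK Delay
   *   ACK Range Count
   *   First ACK Range
   *   ACK Range (Gap, ACK Range Length), repeated ACK Range Count times
   *   ECT(0) Count    \
   *   ECT(1) Count     }  present only for type 0x03
   *   ECN-CE Count    /
   */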
2026-01-13 15:08:02 +01:00
Olivier Houchard
82196eb74e BUG/MEDIUM: threads: Fix binding thread on bind.
The code to parse the "thread" keyword on bind lines was changed to
check if the thread numbers were correct against the value provided with
max-threads-per-group, if any were provided, however, at the time those
thread keywords have been set, it may not yet have been set, and that
breaks the feature, so revert to check against MAX_THREADS_PER_GROUP instead,
it should have no major impact.
2026-01-13 11:45:46 +01:00
Olivier Houchard
da813ae4d7 MEDIUM: counters: Remove some extra tests
Before updating counters, a few tests are made to check if the counters
exist. But those counters should always exist at this point, so just
remove them.
This commit should have no impact, but can easily be reverted with no
functional impact if various crashes appear.
2026-01-13 11:12:34 +01:00
Olivier Houchard
5495c88441 MEDIUM: counters: Dynamically allocate per-thread group counters
Instead of statically allocating the per-thread group counters,
based on the max number of thread groups available, allocate
them dynamically, based on the number of thread groups actually
used. That way we can increase the maximum number of thread
groups without using an unreasonable amount of memory.
2026-01-13 11:12:34 +01:00
Willy Tarreau
37057feb80 BUG/MINOR: net_helper: fix IPv6 header length processing
The IPv6 header contains a payload length that excludes the 40 bytes of
the IPv6 packet header, which differs from IPv4's total length, which
includes it. As a result, the parser was wrong and would only see the IP
part and not the TCP one unless sufficient options were present to cover
it.

This issue came in 3.4-dev2 with recent commit e88e03a6e4 ("MINOR:
net_helper: add ip.fp() to build a simplified fingerprint of a SYN"),
so no backport is needed.
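
For illustration, a small sketch of the difference using the Linux
socket headers (hypothetical helpers, not the actual net_helper code):

  #include <arpa/inet.h>
  #include <netinet/ip.h>
  #include <netinet/ip6.h>

  /* IPv4: tot_len already includes the header. */
  static unsigned int ipv4_total_len(const struct iphdr *ip4)
  {
      return ntohs(ip4->tot_len);
  }

  /* IPv6: payload length excludes the fixed 40-byte header, so add it back. */
  static unsigned int ipv6_total_len(const struct ip6_hdr *ip6)
  {
      return 40 + ntohs(ip6->ip6_plen);
  }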
2026-01-13 08:42:36 +01:00
Aurelien DARRAGON
fcd4d4a7aa BUG/MINOR: hlua_fcn: ensure Patref:add_bulk() is given a table object before using it
As reported by GH user @kanashimia in GH #3241, providing anything other
than a table to the Patref:add_bulk() method could cause a segfault
because we were calling lua_next() on the Lua object without ensuring it
actually is a table.

Let's add the missing lua_istable() check on the stack object before
calling lua_next() on it.

It should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()")
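
A minimal sketch of the guard with the Lua C API (illustrative, not the
actual hlua_fcn code; <idx> is assumed to be an absolute stack index):

  #include <lauxlib.h>
  #include <lua.h>

  /* Refuse anything that is not a table before iterating it with lua_next(). */
  static int iterate_table(lua_State *L, int idx)
  {
      if (!lua_istable(L, idx))
          return luaL_error(L, "table expected");

      lua_pushnil(L);                  /* first key */
      while (lua_next(L, idx) != 0) {
          /* key at -2, value at -1: process the pair here */
          lua_pop(L, 1);               /* drop value, keep key for next round */
      }
      return 0;
  }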
2026-01-12 17:30:54 +01:00
Aurelien DARRAGON
04545cb2b7 BUG/MINOR: hlua_fcn: fix broken yield for Patref:add_bulk()
In GH #3241, GH user @kanashimia reported that the Patref:add_bulk()
method would raise a Lua exception when called with more than 101
elements at once.

As identified by @kanashimia, there was an error in the way the
add_bulk() method was forced to yield after exactly 101 elements.
The yield is there to ensure Lua doesn't eat too many resources at
once and doesn't impact haproxy's core responsiveness, but the check
for the yield was misplaced, resulting in improper stack content upon
resume.

Thanks to user @kanashimia who even provided a reproducer which helped
a lot to troubleshoot the issue.

This fix should be backported up to 3.2 with 884dc62 ("MINOR: hlua_fcn:
add Patref:add_bulk()") where the bug was introduced.
2026-01-12 17:30:52 +01:00
Olivier Houchard
b1cfeeef21 BUG/MINOR: stats-file: Use a 16bits variable when loading tgid
Now that the tgid stored in the stats file has been increased to 16bits
by commit 022cb3ab7f, don't forget to
increase the variable size when reading it from the file, too.
This should have no impact given the maximum thread group limit is still
32.
2026-01-12 09:48:54 +01:00
Olivier Houchard
022cb3ab7f MINOR: stats: Increase the tgid from 8bits to 16bits
Increase the size of the tgid stored in the stats file from 8 bits to
16 bits, so that we can have more than 256 thread groups. 65536 should
be enough for some time.

This bumps the stats file minor version, as the structure changes.
2026-01-12 09:39:52 +01:00
Olivier Houchard
c0f64fc36a MINOR: receiver: Dynamically alloc the "members" field of shard_info
Instead of always allocating MAX_TGROUPS members, allocate them
dynamically, using the number of thread groups we'll use, so that
increasing MAX_TGROUPS will not have a huge impact on the structure
size.
2026-01-12 09:32:27 +01:00
Tim Duesterhus
96faf71f87 CLEANUP: connection: Remove outdated note about CO_FL 0x00002000 being unused
This flag is used as of commit dcce936912
("MINOR: connections: Add a new CO_FL_SSL_NO_CACHED_INFO flag"). This patch
should be backported to 3.3. Apparently dcce936912 has been backported
to 3.2 and 3.1 already, with that change already applied, so no need for a
backport there.
2026-01-12 03:22:15 +01:00
Willy Tarreau
2560cce7c5 MINOR: tcp-sample: permit retrieving tcp_info from the connection/session stage
The fc_xxx info that are retrieved over tcp_info could currently not
be accessed before a stream is created due to a test that verified the
existence of a stream. The rationale here was that the function works
both for frontend and backend. Let's always retrieve these info from
the session for the frontend case so that it now becomes possible to
set variables at connection/session time. The doc did not mention this
limitation so this could almost be considered as a bug.
2026-01-11 15:48:20 +01:00
Willy Tarreau
880bbeeda4 MINOR: sample: also support retrieving fc.timer.handshake without a stream
Some timers, like the handshake timer, are stored in the session and are
only copied to the logs struct when a stream is created. But this means
we can't measure it without a stream, nor store it once for all in a
variable at session creation time. Let's extend the sample fetch function
to retrieve it from the session when no stream is present. The doc did not
mention this limitation so this could almost be considered as a bug.
2026-01-11 15:48:19 +01:00
Amaury Denoyelle
875bbaa7fc MINOR: cfgparse: remove duplicate "force-persist" in common kw list
"force-persist" proxy keyword is listed twice in common_kw_list. This
patch removes the duplicated occurence.

This could be backported up to 2.4.
2026-01-09 16:45:54 +01:00
Willy Tarreau
46088b7ad0 MEDIUM: config: warn if some userlist hashes are too slow
It was reported in GH #2956 and more recently in GH #3235 that some
hashes are way too slow. The former triggers watchdog warnings during
checks, the latter sees config parsing take 20 seconds. This is always
due to the use of hash algorithms that are not suitable for low-latency
environments like the web. They might be fine for a local auth though.
The difficulty, as explained by Philipp Hossner, is that developers are
not aware of this cost and adopt such algorithms without suspecting any
side effect.

The proposal here is to measure the crypt() call time and emit a warning
if it takes more than 10ms (which is already extreme). This was tested
by Philipp and confirmed to catch his case.

This is marked medium as it might start to report warnings on config
suffering from this problem without ever detecting it till now.
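
A minimal sketch of the timing idea (illustrative, with an assumed 10ms
threshold and hypothetical names, not the actual cfgparse code):

  #include <crypt.h>
  #include <stdio.h>
  #include <time.h>

  /* Time one crypt() call and warn if it exceeds 10ms; the hash string
   * selects the algorithm and cost, just like a userlist entry would. */
  static void check_hash_cost(const char *pwd, const char *hash)
  {
      struct timespec t0, t1;
      long ms;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      crypt(pwd, hash);
      clock_gettime(CLOCK_MONOTONIC, &t1);

      ms = (t1.tv_sec - t0.tv_sec) * 1000 + (t1.tv_nsec - t0.tv_nsec) / 1000000;
      if (ms > 10)
          fprintf(stderr, "warning: this hash takes %ldms per check\n", ms);
  }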
2026-01-09 14:56:18 +01:00
akarl10
a203ce6854 BUG/MINOR: ech/quic: enable ech configuration also for quic listeners
Patch dba4fd24 ("MEDIUM: ssl/ech: config and load keys") introduced
ECH configuration for bind lines, but the QUIC configuration parser
still suffers from not using the same code as the TCP/TLS one, so the
init for QUIC was missed.

Must be backported in 3.3.
2026-01-08 17:34:28 +01:00
William Lallemand
6e1718ce4b CI: github: remove ERR=1 temporarly from the ECH job
The ECH job still fails to compile since the OpenSSL 4.0 deprecated
functions were not removed yet. Let's remove ERR=1 temporarily.

We do know that there's a regression in OpenSSL 4.0 with these
reg-tests though:

Error: #    top  TEST reg-tests/ssl/set_ssl_crlfile.vtc FAILED (0.219) exit=2
Error: #    top  TEST reg-tests/ssl/set_ssl_cafile.vtc FAILED (0.236) exit=2
Error: #    top  TEST reg-tests/quic/set_ssl_crlfile.vtc FAILED (0.196) exit=2
2026-01-08 17:32:27 +01:00
Christian Ruppert
dbe52cc23e REGTESTS: ssl: Fix reg-tests curve check
OpenSSL changed the output from "Server Temp Key" in prior versions to
"Peer Temp Key" in recent ones.
a39dc27c25
It looks like it affects OpenSSL >=3.5.0
This broke the reg-test for e.g. Debian 13 builds, using OpenSSL 3.5.1

Fixes bug #3238

Could be backported to every branch.

Signed-off-by: Christian Ruppert <idl0r@qasl.de>
2026-01-08 16:14:54 +01:00
William Lallemand
623aa725a2 BUG/MINOR: cli/stick-tables: argument to "show table" is optional
Discussed in issue #3187, the CLI help is confusing for the "show table"
command as it seems that the argument is mandatory.

This patch adds the arguments between square brackets to remove the
confusion.
2026-01-08 11:54:01 +01:00
Willy Tarreau
dbba442740 BUILD: sockpair: fix build issue on macOS related to variable-length arrays
In GH issue #3226, Sergey Fedorov (@barracuda156) reported that since
commit 10c14a1ed0 ("MINOR: proto_sockpair: send_fd_uxst: init iobuf,
cmsghdr, cmsgbuf to zeros"), macOS 10.6.8 with gcc 14.3.0 doesn't build
anymore:

  src/proto_sockpair.c: In function 'send_fd_uxst':
  src/proto_sockpair.c:246:49: error: variable-sized object may not be initialized except with an empty initializer
    246 |         char cmsgbuf[CMSG_SPACE(sizeof(int))] = {0};
        |                                                 ^
  src/proto_sockpair.c:247:45: error: variable-sized object may not be initialized except with an empty initializer
    247 |         char buf[CMSG_SPACE(sizeof(int))] = {0};
        |                                             ^

Upon investigation, it appears that the CMSG_SPACE() macro on this OS
looks too complex for gcc to consider it as a constant, so it takes
these buffers for variable-length arrays and cannot initialize them.

Let's move to a simple memset() instead, which Sergey confirmed fixes
the problem.

This needs to be backported as far as 3.1. Thanks to Sergey for the
report, the bisect and testing the fix.
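
A minimal sketch of the portable form (illustrative): when CMSG_SPACE()
is not a compile-time constant the buffer is a variable-length array,
which cannot take an initializer, so it is zeroed with memset() instead.

  #include <string.h>
  #include <sys/socket.h>

  void zero_cmsg_buf(void)
  {
      char cmsgbuf[CMSG_SPACE(sizeof(int))];

      memset(cmsgbuf, 0, sizeof(cmsgbuf));   /* instead of "= {0}" */
      (void)cmsgbuf;
  }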
2026-01-08 09:26:22 +01:00
Hyeonggeun Oh
c17ed69bf3 MINOR: cfgparse: Refactor "userlist" parser to print it in -dKall operation
This patch covers issue https://github.com/haproxy/haproxy/issues/3221.

The parser for the "userlist" section did not use the standard keyword
registration mechanism. Instead, it relied on a series of strcmp()
comparisons to identify keywords such as "group" and "user".

This had two main drawbacks:
1. The keywords were not discoverable by the "-dKall" dump option,
   making it difficult for users to see all available keywords for the
   section.
2. The implementation was inconsistent with the parsers for other
   sections, which have been progressively refactored to use the
   standard cfg_kw_list infrastructure.

This patch refactors the userlist parser to align it with the project's
standard conventions.

The parsing logic for the "group" and "user" keywords has been extracted
from the if/else block in cfg_parse_users() into two new dedicated
functions:
- cfg_parse_users_group()
- cfg_parse_users_user()

These two keywords are now registered via a dedicated cfg_kw_list,
making them visible to the rest of the HAProxy ecosystem, including the
-dKall dump.
2026-01-07 18:25:09 +01:00
William Lallemand
91cff75908 BUG/MINOR: cfgparse: wrong section name upon error
When an unknown keyword was used in the "userlist" section, the error
mentioned the "users" section instead of "userlist".

Could be backported to every branch.
2026-01-07 18:13:12 +01:00
William Lallemand
4aff6d1c25 BUILD: tools: memchr definition changed in C23
New gcc and clang versions from Fedora Rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a char *.

src/tools.c: In function ‘fgets_from_mem’:
src/tools.c:7200:17: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 7200 |         new_pos = memchr(*position, '\n', size);
      |                 ^

Strangely, -Wdiscarded-qualifiers does not seem to catch all the
memchr.

Should fix issue #3228.

This could be backported in previous versions.
2026-01-07 14:51:26 +01:00
William Lallemand
5322bd3785 BUILD: ssl: strchr definition changed in C23
New gcc and clang versions from Fedora Rawhide seem to use the C23
standard by default. This version changes the definition of some
string.h functions, which now return a const char * instead of a char *.

src/ssl_sock.c: In function ‘SSL_CTX_keylog’:
src/ssl_sock.c:4475:17: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 4475 |         lastarg = strrchr(line, ' ');

Strangely, -Wdiscarded-qualifiers does not seem to catch all the
strrchr.

Should fix issue #3228.

This could be backported in previous versions.
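
A minimal sketch of the kind of adjustment this requires (illustrative
only): under C23 the generic string.h lookup functions return a
const char * when given a const argument, so the receiving variable must
be const-qualified as well.

  #include <string.h>

  /* With C23, strrchr(const char *, int) yields a const char *, so the
   * result must be stored in a const-qualified pointer (it used to be
   * stored in a plain char *). */
  static const char *find_last_space(const char *line)
  {
      const char *lastarg = strrchr(line, ' ');

      return lastarg;
  }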
2026-01-07 14:51:26 +01:00
211 changed files with 12415 additions and 4189 deletions

.github/matrix.py

@@ -222,7 +222,7 @@ def main(ref_name):
         "OPENSSL_VERSION=1.0.2u",
         "OPENSSL_VERSION=1.1.1s",
         "OPENSSL_VERSION=3.5.1",
-        "QUICTLS=yes",
+        "QUICTLS_VERSION=OpenSSL_1_1_1w-quic1",
         "WOLFSSL_VERSION=5.7.0",
         "AWS_LC_VERSION=1.39.0",
         # "BORINGSSL=yes",
@@ -261,7 +261,7 @@ def main(ref_name):
         except:
             pass
-        if ssl == "BORINGSSL=yes" or ssl == "QUICTLS=yes" or "LIBRESSL" in ssl or "WOLFSSL" in ssl or "AWS_LC" in ssl or openssl_supports_quic:
+        if ssl == "BORINGSSL=yes" or "QUICTLS" in ssl or "LIBRESSL" in ssl or "WOLFSSL" in ssl or "AWS_LC" in ssl or openssl_supports_quic:
            flags.append("USE_QUIC=1")
         matrix.append(
@@ -275,11 +275,8 @@ def main(ref_name):
         }
     )
-    # macOS
-    if "haproxy-" in ref_name:
-        os = "macos-13"    # stable branch
-    else:
+    # macOS on dev branches
+    if "haproxy-" not in ref_name:
         os = "macos-26"    # development branch
     TARGET = "osx"

@@ -11,9 +11,6 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
-     - name: Compile admin/halog/halog
-       run: |
-         make admin/halog/halog
      - name: Compile dev/flags/flags
        run: |
          make dev/flags/flags

@@ -27,7 +27,7 @@ jobs:
          libsystemd-dev
      - name: Install QUICTLS
        run: |
-         QUICTLS=yes scripts/build-ssl.sh
+         QUICTLS_VERSION=OpenSSL_1_1_1w-quic1 scripts/build-ssl.sh
      - name: Download Coverity build tool
        run: |
          wget -c -N https://scan.coverity.com/download/linux64 --post-data "token=${{ secrets.COVERITY_SCAN_TOKEN }}&project=Haproxy" -O coverity_tool.tar.gz

@@ -104,7 +104,7 @@ jobs:
      - name: install quictls
        run: |
-         QUICTLS_EXTRA_ARGS="--cross-compile-prefix=${{ matrix.platform.arch }}- ${{ matrix.platform.target }}" QUICTLS=yes scripts/build-ssl.sh
+         QUICTLS_EXTRA_ARGS="--cross-compile-prefix=${{ matrix.platform.arch }}- ${{ matrix.platform.target }}" QUICTLS_VERSION=OpenSSL_1_1_1w-quic1 scripts/build-ssl.sh
      - name: Build
        run: |

@@ -1,4 +1,4 @@
-name: Fedora/Rawhide/QuicTLS
+name: Fedora/Rawhide/OpenSSL
 on:
   schedule:
@@ -13,10 +13,10 @@
    strategy:
      matrix:
        platform: [
-         { name: x64, cc: gcc, QUICTLS_EXTRA_ARGS: "", ADDLIB_ATOMIC: "", ARCH_FLAGS: "" },
-         { name: x64, cc: clang, QUICTLS_EXTRA_ARGS: "", ADDLIB_ATOMIC: "", ARCH_FLAGS: "" },
-         { name: x86, cc: gcc, QUICTLS_EXTRA_ARGS: "-m32 linux-generic32", ADDLIB_ATOMIC: "-latomic", ARCH_FLAGS: "-m32" },
-         { name: x86, cc: clang, QUICTLS_EXTRA_ARGS: "-m32 linux-generic32", ADDLIB_ATOMIC: "-latomic", ARCH_FLAGS: "-m32" }
+         { name: x64, cc: gcc, ADDLIB_ATOMIC: "", ARCH_FLAGS: "" },
+         { name: x64, cc: clang, ADDLIB_ATOMIC: "", ARCH_FLAGS: "" },
+         { name: x86, cc: gcc, ADDLIB_ATOMIC: "-latomic", ARCH_FLAGS: "-m32" },
+         { name: x86, cc: clang, ADDLIB_ATOMIC: "-latomic", ARCH_FLAGS: "-m32" }
        ]
      fail-fast: false
      name: ${{ matrix.platform.cc }}.${{ matrix.platform.name }}
@@ -28,11 +28,9 @@
      - uses: actions/checkout@v5
      - name: Install dependencies
        run: |
-         dnf -y install awk diffutils git pcre-devel zlib-devel pcre2-devel 'perl(FindBin)' perl-IPC-Cmd 'perl(File::Copy)' 'perl(File::Compare)' lua-devel socat findutils systemd-devel clang
+         dnf -y install awk diffutils git pcre-devel zlib-devel pcre2-devel 'perl(FindBin)' perl-IPC-Cmd 'perl(File::Copy)' 'perl(File::Compare)' lua-devel socat findutils systemd-devel clang openssl-devel.x86_64
-         dnf -y install 'perl(FindBin)' 'perl(File::Compare)' perl-IPC-Cmd 'perl(File::Copy)' glibc-devel.i686 lua-devel.i686 lua-devel.x86_64 systemd-devel.i686 zlib-ng-compat-devel.i686 pcre-devel.i686 libatomic.i686
+         dnf -y install 'perl(FindBin)' 'perl(File::Compare)' perl-IPC-Cmd 'perl(File::Copy)' glibc-devel.i686 lua-devel.i686 lua-devel.x86_64 systemd-devel.i686 zlib-ng-compat-devel.i686 pcre-devel.i686 libatomic.i686 openssl-devel.i686
      - uses: ./.github/actions/setup-vtest
-     - name: Install QuicTLS
-       run: QUICTLS=yes QUICTLS_EXTRA_ARGS="${{ matrix.platform.QUICTLS_EXTRA_ARGS }}" scripts/build-ssl.sh
      - name: Build contrib tools
        run: |
          make admin/halog/halog
@@ -41,7 +39,7 @@
          make dev/hpack/decode dev/hpack/gen-enc dev/hpack/gen-rht
      - name: Compile HAProxy with ${{ matrix.platform.cc }}
        run: |
-         make -j3 CC=${{ matrix.platform.cc }} V=1 ERR=1 TARGET=linux-glibc DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" USE_OPENSSL=1 USE_QUIC=1 USE_ZLIB=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_LUA=1 ADDLIB="${{ matrix.platform.ADDLIB_ATOMIC }} -Wl,-rpath,${HOME}/opt/lib" SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include ARCH_FLAGS="${{ matrix.platform.ARCH_FLAGS }}"
+         make -j3 CC=${{ matrix.platform.cc }} V=1 ERR=1 TARGET=linux-glibc DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" USE_PROMEX=1 USE_OPENSSL=1 USE_QUIC=1 USE_ZLIB=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_LUA=1 ADDLIB="${{ matrix.platform.ADDLIB_ATOMIC }}" ARCH_FLAGS="${{ matrix.platform.ARCH_FLAGS }}"
          make install
      - name: Show HAProxy version
        id: show-version
@@ -51,6 +49,13 @@
          echo "::endgroup::"
          haproxy -vv
          echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
+     #
+     # TODO: review this workaround later
+     - name: relax crypto policies
+       run: |
+         dnf -y install crypto-policies-scripts
+         echo LEGACY > /etc/crypto-policies/config
+         update-crypto-policies
      - name: Run VTest for HAProxy ${{ steps.show-version.outputs.version }}
        id: vtest
        run: |

View file

@ -28,7 +28,7 @@ jobs:
run: env SSL_LIB=${HOME}/opt/ scripts/build-curl.sh run: env SSL_LIB=${HOME}/opt/ scripts/build-curl.sh
- name: Compile HAProxy - name: Compile HAProxy
run: | run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \ make -j$(nproc) CC=gcc TARGET=linux-glibc \
USE_QUIC=1 USE_OPENSSL=1 USE_ECH=1 \ USE_QUIC=1 USE_OPENSSL=1 USE_ECH=1 \
SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \ SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include \
DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \ DEBUG="-DDEBUG_POOL_INTEGRITY -DDEBUG_UNIT" \

View file

@ -11,7 +11,7 @@ on:
jobs: jobs:
build: combined-build-and-run:
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }} if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
permissions: permissions:
@ -21,84 +21,47 @@ jobs:
steps: steps:
- uses: actions/checkout@v5 - uses: actions/checkout@v5
- name: Log in to the Container registry - name: Update Docker to the latest
uses: docker/login-action@v3 uses: docker/setup-docker-action@v4
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image - name: Build Docker image
id: push id: push
uses: docker/build-push-action@v5 uses: docker/build-push-action@v6
with: with:
context: https://github.com/haproxytech/haproxy-qns.git context: https://github.com/haproxytech/haproxy-qns.git
push: true platforms: linux/amd64
build-args: | build-args: |
SSLLIB=AWS-LC SSLLIB=AWS-LC
tags: ghcr.io/${{ github.repository }}:aws-lc tags: local:aws-lc
- name: Cleanup registry
uses: actions/delete-package-versions@v5
with:
owner: ${{ github.repository_owner }}
package-name: 'haproxy'
package-type: container
min-versions-to-keep: 1
delete-only-untagged-versions: 'true'
run:
needs: build
strategy:
matrix:
suite: [
{ client: chrome, tests: "http3" },
{ client: picoquic, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" },
{ client: quic-go, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" },
{ client: ngtcp2, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" }
]
fail-fast: false
name: ${{ matrix.suite.client }}
runs-on: ubuntu-24.04
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v5
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Install tshark - name: Install tshark
run: | run: |
sudo apt-get update sudo apt-get update
sudo apt-get -y install tshark sudo apt-get -y install tshark
- name: Pull image
run: |
docker pull ghcr.io/${{ github.repository }}:aws-lc
- name: Run - name: Run
run: | run: |
git clone https://github.com/quic-interop/quic-interop-runner git clone https://github.com/quic-interop/quic-interop-runner
cd quic-interop-runner cd quic-interop-runner
pip install -r requirements.txt --break-system-packages pip install -r requirements.txt --break-system-packages
python run.py -j result.json -l logs -r haproxy=ghcr.io/${{ github.repository }}:aws-lc -t ${{ matrix.suite.tests }} -c ${{ matrix.suite.client }} -s haproxy python run.py -j result.json -l logs-chrome -r haproxy=local:aws-lc -t "http3" -c chrome -s haproxy
python run.py -j result.json -l logs-picoquic -r haproxy=local:aws-lc -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" -c picoquic -s haproxy
python run.py -j result.json -l logs-quic-go -r haproxy=local:aws-lc -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" -c quic-go -s haproxy
python run.py -j result.json -l logs-ngtcp2 -r haproxy=local:aws-lc -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,resumption,zerortt,http3,blackhole,keyupdate,ecn,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,ipv6,v2" -c ngtcp2 -s haproxy
- name: Delete succeeded logs - name: Delete succeeded logs
if: failure() if: failure()
run: | run: |
cd quic-interop-runner/logs/haproxy_${{ matrix.suite.client }} for client in chrome picoquic quic-go ngtcp2; do
pushd quic-interop-runner/logs-${client}/haproxy_${client}
cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf
popd
done
- name: Logs upload - name: Logs upload
if: failure() if: failure()
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
with: with:
name: logs-${{ matrix.suite.client }} name: logs
path: quic-interop-runner/logs/ path: quic-interop-runner/logs*/
retention-days: 6 retention-days: 6

View file

@ -11,7 +11,7 @@ on:
jobs: jobs:
build: combined-build-and-run:
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }} if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
permissions: permissions:
@ -21,82 +21,45 @@ jobs:
steps: steps:
- uses: actions/checkout@v5 - uses: actions/checkout@v5
- name: Log in to the Container registry - name: Update Docker to the latest
uses: docker/login-action@v3 uses: docker/setup-docker-action@v4
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image - name: Build Docker image
id: push id: push
uses: docker/build-push-action@v5 uses: docker/build-push-action@v6
with: with:
context: https://github.com/haproxytech/haproxy-qns.git context: https://github.com/haproxytech/haproxy-qns.git
push: true platforms: linux/amd64
build-args: | build-args: |
SSLLIB=LibreSSL SSLLIB=LibreSSL
tags: ghcr.io/${{ github.repository }}:libressl tags: local:libressl
- name: Cleanup registry
uses: actions/delete-package-versions@v5
with:
owner: ${{ github.repository_owner }}
package-name: 'haproxy'
package-type: container
min-versions-to-keep: 1
delete-only-untagged-versions: 'true'
run:
needs: build
strategy:
matrix:
suite: [
{ client: picoquic, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,v2" },
{ client: quic-go, tests: "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,transferloss,transfercorruption,v2" }
]
fail-fast: false
name: ${{ matrix.suite.client }}
runs-on: ubuntu-24.04
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
steps:
- uses: actions/checkout@v5
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Install tshark - name: Install tshark
run: | run: |
sudo apt-get update sudo apt-get update
sudo apt-get -y install tshark sudo apt-get -y install tshark
- name: Pull image
run: |
docker pull ghcr.io/${{ github.repository }}:libressl
- name: Run - name: Run
run: | run: |
git clone https://github.com/quic-interop/quic-interop-runner git clone https://github.com/quic-interop/quic-interop-runner
cd quic-interop-runner cd quic-interop-runner
pip install -r requirements.txt --break-system-packages pip install -r requirements.txt --break-system-packages
python run.py -j result.json -l logs -r haproxy=ghcr.io/${{ github.repository }}:libressl -t ${{ matrix.suite.tests }} -c ${{ matrix.suite.client }} -s haproxy python run.py -j result.json -l logs-picoquic -r haproxy=local:libressl -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,handshakeloss,transferloss,handshakecorruption,transfercorruption,v2" -c picoquic -s haproxy
python run.py -j result.json -l logs-quic-go -r haproxy=local:libressl -t "handshake,transfer,longrtt,chacha20,multiplexing,retry,http3,blackhole,amplificationlimit,transferloss,transfercorruption,v2" -c quic-go -s haproxy
- name: Delete succeeded logs - name: Delete succeeded logs
if: failure() if: failure()
run: | run: |
cd quic-interop-runner/logs/haproxy_${{ matrix.suite.client }} for client in picoquic quic-go; do
pushd quic-interop-runner/logs-${client}/haproxy_${client}
cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf cat ../../result.json | jq -r '.results[][] | select(.result=="succeeded") | .name' | xargs rm -rf
popd
done
- name: Logs upload - name: Logs upload
if: failure() if: failure()
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
with: with:
name: logs-${{ matrix.suite.client }} name: logs
path: quic-interop-runner/logs/ path: quic-interop-runner/logs*/
retention-days: 6 retention-days: 6

View file

@ -23,7 +23,7 @@ jobs:
sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
sudo apt-get --no-install-recommends -y install socat gdb sudo apt-get --no-install-recommends -y install socat gdb
- name: Install QuicTLS - name: Install QuicTLS
run: env QUICTLS=yes QUICTLS_URL=https://github.com/quictls/quictls scripts/build-ssl.sh run: env QUICTLS_VERSION=main QUICTLS_URL=https://github.com/quictls/quictls scripts/build-ssl.sh
- name: Compile HAProxy - name: Compile HAProxy
run: | run: |
make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \ make -j$(nproc) ERR=1 CC=gcc TARGET=linux-glibc \

View file

@ -57,7 +57,7 @@ jobs:
echo "key=$(echo ${{ matrix.name }} | sha256sum | awk '{print $1}')" >> $GITHUB_OUTPUT echo "key=$(echo ${{ matrix.name }} | sha256sum | awk '{print $1}')" >> $GITHUB_OUTPUT
- name: Cache SSL libs - name: Cache SSL libs
if: ${{ matrix.ssl && matrix.ssl != 'stock' && matrix.ssl != 'BORINGSSL=yes' && matrix.ssl != 'QUICTLS=yes' }} if: ${{ matrix.ssl && matrix.ssl != 'stock' && matrix.ssl != 'BORINGSSL=yes' && !contains(matrix.ssl, 'QUICTLS') }}
id: cache_ssl id: cache_ssl
uses: actions/cache@v4 uses: actions/cache@v4
with: with:

View file

@ -18,6 +18,7 @@ jobs:
msys2: msys2:
name: ${{ matrix.name }} name: ${{ matrix.name }}
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
if: ${{ github.repository_owner == 'haproxy' || github.event_name == 'workflow_dispatch' }}
defaults: defaults:
run: run:
shell: msys2 {0} shell: msys2 {0}

255
CHANGELOG
View file

@ -1,6 +1,261 @@
ChangeLog : ChangeLog :
=========== ===========
2026/02/19 : 3.4-dev5
- DOC: internals: add mworker V3 internals
- BUG/MINOR: threads: Initialize maxthrpertgroup earlier.
- BUG/MEDIUM: threads: Differ checking the max threads per group number
- BUG/MINOR: startup: fix allocation error message of progname string
- BUG/MINOR: startup: handle a possible strdup() failure
- MINOR: cfgparse: validate defaults proxies separately
- MINOR: cfgparse: move proxy post-init in a dedicated function
- MINOR: proxy: refactor proxy inheritance of a defaults section
- MINOR: proxy: refactor mode parsing
- MINOR: backend: add function to check support for dynamic servers
- MINOR: proxy: define "add backend" handler
- MINOR: proxy: parse mode on dynamic backend creation
- MINOR: proxy: parse guid on dynamic backend creation
- MINOR: proxy: check default proxy compatibility on "add backend"
- MEDIUM: proxy: implement dynamic backend creation
- MINOR: proxy: assign dynamic proxy ID
- REGTESTS: add dynamic backend creation test
- BUG/MINOR: proxy: fix clang build error on "add backend" handler
- BUG/MINOR: proxy: fix null dereference in "add backend" handler
- MINOR: net_helper: extend the ip.fp output with an option presence mask
- BUG/MINOR: proxy: fix default ALPN bind settings
- CLEANUP: lb-chash: free lb_nodes from chash's deinit(), not global
- BUG/MEDIUM: lb-chash: always properly initialize lb_nodes with dynamic servers
- CLEANUP: haproxy: fix bad line wrapping in run_poll_loop()
- MINOR: activity: support setting/clearing lock/memory watching for task profiling
- MEDIUM: activity: apply and use new finegrained task profiling settings
- MINOR: activity: allow to switch per-task lock/memory profiling at runtime
- MINOR: startup: Add the SSL lib verify directory in haproxy -vv
- BUG/MINOR: ssl: SSL_CERT_DIR environment variable doesn't affect haproxy
- CLEANUP: initcall: adjust comments to INITCALL{0,1} macros
- DOC: proxy-proto: underline the packed attribute for struct pp2_tlv_ssl
- MINOR: queues: Check minconn first in srv_dynamic_maxconn()
- MINOR: servers: Call process_srv_queue() without lock when possible
- BUG/MINOR: quic: ensure handshake speed up is only run once per conn
- BUG/MAJOR: quic: reject invalid token
- BUG/MAJOR: quic: fix parsing frame type
- MINOR: ssl: Missing '\n' in error message
- MINOR: jwt: Convert an RSA JWK into an EVP_PKEY
- MINOR: jwt: Add new jwt_decrypt_jwk converter
- REGTESTS: jwt: Add new "jwt_decrypt_jwk" tests
- MINOR: startup: Add HAVE_WORKING_TCP_MD5SIG in haproxy -vv
- MINOR: startup: sort the feature list in haproxy -vv
- MINOR: startup: show the list of detected features at runtime with haproxy -vv
- SCRIPTS: build-vtest: allow to set a TMPDIR and a DESTDIR
- MINOR: filters: rework RESUME_FILTER_* macros as inline functions
- MINOR: filters: rework filter iteration for channel related callback functions
- MEDIUM: filters: use per-channel filter list when relevant
- DEV: gdb: add a utility to find the post-mortem address from a core
- BUG/MINOR: deviceatlas: add missing return on error in config parsers
- BUG/MINOR: deviceatlas: add NULL checks on strdup() results in config parsers
- BUG/MEDIUM: deviceatlas: fix resource leaks on init error paths
- BUG/MINOR: deviceatlas: fix off-by-one in da_haproxy_conv()
- BUG/MINOR: deviceatlas: fix cookie vlen using wrong length after extraction
- BUG/MINOR: deviceatlas: fix double-checked locking race in checkinst
- BUG/MINOR: deviceatlas: fix resource leak on hot-reload compile failure
- BUG/MINOR: deviceatlas: fix deinit to only finalize when initialized
- BUG/MINOR: deviceatlas: set cache_size on hot-reloaded atlas instance
- MINOR: deviceatlas: check getproptype return and remove pprop indirection
- MINOR: deviceatlas: increase DA_MAX_HEADERS and header buffer sizes
- MINOR: deviceatlas: define header_evidence_entry in dummy library header
- MINOR: deviceatlas: precompute maxhdrlen to skip oversized headers early
- CLEANUP: deviceatlas: add unlikely hints and minor code tidying
- DEV: gdb: use unsigned longs to display pools memory usage
- BUG/MINOR: ssl: lack crtlist_dup_ssl_conf() declaration
- BUG/MINOR: ssl: double-free on error path w/ ssl-f-use parser
- BUG/MINOR: ssl: fix leak in ssl-f-use parser upon error
- BUG/MINOR: ssl: clarify ssl-f-use errors in post-section parsing
- BUG/MINOR: ssl: error with ssl-f-use when no "crt"
- MEDIUM: backend: make "balance random" consider tg local req rate when loads are equal
- BUG/MAJOR: Revert "MEDIUM: mux-quic: add BUG_ON if sending on locally closed QCS"
- BUG/MEDIUM: h3: reject frontend CONNECT as currently not implemented
- MINOR: mux-quic: add BUG_ON_STRESS() when draining data on closed stream
- REGTESTS: fix quoting in feature cmd which prevents test execution
- BUG/MEDIUM: mux-h2/quic: Stop sending via fast-forward if stream is closed
- BUG/MEDIUM: mux-h1: Stop sending via fast-forward for unexpected states
- BUG/MEDIUM: applet: Fix test on shut flags for legacy applets (v2)
- DEV: term-events: Fix handshake events decoding
- BUG/MINOR: flt-trace: Properly compute length of the first DATA block
- MINOR: flt-trace: Add an option to limit the amount of data forwarded
- CLEANUP: compression: Remove unused static buffers
- BUG/MEDIUM: shctx: Use the next block when data exactly filled a block
- BUG/MINOR: http-ana: Stop to wait for body on client error/abort
- MINOR: stconn: Add missing SC_FL_NO_FASTFWD flag in sc_show_flags
- REORG: stconn: Move functions related to channel buffers to sc_strm.h
- BUG/MEDIUM: jwe: fix timing side-channel and dead code in JWE decryption
- MINOR: tree-wide: Use the buffer size instead of global setting when possible
- MINOR: buffers: Swap buffers of same size only
- BUG/MINOR: config: Check buffer pool creation for failures
- MEDIUM: cache: Don't rely on a chunk to store messages payload
- MEDIUM: stream: Limit number of synchronous send per stream wakeup
- MEDIUM: compression: Be sure to never compress more than a chunk at once
- MEDIUM: mux-h1/mux-h2/mux-fcgi/h3: Disable 0-copy for buffers of different size
- MEDIUM: applet: Disable 0-copy for buffers of different size
- MINOR: h1-htx: Disable 0-copy for buffers of different size
- MEDIUM: stream: Offer buffers of default size only
- BUG/MEDIUM: htx: Fix function used to change part of a block value when defrag
- MEDIUM: htx: Refactor transfer of htx blocks to merge DATA blocks if possible
- MEDIUM: htx: Refactor htx defragmentation to merge data blocks
- MEDIUM: htx: Improve detection of fragmented/unordered HTX messages
- MINOR: http-ana: Do a defrag on unaligned HTX message when waiting for payload
- MINOR: http-fetch: Use pointer to HTX DATA block when retrieving HTX body
- MEDIUM: dynbuf: Add a pool for large buffers with a configurable size
- MEDIUM: chunk: Add support for large chunks
- MEDIUM: stconn: Properly handle large buffers during a receive
- MEDIUM: sample: Get chunks with a size dependent on input data when necessary
- MEDIUM: http-fetch: Be able to use large chunks when necessary
- MINOR: htx: Get large chunk if necessary to perform a defrag
- MEDIUM: http-ana: Use a large buffer if necessary when waiting for body
- MINOR: dynbuf: Add helpers to know if a buffer is a default or a large buffer
- MINOR: config: reject configs using HTTP with large bufsize >= 256 MB
- CI: do not use ghcr.io for Quic Interop workflows
- BUG/MEDIUM: ssl: SSL backend sessions used after free
- CI: vtest: move the vtest2 URL to vinyl-cache.org
- CI: github: disable windows.yml by default on unofficials repo
- MEDIUM: Add connect/queue/tarpit timeouts to set-timeout
- CLEANUP: mux-h1: Remove unneeded null check
- DOC: remove openssl no-deprecated CI image
- BUG/MINOR: acme: fix X509_NAME leak when X509_set_issuer_name() fails
- BUG/MINOR: backend: check delay MUX before conn_prepare()
- OPTIM: backend: reduce contention when checking MUX init with ALPN
- DOC: configuration: add the ACME wiki page link
- MINOR: ssl/ckch: Move EVP_PKEY and cert code generation from acme
- MINOR: ssl/ckch: certificates generation from "load" "crt-store" directive
- MINOR: trace: add definitions for haterm streams
- MINOR: init: allow a fileless init mode
- MEDIUM: init: allow the redefinition of argv[] parsing function
- MINOR: stconn: stream instantiation from proxy callback
- MINOR: haterm: add haterm HTTP server
- MINOR: haterm: new "haterm" utility
- MINOR: haterm: increase thread-local pool size
- BUG/MEDIUM: stats-file: fix shm-stats-file recover when all process slots are full
- BUG/MINOR: stats-file: manipulate shm-stats-file heartbeat using unsigned int
- BUG/MEDIUM: stats-file: detect and fix inconsistent shared clock when resuming from shm-stats-file
- CI: github: only enable OS X on development branches
2026/02/04 : 3.4-dev4
- BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
- BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
- BUG/MINOR: promex: Detach promex from the server on error during its metrics dump
- BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formatting the start line
- BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
- BUG/MINOR: config: check capture pool creations for failures
- BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
- MEDIUM: pools: better check for size rounding overflow on registration
- DOC: reg-tests: update VTest upstream link in the starting guide
- BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
- BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
- MINOR: ssl: display libssl errors on private key loading
- BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
- MINOR: ssl: allow to disable certificate compression
- BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
- DOC: config: mention some possible TLS versions restrictions for kTLS
- OPTIM: server: move queueslength in server struct
- OPTIM: proxy: separate queues fields from served
- OPTIM: server: get rid of the last use of _ha_barrier_full()
- DOC: config: mention that idle connection sharing is per thread-group
- MEDIUM: h1: strictly verify quoting in chunk extensions
- BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
- BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
- MEDIUM: ssl: remove connection from msg callback args
- MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
- REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
- DOC: internals: cleanup few typos in master-worker documentation
- BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
- MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
- MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
- BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
- BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
- DOC: config: mention the limitation on server id range for consistent hash
- MEDIUM: backend: make "balance random" consider req rate when loads are equal
- BUG/MINOR: config: Fix setting of alt_proto
2026/01/22 : 3.4-dev3
- BUILD: ssl: strchr definition changed in C23
- BUILD: tools: memchr definition changed in C23
- BUG/MINOR: cfgparse: wrong section name upon error
- MINOR: cfgparse: Refactor "userlist" parser to print it in -dKall operation
- BUILD: sockpair: fix build issue on macOS related to variable-length arrays
- BUG/MINOR: cli/stick-tables: argument to "show table" is optional
- REGTESTS: ssl: Fix reg-tests curve check
- CI: github: remove ERR=1 temporarily from the ECH job
- BUG/MINOR: ech/quic: enable ech configuration also for quic listeners
- MEDIUM: config: warn if some userlist hashes are too slow
- MINOR: cfgparse: remove duplicate "force-persist" in common kw list
- MINOR: sample: also support retrieving fc.timer.handshake without a stream
- MINOR: tcp-sample: permit retrieving tcp_info from the connection/session stage
- CLEANUP: connection: Remove outdated note about CO_FL `0x00002000` being unused
- MINOR: receiver: Dynamically alloc the "members" field of shard_info
- MINOR: stats: Increase the tgid from 8bits to 16bits
- BUG/MINOR: stats-file: Use a 16bits variable when loading tgid
- BUG/MINOR: hlua_fcn: fix broken yield for Patref:add_bulk()
- BUG/MINOR: hlua_fcn: ensure Patref:add_bulk() is given a table object before using it
- BUG/MINOR: net_helper: fix IPv6 header length processing
- MEDIUM: counters: Dynamically allocate per-thread group counters
- MEDIUM: counters: Remove some extra tests
- BUG/MEDIUM: threads: Fix binding thread on bind.
- BUG/MEDIUM: quic: fix ACK ECN frame parsing
- MEDIUM: counters: mostly revert da813ae4d7cb77137ed
- BUG/MINOR: http_act: fix deinit performed on uninitialized lf_expr in release_http_map()
- MINOR: queues: Turn non_empty_tgids into a long array.
- MINOR: threads: Eliminate all_tgroups_mask.
- BUG/MEDIUM: queues: Fix arithmetic when filling non_empty_tgids
- MEDIUM: thread: Turn the group mask in thread set into a group counter
- BUG/MINOR: proxy: free persist_rules
- MEDIUM: stream: refactor switching-rules processing
- REGTESTS: add test on backend switching rules selection
- MEDIUM: proxy: do not select a backend if disabled
- MEDIUM: proxy: implement publish/unpublish backend CLI
- MINOR: stats: report BE unpublished status
- MINOR: cfgparse: adapt warnif_cond_conflicts() error output
- MEDIUM: proxy: force traffic on unpublished/disabled backends
- MINOR: ssl: Factorize AES GCM data processing
- MINOR: ssl: Add new aes_cbc_enc/_dec converters
- REGTESTS: ssl: Add tests for new aes cbc converters
- MINOR: jwe: Add new jwt_decrypt_secret converter
- MINOR: jwe: Add new jwt_decrypt_cert converter
- REGTESTS: jwe: Add jwt_decrypt_secret and jwt_decrypt_cert tests
- DOC: jwe: Add doc for jwt_decrypt converters
- MINOR: jwe: Some algorithms not supported by AWS-LC
- REGTESTS: jwe: Fix tests of algorithms not supported by AWS-LC
- BUG/MINOR: cfgparse: fix "default" prefix parsing
- REORG/MINOR: cfgparse: eliminate code duplication by lshift_args()
- MEDIUM: systemd: implement directory loading
- CI: github: switch monthly Fedora Rawhide build to OpenSSL
- SCRIPTS: build-ssl: use QUICTLS_VERSION instead of QUICTLS=yes
- CI: github: define the right quictls version in each jobs
- CI: github: fix vtest.yml with "not quictls"
- MINOR: cli: use srv_drop() when server was created using new_server()
- BUG/MINOR: server: ensure server is detached from proxy list before being freed
- BUG/MEDIUM: promex: server iteration may rely on stale server
- SCRIPTS: build-ssl: clone the quictls branch directly
- SCRIPTS: build-ssl: fix quictls build for 1.1.1 versions
- BUG/MEDIUM: log: parsing log-forward options may result in segfault
- DOC: proxy-protocol: Add SSL client certificate TLV
- DOC: fix typos in the documentation files
- DOC: fix mismatched quotes typos around words in the documentation files
- REORG: cfgparse: move peers parsing to cfgparse-peers.c
- MINOR: tools: add chunk_escape_string() helper function
- MINOR: vars: store variable names for runtime access
- MINOR: vars: implement dump_all_vars() sample fetch
- DOC: vars: document dump_all_vars() sample fetch
- BUG/MEDIUM: ssl: fix error path on generate-certificates
- BUG/MEDIUM: ssl: fix generate-certificates option when SNI greater than 64bytes
- BUG/MEDIUM: mux-quic: prevent BUG_ON() on aborted uni stream close
- REGTESTS: ssl: fix generate-certificates w/ LibreSSL
- SCRIPTS: build: enable symbols in AWS-LC builds
- BUG/MINOR: proxy: fix deinit crash on defaults with duplicate name
- BUG/MEDIUM: debug: only dump Lua state when panicking
- MINOR: proxy: remove proxy_preset_defaults()
- MINOR: proxy: refactor defaults proxies API
- MINOR: proxy: simplify defaults proxies list storage
- MEDIUM: cfgparse: do not store unnamed defaults in name tree
- MEDIUM: proxy: implement persistent named defaults
2026/01/07 : 3.4-dev2 2026/01/07 : 3.4-dev2
- BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards - BUG/MEDIUM: mworker/listener: ambiguous use of RX_F_INHERITED with shards
- BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2) - BUG/MEDIUM: http-ana: Properly detect client abort when forwarding response (v2)

View file

@ -643,7 +643,7 @@ ifneq ($(USE_OPENSSL:0=),)
OPTIONS_OBJS += src/ssl_sock.o src/ssl_ckch.o src/ssl_ocsp.o src/ssl_crtlist.o \ OPTIONS_OBJS += src/ssl_sock.o src/ssl_ckch.o src/ssl_ocsp.o src/ssl_crtlist.o \
src/ssl_sample.o src/cfgparse-ssl.o src/ssl_gencert.o \ src/ssl_sample.o src/cfgparse-ssl.o src/ssl_gencert.o \
src/ssl_utils.o src/jwt.o src/ssl_clienthello.o src/jws.o src/acme.o \ src/ssl_utils.o src/jwt.o src/ssl_clienthello.o src/jws.o src/acme.o \
src/ssl_trace.o src/ssl_trace.o src/jwe.o
endif endif
ifneq ($(USE_ENGINE:0=),) ifneq ($(USE_ENGINE:0=),)
@ -956,6 +956,7 @@ endif # obsolete targets
endif # TARGET endif # TARGET
OBJS = OBJS =
HATERM_OBJS =
ifneq ($(EXTRA_OBJS),) ifneq ($(EXTRA_OBJS),)
OBJS += $(EXTRA_OBJS) OBJS += $(EXTRA_OBJS)
@ -1002,12 +1003,15 @@ OBJS += src/mux_h2.o src/mux_h1.o src/mux_fcgi.o src/log.o \
src/ebsttree.o src/freq_ctr.o src/systemd.o src/init.o \ src/ebsttree.o src/freq_ctr.o src/systemd.o src/init.o \
src/http_acl.o src/dict.o src/dgram.o src/pipe.o \ src/http_acl.o src/dict.o src/dgram.o src/pipe.o \
src/hpack-huff.o src/hpack-enc.o src/ebtree.o src/hash.o \ src/hpack-huff.o src/hpack-enc.o src/ebtree.o src/hash.o \
src/httpclient_cli.o src/version.o src/ncbmbuf.o src/ech.o src/httpclient_cli.o src/version.o src/ncbmbuf.o src/ech.o \
src/cfgparse-peers.o src/haterm.o
ifneq ($(TRACE),) ifneq ($(TRACE),)
OBJS += src/calltrace.o OBJS += src/calltrace.o
endif endif
HATERM_OBJS += $(OBJS) src/haterm_init.o
# Used only for forced dependency checking. May be cleared during development. # Used only for forced dependency checking. May be cleared during development.
INCLUDES = $(wildcard include/*/*.h) INCLUDES = $(wildcard include/*/*.h)
DEP = $(INCLUDES) .build_opts DEP = $(INCLUDES) .build_opts
@ -1039,7 +1043,7 @@ IGNORE_OPTS=help install install-man install-doc install-bin \
uninstall clean tags cscope tar git-tar version update-version \ uninstall clean tags cscope tar git-tar version update-version \
opts reg-tests reg-tests-help unit-tests admin/halog/halog dev/flags/flags \ opts reg-tests reg-tests-help unit-tests admin/halog/halog dev/flags/flags \
dev/haring/haring dev/ncpu/ncpu dev/poll/poll dev/tcploop/tcploop \ dev/haring/haring dev/ncpu/ncpu dev/poll/poll dev/tcploop/tcploop \
dev/term_events/term_events dev/term_events/term_events dev/gdb/pm-from-core
ifneq ($(TARGET),) ifneq ($(TARGET),)
ifeq ($(filter $(firstword $(MAKECMDGOALS)),$(IGNORE_OPTS)),) ifeq ($(filter $(firstword $(MAKECMDGOALS)),$(IGNORE_OPTS)),)
@ -1055,6 +1059,9 @@ endif # non-empty target
haproxy: $(OPTIONS_OBJS) $(OBJS) haproxy: $(OPTIONS_OBJS) $(OBJS)
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS) $(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
haterm: $(OPTIONS_OBJS) $(HATERM_OBJS)
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
objsize: haproxy objsize: haproxy
$(Q)objdump -t $^|grep ' g '|grep -F '.text'|awk '{print $$5 FS $$6}'|sort $(Q)objdump -t $^|grep ' g '|grep -F '.text'|awk '{print $$5 FS $$6}'|sort
@ -1070,6 +1077,9 @@ admin/dyncookie/dyncookie: admin/dyncookie/dyncookie.o
dev/flags/flags: dev/flags/flags.o dev/flags/flags: dev/flags/flags.o
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS) $(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
dev/gdb/pm-from-core: dev/gdb/pm-from-core.o
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
dev/haring/haring: dev/haring/haring.o dev/haring/haring: dev/haring/haring.o
$(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS) $(cmd_LD) $(ARCH_FLAGS) $(LDFLAGS) -o $@ $^ $(LDOPTS)
@ -1168,7 +1178,7 @@ distclean: clean
$(Q)rm -f admin/dyncookie/dyncookie $(Q)rm -f admin/dyncookie/dyncookie
$(Q)rm -f dev/haring/haring dev/ncpu/ncpu{,.so} dev/poll/poll dev/tcploop/tcploop $(Q)rm -f dev/haring/haring dev/ncpu/ncpu{,.so} dev/poll/poll dev/tcploop/tcploop
$(Q)rm -f dev/hpack/decode dev/hpack/gen-enc dev/hpack/gen-rht $(Q)rm -f dev/hpack/decode dev/hpack/gen-enc dev/hpack/gen-rht
$(Q)rm -f dev/qpack/decode $(Q)rm -f dev/qpack/decode dev/gdb/pm-from-core
tags: tags:
$(Q)find src include \( -name '*.c' -o -name '*.h' \) -print0 | \ $(Q)find src include \( -name '*.c' -o -name '*.h' \) -print0 | \

View file

@ -2,7 +2,6 @@
[![alpine/musl](https://github.com/haproxy/haproxy/actions/workflows/musl.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/musl.yml) [![alpine/musl](https://github.com/haproxy/haproxy/actions/workflows/musl.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/musl.yml)
[![AWS-LC](https://github.com/haproxy/haproxy/actions/workflows/aws-lc.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/aws-lc.yml) [![AWS-LC](https://github.com/haproxy/haproxy/actions/workflows/aws-lc.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/aws-lc.yml)
[![openssl no-deprecated](https://github.com/haproxy/haproxy/actions/workflows/openssl-nodeprecated.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/openssl-nodeprecated.yml)
[![Illumos](https://github.com/haproxy/haproxy/actions/workflows/illumos.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/illumos.yml) [![Illumos](https://github.com/haproxy/haproxy/actions/workflows/illumos.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/illumos.yml)
[![NetBSD](https://github.com/haproxy/haproxy/actions/workflows/netbsd.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/netbsd.yml) [![NetBSD](https://github.com/haproxy/haproxy/actions/workflows/netbsd.yml/badge.svg)](https://github.com/haproxy/haproxy/actions/workflows/netbsd.yml)
[![FreeBSD](https://api.cirrus-ci.com/github/haproxy/haproxy.svg?task=FreeBSD)](https://cirrus-ci.com/github/haproxy/haproxy/) [![FreeBSD](https://api.cirrus-ci.com/github/haproxy/haproxy.svg?task=FreeBSD)](https://cirrus-ci.com/github/haproxy/haproxy/)

View file

@ -1,2 +1,2 @@
$Format:%ci$ $Format:%ci$
2026/01/07 2026/02/19

View file

@ -1 +1 @@
3.4-dev2 3.4-dev5

View file

@ -31,6 +31,7 @@ static struct {
da_atlas_t atlas; da_atlas_t atlas;
da_evidence_id_t useragentid; da_evidence_id_t useragentid;
da_severity_t loglevel; da_severity_t loglevel;
size_t maxhdrlen;
char separator; char separator;
unsigned char daset:1; unsigned char daset:1;
} global_deviceatlas = { } global_deviceatlas = {
@ -42,6 +43,7 @@ static struct {
.atlasmap = NULL, .atlasmap = NULL,
.atlasfd = -1, .atlasfd = -1,
.useragentid = 0, .useragentid = 0,
.maxhdrlen = 0,
.daset = 0, .daset = 0,
.separator = '|', .separator = '|',
}; };
@ -57,6 +59,10 @@ static int da_json_file(char **args, int section_type, struct proxy *curpx,
return -1; return -1;
} }
global_deviceatlas.jsonpath = strdup(args[1]); global_deviceatlas.jsonpath = strdup(args[1]);
if (unlikely(global_deviceatlas.jsonpath == NULL)) {
memprintf(err, "deviceatlas json file : out of memory.\n");
return -1;
}
return 0; return 0;
} }
@ -73,6 +79,7 @@ static int da_log_level(char **args, int section_type, struct proxy *curpx,
loglevel = atol(args[1]); loglevel = atol(args[1]);
if (loglevel < 0 || loglevel > 3) { if (loglevel < 0 || loglevel > 3) {
memprintf(err, "deviceatlas log level : expects a log level between 0 and 3, %s given.\n", args[1]); memprintf(err, "deviceatlas log level : expects a log level between 0 and 3, %s given.\n", args[1]);
return -1;
} else { } else {
global_deviceatlas.loglevel = (da_severity_t)loglevel; global_deviceatlas.loglevel = (da_severity_t)loglevel;
} }
@ -101,6 +108,10 @@ static int da_properties_cookie(char **args, int section_type, struct proxy *cur
return -1; return -1;
} else { } else {
global_deviceatlas.cookiename = strdup(args[1]); global_deviceatlas.cookiename = strdup(args[1]);
if (unlikely(global_deviceatlas.cookiename == NULL)) {
memprintf(err, "deviceatlas cookie name : out of memory.\n");
return -1;
}
} }
global_deviceatlas.cookienamelen = strlen(global_deviceatlas.cookiename); global_deviceatlas.cookienamelen = strlen(global_deviceatlas.cookiename);
return 0; return 0;
@ -119,6 +130,7 @@ static int da_cache_size(char **args, int section_type, struct proxy *curpx,
cachesize = atol(args[1]); cachesize = atol(args[1]);
if (cachesize < 0 || cachesize > DA_CACHE_MAX) { if (cachesize < 0 || cachesize > DA_CACHE_MAX) {
memprintf(err, "deviceatlas cache size : expects a cache size between 0 and %d, %s given.\n", DA_CACHE_MAX, args[1]); memprintf(err, "deviceatlas cache size : expects a cache size between 0 and %d, %s given.\n", DA_CACHE_MAX, args[1]);
return -1;
} else { } else {
#ifdef APINOCACHE #ifdef APINOCACHE
fprintf(stdout, "deviceatlas cache size : no-op, its support is disabled.\n"); fprintf(stdout, "deviceatlas cache size : no-op, its support is disabled.\n");
@ -165,7 +177,7 @@ static int init_deviceatlas(void)
da_status_t status; da_status_t status;
jsonp = fopen(global_deviceatlas.jsonpath, "r"); jsonp = fopen(global_deviceatlas.jsonpath, "r");
if (jsonp == 0) { if (unlikely(jsonp == 0)) {
ha_alert("deviceatlas : '%s' json file has invalid path or is not readable.\n", ha_alert("deviceatlas : '%s' json file has invalid path or is not readable.\n",
global_deviceatlas.jsonpath); global_deviceatlas.jsonpath);
err_code |= ERR_ALERT | ERR_FATAL; err_code |= ERR_ALERT | ERR_FATAL;
@ -177,9 +189,11 @@ static int init_deviceatlas(void)
status = da_atlas_compile(jsonp, da_haproxy_read, da_haproxy_seek, status = da_atlas_compile(jsonp, da_haproxy_read, da_haproxy_seek,
&global_deviceatlas.atlasimgptr, &atlasimglen); &global_deviceatlas.atlasimgptr, &atlasimglen);
fclose(jsonp); fclose(jsonp);
if (status != DA_OK) { if (unlikely(status != DA_OK)) {
ha_alert("deviceatlas : '%s' json file is invalid.\n", ha_alert("deviceatlas : '%s' json file is invalid.\n",
global_deviceatlas.jsonpath); global_deviceatlas.jsonpath);
free(global_deviceatlas.atlasimgptr);
da_fini();
err_code |= ERR_ALERT | ERR_FATAL; err_code |= ERR_ALERT | ERR_FATAL;
goto out; goto out;
} }
@ -187,8 +201,10 @@ static int init_deviceatlas(void)
status = da_atlas_open(&global_deviceatlas.atlas, extraprops, status = da_atlas_open(&global_deviceatlas.atlas, extraprops,
global_deviceatlas.atlasimgptr, atlasimglen); global_deviceatlas.atlasimgptr, atlasimglen);
if (status != DA_OK) { if (unlikely(status != DA_OK)) {
ha_alert("deviceatlas : data could not be compiled.\n"); ha_alert("deviceatlas : data could not be compiled.\n");
free(global_deviceatlas.atlasimgptr);
da_fini();
err_code |= ERR_ALERT | ERR_FATAL; err_code |= ERR_ALERT | ERR_FATAL;
goto out; goto out;
} }
@ -197,11 +213,28 @@ static int init_deviceatlas(void)
if (global_deviceatlas.cookiename == 0) { if (global_deviceatlas.cookiename == 0) {
global_deviceatlas.cookiename = strdup(DA_COOKIENAME_DEFAULT); global_deviceatlas.cookiename = strdup(DA_COOKIENAME_DEFAULT);
if (unlikely(global_deviceatlas.cookiename == NULL)) {
ha_alert("deviceatlas : out of memory.\n");
da_atlas_close(&global_deviceatlas.atlas);
free(global_deviceatlas.atlasimgptr);
da_fini();
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
}
global_deviceatlas.cookienamelen = strlen(global_deviceatlas.cookiename); global_deviceatlas.cookienamelen = strlen(global_deviceatlas.cookiename);
} }
global_deviceatlas.useragentid = da_atlas_header_evidence_id(&global_deviceatlas.atlas, global_deviceatlas.useragentid = da_atlas_header_evidence_id(&global_deviceatlas.atlas,
"user-agent"); "user-agent");
{
size_t hi;
global_deviceatlas.maxhdrlen = 16;
for (hi = 0; hi < global_deviceatlas.atlas.header_evidence_count; hi++) {
size_t nl = strlen(global_deviceatlas.atlas.header_priorities[hi].name);
if (nl > global_deviceatlas.maxhdrlen)
global_deviceatlas.maxhdrlen = nl;
}
}
if ((global_deviceatlas.atlasfd = shm_open(ATLASMAPNM, O_RDWR, 0660)) != -1) { if ((global_deviceatlas.atlasfd = shm_open(ATLASMAPNM, O_RDWR, 0660)) != -1) {
global_deviceatlas.atlasmap = mmap(NULL, ATLASTOKSZ, PROT_READ | PROT_WRITE, MAP_SHARED, global_deviceatlas.atlasfd, 0); global_deviceatlas.atlasmap = mmap(NULL, ATLASTOKSZ, PROT_READ | PROT_WRITE, MAP_SHARED, global_deviceatlas.atlasfd, 0);
if (global_deviceatlas.atlasmap == MAP_FAILED) { if (global_deviceatlas.atlasmap == MAP_FAILED) {
@ -231,15 +264,13 @@ static void deinit_deviceatlas(void)
free(global_deviceatlas.cookiename); free(global_deviceatlas.cookiename);
da_atlas_close(&global_deviceatlas.atlas); da_atlas_close(&global_deviceatlas.atlas);
free(global_deviceatlas.atlasimgptr); free(global_deviceatlas.atlasimgptr);
da_fini();
} }
if (global_deviceatlas.atlasfd != -1) { if (global_deviceatlas.atlasfd != -1) {
munmap(global_deviceatlas.atlasmap, ATLASTOKSZ); munmap(global_deviceatlas.atlasmap, ATLASTOKSZ);
close(global_deviceatlas.atlasfd); close(global_deviceatlas.atlasfd);
shm_unlink(ATLASMAPNM);
} }
da_fini();
} }
static void da_haproxy_checkinst(void) static void da_haproxy_checkinst(void)
@ -258,6 +289,10 @@ static void da_haproxy_checkinst(void)
da_property_decl_t extraprops[1] = {{NULL, 0}}; da_property_decl_t extraprops[1] = {{NULL, 0}};
#ifdef USE_THREAD #ifdef USE_THREAD
HA_SPIN_LOCK(OTHER_LOCK, &dadwsch_lock); HA_SPIN_LOCK(OTHER_LOCK, &dadwsch_lock);
if (base[0] == 0) {
HA_SPIN_UNLOCK(OTHER_LOCK, &dadwsch_lock);
return;
}
#endif #endif
strlcpy2(atlasp, base + sizeof(char), sizeof(atlasp)); strlcpy2(atlasp, base + sizeof(char), sizeof(atlasp));
jsonp = fopen(atlasp, "r"); jsonp = fopen(atlasp, "r");
@ -275,10 +310,20 @@ static void da_haproxy_checkinst(void)
fclose(jsonp); fclose(jsonp);
if (status == DA_OK) { if (status == DA_OK) {
if (da_atlas_open(&inst, extraprops, cnew, atlassz) == DA_OK) { if (da_atlas_open(&inst, extraprops, cnew, atlassz) == DA_OK) {
inst.config.cache_size = global_deviceatlas.cachesize;
da_atlas_close(&global_deviceatlas.atlas); da_atlas_close(&global_deviceatlas.atlas);
free(global_deviceatlas.atlasimgptr); free(global_deviceatlas.atlasimgptr);
global_deviceatlas.atlasimgptr = cnew; global_deviceatlas.atlasimgptr = cnew;
global_deviceatlas.atlas = inst; global_deviceatlas.atlas = inst;
{
size_t hi;
global_deviceatlas.maxhdrlen = 16;
for (hi = 0; hi < inst.header_evidence_count; hi++) {
size_t nl = strlen(inst.header_priorities[hi].name);
if (nl > global_deviceatlas.maxhdrlen)
global_deviceatlas.maxhdrlen = nl;
}
}
base[0] = 0; base[0] = 0;
ha_notice("deviceatlas : new instance, data file date `%s`.\n", ha_notice("deviceatlas : new instance, data file date `%s`.\n",
da_getdatacreationiso8601(&global_deviceatlas.atlas)); da_getdatacreationiso8601(&global_deviceatlas.atlas));
@ -286,6 +331,8 @@ static void da_haproxy_checkinst(void)
ha_alert("deviceatlas : instance update failed.\n"); ha_alert("deviceatlas : instance update failed.\n");
free(cnew); free(cnew);
} }
} else {
free(cnew);
} }
#ifdef USE_THREAD #ifdef USE_THREAD
HA_SPIN_UNLOCK(OTHER_LOCK, &dadwsch_lock); HA_SPIN_UNLOCK(OTHER_LOCK, &dadwsch_lock);
@ -297,7 +344,7 @@ static void da_haproxy_checkinst(void)
static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_t *devinfo) static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_t *devinfo)
{ {
struct buffer *tmp; struct buffer *tmp;
da_propid_t prop, *pprop; da_propid_t prop;
da_status_t status; da_status_t status;
da_type_t proptype; da_type_t proptype;
const char *propname; const char *propname;
@ -317,13 +364,15 @@ static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_
chunk_appendf(tmp, "%c", global_deviceatlas.separator); chunk_appendf(tmp, "%c", global_deviceatlas.separator);
continue; continue;
} }
pprop = &prop; if (unlikely(da_atlas_getproptype(&global_deviceatlas.atlas, prop, &proptype) != DA_OK)) {
da_atlas_getproptype(&global_deviceatlas.atlas, *pprop, &proptype); chunk_appendf(tmp, "%c", global_deviceatlas.separator);
continue;
}
switch (proptype) { switch (proptype) {
case DA_TYPE_BOOLEAN: { case DA_TYPE_BOOLEAN: {
bool val; bool val;
status = da_getpropboolean(devinfo, *pprop, &val); status = da_getpropboolean(devinfo, prop, &val);
if (status == DA_OK) { if (status == DA_OK) {
chunk_appendf(tmp, "%d", val); chunk_appendf(tmp, "%d", val);
} }
@ -332,7 +381,7 @@ static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_
case DA_TYPE_INTEGER: case DA_TYPE_INTEGER:
case DA_TYPE_NUMBER: { case DA_TYPE_NUMBER: {
long val; long val;
status = da_getpropinteger(devinfo, *pprop, &val); status = da_getpropinteger(devinfo, prop, &val);
if (status == DA_OK) { if (status == DA_OK) {
chunk_appendf(tmp, "%ld", val); chunk_appendf(tmp, "%ld", val);
} }
@ -340,7 +389,7 @@ static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_
} }
case DA_TYPE_STRING: { case DA_TYPE_STRING: {
const char *val; const char *val;
status = da_getpropstring(devinfo, *pprop, &val); status = da_getpropstring(devinfo, prop, &val);
if (status == DA_OK) { if (status == DA_OK) {
chunk_appendf(tmp, "%s", val); chunk_appendf(tmp, "%s", val);
} }
@ -371,29 +420,26 @@ static int da_haproxy_conv(const struct arg *args, struct sample *smp, void *pri
{ {
da_deviceinfo_t devinfo; da_deviceinfo_t devinfo;
da_status_t status; da_status_t status;
const char *useragent; char useragentbuf[1024];
char useragentbuf[1024] = { 0 };
int i; int i;
if (global_deviceatlas.daset == 0 || smp->data.u.str.data == 0) { if (unlikely(global_deviceatlas.daset == 0) || smp->data.u.str.data == 0) {
return 1; return 1;
} }
da_haproxy_checkinst(); da_haproxy_checkinst();
i = smp->data.u.str.data > sizeof(useragentbuf) ? sizeof(useragentbuf) : smp->data.u.str.data; i = smp->data.u.str.data > sizeof(useragentbuf) - 1 ? sizeof(useragentbuf) - 1 : smp->data.u.str.data;
memcpy(useragentbuf, smp->data.u.str.area, i - 1); memcpy(useragentbuf, smp->data.u.str.area, i);
useragentbuf[i - 1] = 0; useragentbuf[i] = 0;
useragent = (const char *)useragentbuf;
status = da_search(&global_deviceatlas.atlas, &devinfo, status = da_search(&global_deviceatlas.atlas, &devinfo,
global_deviceatlas.useragentid, useragent, 0); global_deviceatlas.useragentid, useragentbuf, 0);
return status != DA_OK ? 0 : da_haproxy(args, smp, &devinfo); return status != DA_OK ? 0 : da_haproxy(args, smp, &devinfo);
} }
#define DA_MAX_HEADERS 24 #define DA_MAX_HEADERS 32
static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const char *kw, void *private) static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const char *kw, void *private)
{ {
@ -403,10 +449,10 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
struct channel *chn; struct channel *chn;
struct htx *htx; struct htx *htx;
struct htx_blk *blk; struct htx_blk *blk;
char vbuf[DA_MAX_HEADERS][1024] = {{ 0 }}; char vbuf[DA_MAX_HEADERS][1024];
int i, nbh = 0; int i, nbh = 0;
if (global_deviceatlas.daset == 0) { if (unlikely(global_deviceatlas.daset == 0)) {
return 0; return 0;
} }
@ -414,18 +460,17 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
chn = (smp->strm ? &smp->strm->req : NULL); chn = (smp->strm ? &smp->strm->req : NULL);
htx = smp_prefetch_htx(smp, chn, NULL, 1); htx = smp_prefetch_htx(smp, chn, NULL, 1);
if (!htx) if (unlikely(!htx))
return 0; return 0;
i = 0;
for (blk = htx_get_first_blk(htx); nbh < DA_MAX_HEADERS && blk; blk = htx_get_next_blk(htx, blk)) { for (blk = htx_get_first_blk(htx); nbh < DA_MAX_HEADERS && blk; blk = htx_get_next_blk(htx, blk)) {
size_t vlen; size_t vlen;
char *pval; char *pval;
da_evidence_id_t evid; da_evidence_id_t evid;
enum htx_blk_type type; enum htx_blk_type type;
struct ist n, v; struct ist n, v;
char hbuf[24] = { 0 }; char hbuf[64];
char tval[1024] = { 0 }; char tval[1024];
type = htx_get_blk_type(blk); type = htx_get_blk_type(blk);
@ -438,20 +483,18 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
continue; continue;
} }
/* The HTTP headers used by the DeviceAtlas API are not longer */ if (n.len > global_deviceatlas.maxhdrlen || n.len >= sizeof(hbuf)) {
if (n.len >= sizeof(hbuf)) {
continue; continue;
} }
memcpy(hbuf, n.ptr, n.len); memcpy(hbuf, n.ptr, n.len);
hbuf[n.len] = 0; hbuf[n.len] = 0;
pval = v.ptr;
vlen = v.len;
evid = -1; evid = -1;
i = v.len > sizeof(tval) - 1 ? sizeof(tval) - 1 : v.len; i = v.len > sizeof(tval) - 1 ? sizeof(tval) - 1 : v.len;
memcpy(tval, v.ptr, i); memcpy(tval, v.ptr, i);
tval[i] = 0; tval[i] = 0;
pval = tval; pval = tval;
vlen = i;
if (strcasecmp(hbuf, "Accept-Language") == 0) { if (strcasecmp(hbuf, "Accept-Language") == 0) {
evid = da_atlas_accept_language_evidence_id(&global_deviceatlas.atlas); evid = da_atlas_accept_language_evidence_id(&global_deviceatlas.atlas);
@ -469,7 +512,7 @@ static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const ch
continue; continue;
} }
vlen -= global_deviceatlas.cookienamelen - 1; vlen = pl;
pval = p; pval = p;
evid = da_atlas_clientprop_evidence_id(&global_deviceatlas.atlas); evid = da_atlas_clientprop_evidence_id(&global_deviceatlas.atlas);
} else { } else {

View file

@ -141,6 +141,11 @@ enum {
DA_INITIAL_MEMORY_ESTIMATE = 1024 * 1024 * 14 DA_INITIAL_MEMORY_ESTIMATE = 1024 * 1024 * 14
}; };
struct header_evidence_entry {
const char *name;
da_evidence_id_t id;
};
struct da_config { struct da_config {
unsigned int cache_size; unsigned int cache_size;
unsigned int __reserved[15]; /* enough reserved keywords for future use */ unsigned int __reserved[15]; /* enough reserved keywords for future use */

View file

@ -36,6 +36,7 @@
#include <haproxy/stats.h> #include <haproxy/stats.h>
#include <haproxy/stconn.h> #include <haproxy/stconn.h>
#include <haproxy/stream.h> #include <haproxy/stream.h>
#include <haproxy/stress.h>
#include <haproxy/task.h> #include <haproxy/task.h>
#include <haproxy/tools.h> #include <haproxy/tools.h>
#include <haproxy/version.h> #include <haproxy/version.h>
@ -82,6 +83,8 @@ struct promex_ctx {
unsigned field_num; /* current field number (ST_I_PX_* etc) */ unsigned field_num; /* current field number (ST_I_PX_* etc) */
unsigned mod_field_num; /* first field number of the current module (ST_I_PX_* etc) */ unsigned mod_field_num; /* first field number of the current module (ST_I_PX_* etc) */
int obj_state; /* current state among PROMEX_{FRONT|BACK|SRV|LI}_STATE_* */ int obj_state; /* current state among PROMEX_{FRONT|BACK|SRV|LI}_STATE_* */
struct watcher px_watch; /* watcher to automatically update next pointer */
struct watcher srv_watch; /* watcher to automatically update next pointer */
struct list modules; /* list of promex modules to export */ struct list modules; /* list of promex modules to export */
struct eb_root filters; /* list of filters to apply on metrics name */ struct eb_root filters; /* list of filters to apply on metrics name */
}; };
@ -346,6 +349,10 @@ static int promex_dump_ts(struct appctx *appctx, struct ist prefix,
istcat(&n, prefix, PROMEX_MAX_NAME_LEN); istcat(&n, prefix, PROMEX_MAX_NAME_LEN);
istcat(&n, name, PROMEX_MAX_NAME_LEN); istcat(&n, name, PROMEX_MAX_NAME_LEN);
/* In stress mode, force yielding on each metric. */
if (STRESS_RUN1(istlen(*out), 0))
goto full;
if ((ctx->flags & PROMEX_FL_METRIC_HDR) && if ((ctx->flags & PROMEX_FL_METRIC_HDR) &&
!promex_dump_ts_header(n, desc, type, out, max)) !promex_dump_ts_header(n, desc, type, out, max))
goto full; goto full;
@ -625,8 +632,6 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
} }
list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) { list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
void *counters;
if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_FE)) if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_FE))
continue; continue;
@ -663,8 +668,7 @@ static int promex_dump_front_metrics(struct appctx *appctx, struct htx *htx)
if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_FE)) if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_FE))
goto next_px2; goto next_px2;
counters = EXTRA_COUNTERS_GET(px->extra_counters_fe, mod); if (!mod->fill_stats(mod, px->extra_counters_fe, stats + ctx->field_num, &ctx->mod_field_num))
if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num))
return -1; return -1;
val = stats[ctx->field_num + ctx->mod_field_num]; val = stats[ctx->field_num + ctx->mod_field_num];
@ -816,8 +820,6 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
} }
list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) { list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
void *counters;
if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_LI)) if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_LI))
continue; continue;
@ -863,8 +865,7 @@ static int promex_dump_listener_metrics(struct appctx *appctx, struct htx *htx)
labels[lb_idx+1].name = ist("mod"); labels[lb_idx+1].name = ist("mod");
labels[lb_idx+1].value = ist2(mod->name, strlen(mod->name)); labels[lb_idx+1].value = ist2(mod->name, strlen(mod->name));
counters = EXTRA_COUNTERS_GET(li->extra_counters, mod); if (!mod->fill_stats(mod, li->extra_counters, stats + ctx->field_num, &ctx->mod_field_num))
if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num))
return -1; return -1;
val = stats[ctx->field_num + ctx->mod_field_num]; val = stats[ctx->field_num + ctx->mod_field_num];
@ -940,9 +941,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
if (promex_filter_metric(appctx, prefix, name)) if (promex_filter_metric(appctx, prefix, name))
continue; continue;
if (!px)
px = proxies_list;
while (px) { while (px) {
struct promex_label labels[PROMEX_MAX_LABELS-1] = {}; struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
unsigned int srv_state_count[PROMEX_SRV_STATE_COUNT] = { 0 }; unsigned int srv_state_count[PROMEX_SRV_STATE_COUNT] = { 0 };
@ -1097,9 +1095,16 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
&val, labels, &out, max)) &val, labels, &out, max))
goto full; goto full;
next_px: next_px:
px = px->next; px = watcher_next(&ctx->px_watch, px->next);
} }
watcher_detach(&ctx->px_watch);
ctx->flags |= PROMEX_FL_METRIC_HDR; ctx->flags |= PROMEX_FL_METRIC_HDR;
/* Prepare a new iteration for the next stat column.
* Update ctx.p[0] via watcher.
*/
watcher_attach(&ctx->px_watch, proxies_list);
px = proxies_list;
} }
/* Skip extra counters */ /* Skip extra counters */
@ -1112,8 +1117,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
} }
list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) { list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
void *counters;
if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_BE)) if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_BE))
continue; continue;
@ -1124,9 +1127,6 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
if (promex_filter_metric(appctx, prefix, name)) if (promex_filter_metric(appctx, prefix, name))
continue; continue;
if (!px)
px = proxies_list;
while (px) { while (px) {
struct promex_label labels[PROMEX_MAX_LABELS-1] = {}; struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
struct promex_metric metric; struct promex_metric metric;
@ -1150,8 +1150,7 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE)) if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE))
goto next_px2; goto next_px2;
counters = EXTRA_COUNTERS_GET(px->extra_counters_be, mod); if (!mod->fill_stats(mod, px->extra_counters_be, stats + ctx->field_num, &ctx->mod_field_num))
if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num))
return -1; return -1;
val = stats[ctx->field_num + ctx->mod_field_num]; val = stats[ctx->field_num + ctx->mod_field_num];
@ -1162,25 +1161,39 @@ static int promex_dump_back_metrics(struct appctx *appctx, struct htx *htx)
goto full; goto full;
next_px2: next_px2:
px = px->next; px = watcher_next(&ctx->px_watch, px->next);
} }
watcher_detach(&ctx->px_watch);
ctx->flags |= PROMEX_FL_METRIC_HDR; ctx->flags |= PROMEX_FL_METRIC_HDR;
/* Prepare a new iteration for the next stat column.
* Update ctx.p[0] via watcher.
*/
watcher_attach(&ctx->px_watch, proxies_list);
px = proxies_list;
} }
ctx->field_num += mod->stats_count; ctx->field_num += mod->stats_count;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
} }
px = NULL;
mod = NULL;
end: end:
if (ret) {
watcher_detach(&ctx->px_watch);
mod = NULL;
}
if (out.len) { if (out.len) {
if (!htx_add_data_atonce(htx, out)) if (!htx_add_data_atonce(htx, out)) {
watcher_detach(&ctx->px_watch);
return -1; /* Unexpected and unrecoverable error */ return -1; /* Unexpected and unrecoverable error */
} }
/* Save pointers (0=current proxy, 1=current stats module) of the current context */ }
ctx->p[0] = px;
/* Save pointers of the current context for dump resumption :
* 0=current proxy, 1=current stats module
* Note that p[0] is already automatically updated via px_watch.
*/
ctx->p[1] = mod; ctx->p[1] = mod;
return ret; return ret;
full: full:
@ -1222,9 +1235,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if (promex_filter_metric(appctx, prefix, name)) if (promex_filter_metric(appctx, prefix, name))
continue; continue;
if (!px)
px = proxies_list;
while (px) { while (px) {
struct promex_label labels[PROMEX_MAX_LABELS-1] = {}; struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
enum promex_mt_type type; enum promex_mt_type type;
@ -1244,15 +1254,12 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE)) if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE))
goto next_px; goto next_px;
if (!sv)
sv = px->srv;
while (sv) { while (sv) {
labels[lb_idx].name = ist("server"); labels[lb_idx].name = ist("server");
labels[lb_idx].value = ist2(sv->id, strlen(sv->id)); labels[lb_idx].value = ist2(sv->id, strlen(sv->id));
if (!stats_fill_sv_line(px, sv, 0, stats, ST_I_PX_MAX, &(ctx->field_num))) if (!stats_fill_sv_line(px, sv, 0, stats, ST_I_PX_MAX, &(ctx->field_num)))
return -1; goto error;
if ((ctx->flags & PROMEX_FL_NO_MAINT_SRV) && (sv->cur_admin & SRV_ADMF_MAINT)) if ((ctx->flags & PROMEX_FL_NO_MAINT_SRV) && (sv->cur_admin & SRV_ADMF_MAINT))
goto next_sv; goto next_sv;
@ -1397,13 +1404,30 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
&val, labels, &out, max)) &val, labels, &out, max))
goto full; goto full;
next_sv: next_sv:
sv = sv->next; sv = watcher_next(&ctx->srv_watch, sv->next);
} }
next_px: next_px:
px = px->next; watcher_detach(&ctx->srv_watch);
px = watcher_next(&ctx->px_watch, px->next);
if (px) {
/* Update ctx.p[1] via watcher. */
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
} }
}
watcher_detach(&ctx->px_watch);
ctx->flags |= PROMEX_FL_METRIC_HDR; ctx->flags |= PROMEX_FL_METRIC_HDR;
/* Prepare a new iteration for the next stat column.
* Update ctx.p[0]/p[1] via px_watch/srv_watch.
*/
watcher_attach(&ctx->px_watch, proxies_list);
px = proxies_list;
if (likely(px)) {
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
}
} }
/* Skip extra counters */ /* Skip extra counters */
@ -1416,8 +1440,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
} }
list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) { list_for_each_entry_from(mod, &stats_module_list[STATS_DOMAIN_PROXY], list) {
void *counters;
if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_SRV)) if (!(stats_px_get_cap(mod->domain_flags) & STATS_PX_CAP_SRV))
continue; continue;
@ -1428,9 +1450,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if (promex_filter_metric(appctx, prefix, name)) if (promex_filter_metric(appctx, prefix, name))
continue; continue;
if (!px)
px = proxies_list;
while (px) { while (px) {
struct promex_label labels[PROMEX_MAX_LABELS-1] = {}; struct promex_label labels[PROMEX_MAX_LABELS-1] = {};
struct promex_metric metric; struct promex_metric metric;
@ -1451,9 +1470,6 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE)) if ((px->flags & PR_FL_DISABLED) || px->uuid <= 0 || !(px->cap & PR_CAP_BE))
goto next_px2; goto next_px2;
if (!sv)
sv = px->srv;
while (sv) { while (sv) {
labels[lb_idx].name = ist("server"); labels[lb_idx].name = ist("server");
labels[lb_idx].value = ist2(sv->id, strlen(sv->id)); labels[lb_idx].value = ist2(sv->id, strlen(sv->id));
@ -1465,9 +1481,8 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
goto next_sv2; goto next_sv2;
counters = EXTRA_COUNTERS_GET(sv->extra_counters, mod); if (!mod->fill_stats(mod, sv->extra_counters, stats + ctx->field_num, &ctx->mod_field_num))
if (!mod->fill_stats(counters, stats + ctx->field_num, &ctx->mod_field_num)) goto error;
return -1;
val = stats[ctx->field_num + ctx->mod_field_num]; val = stats[ctx->field_num + ctx->mod_field_num];
metric.type = ((val.type == FN_GAUGE) ? PROMEX_MT_GAUGE : PROMEX_MT_COUNTER); metric.type = ((val.type == FN_GAUGE) ? PROMEX_MT_GAUGE : PROMEX_MT_COUNTER);
@ -1477,42 +1492,62 @@ static int promex_dump_srv_metrics(struct appctx *appctx, struct htx *htx)
goto full; goto full;
next_sv2: next_sv2:
sv = sv->next; sv = watcher_next(&ctx->srv_watch, sv->next);
} }
next_px2: next_px2:
px = px->next; watcher_detach(&ctx->srv_watch);
px = watcher_next(&ctx->px_watch, px->next);
if (px) {
/* Update ctx.p[1] via watcher. */
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
} }
}
watcher_detach(&ctx->px_watch);
ctx->flags |= PROMEX_FL_METRIC_HDR; ctx->flags |= PROMEX_FL_METRIC_HDR;
/* Prepare a new iteration for the next stat column.
* Update ctx.p[0]/p[1] via px_watch/srv_watch.
*/
watcher_attach(&ctx->px_watch, proxies_list);
px = proxies_list;
if (likely(px)) {
watcher_attach(&ctx->srv_watch, px->srv);
sv = ctx->p[1];
}
} }
ctx->field_num += mod->stats_count; ctx->field_num += mod->stats_count;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
} }
px = NULL;
sv = NULL;
mod = NULL;
end: end:
if (ret) {
watcher_detach(&ctx->px_watch);
watcher_detach(&ctx->srv_watch);
mod = NULL;
}
if (out.len) { if (out.len) {
if (!htx_add_data_atonce(htx, out)) if (!htx_add_data_atonce(htx, out))
return -1; /* Unexpected and unrecoverable error */ return -1; /* Unexpected and unrecoverable error */
} }
/* Decrement server refcount if it was saved through ctx.p[1]. */ /* Save pointers of the current context for dump resumption :
srv_drop(ctx->p[1]); * 0=current proxy, 1=current server, 2=current stats module
if (sv) * Note that p[0]/p[1] are already automatically updated via px_watch/srv_watch.
srv_take(sv); */
/* Save pointers (0=current proxy, 1=current server, 2=current stats module) of the current context */
ctx->p[0] = px;
ctx->p[1] = sv;
ctx->p[2] = mod; ctx->p[2] = mod;
return ret; return ret;
full: full:
ret = 0; ret = 0;
goto end; goto end;
error:
watcher_detach(&ctx->px_watch);
watcher_detach(&ctx->srv_watch);
return -1;
} }
/* Dump metrics of module <mod>. It returns 1 on success, 0 if <out> is full and /* Dump metrics of module <mod>. It returns 1 on success, 0 if <out> is full and
@ -1733,6 +1768,11 @@ static int promex_dump_metrics(struct appctx *appctx, struct htx *htx)
ctx->field_num = ST_I_PX_PXNAME; ctx->field_num = ST_I_PX_PXNAME;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
appctx->st1 = PROMEX_DUMPER_BACK; appctx->st1 = PROMEX_DUMPER_BACK;
if (ctx->flags & PROMEX_FL_SCOPE_BACK) {
/* Update ctx.p[0] via watcher. */
watcher_attach(&ctx->px_watch, proxies_list);
}
__fallthrough; __fallthrough;
case PROMEX_DUMPER_BACK: case PROMEX_DUMPER_BACK:
@ -1750,6 +1790,15 @@ static int promex_dump_metrics(struct appctx *appctx, struct htx *htx)
ctx->field_num = ST_I_PX_PXNAME; ctx->field_num = ST_I_PX_PXNAME;
ctx->mod_field_num = 0; ctx->mod_field_num = 0;
appctx->st1 = PROMEX_DUMPER_SRV; appctx->st1 = PROMEX_DUMPER_SRV;
if (ctx->flags & PROMEX_FL_SCOPE_SERVER) {
/* Update ctx.p[0] via watcher. */
watcher_attach(&ctx->px_watch, proxies_list);
if (likely(proxies_list)) {
/* Update ctx.p[1] via watcher. */
watcher_attach(&ctx->srv_watch, proxies_list->srv);
}
}
__fallthrough; __fallthrough;
case PROMEX_DUMPER_SRV: case PROMEX_DUMPER_SRV:
@ -2027,6 +2076,8 @@ static int promex_appctx_init(struct appctx *appctx)
LIST_INIT(&ctx->modules); LIST_INIT(&ctx->modules);
ctx->filters = EB_ROOT; ctx->filters = EB_ROOT;
appctx->st0 = PROMEX_ST_INIT; appctx->st0 = PROMEX_ST_INIT;
watcher_init(&ctx->px_watch, &ctx->p[0], offsetof(struct proxy, watcher_list));
watcher_init(&ctx->srv_watch, &ctx->p[1], offsetof(struct server, watcher_list));
return 0; return 0;
} }
@ -2040,11 +2091,14 @@ static void promex_appctx_release(struct appctx *appctx)
struct promex_metric_filter *flt; struct promex_metric_filter *flt;
struct eb32_node *node, *next; struct eb32_node *node, *next;
if (appctx->st1 == PROMEX_DUMPER_SRV) { if (appctx->st1 == PROMEX_DUMPER_BACK ||
struct server *srv = objt_server(ctx->p[1]); appctx->st1 == PROMEX_DUMPER_SRV) {
srv_drop(srv); watcher_detach(&ctx->px_watch);
} }
if (appctx->st1 == PROMEX_DUMPER_SRV)
watcher_detach(&ctx->srv_watch);
list_for_each_entry_safe(ref, back, &ctx->modules, list) { list_for_each_entry_safe(ref, back, &ctx->modules, list) {
LIST_DELETE(&ref->list); LIST_DELETE(&ref->list);
pool_free(pool_head_promex_mod_ref, ref); pool_free(pool_head_promex_mod_ref, ref);


@ -6,9 +6,9 @@ Wants=network-online.target
[Service] [Service]
EnvironmentFile=-/etc/default/haproxy EnvironmentFile=-/etc/default/haproxy
EnvironmentFile=-/etc/sysconfig/haproxy EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "EXTRAOPTS=-S /run/haproxy-master.sock" Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "CFGDIR=/etc/haproxy/conf.d" "EXTRAOPTS=-S /run/haproxy-master.sock"
ExecStart=@SBINDIR@/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS ExecStart=@SBINDIR@/haproxy -Ws -f $CONFIG -f $CFGDIR -p $PIDFILE $EXTRAOPTS
ExecReload=@SBINDIR@/haproxy -Ws -f $CONFIG -c $EXTRAOPTS ExecReload=@SBINDIR@/haproxy -Ws -f $CONFIG -f $CFGDIR -c $EXTRAOPTS
ExecReload=/bin/kill -USR2 $MAINPID ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed KillMode=mixed
Restart=always Restart=always

dev/gdb/pm-from-core.c (new file)

@ -0,0 +1,141 @@
/*
* Find the post-mortem offset from a core dump
*
* Copyright (C) 2026 Willy Tarreau <w@1wt.eu>
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/* Note: builds with no option under glibc, and can be built as a minimal
* uploadable static executable using nolibc as well:
gcc -o pm-from-core -nostdinc -nostdlib -s -Os -static -fno-ident \
-fno-exceptions -fno-asynchronous-unwind-tables -fno-unwind-tables \
-Wl,--gc-sections,--orphan-handling=discard,-znoseparate-code \
-I /path/to/nolibc-sysroot/include pm-from-core.c
*/
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/stat.h>
#include <elf.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#if defined(__GLIBC__)
# define my_memmem memmem
#else
void *my_memmem(const void *haystack, size_t haystacklen,
const void *needle, size_t needlelen)
{
while (haystacklen >= needlelen) {
if (!memcmp(haystack, needle, needlelen))
return (void*)haystack;
haystack++;
haystacklen--;
}
return NULL;
}
#endif
#define MAGIC "POST-MORTEM STARTS HERE+7654321\0"
int main(int argc, char **argv)
{
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
struct stat st;
uint8_t *mem;
int i, fd;
if (argc < 2) {
printf("Usage: %s <core_file>\n", argv[0]);
exit(1);
}
fd = open(argv[1], O_RDONLY);
/* Let's just map the core dump as an ELF header */
fstat(fd, &st);
mem = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
if (mem == MAP_FAILED) {
perror("mmap()");
exit(1);
}
/* get the program headers */
ehdr = (Elf64_Ehdr *)mem;
/* check that it's really a core. Should be "\x7fELF" */
if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
fprintf(stderr, "ELF magic not found.\n");
exit(1);
}
if (ehdr->e_ident[EI_CLASS] != ELFCLASS64) {
fprintf(stderr, "Only 64-bit ELF supported.\n");
exit(1);
}
if (ehdr->e_type != ET_CORE) {
fprintf(stderr, "ELF type %d, not a core dump.\n", ehdr->e_type);
exit(1);
}
/* OK we can safely go with program headers */
phdr = (Elf64_Phdr *)(mem + ehdr->e_phoff);
for (i = 0; i < ehdr->e_phnum; i++) {
uint64_t size = phdr[i].p_filesz;
uint64_t offset = phdr[i].p_offset;
uint64_t vaddr = phdr[i].p_vaddr;
uint64_t found_ofs;
uint8_t *found;
if (phdr[i].p_type != PT_LOAD)
continue;
//printf("Scanning segment %d...\n", ehdr->e_phnum);
//printf("\r%-5d: off=%lx va=%lx sz=%lx ", i, (long)offset, (long)vaddr, (long)size);
if (!size)
continue;
if (size >= 1048576) // don't scan large segments
continue;
found = my_memmem(mem + offset, size, MAGIC, sizeof(MAGIC) - 1);
if (!found)
continue;
found_ofs = found - (mem + offset);
printf("Found post-mortem magic in segment %d:\n", i);
printf(" Core File Offset: 0x%lx (0x%lx + 0x%lx)\n", offset + found_ofs, offset, found_ofs);
printf(" Runtime VAddr: 0x%lx (0x%lx + 0x%lx)\n", vaddr + found_ofs, vaddr, found_ofs);
printf(" Segment Size: 0x%lx\n", size);
printf("\nIn gdb, copy-paste this line:\n\n pm_init 0x%lx\n\n", vaddr + found_ofs);
return 0;
}
//printf("\r%75s\n", "\r");
printf("post-mortem magic not found\n");
return 1;
}


@ -14,8 +14,8 @@ define pools_dump
set $idx=$idx + 1 set $idx=$idx + 1
end end
set $mem = $total * $e->size set $mem = (unsigned long)$total * $e->size
printf "list=%#lx pool_head=%p name=%s size=%u alloc=%u used=%u mem=%u\n", $p, $e, $e->name, $e->size, $total, $used, $mem printf "list=%#lx pool_head=%p name=%s size=%u alloc=%u used=%u mem=%lu\n", $p, $e, $e->name, $e->size, $total, $used, $mem
set $p = *(void **)$p set $p = *(void **)$p
end end
end end


@ -30,8 +30,8 @@ static const char *tevt_fd_types[16] = {
}; };
static const char *tevt_hs_types[16] = { static const char *tevt_hs_types[16] = {
[ 0] = "-", [ 1] = "-", [ 2] = "-", [ 3] = "rcv_err", [ 0] = "-", [ 1] = "-", [ 2] = "-", [ 3] = "-",
[ 4] = "snd_err", [ 5] = "-", [ 6] = "-", [ 7] = "-", [ 4] = "snd_err", [ 5] = "truncated_shutr", [ 6] = "truncated_rcv_err", [ 7] = "-",
[ 8] = "-", [ 9] = "-", [10] = "-", [11] = "-", [ 8] = "-", [ 9] = "-", [10] = "-", [11] = "-",
[12] = "-", [13] = "-", [14] = "-", [15] = "-", [12] = "-", [13] = "-", [14] = "-", [15] = "-",
}; };


@ -3,7 +3,7 @@
A number of contributors are often embarrassed with coding style issues, they A number of contributors are often embarrassed with coding style issues, they
don't always know if they're doing it right, especially since the coding style don't always know if they're doing it right, especially since the coding style
has elvoved along the years. What is explained here is not necessarily what is has evolved along the years. What is explained here is not necessarily what is
applied in the code, but new code should as much as possible conform to this applied in the code, but new code should as much as possible conform to this
style. Coding style fixes happen when code is replaced. It is useless to send style. Coding style fixes happen when code is replaced. It is useless to send
patches to fix coding style only, they will be rejected, unless they belong to patches to fix coding style only, they will be rejected, unless they belong to


@ -3,7 +3,7 @@
Configuration Manual Configuration Manual
---------------------- ----------------------
version 3.4 version 3.4
2026/01/07 2026/02/19
This document covers the configuration language as implemented in the version This document covers the configuration language as implemented in the version
@ -1869,8 +1869,10 @@ The following keywords are supported in the "global" section :
- tune.buffers.limit - tune.buffers.limit
- tune.buffers.reserve - tune.buffers.reserve
- tune.bufsize - tune.bufsize
- tune.bufsize.large
- tune.bufsize.small - tune.bufsize.small
- tune.comp.maxlevel - tune.comp.maxlevel
- tune.defaults.purge
- tune.disable-fast-forward - tune.disable-fast-forward
- tune.disable-zero-copy-forwarding - tune.disable-zero-copy-forwarding
- tune.epoll.mask-events - tune.epoll.mask-events
@ -1982,6 +1984,7 @@ The following keywords are supported in the "global" section :
- tune.ssl.cachesize - tune.ssl.cachesize
- tune.ssl.capture-buffer-size - tune.ssl.capture-buffer-size
- tune.ssl.capture-cipherlist-size (deprecated) - tune.ssl.capture-cipherlist-size (deprecated)
- tune.ssl.certificate-compression
- tune.ssl.default-dh-param - tune.ssl.default-dh-param
- tune.ssl.force-private-cache - tune.ssl.force-private-cache
- tune.ssl.hard-maxrecord - tune.ssl.hard-maxrecord
@ -2753,7 +2756,7 @@ h2-workaround-bogus-websocket-clients
automatically downgrade to http/1.1 for the websocket tunnel, specify h2 automatically downgrade to http/1.1 for the websocket tunnel, specify h2
support on the bind line using "alpn" without an explicit "proto" keyword. If support on the bind line using "alpn" without an explicit "proto" keyword. If
this statement was previously activated, this can be disabled by prefixing this statement was previously activated, this can be disabled by prefixing
the keyword with "no'. the keyword with "no".
hard-stop-after <time> hard-stop-after <time>
Defines the maximum time allowed to perform a clean soft-stop. Defines the maximum time allowed to perform a clean soft-stop.
@ -4003,7 +4006,7 @@ profiling.memory { on | off }
use in production. The same may be achieved at run time on the CLI using the use in production. The same may be achieved at run time on the CLI using the
"set profiling memory" command, please consult the management manual. "set profiling memory" command, please consult the management manual.
profiling.tasks { auto | on | off } profiling.tasks { auto | on | off | lock | no-lock | memory | no-memory }*
Enables ('on') or disables ('off') per-task CPU profiling. When set to 'auto' Enables ('on') or disables ('off') per-task CPU profiling. When set to 'auto'
the profiling automatically turns on a thread when it starts to suffer from the profiling automatically turns on a thread when it starts to suffer from
an average latency of 1000 microseconds or higher as reported in the an average latency of 1000 microseconds or higher as reported in the
@ -4014,6 +4017,18 @@ profiling.tasks { auto | on | off }
systems, containers, or virtual machines, or when the system swaps (which systems, containers, or virtual machines, or when the system swaps (which
must absolutely never happen on a load balancer). must absolutely never happen on a load balancer).
When task profiling is enabled, HAProxy can also collect the time each task
spends with a lock held or waiting for a lock, as well as the time spent
waiting for a memory allocation to succeed in case of a pool cache miss. This
can sometimes help understand certain causes of latency. For this, the extra
keywords "lock" (to enable lock time collection), "no-lock" (to disable it),
"memory" (to enable memory allocation time collection) or "no-memory" (to
disable it) may additionally be passed. By default they are not enabled since
they can have a non-negligible CPU impact on highly loaded systems (3-10%).
Note that the overhead is only taken when profiling is effectively running,
so that when running in "auto" mode, it will only appear when HAProxy decides
to turn it on.
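  As a purely illustrative sketch, a global section combining automatic
  profiling with lock time collection could look as follows (the exact
  combination of keywords is inferred from the syntax above and is not a
  recommendation) :

    Example:
        global
            profiling.tasks auto lock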
CPU profiling per task can be very convenient to report where the time is CPU profiling per task can be very convenient to report where the time is
spent and which requests have what effect on which other request. Enabling spent and which requests have what effect on which other request. Enabling
it will typically affect the overall's performance by less than 1%, thus it it will typically affect the overall's performance by less than 1%, thus it
@ -4104,6 +4119,17 @@ tune.bufsize <size>
value set using this parameter will automatically be rounded up to the next value set using this parameter will automatically be rounded up to the next
multiple of 8 on 32-bit machines and 16 on 64-bit machines. multiple of 8 on 32-bit machines and 16 on 64-bit machines.
tune.bufsize.large <size>
Sets the size in bytes for large buffers. By default, support for large
buffers is not enabled; it must explicitly be enabled by setting this value.
These buffers are designed to be used in some specific contexts where more
data must be buffered without changing the size of regular buffers. The
large buffers are never used implicitly.
Note that when large buffers are configured, three special large buffers will
be allocated for each thread during startup for internal usage.
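  As a purely illustrative sketch, enabling large buffers from the global
  section could look like this (the value is arbitrary and only shows the
  syntax) :

    Example:
        global
            tune.bufsize.large 65536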
tune.bufsize.small <size> tune.bufsize.small <size>
Sets the size in bytes for small buffers. The defaults value is 1024. Sets the size in bytes for small buffers. The defaults value is 1024.
@ -4121,6 +4147,19 @@ tune.comp.maxlevel <number>
Each stream using compression initializes the compression algorithm with Each stream using compression initializes the compression algorithm with
this value. The default value is 1. this value. The default value is 1.
tune.defaults.purge
To support dynamic backends, all named defaults sections are now kept in
memory after parsing. This is necessary because a backend added at runtime
must be based on a named defaults section for its configuration.
This may consume a significant amount of memory when the number of defaults
sections is large. In this case, and if the dynamic backend feature is not
needed, this option may be used to force the deletion of defaults sections
after parsing. Referenced defaults sections are still kept when they contain
settings which cannot be copied by their referencing proxies. For example,
this is the case if the defaults section defines TCP/HTTP rules or a
tcpcheck ruleset.
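  A minimal illustrative sketch, assuming the keyword takes no argument as
  suggested by its syntax above :

    Example:
        global
            tune.defaults.purge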
tune.disable-fast-forward tune.disable-fast-forward
Disables the data fast-forwarding. It is a mechanism to optimize the data Disables the data fast-forwarding. It is a mechanism to optimize the data
forwarding by passing data directly from a side to the other one without forwarding by passing data directly from a side to the other one without
@ -4439,6 +4478,16 @@ tune.h2.initial-window-size <number>
specific settings tune.h2.fe.initial-window-size and specific settings tune.h2.fe.initial-window-size and
tune.h2.be.initial-window-size. tune.h2.be.initial-window-size.
tune.h2.log-errors { none | connection | stream }
Sets the level of errors in the H2 demultiplexer that will generate a log.
The default is "stream", which means that any decoding error encountered in
the demultiplexer will lead to the emission of a log. The "connection" value
indicates that only errors which end up invalidating the connection will
produce a log. Finally, "none" indicates that no decoding error will produce
any log. It is recommended to set at least "connection" in order to detect
protocol anomalies, even if this means temporarily switching to "none" during
difficult periods.
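  As a purely illustrative sketch, restricting H2 error logs to
  connection-level errors from the global section :

    Example:
        global
            tune.h2.log-errors connection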
tune.h2.max-concurrent-streams <number> tune.h2.max-concurrent-streams <number>
Sets the default HTTP/2 maximum number of concurrent streams per connection Sets the default HTTP/2 maximum number of concurrent streams per connection
(i.e. the number of outstanding requests on a single connection). This value (i.e. the number of outstanding requests on a single connection). This value
@ -4512,7 +4561,11 @@ tune.idle-pool.shared { on | off }
disabling this option without setting a conservative value on "pool-low-conn" disabling this option without setting a conservative value on "pool-low-conn"
for all servers relying on connection reuse to achieve a high performance for all servers relying on connection reuse to achieve a high performance
level, otherwise connections might be closed very often as the thread count level, otherwise connections might be closed very often as the thread count
increases. increases. Note that in any case, connections are only shared between threads
of the same thread group. This means that systems with many NUMA nodes may
show slightly more persistent connections while machines with unified caches
and many CPU cores per node may experience higher CPU usage. In the latter
case, the "max-thread-per-group" tunable may be used to improve the behavior.
tune.idletimer <timeout> tune.idletimer <timeout>
Sets the duration after which HAProxy will consider that an empty buffer is Sets the duration after which HAProxy will consider that an empty buffer is
@ -5296,6 +5349,22 @@ tune.ssl.capture-cipherlist-size <number> (deprecated)
formats. If the value is 0 (default value) the capture is disabled, formats. If the value is 0 (default value) the capture is disabled,
otherwise a buffer is allocated for each SSL/TLS connection. otherwise a buffer is allocated for each SSL/TLS connection.
tune.ssl.certificate-compression { auto | off }
This setting configures support for certificate compression, which is an
extension (RFC 8879) to TLS 1.3.
When set to "auto", the default value of the TLS library is used.
With "off", HAProxy tries to explicitly disable support for the feature and
will neither send nor accept compressed certificates anymore.
It applies to both the backend and frontend sides.
This keyword is supported by OpenSSL >= 3.2.0.
The default value is "auto".
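  As a purely illustrative sketch, explicitly disabling the extension from
  the global section :

    Example:
        global
            tune.ssl.certificate-compression off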
tune.ssl.default-dh-param <number> tune.ssl.default-dh-param <number>
Sets the maximum size of the Diffie-Hellman parameters used for generating Sets the maximum size of the Diffie-Hellman parameters used for generating
the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The
@ -5718,7 +5787,7 @@ It is possible to chain a TCP frontend to an HTTP backend. It is pointless if
only HTTP traffic is handled. But it may be used to handle several protocols only HTTP traffic is handled. But it may be used to handle several protocols
within the same frontend. In this case, the client's connection is first handled within the same frontend. In this case, the client's connection is first handled
as a raw tcp connection before being upgraded to HTTP. Before the upgrade, the as a raw tcp connection before being upgraded to HTTP. Before the upgrade, the
content processings are performend on raw data. Once upgraded, data is parsed content processings are performed on raw data. Once upgraded, data is parsed
and stored using an internal representation called HTX and it is no longer and stored using an internal representation called HTX and it is no longer
possible to rely on raw representation. There is no way to go back. possible to rely on raw representation. There is no way to go back.
@ -5736,7 +5805,7 @@ HTTP/2 upgrade, applicative streams are distinct and all frontend rules are
evaluated systematically on each one. And as said, the first stream, the TCP evaluated systematically on each one. And as said, the first stream, the TCP
one, is destroyed, but only after the frontend rules were evaluated. one, is destroyed, but only after the frontend rules were evaluated.
There is another importnat point to understand when HTTP processings are There is another important point to understand when HTTP processings are
performed from a TCP proxy. While HAProxy is able to parse HTTP/1 in-fly from performed from a TCP proxy. While HAProxy is able to parse HTTP/1 in-fly from
tcp-request content rules, it is not possible for HTTP/2. Only the HTTP/2 tcp-request content rules, it is not possible for HTTP/2. Only the HTTP/2
preface can be parsed. This is a huge limitation regarding the HTTP content preface can be parsed. This is a huge limitation regarding the HTTP content
@ -5820,6 +5889,7 @@ errorloc302 X X X X
errorloc303 X X X X errorloc303 X X X X
error-log-format X X X - error-log-format X X X -
force-persist - - X X force-persist - - X X
force-be-switch - X X -
filter - X X X filter - X X X
fullconn X - X X fullconn X - X X
guid - X X X guid - X X X
@ -6247,8 +6317,16 @@ balance url_param <param> [check_post]
will take away N-1 of the highest loaded servers at the will take away N-1 of the highest loaded servers at the
expense of performance. With very high values, the algorithm expense of performance. With very high values, the algorithm
will converge towards the leastconn's result but much slower. will converge towards the leastconn's result but much slower.
In addition, for large server farms with very low loads (or
perfect balance), comparing loads will often lead to a tie,
so in case of equal loads between all measured servers, their
request rates over the last second are compared, which allows
to better balance server usage over time in the same spirit
as roundrobin does, and smooth consistent hash unfairness.
The default value is 2, which generally shows very good The default value is 2, which generally shows very good
distribution and performance. This algorithm is also known as distribution and performance. For large farms with low loads
(less than a few requests per second per server), it may help
to raise it to 3 or even 4. This algorithm is also known as
the Power of Two Random Choices and is described here : the Power of Two Random Choices and is described here :
http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
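  As a purely illustrative sketch, assuming the random(<draws>) form of the
  algorithm, a large and lightly loaded farm could raise the number of draws
  (the backend name and value are arbitrary) :

    Example:
        backend big_farm
            balance random(3)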
@ -7103,6 +7181,9 @@ default_backend <backend>
used when no rule has matched. It generally is the dynamic backend which used when no rule has matched. It generally is the dynamic backend which
will catch all undetermined requests. will catch all undetermined requests.
If a backend is disabled or unpublished, default_backend rules targeting it
will be ignored and stream processing will remain on the original proxy.
Example : Example :
use_backend dynamic if url_dyn use_backend dynamic if url_dyn
@ -7146,7 +7227,11 @@ disabled
is possible to disable many instances at once by adding the "disabled" is possible to disable many instances at once by adding the "disabled"
keyword in a "defaults" section. keyword in a "defaults" section.
See also : "enabled" By default, a disabled backend cannot be selected for content-switching.
However, a portion of the traffic can ignore this when "force-be-switch" is
used.
See also : "enabled", "force-be-switch"
dispatch <address>:<port> (deprecated) dispatch <address>:<port> (deprecated)
@ -7556,6 +7641,19 @@ force-persist { if | unless } <condition>
and section 7 about ACL usage. and section 7 about ACL usage.
force-be-switch { if | unless } <condition>
Allow content switching to select a backend instance even if it is disabled
or unpublished. This rule can be used by admins to test traffic to services
prior to exposing them to the outside world.
May be used in the following contexts: tcp, http
May be used in sections: defaults | frontend | listen | backend
no | yes | yes | no
See also : "disabled"
filter <name> [param*] filter <name> [param*]
Add the filter <name> in the filter list attached to the proxy. Add the filter <name> in the filter list attached to the proxy.
@ -9219,6 +9317,9 @@ mode { tcp|http|log|spop }
processing and switching will be possible. This is the mode which processing and switching will be possible. This is the mode which
brings HAProxy most of its value. brings HAProxy most of its value.
haterm The frontend will work in haterm HTTP benchmark mode. This is
not supported by backends. See doc/haterm.txt for details.
log When used in a backend section, it will turn the backend into a log When used in a backend section, it will turn the backend into a
log backend. Such backend can be used as a log destination for log backend. Such backend can be used as a log destination for
any "log" directive by using the "backend@<name>" syntax. Log any "log" directive by using the "backend@<name>" syntax. Log
@ -10765,7 +10866,7 @@ no option logasap
logging. logging.
option mysql-check [ user <username> [ { post-41 | pre-41 } ] ] option mysql-check [ user <username> [ { post-41 | pre-41 | post-80 } ] ]
Use MySQL health checks for server testing Use MySQL health checks for server testing
May be used in the following contexts: tcp May be used in the following contexts: tcp
@ -10778,6 +10879,12 @@ option mysql-check [ user <username> [ { post-41 | pre-41 } ] ]
server. server.
post-41 Send post v4.1 client compatible checks (the default) post-41 Send post v4.1 client compatible checks (the default)
pre-41 Send pre v4.1 client compatible checks pre-41 Send pre v4.1 client compatible checks
post-80 Send post v8.0 client compatible checks with CLIENT_PLUGIN_AUTH
capability set and mysql_native_password as the authentication
plugin. Use this option when connecting to MySQL 8.0+ servers
where the health check user is created with mysql_native_password
authentication. Example:
CREATE USER 'haproxy'@'%' IDENTIFIED WITH mysql_native_password BY '';
If you specify a username, the check consists of sending two MySQL packet, If you specify a username, the check consists of sending two MySQL packet,
one Client Authentication packet, and one QUIT packet, to correctly close one Client Authentication packet, and one QUIT packet, to correctly close
@ -14752,14 +14859,17 @@ use_backend <backend> [{if | unless} <condition>]
There may be as many "use_backend" rules as desired. All of these rules are There may be as many "use_backend" rules as desired. All of these rules are
evaluated in their declaration order, and the first one which matches will evaluated in their declaration order, and the first one which matches will
assign the backend. assign the backend. This is even the case if the backend is considered as
down. However, if a matching rule targets a disabled or unpublished backend,
it is ignored instead and rule evaluation continues.
In the first form, the backend will be used if the condition is met. In the In the first form, the backend will be used if the condition is met. In the
second form, the backend will be used if the condition is not met. If no second form, the backend will be used if the condition is not met. If no
condition is valid, the backend defined with "default_backend" will be used. condition is valid, the backend defined with "default_backend" will be used
If no default backend is defined, either the servers in the same section are unless it is disabled or unpublished. If no default backend is available,
used (in case of a "listen" section) or, in case of a frontend, no server is either the servers in the same section are used (in case of a "listen"
used and a 503 service unavailable response is returned. section) or, in case of a frontend, no server is used and a 503 service
unavailable response is returned.
Note that it is possible to switch from a TCP frontend to an HTTP backend. In Note that it is possible to switch from a TCP frontend to an HTTP backend. In
this case, either the frontend has already checked that the protocol is HTTP, this case, either the frontend has already checked that the protocol is HTTP,
@ -16260,20 +16370,24 @@ set-status <status> [reason <str>]
http-response set-status 503 reason "Slow Down". http-response set-status 503 reason "Slow Down".
set-timeout { client | server | tunnel } { <timeout> | <expr> } set-timeout { client | connect | queue | server | tarpit | tunnel } { <timeout> | <expr> }
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
- | - | - | - | - | X | X | - - | - | - | - | - | X | X | -
This action overrides the specified "client", "server" or "tunnel" timeout This action overrides the specified "client", "connect", "queue", "server",
for the current stream only. The timeout can be specified in milliseconds or "tarpit" or "tunnel" timeout for the current stream only. Changing one timeout
with any other unit if the number is suffixed by the unit as explained at the does not influence any other timeouts, even if they are inherited from each
top of this document. It is also possible to write an expression which must other during configuration parsing (see last example). The timeout can be
return a number interpreted as a timeout in milliseconds. specified in milliseconds or with any other unit if the number is suffixed by
the unit as explained at the top of this document. It is also possible to
write an expression which must return a number interpreted as a timeout in
milliseconds.
Note that the server/tunnel timeouts are only relevant on the backend side Note that the connect, queue, server and tunnel timeouts are only relevant on
and thus this rule is only available for the proxies with backend the backend side and thus this rule is only available for the proxies with
capabilities. Likewise, client timeout is only relevant for frontend side. backend capabilities. Likewise, client timeout is only relevant for frontend
Also the timeout value must be non-null to obtain the expected results. side. Tarpit timeout is available to both sides. The timeout value must be
non-null to obtain the expected results.
Example: Example:
http-request set-timeout tunnel 5s http-request set-timeout tunnel 5s
@ -16283,6 +16397,18 @@ set-timeout { client | server | tunnel } { <timeout> | <expr> }
http-response set-timeout tunnel 5s http-response set-timeout tunnel 5s
http-response set-timeout server res.hdr(X-Refresh-Seconds),mul(1000) http-response set-timeout server res.hdr(X-Refresh-Seconds),mul(1000)
Example:
defaults
# This will set both tarpit and queue timeout to 5s as they are not
# defined
timeout connect 5s
timeout client 30s
timeout server 30s
listen foo
# This will only change the connect timeout to 10s without affecting
# queue or tarpit timeouts
http-request set-timeout connect 10s
set-tos <tos> (deprecated) set-tos <tos> (deprecated)
This is an alias for "set-fc-tos" (which should be used instead). This is an alias for "set-fc-tos" (which should be used instead).
@ -16506,7 +16632,7 @@ use-service <service-name>
http-request use-service prometheus-exporter if { path /metrics } http-request use-service prometheus-exporter if { path /metrics }
wait-for-body time <time> [ at-least <bytes> ] wait-for-body time <time> [ at-least <bytes> ] [use-large-buffer]
Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft Usable in: QUIC Ini| TCP RqCon| RqSes| RqCnt| RsCnt| HTTP Req| Res| Aft
- | - | - | - | - | X | X | - - | - | - | - | - | X | X | -
@ -16524,6 +16650,10 @@ wait-for-body time <time> [ at-least <bytes> ]
happens first, this timeout will not occur even if the full body has happens first, this timeout will not occur even if the full body has
not yet been received. not yet been received.
"use-large-buffer" option may be set to allocate a large buffer if regular
one is to small to store the message body. To be used, "tune.bufsize.large"
global option must be defined.
This action may be used as a replacement for "option http-buffer-request". This action may be used as a replacement for "option http-buffer-request".
Arguments : Arguments :
@ -16538,7 +16668,7 @@ wait-for-body time <time> [ at-least <bytes> ]
Example: Example:
http-request wait-for-body time 1s at-least 1k if METH_POST http-request wait-for-body time 1s at-least 1k if METH_POST
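   A purely illustrative variant relying on a large buffer (the sizes are
   arbitrary and "tune.bufsize.large" is assumed to be set in the global
   section) :
     http-request wait-for-body time 1s at-least 64k use-large-buffer if METH_POST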
See also : "option http-buffer-request" See also : "option http-buffer-request" and "tune.bufsize.large"
wait-for-handshake wait-for-handshake
@ -17165,7 +17295,9 @@ interface <interface>
ktls <on|off> [ EXPERIMENTAL ] ktls <on|off> [ EXPERIMENTAL ]
Enables or disables ktls for those sockets. If enabled, kTLS will be used Enables or disables ktls for those sockets. If enabled, kTLS will be used
if the kernel supports it and the cipher is compatible. This is only if the kernel supports it and the cipher is compatible. This is only
available on Linux kernel 4.17 and above. available on Linux kernel 4.17 and above. Please note that some network
drivers and/or TLS stacks might restrict kTLS usage to TLS v1.2 only. See
also "force-tlsv12".
label <label> label <label>
Sets an optional label for these sockets. It could be used group sockets by Sets an optional label for these sockets. It could be used group sockets by
@ -18268,7 +18400,10 @@ hash-key <key>
id The node keys will be derived from the server's numeric id The node keys will be derived from the server's numeric
identifier as set from "id" or which defaults to its position identifier as set from "id" or which defaults to its position
in the server list. in the server list. This is the default. Note that only the 28
lowest bits of the ID will be used (i.e. (id % 268435456)), so
it is better to only use values between 1 and this limit in order to
avoid overlaps.
addr The node keys will be derived from the server's address, when addr The node keys will be derived from the server's address, when
available, or else fall back on "id". available, or else fall back on "id".
@ -18280,7 +18415,9 @@ hash-key <key>
HAProxy processes are balancing traffic to the same set of servers. If the HAProxy processes are balancing traffic to the same set of servers. If the
server order of each process is different (because, for example, DNS records server order of each process is different (because, for example, DNS records
were resolved in different orders) then this will allow each independent were resolved in different orders) then this will allow each independent
HAProxy processes to agree on routing decisions. HAProxy processes to agree on routing decisions. Note: "balance random" also
uses "hash-type consistent", and the quality of the distribution will depend
on the quality of the keys.
id <value> id <value>
May be used in the following contexts: tcp, http, log May be used in the following contexts: tcp, http, log
@ -18425,9 +18562,10 @@ See also: "option tcp-check", "option httpchk"
ktls <on|off> [ EXPERIMENTAL ] ktls <on|off> [ EXPERIMENTAL ]
May be used in the following contexts: tcp, http, log, peers, ring May be used in the following contexts: tcp, http, log, peers, ring
Enables or disables ktls for those sockets. If enabled, kTLS will be used Enables or disables ktls for those sockets. If enabled, kTLS will be used if
if the kernel supports it and the cipher is compatible. the kernel supports it and the cipher is compatible. This is only available
This is only available on Linux. on Linux 4.17 and above. Please note that some network drivers and/or TLS
stacks might restrict kTLS usage to TLS v1.2 only. See also "force-tlsv12".
log-bufsize <bufsize> log-bufsize <bufsize>
May be used in the following contexts: log May be used in the following contexts: log
@ -19329,7 +19467,7 @@ strict-maxconn
then never establish more connections to a server than maxconn, and try to then never establish more connections to a server than maxconn, and try to
reuse or kill connections if needed. Please note, however, than it may lead reuse or kill connections if needed. Please note, however, than it may lead
to failed requests in case we can't establish a new connection, and no to failed requests in case we can't establish a new connection, and no
idle connection is available. This can happen when 'private" connections idle connection is available. This can happen when "private" connections
are established, connections tied only to a session, because authentication are established, connections tied only to a session, because authentication
happened. happened.
@ -20456,6 +20594,8 @@ The following keywords are supported:
51d.single(prop[,prop*]) string string 51d.single(prop[,prop*]) string string
add(value) integer integer add(value) integer integer
add_item(delim[,var[,suff]]) string string add_item(delim[,var[,suff]]) string string
aes_cbc_dec(bits,nonce,key[,<aad>]) binary binary
aes_cbc_enc(bits,nonce,key[,<aad>]) binary binary
aes_gcm_dec(bits,nonce,key,aead_tag[,aad]) binary binary aes_gcm_dec(bits,nonce,key,aead_tag[,aad]) binary binary
aes_gcm_enc(bits,nonce,key,aead_tag[,aad]) binary binary aes_gcm_enc(bits,nonce,key,aead_tag[,aad]) binary binary
and(value) integer integer and(value) integer integer
@ -20512,6 +20652,9 @@ ip.ver binary integer
ipmask(mask4[,mask6]) address address ipmask(mask4[,mask6]) address address
json([input-code]) string string json([input-code]) string string
json_query(json_path[,output_type]) string _outtype_ json_query(json_path[,output_type]) string _outtype_
jwt_decrypt_jwk(<jwk>) string binary
jwt_decrypt_cert(<cert>) string binary
jwt_decrypt_secret(<secret>) string binary
jwt_header_query([json_path[,output_type]]) string string jwt_header_query([json_path[,output_type]]) string string
jwt_payload_query([json_path[,output_type]]) string string jwt_payload_query([json_path[,output_type]]) string string
-- keyword -------------------------------------+- input type + output type - -- keyword -------------------------------------+- input type + output type -
@ -20680,6 +20823,31 @@ add_item(<delim>[,<var>[,<suff>]])
http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1),add_item(",",req.score2)' http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1),add_item(",",req.score2)'
http-request set-var(req.tagged) 'var(req.tagged),add_item(",",,(site1))' if src,in_table(site1) http-request set-var(req.tagged) 'var(req.tagged),add_item(",",,(site1))' if src,in_table(site1)
aes_cbc_dec(<bits>,<nonce>,<key>[,<aad>])
Decrypts the raw byte input using the AES128-CBC, AES192-CBC or AES256-CBC
algorithm, depending on the <bits> parameter. All other parameters need to be
base64 encoded and the returned result is in raw byte format. The <aad>
parameter is optional. If the <aad> validation fails, the converter doesn't
return any data.
The <nonce>, <key> and <aad> can either be strings or variables. This
converter requires at least OpenSSL 1.0.1.
Example:
http-response set-header X-Decrypted-Text %[var(txn.enc),\
aes_cbc_dec(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==)]
aes_cbc_enc(<bits>,<nonce>,<key>[,<aad>])
Encrypts the raw byte input using the AES128-CBC, AES192-CBC or AES256-CBC
algorithm, depending on the <bits> parameter. <nonce>, <key> and <aad>
parameters must be base64 encoded.
The <aad> parameter is optional. The returned result is in raw byte format.
The <nonce>, <key> and <aad> can either be strings or variables. This
converter requires at least OpenSSL 1.0.1.
Example:
http-response set-header X-Encrypted-Text %[var(txn.plain),\
aes_cbc_enc(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==)]
aes_gcm_dec(<bits>,<nonce>,<key>,<aead_tag>[,<aad>]) aes_gcm_dec(<bits>,<nonce>,<key>,<aead_tag>[,<aad>])
Decrypts the raw byte input using the AES128-GCM, AES192-GCM or AES256-GCM Decrypts the raw byte input using the AES128-GCM, AES192-GCM or AES256-GCM
algorithm, depending on the <bits> parameter. All other parameters need to be algorithm, depending on the <bits> parameter. All other parameters need to be
@ -21129,9 +21297,9 @@ ip.fp([<mode>])
can be used to distinguish between multiple apparently identical hosts. The can be used to distinguish between multiple apparently identical hosts. The
real-world use case is to refine the identification of misbehaving hosts real-world use case is to refine the identification of misbehaving hosts
between a shared IP address to avoid blocking legitimate users when only one between a shared IP address to avoid blocking legitimate users when only one
is misbehaving and needs to be blocked. The converter builds a 7-byte binary is misbehaving and needs to be blocked. The converter builds an 8-byte minimum
block based on the input. The bytes of the fingerprint are arranged like binary block based on the input. The bytes of the fingerprint are arranged
this: like this:
- byte 0: IP TOS field (see ip.tos) - byte 0: IP TOS field (see ip.tos)
- byte 1: - byte 1:
- bit 7: IPv6 (1) / IPv4 (0) - bit 7: IPv6 (1) / IPv4 (0)
@ -21146,10 +21314,13 @@ ip.fp([<mode>])
- bits 3..0: TCP window scaling + 1 (1..15) / 0 (no WS advertised) - bits 3..0: TCP window scaling + 1 (1..15) / 0 (no WS advertised)
- byte 3..4: tcp.win - byte 3..4: tcp.win
- byte 5..6: tcp.options.mss, or zero if absent - byte 5..6: tcp.options.mss, or zero if absent
- byte 7: 1 bit per present TCP option, with options 2 to 8 being mapped to
bits 0..6 respectively, and bit 7 indicating the presence of any
option from 9 to 255.
The <mode> argument permits to append more information to the fingerprint. By The <mode> argument permits to append more information to the fingerprint. By
default, when the <mode> argument is not set or is zero, the fingerprint is default, when the <mode> argument is not set or is zero, the fingerprint is
solely made of the 7 bytes described above. If <mode> is specified as another solely made of the 8 bytes described above. If <mode> is specified as another
value, it then corresponds to the sum of the following values, and the value, it then corresponds to the sum of the following values, and the
respective components will be concatenated to the fingerprint, in the order respective components will be concatenated to the fingerprint, in the order
below: below:
@ -21159,7 +21330,7 @@ ip.fp([<mode>])
- 4: the source IP address is appended to the fingerprint, which adds - 4: the source IP address is appended to the fingerprint, which adds
4 bytes for IPv4 and 16 for IPv6. 4 bytes for IPv4 and 16 for IPv6.
Example: make a 12..24 bytes fingerprint using the base FP, the TTL and the Example: make a 13..25 bytes fingerprint using the base FP, the TTL and the
source address (1+4=5): source address (1+4=5):
frontend test frontend test
@ -21311,22 +21482,110 @@ json_query(<json_path>[,<output_type>])
# get the value of the key 'iss' from a JWT Bearer token # get the value of the key 'iss' from a JWT Bearer token
http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss') http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss')
jwt_decrypt_cert(<cert>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted thanks to the certificate provided.
The <cert> parameter must be a path to an already loaded certificate (that
can be dumped via the "dump ssl cert" CLI command). The certificate must have
its "jwt" option explicitely set to "on" (see "jwt" crt-list option). It can
be provided directly or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: RSA1_5, RSA-OAEP or RSA-OAEP-256.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_cert("/foo/bar.pem")]
jwt_decrypt_jwk(<jwk>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted thanks to the provided JSON Web Key (RFC7517).
The <jwk> parameter must be a valid JWK of type 'oct' or 'RSA' ('kty' field
of the JSON key) that can be provided either as a string or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used to decode tokens that have a symmetric-type
algorithm ("alg" field of the JOSE header) among the following: A128KW,
A192KW, A256KW, A128GCMKW, A192GCMKW, A256GCMKW, dir. In this case, we expect
the provided JWK to be of the 'oct' type. Please note that the A128KW and
A192KW algorithms are not available on AWS-LC and decryption will not work.
This converter also manages tokens that have an algorithm ("alg" field of
the JOSE header) among the following: RSA1_5, RSA-OAEP or RSA-OAEP-256. In
such a case an 'RSA' type JWK representing a private key must be provided.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Because of the way quotes, commas and double quotes are treated in the
configuration, the contents of the JWK must be properly escaped for this
converter to work properly (see section 2.2 for more information).
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_jwk(\'{\"kty\":\"oct\",\"k\":\"wAsgsg\"}\')]
# or via a variable
http-request set-var(txn.bearer) http_auth_bearer
http-request set-var(txn.jwk) str(\'{\"kty\":\"oct\",\"k\":\"Q-NFLlghQ\"}\')
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_jwk(txn.jwk)]
jwt_decrypt_secret(<secret>)
Performs a signature validation of a JSON Web Token following the JSON Web
Encryption format (see RFC 7516) given in input and returns its content
decrypted thanks to the base64-encoded secret provided. The secret can be
given as a string or via a variable.
The only tokens managed yet are the ones using the Compact Serialization
format (five dot-separated base64-url encoded strings).
This converter can be used for tokens that have an algorithm ("alg" field of
the JOSE header) among the following: A128KW, A192KW, A256KW, A128GCMKW,
A192GCMKW, A256GCMKW, dir. Please note that the A128KW and A192KW algorithms
are not available on AWS-LC and decryption will not work.
The JWE token must be provided base64url-encoded and the output will be
provided "raw". If an error happens during token parsing, signature
verification or content decryption, an empty string will be returned.
Example:
# Get a JWT from the authorization header, put its decrypted content in an
# HTTP header
http-request set-var(txn.bearer) http_auth_bearer
http-request set-header X-Decrypted %[var(txn.bearer),jwt_decrypt_secret("GawgguFyGrWKav7AX4VKUg")]
jwt_header_query([<json_path>[,<output_type>]])
When given a JSON Web Token (JWT) in input, either returns the decoded header
part of the token (the first base64-url encoded part of the JWT) if no
parameter is given, or performs a json_query on the decoded header part of
the token. See "json_query" converter for details about the accepted
json_path and output_type parameters.
This converter can be used with tokens that are either JWS or JWE tokens as
long as they are in the Compact Serialization format.
Please note that this converter is only available when HAProxy has been
compiled with USE_OPENSSL.
jwt_payload_query([<json_path>[,<output_type>]])
When given a JSON Web Token (JWT) of the JSON Web Signed (JWS) format in
input, either returns the decoded payload part of the token (the second
base64-url encoded part of the JWT) if no parameter is given, or performs a
json_query on the decoded payload part of the token. See "json_query"
converter for details about the accepted json_path and output_type
parameters.
Please note that this converter is only available when HAProxy has been
compiled with USE_OPENSSL.
@ -23443,6 +23702,71 @@ var(<var-name>[,<default>]) : undefined
return it as a string. Empty strings are permitted. See section 2.8 about
variables for details.
dump_all_vars([<scope>][,<prefix>][,<delimiter>]) : string
Returns a list of all variables in the specified scope, optionally filtered
by name prefix and with a customizable delimiter.
Output format: var1=value1<delim>var2=value2<delim>...
Value encoding by type:
- Strings: quoted and escaped (", \, \r, \n, \b, \0)
Example: txn.name="John \"Doe\""
- Binary: hex-encoded with 'x' prefix, unquoted
Example: txn.data=x48656c6c6f
- Integers: unquoted decimal
Example: txn.count=42
- Booleans: unquoted "true" or "false"
Example: txn.active=true
- Addresses: unquoted IP address string
Example: txn.client=192.168.1.1
- HTTP Methods: quoted string
Example: req.method="GET"
Arguments:
- <scope> (optional): sess, txn, req, res, or proc. If omitted, all these
scopes are visited in the same order as presented here.
- <prefix> (optional): filters variables whose names start with the
specified prefix (after removing the scope prefix).
Performance note: When using prefix filtering, all variables in the scope
are still visited. This should not be used with configurations involving
thousands of variables.
- <delimiter> (optional): string to separate variables. Defaults to ", "
(comma-space). Can be customized to any string. As a reminder, in order
to pass commas or spaces in a function argument, they need to be enclosed
in single or double quotes (if the expression itself is already within
quotes, use the other ones).
Return value:
- On success: string containing all matching variables
- On failure: empty (sample fetch fails) if output buffer is too small.
The function will not truncate output; it fails completely to avoid
partial data.
This is particularly useful for debugging, logging, or exporting variable
states.
Examples:
# Dump all transaction variables
http-request return string %[dump_all_vars(txn)]
# Dump only variables starting with "user"
http-request set-header X-User-Vars "%[dump_all_vars(txn,user)]"
# Dump all process variables
http-request return string %[dump_all_vars(proc)]
# Custom delimiter (semicolon)
http-request set-header X-Vars "%[dump_all_vars(txn,,; )]"
# Force the default delimiter (comma space)
http-request set-header X-Vars "%[dump_all_vars(txn,,', ')]"
# Prefix filter with custom delimiter
http-request set-header X-Session "%[dump_all_vars(sess,user,|)]"
wait_end : boolean
This fetch either returns true when the inspection period is over, or does
not fetch. It is only used in ACLs, in conjunction with content analysis to
@ -23533,12 +23857,18 @@ bc_src_port integer
bc_srv_queue integer
be_id integer
be_name string
be_connect_timeout integer
be_queue_timeout integer
be_server_timeout integer
be_tarpit_timeout integer
be_tunnel_timeout integer
bytes_in integer
bytes_out integer
cur_connect_timeout integer
cur_client_timeout integer
cur_queue_timeout integer
cur_server_timeout integer
cur_tarpit_timeout integer
cur_tunnel_timeout integer
dst ip
dst_conn integer
@ -23572,6 +23902,7 @@ fc_src ip
fc_src_is_local boolean
fc_src_port integer
fc_unacked integer
fe_tarpit_timeout integer
fe_client_timeout integer
fe_defbe string
fe_id integer
@ -23866,6 +24197,11 @@ be_id : integer
used in a frontend and no backend was used, it returns the current
frontend's id. It can also be used in a tcp-check or an http-check ruleset.
be_connect_timeout : integer
Returns the configuration value in millisecond for the connect timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_connect_timeout".
be_name : string
Returns a string containing the current backend's name. It can be used in
frontends with responses to check which backend processed the request. If
@ -23873,11 +24209,21 @@ be_name : string
frontend's name. It can also be used in a tcp-check or an http-check
ruleset.
be_queue_timeout : integer
Returns the configuration value in millisecond for the queue timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_queue_timeout".
be_server_timeout : integer
Returns the configuration value in millisecond for the server timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_server_timeout".
be_tarpit_timeout : integer
Returns the configuration value in millisecond for the tarpit timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
also the "cur_tarpit_timeout".
be_tunnel_timeout : integer
Returns the configuration value in millisecond for the tunnel timeout of the
current backend. This timeout can be overwritten by a "set-timeout" rule. See
@ -23889,16 +24235,32 @@ bytes_in : integer
bytes_out : integer
See "res.bytes_in".
cur_connect_timeout : integer
Returns the currently applied connect timeout in millisecond for the stream.
In the default case, this will be equal to be_connect_timeout unless a
"set-timeout" rule has been applied. See also "be_connect_timeout".
cur_client_timeout : integer
Returns the currently applied client timeout in millisecond for the stream.
In the default case, this will be equal to fe_client_timeout unless a
"set-timeout" rule has been applied. See also "fe_client_timeout".
cur_queue_timeout : integer
Returns the currently applied queue timeout in millisecond for the stream.
In the default case, this will be equal to be_queue_timeout unless a
"set-timeout" rule has been applied. See also "be_queue_timeout".
cur_server_timeout : integer
Returns the currently applied server timeout in millisecond for the stream.
In the default case, this will be equal to be_server_timeout unless a
"set-timeout" rule has been applied. See also "be_server_timeout".
cur_tarpit_timeout : integer
Returns the currently applied tarpit timeout in millisecond for the stream.
In the default case, this will be equal to fe_tarpit_timeout/be_tarpit_timeout
unless a "set-timeout" rule has been applied. See also "fe_tarpit_timeout"
and "be_tarpit_timeout".
cur_tunnel_timeout : integer
Returns the currently applied tunnel timeout in millisecond for the stream.
In the default case, this will be equal to be_tunnel_timeout unless a
@ -24283,6 +24645,10 @@ fe_name : string
backends to check from which frontend it was called, or to stick all users
coming via a same frontend to the same server.
fe_tarpit_timeout : integer
Returns the configuration value in millisecond for the tarpit timeout of the
current frontend. This timeout can be overwritten by a "set-timeout" rule.
req.bytes_in : integer
This returns the number of bytes received from the client. The value
corresponds to what was received by HAProxy, including some headers and some
@ -29499,7 +29865,7 @@ See also : "filter"
9.1. Trace
----------
filter trace [name <name>] [random-forwarding] [max-fwd <max>] [hexdump]
Arguments:
<name> is an arbitrary name that will be reported in
@ -29512,6 +29878,11 @@ filter trace [name <name>] [random-forwarding] [hexdump]
data. With this parameter, it only forwards a random
amount of the parsed data.
<max> is the maximum amount of data that can be forwarded at
a time. The "max-fwd" option can be combined with
random forwarding. <max> must be a positive integer.
0 means there is no limit.
<hexdump> dumps all forwarded data to the server and the client.
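For example, an illustrative declaration combining these options (the filter
name and the 16kB limit are arbitrary placeholders) could be:
Example:
filter trace name BEFORE random-forwarding max-fwd 16384 hexdump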
This filter can be used as a base to develop new filters. It defines all
@ -31501,8 +31872,9 @@ ocsp-update [ off | on ]
failure" or "Error during insertion" errors. failure" or "Error during insertion" errors.
jwt [ off | on ] jwt [ off | on ]
Allow for this certificate to be used for JWT validation via the Allow for this certificate to be used for JWT validation or decryption via
"jwt_verify_cert" converter when set to 'on'. Its value default to 'off'. the "jwt_verify_cert", "jwt_decrypt_cert" or "jwt_decrypt" converters when
set to 'on'. Its value defaults to 'off'.
When set to 'on' for a given certificate, the CLI command "del ssl cert" will When set to 'on' for a given certificate, the CLI command "del ssl cert" will
not work. In order to be deleted, a certificate must not be used, either for not work. In order to be deleted, a certificate must not be used, either for
@ -31511,6 +31883,30 @@ jwt [ off | on ]
This option can be changed during runtime via the "add ssl jwt" and "del ssl
jwt" CLI commands. See also "show ssl jwt" CLI command.
generate-dummy [ off | on ]
Allow the generation of a private key and its self-signed certificate at
parsing time when set to 'on'. This may be useful if one does not have a
certificate at one's disposal during a testing phase, for instance. In this case,
"keytype", "bits" and "curves" may be used to customize the private key. When
not used, the default value is 'off'. (also see "keytype", "bits" and
"curves").
keytype [ RSA | ECDSA ]
Allow the selection of the private key type used to generate at parsing time
a self-signed certificate. This is the case if "generate-dummy" is set to 'on'
for this certificate. When not used, the default is 'RSA'.
(also see "generate-dummy").
bits <number>
Configure the number of bits to generate an RSA self-signed certificate when
"generate-dummy" is set to 'on' for this self-signed certificate and "keytype"
is set to 'RSA'. When not used, the default is 2048. (also see
"generate-dummy").
curves <string>
Configure the curves when "generate-dummy" is set to 'on' and "keytype" is
set to 'ECDSA" for this self-signed certificate. The default is 'P-384'.
12.8. ACME
----------
@ -31523,6 +31919,9 @@ The ACME section allows to configure HAProxy as an ACMEv2 client. This feature
is experimental meaning that "expose-experimental-directives" must be in the
global section so this can be used.
A guide is available on the HAProxy wiki
https://github.com/haproxy/wiki/wiki/ACME:--native-haproxy
Current limitations as of 3.3:
- The feature is limited to the HTTP-01 or DNS-01 challenges for now. HTTP-01
is completely handled by HAProxy, but DNS-01 needs either the dataplaneAPI or

140
doc/haterm.txt Normal file
View file

@ -0,0 +1,140 @@
------
HATerm
------
HAProxy's dummy HTTP
server for benchmarks
1. Background
-------------
HATerm is a dummy HTTP server that leverages the flexible and scalable
architecture of HAProxy to ease benchmarking of HTTP agents in all versions of
HTTP currently supported by HAProxy (HTTP/1, HTTP/2, HTTP/3), and both in clear
and TLS / QUIC. It follows the same principle as its ancestor HTTPTerm [1],
consisting in producing HTTP responses entirely configured by the request
parameters (size, response time, status etc). It also preserves the spirit of
HTTPTerm, which does not require any configuration beyond an optional listening
address and a port number, though it also supports advanced configurations with
the full spectrum of HAProxy features for specific testing. The goal remains
to make it almost as fast as the original HTTPTerm so that it can become a
de-facto replacement, with a compatible command line and request parameters
that will not change users' habits.
[1] https://github.com/wtarreau/httpterm
2. Compilation
--------------
HATerm may be compiled in the same way as HAProxy but with "haterm" as Makefile
target to provide on the "make" command line as follows:
$ make -j $(nproc) TARGET=linux-glibc haterm
HATerm supports HTTPS/SSL/TCP:
$ make TARGET=linux-glibc USE_OPENSSL=1 haterm
It also supports QUIC:
$ make -j $(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 haterm
Technically speaking, it uses the regular HAProxy source and object code with a
different command line parser. As such, all build options supported by HAProxy
also apply to HATerm. See INSTALL for more details about how to compile them.
3. Execution
------------
HATerm is a very easy to use HTTP server with support for all the HTTP
versions. It displays its usage when run without arguments or with wrong arguments:
$ ./haterm
Usage : haterm -L [<ip>]:<clear port>[:<TCP&QUIC SSL port>] [-L...]* [opts]
where <opts> may be any combination of:
-G <line> : multiple option; append <line> to the "global" section
-F <line> : multiple option; append <line> to the "frontend" section
-T <line> : multiple option; append <line> to the "traces" section
-C : dump the configuration and exit
-D : goes daemon
-b <keysize> : RSA key size in bits (ex: "2048", "4096"...)
-c <curves> : ECDSA curves (ex: "P-256", "P-384"...)
-v : shows version
-d : enable the traces for all http protocols
--quic-bind-opts <opts> : append options to QUIC "bind" lines
--tcp-bind-opts <opts> : append options to TCP "bind" lines
Arguments -G, -F, -T permit appending one or multiple lines at the end of their
respective sections. A tab character ('\t') is prepended at the beginning of
the argument, and a line feed ('\n') is appended at the end. It is also
possible to insert multiple lines at once using escape sequences '\n' and '\t'
inside the string argument.
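For illustration (the appended keywords and values below are arbitrary
examples), global and frontend lines may be added like this:
$ ./haterm -L 127.0.0.1:8888 -G "maxconn 65536" -F "option http-no-delay\n\toption dontlognull"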
Like HAProxy, HATerm may listen on several TCP/UDP addresses, which can be
provided by multiple "-L" options. To be functional, it needs at least one
correct "-L" option to be set.
Examples:
$ ./haterm -L 127.0.0.1:8888 # listen on 127.0.0.1:8888 TCP address
$ ./haterm -L 127.0.0.1:8888:8889 # listen on 127.0.0.1:8888 TCP address,
# 127.0.0.1:8889 SSL/TCP address,
# and 127.0.0.1:8889 QUIC/UDP address
$ ./haterm -L 127.0.0.1:8888:8889 -L [::1]:8888:8889
With USE_QUIC_OPENSSL_COMPAT support, the user must configure a global
section as for HAProxy. HATerm internally sets up its configuration in
memory, as HAProxy does from configuration files:
$ ./haterm -L 127.0.0.1:8888:8889
[NOTICE] (1371578) : haproxy version is 3.4-dev4-ba5eab-28
[NOTICE] (1371578) : path to executable is ./haterm
[ALERT] (1371578) : Binding [haterm cfgfile:12] for frontend
___haterm_frontend___: this SSL library does not
support the QUIC protocol. A limited compatibility
layer may be enabled using the "limited-quic" global
option if desired.
Such an alert may be fixed with the "-G" option:
$ ./haterm -L 127.0.0.1:8888:8889 -G "limited-quic"
When the SSL support is not compiled in, the second port is ignored. This is
also the case for the QUIC support.
HATerm adjusts its responses depending on the requests it receives. An empty
query string provides the information about how the URIs are understood by
HATerm:
$ curl http://127.0.0.1:8888/?
HAProxy's dummy HTTP server for benchmarks - version 3.4-dev4.
All integer argument values are in the form [digits]*[kmgr] (r=random(0..1))
The following arguments are supported to override the default objects :
- /?s=<size> return <size> bytes.
E.g. /?s=20k
- /?r=<retcode> present <retcode> as the HTTP return code.
E.g. /?r=404
- /?c=<cache> set the return as not cacheable if <1.
E.g. /?c=0
- /?A=<req-after> drain the request body after sending the response.
E.g. /?A=1
- /?C=<close> force the response to use close if >0.
E.g. /?C=1
- /?K=<keep-alive> force the response to use keep-alive if >0.
E.g. /?K=1
- /?t=<time> wait <time> milliseconds before responding.
E.g. /?t=500
- /?k=<enable> Enable transfer encoding chunked with only one chunk
if >0.
- /?R=<enable> Enable sending random data if >0.
Note that those arguments may be combined on one line, separated by any of the
delimiters among [&?,;/] :
- GET /?s=20k&c=1&t=700&K=30r HTTP/1.0
- GET /?r=500?s=0?c=0?t=1000 HTTP/1.0

View file

@ -5,7 +5,7 @@
The buffer list API allows one to share a certain amount of buffers between
multiple entities, which will each see their own as lists of buffers, while
keeping a shared free list. The immediate use case is for muxes, which may
want to allocate up to a certain number of buffers per connection, shared
among all streams. In this case, each stream will first request a new list
for its own use, then may request extra entries from the free list. At any

View file

@ -11,7 +11,7 @@ default init, this was controversial but fedora and archlinux already uses it.
At this time HAProxy still had a multi-process model, and the way haproxy is
working was incompatible with the daemon mode.
Systemd is compatible with traditional forking services, but somehow HAProxy
is different. To work correctly, systemd needs a main PID, this is the PID of
the process that systemd will supervise.
@ -45,7 +45,7 @@ However the wrapper suffered from several problems:
### mworker V1
HAProxy 1.8 got rid of the wrapper which was replaced by the master worker
mode. This first version was basically a reintegration of the wrapper features
within HAProxy. HAProxy is launched with the -W flag, reads the configuration and
then forks. In mworker mode, the master is usually launched as a root process,
@ -86,7 +86,7 @@ retrieved automatically.
The master supervises the workers: when a current worker (not a previous one
from before the reload) exits without having been asked to reload, the master
will emit an "exit-on-failure" error, kill every worker with a SIGTERM
and exit with the same error code as the failed worker. This behavior can be
changed by using the "no exit-on-failure" option in the global section.
While the master is supervising the workers using the wait() function, the
@ -186,8 +186,8 @@ number that can be found in HAPROXY_PROCESSES. With this change the stats socket
in the configuration is less useful and everything can be done from the master
CLI.
With 2.7, the reload mechanism of the master CLI evolved; with previous versions,
this mechanism was asynchronous, so once the `reload` command was received, the
master would reload, the active master CLI connection was closed, and there was
no way to return a status as a response to the `reload` command. To achieve a
synchronous reload, a dedicated sockpair is used, one side uses a master CLI
@ -208,3 +208,38 @@ starts with -st to achieve a hard stop on the previous worker.
Version 3.0 got rid of the libsystemd dependencies for sd_notify() after the
events of xz/openssh, the function is now implemented directly in haproxy in
src/systemd.c.
### mworker V3
This version was implemented with HAProxy 3.1, the goal was to stop parsing and
applying the configuration in the master process.
One of the caveats of the previous implementation was that the parser could take
a lot of time, and the master process would be stuck in the parser instead of
handling its polling loop, signals etc. Some parts of the configuration parsing
could also be less reliable with third-party code (EXTRA_OBJS), it could, for
example, allow opening FDs and not closing them before the reload which
would crash the master after a few reloads.
The startup of the master-worker was reorganized this way:
- the "discovery" mode, which is a lighter configuration parsing step, only
applies the configuration which need to be effective for the master process.
For example, "master-worker", "mworker-max-reloads" and less than 20 other
keywords that are identified by KWF_DISCOVERY in the code. It is really fast
as it don't need all the configuration to be applied in the master process.
- the master will then fork a worker, with a PROC_O_INIT flag. This worker has
a temporary sockpair connected to the master CLI. Once the worker is forked,
the master initializes its configuration and starts its polling loop.
- The newly forked worker will try to parse the configuration, which could
result in a failure (exit 1), or any bad error code. In case of success, the
worker will send a "READY" message to the master CLI then close this FD. At
this step everything is initialized and the worker can enter its polling
loop.
- The master then waits for the worker; it can either:
* receive the READY message over the mCLI, resulting in a successful loading
of haproxy
* receive a SIGCHLD, meaning the worker exited and couldn't load

View file

@ -1693,7 +1693,7 @@ A small team of trusted developers will receive it and will be able to propose
a fix. We usually don't use embargoes and once a fix is available it gets
merged. In some rare circumstances it can happen that a release is coordinated
with software vendors. Please note that this process usually messes up with
everyone's work, and that rushed up releases can sometimes introduce new bugs,
so it's best avoided unless strictly necessary; as such, there is often little
consideration for reports that needlessly cause such extra burden, and the best
way to see your work credited usually is to provide a working fix, which will

View file

@ -1725,6 +1725,30 @@ add acl [@<ver>] <acl> <pattern>
This command cannot be used if the reference <acl> is a name also used with
a map. In this case, the "add map" command must be used instead.
add backend <name> from <defproxy> [mode <mode>] [guid <guid>] [ EXPERIMENTAL ]
Instantiate a new backend proxy with the name <name>.
Only TCP or HTTP proxies can be created. All of the settings are inherited
from the <defproxy> default proxy instance. By default, it is mandatory to
specify the backend mode via the argument of the same name, unless <defproxy>
already defines it explicitly. It is also possible to use an optional GUID
argument if wanted.
Servers can be added via the command "add server". The backend is initialized
in the unpublished state. Once considered ready for traffic, use "publish
backend" to expose the newly created instance.
All named default proxies can be used, given that they validate the same
inheritance rules applied during configuration parsing. There are some
exceptions though, for example when the mode is neither TCP nor HTTP. Another
exception is that it is not yet possible to use a defaults proxy which
references custom HTTP errors, for example via the errorfiles or http-rules
keywords.
This command is restricted and can only be issued on sockets configured for
level "admin". Moreover, this feature is still considered in development so it
also requires experimental mode (see "experimental-mode on").
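Example (illustrative only; the defaults section name, backend name, server
address and socket path are placeholders):
$ echo "experimental-mode on; add backend dyn_be from http_defaults mode http" | socat stdio /var/run/haproxy.sock
$ echo "add server dyn_be/srv1 192.0.2.10:80; publish backend dyn_be" | socat stdio /var/run/haproxy.sock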
add map [@<ver>] <map> <key> <value>
add map [@<ver>] <map> <payload>
Add an entry into the map <map> to associate the value <value> to the key
@ -2100,6 +2124,30 @@ del acl <acl> [<key>|#<ref>]
listing the content of the acl. Note that if the reference <acl> is a name and
is shared with a map, the entry will be also deleted in the map.
del backend <name>
Removes the backend proxy with the name <name>.
This operation is only possible for TCP or HTTP proxies. To succeed, the
backend instance must have been first unpublished. Also, all of its servers
must first be removed (via "del server" CLI). Finally, no stream must still
be attached to the backend instance.
There are additional restrictions which prevent backend removal. First, a
backend cannot be removed if it is explicitly referenced by config elements,
for example via a use_backend rule or in sample expressions. Some proxy
options are also incompatible with runtime deletion. Currently, this is the
case when the deprecated "dispatch" or "option transparent" directives are
used. Also, a backend cannot be removed if there is a stick-table declared in
it. Finally, it is impossible for now to remove a backend if QUIC servers
were present in it.
It can be useful to use "wait be-removable" prior to this command to check
for the aforementioned prerequisites. This also provides a method to wait for
the final closure of the streams attached to the target backend.
This command is restricted and can only be issued on sockets configured for
level "admin". Moreover, this feature is still considered in development so it
also requires experimental mode (see "experimental-mode on").
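Example (illustrative only; backend, server and socket names are
placeholders):
$ echo "experimental-mode on; unpublish backend dyn_be" | socat stdio /var/run/haproxy.sock
$ echo "disable server dyn_be/srv1; wait 10s srv-removable dyn_be/srv1; del server dyn_be/srv1" | socat stdio /var/run/haproxy.sock
$ echo "experimental-mode on; wait 10s be-removable dyn_be; del backend dyn_be" | socat stdio /var/run/haproxy.sock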
del map <map> [<key>|#<ref>]
Delete all the map entries from the map <map> corresponding to the key <key>.
<map> is the #<id> or the <name> returned by "show map". If the <ref> is used,
@ -2474,6 +2522,11 @@ prompt [help | n | i | p | timed]*
advanced scripts, and the non-interactive mode (default) to basic scripts.
Note that the non-interactive mode is not available for the master socket.
publish backend <backend>
Activates content switching to a backend instance. This is the reverse
operation of "unpublish backend" command. This command is restricted and can
only be issued on sockets configured for levels "operator" or "admin".
quit
Close the connection when in interactive mode.
@ -2529,7 +2582,8 @@ set maxconn global <maxconn>
delayed until the threshold is reached. A value of zero restores the initial
setting.
set profiling memory { on | off }
set profiling tasks { auto | on | off | lock | no-lock | memory | no-memory }
Enables or disables CPU or memory profiling for the indicated subsystem. This
is equivalent to setting or clearing the "profiling" settings in the "global"
section of the configuration file. Please also see "show profiling". Note
@ -2539,6 +2593,13 @@ set profiling { tasks | memory } { auto | on | off }
on the linux-glibc target), and requires USE_MEMORY_PROFILING to be set at
compile time.
For tasks profiling, it is possible to enable or disable the collection of
per-task lock and memory timings at runtime, but the change is only taken
into account next time the profiler switches from off/auto to on (either
automatically or manually). Thus when using "no-lock" to disable per-task
lock profiling and save CPU cycles, it is recommended to flip the task
profiling off then on to commit the change.
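For example, an illustrative sequence disabling per-task lock profiling and
committing the change by flipping task profiling off then on (the socket path
is a placeholder):
$ echo "set profiling tasks no-lock; set profiling tasks off; set profiling tasks on" | socat stdio /var/run/haproxy.sock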
set rate-limit connections global <value>
Change the process-wide connection rate limit, which is set by the global
'maxconnrate' setting. A value of zero disables the limitation. This limit
@ -2842,6 +2903,13 @@ operator
increased. It also drops expert and experimental mode. See also "show cli
level".
unpublish backend <backend>
Marks the backend as unqualified for future traffic selection. In effect,
use_backend / default_backend rules which reference it are ignored and the
next content switching rules are evaluated. Contrary to disabled backends,
server health checks remain active. This command is restricted and can only
be issued on sockets configured for levels "operator" or "admin".
user
Decrease the CLI level of the current CLI session to user. It can't be
increased. It also drops expert and experimental mode. See also "show cli
@ -3615,7 +3683,7 @@ show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
format" described in the section above. In short, the second column (after the format" described in the section above. In short, the second column (after the
first ':') indicates the origin, nature, scope and persistence state of the first ':') indicates the origin, nature, scope and persistence state of the
variable. The third column indicates the field type, among "s32", "s64", variable. The third column indicates the field type, among "s32", "s64",
"u32", "u64", "flt' and "str". Then the fourth column is the value itself, "u32", "u64", "flt" and "str". Then the fourth column is the value itself,
which the consumer knows how to parse thanks to column 3 and how to process which the consumer knows how to parse thanks to column 3 and how to process
thanks to column 2. thanks to column 2.
@ -4482,6 +4550,13 @@ wait { -h | <delay> } [<condition> [<args>...]]
specified condition to be satisfied, to unrecoverably fail, or to remain
unsatisfied for the whole <delay> duration. The supported conditions are:
- be-removable <proxy> : this will wait for the specified proxy backend to be
removable by the "del backend" command. Some conditions will never be
accepted (e.g. backend not yet unpublished or with servers in it) and will
cause the report of a specific error message indicating what condition is
not met. If everything is OK before the delay, a success is returned and
the operation is terminated.
- srv-removable <proxy>/<server> : this will wait for the specified server to
be removable by the "del server" command, i.e. be in maintenance and no
longer have any connection on it (neither active nor idle). Some conditions

View file

@ -30,6 +30,7 @@ Revision history
2020/03/05 - added the unique ID TLV type (Tim Düsterhus)
2025/09/09 - added SSL-related TLVs for key exchange group and signature
scheme (Steven Collison)
2026/01/15 - added SSL client certificate TLV (Simon Ser)
1. Background
@ -549,6 +550,7 @@ The following types have already been registered for the <type> field :
#define PP2_SUBTYPE_SSL_KEY_ALG 0x25
#define PP2_SUBTYPE_SSL_GROUP 0x26
#define PP2_SUBTYPE_SSL_SIG_SCHEME 0x27
#define PP2_SUBTYPE_SSL_CLIENT_CERT 0x28
#define PP2_TYPE_NETNS 0x30
@ -625,7 +627,10 @@ For the type PP2_TYPE_SSL, the value is itself a defined like this :
uint8_t client;
uint32_t verify;
struct pp2_tlv sub_tlv[0];
} __attribute__((packed));
Note the "packed" attribute which indicates that each field starts immediately
after the previous one (i.e. without type-specific alignment nor padding).
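As a purely illustrative C sketch (not part of the specification itself, with
pp2_tlv re-stated from its earlier definition in this document), the effect of
the attribute can be checked with offsetof():

    #include <stddef.h>
    #include <stdint.h>

    struct pp2_tlv {
            uint8_t type;
            uint8_t length_hi;
            uint8_t length_lo;
            uint8_t value[0];
    } __attribute__((packed));

    struct pp2_tlv_ssl {
            uint8_t client;
            uint32_t verify;
            struct pp2_tlv sub_tlv[0];
    } __attribute__((packed));

    /* with "packed", <verify> immediately follows <client>; without the
     * attribute most ABIs would insert 3 bytes of padding before it.
     */
    _Static_assert(offsetof(struct pp2_tlv_ssl, verify) == 1, "unexpected padding");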
The <verify> field will be zero if the client presented a certificate
and it was successfully verified, and non-zero otherwise.
@ -672,6 +677,10 @@ The second level TLV PP2_SUBTYPE_SSL_SIG_SCHEME provides the US-ASCII string
name of the algorithm the frontend used to sign the ServerKeyExchange or
CertificateVerify message, for example "rsa_pss_rsae_sha256".
The optional second level TLV PP2_SUBTYPE_SSL_CLIENT_CERT provides the raw
X.509 client certificate encoded in ASN.1 DER. The frontend may choose to omit
this TLV depending on configuration.
In all cases, the string representation (in UTF8) of the Common Name field
(OID: 2.5.4.3) of the client certificate's Distinguished Name, is appended
using the TLV format and the type PP2_SUBTYPE_SSL_CN. E.g. "example.com".

View file

@ -24,7 +24,7 @@ vtest installation
------------------------
To use vtest you will have to download and compile the recent vtest
sources found at https://github.com/vtest/VTest2.
To compile vtest:

View file

@ -102,7 +102,10 @@ enum act_name {
/* Timeout name valid for a set-timeout rule */
enum act_timeout_name {
ACT_TIMEOUT_CONNECT,
ACT_TIMEOUT_SERVER,
ACT_TIMEOUT_QUEUE,
ACT_TIMEOUT_TARPIT,
ACT_TIMEOUT_TUNNEL,
ACT_TIMEOUT_CLIENT,
};

View file

@ -33,6 +33,8 @@
#define HA_PROF_TASKS_MASK 0x00000003 /* per-task CPU profiling mask */
#define HA_PROF_MEMORY 0x00000004 /* memory profiling */
#define HA_PROF_TASKS_MEM 0x00000008 /* per-task CPU profiling with memory */
#define HA_PROF_TASKS_LOCK 0x00000010 /* per-task CPU profiling with locks */
#ifdef USE_MEMORY_PROFILING

View file

@ -192,6 +192,7 @@ struct lbprm {
void (*server_requeue)(struct server *); /* function used to place the server where it must be */
void (*proxy_deinit)(struct proxy *); /* to be called when we're destroying the proxy */
void (*server_deinit)(struct server *); /* to be called when we're destroying the server */
int (*server_init)(struct server *); /* initialize a freshly added server (runtime); <0=fail. */
};
#endif /* _HAPROXY_BACKEND_T_H */

View file

@ -69,6 +69,7 @@ int backend_parse_balance(const char **args, char **err, struct proxy *curproxy)
int tcp_persist_rdp_cookie(struct stream *s, struct channel *req, int an_bit);
int be_downtime(struct proxy *px);
int be_supports_dynamic_srv(struct proxy *px, char **msg);
void recount_servers(struct proxy *px);
void update_backend_weight(struct proxy *px);
@ -85,10 +86,20 @@ static inline int be_usable_srv(struct proxy *be)
return be->srv_bck;
}
/* Returns true if <be> backend can be used as a target of switching rules. */
static inline int be_is_eligible(const struct proxy *be)
{
/* A disabled or unpublished backend cannot be selected for traffic.
* Note that STOPPED state is ignored as there is a risk of breaking
* requests during soft-stop.
*/
return !(be->flags & (PR_FL_DISABLED|PR_FL_BE_UNPUBLISHED));
}
/* set the time of last session on the backend */
static inline void be_set_sess_last(struct proxy *be)
{
if (be->be_counters.shared.tg)
HA_ATOMIC_STORE(&be->be_counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
}

View file

@ -111,6 +111,7 @@ extern char *cursection;
extern int non_global_section_parsed;
extern struct proxy *curproxy;
extern struct proxy *last_defproxy;
extern char initial_cwd[PATH_MAX];
int cfg_parse_global(const char *file, int linenum, char **args, int inv);
@ -140,7 +141,7 @@ int warnif_misplaced_tcp_req_sess(struct proxy *proxy, const char *file, int lin
int warnif_misplaced_tcp_req_cont(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_misplaced_tcp_res_cont(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_misplaced_quic_init(struct proxy *proxy, const char *file, int line, const char *arg, const char *arg2);
int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, char **err);
int warnif_tcp_http_cond(const struct proxy *px, const struct acl_cond *cond);
int too_many_args_idx(int maxarg, int index, char **args, char **msg, int *err_code);
int too_many_args(int maxarg, char **args, char **msg, int *err_code);

View file

@ -24,6 +24,7 @@
#include <haproxy/api-t.h>
#include <haproxy/buf-t.h>
#include <haproxy/filters-t.h>
#include <haproxy/show_flags-t.h>
/* The CF_* macros designate Channel Flags, which may be ORed in the bit field
@ -205,6 +206,7 @@ struct channel {
unsigned char xfer_large; /* number of consecutive large xfers */
unsigned char xfer_small; /* number of consecutive small xfers */
int analyse_exp; /* expiration date for current analysers (if set) */
struct chn_flt flt; /* current state of filters active on this channel */
};

View file

@ -787,8 +787,12 @@ static inline int channel_recv_max(const struct channel *chn)
*/
static inline size_t channel_data_limit(const struct channel *chn)
{
size_t max = (global.tune.bufsize - global.tune.maxrewrite);
size_t max;
if (!c_size(chn))
return 0;
max = (c_size(chn) - global.tune.maxrewrite);
if (IS_HTX_STRM(chn_strm(chn)))
max -= HTX_BUF_OVERHEAD;
return max;

View file

@ -32,6 +32,7 @@
extern struct pool_head *pool_head_trash;
extern struct pool_head *pool_head_large_trash;
/* function prototypes */
@ -46,6 +47,9 @@ int chunk_asciiencode(struct buffer *dst, struct buffer *src, char qc);
int chunk_strcmp(const struct buffer *chk, const char *str);
int chunk_strcasecmp(const struct buffer *chk, const char *str);
struct buffer *get_trash_chunk(void);
struct buffer *get_large_trash_chunk(void);
struct buffer *get_trash_chunk_sz(size_t size);
struct buffer *get_larger_trash_chunk(struct buffer *chunk);
int init_trash_buffers(int first);
static inline void chunk_reset(struct buffer *chk)
@ -106,12 +110,53 @@ static forceinline struct buffer *alloc_trash_chunk(void)
return chunk;
}
/*
* Allocate a large trash chunk from the reentrant pool. The buffer starts at
* the end of the chunk. This chunk must be freed using free_trash_chunk(). This
* call may fail and the caller is responsible for checking that the returned
* pointer is not NULL.
*/
static forceinline struct buffer *alloc_large_trash_chunk(void)
{
struct buffer *chunk;
if (!pool_head_large_trash)
return NULL;
chunk = pool_alloc(pool_head_large_trash);
if (chunk) {
char *buf = (char *)chunk + sizeof(struct buffer);
*buf = 0;
chunk_init(chunk, buf,
pool_head_large_trash->size - sizeof(struct buffer));
}
return chunk;
}
/*
* Allocate a trash chunk accordingly to the requested size. This chunk must be
* freed using free_trash_chunk(). This call may fail and the caller is
* responsible for checking that the returned pointer is not NULL.
*/
static forceinline struct buffer *alloc_trash_chunk_sz(size_t size)
{
if (likely(size <= pool_head_trash->size))
return alloc_trash_chunk();
else if (pool_head_large_trash && size <= pool_head_large_trash->size)
return alloc_large_trash_chunk();
else
return NULL;
}
/*
* free a trash chunk allocated by alloc_trash_chunk(). NOP on NULL.
*/
static forceinline void free_trash_chunk(struct buffer *chunk)
{
if (likely(chunk && chunk->size == pool_head_trash->size - sizeof(struct buffer)))
pool_free(pool_head_trash, chunk);
else
pool_free(pool_head_large_trash, chunk);
}
/* copies chunk <src> into <chk>. Returns 0 in case of failure. */

View file

@ -100,6 +100,7 @@ enum cli_wait_err {
enum cli_wait_cond {
CLI_WAIT_COND_NONE, // no condition to wait on
CLI_WAIT_COND_SRV_UNUSED,// wait for server to become unused
CLI_WAIT_COND_BE_UNUSED, // wait for backend to become unused
};
struct cli_wait_ctx {

View file

@ -146,7 +146,6 @@ enum {
CO_FL_WANT_SPLICING = 0x00001000, /* we wish to use splicing on the connection when possible */
CO_FL_SSL_NO_CACHED_INFO = 0x00002000, /* Don't use any cached information when creating a new SSL connection */
/* unused: 0x00002000 */
CO_FL_EARLY_SSL_HS = 0x00004000, /* We have early data pending, don't start SSL handshake yet */
CO_FL_EARLY_DATA = 0x00008000, /* At least some of the data are early data */

View file

@ -66,7 +66,7 @@ struct counters_shared {
COUNTERS_SHARED;
struct {
COUNTERS_SHARED_TG;
} **tg;
};
/*
@ -101,7 +101,7 @@ struct fe_counters_shared_tg {
struct fe_counters_shared {
COUNTERS_SHARED;
struct fe_counters_shared_tg **tg;
};
/* counters used by listeners and frontends */
@ -160,7 +160,7 @@ struct be_counters_shared_tg {
struct be_counters_shared {
COUNTERS_SHARED;
struct be_counters_shared_tg **tg;
};
/* counters used by servers and backends */
@ -185,6 +185,29 @@ struct be_counters {
} p; /* protocol-specific stats */
};
/* extra counters that are registered at boot by various modules */
enum counters_type {
COUNTERS_FE = 0,
COUNTERS_BE,
COUNTERS_SV,
COUNTERS_LI,
COUNTERS_RSLV,
COUNTERS_OFF_END /* must always be last */
};
struct extra_counters {
char **datap; /* points to pointer to heap containing counters allocated in a linear fashion */
size_t size; /* size of allocated data */
size_t tgrp_step; /* distance in words between two datap for consecutive tgroups, 0 for single */
uint nbtgrp; /* number of thread groups accessing these counters */
enum counters_type type; /* type of object containing the counters */
};
#define EXTRA_COUNTERS(name) \
struct extra_counters *name
#endif /* _HAPROXY_COUNTERS_T_H */ #endif /* _HAPROXY_COUNTERS_T_H */
/* /*

View file

@ -27,6 +27,8 @@
#include <haproxy/counters-t.h>
#include <haproxy/guid-t.h>
extern THREAD_LOCAL void *trash_counters;
int counters_fe_shared_prepare(struct fe_counters_shared *counters, const struct guid_node *guid, char **errmsg);
int counters_be_shared_prepare(struct be_counters_shared *counters, const struct guid_node *guid, char **errmsg);
@ -43,11 +45,13 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
*/
#define COUNTERS_SHARED_LAST_OFFSET(scounters, type, offset) \
({ \
unsigned long last = 0; \
unsigned long now_seconds = ns_to_sec(now_ns); \
int it; \
\
if (scounters) \
last = HA_ATOMIC_LOAD((type *)((char *)scounters[0] + offset));\
for (it = 1; (it < global.nbtgroups && scounters); it++) { \
unsigned long cur = HA_ATOMIC_LOAD((type *)((char *)scounters[it] + offset));\
if ((now_seconds - cur) < (now_seconds - last)) \
last = cur; \
@ -74,7 +78,7 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
uint64_t __ret = 0; \
int it; \
\
for (it = 0; (it < global.nbtgroups && scounters); it++) \
__ret += rfunc((type *)((char *)scounters[it] + offset)); \
__ret; \
})
@ -94,9 +98,105 @@ void counters_be_shared_drop(struct be_counters_shared *counters);
uint64_t __ret = 0; \
int it; \
\
for (it = 0; (it < global.nbtgroups && scounters); it++) \
__ret += rfunc(&scounters[it]->elem, arg1, arg2); \
__ret; \
})
/* Manipulation of extra_counters, for boot-time registrable modules */
/* retrieve the base storage of extra counters (first tgroup if any) */
#define EXTRA_COUNTERS_BASE(counters, mod) \
(likely(counters) ? \
((void *)(*(counters)->datap + (mod)->counters_off[(counters)->type])) : \
(trash_counters))
/* retrieve the pointer to the extra counters storage for module <mod> for the
* current TGID.
*/
#define EXTRA_COUNTERS_GET(counters, mod) \
(likely(counters) ? \
((void *)(counters)->datap[(counters)->tgrp_step * (tgid - 1)] + \
(mod)->counters_off[(counters)->type]) : \
(trash_counters))
#define EXTRA_COUNTERS_REGISTER(counters, ctype, alloc_failed_label, storage, step) \
do { \
typeof(*counters) _ctr; \
_ctr = calloc(1, sizeof(*_ctr)); \
if (!_ctr) \
goto alloc_failed_label; \
_ctr->type = (ctype); \
_ctr->tgrp_step = (step); \
_ctr->datap = (storage); \
*(counters) = _ctr; \
} while (0)
#define EXTRA_COUNTERS_ADD(mod, counters, new_counters, csize) \
do { \
typeof(counters) _ctr = (counters); \
(mod)->counters_off[_ctr->type] = _ctr->size; \
_ctr->size += (csize); \
} while (0)
#define EXTRA_COUNTERS_ALLOC(counters, alloc_failed_label, nbtg) \
do { \
typeof(counters) _ctr = (counters); \
char **datap = _ctr->datap; \
uint tgrp; \
_ctr->nbtgrp = _ctr->tgrp_step ? (nbtg) : 1; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
*datap = malloc((_ctr)->size); \
if (!*_ctr->datap) \
goto alloc_failed_label; \
datap += _ctr->tgrp_step; \
} \
} while (0)
#define EXTRA_COUNTERS_INIT(counters, mod, init_counters, init_counters_size) \
do { \
typeof(counters) _ctr = (counters); \
char **datap = _ctr->datap; \
uint tgrp; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
memcpy(*datap + mod->counters_off[_ctr->type], \
(init_counters), (init_counters_size)); \
datap += _ctr->tgrp_step; \
} \
} while (0)
#define EXTRA_COUNTERS_FREE(counters) \
do { \
typeof(counters) _ctr = (counters); \
if (_ctr) { \
char **datap = _ctr->datap; \
uint tgrp; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
ha_free(datap); \
datap += _ctr->tgrp_step; \
} \
free(_ctr); \
} \
} while (0)
/* aggregate all values of <metricp> over the thread groups handled by
* <counters>. <metricp> MUST correspond to an entry of the first tgrp of
* <counters>. The number of groups and the step are found in <counters>. The
* type of the return value is the same as <metricp>, and must be a scalar so
* that values are summed before being returned.
*/
#define EXTRA_COUNTERS_AGGR(counters, metricp) \
({ \
typeof(counters) _ctr = (counters); \
typeof(metricp) *valp, _ret = 0; \
if (_ctr) { \
size_t ofs = (char *)&metricp - _ctr->datap[0]; \
uint tgrp; \
for (tgrp = 0; tgrp < _ctr->nbtgrp; tgrp++) { \
valp = (typeof(valp))(_ctr->datap[tgrp * (counters)->tgrp_step] + ofs); \
_ret += HA_ATOMIC_LOAD(valp); \
} \
} \
_ret; \
})
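And a hedged sketch of the read side, reusing the hypothetical module above: EXTRA_COUNTERS_BASE() provides the first-tgroup entry the comment requires, and EXTRA_COUNTERS_AGGR() sums it over every allocated group:

struct my_mod_counters *base = EXTRA_COUNTERS_BASE(px->extra_counters_fe, mod);
/* evaluates to 0 when the proxy carries no extra counters */
long long total_hits = EXTRA_COUNTERS_AGGR(px->extra_counters_fe, base->hits);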
#endif /* _HAPROXY_COUNTERS_H */ #endif /* _HAPROXY_COUNTERS_H */

View file

@ -24,12 +24,12 @@
#include <import/ebtree-t.h> #include <import/ebtree-t.h>
#include <haproxy/connection-t.h>
#include <haproxy/buf-t.h> #include <haproxy/buf-t.h>
#include <haproxy/connection-t.h>
#include <haproxy/counters-t.h>
#include <haproxy/dgram-t.h> #include <haproxy/dgram-t.h>
#include <haproxy/dns_ring-t.h> #include <haproxy/dns_ring-t.h>
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/stats-t.h>
#include <haproxy/task-t.h> #include <haproxy/task-t.h>
#include <haproxy/thread.h> #include <haproxy/thread.h>
@ -152,6 +152,7 @@ struct dns_nameserver {
struct dns_stream_server *stream; /* used for tcp dns */ struct dns_stream_server *stream; /* used for tcp dns */
EXTRA_COUNTERS(extra_counters); EXTRA_COUNTERS(extra_counters);
char *extra_counters_storage; /* storage used for extra_counters above */
struct dns_counters *counters; struct dns_counters *counters;
struct list list; /* nameserver chained list */ struct list list; /* nameserver chained list */

View file

@ -36,6 +36,7 @@
#include <haproxy/pool.h> #include <haproxy/pool.h>
extern struct pool_head *pool_head_buffer; extern struct pool_head *pool_head_buffer;
extern struct pool_head *pool_head_large_buffer;
int init_buffer(void); int init_buffer(void);
void buffer_dump(FILE *o, struct buffer *b, int from, int to); void buffer_dump(FILE *o, struct buffer *b, int from, int to);
@ -53,6 +54,30 @@ static inline int buffer_almost_full(const struct buffer *buf)
return b_almost_full(buf); return b_almost_full(buf);
} }
/* Return 1 if <sz> is the default buffer size */
static inline int b_is_default_sz(size_t sz)
{
return (sz == pool_head_buffer->size);
}
/* Return 1 if <sz> is the size of a large buffer (always false if large buffers are not configured) */
static inline int b_is_large_sz(size_t sz)
{
return (pool_head_large_buffer && sz == pool_head_large_buffer->size);
}
/* Return 1 if <buf> is a default buffer */
static inline int b_is_default(struct buffer *buf)
{
return b_is_default_sz(b_size(buf));
}
/* Return 1 if <buf> is a large buffer (always 0 if large buffers are not configured) */
static inline int b_is_large(struct buffer *buf)
{
return b_is_large_sz(b_size(buf));
}
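A rough note on why these predicates are enough: when large buffers are configured they are strictly bigger than default ones, so the size alone identifies the owning pool. A minimal sketch under that assumption (this is essentially what the __b_free() change further below does, minus the emergency-buffer path):

/* hypothetical release helper: picks the owning pool from the size only */
static inline void my_release_area(char *area, size_t sz)
{
        if (b_is_large_sz(sz))
                pool_free(pool_head_large_buffer, area);
        else
                pool_free(pool_head_buffer, area);
}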
/**************************************************/ /**************************************************/
/* Functions below are used for buffer allocation */ /* Functions below are used for buffer allocation */
/**************************************************/ /**************************************************/
@ -136,13 +161,18 @@ static inline char *__b_get_emergency_buf(void)
#define __b_free(_buf) \ #define __b_free(_buf) \
do { \ do { \
char *area = (_buf)->area; \ char *area = (_buf)->area; \
size_t sz = (_buf)->size; \
\ \
/* let's first clear the area to save an occasional "show sess all" \ /* let's first clear the area to save an occasional "show sess all" \
* glancing over our shoulder from getting a dangling pointer. \ * glancing over our shoulder from getting a dangling pointer. \
*/ \ */ \
*(_buf) = BUF_NULL; \ *(_buf) = BUF_NULL; \
__ha_barrier_store(); \ __ha_barrier_store(); \
if (th_ctx->emergency_bufs_left < global.tune.reserved_bufs) \ /* if enabled, large buffers are always strictly greater \
* than the default buffers */ \
if (unlikely(b_is_large_sz(sz))) \
pool_free(pool_head_large_buffer, area); \
else if (th_ctx->emergency_bufs_left < global.tune.reserved_bufs) \
th_ctx->emergency_bufs[th_ctx->emergency_bufs_left++] = area; \ th_ctx->emergency_bufs[th_ctx->emergency_bufs_left++] = area; \
else \ else \
pool_free(pool_head_buffer, area); \ pool_free(pool_head_buffer, area); \

View file

@ -232,22 +232,28 @@ struct filter {
* 0: request channel, 1: response channel */ * 0: request channel, 1: response channel */
unsigned int pre_analyzers; /* bit field indicating analyzers to pre-process */ unsigned int pre_analyzers; /* bit field indicating analyzers to pre-process */
unsigned int post_analyzers; /* bit field indicating analyzers to post-process */ unsigned int post_analyzers; /* bit field indicating analyzers to post-process */
struct list list; /* Next filter for the same proxy/stream */ struct list list; /* Filter list for the stream */
/* req_list and res_list are exactly equivalent, except the order may differ */
struct list req_list; /* Filter list for request channel */
struct list res_list; /* Filter list for response channel */
}; };
/* /*
* Structure representing the "global" state of filters attached to a stream. * Structure representing the "global" state of filters attached to a stream.
* Doesn't hold much information, as the channels themselves hold a chn_flt struct
* which contains the per-channel members.
*/ */
struct strm_flt { struct strm_flt {
struct list filters; /* List of filters attached to a stream */ struct list filters; /* List of filters attached to a stream */
struct filter *current[2]; /* From which filter resume processing, for a specific channel.
* This is used for resumable callbacks only,
* If NULL, we start from the first filter.
* 0: request channel, 1: response channel */
unsigned short flags; /* STRM_FL_* */ unsigned short flags; /* STRM_FL_* */
unsigned char nb_req_data_filters; /* Number of data filters registered on the request channel */ };
unsigned char nb_rsp_data_filters; /* Number of data filters registered on the response channel */
unsigned long long offset[2]; /* structure holding filter state for some members that are channel oriented */
struct chn_flt {
struct list filters; /* List of filters attached to a channel */
struct filter *current; /* From which filter resume processing, for a specific channel. */
unsigned char nb_data_filters; /* Number of data filters registered on channel */
unsigned long long offset;
}; };
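A hedged sketch of what this split changes for call sites, assuming struct channel now embeds a chn_flt as its <flt> member (as the FLT_* macros in the next file suggest); the caller shown is hypothetical:

/* before: the stream-wide strm_flt held two counts and callers had to test
 * CF_ISRESP to pick nb_req_data_filters or nb_rsp_data_filters.
 * after: each channel carries its own state, so the test disappears:
 */
if (chn->flt.nb_data_filters)
        my_process_data_filters(s, chn);        /* hypothetical caller */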
#endif /* _HAPROXY_FILTERS_T_H */ #endif /* _HAPROXY_FILTERS_T_H */

View file

@ -40,13 +40,13 @@ extern const char *fcgi_flt_id;
/* Useful macros to access per-channel values. It can be safely used inside /* Useful macros to access per-channel values. It can be safely used inside
* filters. */ * filters. */
#define CHN_IDX(chn) (((chn)->flags & CF_ISRESP) == CF_ISRESP) #define CHN_IDX(chn) (((chn)->flags & CF_ISRESP) == CF_ISRESP)
#define FLT_STRM_OFF(s, chn) (strm_flt(s)->offset[CHN_IDX(chn)]) #define FLT_STRM_OFF(s, chn) (chn->flt.offset)
#define FLT_OFF(flt, chn) ((flt)->offset[CHN_IDX(chn)]) #define FLT_OFF(flt, chn) ((flt)->offset[CHN_IDX(chn)])
#define HAS_FILTERS(strm) ((strm)->strm_flt.flags & STRM_FLT_FL_HAS_FILTERS) #define HAS_FILTERS(strm) ((strm)->strm_flt.flags & STRM_FLT_FL_HAS_FILTERS)
#define HAS_REQ_DATA_FILTERS(strm) ((strm)->strm_flt.nb_req_data_filters != 0) #define HAS_REQ_DATA_FILTERS(strm) ((strm)->req.flt.nb_data_filters != 0)
#define HAS_RSP_DATA_FILTERS(strm) ((strm)->strm_flt.nb_rsp_data_filters != 0) #define HAS_RSP_DATA_FILTERS(strm) ((strm)->res.flt.nb_data_filters != 0)
#define HAS_DATA_FILTERS(strm, chn) (((chn)->flags & CF_ISRESP) ? HAS_RSP_DATA_FILTERS(strm) : HAS_REQ_DATA_FILTERS(strm)) #define HAS_DATA_FILTERS(strm, chn) (((chn)->flags & CF_ISRESP) ? HAS_RSP_DATA_FILTERS(strm) : HAS_REQ_DATA_FILTERS(strm))
#define IS_REQ_DATA_FILTER(flt) ((flt)->flags & FLT_FL_IS_REQ_DATA_FILTER) #define IS_REQ_DATA_FILTER(flt) ((flt)->flags & FLT_FL_IS_REQ_DATA_FILTER)
@ -137,14 +137,11 @@ static inline void
register_data_filter(struct stream *s, struct channel *chn, struct filter *filter) register_data_filter(struct stream *s, struct channel *chn, struct filter *filter)
{ {
if (!IS_DATA_FILTER(filter, chn)) { if (!IS_DATA_FILTER(filter, chn)) {
if (chn->flags & CF_ISRESP) { if (chn->flags & CF_ISRESP)
filter->flags |= FLT_FL_IS_RSP_DATA_FILTER; filter->flags |= FLT_FL_IS_RSP_DATA_FILTER;
strm_flt(s)->nb_rsp_data_filters++; else
}
else {
filter->flags |= FLT_FL_IS_REQ_DATA_FILTER; filter->flags |= FLT_FL_IS_REQ_DATA_FILTER;
strm_flt(s)->nb_req_data_filters++; chn->flt.nb_data_filters++;
}
} }
} }
@ -153,16 +150,64 @@ static inline void
unregister_data_filter(struct stream *s, struct channel *chn, struct filter *filter) unregister_data_filter(struct stream *s, struct channel *chn, struct filter *filter)
{ {
if (IS_DATA_FILTER(filter, chn)) { if (IS_DATA_FILTER(filter, chn)) {
if (chn->flags & CF_ISRESP) { if (chn->flags & CF_ISRESP)
filter->flags &= ~FLT_FL_IS_RSP_DATA_FILTER; filter->flags &= ~FLT_FL_IS_RSP_DATA_FILTER;
strm_flt(s)->nb_rsp_data_filters--; else
filter->flags &= ~FLT_FL_IS_REQ_DATA_FILTER;
chn->flt.nb_data_filters--;
}
}
/*
* flt_list_start() and flt_list_next() can be used to iterate over the list of filters
* for a given <strm> and <chn> combination. They automatically choose the proper
* list to iterate on depending on the channel.
*
* flt_list_start() has to be called exactly once to get the first value from the list;
* to get the following values, use flt_list_next() until NULL is returned.
*
* Example:
*
* struct filter *filter;
*
* for (filter = flt_list_start(stream, channel); filter;
* filter = flt_list_next(stream, channel, filter)) {
* ...
* }
*/
static inline struct filter *flt_list_start(struct stream *strm, struct channel *chn)
{
struct filter *filter;
if (chn->flags & CF_ISRESP) {
filter = LIST_NEXT(&chn->flt.filters, struct filter *, res_list);
if (&filter->res_list == &chn->flt.filters)
filter = NULL; /* empty list */
} }
else { else {
filter->flags &= ~FLT_FL_IS_REQ_DATA_FILTER; filter = LIST_NEXT(&chn->flt.filters, struct filter *, req_list);
strm_flt(s)->nb_req_data_filters--; if (&filter->req_list == &chn->flt.filters)
filter = NULL; /* empty list */
} }
return filter;
} }
static inline struct filter *flt_list_next(struct stream *strm, struct channel *chn,
struct filter *filter)
{
if (chn->flags & CF_ISRESP) {
filter = LIST_NEXT(&filter->res_list, struct filter *, res_list);
if (&filter->res_list == &chn->flt.filters)
filter = NULL; /* end of list */
}
else {
filter = LIST_NEXT(&filter->req_list, struct filter *, req_list);
if (&filter->req_list == &chn->flt.filters)
filter = NULL; /* end of list */
}
return filter;
} }
/* This function must be called when a filter alters payload data. It updates /* This function must be called when a filter alters payload data. It updates
@ -177,7 +222,8 @@ flt_update_offsets(struct filter *filter, struct channel *chn, int len)
struct stream *s = chn_strm(chn); struct stream *s = chn_strm(chn);
struct filter *f; struct filter *f;
list_for_each_entry(f, &strm_flt(s)->filters, list) { for (f = flt_list_start(s, chn); f;
f = flt_list_next(s, chn, f)) {
if (f == filter) if (f == filter)
break; break;
FLT_OFF(f, chn) += len; FLT_OFF(f, chn) += len;

View file

@ -67,7 +67,7 @@
#define GTUNE_USE_SYSTEMD (1<<10) #define GTUNE_USE_SYSTEMD (1<<10)
#define GTUNE_BUSY_POLLING (1<<11) #define GTUNE_BUSY_POLLING (1<<11)
/* (1<<12) unused */ #define GTUNE_PURGE_DEFAULTS (1<<12)
#define GTUNE_SET_DUMPABLE (1<<13) #define GTUNE_SET_DUMPABLE (1<<13)
#define GTUNE_USE_EVPORTS (1<<14) #define GTUNE_USE_EVPORTS (1<<14)
#define GTUNE_STRICT_LIMITS (1<<15) #define GTUNE_STRICT_LIMITS (1<<15)
@ -179,6 +179,7 @@ struct global {
uint recv_enough; /* how many input bytes at once are "enough" */ uint recv_enough; /* how many input bytes at once are "enough" */
uint bufsize; /* buffer size in bytes, defaults to BUFSIZE */ uint bufsize; /* buffer size in bytes, defaults to BUFSIZE */
uint bufsize_small;/* small buffer size in bytes */ uint bufsize_small;/* small buffer size in bytes */
uint bufsize_large;/* large buffer size in bytes */
int maxrewrite; /* buffer max rewrite size in bytes, defaults to MAXREWRITE */ int maxrewrite; /* buffer max rewrite size in bytes, defaults to MAXREWRITE */
int reserved_bufs; /* how many buffers can only be allocated for response */ int reserved_bufs; /* how many buffers can only be allocated for response */
int buf_limit; /* if not null, how many total buffers may only be allocated */ int buf_limit; /* if not null, how many total buffers may only be allocated */

View file

@ -24,6 +24,7 @@
#include <haproxy/api-t.h> #include <haproxy/api-t.h>
#include <haproxy/global-t.h> #include <haproxy/global-t.h>
#include <haproxy/cfgparse.h>
extern struct global global; extern struct global global;
extern int pid; /* current process id */ extern int pid; /* current process id */
@ -54,6 +55,8 @@ extern char **old_argv;
extern const char *old_unixsocket; extern const char *old_unixsocket;
extern int daemon_fd[2]; extern int daemon_fd[2];
extern int devnullfd; extern int devnullfd;
extern int fileless_mode;
extern struct cfgfile fileless_cfg;
struct proxy; struct proxy;
struct server; struct server;

View file

@ -263,6 +263,8 @@ static inline int h1_parse_chunk_size(const struct buffer *buf, int start, int s
const char *ptr_old = ptr; const char *ptr_old = ptr;
const char *end = b_wrap(buf); const char *end = b_wrap(buf);
uint64_t chunk = 0; uint64_t chunk = 0;
int backslash = 0;
int quote = 0;
stop -= start; // bytes left stop -= start; // bytes left
start = stop; // bytes to transfer start = stop; // bytes to transfer
@ -327,13 +329,37 @@ static inline int h1_parse_chunk_size(const struct buffer *buf, int start, int s
if (--stop == 0) if (--stop == 0)
return 0; return 0;
while (!HTTP_IS_CRLF(*ptr)) { /* The loop seeks the first CRLF or non-tab CTL char
* and stops there. If a backslash/quote is active,
* it's an error. If none, we assume it's the CRLF
* and go back to the top of the loop checking for
* CR then LF. This way CTLs, lone LF etc are handled
* in the fallback path. This makes it possible to
* protect remotes against their own possibly
* non-compliant chunk-ext parsers, which could
* mistakenly skip a quoted CRLF. Chunk extensions are
* not used anyway, except by attacks.
*/
while (!HTTP_IS_CTL(*ptr) || HTTP_IS_SPHT(*ptr)) {
if (backslash)
backslash = 0; // escaped char
else if (*ptr == '\\' && quote)
backslash = 1;
else if (*ptr == '\\') // backslash not permitted outside quotes
goto error;
else if (*ptr == '"') // begin/end of quoted-pair
quote = !quote;
if (++ptr >= end) if (++ptr >= end)
ptr = b_orig(buf); ptr = b_orig(buf);
if (--stop == 0) if (--stop == 0)
return 0; return 0;
} }
/* we have a CRLF now, loop above */
/* mismatched quotes / backslashes end here */
if (quote || backslash)
goto error;
/* CTLs (CRLF) fall to the common check */
continue; continue;
} }
else else
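To make the intent of the comment concrete, a hedged example of the kind of payload this now rejects; the bytes are a hypothetical attack sample, not taken from the patch:

/*   5;ext="foo<CR><LF>bar"<CR><LF>hello<CR><LF>0<CR><LF><CR><LF>
 *
 * A lenient peer could treat the quoted CR LF as part of the chunk extension
 * and disagree on where the chunk data starts. Here the CR (a CTL) stops the
 * scan while <quote> is still set, so the message is rejected as an error.
 */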

View file

@ -222,6 +222,7 @@ struct hlua_proxy_list {
}; };
struct hlua_proxy_list_iterator_context { struct hlua_proxy_list_iterator_context {
struct watcher px_watch; /* watcher to automatically update next pointer on backend deletion */
struct proxy *next; struct proxy *next;
char capabilities; char capabilities;
}; };

View file

@ -0,0 +1,36 @@
#ifndef _HAPROXY_HSTREAM_T_H
#define _HAPROXY_HSTREAM_T_H
#include <haproxy/dynbuf-t.h>
#include <haproxy/http-t.h>
#include <haproxy/obj_type-t.h>
/* hastream stream */
struct hstream {
enum obj_type obj_type;
struct session *sess;
struct stconn *sc;
struct task *task;
struct buffer req;
struct buffer res;
unsigned long long to_write; /* #of response data bytes to write after headers */
struct buffer_wait buf_wait; /* Wait list for buffer allocation */
int flags;
int ka; /* .0: keep-alive .1: forced .2: http/1.1, .3: was_reused */
int req_cache;
unsigned long long req_size; /* values passed in the URI to override the server's */
unsigned long long req_body; /* remaining body to be consumed from the request */
int req_code;
int res_wait; /* time to wait before replying in ms */
int res_time;
int req_chunked;
int req_random;
int req_after_res; /* Drain the request body after having sent the response */
enum http_meth_t req_meth;
};
#endif /* _HAPROXY_HSTREAM_T_H */

12
include/haproxy/hstream.h Normal file
View file

@ -0,0 +1,12 @@
#ifndef _HAPROXY_HSTREAM_H
#define _HAPROXY_HSTREAM_H
#include <haproxy/cfgparse.h>
#include <haproxy/hstream-t.h>
struct task *sc_hstream_io_cb(struct task *t, void *ctx, unsigned int state);
int hstream_wake(struct stconn *sc);
void hstream_shutdown(struct stconn *sc);
void *hstream_new(struct session *sess, struct stconn *sc, struct buffer *input);
#endif /* _HAPROXY_HSTREAM_H */

View file

@ -184,6 +184,7 @@ enum {
PERSIST_TYPE_NONE = 0, /* no persistence */ PERSIST_TYPE_NONE = 0, /* no persistence */
PERSIST_TYPE_FORCE, /* force-persist */ PERSIST_TYPE_FORCE, /* force-persist */
PERSIST_TYPE_IGNORE, /* ignore-persist */ PERSIST_TYPE_IGNORE, /* ignore-persist */
PERSIST_TYPE_BE_SWITCH, /* force-be-switch */
}; };
/* final results for http-request rules */ /* final results for http-request rules */

View file

@ -49,7 +49,7 @@ int http_req_replace_stline(int action, const char *replace, int len,
int http_res_set_status(unsigned int status, struct ist reason, struct stream *s); int http_res_set_status(unsigned int status, struct ist reason, struct stream *s);
void http_check_request_for_cacheability(struct stream *s, struct channel *req); void http_check_request_for_cacheability(struct stream *s, struct channel *req);
void http_check_response_for_cacheability(struct stream *s, struct channel *res); void http_check_response_for_cacheability(struct stream *s, struct channel *res);
enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn, unsigned int time, unsigned int bytes); enum rule_result http_wait_for_msg_body(struct stream *s, struct channel *chn, unsigned int time, unsigned int bytes, unsigned int large_buffer);
void http_perform_server_redirect(struct stream *s, struct stconn *sc); void http_perform_server_redirect(struct stream *s, struct stconn *sc);
void http_server_error(struct stream *s, struct stconn *sc, int err, int finst, struct http_reply *msg); void http_server_error(struct stream *s, struct stconn *sc, int err, int finst, struct http_reply *msg);
void http_reply_and_close(struct stream *s, short status, struct http_reply *msg); void http_reply_and_close(struct stream *s, short status, struct http_reply *msg);

View file

@ -177,7 +177,7 @@ static forceinline char *hsl_show_flags(char *buf, size_t len, const char *delim
#define HTX_FL_PARSING_ERROR 0x00000001 /* Set when a parsing error occurred */ #define HTX_FL_PARSING_ERROR 0x00000001 /* Set when a parsing error occurred */
#define HTX_FL_PROCESSING_ERROR 0x00000002 /* Set when a processing error occurred */ #define HTX_FL_PROCESSING_ERROR 0x00000002 /* Set when a processing error occurred */
#define HTX_FL_FRAGMENTED 0x00000004 /* Set when the HTX buffer is fragmented */ #define HTX_FL_FRAGMENTED 0x00000004 /* Set when the HTX buffer is fragmented */
/* 0x00000008 unused */ #define HTX_FL_UNORDERED 0x00000008 /* Set when the HTX buffer are not ordered */
#define HTX_FL_EOM 0x00000010 /* Set when end-of-message is reached from the HTTP point of view #define HTX_FL_EOM 0x00000010 /* Set when end-of-message is reached from the HTTP point of view
* (at worst, on the EOM block is missing) * (at worst, on the EOM block is missing)
*/ */
@ -192,7 +192,7 @@ static forceinline char *htx_show_flags(char *buf, size_t len, const char *delim
_(0); _(0);
/* flags */ /* flags */
_(HTX_FL_PARSING_ERROR, _(HTX_FL_PROCESSING_ERROR, _(HTX_FL_PARSING_ERROR, _(HTX_FL_PROCESSING_ERROR,
_(HTX_FL_FRAGMENTED, _(HTX_FL_EOM)))); _(HTX_FL_FRAGMENTED, _(HTX_FL_UNORDERED, _(HTX_FL_EOM)))));
/* epilogue */ /* epilogue */
_(~0U); _(~0U);
return buf; return buf;

View file

@ -474,11 +474,12 @@ static inline struct htx_sl *htx_add_stline(struct htx *htx, enum htx_blk_type t
static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist name, static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist name,
const struct ist value) const struct ist value)
{ {
struct htx_blk *blk; struct htx_blk *blk, *tailblk;
if (name.len > 255 || value.len > 1048575) if (name.len > 255 || value.len > 1048575)
return NULL; return NULL;
tailblk = htx_get_tail_blk(htx);
blk = htx_add_blk(htx, HTX_BLK_HDR, name.len + value.len); blk = htx_add_blk(htx, HTX_BLK_HDR, name.len + value.len);
if (!blk) if (!blk)
return NULL; return NULL;
@ -486,6 +487,8 @@ static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist n
blk->info += (value.len << 8) + name.len; blk->info += (value.len << 8) + name.len;
ist2bin_lc(htx_get_blk_ptr(htx, blk), name); ist2bin_lc(htx_get_blk_ptr(htx, blk), name);
memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len); memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len);
if (tailblk && htx_get_blk_type(tailblk) >= HTX_BLK_EOH)
htx->flags |= HTX_FL_UNORDERED;
return blk; return blk;
} }
@ -495,11 +498,12 @@ static inline struct htx_blk *htx_add_header(struct htx *htx, const struct ist n
static inline struct htx_blk *htx_add_trailer(struct htx *htx, const struct ist name, static inline struct htx_blk *htx_add_trailer(struct htx *htx, const struct ist name,
const struct ist value) const struct ist value)
{ {
struct htx_blk *blk; struct htx_blk *blk, *tailblk;
if (name.len > 255 || value.len > 1048575) if (name.len > 255 || value.len > 1048575)
return NULL; return NULL;
tailblk = htx_get_tail_blk(htx);
blk = htx_add_blk(htx, HTX_BLK_TLR, name.len + value.len); blk = htx_add_blk(htx, HTX_BLK_TLR, name.len + value.len);
if (!blk) if (!blk)
return NULL; return NULL;
@ -507,6 +511,8 @@ static inline struct htx_blk *htx_add_trailer(struct htx *htx, const struct ist
blk->info += (value.len << 8) + name.len; blk->info += (value.len << 8) + name.len;
ist2bin_lc(htx_get_blk_ptr(htx, blk), name); ist2bin_lc(htx_get_blk_ptr(htx, blk), name);
memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len); memcpy(htx_get_blk_ptr(htx, blk) + name.len, value.ptr, value.len);
if (tailblk && htx_get_blk_type(tailblk) >= HTX_BLK_EOT)
htx->flags |= HTX_FL_UNORDERED;
return blk; return blk;
} }

View file

@ -147,14 +147,14 @@ __attribute__((constructor)) static void __initcb_##linenum() \
#define _DECLARE_INITCALL(...) \ #define _DECLARE_INITCALL(...) \
__DECLARE_INITCALL(__VA_ARGS__) __DECLARE_INITCALL(__VA_ARGS__)
/* This requires that function <function> is called with pointer argument /* This requires that function <function> is called without arguments
* <argument> during init stage <stage> which must be one of init_stage. * during init stage <stage> which must be one of init_stage.
*/ */
#define INITCALL0(stage, function) \ #define INITCALL0(stage, function) \
_DECLARE_INITCALL(stage, __LINE__, function, 0, 0, 0) _DECLARE_INITCALL(stage, __LINE__, function, 0, 0, 0)
/* This requires that function <function> is called with pointer argument /* This requires that function <function> is called with pointer argument
* <argument> during init stage <stage> which must be one of init_stage. * <arg1> during init stage <stage> which must be one of init_stage.
*/ */
#define INITCALL1(stage, function, arg1) \ #define INITCALL1(stage, function, arg1) \
_DECLARE_INITCALL(stage, __LINE__, function, arg1, 0, 0) _DECLARE_INITCALL(stage, __LINE__, function, arg1, 0, 0)
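For reference, a hedged sketch of how the two corrected forms are typically used; the callback and keyword-list variable are placeholders, while STG_PREPARE/STG_REGISTER are existing init stages:

static void my_boot_init(void)
{
        /* runs once during STG_PREPARE, without any argument */
}
INITCALL0(STG_PREPARE, my_boot_init);

/* same mechanism with one pointer argument, e.g. a configuration keyword list */
INITCALL1(STG_REGISTER, cfg_register_keywords, &my_cfg_kws);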

View file

@ -28,13 +28,13 @@
#include <import/ebtree-t.h> #include <import/ebtree-t.h>
#include <haproxy/api-t.h> #include <haproxy/api-t.h>
#include <haproxy/counters-t.h>
#include <haproxy/guid-t.h> #include <haproxy/guid-t.h>
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/quic_cc-t.h> #include <haproxy/quic_cc-t.h>
#include <haproxy/quic_sock-t.h> #include <haproxy/quic_sock-t.h>
#include <haproxy/quic_tp-t.h> #include <haproxy/quic_tp-t.h>
#include <haproxy/receiver-t.h> #include <haproxy/receiver-t.h>
#include <haproxy/stats-t.h>
#include <haproxy/thread.h> #include <haproxy/thread.h>
/* Some pointer types reference below */ /* Some pointer types reference below */
@ -263,6 +263,7 @@ struct listener {
struct li_per_thread *per_thr; /* per-thread fields (one per thread in the group) */ struct li_per_thread *per_thr; /* per-thread fields (one per thread in the group) */
char *extra_counters_storage; /* storage for extra_counters */
EXTRA_COUNTERS(extra_counters); EXTRA_COUNTERS(extra_counters);
}; };

View file

@ -46,6 +46,7 @@ enum obj_type {
#ifdef USE_QUIC #ifdef USE_QUIC
OBJ_TYPE_DGRAM, /* object is a struct quic_dgram */ OBJ_TYPE_DGRAM, /* object is a struct quic_dgram */
#endif #endif
OBJ_TYPE_HATERM, /* object is a struct hstream */
OBJ_TYPE_ENTRIES /* last one : number of entries */ OBJ_TYPE_ENTRIES /* last one : number of entries */
} __attribute__((packed)) ; } __attribute__((packed)) ;

View file

@ -26,6 +26,7 @@
#include <haproxy/applet-t.h> #include <haproxy/applet-t.h>
#include <haproxy/check-t.h> #include <haproxy/check-t.h>
#include <haproxy/connection-t.h> #include <haproxy/connection-t.h>
#include <haproxy/hstream-t.h>
#include <haproxy/listener-t.h> #include <haproxy/listener-t.h>
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/pool.h> #include <haproxy/pool.h>
@ -189,6 +190,19 @@ static inline struct check *objt_check(enum obj_type *t)
return __objt_check(t); return __objt_check(t);
} }
static inline struct hstream *__objt_hstream(enum obj_type *t)
{
return container_of(t, struct hstream, obj_type);
}
static inline struct hstream *objt_hstream(enum obj_type *t)
{
if (!t || *t != OBJ_TYPE_HATERM)
return NULL;
return __objt_hstream(t);
}
#ifdef USE_QUIC #ifdef USE_QUIC
static inline struct quic_dgram *__objt_dgram(enum obj_type *t) static inline struct quic_dgram *__objt_dgram(enum obj_type *t)
{ {

View file

@ -380,6 +380,14 @@ static inline unsigned long ERR_peek_error_func(const char **func)
#endif #endif
#if (HA_OPENSSL_VERSION_NUMBER >= 0x40000000L) && !defined(OPENSSL_IS_AWSLC) && !defined(LIBRESSL_VERSION_NUMBER) && !defined(USE_OPENSSL_WOLFSSL)
# define X509_STORE_getX_objects(x) X509_STORE_get1_objects(x)
# define sk_X509_OBJECT_popX_free(x, y) sk_X509_OBJECT_pop_free(x,y)
#else
# define X509_STORE_getX_objects(x) X509_STORE_get0_objects(x)
# define sk_X509_OBJECT_popX_free(x, y) ({})
#endif
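A hedged sketch of the call pattern these two wrappers unify; error handling is elided and <store> is assumed to be an initialized X509_STORE:

STACK_OF(X509_OBJECT) *objs = X509_STORE_getX_objects(store);
int i;

for (i = 0; i < sk_X509_OBJECT_num(objs); i++) {
        X509_OBJECT *obj = sk_X509_OBJECT_value(objs, i);
        /* ... inspect the stored certificate or CRL ... */
}
/* frees the copy returned by get1_objects(); expands to a no-op otherwise */
sk_X509_OBJECT_popX_free(objs, X509_OBJECT_free);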
#if (HA_OPENSSL_VERSION_NUMBER >= 0x1010000fL) || (defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER >= 0x2070200fL) #if (HA_OPENSSL_VERSION_NUMBER >= 0x1010000fL) || (defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER >= 0x2070200fL)
#define __OPENSSL_110_CONST__ const #define __OPENSSL_110_CONST__ const
#else #else

View file

@ -72,8 +72,8 @@ struct pool_registration {
struct list list; /* link element */ struct list list; /* link element */
const char *name; /* name of the pool */ const char *name; /* name of the pool */
const char *file; /* where the pool is declared */ const char *file; /* where the pool is declared */
ullong size; /* expected object size */
unsigned int line; /* line in the file where the pool is declared, 0 if none */ unsigned int line; /* line in the file where the pool is declared, 0 if none */
unsigned int size; /* expected object size */
unsigned int flags; /* MEM_F_* */ unsigned int flags; /* MEM_F_* */
unsigned int type_align; /* type-imposed alignment; 0=unspecified */ unsigned int type_align; /* type-imposed alignment; 0=unspecified */
unsigned int align; /* expected alignment; 0=unspecified */ unsigned int align; /* expected alignment; 0=unspecified */

View file

@ -183,7 +183,7 @@ unsigned long long pool_total_allocated(void);
unsigned long long pool_total_used(void); unsigned long long pool_total_used(void);
void pool_flush(struct pool_head *pool); void pool_flush(struct pool_head *pool);
void pool_gc(struct pool_head *pool_ctx); void pool_gc(struct pool_head *pool_ctx);
struct pool_head *create_pool_with_loc(const char *name, unsigned int size, unsigned int align, struct pool_head *create_pool_with_loc(const char *name, ullong size, unsigned int align,
unsigned int flags, const char *file, unsigned int line); unsigned int flags, const char *file, unsigned int line);
struct pool_head *create_pool_from_reg(const char *name, struct pool_registration *reg); struct pool_head *create_pool_from_reg(const char *name, struct pool_registration *reg);
void create_pool_callback(struct pool_head **ptr, char *name, struct pool_registration *reg); void create_pool_callback(struct pool_head **ptr, char *name, struct pool_registration *reg);

View file

@ -38,7 +38,6 @@
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/queue-t.h> #include <haproxy/queue-t.h>
#include <haproxy/server-t.h> #include <haproxy/server-t.h>
#include <haproxy/stats-t.h>
#include <haproxy/tcpcheck-t.h> #include <haproxy/tcpcheck-t.h>
#include <haproxy/thread-t.h> #include <haproxy/thread-t.h>
#include <haproxy/tools-t.h> #include <haproxy/tools-t.h>
@ -240,13 +239,16 @@ enum PR_SRV_STATE_FILE {
#define PR_RE_JUNK_REQUEST 0x00020000 /* We received an incomplete or garbage response */ #define PR_RE_JUNK_REQUEST 0x00020000 /* We received an incomplete or garbage response */
/* Proxy flags */ /* Proxy flags */
#define PR_FL_DISABLED 0x01 /* The proxy was disabled in the configuration (not at runtime) */ #define PR_FL_DISABLED 0x00000001 /* The proxy was disabled in the configuration (not at runtime) */
#define PR_FL_STOPPED 0x02 /* The proxy was stopped */ #define PR_FL_STOPPED 0x00000002 /* The proxy was stopped */
#define PR_FL_READY 0x04 /* The proxy is ready to be used (initialized and configured) */ #define PR_FL_DEF_EXPLICIT_MODE 0x00000004 /* Proxy mode is explicitly defined - only used for defaults instance */
#define PR_FL_EXPLICIT_REF 0x08 /* The default proxy is explicitly referenced by another proxy */ #define PR_FL_EXPLICIT_REF 0x00000008 /* The default proxy is explicitly referenced by another proxy */
#define PR_FL_IMPLICIT_REF 0x10 /* The default proxy is implicitly referenced by another proxy */ #define PR_FL_IMPLICIT_REF 0x00000010 /* The default proxy is implicitly referenced by another proxy */
#define PR_FL_PAUSED 0x20 /* The proxy was paused at run time (reversible) */ #define PR_FL_PAUSED 0x00000020 /* The proxy was paused at run time (reversible) */
#define PR_FL_CHECKED 0x40 /* The proxy configuration was fully checked (including postparsing checks) */ #define PR_FL_CHECKED 0x00000040 /* The proxy configuration was fully checked (including postparsing checks) */
#define PR_FL_BE_UNPUBLISHED 0x00000080 /* The proxy cannot be targeted by content switching rules */
#define PR_FL_DELETED 0x00000100 /* Proxy has been deleted and must be manipulated with care */
#define PR_FL_NON_PURGEABLE 0x00000200 /* Proxy referenced by config elements which prevent its runtime removal. */
struct stream; struct stream;
@ -292,7 +294,8 @@ struct error_snapshot {
struct server *srv; /* server associated with the error (or NULL) */ struct server *srv; /* server associated with the error (or NULL) */
/* @64 */ /* @64 */
unsigned int ev_id; /* event number (counter incremented for each capture) */ unsigned int ev_id; /* event number (counter incremented for each capture) */
/* @68: 4 bytes hole here */ unsigned int buf_size; /* buffer size */
struct sockaddr_storage src; /* client's address */ struct sockaddr_storage src; /* client's address */
/**** protocol-specific part ****/ /**** protocol-specific part ****/
@ -304,17 +307,21 @@ struct error_snapshot {
struct proxy_per_tgroup { struct proxy_per_tgroup {
struct queue queue; struct queue queue;
struct lbprm_per_tgrp lbprm; struct lbprm_per_tgrp lbprm;
char *extra_counters_fe_storage; /* storage for extra_counters_fe */
char *extra_counters_be_storage; /* storage for extra_counters_be */
} THREAD_ALIGNED(); } THREAD_ALIGNED();
struct proxy { struct proxy {
enum obj_type obj_type; /* object type == OBJ_TYPE_PROXY */ enum obj_type obj_type; /* object type == OBJ_TYPE_PROXY */
char flags; /* bit field PR_FL_* */
enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP, ... */ enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP, ... */
char cap; /* supported capabilities (PR_CAP_*) */ char cap; /* supported capabilities (PR_CAP_*) */
/* 1 byte hole */
unsigned int flags; /* bit field PR_FL_* */
int to_log; /* things to be logged (LW_*), special value LW_LOGSTEPS == follow log-steps */ int to_log; /* things to be logged (LW_*), special value LW_LOGSTEPS == follow log-steps */
unsigned long last_change; /* internal use only: last time the proxy state was changed */ unsigned long last_change; /* internal use only: last time the proxy state was changed */
struct list global_list; /* list member for global proxy list */ struct list global_list; /* list member for global proxy list */
struct list el; /* attach point in various list - currently used only on defaults_list for defaults section */
unsigned int maxconn; /* max # of active streams on the frontend */ unsigned int maxconn; /* max # of active streams on the frontend */
@ -410,6 +417,7 @@ struct proxy {
int redispatch_after; /* number of retries before redispatch */ int redispatch_after; /* number of retries before redispatch */
unsigned down_time; /* total time the proxy was down */ unsigned down_time; /* total time the proxy was down */
int (*accept)(struct stream *s); /* application layer's accept() */ int (*accept)(struct stream *s); /* application layer's accept() */
void *(*stream_new_from_sc)(struct session *sess, struct stconn *sc, struct buffer *in); /* stream instantiation callback for mux stream connector */
struct conn_src conn_src; /* connection source settings */ struct conn_src conn_src; /* connection source settings */
enum obj_type *default_target; /* default target to use for accepted streams or NULL */ enum obj_type *default_target; /* default target to use for accepted streams or NULL */
struct proxy *next; struct proxy *next;
@ -472,7 +480,7 @@ struct proxy {
struct log_steps log_steps; /* bitfield of log origins where log should be generated during request handling */ struct log_steps log_steps; /* bitfield of log origins where log should be generated during request handling */
const char *file_prev; /* file of the previous instance found with the same name, or NULL */ const char *file_prev; /* file of the previous instance found with the same name, or NULL */
int line_prev; /* line of the previous instance found with the same name, or 0 */ int line_prev; /* line of the previous instance found with the same name, or 0 */
unsigned int refcount; /* refcount on this proxy (only used for default proxy for now) */ unsigned int def_ref; /* default proxy only refcount */
} conf; /* config information */ } conf; /* config information */
struct http_ext *http_ext; /* http ext options */ struct http_ext *http_ext; /* http ext options */
struct ceb_root *used_server_addr; /* list of server addresses in use */ struct ceb_root *used_server_addr; /* list of server addresses in use */
@ -501,15 +509,23 @@ struct proxy {
struct list filter_configs; /* list of the filters that are declared on this proxy */ struct list filter_configs; /* list of the filters that are declared on this proxy */
struct guid_node guid; /* GUID global tree node */ struct guid_node guid; /* GUID global tree node */
struct mt_list watcher_list; /* list of elems which currently references this proxy instance (currently only used with backends) */
uint refcount; /* refcount to keep proxy from being deleted during runtime */
EXTRA_COUNTERS(extra_counters_fe); EXTRA_COUNTERS(extra_counters_fe);
EXTRA_COUNTERS(extra_counters_be); EXTRA_COUNTERS(extra_counters_be);
THREAD_ALIGN(); THREAD_ALIGN();
unsigned int queueslength; /* Sum of the length of each queue */ /* these ones change all the time */
int served; /* # of active sessions currently being served */ int served; /* # of active sessions currently being served */
int totpend; /* total number of pending connections on this instance (for stats) */
unsigned int feconn, beconn; /* # of active frontend and backends streams */ unsigned int feconn, beconn; /* # of active frontend and backends streams */
THREAD_ALIGN();
/* these ones are only changed when queues are involved, but checked
* all the time.
*/
unsigned int queueslength; /* Sum of the length of each queue */
int totpend; /* total number of pending connections on this instance (for stats) */
}; };
struct switching_rule { struct switching_rule {

View file

@ -39,6 +39,9 @@ extern struct list proxies;
extern struct ceb_root *used_proxy_id; /* list of proxy IDs in use */ extern struct ceb_root *used_proxy_id; /* list of proxy IDs in use */
extern unsigned int error_snapshot_id; /* global ID assigned to each error then incremented */ extern unsigned int error_snapshot_id; /* global ID assigned to each error then incremented */
extern struct ceb_root *proxy_by_name; /* tree of proxies sorted by name */ extern struct ceb_root *proxy_by_name; /* tree of proxies sorted by name */
extern struct list defaults_list; /* all defaults proxies list */
extern unsigned int dynpx_next_id;
extern const struct cfg_opt cfg_opts[]; extern const struct cfg_opt cfg_opts[];
extern const struct cfg_opt cfg_opts2[]; extern const struct cfg_opt cfg_opts2[];
@ -55,9 +58,10 @@ void stop_proxy(struct proxy *p);
int stream_set_backend(struct stream *s, struct proxy *be); int stream_set_backend(struct stream *s, struct proxy *be);
void deinit_proxy(struct proxy *p); void deinit_proxy(struct proxy *p);
void free_proxy(struct proxy *p); void proxy_drop(struct proxy *p);
const char *proxy_cap_str(int cap); const char *proxy_cap_str(int cap);
const char *proxy_mode_str(int mode); const char *proxy_mode_str(int mode);
enum pr_mode str_to_proxy_mode(const char *mode);
const char *proxy_find_best_option(const char *word, const char **extra); const char *proxy_find_best_option(const char *word, const char **extra);
uint proxy_get_next_id(uint from); uint proxy_get_next_id(uint from);
void proxy_store_name(struct proxy *px); void proxy_store_name(struct proxy *px);
@ -67,16 +71,18 @@ struct proxy *proxy_find_best_match(int cap, const char *name, int id, int *diff
int proxy_cfg_ensure_no_http(struct proxy *curproxy); int proxy_cfg_ensure_no_http(struct proxy *curproxy);
int proxy_cfg_ensure_no_log(struct proxy *curproxy); int proxy_cfg_ensure_no_log(struct proxy *curproxy);
void init_new_proxy(struct proxy *p); void init_new_proxy(struct proxy *p);
void proxy_preset_defaults(struct proxy *defproxy);
void proxy_free_defaults(struct proxy *defproxy); void defaults_px_destroy(struct proxy *px);
void proxy_destroy_defaults(struct proxy *px); void defaults_px_destroy_all_unref(void);
void proxy_destroy_all_unref_defaults(void); void defaults_px_detach(struct proxy *px);
void proxy_ref_defaults(struct proxy *px, struct proxy *defpx); void defaults_px_ref_all(void);
void defaults_px_unref_all(void);
int proxy_ref_defaults(struct proxy *px, struct proxy *defpx, char **errmsg);
void proxy_unref_defaults(struct proxy *px); void proxy_unref_defaults(struct proxy *px);
void proxy_unref_or_destroy_defaults(struct proxy *px);
int setup_new_proxy(struct proxy *px, const char *name, unsigned int cap, char **errmsg); int setup_new_proxy(struct proxy *px, const char *name, unsigned int cap, char **errmsg);
struct proxy *alloc_new_proxy(const char *name, unsigned int cap, struct proxy *alloc_new_proxy(const char *name, unsigned int cap,
char **errmsg); char **errmsg);
void proxy_take(struct proxy *px);
struct proxy *parse_new_proxy(const char *name, unsigned int cap, struct proxy *parse_new_proxy(const char *name, unsigned int cap,
const char *file, int linenum, const char *file, int linenum,
const struct proxy *defproxy); const struct proxy *defproxy);
@ -94,6 +100,9 @@ int resolve_stick_rule(struct proxy *curproxy, struct sticking_rule *mrule);
void free_stick_rules(struct list *rules); void free_stick_rules(struct list *rules);
void free_server_rules(struct list *srules); void free_server_rules(struct list *srules);
int proxy_init_per_thr(struct proxy *px); int proxy_init_per_thr(struct proxy *px);
int proxy_finalize(struct proxy *px, int *err_code);
int be_check_for_deletion(const char *bename, struct proxy **pb, const char **pm);
/* /*
* This function returns a string containing the type of the proxy in a format * This function returns a string containing the type of the proxy in a format
@ -166,12 +175,12 @@ static inline int proxy_abrt_close(const struct proxy *px)
/* increase the number of cumulated connections received on the designated frontend */ /* increase the number of cumulated connections received on the designated frontend */
static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe) static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
{ {
if (fe->fe_counters.shared.tg[tgid - 1]) if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_conn); _HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_conn);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->conn_per_sec, 1); update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->conn_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_conn);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max, HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
update_freq_ctr(&fe->fe_counters._conn_per_sec, 1)); update_freq_ctr(&fe->fe_counters._conn_per_sec, 1));
} }
@ -179,12 +188,12 @@ static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
/* increase the number of cumulated connections accepted by the designated frontend */ /* increase the number of cumulated connections accepted by the designated frontend */
static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe) static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
{ {
if (fe->fe_counters.shared.tg[tgid - 1]) if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess); _HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->sess_per_sec, 1); update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max, HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.sps_max,
update_freq_ctr(&fe->fe_counters._sess_per_sec, 1)); update_freq_ctr(&fe->fe_counters._sess_per_sec, 1));
} }
@ -199,19 +208,19 @@ static inline void proxy_inc_fe_cum_sess_ver_ctr(struct listener *l, struct prox
http_ver > sizeof(fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver)) http_ver > sizeof(fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver))
return; return;
if (fe->fe_counters.shared.tg[tgid - 1]) if (fe->fe_counters.shared.tg)
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]); _HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
if (l && l->counters && l->counters->shared.tg[tgid - 1]) if (l && l->counters && l->counters->shared.tg && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]); _HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->cum_sess_ver[http_ver - 1]);
} }
/* increase the number of cumulated streams on the designated backend */ /* increase the number of cumulated streams on the designated backend */
static inline void proxy_inc_be_ctr(struct proxy *be) static inline void proxy_inc_be_ctr(struct proxy *be)
{ {
if (be->be_counters.shared.tg[tgid - 1]) if (be->be_counters.shared.tg) {
_HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess); _HA_ATOMIC_INC(&be->be_counters.shared.tg[tgid - 1]->cum_sess);
if (be->be_counters.shared.tg[tgid - 1])
update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1); update_freq_ctr(&be->be_counters.shared.tg[tgid - 1]->sess_per_sec, 1);
}
HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max, HA_ATOMIC_UPDATE_MAX(&be->be_counters.sps_max,
update_freq_ctr(&be->be_counters._sess_per_sec, 1)); update_freq_ctr(&be->be_counters._sess_per_sec, 1));
} }
@ -226,12 +235,12 @@ static inline void proxy_inc_fe_req_ctr(struct listener *l, struct proxy *fe,
if (http_ver >= sizeof(fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req)) if (http_ver >= sizeof(fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req) / sizeof(*fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req))
return; return;
if (fe->fe_counters.shared.tg[tgid - 1]) if (fe->fe_counters.shared.tg) {
_HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req[http_ver]); _HA_ATOMIC_INC(&fe->fe_counters.shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (l && l->counters && l->counters->shared.tg[tgid - 1])
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
if (fe->fe_counters.shared.tg[tgid - 1])
update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->req_per_sec, 1); update_freq_ctr(&fe->fe_counters.shared.tg[tgid - 1]->req_per_sec, 1);
}
if (l && l->counters && l->counters->shared.tg)
_HA_ATOMIC_INC(&l->counters->shared.tg[tgid - 1]->p.http.cum_req[http_ver]);
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max, HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.p.http.rps_max,
update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1)); update_freq_ctr(&fe->fe_counters.p.http._req_per_sec, 1));
} }

View file

@ -13,6 +13,7 @@ int qcs_http_handle_standalone_fin(struct qcs *qcs);
size_t qcs_http_snd_buf(struct qcs *qcs, struct buffer *buf, size_t count, size_t qcs_http_snd_buf(struct qcs *qcs, struct buffer *buf, size_t count,
char *fin); char *fin);
size_t qcs_http_reset_buf(struct qcs *qcs, struct buffer *buf, size_t count);
#endif /* USE_QUIC */ #endif /* USE_QUIC */

View file

@ -38,7 +38,6 @@
#include <haproxy/quic_cc-t.h> #include <haproxy/quic_cc-t.h>
#include <haproxy/quic_frame-t.h> #include <haproxy/quic_frame-t.h>
#include <haproxy/quic_openssl_compat-t.h> #include <haproxy/quic_openssl_compat-t.h>
#include <haproxy/quic_stats-t.h>
#include <haproxy/quic_tls-t.h> #include <haproxy/quic_tls-t.h>
#include <haproxy/quic_tp-t.h> #include <haproxy/quic_tp-t.h>
#include <haproxy/show_flags-t.h> #include <haproxy/show_flags-t.h>

View file

@ -30,6 +30,7 @@
#include <import/eb64tree.h> #include <import/eb64tree.h>
#include <import/ebmbtree.h> #include <import/ebmbtree.h>
#include <haproxy/counters.h>
#include <haproxy/chunk.h> #include <haproxy/chunk.h>
#include <haproxy/dynbuf.h> #include <haproxy/dynbuf.h>
#include <haproxy/ncbmbuf.h> #include <haproxy/ncbmbuf.h>
@ -193,7 +194,11 @@ static inline void *qc_counters(enum obj_type *o, const struct stats_module *m)
p = l ? l->bind_conf->frontend : p = l ? l->bind_conf->frontend :
s ? s->proxy : NULL; s ? s->proxy : NULL;
return p ? EXTRA_COUNTERS_GET(p->extra_counters_fe, m) : NULL; if (l && p)
return EXTRA_COUNTERS_GET(p->extra_counters_fe, m);
else if (s && p)
return EXTRA_COUNTERS_GET(p->extra_counters_be, m);
return NULL;
} }
void chunk_frm_appendf(struct buffer *buf, const struct quic_frame *frm); void chunk_frm_appendf(struct buffer *buf, const struct quic_frame *frm);

View file

@ -20,8 +20,7 @@
#define QUIC_OPENSSL_COMPAT_CLIENT_APPLICATION "CLIENT_TRAFFIC_SECRET_0" #define QUIC_OPENSSL_COMPAT_CLIENT_APPLICATION "CLIENT_TRAFFIC_SECRET_0"
#define QUIC_OPENSSL_COMPAT_SERVER_APPLICATION "SERVER_TRAFFIC_SECRET_0" #define QUIC_OPENSSL_COMPAT_SERVER_APPLICATION "SERVER_TRAFFIC_SECRET_0"
void quic_tls_compat_msg_callback(struct connection *conn, void quic_tls_compat_msg_callback(int write_p, int version, int content_type,
int write_p, int version, int content_type,
const void *buf, size_t len, SSL *ssl); const void *buf, size_t len, SSL *ssl);
int quic_tls_compat_init(struct bind_conf *bind_conf, SSL_CTX *ctx); int quic_tls_compat_init(struct bind_conf *bind_conf, SSL_CTX *ctx);
void quic_tls_compat_keylog_callback(const SSL *ssl, const char *line); void quic_tls_compat_keylog_callback(const SSL *ssl, const char *line);

View file

@ -6,8 +6,6 @@
#error "Must define USE_OPENSSL" #error "Must define USE_OPENSSL"
#endif #endif
extern struct stats_module quic_stats_module;
enum { enum {
QUIC_ST_RXBUF_FULL, QUIC_ST_RXBUF_FULL,
QUIC_ST_DROPPED_PACKET, QUIC_ST_DROPPED_PACKET,
@ -52,6 +50,7 @@ enum {
QUIC_ST_STREAM_DATA_BLOCKED, QUIC_ST_STREAM_DATA_BLOCKED,
QUIC_ST_STREAMS_BLOCKED_BIDI, QUIC_ST_STREAMS_BLOCKED_BIDI,
QUIC_ST_STREAMS_BLOCKED_UNI, QUIC_ST_STREAMS_BLOCKED_UNI,
QUIC_ST_NCBUF_GAP_LIMIT,
QUIC_STATS_COUNT /* must be the last */ QUIC_STATS_COUNT /* must be the last */
}; };
@ -99,6 +98,7 @@ struct quic_counters {
long long stream_data_blocked; /* total number of times STREAM_DATA_BLOCKED frame was received */ long long stream_data_blocked; /* total number of times STREAM_DATA_BLOCKED frame was received */
long long streams_blocked_bidi; /* total number of times STREAMS_BLOCKED_BIDI frame was received */ long long streams_blocked_bidi; /* total number of times STREAMS_BLOCKED_BIDI frame was received */
long long streams_blocked_uni; /* total number of times STREAMS_BLOCKED_UNI frame was received */ long long streams_blocked_uni; /* total number of times STREAMS_BLOCKED_UNI frame was received */
long long ncbuf_gap_limit; /* total number of times we failed to add data to ncbuf due to gap size limit */
}; };
#endif /* USE_QUIC */ #endif /* USE_QUIC */

View file

@ -7,7 +7,9 @@
#endif #endif
#include <haproxy/quic_stats-t.h> #include <haproxy/quic_stats-t.h>
#include <haproxy/stats-t.h>
extern struct stats_module quic_stats_module;
void quic_stats_transp_err_count_inc(struct quic_counters *ctrs, int error_code); void quic_stats_transp_err_count_inc(struct quic_counters *ctrs, int error_code);
#endif /* USE_QUIC */ #endif /* USE_QUIC */

View file

@ -65,7 +65,7 @@ struct shard_info {
uint nbgroups; /* number of groups in this shard (=#rx); Zero = unused. */ uint nbgroups; /* number of groups in this shard (=#rx); Zero = unused. */
uint nbthreads; /* number of threads in this shard (>=nbgroups) */ uint nbthreads; /* number of threads in this shard (>=nbgroups) */
struct receiver *ref; /* first one, reference for FDs to duplicate */ struct receiver *ref; /* first one, reference for FDs to duplicate */
struct receiver *members[MAX_TGROUPS]; /* all members of the shard (one per thread group) */ struct receiver **members; /* all members of the shard (one per thread group) */
}; };
/* This describes a receiver with all its characteristics (address, options, etc) */ /* This describes a receiver with all its characteristics (address, options, etc) */

View file

@ -27,7 +27,6 @@
#include <haproxy/connection-t.h> #include <haproxy/connection-t.h>
#include <haproxy/dns-t.h> #include <haproxy/dns-t.h>
#include <haproxy/obj_type-t.h> #include <haproxy/obj_type-t.h>
#include <haproxy/stats-t.h>
#include <haproxy/task-t.h> #include <haproxy/task-t.h>
#include <haproxy/thread.h> #include <haproxy/thread.h>

View file

@ -63,6 +63,7 @@ int smp_expr_output_type(struct sample_expr *expr);
int c_none(struct sample *smp); int c_none(struct sample *smp);
int c_pseudo(struct sample *smp); int c_pseudo(struct sample *smp);
int smp_dup(struct sample *smp); int smp_dup(struct sample *smp);
int sample_check_arg_base64(struct arg *arg, char **err);
/* /*
* This function just apply a cast on sample. It returns 0 if the cast is not * This function just apply a cast on sample. It returns 0 if the cast is not

View file

@ -38,7 +38,7 @@ void sc_update_tx(struct stconn *sc);
struct task *sc_conn_io_cb(struct task *t, void *ctx, unsigned int state); struct task *sc_conn_io_cb(struct task *t, void *ctx, unsigned int state);
int sc_conn_sync_recv(struct stconn *sc); int sc_conn_sync_recv(struct stconn *sc);
void sc_conn_sync_send(struct stconn *sc); int sc_conn_sync_send(struct stconn *sc);
int sc_applet_sync_recv(struct stconn *sc); int sc_applet_sync_recv(struct stconn *sc);
void sc_applet_sync_send(struct stconn *sc); void sc_applet_sync_send(struct stconn *sc);
@ -74,6 +74,70 @@ static inline struct buffer *sc_ob(const struct stconn *sc)
{ {
return &sc_oc(sc)->buf; return &sc_oc(sc)->buf;
} }
/* The application layer tells the stream connector that it just got the input
* buffer it was waiting for. A read activity is reported. The SC_FL_HAVE_BUFF
* flag is set and held until sc_used_buff() is called to indicate it was
* used.
*/
static inline void sc_have_buff(struct stconn *sc)
{
if (sc->flags & SC_FL_NEED_BUFF) {
sc->flags &= ~SC_FL_NEED_BUFF;
sc->flags |= SC_FL_HAVE_BUFF;
sc_ep_report_read_activity(sc);
}
}
/* The stream connector failed to get an input buffer and is waiting for it.
* It indicates a willingness to deliver data to the buffer, an attempt that will
* have to be retried. As such, callers will often automatically clear SE_FL_HAVE_NO_DATA
* to be called again as soon as SC_FL_NEED_BUFF is cleared.
*/
static inline void sc_need_buff(struct stconn *sc)
{
sc->flags |= SC_FL_NEED_BUFF;
}
/* The stream connector indicates that it has successfully allocated the buffer
* it was previously waiting for so it drops the SC_FL_HAVE_BUFF bit.
*/
static inline void sc_used_buff(struct stconn *sc)
{
sc->flags &= ~SC_FL_HAVE_BUFF;
}
/* Tell a stream connector some room was made in the input buffer and any
* failed attempt to inject data into it may be tried again. This is usually
* called after a successful transfer of buffer contents to the other side.
* A read activity is reported.
*/
static inline void sc_have_room(struct stconn *sc)
{
if (sc->flags & SC_FL_NEED_ROOM) {
sc->flags &= ~SC_FL_NEED_ROOM;
sc->room_needed = 0;
sc_ep_report_read_activity(sc);
}
}
/* The stream connector announces it failed to put data into the input buffer
* by lack of room. Since it indicates a willingness to deliver data to the
* buffer that will have to be retried. Usually the caller will also clear
* SE_FL_HAVE_NO_DATA to be called again as soon as SC_FL_NEED_ROOM is cleared.
*
* The caller is responsible for specifying the amount of free space required to
* progress. It must take care not to exceed the buffer size.
*/
static inline void sc_need_room(struct stconn *sc, ssize_t room_needed)
{
sc->flags |= SC_FL_NEED_ROOM;
BUG_ON_HOT(room_needed > (ssize_t)c_size(sc_ic(sc)));
sc->room_needed = room_needed;
}
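A rough sketch of how an endpoint's receive path typically pairs these helpers; the buffer accessor and allocator below are illustrative, only the sc_* calls come from this block:

/* hypothetical receive path of an endpoint */
if (!b_size(sc_ib(sc))) {
        if (!my_alloc_input_buf(sc)) {          /* illustrative allocation attempt */
                sc_need_buff(sc);               /* retried once sc_have_buff() is reported */
                return 0;
        }
        sc_used_buff(sc);                       /* buffer obtained, drop SC_FL_HAVE_BUFF */
}
if (!b_room(sc_ib(sc))) {
        sc_need_room(sc, 1);                    /* need at least one free byte to resume */
        return 0;
}
/* ... fill the buffer; the consumer calls sc_have_room() once it drained it ... */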
/* returns the stream's task associated to this stream connector */ /* returns the stream's task associated to this stream connector */
static inline struct task *sc_strm_task(const struct stconn *sc) static inline struct task *sc_strm_task(const struct stconn *sc)
{ {
@ -344,10 +408,15 @@ static inline int sc_sync_recv(struct stconn *sc)
/* Perform a synchronous send using the right version, depending on whether the endpoint is /* Perform a synchronous send using the right version, depending on whether the endpoint is
* a connection or an applet. * a connection or an applet.
*/ */
static inline void sc_sync_send(struct stconn *sc) static inline int sc_sync_send(struct stconn *sc, unsigned cnt)
{ {
if (sc_ep_test(sc, SE_FL_T_MUX)) if (!sc_ep_test(sc, SE_FL_T_MUX))
sc_conn_sync_send(sc); return 0;
if (cnt >= 2 && co_data(sc_oc(sc))) {
task_wakeup(__sc_strm(sc)->task, TASK_WOKEN_MSG);
return 0;
}
return sc_conn_sync_send(sc);
} }
/* Combines both sc_update_rx() and sc_update_tx() at once */ /* Combines both sc_update_rx() and sc_update_tx() at once */
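
The sc_need_buff()/sc_have_buff()/sc_used_buff() and sc_need_room()/sc_have_room() helpers moved into this header implement a small flag protocol between an endpoint and its stream connector: the endpoint records what it is missing (an input buffer, or free room in it), and the stream layer clears the condition and reports read activity when it is satisfied. Below is a rough sketch of an endpoint-side receive path using these helpers; my_endpoint_alloc_buf() and my_endpoint_push_data() are hypothetical placeholders for the mux/applet-specific logic, not HAProxy functions.

```c
#include <sys/types.h>
#include <haproxy/buf-t.h>
#include <haproxy/stconn.h>

/* placeholders for endpoint-specific logic (hypothetical) */
struct buffer *my_endpoint_alloc_buf(struct stconn *sc);
ssize_t my_endpoint_push_data(struct stconn *sc, struct buffer *buf);

/* Sketch only: intended call sequence around the sc_*_buff()/sc_*_room()
 * helpers shown above. */
static int my_endpoint_rcv(struct stconn *sc)
{
	struct buffer *buf = my_endpoint_alloc_buf(sc);
	ssize_t ret;

	if (!buf) {
		sc_need_buff(sc);       /* no input buffer yet: ask and retry later */
		return 0;
	}
	sc_used_buff(sc);               /* the buffer we were waiting for is now ours */

	ret = my_endpoint_push_data(sc, buf);
	if (ret < 0) {
		sc_need_room(sc, -ret); /* record how much free space would unblock us */
		return 0;
	}
	return 1;
}
```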

View file

@ -286,6 +286,7 @@ struct srv_per_tgroup {
struct queue queue; /* pending connections */ struct queue queue; /* pending connections */
struct server *server; /* pointer to the corresponding server */ struct server *server; /* pointer to the corresponding server */
struct eb32_node lb_node; /* node used for tree-based load balancing */ struct eb32_node lb_node; /* node used for tree-based load balancing */
char *extra_counters_storage; /* storage for extra_counters */
struct server *next_full; /* next server in the temporary full list */ struct server *next_full; /* next server in the temporary full list */
unsigned int last_other_tgrp_served; /* Last other tgrp we dequeued from */ unsigned int last_other_tgrp_served; /* Last other tgrp we dequeued from */
unsigned int self_served; /* Number of connection we dequeued from our own queue */ unsigned int self_served; /* Number of connection we dequeued from our own queue */
@ -383,7 +384,6 @@ struct server {
unsigned next_eweight; /* next pending eweight to commit */ unsigned next_eweight; /* next pending eweight to commit */
unsigned cumulative_weight; /* weight of servers prior to this one in the same group, for chash balancing */ unsigned cumulative_weight; /* weight of servers prior to this one in the same group, for chash balancing */
int maxqueue; /* maximum number of pending connections allowed */ int maxqueue; /* maximum number of pending connections allowed */
unsigned int queueslength; /* Sum of the length of each queue */
int shard; /* shard (in peers protocol context only) */ int shard; /* shard (in peers protocol context only) */
int log_bufsize; /* implicit ring bufsize (for log server only - in log backend) */ int log_bufsize; /* implicit ring bufsize (for log server only - in log backend) */
@ -406,6 +406,7 @@ struct server {
unsigned int max_used_conns; /* Max number of used connections (the counter is reset at each connection purges */ unsigned int max_used_conns; /* Max number of used connections (the counter is reset at each connection purges */
unsigned int est_need_conns; /* Estimate on the number of needed connections (max of curr and previous max_used) */ unsigned int est_need_conns; /* Estimate on the number of needed connections (max of curr and previous max_used) */
unsigned int curr_sess_idle_conns; /* Current number of idle connections attached to a session instead of idle/safe trees. */ unsigned int curr_sess_idle_conns; /* Current number of idle connections attached to a session instead of idle/safe trees. */
unsigned int queueslength; /* Sum of the length of each queue */
/* elements only used during boot, do not perturb and plug the hole */ /* elements only used during boot, do not perturb and plug the hole */
struct guid_node guid; /* GUID global tree node */ struct guid_node guid; /* GUID global tree node */

View file

@ -55,6 +55,7 @@ int srv_update_addr(struct server *s, void *ip, int ip_sin_family, struct server
struct sample_expr *_parse_srv_expr(char *expr, struct arg_list *args_px, struct sample_expr *_parse_srv_expr(char *expr, struct arg_list *args_px,
const char *file, int linenum, char **err); const char *file, int linenum, char **err);
int server_parse_exprs(struct server *srv, struct proxy *px, char **err); int server_parse_exprs(struct server *srv, struct proxy *px, char **err);
int srv_configure_auto_sni(struct server *srv, int *err_code, char **err);
int server_set_inetaddr(struct server *s, const struct server_inetaddr *inetaddr, struct server_inetaddr_updater updater, struct buffer *msg); int server_set_inetaddr(struct server *s, const struct server_inetaddr *inetaddr, struct server_inetaddr_updater updater, struct buffer *msg);
int server_set_inetaddr_warn(struct server *s, const struct server_inetaddr *inetaddr, struct server_inetaddr_updater updater); int server_set_inetaddr_warn(struct server *s, const struct server_inetaddr *inetaddr, struct server_inetaddr_updater updater);
void server_get_inetaddr(struct server *s, struct server_inetaddr *inetaddr); void server_get_inetaddr(struct server *s, struct server_inetaddr *inetaddr);
@ -207,7 +208,7 @@ static inline void server_index_id(struct proxy *px, struct server *srv)
/* increase the number of cumulated streams on the designated server */ /* increase the number of cumulated streams on the designated server */
static inline void srv_inc_sess_ctr(struct server *s) static inline void srv_inc_sess_ctr(struct server *s)
{ {
if (s->counters.shared.tg[tgid - 1]) { if (s->counters.shared.tg) {
_HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess); _HA_ATOMIC_INC(&s->counters.shared.tg[tgid - 1]->cum_sess);
update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1); update_freq_ctr(&s->counters.shared.tg[tgid - 1]->sess_per_sec, 1);
} }
@ -218,7 +219,7 @@ static inline void srv_inc_sess_ctr(struct server *s)
/* set the time of last session on the designated server */ /* set the time of last session on the designated server */
static inline void srv_set_sess_last(struct server *s) static inline void srv_set_sess_last(struct server *s)
{ {
if (s->counters.shared.tg[tgid - 1]) if (s->counters.shared.tg)
HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns)); HA_ATOMIC_STORE(&s->counters.shared.tg[tgid - 1]->last_sess, ns_to_sec(now_ns));
} }
@ -349,28 +350,26 @@ static inline int srv_is_transparent(const struct server *srv)
(srv->flags & SRV_F_MAPPORTS); (srv->flags & SRV_F_MAPPORTS);
} }
/* Detach server from proxy list. It is supported to call this /* Detach <srv> server from its parent proxy list.
* even if the server is not yet in the list *
* Must be called under thread isolation or when it is safe to assume * Must be called under thread isolation.
* that the parent proxy doesn't is not skimming through the server list
*/ */
static inline void srv_detach(struct server *srv) static inline void srv_detach(struct server *srv)
{ {
struct proxy *px = srv->proxy; struct proxy *px = srv->proxy;
if (px->srv == srv)
px->srv = srv->next;
else {
struct server *prev; struct server *prev;
if (px->srv == srv) {
px->srv = srv->next;
}
else {
for (prev = px->srv; prev && prev->next != srv; prev = prev->next) for (prev = px->srv; prev && prev->next != srv; prev = prev->next)
; ;
BUG_ON(!prev); /* Server instance not found in proxy list ? */
BUG_ON(!prev);
prev->next = srv->next; prev->next = srv->next;
} }
/* reset the proxy's ready_srv if it was this one */
/* Reset the proxy's ready_srv if it was this one. */
HA_ATOMIC_CAS(&px->ready_srv, &srv, NULL); HA_ATOMIC_CAS(&px->ready_srv, &srv, NULL);
} }
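
The reworked srv_detach() above now assumes the server is really present in its proxy's list (BUG_ON otherwise) and must be called under thread isolation. A minimal sketch of the expected calling pattern, using the real thread_isolate()/thread_release() primitives; the rest of the teardown is intentionally omitted and not taken from this series:

```c
#include <haproxy/server.h>
#include <haproxy/thread.h>

/* Sketch: unlink a dynamic server from its proxy under thread isolation. */
static void drop_dynamic_server(struct server *srv)
{
	thread_isolate();       /* srv_detach() must run isolated */
	srv_detach(srv);        /* unlink from srv->proxy->srv list */
	thread_release();
	/* actual freeing of <srv> would happen after isolation is lifted */
}
```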

View file

@ -73,6 +73,14 @@ struct ckch_conf {
char *id; char *id;
char **domains; char **domains;
} acme; } acme;
struct {
struct {
char *type; /* "RSA" or "ECSDA" */
int bits; /* bits for RSA */
char *curves; /* NID of curves for ECDSA*/
} key;
int on;
} gencrt;
}; };
/* /*

View file

@ -80,6 +80,7 @@ void ssl_store_delete_cafile_entry(struct cafile_entry *ca_e);
int ssl_store_load_ca_from_buf(struct cafile_entry *ca_e, char *cert_buf, int append); int ssl_store_load_ca_from_buf(struct cafile_entry *ca_e, char *cert_buf, int append);
int ssl_store_load_locations_file(char *path, int create_if_none, enum cafile_type type); int ssl_store_load_locations_file(char *path, int create_if_none, enum cafile_type type);
int __ssl_store_load_locations_file(char *path, int create_if_none, enum cafile_type type, int shuterror); int __ssl_store_load_locations_file(char *path, int create_if_none, enum cafile_type type, int shuterror);
const char *ha_default_cert_dir();
extern struct cert_exts cert_exts[]; extern struct cert_exts cert_exts[];
extern int (*ssl_commit_crlfile_cb)(const char *path, X509_STORE *ctx, char **err); extern int (*ssl_commit_crlfile_cb)(const char *path, X509_STORE *ctx, char **err);

View file

@ -28,6 +28,7 @@
/* crt-list entry functions */ /* crt-list entry functions */
void ssl_sock_free_ssl_conf(struct ssl_bind_conf *conf); void ssl_sock_free_ssl_conf(struct ssl_bind_conf *conf);
struct ssl_bind_conf *crtlist_dup_ssl_conf(struct ssl_bind_conf *src);
char **crtlist_dup_filters(char **args, int fcount); char **crtlist_dup_filters(char **args, int fcount);
void crtlist_free_filters(char **args); void crtlist_free_filters(char **args);
void crtlist_entry_free(struct crtlist_entry *entry); void crtlist_entry_free(struct crtlist_entry *entry);

View file

@ -32,6 +32,8 @@ int ssl_sock_set_generated_cert(SSL_CTX *ctx, unsigned int key, struct bind_conf
unsigned int ssl_sock_generated_cert_key(const void *data, size_t len); unsigned int ssl_sock_generated_cert_key(const void *data, size_t len);
int ssl_sock_gencert_load_ca(struct bind_conf *bind_conf); int ssl_sock_gencert_load_ca(struct bind_conf *bind_conf);
void ssl_sock_gencert_free_ca(struct bind_conf *bind_conf); void ssl_sock_gencert_free_ca(struct bind_conf *bind_conf);
EVP_PKEY *ssl_gen_EVP_PKEY(int keytype, int curves, int bits, char **errmsg);
X509 *ssl_gen_x509(EVP_PKEY *pkey);
#endif /* USE_OPENSSL */ #endif /* USE_OPENSSL */
#endif /* _HAPROXY_SSL_GENCERT_H */ #endif /* _HAPROXY_SSL_GENCERT_H */
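
The two helpers exported above split on-the-fly certificate generation into key creation and certificate creation. The sketch below only illustrates the call order; the interpretation of <keytype>, <curves> and <bits> (an EVP_PKEY type, a curve NID and an RSA key size, mirroring the gencrt fields seen earlier) is an assumption, as is the ownership handling on error.

```c
#include <openssl/evp.h>
#include <openssl/objects.h>
#include <openssl/x509.h>
#include <haproxy/ssl_gencert.h>

/* Sketch only: generate a P-256 key, then a certificate from it.
 * Parameter meanings are assumed, not documented by this diff. */
static X509 *gen_demo_cert(char **errmsg)
{
	EVP_PKEY *pkey = ssl_gen_EVP_PKEY(EVP_PKEY_EC, NID_X9_62_prime256v1, 0, errmsg);
	X509 *crt;

	if (!pkey)
		return NULL;
	crt = ssl_gen_x509(pkey);
	if (!crt)
		EVP_PKEY_free(pkey);
	return crt;
}
```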

View file

@ -194,7 +194,7 @@ struct issuer_chain {
struct connection; struct connection;
typedef void (*ssl_sock_msg_callback_func)(struct connection *conn, typedef void (*ssl_sock_msg_callback_func)(
int write_p, int version, int content_type, int write_p, int version, int content_type,
const void *buf, size_t len, SSL *ssl); const void *buf, size_t len, SSL *ssl);
@ -338,6 +338,8 @@ struct global_ssl {
int renegotiate; /* Renegotiate mode (SSL_RENEGOTIATE_ flag) */ int renegotiate; /* Renegotiate mode (SSL_RENEGOTIATE_ flag) */
char **passphrase_cmd; char **passphrase_cmd;
int passphrase_cmd_args_cnt; int passphrase_cmd_args_cnt;
unsigned int certificate_compression:1; /* allow to explicitely disable certificate compression */
}; };
/* The order here matters for picking a default context, /* The order here matters for picking a default context,
@ -361,6 +363,7 @@ struct passphrase_cb_data {
const char *path; const char *path;
struct ckch_data *ckch_data; struct ckch_data *ckch_data;
int passphrase_idx; int passphrase_idx;
int callback_called;
}; };
#endif /* USE_OPENSSL */ #endif /* USE_OPENSSL */

View file

@ -25,12 +25,13 @@
#include <haproxy/connection.h> #include <haproxy/connection.h>
#include <haproxy/counters.h>
#include <haproxy/openssl-compat.h> #include <haproxy/openssl-compat.h>
#include <haproxy/pool-t.h> #include <haproxy/pool-t.h>
#include <haproxy/proxy-t.h> #include <haproxy/proxy-t.h>
#include <haproxy/quic_conn-t.h> #include <haproxy/quic_conn-t.h>
#include <haproxy/ssl_sock-t.h> #include <haproxy/ssl_sock-t.h>
#include <haproxy/stats.h> #include <haproxy/stats-t.h>
#include <haproxy/thread.h> #include <haproxy/thread.h>
extern struct list tlskeys_reference; extern struct list tlskeys_reference;
@ -73,7 +74,7 @@ int ssl_sock_get_alpn(const struct connection *conn, void *xprt_ctx,
const char **str, int *len); const char **str, int *len);
int ssl_bio_and_sess_init(struct connection *conn, SSL_CTX *ssl_ctx, int ssl_bio_and_sess_init(struct connection *conn, SSL_CTX *ssl_ctx,
SSL **ssl, BIO **bio, BIO_METHOD *bio_meth, void *ctx); SSL **ssl, BIO **bio, BIO_METHOD *bio_meth, void *ctx);
int ssl_sock_srv_try_reuse_sess(struct ssl_sock_ctx *ctx, struct server *srv); void ssl_sock_srv_try_reuse_sess(struct ssl_sock_ctx *ctx, struct server *srv);
const char *ssl_sock_get_sni(struct connection *conn); const char *ssl_sock_get_sni(struct connection *conn);
const char *ssl_sock_get_cert_sig(struct connection *conn); const char *ssl_sock_get_cert_sig(struct connection *conn);
const char *ssl_sock_get_cipher_name(struct connection *conn); const char *ssl_sock_get_cipher_name(struct connection *conn);

View file

@ -57,6 +57,9 @@ const char *nid2nist(int nid);
const char *sigalg2str(int sigalg); const char *sigalg2str(int sigalg);
const char *curveid2str(int curve_id); const char *curveid2str(int curve_id);
int aes_process(struct buffer *data, struct buffer *nonce, struct buffer *key, int key_size,
struct buffer *aead_tag, struct buffer *aad, struct buffer *out, int decrypt, int gcm);
#endif /* _HAPROXY_SSL_UTILS_H */ #endif /* _HAPROXY_SSL_UTILS_H */
#endif /* USE_OPENSSL */ #endif /* USE_OPENSSL */

View file

@ -15,7 +15,7 @@ enum stfile_domain {
}; };
#define SHM_STATS_FILE_VER_MAJOR 1 #define SHM_STATS_FILE_VER_MAJOR 1
#define SHM_STATS_FILE_VER_MINOR 1 #define SHM_STATS_FILE_VER_MINOR 2
#define SHM_STATS_FILE_HEARTBEAT_TIMEOUT 60 /* passed this delay (seconds) process which has not #define SHM_STATS_FILE_HEARTBEAT_TIMEOUT 60 /* passed this delay (seconds) process which has not
* sent heartbeat will be considered down * sent heartbeat will be considered down
@ -47,7 +47,7 @@ struct shm_stats_file_hdr {
*/ */
struct { struct {
pid_t pid; pid_t pid;
int heartbeat; // last activity of this process + heartbeat timeout, in ticks uint heartbeat; // last activity of this process + heartbeat timeout, in ticks
} slots[64]; } slots[64];
int objects; /* actual number of objects stored in the shm */ int objects; /* actual number of objects stored in the shm */
int objects_slots; /* total available objects slots unless map is resized */ int objects_slots; /* total available objects slots unless map is resized */
@ -64,9 +64,9 @@ struct shm_stats_file_hdr {
*/ */
struct shm_stats_file_object { struct shm_stats_file_object {
char guid[GUID_MAX_LEN + 1]; char guid[GUID_MAX_LEN + 1];
uint8_t tgid; // thread group ID from 1 to 64 uint16_t tgid; // thread group ID
uint8_t type; // SHM_STATS_FILE_OBJECT_TYPE_* to know how to handle object.data uint8_t type; // SHM_STATS_FILE_OBJECT_TYPE_* to know how to handle object.data
ALWAYS_PAD(6); // 6 bytes hole, ensure it remains the same size 32 vs 64 bits arch ALWAYS_PAD(5); // 5 bytes hole, ensure it remains the same size 32 vs 64 bits arch
uint64_t users; // bitfield that corresponds to users of the object (see shm_stats_file_hdr slots) uint64_t users; // bitfield that corresponds to users of the object (see shm_stats_file_hdr slots)
/* as the struct may hold any of the types described here, let's make it /* as the struct may hold any of the types described here, let's make it
* so it may store up to the heaviest one using an union * so it may store up to the heaviest one using an union
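
In the shared stats file format above, each object carries a 64-bit <users> field where bit N means "slot N of shm_stats_file_hdr.slots[] still references this object"; widening <tgid> to 16 bits therefore only shrinks the explicit padding from 6 to 5 bytes so the structure keeps the same size on 32-bit and 64-bit builds. A small hedged sketch of how such a bitfield could be manipulated; these helpers are illustrative, not the functions HAProxy actually uses:

```c
#include <stdint.h>

/* Sketch: per-slot usage tracking in a 64-bit bitfield, one bit per entry
 * of the 64-slot process table in the header. Helper names are hypothetical. */
static inline void shm_obj_mark_user(uint64_t *users, int slot)
{
	__atomic_or_fetch(users, 1ULL << slot, __ATOMIC_RELAXED);
}

static inline void shm_obj_drop_user(uint64_t *users, int slot)
{
	__atomic_and_fetch(users, ~(1ULL << slot), __ATOMIC_RELAXED);
}

static inline int shm_obj_is_unused(uint64_t users)
{
	return users == 0; /* no live slot references the object anymore */
}
```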

View file

@ -25,6 +25,7 @@
#include <import/ebtree-t.h> #include <import/ebtree-t.h>
#include <haproxy/api-t.h> #include <haproxy/api-t.h>
#include <haproxy/buf-t.h> #include <haproxy/buf-t.h>
#include <haproxy/counters-t.h>
/* Flags for applet.ctx.stats.flags */ /* Flags for applet.ctx.stats.flags */
#define STAT_F_FMT_HTML 0x00000001 /* dump the stats in HTML format */ #define STAT_F_FMT_HTML 0x00000001 /* dump the stats in HTML format */
@ -515,23 +516,13 @@ struct field {
} u; } u;
}; };
enum counters_type {
COUNTERS_FE = 0,
COUNTERS_BE,
COUNTERS_SV,
COUNTERS_LI,
COUNTERS_RSLV,
COUNTERS_OFF_END
};
/* Entity used to generate statistics on an HAProxy component */ /* Entity used to generate statistics on an HAProxy component */
struct stats_module { struct stats_module {
struct list list; struct list list;
const char *name; const char *name;
/* functor used to generate the stats module using counters provided through data parameter */ /* function used to generate the stats module using counters provided through data parameter */
int (*fill_stats)(void *data, struct field *, unsigned int *); int (*fill_stats)(struct stats_module *, struct extra_counters *, struct field *, unsigned int *);
struct stat_col *stats; /* statistics provided by the module */ struct stat_col *stats; /* statistics provided by the module */
void *counters; /* initial values of allocated counters */ void *counters; /* initial values of allocated counters */
@ -543,12 +534,6 @@ struct stats_module {
char clearable; /* reset on a clear counters */ char clearable; /* reset on a clear counters */
}; };
struct extra_counters {
char *data; /* heap containing counters allocated in a linear fashion */
size_t size; /* size of allocated data */
enum counters_type type; /* type of object containing the counters */
};
/* stats_domain is used in a flag as a 1 byte field */ /* stats_domain is used in a flag as a 1 byte field */
enum stats_domain { enum stats_domain {
STATS_DOMAIN_PROXY = 0, STATS_DOMAIN_PROXY = 0,
@ -593,58 +578,9 @@ struct show_stat_ctx {
int iid, type, sid; /* proxy id, type and service id if bounding of stats is enabled */ int iid, type, sid; /* proxy id, type and service id if bounding of stats is enabled */
int st_code; /* the status code returned by an action */ int st_code; /* the status code returned by an action */
struct buffer chunk; /* temporary buffer which holds a single-line output */ struct buffer chunk; /* temporary buffer which holds a single-line output */
struct watcher px_watch; /* watcher to automatically update obj1 on backend deletion */
struct watcher srv_watch; /* watcher to automatically update obj2 on server deletion */ struct watcher srv_watch; /* watcher to automatically update obj2 on server deletion */
enum stat_state state; /* phase of output production */ enum stat_state state; /* phase of output production */
}; };
extern THREAD_LOCAL void *trash_counters;
#define EXTRA_COUNTERS(name) \
struct extra_counters *name
#define EXTRA_COUNTERS_GET(counters, mod) \
(likely(counters) ? \
((void *)((counters)->data + (mod)->counters_off[(counters)->type])) : \
(trash_counters))
#define EXTRA_COUNTERS_REGISTER(counters, ctype, alloc_failed_label) \
do { \
typeof(*counters) _ctr; \
_ctr = calloc(1, sizeof(*_ctr)); \
if (!_ctr) \
goto alloc_failed_label; \
_ctr->type = (ctype); \
*(counters) = _ctr; \
} while (0)
#define EXTRA_COUNTERS_ADD(mod, counters, new_counters, csize) \
do { \
typeof(counters) _ctr = (counters); \
(mod)->counters_off[_ctr->type] = _ctr->size; \
_ctr->size += (csize); \
} while (0)
#define EXTRA_COUNTERS_ALLOC(counters, alloc_failed_label) \
do { \
typeof(counters) _ctr = (counters); \
_ctr->data = malloc((_ctr)->size); \
if (!_ctr->data) \
goto alloc_failed_label; \
} while (0)
#define EXTRA_COUNTERS_INIT(counters, mod, init_counters, init_counters_size) \
do { \
typeof(counters) _ctr = (counters); \
memcpy(_ctr->data + mod->counters_off[_ctr->type], \
(init_counters), (init_counters_size)); \
} while (0)
#define EXTRA_COUNTERS_FREE(counters) \
do { \
if (counters) { \
free((counters)->data); \
free(counters); \
} \
} while (0)
#endif /* _HAPROXY_STATS_T_H */ #endif /* _HAPROXY_STATS_T_H */
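
The fill_stats callback of struct stats_module changes prototype above: instead of a raw void pointer to its counters, a module now receives its own descriptor plus the extra_counters container, whose definition and EXTRA_COUNTERS_* helpers leave this header (note the new counters-t.h include here and counters.h in stats.h below). A hedged sketch of a module callback under the new prototype; the counter structure and stat index are made up for the example, and it assumes EXTRA_COUNTERS_GET() remains available from the counters header the machinery moved to.

```c
#include <haproxy/counters.h>
#include <haproxy/stats.h>

/* Illustrative module counters and stat index (not from this series) */
struct mymod_counters {
	long long hits;
};
enum { MYMOD_ST_HITS, MYMOD_STATS_COUNT };

/* Sketch of a fill_stats callback matching the new prototype. */
static int mymod_fill_stats(struct stats_module *mod, struct extra_counters *ctrs,
                            struct field *stats, unsigned int *selected_field)
{
	struct mymod_counters *c = EXTRA_COUNTERS_GET(ctrs, mod);

	stats[MYMOD_ST_HITS] = mkf_u64(FN_COUNTER, c->hits);
	return 1;
}
```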

View file

@ -24,6 +24,7 @@
#define _HAPROXY_STATS_H #define _HAPROXY_STATS_H
#include <haproxy/api.h> #include <haproxy/api.h>
#include <haproxy/counters.h>
#include <haproxy/listener-t.h> #include <haproxy/listener-t.h>
#include <haproxy/stats-t.h> #include <haproxy/stats-t.h>
#include <haproxy/tools-t.h> #include <haproxy/tools-t.h>
@ -167,7 +168,8 @@ static inline enum stats_domain_px_cap stats_px_get_cap(uint32_t domain)
} }
int stats_allocate_proxy_counters_internal(struct extra_counters **counters, int stats_allocate_proxy_counters_internal(struct extra_counters **counters,
int type, int px_cap); int type, int px_cap,
char **storage, size_t step);
int stats_allocate_proxy_counters(struct proxy *px); int stats_allocate_proxy_counters(struct proxy *px);
void stats_register_module(struct stats_module *m); void stats_register_module(struct stats_module *m);

View file

@ -224,7 +224,7 @@ static forceinline char *sc_show_flags(char *buf, size_t len, const char *delim,
_(SC_FL_NEED_BUFF, _(SC_FL_NEED_ROOM, _(SC_FL_NEED_BUFF, _(SC_FL_NEED_ROOM,
_(SC_FL_RCV_ONCE, _(SC_FL_SND_ASAP, _(SC_FL_SND_NEVERWAIT, _(SC_FL_SND_EXP_MORE, _(SC_FL_RCV_ONCE, _(SC_FL_SND_ASAP, _(SC_FL_SND_NEVERWAIT, _(SC_FL_SND_EXP_MORE,
_(SC_FL_ABRT_WANTED, _(SC_FL_SHUT_WANTED, _(SC_FL_ABRT_DONE, _(SC_FL_SHUT_DONE, _(SC_FL_ABRT_WANTED, _(SC_FL_SHUT_WANTED, _(SC_FL_ABRT_DONE, _(SC_FL_SHUT_DONE,
_(SC_FL_EOS, _(SC_FL_HAVE_BUFF)))))))))))))))))))); _(SC_FL_EOS, _(SC_FL_HAVE_BUFF, _(SC_FL_NO_FASTFWD)))))))))))))))))))));
/* epilogue */ /* epilogue */
_(~0U); _(~0U);
return buf; return buf;

View file

@ -24,6 +24,7 @@
#include <haproxy/api.h> #include <haproxy/api.h>
#include <haproxy/connection.h> #include <haproxy/connection.h>
#include <haproxy/hstream-t.h>
#include <haproxy/htx-t.h> #include <haproxy/htx-t.h>
#include <haproxy/obj_type.h> #include <haproxy/obj_type.h>
#include <haproxy/stconn-t.h> #include <haproxy/stconn-t.h>
@ -45,10 +46,12 @@ void se_shutdown(struct sedesc *sedesc, enum se_shut_mode mode);
struct stconn *sc_new_from_endp(struct sedesc *sedesc, struct session *sess, struct buffer *input); struct stconn *sc_new_from_endp(struct sedesc *sedesc, struct session *sess, struct buffer *input);
struct stconn *sc_new_from_strm(struct stream *strm, unsigned int flags); struct stconn *sc_new_from_strm(struct stream *strm, unsigned int flags);
struct stconn *sc_new_from_check(struct check *check, unsigned int flags); struct stconn *sc_new_from_check(struct check *check, unsigned int flags);
struct stconn *sc_new_from_haterm(struct sedesc *sd, struct session *sess, struct buffer *input);
void sc_free(struct stconn *sc); void sc_free(struct stconn *sc);
int sc_attach_mux(struct stconn *sc, void *target, void *ctx); int sc_attach_mux(struct stconn *sc, void *target, void *ctx);
int sc_attach_strm(struct stconn *sc, struct stream *strm); int sc_attach_strm(struct stconn *sc, struct stream *strm);
int sc_attach_hstream(struct stconn *sc, struct hstream *hs);
void sc_destroy(struct stconn *sc); void sc_destroy(struct stconn *sc);
int sc_reset_endp(struct stconn *sc); int sc_reset_endp(struct stconn *sc);
@ -331,6 +334,21 @@ static inline struct check *sc_check(const struct stconn *sc)
return NULL; return NULL;
} }
/* Returns the haterm stream from a sc if the application is a
* haterm stream. Otherwise NULL is returned. __sc_hstream() returns the haterm
* stream without any control while sc_hstream() check the application type.
*/
static inline struct hstream *__sc_hstream(const struct stconn *sc)
{
return __objt_hstream(sc->app);
}
static inline struct hstream *sc_hstream(const struct stconn *sc)
{
if (obj_type(sc->app) == OBJ_TYPE_HATERM)
return __objt_hstream(sc->app);
return NULL;
}
/* Returns the name of the application layer's name for the stconn, /* Returns the name of the application layer's name for the stconn,
* or "NONE" when none is attached. * or "NONE" when none is attached.
*/ */
@ -397,67 +415,6 @@ static inline void se_need_remote_conn(struct sedesc *se)
se_fl_set(se, SE_FL_APPLET_NEED_CONN); se_fl_set(se, SE_FL_APPLET_NEED_CONN);
} }
/* The application layer tells the stream connector that it just got the input
* buffer it was waiting for. A read activity is reported. The SC_FL_HAVE_BUFF
* flag is set and held until sc_used_buff() is called to indicate it was
* used.
*/
static inline void sc_have_buff(struct stconn *sc)
{
if (sc->flags & SC_FL_NEED_BUFF) {
sc->flags &= ~SC_FL_NEED_BUFF;
sc->flags |= SC_FL_HAVE_BUFF;
sc_ep_report_read_activity(sc);
}
}
/* The stream connector failed to get an input buffer and is waiting for it.
* It indicates a willingness to deliver data to the buffer that will have to
* be retried. As such, callers will often automatically clear SE_FL_HAVE_NO_DATA
* to be called again as soon as SC_FL_NEED_BUFF is cleared.
*/
static inline void sc_need_buff(struct stconn *sc)
{
sc->flags |= SC_FL_NEED_BUFF;
}
/* The stream connector indicates that it has successfully allocated the buffer
* it was previously waiting for so it drops the SC_FL_HAVE_BUFF bit.
*/
static inline void sc_used_buff(struct stconn *sc)
{
sc->flags &= ~SC_FL_HAVE_BUFF;
}
/* Tell a stream connector some room was made in the input buffer and any
* failed attempt to inject data into it may be tried again. This is usually
* called after a successful transfer of buffer contents to the other side.
* A read activity is reported.
*/
static inline void sc_have_room(struct stconn *sc)
{
if (sc->flags & SC_FL_NEED_ROOM) {
sc->flags &= ~SC_FL_NEED_ROOM;
sc->room_needed = 0;
sc_ep_report_read_activity(sc);
}
}
/* The stream connector announces it failed to put data into the input buffer
* by lack of room. Since it indicates a willingness to deliver data to the
* buffer that will have to be retried. Usually the caller will also clear
* SE_FL_HAVE_NO_DATA to be called again as soon as SC_FL_NEED_ROOM is cleared.
*
* The caller is responsible to specified the amount of free space required to
* progress. It must take care to not exceed the buffer size.
*/
static inline void sc_need_room(struct stconn *sc, ssize_t room_needed)
{
sc->flags |= SC_FL_NEED_ROOM;
BUG_ON_HOT(room_needed > (ssize_t)global.tune.bufsize);
sc->room_needed = room_needed;
}
/* The stream endpoint indicates that it's ready to consume data from the /* The stream endpoint indicates that it's ready to consume data from the
* stream's output buffer. Report a send activity if the SE is unblocked. * stream's output buffer. Report a send activity if the SE is unblocked.
*/ */
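
The new __sc_hstream()/sc_hstream() pair added above follows the same convention as the existing sc_strm()/sc_check()/sc_appctx() accessors: the double-underscore variant trusts the caller about the application type, while the plain one validates obj_type() first and returns NULL otherwise. A tiny hedged usage sketch:

```c
#include <haproxy/stconn.h>

/* Sketch: only the checked accessor is safe when the app type is unknown. */
static void handle_sc_app(struct stconn *sc)
{
	struct hstream *hs = sc_hstream(sc);

	if (hs) {
		/* <sc> is driven by a haterm stream; use <hs> here */
	}
	/* __sc_hstream(sc) skips the obj_type() check and must only be used
	 * where the caller already knows sc->app is an hstream */
}
```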

View file

@ -320,7 +320,10 @@ struct stream {
struct list *current_rule_list; /* this is used to store the current executed rule list. */ struct list *current_rule_list; /* this is used to store the current executed rule list. */
void *current_rule; /* this is used to store the current rule to be resumed. */ void *current_rule; /* this is used to store the current rule to be resumed. */
int rules_exp; /* expiration date for current rules execution */ int rules_exp; /* expiration date for current rules execution */
int tunnel_timeout; int tunnel_timeout; /* per-stream tunnel timeout, set by set-timeout action */
int connect_timeout; /* per-stream connect timeout, set by set-timeout action */
int queue_timeout; /* per-stream queue timeout, set by set-timeout action */
int tarpit_timeout; /* per-stream tarpit timeout, set by set-timeout action */
struct { struct {
void *ptr; /* Pointer on the entity (def: NULL) */ void *ptr; /* Pointer on the entity (def: NULL) */
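
struct stream above gains dedicated connect, queue and tarpit timeouts next to the existing tunnel one, all documented as set by the set-timeout action. A hedged sketch of how such a per-stream override would typically be resolved against the backend default; the helper below is hypothetical, assuming the unset value is TICK_ETERNITY as tested by tick_isset():

```c
#include <haproxy/proxy-t.h>
#include <haproxy/stream-t.h>
#include <haproxy/ticks.h>

/* Hypothetical helper: prefer the per-stream value when set-timeout
 * provided one, otherwise fall back to the backend's configured timeout. */
static inline int strm_effective_connect_timeout(const struct stream *s)
{
	return tick_isset(s->connect_timeout) ? s->connect_timeout
	                                      : s->be->timeout.connect;
}
```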

View file

@ -59,7 +59,7 @@ extern struct pool_head *pool_head_uniqueid;
extern struct data_cb sess_conn_cb; extern struct data_cb sess_conn_cb;
struct stream *stream_new(struct session *sess, struct stconn *sc, struct buffer *input); void *stream_new(struct session *sess, struct stconn *sc, struct buffer *input);
void stream_free(struct stream *s); void stream_free(struct stream *s);
int stream_upgrade_from_sc(struct stconn *sc, struct buffer *input); int stream_upgrade_from_sc(struct stconn *sc, struct buffer *input);
int stream_set_http_mode(struct stream *s, const struct mux_proto_list *mux_proto); int stream_set_http_mode(struct stream *s, const struct mux_proto_list *mux_proto);

View file

@ -217,6 +217,7 @@ enum lock_label {
QC_CID_LOCK, QC_CID_LOCK,
CACHE_LOCK, CACHE_LOCK,
GUID_LOCK, GUID_LOCK,
PROXIES_DEL_LOCK,
OTHER_LOCK, OTHER_LOCK,
/* WT: make sure never to use these ones outside of development, /* WT: make sure never to use these ones outside of development,
* we need them for lock profiling! * we need them for lock profiling!

View file

@ -60,7 +60,6 @@ extern int thread_cpus_enabled_at_boot;
/* Only way found to replace variables with constants that are optimized away /* Only way found to replace variables with constants that are optimized away
* at build time. * at build time.
*/ */
enum { all_tgroups_mask = 1UL };
enum { tid_bit = 1UL }; enum { tid_bit = 1UL };
enum { tid = 0 }; enum { tid = 0 };
enum { tgid = 1 }; enum { tgid = 1 };
@ -208,7 +207,6 @@ void wait_for_threads_completion();
void set_thread_cpu_affinity(); void set_thread_cpu_affinity();
unsigned long long ha_get_pthread_id(unsigned int thr); unsigned long long ha_get_pthread_id(unsigned int thr);
extern volatile unsigned long all_tgroups_mask;
extern volatile unsigned int rdv_requests; extern volatile unsigned int rdv_requests;
extern volatile unsigned int isolated_thread; extern volatile unsigned int isolated_thread;
extern THREAD_LOCAL unsigned int tid; /* The thread id */ extern THREAD_LOCAL unsigned int tid; /* The thread id */
@ -364,15 +362,19 @@ static inline unsigned long thread_isolated()
extern uint64_t now_mono_time(void); \ extern uint64_t now_mono_time(void); \
if (_LK_ != _LK_UN) { \ if (_LK_ != _LK_UN) { \
th_ctx->lock_level += bal; \ th_ctx->lock_level += bal; \
if (unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) \ if (unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) \
lock_start = now_mono_time(); \ lock_start = now_mono_time(); \
} \ } \
(void)(expr); \ (void)(expr); \
if (_LK_ == _LK_UN) { \ if (_LK_ == _LK_UN) { \
th_ctx->lock_level += bal; \ th_ctx->lock_level += bal; \
if (th_ctx->lock_level == 0 && unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) \ if (th_ctx->lock_level == 0 &&\
unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) \
th_ctx->locked_total += now_mono_time() - th_ctx->lock_start_date; \ th_ctx->locked_total += now_mono_time() - th_ctx->lock_start_date; \
} else if (unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) { \ } else if (unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) { \
uint64_t now = now_mono_time(); \ uint64_t now = now_mono_time(); \
if (lock_start) \ if (lock_start) \
th_ctx->lock_wait_total += now - lock_start; \ th_ctx->lock_wait_total += now - lock_start; \
@ -386,7 +388,8 @@ static inline unsigned long thread_isolated()
typeof(expr) _expr = (expr); \ typeof(expr) _expr = (expr); \
if (_expr == 0) { \ if (_expr == 0) { \
th_ctx->lock_level += bal; \ th_ctx->lock_level += bal; \
if (unlikely(th_ctx->flags & TH_FL_TASK_PROFILING)) { \ if (unlikely((th_ctx->flags & (TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L)) == \
(TH_FL_TASK_PROFILING|TH_FL_TASK_PROFILING_L))) { \
if (_LK_ == _LK_UN && th_ctx->lock_level == 0) \ if (_LK_ == _LK_UN && th_ctx->lock_level == 0) \
th_ctx->locked_total += now_mono_time() - th_ctx->lock_start_date; \ th_ctx->locked_total += now_mono_time() - th_ctx->lock_start_date; \
else if (_LK_ != _LK_UN && th_ctx->lock_level == 1) \ else if (_LK_ != _LK_UN && th_ctx->lock_level == 1) \

View file

@ -42,7 +42,7 @@ struct thread_set {
ulong abs[(MAX_THREADS + LONGBITS - 1) / LONGBITS]; ulong abs[(MAX_THREADS + LONGBITS - 1) / LONGBITS];
ulong rel[MAX_TGROUPS]; ulong rel[MAX_TGROUPS];
}; };
ulong grps; /* bit field of all non-empty groups, 0 for abs */ ulong nbgrps; /* Number of thread groups, 0 for abs */
}; };
/* tasklet classes */ /* tasklet classes */
@ -69,6 +69,8 @@ enum {
#define TH_FL_IN_DBG_HANDLER 0x00000100 /* thread currently in the debug signal handler */ #define TH_FL_IN_DBG_HANDLER 0x00000100 /* thread currently in the debug signal handler */
#define TH_FL_IN_WDT_HANDLER 0x00000200 /* thread currently in the wdt signal handler */ #define TH_FL_IN_WDT_HANDLER 0x00000200 /* thread currently in the wdt signal handler */
#define TH_FL_IN_ANY_HANDLER 0x00000380 /* mask to test if the thread is in any signal handler */ #define TH_FL_IN_ANY_HANDLER 0x00000380 /* mask to test if the thread is in any signal handler */
#define TH_FL_TASK_PROFILING_L 0x00000400 /* task profiling in locks (also requires TASK_PROFILING) */
#define TH_FL_TASK_PROFILING_M 0x00000800 /* task profiling in mem alloc (also requires TASK_PROFILING) */
/* we have 4 buffer-wait queues, in highest to lowest emergency order */ /* we have 4 buffer-wait queues, in highest to lowest emergency order */
#define DYNBUF_NBQ 4 #define DYNBUF_NBQ 4

View file

@ -77,7 +77,7 @@ static inline int thread_set_nth_group(const struct thread_set *ts, int n)
{ {
int i; int i;
if (ts->grps) { if (ts->nbgrps) {
for (i = 0; i < MAX_TGROUPS; i++) for (i = 0; i < MAX_TGROUPS; i++)
if (ts->rel[i] && !n--) if (ts->rel[i] && !n--)
return i + 1; return i + 1;
@ -95,7 +95,7 @@ static inline ulong thread_set_nth_tmask(const struct thread_set *ts, int n)
{ {
int i; int i;
if (ts->grps) { if (ts->nbgrps) {
for (i = 0; i < MAX_TGROUPS; i++) for (i = 0; i < MAX_TGROUPS; i++)
if (ts->rel[i] && !n--) if (ts->rel[i] && !n--)
return ts->rel[i]; return ts->rel[i];
@ -111,7 +111,7 @@ static inline void thread_set_pin_grp1(struct thread_set *ts, ulong mask)
{ {
int i; int i;
ts->grps = 1; ts->nbgrps = 1;
ts->rel[0] = mask; ts->rel[0] = mask;
for (i = 1; i < MAX_TGROUPS; i++) for (i = 1; i < MAX_TGROUPS; i++)
ts->rel[i] = 0; ts->rel[i] = 0;

View file

@ -466,6 +466,13 @@ char *escape_string(char *start, char *stop,
const char escape, const long *map, const char escape, const long *map,
const char *string, const char *string_stop); const char *string, const char *string_stop);
/*
* Appends a quoted and escaped string to a chunk buffer. The string is
* enclosed in double quotes and special characters are escaped with backslash.
* Returns 0 on success, -1 if the buffer is too small (output is rolled back).
*/
int chunk_escape_string(struct buffer *chunk, const char *str, size_t len);
/* Below are RFC8949 compliant cbor encode helper functions, see source /* Below are RFC8949 compliant cbor encode helper functions, see source
* file for functions descriptions * file for functions descriptions
*/ */
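
chunk_escape_string() declared above emits the string between double quotes with backslash escaping and rolls the chunk back when the result does not fit. A minimal usage sketch; the key="value" framing is just an example:

```c
#include <string.h>
#include <haproxy/chunk.h>

/* Sketch: append key="escaped value" to a chunk; returns 0 on success,
 * -1 if the quoted value does not fit (output is rolled back). */
static int emit_quoted_kv(struct buffer *out, const char *key, const char *val)
{
	chunk_appendf(out, "%s=", key);
	return chunk_escape_string(out, val, strlen(val));
}
```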

View file

@ -45,6 +45,7 @@
#define TRC_ARG_CHK (1 << 3) #define TRC_ARG_CHK (1 << 3)
#define TRC_ARG_QCON (1 << 4) #define TRC_ARG_QCON (1 << 4)
#define TRC_ARG_APPCTX (1 << 5) #define TRC_ARG_APPCTX (1 << 5)
#define TRC_ARG_HSTRM (1 << 6)
#define TRC_ARG1_PRIV (TRC_ARG_PRIV << 0) #define TRC_ARG1_PRIV (TRC_ARG_PRIV << 0)
#define TRC_ARG1_CONN (TRC_ARG_CONN << 0) #define TRC_ARG1_CONN (TRC_ARG_CONN << 0)
@ -53,6 +54,7 @@
#define TRC_ARG1_CHK (TRC_ARG_CHK << 0) #define TRC_ARG1_CHK (TRC_ARG_CHK << 0)
#define TRC_ARG1_QCON (TRC_ARG_QCON << 0) #define TRC_ARG1_QCON (TRC_ARG_QCON << 0)
#define TRC_ARG1_APPCTX (TRC_ARG_APPCTX << 0) #define TRC_ARG1_APPCTX (TRC_ARG_APPCTX << 0)
#define TRC_ARG1_HSTRM (TRC_ARG_HSTRM << 0)
#define TRC_ARG2_PRIV (TRC_ARG_PRIV << 8) #define TRC_ARG2_PRIV (TRC_ARG_PRIV << 8)
#define TRC_ARG2_CONN (TRC_ARG_CONN << 8) #define TRC_ARG2_CONN (TRC_ARG_CONN << 8)
@ -61,6 +63,7 @@
#define TRC_ARG2_CHK (TRC_ARG_CHK << 8) #define TRC_ARG2_CHK (TRC_ARG_CHK << 8)
#define TRC_ARG2_QCON (TRC_ARG_QCON << 8) #define TRC_ARG2_QCON (TRC_ARG_QCON << 8)
#define TRC_ARG2_APPCTX (TRC_ARG_APPCTX << 8) #define TRC_ARG2_APPCTX (TRC_ARG_APPCTX << 8)
#define TRC_ARG2_HSTRM (TRC_ARG_HSTRM << 8)
#define TRC_ARG3_PRIV (TRC_ARG_PRIV << 16) #define TRC_ARG3_PRIV (TRC_ARG_PRIV << 16)
#define TRC_ARG3_CONN (TRC_ARG_CONN << 16) #define TRC_ARG3_CONN (TRC_ARG_CONN << 16)
@ -69,6 +72,7 @@
#define TRC_ARG3_CHK (TRC_ARG_CHK << 16) #define TRC_ARG3_CHK (TRC_ARG_CHK << 16)
#define TRC_ARG3_QCON (TRC_ARG_QCON << 16) #define TRC_ARG3_QCON (TRC_ARG_QCON << 16)
#define TRC_ARG3_APPCTX (TRC_ARG_APPCTX << 16) #define TRC_ARG3_APPCTX (TRC_ARG_APPCTX << 16)
#define TRC_ARG3_HSTRM (TRC_ARG_HSTRM << 16)
#define TRC_ARG4_PRIV (TRC_ARG_PRIV << 24) #define TRC_ARG4_PRIV (TRC_ARG_PRIV << 24)
#define TRC_ARG4_CONN (TRC_ARG_CONN << 24) #define TRC_ARG4_CONN (TRC_ARG_CONN << 24)
@ -77,6 +81,7 @@
#define TRC_ARG4_CHK (TRC_ARG_CHK << 24) #define TRC_ARG4_CHK (TRC_ARG_CHK << 24)
#define TRC_ARG4_QCON (TRC_ARG_QCON << 24) #define TRC_ARG4_QCON (TRC_ARG_QCON << 24)
#define TRC_ARG4_APPCTX (TRC_ARG_APPCTX << 24) #define TRC_ARG4_APPCTX (TRC_ARG_APPCTX << 24)
#define TRC_ARG4_HSTRM (TRC_ARG_HSTRM << 24)
/* usable to detect the presence of any arg of the desired type */ /* usable to detect the presence of any arg of the desired type */
#define TRC_ARGS_CONN (TRC_ARG_CONN * 0x01010101U) #define TRC_ARGS_CONN (TRC_ARG_CONN * 0x01010101U)
@ -85,6 +90,7 @@
#define TRC_ARGS_CHK (TRC_ARG_CHK * 0x01010101U) #define TRC_ARGS_CHK (TRC_ARG_CHK * 0x01010101U)
#define TRC_ARGS_QCON (TRC_ARG_QCON * 0x01010101U) #define TRC_ARGS_QCON (TRC_ARG_QCON * 0x01010101U)
#define TRC_ARGS_APPCTX (TRC_ARG_APPCTX * 0x01010101U) #define TRC_ARGS_APPCTX (TRC_ARG_APPCTX * 0x01010101U)
#define TRC_ARGS_HSTRM (TRC_ARG_HSTRM * 0x01010101U)
enum trace_state { enum trace_state {
@ -148,6 +154,7 @@ struct trace_ctx {
const struct check *check; const struct check *check;
const struct quic_conn *qc; const struct quic_conn *qc;
const struct appctx *appctx; const struct appctx *appctx;
const struct hstream *hs;
}; };
/* Regarding the verbosity, if <decoding> is not NULL, it must point to a NULL- /* Regarding the verbosity, if <decoding> is not NULL, it must point to a NULL-
@ -185,7 +192,7 @@ struct trace_source {
const void *lockon_ptr; // what to lockon when lockon is set const void *lockon_ptr; // what to lockon when lockon is set
const struct trace_source *follow; // other trace source's tracker to follow const struct trace_source *follow; // other trace source's tracker to follow
int cmdline; // true if source was activated via -dt command line args int cmdline; // true if source was activated via -dt command line args
}; } THREAD_ALIGNED();
#endif /* _HAPROXY_TRACE_T_H */ #endif /* _HAPROXY_TRACE_T_H */

View file

@ -57,12 +57,15 @@ struct vars {
}; };
#define VDF_PARENT_CTX 0x00000001 // Set if the variable is related to the parent stream #define VDF_PARENT_CTX 0x00000001 // Set if the variable is related to the parent stream
#define VDF_NAME_ALLOCATED 0x00000002 // Set if name was allocated and must be freed
/* This struct describes a variable as found in an arg_data */ /* This struct describes a variable as found in an arg_data */
struct var_desc { struct var_desc {
uint64_t name_hash; uint64_t name_hash;
enum vars_scope scope; enum vars_scope scope;
uint flags; /*VDF_* */ uint flags; /*VDF_* */
const char *name; /* variable name (not owned) */
size_t name_len; /* variable name length */
}; };
struct var { struct var {
@ -70,6 +73,7 @@ struct var {
uint64_t name_hash; /* XXH3() of the variable's name, indexed by <name_node> */ uint64_t name_hash; /* XXH3() of the variable's name, indexed by <name_node> */
uint flags; // VF_* uint flags; // VF_*
/* 32-bit hole here */ /* 32-bit hole here */
char *name; /* variable name (allocated) */
struct sample_data data; /* data storage. */ struct sample_data data; /* data storage. */
}; };

Some files were not shown because too many files have changed in this diff.