Doing this, we can remove the last BF_HIJACK user and get rid of
produce_content(). s->data_source could also be removed, but it
is currently used to detect whether the stats or a server was
used.
The new "show errors" command sent on a unix socket will dump
all captured request and response errors for all proxies. It is
also possible to bound the log to frontends and backends whose
ID is passed as an optional parameter.
The output provides information about frontend, backend, server,
session ID, source address, error type, and error position along
with a complete dump of the request or response which has caused
the error.
If a new error overwrites the one currently being dumped, the
dump is aborted with a warning message, and processing goes on
to the next error.
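For example, assuming the stats socket is bound at /var/run/haproxy.sock
(an illustrative path) and socat is available, the errors captured for
the proxy with ID 4 could be dumped with:

    $ echo "show errors 4" | socat stdio unix-connect:/var/run/haproxy.sock

Omitting the argument dumps the errors of all proxies.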
It is now possible to list all known sessions by issuing "show sess"
on the unix stats socket. The output format is still crude, but it
is very useful for debugging.
The doc has been updated to reflect the new keyword.
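It is queried the same way as above, eg:

    $ echo "show sess" | socat stdio unix-connect:/var/run/haproxy.sock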
This is the first step in implementing a session dump tool.
A session dump will need restart points. For this, it will need to
hold references to sessions, and those references must be movable
when the referenced session dies.
The principle is not that complex: when a session ends, it looks for
any potential back-references. If it finds any, it moves them to the
next session in the list. The dump function will of course have to
restart from that new point.
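A minimal sketch of the idea in C (names and layout are illustrative,
not the actual haproxy code):

    /* a back-reference held by a dump in progress */
    struct bref {
        struct bref *next;         /* other back-refs to the same session */
        struct session *ref;       /* session the dump will restart from */
    };

    struct session {
        struct session *next;      /* global session list (simplified) */
        struct bref *back_refs;    /* dumps currently pointing at us */
        /* ... */
    };

    /* called when 'dying' terminates: hand its back-refs over to the
     * next session so that dumps can restart from there */
    static void session_move_back_refs(struct session *dying)
    {
        struct session *succ = dying->next;
        struct bref *b, *nxt;

        for (b = dying->back_refs; b; b = nxt) {
            nxt = b->next;
            b->ref = succ;          /* NULL here means the dump is done */
            b->next = succ ? succ->back_refs : NULL;
            if (succ)
                succ->back_refs = b;
        }
        dying->back_refs = NULL;
    }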
The listener referenced in the fd was only used to check the
listener state upon session termination. There was no guarantee
that the FD had not been reassigned by the moment it was processed,
so this was a bit racy. Having it in the session is more robust.
It will be very convenient to have an analyser state in the session.
It will always be initialized to zero. The analysers can make use of
it, but must reset it to zero when they leave.
It was a bit awkward to have session.c call return_srv_error() for
HTTP error messages related to servers. The function has been adapted
to be passed a pointer to the faulty stream interface, and is now
reached through a pointer in the session. It is possible that in the
future, it will become a callback in the stream interface itself.
In order to avoid having to call the per-protocol logging function
directly from session.c, it's better to assign the logging function
when the session is created. This also eliminates a test when the
function is needed, and opens the way to more complete logging
functions.
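Both function pointers could live in the session like this (a sketch;
the member names are illustrative):

    struct session {
        /* ... */
        /* builds the HTTP error message for the faulty stream
         * interface; may later move into the stream interface */
        void (*srv_error)(struct session *s, struct stream_interface *si);

        /* per-protocol logging function, assigned at creation */
        void (*do_log)(struct session *s);
    };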
The global variable 'sessions' is now a doubly-linked list of all
known sessions. The list element is placed at the beginning of the
session so that it's easier to follow them all with gdb.
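A sketch of the layout, using haproxy-style list macros (details are
illustrative):

    struct session {
        struct list list;   /* position in the global 'sessions' list;
                             * kept first so gdb can walk it easily */
        /* ... */
    };

    struct list sessions;   /* head of the global list */

    /* at session creation and release, respectively: */
    LIST_ADDQ(&sessions, &s->list);
    LIST_DEL(&s->list);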
It is quite hard to track when the current session has already been counted
or discounted from the server's total number of established sessions. For
this reason, we introduce a new session flag, SN_CURR_SESS, which indicates
if the current session is one of those reported by the server or not. It
simplifies session accounting and makes it far more robust. It also makes
it possible to perform a last-minute cleanup during session_free().
Right now, with this fix and a few more buffer transition fixes, no
sessions were found to remain after a test.
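The last-minute cleanup could look like this (a sketch of the
principle, not the exact code):

    /* in session_free(): whatever path led here, make the server's
     * established-session count consistent again */
    if (s->flags & SN_CURR_SESS) {
        s->flags &= ~SN_CURR_SESS;
        s->srv->cur_sess--;    /* 'cur_sess' is an illustrative name */
    }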
srv_state has been removed from HTTP state machines, and states
have been split into either TCP states or analysers. For instance,
the TARPIT state has just become a simple analyser.
New flags have been added to the struct buffer to compensate for
this.
The high-level stream processors sometimes need to force a
disconnection without touching a file descriptor (eg: to report an
error). But if they only set BF_SHUTW or BF_SHUTR, the file
descriptor would not be closed. Thus, the two SHUT?_NOW flags have
been added so that an application can request a forced close which
the stream interface must obey.
During this change, a new BF_HIJACK flag was added. It will
be used for data generation, eg during a stats dump. It
prevents the producer on a buffer from sending data into it.
BF_SHUTR_NOW /* the producer must shut down for reads ASAP */
BF_SHUTW_NOW /* the consumer must shut down for writes ASAP */
BF_HIJACK /* the producer is temporarily replaced */
BF_SHUTW_NOW has precedence over BF_HIJACK. BF_HIJACK has
precedence over BF_MAY_FORWARD (so the hijacker does not need
to set it).
New functions buffer_shutr_now(), buffer_shutw_now(), buffer_abort()
are provided to manipulate BF_SHUT* flags.
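These helpers only raise the flags; acting upon them is left to the
stream interface. A sketch consistent with the description above:

    static inline void buffer_shutr_now(struct buffer *buf)
    {
        buf->flags |= BF_SHUTR_NOW;
    }

    static inline void buffer_shutw_now(struct buffer *buf)
    {
        buf->flags |= BF_SHUTW_NOW;
    }

    /* request an immediate abort of both directions */
    static inline void buffer_abort(struct buffer *buf)
    {
        buf->flags |= BF_SHUTR_NOW | BF_SHUTW_NOW;
    }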
A new type "stream_interface" has been added to describe both
sides of a buffer. A stream interface has states and error
reporting. The session now has two stream interfaces (one per
side). Each buffer has stream_interface pointers to both
consumer and producer sides.
The server-side file descriptor has moved to its stream interface,
so that even the buffer has access to it.
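A sketch of the resulting layout (member names beyond those mentioned
above are illustrative):

    struct stream_interface {
        int state;        /* interface state */
        int err_type;     /* first error to report for this side */
        int fd;           /* the server-side fd now lives here */
        /* ... */
    };

    struct buffer {
        unsigned int flags;               /* BF_* */
        struct stream_interface *prod;    /* producer side */
        struct stream_interface *cons;    /* consumer side */
        /* ... */
    };

    struct session {
        struct stream_interface si[2];    /* one per side */
        /* ... */
    };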
process_srv() has been split into three parts :
- tcp_get_connection() obtains a connection to the server
- tcp_connection_failed() tests if a previously attempted
connection has succeeded or not.
- process_srv_data() only manages the data phase, and in
this sense should be roughly equivalent to process_cli().
Little code has been removed, and a lot of old code has been
left in comments for now.
A new member has been added to the struct session. It keeps a trace
of what block of code performs a close or a shutdown on a socket, and
in what sequence. This is extremely convenient for post-mortem analysis
where flag combinations and states seem impossible. A new ABORT_NOW()
macro has also been added to make the code immediately segfault where
called.
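One classic way to write such a macro (a sketch; the exact expression
may differ):

    /* crash exactly where called, so that the core dump and gdb
     * point at the offending location */
    #define ABORT_NOW() (*(volatile int *)1 = 0)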
This is a first attempt at separating data processing from the
TCP state machine. Those two states have been replaced with flags
in the session indicating what needs to be analyzed. The corresponding
code is still called before and in lieu of TCP states.
The next change should get rid of the specific SV_STANALYZE state,
which is in fact a client state. The change after that should make
it possible to analyze TCP contents while being in CL_STDATA (or
CL_STSHUT*).
Some people need to inspect the contents of a TCP request before
deciding whether to forward a connection or not. A future extension
of this feature might consist in selecting a server farm depending
on the protocol detected in the request.
For this reason, a new state CL_STINSPECT has been added on
the client side. It is immediately entered upon accept() if
the statement "tcp-request inspect-delay <xxx>" is found in
the frontend configuration. Haproxy will then wait up to
this amount of time trying to find a matching ACL, and will
either accept or reject the connection depending on the
"tcp-request content <action> {if|unless}" rules, where
<action> is either "accept" or "reject".
Note that it only waits that long if no definitive verdict can be
reached earlier. A wait generally means that a fetch() function did
not have enough information to decode some contents, or that a
match() function only found the beginning of what it was looking
for.
It is only at the ACL level that partial data may be processed
as such, because we need to distinguish between MISS and FAIL
*before* applying the term negation.
Thus it is enough to add "| ACL_PARTIAL" to the last argument
when calling acl_exec_cond() to indicate that we expect
ACL_PAT_MISS to be returned if some data is missing (for
fetch() or match()). This is the only case in which this value
may be returned. For this reason, the ACL check in process_cli()
has become a lot simpler.
A new ACL "req_len" of type "int" has been added. Right now
it is already possible to drop requests which talk too early
(eg: for SMTP) or which don't talk at all (eg: HTTP/SSL).
Also, the acl fetch() functions have been extended in order
to permit reporting of missing data in case of fetch failure,
using the ACL_TEST_F_MAY_CHANGE flag.
The default behaviour is unchanged, and if no rule matches,
the request is accepted.
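For example, the classic trick of rejecting SMTP clients which talk
before the server's banner could be written like this (a sketch based
on the syntax above; names and timings are illustrative):

    frontend smtp
        mode tcp
        bind :25
        tcp-request inspect-delay 30s
        acl content_present req_len gt 0
        tcp-request content reject if content_present
        default_backend mail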
As a side effect, all layer 7 fetching functions have been
cleaned up so that they now check for the validity of the
layer 7 pointer before dereferencing it.
If the system date is set backwards while haproxy is running,
some scheduled events are delayed by the amount of time the
clock went backwards. This is particularly problematic on
systems where the date is set at boot, because it sometimes
happens that health checks do not get sent for a few hours.
Before switching to use clock_gettime() on systems which
provide it, we can at least ensure that the clock is not
going backwards, and maintain two clocks: the "date", which
represents what the user wants to see (mostly for logs),
and an internal date stored in "now", used for scheduled
events.
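A sketch of the principle (simplified; names are illustrative and
real code must handle more cases, such as large forward jumps):

    #include <sys/time.h>

    static struct timeval date;    /* wall-clock date, shown in logs */
    static struct timeval now;     /* internal date for the scheduler */
    static struct timeval offset;  /* correction applied to 'date' */

    static void update_date(void)
    {
        struct timeval adj;

        gettimeofday(&date, NULL);
        timeradd(&date, &offset, &adj);
        if (timercmp(&adj, &now, <)) {
            /* the system clock went backwards: re-anchor the offset
             * so that 'now' never decreases */
            timersub(&now, &date, &offset);
            adj = now;
        }
        now = adj;
    }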
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a
session could lose its place at the server and get blocked, because
another session could steal the entry at the same moment. Fourth,
the redispatch option did not work when maxqueue was reached for a
server, and it was not possible to do so without indefinitely
hanging a session.
The root cause of all those problems was the lack of pre-reservation
of connections at the server, and the lack of tracking of servers
during a redispatch. Everything relied on combinations of flags
which could appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, relies on less magic and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
When a server released a connection, the next session in its own
queue was immediately processed. Because of this, if all server
queues were always filled, no new anonymous request would ever be
processed. Now the oldest request between the global and the
server queues is considered when choosing which one to pick.
An improvement over this will consist in adding a configurable
offset when comparing expiration dates, so that cookie-less
requests can get either less or more priority.
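A sketch of the selection (types and names are hypothetical, not the
actual haproxy pendconn code):

    #include <sys/time.h>

    struct pendconn {
        struct timeval expire;   /* when this queued request times out */
        /* ... */
    };

    /* pick the request which has been waiting the longest; 'ps' is the
     * head of the server's queue, 'pp' the head of the proxy's global
     * queue, either may be NULL when the queue is empty */
    static struct pendconn *
    pendconn_pick(struct pendconn *ps, struct pendconn *pp)
    {
        if (!ps)
            return pp;
        if (!pp)
            return ps;
        /* earliest expiration date == oldest request; the configurable
         * offset mentioned above would be applied here */
        return timercmp(&pp->expire, &ps->expire, <) ? pp : ps;
    }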
Due to the way the stats socket works, it was not possible to
maintain the information related to the command entered, so
after filling a whole buffer, the request was lost and it was
considered that there was nothing left to write.
The major reason was that some flags were passed directly
during the first call to stats_dump_raw() instead of being
stored persistently in the session.
To fix this problem for good, flags were added to the stats
member of the session structure.
A second problem appeared: when the stats were produced, a first
call to client_retnclose() was performed, then one or multiple
subsequent calls to buffer_write_chunks() were done. But once the
stats buffer was full and a reschedule occurred, the buffer was
flushed, the write flag was cleared from the buffer, and nothing
was done to re-arm it.
For this reason, a check was added in the proto_uxst_stats()
function in order to re-call the client FSM when data were added
by stats_dump_raw(). Finally, the whole unix stats dump FSM was
rewritten to avoid all the magic it depended on. It is now
simpler and looks more like the HTTP one.
Currently there is a ~16KB limit on the amount of data that can be
passed via the unix socket. It is caused by a trivial bug that is
going to be fixed soon; however, in most cases there is no need to
dump the full stats anyway.
This patch makes it possible to select the scope of the dumped data
by extending the current "show stat" to "show stat [<iid> <type> <sid>]":
- iid is a proxy id, -1 to dump all proxies
- type selects the type of dumpable objects: 1 for frontends, 2 for
  backends, 4 for servers, -1 for all types. Values can be ORed,
  for example:
     1+2=3   -> frontend + backend.
     1+2+4=7 -> frontend + backend + server.
- sid is a service id, -1 to dump everything from the selected proxy.
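For example (the socket path is illustrative):

    $ echo "show stat -1 -1 -1" | socat stdio unix-connect:/var/run/haproxy.sock
    $ echo "show stat 4 3 -1" | socat stdio unix-connect:/var/run/haproxy.sock

The first command dumps everything; the second one dumps the frontend
and backend (1+2=3) of the proxy whose id is 4.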
To do this I implemented a new session flag (SN_STAT_BOUND), added three
variables in data_ctx.stats (iid, type, sid), modified dumpstats.c and
completely reworked process_uxst_stats(): it now waits for a "\n"
terminated string, splits the arguments and uses them. BTW: it should be
quite easy to add new commands, for example to enable/disable servers;
the only problem I can see is the somewhat unfortunate name of the
socket (*stats* socket). :|
During the work I also fixed two bugs:
- s->flags were not initialized for proto_uxst
- missing comma if throttling not enabled (caused by a stupid change in
"Implement persistent id for proxies and servers")
Other changes:
- No more magic type values; use STATS_TYPE_FE/STATS_TYPE_BE/STATS_TYPE_SV
- Don't memset the full s->data_ctx (it was clearing
  s->data_ctx.stats.{iid,type,sid}); instead initialize stats.sv &
  stats.sv_st (stats.px and stats.px_st were already initialized)
With all these changes it was extremely easy to write a short perl plugin
for a perl-enabled net-snmp (also included in this patch).
29385 is my PEN (Private Enterprise Number) and I'm willing to donate
the SNMPv2-SMI::enterprises.29385.106.* OIDs for HAProxy if there
is nothing assigned already.
Now when a server has "redir <prefix>" on its config line, any HEAD or GET
request addressed to it will lead to a 302 with Location set to "<prefix>"
immediately followed by the relative URI of the incoming request. This
makes it very easy to redirect browsers to remote static servers, as well
as to redirect to remote sites when the local one is down.
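For example (addresses and prefix are illustrative):

    server srv1 192.168.1.1:80 redir http://image1.mydomain.com check

A request for "GET /img/logo.png" reaching srv1 would then be answered
with a 302 and "Location: http://image1.mydomain.com/img/logo.png".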
Several users have complained that when haproxy gets a connection
failure due to an active reject from a server, it immediately
retries, often leading to the same situation being repeated until
the retry counter reaches zero.
Now if a connection error shows up, a turn-around state of 1 second
is applied before retrying. This is performed by faking a connection
timeout in order not to touch much code. However, a cleaner method
would involve an extra state.
This patch slightly extends the previously added functionality to also
count retries and redispatches per server. Now it is possible to know
which server causes redispatches, as it is not always the same one
that takes the most retries.
While working with the code I found that redistribute_pending() does not
increment srv->redispatches and be->redispatches. I don't know how to
test it but I think the fix is correct. If not, I can withdraw it.
I also extended the logs to show how many retries were done and whether
redispatching was necessary ('+'). I'm using an additional session flag,
SN_REDISP, to mark redispatched connections. I had to rearrange all the
defines in session.h to make more room for it.
The documentation about logs was also fixed a little (sorry, English
only), as the current version uses a totally different format. BTW:
the examples are still outdated, maybe next time...
Finally, I changed %d -> %u for retries/redispatches as those variables
are declared as unsigned.
It is now possible to get CSV output from the statistics by
simply appending ";csv" to the HTTP request sent to get the
stats. The fields keep the same ordering as in the HTML page,
and a field "pxname" has been prepended at the beginning of
the line.
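For example, if the stats page is served at /haproxy?stats (an
illustrative uri), the CSV version can be fetched with:

    $ curl "http://myserver/haproxy?stats;csv"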
The stats page now supports an option to hide servers which are DOWN
and to enable/disable automatic refresh. It is also possible to ask
for an immediate refresh.
There are multiple places where the client's destination address is
required. Let's store it in the session when needed, and add a flag
to inform that it has been retrieved.
Some session flags were clearly related to HTTP transactions.
A new 'flags' field has been added to http_txn, and the
associated flags moved to proto_http.h.
Some parts of HTTP processing were incorrectly called "request" while
they are messages or transactions. The following structure members
have changed :
http_msg.hdr_state => msg_state
http_msg.sor => som
http_req.req_state => removed
http_req => http_txn
The HTTP parser has been rewritten for better compliance with RFC2616.
The same parser is now usable for both requests and responses, and
it now supports HTTP/0.9 as well as multi-line headers. It has also
been improved for speed; a typical HTTP request is parsed in about
2 microseconds on a 1 GHz processor.
The monitor-uri check has been moved so that the requests are not
logged. The httpclose option now tries to change as little as
possible in the request, and does not affect the first header if
it is already set to 'close'. HTTP/0.9 requests are converted to
HTTP/1.0 before being forwarded.
Headers and request transformations are now distinct. The headers
list is updated after each insertion/removal/transformation. The
request is re-parsed and checked after each transformation. It is
not possible anymore to remove a request, and requests which lead
to invalid request lines are now rejected.
A struct http_req has been created to collect all the information
related to an HTTP request being processed. Right now it is still
in the struct session, but the boundary is clear now.
The nbconn attribute in the proxies was not relevant anymore because
a frontend A may use backend B and both of them must account for their
respective connections. For this reason, there now are two separate
counters for frontend and backend connections.
The stats page has been updated to reflect the backend, but a separate
line entry for the frontend with error counts would be good.
Note that as of now, beconn may be higher than maxconn, because maxconn
applies to the frontend, while beconn may be increased due to sessions
passed from another frontend.
There was a confusion about the way to find filters and backend
parameters from sessions. The chaining has been changed between
the session and the proxy.
Now, a session knows only two proxies : one frontend (->fe) and
one backend (->be). Each proxy has a link to the proxy providing
filters and to the proxy providing backend parameters (both self
by default).
The captures (cookies and headers) have been attached to the
frontend's filters for now.
The uri_auth and the statistics are attached to the backend's
filters so that the uri can depend on a hostname for instance.
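A sketch of the chaining (the fe/be names are those quoted above; the
two per-proxy links carry illustrative names):

    struct proxy {
        struct proxy *filters_from;  /* proxy providing the filters
                                      * (self by default) */
        struct proxy *params_from;   /* proxy providing backend
                                      * parameters (self by default) */
        /* ... */
    };

    struct session {
        struct proxy *fe;   /* the frontend the session came in on */
        struct proxy *be;   /* the backend which will process it */
        /* ... */
    };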
The req_cap, hdr_state, hdr_idx, auth_hdr and req_line have been moved
to a dedicated hreq structure in the session. It makes it easier to
add HTTP-specific fields such as SOR (start of request) and EOH (end
of headers).
It also made it possible to fix two bugs introduced by the last commit:
- end of headers not correctly detected
- hdr_idx not freed upon one specific error during session creation
When the backend side is reworked, it should rely on a similar
structure.
The new parser uses an FSM to strictly follow RFC2616.
Headers are indexed and parsed only once they're all available.
That way, complex regexes make more sense.
HTTP processing is now performed in several phases by calling
multiple functions, making the code cleaner and easier to read.
Note that req[i]pass does not work anymore because it would
require that we mark a header to be ignored. What is really
needed is to have the ability to add an exception to a matching
(match xx except yy).
Several bugs have been fixed in appsession during the conversion
to the new FSM (method length and recovery on malloc errors).
The code does build and work with the debug examples, but is
not usable yet to connect to anything as it does not forward
the requests yet.
This structure will consume 4 bytes per header to keep track of
headers within a request or a response without having to parse
the whole request for each regex. As it's not possible to allocate
only 4 bytes, we define a max number of HTTP headers. We set it
to (BUFSIZE+79)/80 so that 8kB buffers can index about 100 headers
(like Apache), resulting in roughly 400 bytes dedicated to indexation,
or about 400/(2*8kB) ~= 2.4% of the memory usage.
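A 4-byte element consistent with those numbers could look like this
(a sketch; the exact bit layout is illustrative):

    struct hdr_idx_elem {
        unsigned len  :16;   /* length of the header line */
        unsigned cr   : 1;   /* 1 if terminated by CRLF, 0 if LF only */
        unsigned next :15;   /* index of the next header in the list */
    };  /* sizeof() == 4 bytes */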
The references to the proxy from the session have been turned into
Frontend (fe), Filters (fi) and Backend (be). This should ease the
migration to the L7 switching features. The next step will be to kill
the struct proxy and have 3 independent structs instead, each
referenced from entities called listener, frontend, filters and
backend.
It is now possible to tarpit connections based on regex matches.
The tarpit timeout is equal to the contimeout. A 500 server error
response is faked, and the logs show the status flags as "PT",
which indicates the connection has been tarpitted.
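For example, assuming the reqitarpit keyword (a case-insensitive
regex match on request headers), a known abusive crawler could be
tarpitted with:

    listen www 0.0.0.0:80
        reqitarpit ^User-Agent:.*BadRobot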