/*
 * Task management functions.
 *
 * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 */

#include <string.h>

#include <common/config.h>
#include <common/memory.h>
#include <common/mini-clist.h>
#include <common/standard.h>
#include <common/time.h>
#include <eb32sctree.h>
#include <eb32tree.h>

#include <proto/fd.h>
#include <proto/freq_ctr.h>
#include <proto/proxy.h>
#include <proto/stream.h>
#include <proto/task.h>

DECLARE_POOL(pool_head_task, "task", sizeof(struct task));
DECLARE_POOL(pool_head_tasklet, "tasklet", sizeof(struct tasklet));
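
/* Illustrative note: DECLARE_POOL() registers a fixed-size memory pool;
 * objects are then obtained and released with pool_alloc()/pool_free(), e.g.:
 *
 *     struct task *t = pool_alloc(pool_head_task);
 *     ...
 *     pool_free(pool_head_task, t);
 */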

/* This is the memory pool containing all the signal structs. These
 * structs are used to store each required signal between two tasks.
 */
DECLARE_POOL(pool_head_notification, "notification", sizeof(struct notification));

unsigned int nb_tasks = 0;
volatile unsigned long global_tasks_mask = 0; /* Mask of threads with tasks in the global runqueue */
unsigned int tasks_run_queue = 0;
unsigned int tasks_run_queue_cur = 0;    /* copy of the run queue size */
unsigned int nb_tasks_cur = 0;           /* copy of the tasks count */
unsigned int niced_tasks = 0;            /* number of niced tasks in the run queue */

THREAD_LOCAL struct task_per_thread *sched = &task_per_thread[0]; /* scheduler context for the current thread */

__decl_aligned_spinlock(rq_lock); /* spin lock related to run queue */
__decl_aligned_rwlock(wq_lock);   /* RW lock related to the wait queue */

#ifdef USE_THREAD
struct eb_root timers;      /* sorted timers tree, global */
struct eb_root rqueue;      /* tree constituting the run queue */
int global_rqueue_size;     /* number of elements in the global runqueue */
#endif

static unsigned int rqueue_ticks;  /* insertion count */

struct task_per_thread task_per_thread[MAX_THREADS];
|
2008-06-24 02:17:16 -04:00
|
|
|
|
2009-03-07 11:25:21 -05:00
|
|
|
/* Puts the task <t> in run queue at a position depending on t->nice. <t> is
|
|
|
|
|
* returned. The nice value assigns boosts in 32th of the run queue size. A
|
2016-12-06 03:15:30 -05:00
|
|
|
* nice value of -1024 sets the task to -tasks_run_queue*32, while a nice value
|
|
|
|
|
* of 1024 sets the task to tasks_run_queue*32. The state flags are cleared, so
|
|
|
|
|
* the caller will have to set its flags after this call.
|
2009-03-07 11:25:21 -05:00
|
|
|
* The task must not already be in the run queue. If unsure, use the safer
|
|
|
|
|
* task_wakeup() function.
|
2008-06-30 01:51:00 -04:00
|
|
|
*/
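/* Worked example (illustrative): with the default tune.runqueue-depth of 200,
 * nice=-1024 shifts the insertion key by -1024 * 200 = -204800 ticks, i.e.
 * the task is dequeued as if it had been enqueued 204800 insertions earlier;
 * nice=1024 delays it by the same amount.
 */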

void __task_wakeup(struct task *t, struct eb_root *root)
{
#ifdef USE_THREAD
	if (root == &rqueue) {
		HA_SPIN_LOCK(TASK_RQ_LOCK, &rq_lock);
	}
#endif
	/* Make sure that, if the task isn't in the run queue, nobody else
	 * inserts it in the meantime.
	 */
	_HA_ATOMIC_ADD(&tasks_run_queue, 1);
#ifdef USE_THREAD
	if (root == &rqueue) {
		global_tasks_mask |= t->thread_mask;
		__ha_barrier_store();
	}
#endif
	t->rq.key = _HA_ATOMIC_ADD(&rqueue_ticks, 1);

	if (likely(t->nice)) {
		int offset;

		_HA_ATOMIC_ADD(&niced_tasks, 1);
		offset = t->nice * (int)global.tune.runqueue_depth;
		t->rq.key += offset;
	}

	if (task_profiling_mask & tid_bit)
		t->call_date = now_mono_time();

	eb32sc_insert(root, &t->rq, t->thread_mask);
#ifdef USE_THREAD
	if (root == &rqueue) {
		global_rqueue_size++;
		_HA_ATOMIC_OR(&t->state, TASK_GLOBAL);
		HA_SPIN_UNLOCK(TASK_RQ_LOCK, &rq_lock);
	} else
#endif
	{
		int nb = ((void *)root - (void *)&task_per_thread[0].rqueue) / sizeof(task_per_thread[0]);
		task_per_thread[nb].rqueue_size++;
	}
#ifdef USE_THREAD
	/* If all threads that are supposed to handle this task are sleeping,
	 * wake one.
	 */
	if ((((t->thread_mask & all_threads_mask) & sleeping_thread_mask) ==
	     (t->thread_mask & all_threads_mask))) {
		unsigned long m = (t->thread_mask & all_threads_mask) &~ tid_bit;

		m = (m & (m - 1)) ^ m; // keep lowest bit set (e.g. 0b0110 -> 0b0010)
		_HA_ATOMIC_AND(&sleeping_thread_mask, ~m);
		wake_thread(my_ffsl(m) - 1);
	}
#endif
	return;
}
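
/* Usage sketch (illustrative only): callers normally go through the
 * task_wakeup() wrapper, which ORs the wake-up reason into t->state and only
 * falls back to __task_wakeup() when the task isn't queued yet. Assuming a
 * hypothetical handler my_io_handler() and context my_ctx:
 *
 *     struct task *t = task_new(tid_bit);   // bound to the current thread
 *     t->process = my_io_handler;           // struct task *(*)(struct task *, void *, unsigned short)
 *     t->context = my_ctx;
 *     task_wakeup(t, TASK_WOKEN_MSG);       // ends up in __task_wakeup()
 */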

/*
 * __task_queue()
 *
 * Inserts a task into wait queue <wq> at the position given by its expiration
 * date. It does not matter if the task was already in the wait queue or not,
 * as it will be unlinked. The task must not have an infinite expiration timer.
 * Last, tasks must not be queued further than the end of the tree, which is
 * between <now_ms> and <now_ms> + 2^31 ms (now + 24 days on 32-bit).
 *
 * This function should not be used directly, it is meant to be called by the
 * inline version of task_queue() which performs a few cheap preliminary tests
 * before deciding to call __task_queue(). Moreover this function doesn't care
 * at all about locking so the caller must be careful when deciding whether to
 * lock or not around this call.
 */
void __task_queue(struct task *task, struct eb_root *wq)
{
	if (likely(task_in_wq(task)))
		__task_unlink_wq(task);

	/* the task is not in the queue now */
	task->wq.key = task->expire;
#ifdef DEBUG_CHECK_INVALID_EXPIRATION_DATES
	if (tick_is_lt(task->wq.key, now_ms))
		/* we're queuing too far away or in the past (most likely) */
		return;
#endif

	eb32_insert(wq, &task->wq);
}
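
/* Usage sketch (illustrative only): a caller sets the expiration tick and
 * lets the task_queue() wrapper decide whether __task_queue() is needed:
 *
 *     t->expire = tick_add(now_ms, MS_TO_TICKS(5000));  // fire in 5 seconds
 *     task_queue(t);  // cheap no-op if already queued at this position
 */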

/*
 * Extracts all expired timers from the timer queue and wakes up all
 * associated tasks.
 */
void wake_expired_tasks()
{
	struct task_per_thread * const tt = sched; // thread's tasks
	struct task *task;
	struct eb32_node *eb;
	__decl_hathreads(int key);

	while (1) {
  lookup_next_local:
		eb = eb32_lookup_ge(&tt->timers, now_ms - TIMER_LOOK_BACK);
		if (!eb) {
			/* we might have reached the end of the tree, typically because
			 * <now_ms> is in the first half and we're first scanning the last
			 * half. Let's loop back to the beginning of the tree now.
			 */
			eb = eb32_first(&tt->timers);
			if (likely(!eb))
				break;
		}

		if (tick_is_lt(now_ms, eb->key))
			break;

		/* timer looks expired, detach it from the queue */
		task = eb32_entry(eb, struct task, wq);
		__task_unlink_wq(task);

		/* It is possible that this task was left at an earlier place in the
		 * tree because a recent call to task_queue() has not moved it. This
		 * happens when the new expiration date is later than the old one.
		 * Since it is very unlikely that we reach a timeout anyway, it's a
		 * lot cheaper to proceed like this because we almost never update
		 * the tree. We may also find disabled expiration dates there. Since
		 * we have detached the task from the tree, we simply call task_queue
		 * to take care of this. Note that we might occasionally requeue it at
		 * the same place, before <eb>, so we have to check if this happens,
		 * and adjust <eb>, otherwise we may skip it which is not what we want.
		 * We may also not requeue the task (and not point eb at it) if its
		 * expiration time is not set.
		 */
		if (!tick_is_expired(task->expire, now_ms)) {
			if (tick_isset(task->expire))
				__task_queue(task, &tt->timers);
			goto lookup_next_local;
		}
		task_wakeup(task, TASK_WOKEN_TIMER);
	}

#ifdef USE_THREAD
	if (eb_is_empty(&timers))
		goto leave;

	HA_RWLOCK_RDLOCK(TASK_WQ_LOCK, &wq_lock);
	eb = eb32_lookup_ge(&timers, now_ms - TIMER_LOOK_BACK);
	if (!eb) {
		eb = eb32_first(&timers);
		if (likely(!eb)) {
			HA_RWLOCK_RDUNLOCK(TASK_WQ_LOCK, &wq_lock);
			goto leave;
		}
	}
	key = eb->key;
	HA_RWLOCK_RDUNLOCK(TASK_WQ_LOCK, &wq_lock);

	if (tick_is_lt(now_ms, key))
		goto leave;

	/* There's really something of interest here, let's visit the queue */

	while (1) {
		HA_RWLOCK_WRLOCK(TASK_WQ_LOCK, &wq_lock);
  lookup_next:
		eb = eb32_lookup_ge(&timers, now_ms - TIMER_LOOK_BACK);
		if (!eb) {
			/* we might have reached the end of the tree, typically because
			 * <now_ms> is in the first half and we're first scanning the last
			 * half. Let's loop back to the beginning of the tree now.
			 */
			eb = eb32_first(&timers);
			if (likely(!eb))
				break;
		}

		if (tick_is_lt(now_ms, eb->key))
			break;

		/* timer looks expired, detach it from the queue */
		task = eb32_entry(eb, struct task, wq);
		__task_unlink_wq(task);

		/* same principle as for the thread-local timers above: requeue the
		 * task if its expiration date moved or was disabled, otherwise wake
		 * it up.
		 */
		if (!tick_is_expired(task->expire, now_ms)) {
			if (tick_isset(task->expire))
				__task_queue(task, &timers);
			goto lookup_next;
		}
		task_wakeup(task, TASK_WOKEN_TIMER);
		HA_RWLOCK_WRUNLOCK(TASK_WQ_LOCK, &wq_lock);
	}

	HA_RWLOCK_WRUNLOCK(TASK_WQ_LOCK, &wq_lock);
#endif
  leave:
	return;
}
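
/* Sequencing note (from the split of the former single function): the
 * scheduler calls wake_expired_tasks() just before process_runnable_tasks(),
 * so that all newly expired tasks are woken before tasks are run, and calls
 * next_timer_expiry() just before polling, to compute the poll() timeout for
 * the current thread. A simplified, illustrative view of the run loop:
 *
 *     wake_expired_tasks();                 // wake everything that just expired
 *     process_runnable_tasks();             // then run the woken tasks
 *     next = next_timer_expiry();           // may be TICK_ETERNITY
 *     poll_for(tick_remain(now_ms, next));  // poll_for() is a hypothetical helper
 */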

/* Checks the next timer for the current thread by looking into its own timer
 * list and the global one. It may return TICK_ETERNITY if no timer is present.
 * Note that the next timer might very well be slightly in the past.
 */
int next_timer_expiry()
{
	struct task_per_thread * const tt = sched; // thread's tasks
	struct eb32_node *eb;
	int ret = TICK_ETERNITY;
	__decl_hathreads(int key);

	/* first check in the thread-local timers */
	eb = eb32_lookup_ge(&tt->timers, now_ms - TIMER_LOOK_BACK);
	if (!eb) {
		/* we might have reached the end of the tree, typically because
		 * <now_ms> is in the first half and we're first scanning the last
		 * half. Let's loop back to the beginning of the tree now.
		 */
		eb = eb32_first(&tt->timers);
	}

	if (eb)
		ret = eb->key;

#ifdef USE_THREAD
	if (!eb_is_empty(&timers)) {
		HA_RWLOCK_RDLOCK(TASK_WQ_LOCK, &wq_lock);
		eb = eb32_lookup_ge(&timers, now_ms - TIMER_LOOK_BACK);
		if (!eb)
			eb = eb32_first(&timers);
		if (eb)
			key = eb->key;
		HA_RWLOCK_RDUNLOCK(TASK_WQ_LOCK, &wq_lock);
		if (eb)
			ret = tick_first(ret, key);
	}
#endif
	return ret;
}
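
/* Worked example of the wrapping lookup used above (illustrative): ticks wrap
 * at 2^32 and TIMER_LOOK_BACK is half that range, so with now_ms = 0x00000010
 * the lookup starts at 0x80000010, half a wrap behind <now_ms>. A timer queued
 * just before the wrap (e.g. key 0xfffffff0) is still found ahead of that
 * start point, and when the search falls off the right end of the tree,
 * eb32_first() restarts from the leftmost node to cover the wrapped half.
 */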

/* Walks over the tasklet list <list> and runs at most <max> of them. Returns
 * the number of entries effectively processed (tasks and tasklets merged).
 * The count of tasks in the list for the current thread is adjusted.
 */
static int run_tasks_from_list(struct list *list, int max)
{
	struct task *(*process)(struct task *t, void *ctx, unsigned short state);
	struct task *t;
	unsigned short state;
	void *ctx;
	int done = 0;

	while (done < max && !LIST_ISEMPTY(list)) {
		t = (struct task *)LIST_ELEM(list->n, struct tasklet *, list);
		state = (t->state & (TASK_SHARED_WQ|TASK_SELF_WAKING));

		ti->flags &= ~TI_FL_STUCK; // this thread is still running
		activity[tid].ctxsw++;
		ctx = t->context;
		process = t->process;
		t->calls++;
		sched->current = t;

		if (TASK_IS_TASKLET(t)) {
			state = _HA_ATOMIC_XCHG(&t->state, state);
			__ha_barrier_atomic_store();
			__tasklet_remove_from_tasklet_list((struct tasklet *)t);
			process(NULL, ctx, state);
			done++;
			sched->current = NULL;
			__ha_barrier_store();
			continue;
		}

		state = _HA_ATOMIC_XCHG(&t->state, state | TASK_RUNNING);
		__ha_barrier_atomic_store();
		__tasklet_remove_from_tasklet_list((struct tasklet *)t);

		/* OK then this is a regular task */

		task_per_thread[tid].task_list_size--;
		if (unlikely(t->call_date)) {
			uint64_t now_ns = now_mono_time();

			t->lat_time += now_ns - t->call_date;
			t->call_date = now_ns;
		}

		__ha_barrier_store();
		if (likely(process == process_stream))
			t = process_stream(t, ctx, state);
		else if (process != NULL)
			t = process(t, ctx, state);
		else {
			__task_free(t);
			sched->current = NULL;
			__ha_barrier_store();
			/* We don't want max_processed to be decremented if
			 * we're just freeing a destroyed task, we should only
			 * do so if we really ran a task.
			 */
			continue;
		}
		sched->current = NULL;
		__ha_barrier_store();
		/* If there is a pending state we have to wake up the task
		 * immediately, else we defer it into the wait queue.
		 */
		if (t != NULL) {
			if (unlikely(t->call_date)) {
				t->cpu_time += now_mono_time() - t->call_date;
				t->call_date = 0;
			}

			state = _HA_ATOMIC_AND(&t->state, ~TASK_RUNNING);
			if (state & TASK_WOKEN_ANY)
				task_wakeup(t, 0);
			else
				task_queue(t);
		}
		done++;
	}
	return done;
}
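
/* Usage sketch (illustrative only): tasklets share their first fields with
 * struct task, which is what makes the casts above legal. A tasklet is
 * typically set up and woken like this (my_tasklet_fn/my_ctx hypothetical):
 *
 *     struct tasklet *tl = tasklet_new();
 *     tl->process = my_tasklet_fn;   // invoked as process(NULL, ctx, state)
 *     tl->context = my_ctx;
 *     tasklet_wakeup(tl);            // queued on the current thread's list
 */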

/* The run queue is chronologically sorted in a tree. An insertion counter is
 * used to assign a position to each task. This counter may be combined with
 * other variables (eg: nice value) to set the final position in the tree. The
 * counter may wrap without a problem, of course. We then limit the number of
 * tasks processed to the configured runqueue depth (200 by default) in any
 * case, so that general latency remains low and so that task positions have a
 * chance to be considered. The function scans both the global and local run
 * queues and picks the most urgent task between the two. We need to grab the
 * global runqueue lock to touch it so it's taken on the very first access to
 * the global run queue and is released as soon as it reaches the end.
 */
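/* Worked example of the wrap-safe ordering used below (illustrative): the
 * insertion counter may wrap, so two keys are compared through a signed
 * difference, (int)(lrq->key - grq->key) <= 0, rather than directly. With
 * lrq->key = 0xfffffffe (just before the wrap) and grq->key = 0x00000003
 * (just after it), the difference is 0xfffffffb, i.e. -5 as an int, so the
 * local task is correctly seen as the older, more urgent one.
 */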

void process_runnable_tasks()
{
	struct task_per_thread * const tt = sched;
	struct eb32sc_node *lrq = NULL; // next local run queue entry
	struct eb32sc_node *grq = NULL; // next global run queue entry
	struct task *t;
	int max_processed, done;
	struct mt_list *tmp_list;

	ti->flags &= ~TI_FL_STUCK; // this thread is still running

	if (!thread_has_tasks()) {
		activity[tid].empty_rq++;
		return;
	}
	/* Merge the list of tasklets woken up by other threads into the
	 * main list.
	 */
	tmp_list = MT_LIST_BEHEAD(&sched->shared_tasklet_list);
	if (tmp_list)
		LIST_SPLICE_END_DETACHED(&sched->tasklets[TL_URGENT], (struct list *)tmp_list);

	tasks_run_queue_cur = tasks_run_queue; /* keep a copy for reporting */
	nb_tasks_cur = nb_tasks;
	max_processed = global.tune.runqueue_depth;

	if (likely(niced_tasks))
		max_processed = (max_processed + 3) / 4;

	/* run up to max_processed/3 urgent tasklets */
	done = run_tasks_from_list(&tt->tasklets[TL_URGENT], (max_processed + 2) / 3);
	max_processed -= done;
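
	/* Budget example (illustrative, with the default runqueue-depth of 200):
	 * the urgent class may run (200+2)/3 = 67 tasklets; if it uses them all,
	 * 133 remain and the normal class below gets (3*133+3)/4 = 100, i.e.
	 * roughly 1/3 then 1/2 of the initial budget, each class passing its
	 * unused share on to the next one.
	 */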
|
|
|
|
|
|
OPTIM: task: readjust CPU bandwidth distribution since last update
Now that we can more accurately watch which connection is really
being woken up from itself, it was desirable to re-adjust the CPU BW
thresholds based on measurements. New tests with 60000 concurrent
connections were run at 100 Gbps with unbounded queues and showed
the following distribution:
scenario TC0 TC1 TC2 observation
-------------------+---+---+----+---------------------------
TCP conn rate : 32, 51, 17
HTTP conn rate : 34, 41, 25
TCP byte rate : 2, 3, 95 (2 MB objets)
splicing byte rate: 11, 6, 83 (2 MB objets)
H2 10k object : 44, 23, 33 client-limited
mixed traffic : 18, 10, 72 2*1m+1*0: 11kcps, 36 Gbps
The H2 experienced a huge change since it uses a persistent connection
that was accidently flagged in the previous test. The splicing test
exhibits a higher need for short tasklets, so does the mixed traffic
test. Given that latency mainly matters for conn rate and H2 here,
the ratios were readjusted as 33% for TC0, 50% for TC1 and 17% for
TC2, keeping in mind that whatever is not consumed by one class is
automatically shared in equal propertions by the next one(s). This
setting immediately provided a nice improvement as with the default
settings (maxpollevents=200, runqueue-depth=200), the same ratios as
above are still reported, while the time to request "show activity"
on the CLI dropped to 30-50ms. The average loop time is around 5.7ms
on the mixed traffic.
In addition, one extra stress test at 90.5 Gbps with 5100 conn/s shows
70-100ms CLI request time, with an average loop time of 17 ms.
2020-01-31 11:55:09 -05:00
|
|
|
/* pick up to max_processed/2 (~=3/4*(max_processed-done)) regular tasks from prio-ordered run queues */
|
2020-01-30 12:37:28 -05:00
|
|
|
|
2019-04-12 12:03:41 -04:00
|
|
|
/* Note: the grq lock is always held when grq is not null */
|
2017-11-14 04:38:36 -05:00
|
|
|
|
OPTIM: task: readjust CPU bandwidth distribution since last update
Now that we can more accurately watch which connection is really
being woken up from itself, it was desirable to re-adjust the CPU BW
thresholds based on measurements. New tests with 60000 concurrent
connections were run at 100 Gbps with unbounded queues and showed
the following distribution:
scenario TC0 TC1 TC2 observation
-------------------+---+---+----+---------------------------
TCP conn rate : 32, 51, 17
HTTP conn rate : 34, 41, 25
TCP byte rate : 2, 3, 95 (2 MB objets)
splicing byte rate: 11, 6, 83 (2 MB objets)
H2 10k object : 44, 23, 33 client-limited
mixed traffic : 18, 10, 72 2*1m+1*0: 11kcps, 36 Gbps
The H2 experienced a huge change since it uses a persistent connection
that was accidently flagged in the previous test. The splicing test
exhibits a higher need for short tasklets, so does the mixed traffic
test. Given that latency mainly matters for conn rate and H2 here,
the ratios were readjusted as 33% for TC0, 50% for TC1 and 17% for
TC2, keeping in mind that whatever is not consumed by one class is
automatically shared in equal propertions by the next one(s). This
setting immediately provided a nice improvement as with the default
settings (maxpollevents=200, runqueue-depth=200), the same ratios as
above are still reported, while the time to request "show activity"
on the CLI dropped to 30-50ms. The average loop time is around 5.7ms
on the mixed traffic.
In addition, one extra stress test at 90.5 Gbps with 5100 conn/s shows
70-100ms CLI request time, with an average loop time of 17 ms.
2020-01-31 11:55:09 -05:00
|
|
|
while (tt->task_list_size < (3 * max_processed + 3) / 4) {
|
2019-04-12 12:03:41 -04:00
|
|
|
if ((global_tasks_mask & tid_bit) && !grq) {
|
2019-04-15 12:52:40 -04:00
|
|
|
#ifdef USE_THREAD
|
2019-04-12 12:03:41 -04:00
|
|
|
HA_SPIN_LOCK(TASK_RQ_LOCK, &rq_lock);
|
|
|
|
|
grq = eb32sc_lookup_ge(&rqueue, rqueue_ticks - TIMER_LOOK_BACK, tid_bit);
|
|
|
|
|
if (unlikely(!grq)) {
|
|
|
|
|
grq = eb32sc_first(&rqueue, tid_bit);
|
|
|
|
|
if (!grq) {
|
2019-04-17 13:10:22 -04:00
|
|
|
global_tasks_mask &= ~tid_bit;
|
2019-04-12 12:03:41 -04:00
|
|
|
HA_SPIN_UNLOCK(TASK_RQ_LOCK, &rq_lock);
|
2018-07-26 09:25:49 -04:00
|
|
|
}
|
2017-11-05 10:35:59 -05:00
|
|
|
}
|
2019-04-15 12:52:40 -04:00
|
|
|
#endif
|
2019-04-12 12:03:41 -04:00
|
|
|
}
|
2017-11-05 10:35:59 -05:00
|
|
|
|
2019-04-12 12:03:41 -04:00
|
|
|
/* If a global task is available for this thread, it's in grq
|
|
|
|
|
* now and the global RQ is locked.
|
|
|
|
|
*/
|
2018-05-18 12:38:23 -04:00
|
|
|
|
2019-04-12 12:03:41 -04:00
|
|
|
if (!lrq) {
|
2019-09-24 01:19:08 -04:00
|
|
|
lrq = eb32sc_lookup_ge(&tt->rqueue, rqueue_ticks - TIMER_LOOK_BACK, tid_bit);
|
2019-04-12 12:03:41 -04:00
|
|
|
if (unlikely(!lrq))
|
2019-09-24 01:19:08 -04:00
|
|
|
lrq = eb32sc_first(&tt->rqueue, tid_bit);
|
2017-11-05 10:35:59 -05:00
|
|
|
}
|
|
|
|
|
|
2019-04-12 12:03:41 -04:00
|
|
|
if (!lrq && !grq)
|
|
|
|
|
break;
|
|
|
|
|
|
|
|
|
|
if (likely(!grq || (lrq && (int)(lrq->key - grq->key) <= 0))) {
|
|
|
|
|
t = eb32sc_entry(lrq, struct task, rq);
|
|
|
|
|
lrq = eb32sc_next(lrq, tid_bit);
|
|
|
|
|
__task_unlink_rq(t);
|
2018-05-18 12:38:23 -04:00
|
|
|
}
|
2019-04-15 12:52:40 -04:00
|
|
|
#ifdef USE_THREAD
|
2019-04-12 12:03:41 -04:00
|
|
|
else {
|
|
|
|
|
t = eb32sc_entry(grq, struct task, rq);
|
|
|
|
|
grq = eb32sc_next(grq, tid_bit);
|
|
|
|
|
__task_unlink_rq(t);
|
|
|
|
|
if (unlikely(!grq)) {
|
|
|
|
|
grq = eb32sc_first(&rqueue, tid_bit);
|
|
|
|
|
if (!grq) {
|
2019-04-17 13:10:22 -04:00
|
|
|
global_tasks_mask &= ~tid_bit;
|
2019-04-12 12:03:41 -04:00
|
|
|
HA_SPIN_UNLOCK(TASK_RQ_LOCK, &rq_lock);
|
|
|
|
|
}
|
|
|
|
|
}
|
2017-03-30 09:37:25 -04:00
|
|
|
}
|
2019-04-15 12:52:40 -04:00
|
|
|
#endif
|
2019-04-12 12:03:41 -04:00
|
|
|
|
2019-09-20 11:18:35 -04:00
|
|
|
/* Make sure the entry doesn't appear to be in a list */
|
2019-10-11 10:35:01 -04:00
|
|
|
LIST_INIT(&((struct tasklet *)t)->list);
|
2018-05-18 12:45:28 -04:00
|
|
|
/* And add it to the local task list */
|
2020-01-30 12:37:28 -05:00
|
|
|
tasklet_insert_into_tasklet_list(&tt->tasklets[TL_NORMAL], (struct tasklet *)t);
|
2019-10-11 10:35:01 -04:00
|
|
|
tt->task_list_size++;
|
2019-04-30 08:55:18 -04:00
|
|
|
activity[tid].tasksw++;
|
2018-05-18 12:45:28 -04:00
|
|
|
}
|
2019-04-12 12:03:41 -04:00
|
|
|
|
|
|
|
|
/* release the global rqueue lock if we still hold it */
|
|
|
|
|
if (grq) {
|
|
|
|
|
HA_SPIN_UNLOCK(TASK_RQ_LOCK, &rq_lock);
|
|
|
|
|
grq = NULL;
|
|
|
|
|
}
|
|
|
|
|
|
OPTIM: task: readjust CPU bandwidth distribution since last update
Now that we can more accurately watch which connection is really
being woken up from itself, it was desirable to re-adjust the CPU BW
thresholds based on measurements. New tests with 60000 concurrent
connections were run at 100 Gbps with unbounded queues and showed
the following distribution:
scenario             TC0  TC1  TC2   observation
-------------------+----+----+-----+---------------------------
TCP conn rate      :  32,  51,  17
HTTP conn rate     :  34,  41,  25
TCP byte rate      :   2,   3,  95   (2 MB objects)
splicing byte rate :  11,   6,  83   (2 MB objects)
H2 10k object      :  44,  23,  33   client-limited
mixed traffic      :  18,  10,  72   2*1m+1*0: 11kcps, 36 Gbps
The H2 test experienced a huge change since it uses a persistent
connection that was accidentally flagged in the previous test. The
splicing test exhibits a higher need for short tasklets, as does the
mixed traffic test. Given that latency mainly matters for conn rate
and H2 here, the ratios were readjusted to 33% for TC0, 50% for TC1
and 17% for TC2, keeping in mind that whatever is not consumed by one
class is automatically shared in equal proportions by the next
one(s). This
setting immediately provided a nice improvement as with the default
settings (maxpollevents=200, runqueue-depth=200), the same ratios as
above are still reported, while the time to request "show activity"
on the CLI dropped to 30-50ms. The average loop time is around 5.7ms
on the mixed traffic.
In addition, one extra stress test at 90.5 Gbps with 5100 conn/s shows
70-100ms CLI request time, with an average loop time of 17 ms.
2020-01-31 11:55:09 -05:00
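To make the arithmetic above concrete, here is a minimal standalone
sketch of the split. The 1/3 share for TC0 is an assumption matching
the 33% figure (that code sits earlier in this function); the 3/4
factor for TC1 is the one visible in the code that follows:

#include <stdio.h>

int main(void)
{
	unsigned max = 200;          /* initial per-loop budget, e.g. runqueue-depth */
	unsigned tc0, tc1;

	tc0 = (max + 2) / 3;         /* TC0 (urgent): ~33% of the initial budget (assumed share) */
	max -= tc0;                  /* suppose the class consumed its full share */

	tc1 = (3 * max + 3) / 4;     /* TC1 (normal): 3/4 of what remains -> ~50% overall */
	max -= tc1;

	/* TC2 (bulk) runs with whatever is left: ~17% overall */
	printf("TC0=%u TC1=%u TC2=%u\n", tc0, tc1, max); /* prints TC0=67 TC1=100 TC2=33 */
	return 0;
}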
|
|
|
/* run the regular tasks with 3/4 of the remaining budget (i.e. between 50% and 75% of the initial budget) */
|
|
|
|
|
done = run_tasks_from_list(&tt->tasklets[TL_NORMAL], (3 * max_processed + 3) / 4);
|
2020-01-30 12:37:28 -05:00
|
|
|
max_processed -= done;
|
|
|
|
|
|
2020-01-31 11:55:09 -05:00
|
|
|
/* run the bulk tasklets with whatever budget remains (between ~17% and 100% of the initial budget) */
|
2020-01-30 12:37:28 -05:00
|
|
|
done = run_tasks_from_list(&tt->tasklets[TL_BULK], max_processed);
|
2020-01-30 12:13:13 -05:00
|
|
|
max_processed -= done;
|
2019-04-12 12:03:41 -04:00
|
|
|
|
2020-01-30 12:37:28 -05:00
|
|
|
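/* note: a bitwise '|' is sufficient here since each !LIST_ISEMPTY()
 * evaluates to 0 or 1; any class left non-empty means the budget ran out.
 */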
if (!LIST_ISEMPTY(&sched->tasklets[TL_URGENT]) |
|
|
|
|
|
!LIST_ISEMPTY(&sched->tasklets[TL_NORMAL]) |
|
|
|
|
|
!LIST_ISEMPTY(&sched->tasklets[TL_BULK]))
|
2019-04-12 12:03:41 -04:00
|
|
|
activity[tid].long_rq++;
|
2006-06-25 20:48:02 -04:00
|
|
|
}
|
|
|
|
|
|
2019-07-12 02:31:17 -04:00
|
|
|
/* create a work list array for <nbthread> threads, using tasks that run
|
|
|
|
|
* function <fct>. The context passed to the function will be the pointer to
|
|
|
|
|
* the thread's work list, which will contain a copy of argument <arg>. The
|
|
|
|
|
* wake up reason will be TASK_WOKEN_OTHER. The pointer to the work_list array
|
|
|
|
|
 * is returned on success, or NULL on failure.
|
|
|
|
|
*/
|
|
|
|
|
struct work_list *work_list_create(int nbthread,
|
|
|
|
|
struct task *(*fct)(struct task *, void *, unsigned short),
|
|
|
|
|
void *arg)
|
|
|
|
|
{
|
|
|
|
|
struct work_list *wl;
|
|
|
|
|
int i;
|
|
|
|
|
|
|
|
|
|
wl = calloc(nbthread, sizeof(*wl));
|
|
|
|
|
if (!wl)
|
|
|
|
|
goto fail;
|
|
|
|
|
|
|
|
|
|
for (i = 0; i < nbthread; i++) {
|
2019-08-08 09:47:21 -04:00
|
|
|
MT_LIST_INIT(&wl[i].head);
|
2019-07-12 02:31:17 -04:00
|
|
|
wl[i].task = task_new(1UL << i);
|
|
|
|
|
if (!wl[i].task)
|
|
|
|
|
goto fail;
|
|
|
|
|
wl[i].task->process = fct;
|
|
|
|
|
wl[i].task->context = &wl[i];
|
|
|
|
|
wl[i].arg = arg;
|
|
|
|
|
}
|
|
|
|
|
return wl;
|
|
|
|
|
|
|
|
|
|
fail:
|
|
|
|
|
work_list_destroy(wl, nbthread);
|
|
|
|
|
return NULL;
|
|
|
|
|
}
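As an illustration, a hypothetical caller could use this API as
follows (my_handler, init_my_wl and my_wl are made-up names; only
work_list_create() and work_list_destroy() come from this file):

/* hypothetical consumer: one work item handler woken per thread */
static struct task *my_handler(struct task *t, void *context, unsigned short state)
{
	struct work_list *wl = context;  /* this thread's slot, as documented above */

	/* ... consume entries queued on wl->head, using wl->arg ... */
	return t;
}

static struct work_list *my_wl;

static int init_my_wl(void)
{
	my_wl = work_list_create(global.nbthread, my_handler, NULL);
	return my_wl ? 0 : -1;  /* released later with work_list_destroy(my_wl, global.nbthread) */
}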
|
|
|
|
|
|
|
|
|
|
/* destroy work list <work> */
|
|
|
|
|
void work_list_destroy(struct work_list *work, int nbthread)
|
|
|
|
|
{
|
|
|
|
|
int t;
|
|
|
|
|
|
|
|
|
|
if (!work)
|
|
|
|
|
return;
|
|
|
|
|
for (t = 0; t < nbthread; t++)
|
|
|
|
|
task_destroy(work[t].task);
|
|
|
|
|
free(work);
|
|
|
|
|
}
|
|
|
|
|
|
2018-12-06 08:05:20 -05:00
|
|
|
/*
|
|
|
|
|
 * Delete every task before running the master polling loop
|
|
|
|
|
*/
|
|
|
|
|
void mworker_cleantasks(void)
|
|
|
|
|
{
|
|
|
|
|
struct task *t;
|
|
|
|
|
int i;
|
2018-12-06 09:14:37 -05:00
|
|
|
struct eb32_node *tmp_wq = NULL;
|
|
|
|
|
struct eb32sc_node *tmp_rq = NULL;
|
2018-12-06 08:05:20 -05:00
|
|
|
|
|
|
|
|
#ifdef USE_THREAD
|
|
|
|
|
/* cleanup the global run queue */
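/* (the iterator is advanced before task_destroy() below, since
 * destroying a task unlinks it from the tree)
 */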
|
2018-12-06 09:14:37 -05:00
|
|
|
tmp_rq = eb32sc_first(&rqueue, MAX_THREADS_MASK);
|
|
|
|
|
while (tmp_rq) {
|
|
|
|
|
t = eb32sc_entry(tmp_rq, struct task, rq);
|
|
|
|
|
tmp_rq = eb32sc_next(tmp_rq, MAX_THREADS_MASK);
|
2019-04-17 16:51:06 -04:00
|
|
|
task_destroy(t);
|
2018-12-06 08:05:20 -05:00
|
|
|
}
|
|
|
|
|
/* cleanup the timers queue */
|
2018-12-06 09:14:37 -05:00
|
|
|
tmp_wq = eb32_first(&timers);
|
|
|
|
|
while (tmp_wq) {
|
|
|
|
|
t = eb32_entry(tmp_wq, struct task, wq);
|
|
|
|
|
tmp_wq = eb32_next(tmp_wq);
|
2019-04-17 16:51:06 -04:00
|
|
|
task_destroy(t);
|
2018-12-06 08:05:20 -05:00
|
|
|
}
|
|
|
|
|
#endif
|
|
|
|
|
/* clean the per thread run queue */
|
|
|
|
|
for (i = 0; i < global.nbthread; i++) {
|
2018-12-06 09:14:37 -05:00
|
|
|
tmp_rq = eb32sc_first(&task_per_thread[i].rqueue, MAX_THREADS_MASK);
|
|
|
|
|
while (tmp_rq) {
|
|
|
|
|
t = eb32sc_entry(tmp_rq, struct task, rq);
|
|
|
|
|
tmp_rq = eb32sc_next(tmp_rq, MAX_THREADS_MASK);
|
2019-04-17 16:51:06 -04:00
|
|
|
task_destroy(t);
|
2018-12-06 08:05:20 -05:00
|
|
|
}
|
|
|
|
|
/* cleanup the per thread timers queue */
|
2018-12-06 09:14:37 -05:00
|
|
|
tmp_wq = eb32_first(&task_per_thread[i].timers);
|
|
|
|
|
while (tmp_wq) {
|
|
|
|
|
t = eb32_entry(tmp_wq, struct task, wq);
|
|
|
|
|
tmp_wq = eb32_next(tmp_wq);
|
2019-04-17 16:51:06 -04:00
|
|
|
task_destroy(t);
|
2018-12-06 08:05:20 -05:00
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2018-11-26 10:31:20 -05:00
|
|
|
/* perform minimal initializations */
|
|
|
|
|
static void init_task(void)
|
2009-03-07 11:25:21 -05:00
|
|
|
{
|
2018-05-18 12:38:23 -04:00
|
|
|
int i;
|
|
|
|
|
|
2018-06-06 08:22:03 -04:00
|
|
|
#ifdef USE_THREAD
|
2018-10-15 08:52:21 -04:00
|
|
|
memset(&timers, 0, sizeof(timers));
|
2009-03-07 11:25:21 -05:00
|
|
|
memset(&rqueue, 0, sizeof(rqueue));
|
2018-06-06 08:22:03 -04:00
|
|
|
#endif
|
2018-10-15 10:12:48 -04:00
|
|
|
memset(&task_per_thread, 0, sizeof(task_per_thread));
|
2018-05-18 12:45:28 -04:00
|
|
|
for (i = 0; i < MAX_THREADS; i++) {
|
2020-01-30 12:37:28 -05:00
|
|
|
LIST_INIT(&task_per_thread[i].tasklets[TL_URGENT]);
|
|
|
|
|
LIST_INIT(&task_per_thread[i].tasklets[TL_NORMAL]);
|
|
|
|
|
LIST_INIT(&task_per_thread[i].tasklets[TL_BULK]);
|
2019-10-11 10:35:01 -04:00
|
|
|
MT_LIST_INIT(&task_per_thread[i].shared_tasklet_list);
|
2018-05-18 12:45:28 -04:00
|
|
|
}
|
2009-03-07 11:25:21 -05:00
|
|
|
}
|
|
|
|
|
|
2018-11-26 10:31:20 -05:00
|
|
|
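/* register init_task() to run during the STG_PREPARE init stage at startup */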
INITCALL0(STG_PREPARE, init_task);
|
|
|
|
|
|
2006-06-25 20:48:02 -04:00
|
|
|
/*
|
|
|
|
|
* Local variables:
|
|
|
|
|
* c-indent-level: 8
|
|
|
|
|
* c-basic-offset: 8
|
|
|
|
|
* End:
|
|
|
|
|
*/
|