dns_cache_flush() drops the old database and creates a new one, but it
forgets to create the task(s) that run node pruning and that clean the
rbtdb the next time it is flushed. As a result, cleaning skips the
parent nodes (those with .down == NULL), leading to increased memory
usage over time until the database can no longer keep up and stays
overmem permanently.
(cherry picked from commit d4bc4e5cc6)
Previously, rbtdb->task had a quantum of 1 because it was originally
used just for freeing RBTDB contents, which can happen on a "best
effort" basis (it does not need to be prioritized). However, when tree
pruning was implemented, it also started sending events to that task,
which allowed the task to become clogged with a significant event
backlog because it only pruned a single RBTDB node per event.
To prioritize tree pruning (which is necessary for enforcing the
configured memory use limit for the cache memory context), create a
second task with a virtually unlimited quantum (UINT_MAX) and send the
tree-pruning events to that new task, ensuring that all nodes already
scheduled for pruning are processed before further nodes are queued.
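The effect of the quantum on event backlog can be sketched with a toy
task model (the Task class, its quantum handling, and the event counts
are illustrative simplifications, not the BIND 9 taskmgr API):

```python
from collections import deque

class Task:
    """Toy model of a task: 'quantum' caps how many events are
    dispatched per scheduling pass."""
    def __init__(self, quantum):
        self.quantum = quantum
        self.events = deque()

    def send(self, event):
        self.events.append(event)

    def run_once(self):
        done = 0
        while self.events and done < self.quantum:
            self.events.popleft()()   # dispatch one event
            done += 1
        return done

# A quantum-1 task drains one event per pass and backlogs easily;
# a large-quantum task processes its whole queue in a single pass.
slow = Task(quantum=1)
fast = Task(quantum=2**32 - 1)   # stand-in for UINT_MAX
for _ in range(100):
    slow.send(lambda: None)
    fast.send(lambda: None)
assert slow.run_once() == 1 and len(slow.events) == 99
assert fast.run_once() == 100 and len(fast.events) == 0
```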
This change enables dropping the prunenodes list and restoring the
originally-used logic that allocates and sends a separate event for each
node to prune.
(cherry picked from commit 540a5b5a2c)
Reconstruct the variant of the prune_tree() parent cleaning that
considers all eligible parents in a single loop, as was done before all
the changes that led to this commit.
Update code comments so that they more precisely describe what the
relevant bits of code actually do.
(cherry picked from commit 12c42a6c07)
Commit 37101c7c8a checks the prunelink
member of the node that was just pruned, not of the parent node that
was meant to be examined. Fix by checking the prunelink member of the
parent node, so that adding the parent to its relevant prunenodes list
twice is properly guarded against.
(cherry picked from commit 7d9be24bb1)
If a node cleaned up by prune_tree() happens to belong to the same node
bucket as its parent, the latter is directly appended to the prunenodes
list currently processed by prune_tree(). However, the relevant code
branch does not account for the fact that the parent might already be on
the list it is trying to append it to. Fix by only calling
ISC_LIST_APPEND() for parent nodes not yet added to their relevant
prunenodes list.
(cherry picked from commit 4b6fc97af6)
Commit 801e888d03 made the prune_tree()
function use send_to_prune_tree() for triggering pruning of deleted leaf
nodes' parents. This enabled the following sequence of events to
happen:
1. Node A, which is a leaf node, is passed to send_to_prune_tree() and
its pruning is queued.
2. Node B is added to the RBTDB as a child of node A before the latter
gets pruned.
3. Node B, which is now a leaf node itself (and is likely to belong to
a different node bucket than node A), is passed to
send_to_prune_tree() and its pruning gets queued.
4. Node B gets pruned. Its parent, node A, now becomes a leaf again
and therefore the prune_tree() call that handled node B calls
send_to_prune_tree() for node A.
5. Since node A was already queued for pruning in step 1 (but not yet
pruned), the INSIST(!ISC_LINK_LINKED(node, prunelink)); assertion
fails for node A in send_to_prune_tree().
The above sequence of events is not a sign of pathological behavior.
Replace the assertion check with a conditional early return from
send_to_prune_tree().
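The early-return guard and the five-step sequence above can be sketched
as follows (the 'queued' flag models ISC_LINK_LINKED(node, prunelink);
the dict-based node is purely illustrative):

```python
def send_to_prune_tree(queue, node):
    """Queue a node for pruning; if it is already queued, return
    instead of asserting."""
    if node["queued"]:
        return            # was: INSIST(!ISC_LINK_LINKED(node, prunelink))
    node["queued"] = True
    queue.append(node)

queue = []
node_a = {"name": "A", "queued": False}
send_to_prune_tree(queue, node_a)   # step 1: node A queued
# steps 2-4: child B is added, then pruned, making A a leaf again,
# so prune_tree() tries to queue A a second time:
send_to_prune_tree(queue, node_a)   # step 5: now a harmless no-op
assert len(queue) == 1
```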
(cherry picked from commit f6289ad931)
It was discovered that TTL-based cleaning could build up a significant
backlog of rdataset headers during periods when the top of the TTL heap
had not yet expired. Make TTL-based cleaning more aggressive by
cleaning more headers from the heap whenever a new header is added to
the RBTDB.
(cherry picked from commit d8220ca4ca)
(cherry picked from commit 496fe6bc60)
It was discovered that an expired header could sit on top of the heap a
little longer than desirable. Remove expired headers (headers with
rdh_ttl set to 0) from the heap completely, so they do not block the
next TTL-based cleaning.
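The two heap changes above (opportunistic cleaning on insert, and fully
removing expired entries from the heap) can be sketched with a plain
binary heap (function name, the extra_clean count, and the tuple layout
are illustrative, not the actual RBTDB parameters):

```python
import heapq

def add_header(heap, now, ttl, name, extra_clean=2):
    """Insert a header and opportunistically expire up to
    'extra_clean' headers whose TTL deadline has passed."""
    cleaned = []
    for _ in range(extra_clean):
        if heap and heap[0][0] <= now:
            cleaned.append(heapq.heappop(heap)[1])  # fully remove expired
        else:
            break
    heapq.heappush(heap, (now + ttl, name))
    return cleaned

heap = []
add_header(heap, now=0, ttl=1, name="a")
add_header(heap, now=0, ttl=5, name="b")
# At now=10 both earlier headers are expired; inserting "c" evicts
# them from the heap so they cannot block the next cleaning pass.
assert add_header(heap, now=10, ttl=5, name="c") == ["a", "b"]
assert [n for _, n in heap] == ["c"]
```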
(cherry picked from commit a9383e4b95)
(cherry picked from commit abe080d16e)
The log message for commit c3377cbfaa
explained:
    Instead of issuing a separate isc_task_send() call for every RBTDB node
    that triggers tree pruning, maintain a list of nodes from which tree
    pruning can be started from and only issue an isc_task_send() call if
    pruning has not yet been triggered by another RBTDB node.
    The extra queuing overhead eliminated by this change could be remotely
    exploited to cause excessive memory use.
However, it turned out that having a single queue for the nodes to be
pruned increased lock contention to a level where cleaning up nodes from
the RBTDB took too long, causing the amount of memory used by the cache
to grow indefinitely over time.
This commit makes the prunenodes list bucketed, adds a quantum of 10
items per prune_tree() run, and simplifies parent node cleaning in the
prune_tree() logic.
Instead of juggling node locks in a cycle, only clean up the node
currently being pruned and queue its parent (if it is also eligible) for
pruning in the same way (by sending an event).
This simplifies the code and also spreads the pruning load across more
task loop ticks, which is better for lock contention as fewer things
run in a tight loop.
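The quantum-limited prune_tree() run described above might look roughly
like this (PRUNE_QUANTUM, the deque bucket, and the reschedule callback
are illustrative stand-ins, not the actual RBTDB interfaces):

```python
from collections import deque

PRUNE_QUANTUM = 10  # items handled per prune_tree() run (per the commit)

def prune_tree(bucket, reschedule):
    """Process at most PRUNE_QUANTUM queued nodes from one bucket's
    prunenodes list, rescheduling itself if work remains."""
    for _ in range(PRUNE_QUANTUM):
        if not bucket:
            return
        node = bucket.popleft()
        # ... clean up 'node'; queue its parent here if it became
        # eligible for pruning (omitted in this sketch) ...
    if bucket:
        reschedule(bucket)   # more work: send another pruning event

runs = []
bucket = deque(range(25))
prune_tree(bucket, lambda b: runs.append(len(b)))   # handles 10, 15 left
prune_tree(bucket, lambda b: runs.append(len(b)))   # handles 10, 5 left
prune_tree(bucket, lambda b: runs.append(len(b)))   # drains the rest
assert runs == [15, 5] and not bucket
```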
(cherry picked from commit 2df147cb12)
dns__cacherbt_expireheader() can unlink/free header_prev underneath us.
Instead, use ISC_LIST_TAIL after each dns__cacherbt_expireheader() call
to obtain the next header to be processed.
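The re-fetch-the-tail pattern can be sketched with a plain list (the
'expire' callable is a stand-in for dns__cacherbt_expireheader(); the
list models the intrusive header list):

```python
def expire_headers(headers, expire):
    """Walk the list from the tail, re-fetching the tail after each
    expiration because expire() may unlink or free the neighbouring
    entry.  Do NOT keep a saved 'header_prev' across calls."""
    while headers:
        header = headers[-1]          # models ISC_LIST_TAIL
        if not expire(headers, header):
            break

def expire(lst, h):
    lst.remove(h)
    # Simulate expire() also unlinking a sibling header ("b"),
    # which would invalidate a saved header_prev pointer.
    if h == "c" and "b" in lst:
        lst.remove("b")
    return True

headers = ["a", "b", "c"]
expire_headers(headers, expire)
assert headers == []
```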
(cherry picked from commit 7ce2e86024)
We were missing a test in which a single owner name has multiple types
with different case. The generated RRSIGs and NSEC records then have a
different case than the signed records, and the message parser has to
cope with that and treat everything as the same owner.
(cherry picked from commit a114042059)
Case-insensitive matching in isc_ht was effectively broken: only the
hashvalue computation was case-insensitive, while the key comparison
was always case-sensitive.
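The bug class is easy to reproduce in a toy hash table: a
case-insensitive table needs BOTH a case-folded hash AND a case-folded
key comparison (the class below is illustrative, not the isc_ht API):

```python
class CaseInsensitiveHT:
    """Toy chained hash table with case-insensitive keys."""
    def __init__(self):
        self.buckets = [[] for _ in range(8)]

    def _bucket(self, key):
        # Case-insensitive hash: same-name keys land in one bucket.
        return self.buckets[hash(key.lower()) % 8]

    def put(self, key, value):
        self._bucket(key).append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k.lower() == key.lower():   # the comparison that was
                return v                   # case-sensitive in the bug
        return None

ht = CaseInsensitiveHT()
ht.put("example.COM", 1)
# With a case-sensitive comparison this lookup would miss:
assert ht.get("EXAMPLE.com") == 1
```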
(cherry picked from commit 175655b771)
Don't parse the crypto data before parsing and matching the id and the
algorithm of consecutive DNSKEYs. This way, the RData is parsed only
when the other parameters match, letting us skip keys that are of no
interest but would still consume precious CPU time by parsing possibly
garbage data with OpenSSL.
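The cheap-selectors-first ordering might be sketched like this (the
function, dict layout, and the parse_rdata callback are illustrative;
parse_rdata stands in for the OpenSSL-backed DNSKEY parsing):

```python
def find_matching_keys(rrset, want_id, want_alg, parse_rdata):
    """Check the cheap selectors (key id, algorithm) first and only
    call the expensive parser for candidates that match."""
    for key in rrset:
        if key["id"] != want_id or key["alg"] != want_alg:
            continue                  # skip without touching crypto
        yield parse_rdata(key)

parsed = []
def parse_rdata(key):
    parsed.append(key["id"])          # record each expensive parse
    return key

rrset = [{"id": 1, "alg": 8}, {"id": 2, "alg": 8}, {"id": 2, "alg": 13}]
matches = list(find_matching_keys(rrset, 2, 8, parse_rdata))
assert parsed == [2]                  # only one expensive parse ran
assert len(matches) == 1
```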
(cherry picked from commit f39cd17a26)
Remember the position in the iterator when selecting the next signing
key. This should speed up processing of larger DNSKEY RRsets because we
no longer have to iterate from the start over and over again.
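The remembered-position idea can be sketched as a wrap-around scan (the
function, the 'matches' predicate, and the returned position are
illustrative, not the dns_rdataset iterator API):

```python
def next_signing_key(keys, pos, matches):
    """Resume the DNSKEY scan at the remembered position instead of
    the RRset start, wrapping around at most once."""
    n = len(keys)
    for i in range(n):
        j = (pos + i) % n
        if matches(keys[j]):
            return keys[j], j + 1   # remember position for next call
    return None, pos

keys = ["k0", "k1", "k2", "k3"]
key, pos = next_signing_key(keys, 0, lambda k: k == "k2")
assert (key, pos) == ("k2", 3)
# The next lookup starts at the remembered position, not at k0:
key, pos = next_signing_key(keys, pos, lambda k: k == "k3")
assert (key, pos) == ("k3", 4)
```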
(cherry picked from commit 21af5c9a97)
Change the taskmgr (and thus the netmgr) so that it supports fast and
slow task queues. The fast queue is used for incoming DNS traffic; it
passes processing to the slow queue for sending outgoing DNS messages
and for processing resolver messages.
In the future, more tasks might be moved to the slow queues, so that
cached and authoritative DNS traffic can be handled without being
slowed down by operations that take longer to process.
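A minimal model of the two-tier dispatch (the class, method names, and
event labels are illustrative, not the netmgr API): the fast queue is
always drained before slow-queue work runs.

```python
from collections import deque

class TwoTierDispatcher:
    """Toy fast/slow queue pair: fast events always run first."""
    def __init__(self):
        self.fast = deque()
        self.slow = deque()

    def dispatch(self):
        order = []
        while self.fast or self.slow:
            q = self.fast if self.fast else self.slow
            order.append(q.popleft()())
        return order

mgr = TwoTierDispatcher()
mgr.slow.append(lambda: "send-response")     # outgoing / resolver work
mgr.fast.append(lambda: "read-query")        # incoming DNS traffic
mgr.fast.append(lambda: "read-query-2")
order = mgr.dispatch()
assert order == ["read-query", "read-query-2", "send-response"]
```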
(cherry picked from commit 1b3b0cef22)
The fix for CVE-2023-4408 introduced a regression in the message
parser, which could cause a crash if an rdata type that can only
occur in the question was found in another section.
(cherry picked from commit 510f1de8a6)
The fix for CVE-2023-4408 introduced a regression in the message
parser, which could cause a crash if duplicate rdatasets were found
in the question section. This commit ensures that rdatasets are
correctly disassociated and freed when this occurs.
(cherry picked from commit 4c19d35614)
The logic contained in dangerfile.py incorrectly warns about missing
release note changes for merge requests preparing release documentation
as such merge requests rename files in the doc/notes/ directory. This
(correctly) causes these files to be passed to dangerfile.py via
danger.git.created_files and danger.git.deleted_files rather than via
danger.git.modified_files, which in turn causes the logic checking the
use of the "Release Notes" label to assume that no release notes are
added, removed, or modified by a given merge request.
Fix by considering all types of file changes (modifications, additions,
and removals, which also covers file renaming) when checking whether a
given merge request modifies release notes. Update the warning messages
accordingly.
However, when trying to find release notes added by a given merge
request, deleted files must not be considered. Tweak the logic looking
for GitLab identifiers in the release notes added by a given merge
request so that it only scans modified and added (or renamed) files.
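The two rules above (count every change type, but never scan deleted
files) might look roughly like this (the function and variable names
are illustrative, not the actual dangerfile.py code):

```python
def release_note_changes(created, deleted, modified):
    """Return whether a merge request touches release notes, and
    which note files are safe to scan for GitLab identifiers."""
    def notes(paths):
        return [p for p in paths if p.startswith("doc/notes/")]

    # All change types count when deciding whether notes are touched
    # (renames show up as a created file plus a deleted file).
    touches_notes = bool(notes(created) + notes(deleted) + notes(modified))
    # Deleted files must not be scanned for identifiers.
    scannable = notes(created) + notes(modified)
    return touches_notes, scannable

touched, scan = release_note_changes(
    created=["doc/notes/notes-new.rst"],
    deleted=["doc/notes/notes-current.rst"],
    modified=[],
)
assert touched is True
assert scan == ["doc/notes/notes-new.rst"]
```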
(cherry picked from commit 0fec404c64)