Commit graph

13509 commits

Author SHA1 Message Date
Michał Kępień
c340f9d6e4 Merge tag 'v9.16.49' into bind-9.16 2024-03-20 14:37:45 +01:00
Mark Andrews
3bcd6385d4 Only call memmove if the rdata length is non-zero
This avoids undefined behaviour on zero-length rdata, where the
data pointer is NULL.

(cherry picked from commit 228cc557fe)
2024-03-14 11:06:25 +11:00
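The guard described above can be sketched in isolation (a hypothetical copy_rdata() helper, not BIND's actual code): calling memmove() with a NULL pointer is undefined behavior even when the length is zero, so the call is skipped entirely for zero-length rdata.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper: for zero-length rdata the source pointer may be
 * NULL, and memmove() with a NULL argument is undefined behavior even
 * when the length is 0, so only call it for non-zero lengths. */
static void
copy_rdata(unsigned char *dst, const unsigned char *src, unsigned int len) {
	if (len != 0) {
		memmove(dst, src, len);
	}
}
```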
Michał Kępień
b138931fc0
Account for changes to struct dns_rbtnode
Commit eba7fb5f9f modified the definition
of struct dns_rbtnode.  Doing that changes the layout of map-format zone
files.  Bump MAPAPI and update the offsets used in map-format zone file
checks in the "masterformat" system test, as these changes were
inadvertently omitted from the aforementioned change.

(cherry picked from commit 52fe0b6be7)
2024-03-07 09:57:48 +01:00
Michał Kępień
52fe0b6be7
Account for changes to struct dns_rbtnode
Commit 540a5b5a2c modified the definition
of struct dns_rbtnode.  Doing that changes the layout of map-format zone
files.  Bump MAPAPI and update the offsets used in map-format zone file
checks in the "masterformat" system test, as these changes were
inadvertently omitted from the aforementioned change.
2024-03-07 09:42:38 +01:00
Ondřej Surý
5f98eba608
Move the task creation into cache_create_db()
The dns_cache_flush() drops the old database and creates a new one, but
it forgets to create the task(s) that run the node pruning and clean up
the rbtdb the next time it is flushed.  This causes the cleaning to skip
the parent nodes (with .down == NULL), leading to increased memory usage
over time until the database is unable to keep up and just stays overmem
all the time.

(cherry picked from commit d4bc4e5cc6)
2024-03-06 19:17:32 +01:00
Ondřej Surý
eba7fb5f9f
Create a second pruning task for rbtdb with unlimited quantum
Previously, rbtdb->task had a quantum of 1 because it was originally used
just for freeing RBTDB contents, which can happen on a "best effort"
basis (does not need to be prioritized).  However, when tree pruning was
implemented, it also started sending events to that task, enabling the
latter to become clogged up with a significant event backlog because it
only pruned a single RBTDB node per event.

To prioritize tree pruning (as it is necessary for enforcing the
configured memory use limit for the cache memory context), create a
second task with a virtually unlimited quantum (UINT_MAX) and send the
tree-pruning events to this new task, to ensure that all nodes scheduled
for pruning will be processed before further nodes are queued in a
similar fashion.

This change enables dropping the prunenodes list and restoring the
originally-used logic that allocates and sends a separate event for each
node to prune.

(cherry picked from commit 540a5b5a2c)
2024-03-06 19:17:32 +01:00
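The effect of the quantum can be illustrated with a toy model (illustrative names, not the real isc_task API): a task may process at most `quantum` queued events per scheduler tick, so a quantum of 1 drains a backlog one event per tick, while UINT_MAX drains it in a single tick.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Toy model of a task "quantum": the maximum number of queued events a
 * task may process in one scheduler tick.  Illustrative only; this is
 * not the real isc_task API. */
typedef struct {
	unsigned int quantum; /* events allowed per tick */
	size_t backlog;	      /* events currently queued */
} toy_task_t;

/* Run one scheduler tick; returns the number of events processed. */
static size_t
toy_task_tick(toy_task_t *task) {
	size_t n = task->backlog;
	if (n > task->quantum) {
		n = task->quantum;
	}
	task->backlog -= n;
	return n;
}
```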
Ondřej Surý
a548312191
Restore the parent cleaning logic in prune_tree()
Reconstruct the variant of the prune_tree() parent cleaning to consider
all eligible parents in a single loop as we were doing before all the
changes that led to this commit.

Update code comments so that they more precisely describe what the
relevant bits of code actually do.

(cherry picked from commit 12c42a6c07)
2024-03-06 19:17:32 +01:00
Ondřej Surý
d4bc4e5cc6
Move the task creation into cache_create_db()
The dns_cache_flush() drops the old database and creates a new one, but
it forgets to create the task(s) that run the node pruning and clean up
the rbtdb the next time it is flushed.  This causes the cleaning to skip
the parent nodes (with .down == NULL), leading to increased memory usage
over time until the database is unable to keep up and just stays overmem
all the time.

(cherry picked from commit 79040a669c)
2024-03-06 18:43:49 +01:00
Ondřej Surý
540a5b5a2c
Create a second pruning task for rbtdb with unlimited quantum
Previously, rbtdb->task had a quantum of 1 because it was originally used
just for freeing RBTDB contents, which can happen on a "best effort"
basis (does not need to be prioritized).  However, when tree pruning was
implemented, it also started sending events to that task, enabling the
latter to become clogged up with a significant event backlog because it
only pruned a single RBTDB node per event.

To prioritize tree pruning (as it is necessary for enforcing the
configured memory use limit for the cache memory context), create a
second task with a virtually unlimited quantum (UINT_MAX) and send the
tree-pruning events to this new task, to ensure that all nodes scheduled
for pruning will be processed before further nodes are queued in a
similar fashion.

This change enables dropping the prunenodes list and restoring the
originally-used logic that allocates and sends a separate event for each
node to prune.

(cherry picked from commit 231b2375e5)
2024-03-06 18:43:49 +01:00
Ondřej Surý
12c42a6c07
Restore the parent cleaning logic in prune_tree()
Reconstruct the variant of the prune_tree() parent cleaning to consider
all eligible parents in a single loop as we were doing before all the
changes that led to this commit.

Update code comments so that they more precisely describe what the
relevant bits of code actually do.

(cherry picked from commit 454c75a33a)
2024-03-06 18:43:49 +01:00
Michał Kępień
0b59306166
Check the prunelink member of the correct node
Commit 37101c7c8a checks the prunelink
member of the node that was just pruned rather than that of its parent
node, which was the one intended to be examined.  Fix by checking the
prunelink member of the parent node, so that adding the latter to its
relevant prunenodes list twice is properly guarded against.

(cherry picked from commit 7d9be24bb1)
2024-03-02 06:37:53 +01:00
Michał Kępień
7d9be24bb1
Check the prunelink member of the correct node
Commit 4b6fc97af6 checks the prunelink
member of the node that was just pruned rather than that of its parent
node, which was the one intended to be examined.  Fix by checking the
prunelink member of the parent node, so that adding the latter to its
relevant prunenodes list twice is properly guarded against.
2024-03-02 06:36:37 +01:00
Michał Kępień
37101c7c8a
Do not re-add a node to the same prunenodes list
If a node cleaned up by prune_tree() happens to belong to the same node
bucket as its parent, the latter is directly appended to the prunenodes
list currently processed by prune_tree().  However, the relevant code
branch does not account for the fact that the parent might already be on
the list it is trying to append it to.  Fix by only calling
ISC_LIST_APPEND() for parent nodes not yet added to their relevant
prunenodes list.

(cherry picked from commit 4b6fc97af6)
2024-03-01 18:19:39 +01:00
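The guard can be sketched with a minimal intrusive list (a stand-in for ISC_LIST/ISC_LINK_LINKED, not the actual rbtdb code): each node records whether it is already linked, and the append becomes a no-op for nodes already queued.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal intrusive-list sketch of the append-once guard; the 'linked'
 * flag stands in for the ISC_LINK_LINKED() check on prunelink. */
typedef struct prune_node {
	struct prune_node *prev, *next;
	bool linked;
} prune_node_t;

typedef struct {
	prune_node_t *head, *tail;
	size_t len;
} prune_list_t;

static void
prune_list_append_once(prune_list_t *list, prune_node_t *node) {
	if (node->linked) {
		return; /* already queued for pruning: do not re-add */
	}
	node->prev = list->tail;
	node->next = NULL;
	if (list->tail != NULL) {
		list->tail->next = node;
	} else {
		list->head = node;
	}
	list->tail = node;
	node->linked = true;
	list->len++;
}
```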
Michał Kępień
4b6fc97af6
Do not re-add a node to the same prunenodes list
If a node cleaned up by prune_tree() happens to belong to the same node
bucket as its parent, the latter is directly appended to the prunenodes
list currently processed by prune_tree().  However, the relevant code
branch does not account for the fact that the parent might already be on
the list it is trying to append it to.  Fix by only calling
ISC_LIST_APPEND() for parent nodes not yet added to their relevant
prunenodes list.
2024-03-01 18:12:37 +01:00
Michał Kępień
cb9928aaeb
Gracefully handle resending a node to prune_tree()
Commit 801e888d03 made the prune_tree()
function use send_to_prune_tree() for triggering pruning of deleted leaf
nodes' parents.  This enabled the following sequence of events to
happen:

 1. Node A, which is a leaf node, is passed to send_to_prune_tree() and
    its pruning is queued.

 2. Node B is added to the RBTDB as a child of node A before the latter
    gets pruned.

 3. Node B, which is now a leaf node itself (and is likely to belong to
    a different node bucket than node A), is passed to
    send_to_prune_tree() and its pruning gets queued.

 4. Node B gets pruned.  Its parent, node A, now becomes a leaf again
    and therefore the prune_tree() call that handled node B calls
    send_to_prune_tree() for node A.

 5. Since node A was already queued for pruning in step 1 (but not yet
    pruned), the INSIST(!ISC_LINK_LINKED(node, prunelink)); assertion
    fails for node A in send_to_prune_tree().

The above sequence of events is not a sign of pathological behavior.
Replace the assertion check with a conditional early return from
send_to_prune_tree().

(cherry picked from commit f6289ad931)
2024-02-29 18:06:12 +01:00
Michał Kępień
f6289ad931
Gracefully handle resending a node to prune_tree()
Commit 2df147cb12 made the prune_tree()
function use send_to_prune_tree() for triggering pruning of deleted leaf
nodes' parents.  This enabled the following sequence of events to
happen:

 1. Node A, which is a leaf node, is passed to send_to_prune_tree() and
    its pruning is queued.

 2. Node B is added to the RBTDB as a child of node A before the latter
    gets pruned.

 3. Node B, which is now a leaf node itself (and is likely to belong to
    a different node bucket than node A), is passed to
    send_to_prune_tree() and its pruning gets queued.

 4. Node B gets pruned.  Its parent, node A, now becomes a leaf again
    and therefore the prune_tree() call that handled node B calls
    send_to_prune_tree() for node A.

 5. Since node A was already queued for pruning in step 1 (but not yet
    pruned), the INSIST(!ISC_LINK_LINKED(node, prunelink)); assertion
    fails for node A in send_to_prune_tree().

The above sequence of events is not a sign of pathological behavior.
Replace the assertion check with a conditional early return from
send_to_prune_tree().
2024-02-29 17:38:52 +01:00
Ondřej Surý
d7e3c782fd
Make the TTL-based cleaning more aggressive
It was discovered that the TTL-based cleaning could build up
a significant backlog of rdataset headers during periods when the top
of the TTL heap isn't expired yet.  Make the TTL-based cleaning more
aggressive by cleaning more headers from the heap when we are adding a
new header into the RBTDB.

(cherry picked from commit d8220ca4ca)
(cherry picked from commit 496fe6bc60)
2024-02-29 16:14:05 +01:00
Ondřej Surý
1082495439
Remove expired rdataset headers from the heap
It was discovered that an expired header could sit on top of the heap
a little longer than desirable.  Remove expired headers (headers with
rdh_ttl set to 0) from the heap completely, so they don't block the next
TTL-based cleaning.

(cherry picked from commit a9383e4b95)
(cherry picked from commit abe080d16e)
2024-02-29 16:14:05 +01:00
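The idea can be sketched with a toy TTL min-heap (illustrative code, not the real isc_heap/rbtdb implementation): entries whose TTL was zeroed on expiry are popped eagerly so they cannot sit on top of the heap and block the next cleaning pass.

```c
#include <assert.h>
#include <stddef.h>

/* Toy TTL min-heap keyed by expiry time; a ttl of 0 marks a fully
 * expired header.  Illustrative only, not the real rbtdb heap code. */
static void
sift_down(unsigned int *heap, size_t n, size_t i) {
	for (;;) {
		size_t l = 2 * i + 1, r = l + 1, s = i;
		if (l < n && heap[l] < heap[s]) {
			s = l;
		}
		if (r < n && heap[r] < heap[s]) {
			s = r;
		}
		if (s == i) {
			break;
		}
		unsigned int tmp = heap[i];
		heap[i] = heap[s];
		heap[s] = tmp;
		i = s;
	}
}

/* Pop every expired (ttl == 0) entry off the top; returns the new size.
 * Leaving them in place would block the next TTL-based cleaning pass. */
static size_t
purge_expired(unsigned int *heap, size_t n) {
	while (n > 0 && heap[0] == 0) {
		heap[0] = heap[--n];   /* move the last entry to the root */
		sift_down(heap, n, 0); /* and restore the heap property */
	}
	return n;
}
```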
Ondřej Surý
496fe6bc60
Make the TTL-based cleaning more aggressive
It was discovered that the TTL-based cleaning could build up
a significant backlog of rdataset headers during periods when the top
of the TTL heap isn't expired yet.  Make the TTL-based cleaning more
aggressive by cleaning more headers from the heap when we are adding a
new header into the RBTDB.

(cherry picked from commit d8220ca4ca)
2024-02-29 16:09:36 +01:00
Ondřej Surý
abe080d16e
Remove expired rdataset headers from the heap
It was discovered that an expired header could sit on top of the heap
a little longer than desirable.  Remove expired headers (headers with
rdh_ttl set to 0) from the heap completely, so they don't block the next
TTL-based cleaning.

(cherry picked from commit a9383e4b95)
2024-02-29 16:09:34 +01:00
Ondřej Surý
801e888d03
Reduce lock contention during RBTDB tree pruning
The log message for commit c3377cbfaa
explained:

    Instead of issuing a separate isc_task_send() call for every RBTDB node
    that triggers tree pruning, maintain a list of nodes from which tree
    pruning can be started from and only issue an isc_task_send() call if
    pruning has not yet been triggered by another RBTDB node.

    The extra queuing overhead eliminated by this change could be remotely
    exploited to cause excessive memory use.

However, it turned out that having a single queue for the nodes to be
pruned increased lock contention to a level where cleaning up nodes from
the RBTDB took too long, causing the amount of memory used by the cache
to grow indefinitely over time.

This commit makes the prunenodes list bucketed, adds a quantum of 10
items per prune_tree() run, and simplifies parent node cleaning in the
prune_tree() logic.

Instead of juggling node locks in a cycle, only clean up the node
currently being pruned and queue its parent (if it is also eligible) for
pruning in the same way (by sending an event).

This simplifies the code and also spreads the pruning load across more
task loop ticks, which is better for lock contention as fewer things run
in a tight loop.

(cherry picked from commit 2df147cb12)
2024-02-29 12:47:25 +01:00
Ondřej Surý
2df147cb12
Reduce lock contention during RBTDB tree pruning
The log message for commit c3377cbfaa
explained:

    Instead of issuing a separate isc_task_send() call for every RBTDB node
    that triggers tree pruning, maintain a list of nodes from which tree
    pruning can be started from and only issue an isc_task_send() call if
    pruning has not yet been triggered by another RBTDB node.

    The extra queuing overhead eliminated by this change could be remotely
    exploited to cause excessive memory use.

However, it turned out that having a single queue for the nodes to be
pruned increased lock contention to a level where cleaning up nodes from
the RBTDB took too long, causing the amount of memory used by the cache
to grow indefinitely over time.

This commit makes the prunenodes list bucketed, adds a quantum of 10
items per prune_tree() run, and simplifies parent node cleaning in the
prune_tree() logic.

Instead of juggling node locks in a cycle, only clean up the node
currently being pruned and queue its parent (if it is also eligible) for
pruning in the same way (by sending an event).

This simplifies the code and also spreads the pruning load across more
task loop ticks, which is better for lock contention as fewer things run
in a tight loop.
2024-02-29 12:18:00 +01:00
Mark Andrews
742dc803a8
Do not use header_prev in expire_lru_headers
dns__cacherbt_expireheader can unlink / free header_prev underneath
it.  Use ISC_LIST_TAIL after calling dns__cacherbt_expireheader
instead to get the next pointer to be processed.

(cherry picked from commit 7ce2e86024)
2024-02-23 12:38:31 +01:00
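The pattern behind the fix can be sketched generically (a stand-in list, not the real ISC_LIST/dns__cacherbt code): because expiring a header may unlink and free its neighbours, the loop re-reads the list tail on every iteration rather than caching a 'prev' pointer across the call.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in doubly-linked LRU list; not the real ISC_LIST macros. */
typedef struct hdr {
	struct hdr *prev, *next;
} hdr_t;

typedef struct {
	hdr_t *head, *tail;
} lru_t;

static void
lru_remove(lru_t *lru, hdr_t *h) {
	if (h->prev != NULL) {
		h->prev->next = h->next;
	} else {
		lru->head = h->next;
	}
	if (h->next != NULL) {
		h->next->prev = h->prev;
	} else {
		lru->tail = h->prev;
	}
	h->prev = h->next = NULL;
}

/* Expire up to 'limit' headers from the LRU tail.  The tail is re-read
 * on every iteration because each expiration unlinks and frees a header
 * (and, in the real code, could free its neighbours too), so a cached
 * 'prev' pointer would dangle. */
static size_t
expire_lru(lru_t *lru, size_t limit) {
	size_t expired = 0;
	while (expired < limit) {
		hdr_t *h = lru->tail; /* re-read the current tail */
		if (h == NULL) {
			break;
		}
		lru_remove(lru, h);
		free(h);
		expired++;
	}
	return expired;
}
```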
Michał Kępień
b56c18b477 BIND 9.16.48
Merge tag 'v9.16.48' into bind-9.16

BIND 9.16.48
2024-02-14 13:41:33 +01:00
Ondřej Surý
f493a83941
Fix case insensitive matching in isc_ht hash table implementation
The case insensitive matching in isc_ht was basically completely broken
as only the hashvalue computation was case insensitive, but the key
comparison was always case sensitive.

(cherry picked from commit 175655b771)
2024-02-11 11:57:58 +01:00
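The invariant behind the fix can be shown with a small sketch (djb2 hashing here is purely illustrative, not isc_ht's actual hash): in case-insensitive mode, case must be folded in both the hash and the key comparison; folding only the hash lands equal keys in the same bucket, yet the lookup still never matches.

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* Case-folding hash (djb2 is illustrative, not isc_ht's actual hash). */
static unsigned long
ci_hash(const unsigned char *key, size_t len) {
	unsigned long hash = 5381;
	for (size_t i = 0; i < len; i++) {
		hash = hash * 33 + (unsigned long)tolower(key[i]);
	}
	return hash;
}

/* The comparison must fold case too; without this, lookups of equal
 * keys with different case hash to the same bucket but never match. */
static int
ci_equal(const unsigned char *a, const unsigned char *b, size_t len) {
	for (size_t i = 0; i < len; i++) {
		if (tolower(a[i]) != tolower(b[i])) {
			return 0;
		}
	}
	return 1;
}
```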
Ondřej Surý
f654ed7a05
Optimize cname_and_other_data to stop as early as possible
Stop the cname_and_other_data processing as soon as we know that the
result is true.  Also, since a CNAME is always placed among the priority
headers, we can stop looking for one once we are past the priority
headers without having found it.

(cherry picked from commit 3f774c2a8a)
2024-02-08 09:42:52 +01:00
Ondřej Surý
8ef414a7f3
Optimize the slabheader placement for certain RRTypes
Mark the infrastructure RRTypes as "priority" types and place them at
the beginning of the rdataslab header data graph.  The non-priority
types go right after the priority types (if any).

(cherry picked from commit 3ac482be7f)
2024-02-08 09:42:48 +01:00
Ondřej Surý
51d6488e82
Fix missing RRSIG for CNAME with different slabheader order
The cachedb was missing a piece of code (already present in zonedb),
which caused lookups in the slabheaders to miss the RRSIGs for CNAME if
the order of CNAME and RRSIG(CNAME) was reversed in the node->data.

(cherry picked from commit 5070c7f5c7)
2024-02-08 09:42:42 +01:00
Michal Nowak
bab1aa9666
prep 9.16.47 2024-02-02 11:19:57 +01:00
Ondřej Surý
a520fbc047
Optimize selecting the signing key
Don't parse the crypto data before parsing and matching the id and the
algorithm of consecutive DNSKEYs.  This way, the RData is parsed only
when the other parameters match, allowing us to skip keys that are of
no interest to us but would still consume precious CPU time by making
OpenSSL parse possibly garbage data.

(cherry picked from commit f39cd17a26)
2024-02-01 21:51:07 +01:00
Ondřej Surý
3d206e918b
Don't iterate from the start every time we select a new signing key
Remember the position in the iterator when selecting the next signing
key.  This should speed up processing for larger DNSKEY RRSets because
we don't have to iterate from the start over and over again.

(cherry picked from commit 21af5c9a97)
2024-02-01 21:51:07 +01:00
Mark Andrews
6a65a42528
Fail processing incoming DNS message on first validation failure
Stop processing the DNS validation when first validation failure occurs
in the DNS message.

(cherry picked from commit 0add293477)
2024-02-01 21:51:07 +01:00
Mark Andrews
751b7cc475
Skip revoked keys when selecting DNSKEY in the validation loop
Don't select revoked keys when iterating through DNSKEYs in the DNSSEC
validation routines.

(cherry picked from commit 439e16e4de)
2024-02-01 21:51:07 +01:00
Ondřej Surý
c12608ca93
Split fast and slow task queues
Change the taskmgr (and thus netmgr) in a way that it supports fast and
slow task queues.  The fast queue is used for incoming DNS traffic and
it will pass the processing to the slow queue for sending outgoing DNS
messages and processing resolver messages.

In the future, more tasks might get moved to the slow queues, so the
cached and authoritative DNS traffic can be handled without being slowed
down by operations that take longer time to process.

(cherry picked from commit 1b3b0cef22)
2024-02-01 21:51:07 +01:00
Evan Hunt
f397ff5bb8
fix another message parsing regression
The fix for CVE-2023-4408 introduced a regression in the message
parser, which could cause a crash if an rdata type that can only
occur in the question was found in another section.

(cherry picked from commit 510f1de8a6)
2024-01-31 16:04:59 +01:00
Evan Hunt
0bbb0065e6
fix a message parsing regression
the fix for CVE-2023-4408 introduced a regression in the message
parser, which could cause a crash if duplicate rdatasets were found
in the question section. this commit ensures that rdatasets are
correctly disassociated and freed when this occurs.

(cherry picked from commit 4c19d35614)
2024-01-31 16:04:59 +01:00
Michał Kępień
c3377cbfaa
Limit isc_task_send() overhead for tree pruning
Instead of issuing a separate isc_task_send() call for every RBTDB node
that triggers tree pruning, maintain a list of nodes from which tree
pruning can be started from and only issue an isc_task_send() call if
pruning has not yet been triggered by another RBTDB node.

The extra queuing overhead eliminated by this change could be remotely
exploited to cause excessive memory use.

As this change modifies struct dns_rbtnode by adding a new 'prunelink'
member to it, bump MAPAPI to prevent any attempts of loading map-format
zone files created using older BIND 9 versions.

(cherry picked from commit 24381cc36d)
2024-01-05 12:40:50 +01:00
Mark Andrews
7db2796507
Restore dns64 state during serve-stale processing
If we are in the process of looking for the A records as part of
dns64 processing and the serve-stale timeout triggers, redo the
dns64 changes that had been made to the original qctx.

(cherry picked from commit 1fcc483df1)
2024-01-05 12:24:05 +01:00
Mark Andrews
c732624936
Save the correct result value to resume with nxdomain-redirect
The wrong result value was being saved for resumption with
nxdomain-redirect when performing the fetch.  This led to an assertion
failure when checking that RFC 1918 reverse queries were not leaking to
the global Internet.

(cherry picked from commit 9d0fa07c5e)
2024-01-05 12:10:22 +01:00
Matthijs Mekking
c44965af33
Fix Windows build, remove external symbols
The functions dns_message_find and dns_message_movename have been
removed. Remove the symbols from libdns.def.in to fix the Windows
build.
2024-01-05 11:52:05 +01:00
Ondřej Surý
a4baf32415
Backport isc_ht API changes from BIND 9.18
To prevent allocating large hashtable in dns_message, we need to
backport the improvements to isc_ht API from BIND 9.18+ that includes
support for case insensitive keys and incremental rehashing of the
hashtables.
2024-01-05 11:52:05 +01:00
Ondřej Surý
608707b4f5
Use hashtable when parsing a message
When parsing messages use a hashtable instead of a linear search to
reduce the amount of work done in findname when there's more than one
name in the section.

There are two hashtables:

1) hashtable for owner names - that's constructed for each section when
we hit the second name in the section and destroyed right after parsing
that section;

2) per-name hashtable - for each name in the section, we construct a new
hashtable for that name if there is more than one rdataset for that
particular name.

(cherry picked from commit b8a9631754)
2024-01-05 11:52:05 +01:00
Mark Andrews
ec28eb05db
Address race in dns_tsigkey_find()
Restart the process with a write lock if we discover an expired key
while holding the read lock.

(cherry picked from commit d2ba96488e)
2024-01-05 11:28:25 +01:00
Mark Andrews
9c9adc137c Use 'now' rather than 'inception' in 'add_sigs'
When kasp support was added, 'inception' was used as a proxy for
'now' and resulted in signatures not being generated or the wrong
signatures being generated.  'inception' is the time to be set
in the signatures being generated and is usually in the past to
allow for clock skew.  'now' determines what keys are to be used
for signing.

(cherry picked from commit 6066e41948)
2023-12-19 12:55:03 +11:00
Evan Hunt
0361d6ab70 correctly limit hash resize to RBTDB_GLUE_TABLE_MAX_BITS
Use < instead of <= when testing the new hash bits size,
otherwise it can exceed the limit.

(cherry picked from commit 8f73814469)
2023-12-06 11:45:19 -08:00
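The off-by-one is the classic strict-versus-inclusive bound; a minimal sketch (the cap below is illustrative, not BIND's actual RBTDB_GLUE_TABLE_MAX_BITS value):

```c
#include <assert.h>

/* Illustrative cap; the real constant is RBTDB_GLUE_TABLE_MAX_BITS. */
#define TOY_GLUE_TABLE_MAX_BITS 16u

/* Grow the hash size by one bit, but never past the cap.  The guard
 * must use a strict '<': with '<=' the result could reach cap + 1. */
static unsigned int
toy_grow_bits(unsigned int bits) {
	if (bits < TOY_GLUE_TABLE_MAX_BITS) {
		bits++;
	}
	return bits;
}
```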
Mark Andrews
e5e8e3f226 Adjust comment to have correct message limit value
(cherry picked from commit 560c245971)
2023-12-06 09:06:31 +11:00
Mark Andrews
c9147530fd Adjust message buffer sizes in test code
(cherry picked from commit cbfcdbc199)
2023-12-06 09:06:31 +11:00
Mark Andrews
057c12d29a Check the buffer length in dns_message_renderbegin
The maximum DNS message size is 65535 octets. Check that the buffer
being passed to dns_message_renderbegin does not exceed this as the
compression code assumes that all offsets are no bigger than this.

(cherry picked from commit a069513234)
2023-12-06 09:06:31 +11:00
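A minimal sketch of the check (hypothetical helper name; 65535 is the 16-bit DNS message size limit the commit refers to):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* The largest possible DNS message is 65535 octets (16-bit length). */
#define TOY_DNS_MESSAGE_MAXSIZE 65535u

/* Hypothetical helper: reject render buffers larger than the maximum
 * message size, since the name-compression code assumes that no offset
 * into the message ever exceeds it. */
static bool
toy_renderbuffer_ok(size_t buflen) {
	return buflen <= TOY_DNS_MESSAGE_MAXSIZE;
}
```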
Ondřej Surý
62cf6b2e7f
Deprecate AES algorithm for DNS cookies
The AES algorithm for DNS cookies was being kept for legacy reasons,
and it can be safely removed in the next major release.  Mark it as
deprecated, so that `named-checkconf` prints a warning when it is in
use.

(cherry picked from commit 67d14b0ee5)
2023-12-05 10:56:19 +01:00
Evan Hunt
12c60e9a26 set loadtime during initial transfer of a secondary zone
when transferring in a non-inline-signing secondary for the first time,
we previously never set the value of zone->loadtime, so it remained
zero. this caused a test failure in the statschannel system test,
and that test case was temporarily disabled.  the value is now set
correctly and the test case has been reinstated.

(cherry picked from commit 9643281453)
2023-11-20 09:56:50 -08:00