In HTTP/1.0 and HTTP/1.1, RFC 9112 section 9.6 says the last response
in a connection should include a `Connection: close` header, but the
statschannel server omitted it.
In an HTTP/1.0 response, the statschannel server can sometimes send a
`Connection: keep-alive` header when it is about to close the
connection. This can happen in two ways:
If the first request on a connection is keep-alive and the second
request is not, then _both_ responses have `Connection: keep-alive`
but the connection is (correctly) closed after the second response.
If a single request contains

    Connection: close
    Connection: keep-alive

then RFC 9112 section 9.3 says the keep-alive header is ignored, but
the statschannel sends a spurious keep-alive in its response, though
it correctly closes the connection.
To fix these bugs, make it clearer that `httpd->flags` is part
of the per-request/response state. The Connection: flags are now
described in terms of the effect they have rather than what causes them
to be set.
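A hypothetical illustration of effect-based flags; the names and values
below are assumptions rather than the actual httpd.c definitions:

    /*
     * Hypothetical illustration only: each flag is named after the
     * effect it has on the current response, not after what set it.
     */
    #define HTTPD_CLOSE     0x0001 /* Close the connection after this response. */
    #define HTTPD_KEEPALIVE 0x0002 /* Add "Connection: Keep-Alive" to this response. */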
The isc_result_t enum was too sparse because each library's result codes
started at the next `<< 16` base. Remove the huge holes in the
isc_result_t enum to make the isc_result tables more compact.
This change required a rewrite of how we map dns_rcode_t to isc_result_t
and back, so that we never return an isc_result_t or dns_rcode_t value
outside the defined range.
The mapping functions between isc_result_t and dns_rcode_t could return
both isc_result_t values and dns_rcode_t values not defined in the
headers, because they blindly mapped anything within the full 12-bit
range defined for RCODEs to isc_result_t and back.
Refactor the dns_result_{from,to}rcode() functions to always return
valid isc_result_t and dns_rcode_t values by explicitly mapping the
values to each other and returning DNS_R_SERVFAIL (dns_rcode_servfail)
when encountering a value outside the defined range.
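A minimal sketch of the explicit mapping in one direction, assuming the
usual <dns/rcode.h> and <dns/result.h> constants; only a few cases are
shown and the real function covers the complete defined set:

    isc_result_t
    dns_result_fromrcode(dns_rcode_t rcode) {
            /*
             * Sketch only: map each defined RCODE explicitly; anything
             * outside the defined range falls through to DNS_R_SERVFAIL.
             */
            switch (rcode) {
            case dns_rcode_noerror:
                    return (ISC_R_SUCCESS);
            case dns_rcode_formerr:
                    return (DNS_R_FORMERR);
            case dns_rcode_nxdomain:
                    return (DNS_R_NXDOMAIN);
            case dns_rcode_refused:
                    return (DNS_R_REFUSED);
            /* ... remaining defined RCODEs ... */
            default:
                    return (DNS_R_SERVFAIL);
            }
    }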
The dns_zone_catz_enable_db() and dns_zone_catz_disable_db()
functions can race with similar operations in the catz module
because there is no synchronization between the threads.
Add catz functions which use the view's catalog zones' lock
when registering/unregistering the database update notify callback,
and use those functions in the dns_zone module, instead of doing it
directly.
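A hypothetical sketch of such a wrapper; the function name and the lock
member are assumptions, but it uses the existing
dns_db_updatenotify_register() / dns_catz_dbupdate_callback() API:

    /*
     * Hypothetical sketch: take the catalog zones' lock around the
     * registration so it cannot race with the catz module itself.
     * The function name and the 'lock' member are illustrative.
     */
    void
    dns_catz_dbupdate_register(dns_db_t *db, dns_catz_zones_t *catzs) {
            LOCK(&catzs->lock);
            dns_db_updatenotify_register(db, dns_catz_dbupdate_callback,
                                         catzs);
            UNLOCK(&catzs->lock);
    }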
view->adb may be referenced while the view is shutting down, because the
zone holds a weak reference to the view and examines view->adb, but
dns_view_detach() calls dns_adb_detach() to clear view->adb.
- style fixes and general tidying-up in tkey.c
- remove the unused 'intoken' parameter from dns_tkey_buildgssquery()
- remove an unnecessary call to dns_tkeyctx_create() in ns_server_create()
(the TKEY context that was created there would soon be destroyed and
another one created when the configuration was loaded).
purely to assuage my desire for consistency across modules,
result variables have been renamed to 'result' as they are
throughout most of BIND. there are no other changes.
since it is not necessary to find partial matches when looking
up names in a TSIG keyring, we can use a hash table instead of
an RBT to store them.
the tsigkey object now stores the key name as a dns_fixedname
rather than allocating memory for it.
the `name` parameter to dns_tsigkeyring_add() has been removed;
it was unneeded since the tsigkey object already contains a copy
of the name.
the opportunistic cleanup_ring() function has been removed;
it was only slowing down lookups.
this function was no longer needed, because the algorithm name is no
longer copied into the tsigkey object by dns_tsigkey_createfromkey();
it's always just a pointer to a statically defined name.
the prior practice of passing a dns_name containing the
expanded name of an algorithm to dns_tsigkey_create() and
dns_tsigkey_createfromkey() is unnecessarily cumbersome;
we can now pass the algorithm number instead.
- remove the 'ring' parameter from dns_tsigkey_createfromkey(),
and use dns_tsigkeyring_add() to add key objects to a keyring instead.
- add a magic number to dns_tsigkeyring_t
- change dns_tsigkeyring_dumpanddetach() to dns_tsigkeyring_dump();
we now call dns_tsigkeyring_detach() separately.
- remove 'maxgenerated' from dns_tsigkeyring_t since it never changes.
use the ISC_REFCOUNT attach/detach implementation in dns/tsig.c
so that detailed tracing can be used during refactoring.
dns_tsig_keyring_t has been renamed dns_tsigkeyring_t so the type
and the attach/detach function names will match.
- style cleanups.
- simplify the function parameters to dns_tsigkey_create():
+ remove 'restored' and 'generated', they're only ever set to false.
+ remove 'creator' because it's only ever set to NULL.
+ remove 'inception' and 'expiry' because they're only ever set to
(0, 0) or (now, now), and either way, this means "never expire".
+ remove 'ring' because we can just use dns_tsigkeyring_add() instead.
- rename dns_keyring_restore() to dns_tsigkeyring_restore() to match the
rest of the functions operating on dns_tsigkeyring objects.
The "dns_dnssec_findzonekeys2" log message is a leftover from when that
was the name of the function. Rename to match the current name of the
function.
The find_zone_keys() function was not working properly for
inline-signed zones. It only worked if the DNSKEY records were also
published in the unsigned version of the zone. But this is not the
case when you use dnssec-policy: the DNSKEY records only occur
in the signed version of the zone. Therefore, when looking for keys
to sign the zone, only the keys newly added in the dynamic update
were found (which could be zero), and existing keys were ignored.
Also, if a DNSKEY was added, it would try to sign the zone with just
this new key, and this would only work if the key files for that key
were imported into the key-directory.
This is a design error, because the goal is to sign the zone with the
keys for which we actually have key files. So instead of looking
for DNSKEY records to then search for the matching key files, call
dns_dnssec_findmatchingkeys() which just looks for the keys we have
on disk for the given zone. It will also set the correct DNSSEC
signing hints.
When a catalog zone is updated using AXFR, the zone database is changed,
so it is required to unregister the update notification callback from
the old database, and register it for the new one.
Currently, the steps happen in the following order in such a scenario:
1. The zone.c:zone_startload() function registers the notify callback
on the new database using dns_zone_catz_enable_db()
2. The callback, when called, notices that the new 'db' is different
than 'catz->db', and unregisters the old callback for 'catz->db',
marks that it's unregistered by setting 'catz->db_registered' to
false, then it schedules an update if it isn't already scheduled.
3. The offloaded update process, after completing its job, notices that
'catz->db_registered' is false, and (re)registers the update callback
for the current database it is working on. There is no harm here even
if it was registered also on step 1, and we can't skip it, because
this function can also be called "artificially" during a
reconfiguration, and in that case the registration step is required
here.
A problem arises when an update process was already running before
step 1, operating on the old database, and finishing its
work only after step 2. As described in step 3, dns__catz_update_cb()
notices that 'catz->db_registered' is false and registers the callback
on the current database it is working on, which, at that state, is
already obsolete and unused by the zone. When it detaches the database,
the function which is responsible for its cleanup (e.g. free_rbtdb())
asserts because there is a registered update notify callback there.
To fix the problem, instead of delaying the (re)registration to step 3,
make sure that the new callback is registered and 'catz->db_registered'
is accordingly marked on step 2.
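A minimal sketch of what step 2 looks like with the fix, using the
dns_db_updatenotify_{register,unregister}() calls; the surrounding
locking and error handling are omitted:

    /*
     * Sketch of step 2 with the fix applied: when the callback sees a
     * new database, move the registration to it immediately instead of
     * leaving that to the offloaded update. 'db', 'catz', and 'catzs'
     * come from the dns_catz_dbupdate_callback() context described above.
     */
    if (catz->db != NULL && catz->db != db) {
            dns_db_updatenotify_unregister(catz->db,
                                           dns_catz_dbupdate_callback, catzs);
            dns_db_detach(&catz->db);
            dns_db_attach(db, &catz->db);
            dns_db_updatenotify_register(catz->db,
                                         dns_catz_dbupdate_callback, catzs);
            catz->db_registered = true;
    }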
When cache memory usage is over the configured cache size (overmem) and
we are cleaning unused entries, it might not be enough to clean just two
entries if the entries to be expired are smaller than the newly added
rdata. This could be abused by an attacker to cause a remote Denial of
Service by potentially exhausting the operating system's memory.
Currently, the addrdataset() tries to do a single TTL-based cleaning
considering the serve-stale TTL and then optionally moves to overmem
cleaning if we are in that condition. Then the overmem_purge() tries to
do another single TTL-based cleaning from the TTL heap and then continues
with LRU-based cleaning, removing up to 2 entries.
Squash the TTL-cleaning mechanism into a single call from addrdataset(),
but ignore the serve-stale TTL if we are currently overmem.
Then, instead of having a fixed number of entries to clean, pass the size
of the newly added rdatasetheader to the overmem_purge() function and
clean up at least the size of the newly added data. This prevents the
cache going over the configured memory limit (`max-cache-size`).
Additionally, refactor the overmem_purge() function to reduce for-loop
nesting for readability.
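A hypothetical sketch of the size-driven cleaning loop;
oldest_lru_header(), rdataset_size(), and expire_lru_header() are
illustrative names rather than the actual rbtdb internals:

    /*
     * Hypothetical sketch: keep purging least-recently-used headers
     * until at least as many bytes as the newly added rdatasetheader
     * have been released, instead of stopping after a fixed count.
     */
    static void
    overmem_purge(dns_rbtdb_t *rbtdb, size_t purgesize) {
            size_t purged = 0;

            while (purged < purgesize) {
                    rdatasetheader_t *header = oldest_lru_header(rbtdb);
                    if (header == NULL) {
                            break; /* nothing left to clean */
                    }
                    purged += rdataset_size(header);
                    expire_lru_header(rbtdb, header);
            }
    }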
When create_fetch() in the dns_validator unit detects a deadlock, it
returns DNS_R_NOVALIDSIG, but it did not attach to the validator. The
only other case in which it returns result != ISC_R_SUCCESS is an error
from dns_resolver_createfetch(). The caller (in two places out of three)
would detect the error condition and always detach from the validator.
Move the dns_validator_detach() for the dns_resolver_createfetch() error
condition into the create_fetch() function and clean up the extra
detaches in seek_dnskey() and get_dsset().
This commit changes the send buffer allocation strategy for stream-based
transports. Before this change, we would allocate a dynamic buffer
sized at 64 KB even when we did not need that much. That could lead to
high memory usage on the server. Now we resize the send buffer to match
the size of the actual data, freeing the memory at the end of the
buffer to be reused later.
dns_view_find*() may be called after the final call to dns_view_detach(),
which detaches view->zonetable to permit the server to shut down. We
need to detect if view->zonetable is NULL at this stage and recover
appropriately.
In some cases, the inlined version of rcu_dereference() would not compile
when working on a pointer to an opaque struct (namely on Ubuntu Jammy).
Detect this condition in autoconf and disable the inlining of the small
functions if it breaks the build.
The number of clients per query is calculated using the pending
fetch responses in the list. The dns_resolver_createfetch() function
includes every item in the list when deciding whether the limit is
reached (i.e. fctx->spilled is true). Then, when the limit is reached,
there is another calculation in fctx_sendevents(), when deciding
whether the limit needs to be increased, but this time the TRYSTALE
responses are not included in the calculation (because of early break
from the loop), and because of that the limit is never increased.
A single client can have more than one associated response/event in the
list (currently max. two), and calculating them as separate "clients"
is unexpected. E.g. if 'stale-answer-enable' is enabled and
'stale-answer-client-timeout' is enabled and is larger than 0, then
each client will have two events, which will effectively halve the
clients-per-query limit.
Fix the dns_resolver_createfetch() function to count only the
regular FETCHDONE responses/events.
Change the fctx_sendevents() function to also count only FETCHDONE
responses/events. Currently, this second change doesn't have any impact,
because the TRYSTALE events were already skipped, but having the same
condition in both places will help prevent similar bugs in the future
if a new type of response/event is ever added.
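A hypothetical sketch of the shared counting logic; the list and field
names are illustrative, not the actual resolver.c identifiers:

    /*
     * Hypothetical sketch: count only the regular FETCHDONE events on
     * the fetch context, ignoring TRYSTALE ones, so that the limit
     * check and the limit-increase path use the same definition of a
     * "client". The list and field names are illustrative.
     */
    static unsigned int
    count_fetchdone_events(fetchctx_t *fctx) {
            unsigned int count = 0;

            for (dns_fetchevent_t *event = ISC_LIST_HEAD(fctx->events);
                 event != NULL; event = ISC_LIST_NEXT(event, ev_link))
            {
                    if (event->ev_type == DNS_EVENT_FETCHDONE) {
                            count++;
                    }
            }
            return (count);
    }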
Remove the code implementing nonstandard behaviors that were formerly
needed to allow GSS-TSIG to work with Windows 2000, which passed
End-of-Life in 2010.
Deprecate the "oldgsstsig" command and "-o" command line option
to nsupdate; these are now treated as synonyms for "gsstsig" and "-g"
respectively.
We recently fixed a bug where in some cases (when following an
expired CNAME for example), named could return SERVFAIL if the target
record is still valid (see isc-projects/bind9#3678, and
isc-projects/bind9!7096). We fixed this by considering non-stale
RRsets as well during the stale lookup.
However, this triggered a new bug because despite the answer from
cache not being stale, the lookup may be triggered by serve-stale.
If the answer from the database is not stale, the fix in
isc-projects/bind9!7096 erroneously skips the serve-stale logic.
Add 'answer_found' checks to the serve-stale logic to fix this issue.
isc_mem_put() NULLs the pointer to the memory being freed. The
equality test 'parent->r == node' was accidentally being turned
into a test against NULL.
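A simplified illustration of the pattern (not the actual code from the
tree):

    /*
     * Simplified illustration: because isc_mem_put() sets its pointer
     * argument to NULL after freeing, the comparison below no longer
     * tests against the freed node but against NULL.
     */
    isc_mem_put(mctx, node, sizeof(*node));  /* node is now NULL */
    if (parent->r == node) {                 /* really: parent->r == NULL */
            parent->r = NULL;
    }

    /* The fix is to compare first, then free. */
    if (parent->r == node) {
            parent->r = NULL;
    }
    isc_mem_put(mctx, node, sizeof(*node));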
This commit ensures that access to the TLS context cache within zone
manager is properly synchronised.
Previously there was a possibility for it to get unexpectedly
NULLified for a brief moment by a call to
dns_zonemgr_set_tlsctx_cache() from one thread, while being accessed
from another (e.g. from got_transfer_quota()). This behaviour could
lead to the server abort()ing on configuration reload (under very rare
circumstances).
That behaviour has been fixed.
when a TCP dispatch times out, we call tcp_recv() with a result
value of ISC_R_TIMEDOUT; this cancels the oldest dispatch
entry in the dispatch's active queue, plus any additional entries
that have waited longer than their configured timeouts. if, at
that point, there are more dispatch entries still on the active
queue, it resumes reading, but until now it failed to restart
the timer.
this has been corrected: we now calculate a new timeout
based on the oldest dispatch entry still remaining. this
requires us to initialize the start time of each dispatch entry
when it's first added to the queue.
in order to ensure that the handling of timed-out requests is
consistent, we now calculate the runtime of each dispatch
entry based on the same value for 'now'.
incidentally also fixed a compile error that turned up when
DNS_DISPATCH_TRACE was turned on.
it was possible to add a TSIG key to more than one TSIG
keyring at a time, and this was in fact happening with the
session key, which was generated once and then added to the
keyrings for each view as it was configured.
this has been corrected and a REQUIRE added to dns_tsigkeyring_add()
to prevent it from happening again.
The following code block repeats quite often:

    if (rdata.type == dns_rdatatype_dnskey ||
        rdata.type == dns_rdatatype_cdnskey ||
        rdata.type == dns_rdatatype_cds)

Introduce a new function to reduce the repetition.
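A minimal sketch of such a helper; the name is an assumption for
illustration:

    /*
     * Sketch only; the helper's actual name may differ. Returns true
     * for the key-related types that the repeated condition checks.
     */
    static inline bool
    is_keymaterial_type(dns_rdatatype_t type) {
            return (type == dns_rdatatype_dnskey ||
                    type == dns_rdatatype_cdnskey ||
                    type == dns_rdatatype_cds);
    }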
The raw zone is not supposed to be signed. DNSKEY records in a raw zone
should not trigger zone signing. The update code needs to be able to
identify when it is working on a raw zone. Add dns_zone_israw() and
dns_zone_issecure() to enable it to do this. Also, we need to check the
case for 'auto-dnssec maintain'.
For inline-signing zones, the kasp was sometimes not detected because
the function was called on the raw (unsigned) version of the zone,
but the kasp is only set on the secure (signed) version of the zone.
Fix the dns_zone_getkasp() function to check whether the zone
structure is inline_raw(), and if so, use the kasp from the
secure version.
In zone.c we can access the kasp pointer directly.
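A minimal sketch of the fix, assuming the internal inline_raw() helper
and the zone->secure / zone->kasp fields used in zone.c; the actual
locking may differ:

    dns_kasp_t *
    dns_zone_getkasp(dns_zone_t *zone) {
            dns_kasp_t *kasp = NULL;

            REQUIRE(DNS_ZONE_VALID(zone));

            /* For the raw side of an inline-signed zone, use the
             * kasp that is set on the secure version. */
            if (inline_raw(zone) && zone->secure != NULL) {
                    kasp = zone->secure->kasp;
            } else {
                    kasp = zone->kasp;
            }

            return (kasp);
    }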
When synchronizing the journal or database from the unsigned version of
the zone to the secure version of the zone, allow DNSKEY records to be
synced, because these may be added by the user with the sole intent to
publish the record (not used for signing). This may be the case for
example in the multisigner model 2 (RFC 8901).
Additional code needs to be added to ensure that we do not remove DNSKEY
records that are under our control. Keys under our control are keys that
are used for signing the zone and thus for which we have key files.
The same applies to CDNSKEY and CDS (records that are derived from keys).
Use the current used pointer minus 16 instead of a saved pointer, as
Coverity thinks the memory may be freed between the assignment and the
use of 'cp'. isc_buffer_put{mem,uint{8,16,32}} can theoretically free
the memory if a dynamic buffer is in use, but that is not the case here.
Thread sanitizer warns that parts of the qp-trie are accessed
both with and without the mutex; the unlocked accesses happen during
destruction, so they should be benign, but there's no harm locking
anyway to convince tsan it is clean.
Also, ensure .tsan-suppress and .tsan-suppress-extra are in sync.
RUNTIME_CHECK on the "wrap" variable avoids possible NULL dereference:
thread.c: In function 'thread_wrap':
thread.c:60:15: error: dereference of possibly-NULL 'wrap' [CWE-690] [-Werror=analyzer-possible-null-dereference]
60 | *wrap = (struct thread_wrap){
The RUNTIME_CHECK was there before
7d1ceaf35d.