This commit adds a lengthy test where the ZSK is rolled but the
KSK is offline (except for when the DNSKEY RRset is changed). The
specific scenario has the `dnskey-kskonly` configuration option set,
meaning the DNSKEY RRset should be signed only with the KSK.
A new zone, `updatecheck-kskonly.secure`, is added to test against;
it can be dynamically updated, and loading of its DNSSEC keys can be
controlled with rndc.
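For illustration, a minimal sketch of what the zone definition might
look like in named.conf (the exact statements used by the test are
assumptions):

    zone "updatecheck-kskonly.secure" {
        type master;
        file "updatecheck-kskonly.secure.db";
        // Accept dynamic updates from localhost via the session key.
        update-policy local;
        // Let named maintain DNSKEY/RRSIG records from the key files,
        // which rndc can be told to reload.
        auto-dnssec maintain;
        // Sign the DNSKEY RRset with the KSK only.
        dnskey-kskonly yes;
    };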
The test includes checks to make sure everything is fine before the
ZSK roll, after the new ZSK is published, and after the old ZSK is
deleted. Note that there are actually two ZSK rolls in quick
succession.
When the most recently added ZSK becomes active and its predecessor
becomes inactive, the KSK is offline. However, the DNSKEY RRset has
not changed, and it has a good signature that remains valid for long
enough. The expected behavior is that the DNSKEY RRset stays signed
with the KSK only (the signature does not need to change). However,
the test will fail because, after reconfiguring the keys for the
zone, named wants to add re-sign tasks for the newly active keys (in
sign_apex). Because the KSK is offline, named determines that the
only other active key, the latest ZSK, will be used to re-sign the
DNSKEY RRset, in addition to keeping the RRSIG of the KSK.
The question is: why do we need to re-sign the DNSKEY RRset
immediately when a new key becomes active? This is not required; the
new active key should only replace signatures that are in need of
refreshing once the next re-sign task is triggered.
(cherry picked from commit 8bc10bcf59)
Some system tests assume dig's default settings are in effect. While
these defaults may only be silently overridden (because of specific
options set in /etc/resolv.conf) for BIND releases using liblwres for
parsing /etc/resolv.conf (i.e. BIND 9.11 and older), it is arguably
prudent to make sure that tests relying on specific +timeout and +tries
settings specify these explicitly in their dig invocations, in order to
prevent test failures from being triggered by any potential changes to
current defaults.
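For example, a check that cares about retry behavior might pin both
knobs down explicitly (the server address, port, and values shown
here are placeholders, not those used by any particular test):

    # Do not rely on compiled-in defaults: one try, 5-second timeout.
    dig +tries=1 +timeout=5 @10.53.0.1 -p 5300 example. SOA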
(cherry picked from commit b6cce0fb8b)
Simply looking for the key ID surrounded by spaces in the tested
dnssec-signzone output file is not a precise enough method of checking
for signatures prepared using a given key ID: it can be tripped up by
cross-algorithm key ID collisions and by certain low key IDs (e.g.
60, the TTL specified in
bin/tests/system/dnssec/signer/example.db.in), both of which trigger
false positives for the "dnssec" system test. Make key ID
extraction precise by using an awk script which operates on specific
fields.
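A sketch of the approach, assuming the multi-line RRSIG presentation
format emitted by dnssec-signzone, in which the key tag is the third
field of the line following the record header:

    # Print the key tag of every RRSIG covering DNSKEY: the header
    # line ends with "(", and the continuation line carries the
    # expiration time, inception time, key tag, and signer name.
    awk '$4 == "RRSIG" && $5 == "DNSKEY" { header = 1; next }
         header { print $3; header = 0 }' example.db.signed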
(cherry picked from commit a40c60e4c1)
The "mirror" system test expects all dig queries (including recursive
ones) to be responded to within 1 second, which turns out to be overly
optimistic in certain cases and leads to false positives being
triggered. Increase dig query timeout used throughout the "mirror"
system test to 2 seconds in order to alleviate the issue.
(cherry picked from commit 73afbdc552)
Currently, ns3 in the "mirror" system test sends trust anchor telemetry
queries every second as it is started with "-T tat=1". Given the number
of trust anchors configured on ns3 (9), TAT-related traffic clutters up
log files, hindering troubleshooting efforts. Increase TAT query
interval to 3 seconds in order to alleviate the issue.
Note that the interval chosen cannot be much higher if intermittent test
failures are to be avoided: TAT queries are only sent after the
configured number of seconds passes since resolver startup. Quick
experiments show that even on contemporary hardware, ns3 should be
running for at least 5 seconds before it is first shut down, so a
3-second TAT query interval seems to be a reasonable, future-proof
compromise. Ensure the relevant check is performed before ns3 is
first shut down to emphasize this trade-off and to make it clearer
by what time TAT queries are expected to be sent.
(cherry picked from commit 6847a29b54)
"rndc dumpdb" works asynchronously, i.e. the requested dump may not yet
be fully written to disk by the time "rndc" returns. Prevent false
positives for the "serve-stale" system test by only checking dump
contents after the line indicating that it is complete is written.
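A sketch of such a check (file locations and the exact rndc
invocation are illustrative):

    # named appends "; Dump complete" as the last line of the dump,
    # so only inspect the file once that marker is present.
    $RNDCCMD dumpdb -all
    for i in 1 2 3 4 5 6 7 8 9 10; do
        grep '^; Dump complete$' ns1/named_dump.db > /dev/null && break
        sleep 1
    done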
(cherry picked from commit 6e3f812afc)
bin/tests/system/stop.pl only waits for the PID file to be cleaned up
while named cleans up the lock file after the PID file. Thus, the
aforementioned script may consider a named instance to be fully shut
down when in fact it is not.
Fix by also checking whether the lock file exists when determining a
given instance's shutdown status. This change assumes that if a named
instance uses a lock file, it is called "named.lock".
Also rename clean_pid_file() to pid_file_exists(), so that its name
better reflects what it does (it does not clean up the PID file
itself; it only returns the server's identifier if its PID file has
not yet been cleaned up).
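stop.pl itself is Perl; expressed as a shell sketch, the revised
shutdown test amounts to:

    # An instance only counts as fully stopped once both files are
    # gone; named removes named.lock only after removing the PID file.
    instance_stopped() {
        [ ! -f "$1/named.pid" ] && [ ! -f "$1/named.lock" ]
    }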
(cherry picked from commit c787a539d2)
MR !1141 broke the way stop.pl is invoked when start.pl fails:
- start.pl changes the working directory to $testdir/$server before
attempting to start $server,
- commit 27ee629e6b causes the $testdir
variable in stop.pl to be determined using the $SYSTEMTESTTOP
environment variable, which is set to ".." by all tests.sh scripts,
- commit e227815af5 makes start.pl pass
$test (the test's name) rather than $testdir (the path to the test's
directory) to stop.pl when a given server fails to start.
Thus, when a server is restarted from within a tests.sh script and such
a restart fails, stop.pl attempts to look for the server directory in a
nonexistent location ($testdir/$server/../$test, i.e. $testdir/$test,
instead of $testdir/../$test). Fix the issue by changing the working
directory before stop.pl is invoked in the scenario described above.
(cherry picked from commit 4afad2a047)
On Unix systems, the CYGWIN environment variable is not set at all when
BIND system tests are run. If a named instance crashes on shutdown or
otherwise fails to clean up its pidfile and the CYGWIN environment
variable is not set, stop.pl will print an uninitialized value warning
on standard error. Prevent this by using defined().
(cherry picked from commit 91e5a99b9b)
ifconfig.sh depends on config.guess to guess the platform; it uses
the result to choose between the ifconfig and ip tools when
configuring interfaces. If the local copy of config.guess cannot be
found but a system-wide automake installation provides one, use that
copy for the platform guess. This should work on nearly any sane
platform. The local copy is still preferred, but the script no
longer fails when it cannot find one.
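A sketch of the resulting logic (paths and variable names are
illustrative):

    # Prefer the in-tree config.guess; fall back to a system-wide
    # copy (e.g. one installed by automake) if the local one is
    # missing.
    if [ -f "$TOP/config.guess" ]; then
        plat=$(sh "$TOP/config.guess")
    elif command -v config.guess > /dev/null 2>&1; then
        plat=$(config.guess)
    fi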
(cherry picked from commit 38301052e1)
When a zone is converted from NSEC to NSEC3, the private record at zone
apex indicating that NSEC3 chain creation is in progress may be removed
during a different (later) zone_nsec3chain() call than the one which
adds the NSEC3PARAM record. The "delzsk.example" zone check only waits
for the NSEC3PARAM record to start appearing in dig output while private
records at zone apex directly affect "rndc signing -list" output. This
may trigger false positives for the "autosign" system test as the output
of the "rndc signing -list" command used for checking ZSK deletion
progress may contain extra lines which are not accounted for. Ensure
the private record is removed from zone apex before triggering ZSK
deletion in the aforementioned check.
Also future-proof the ZSK deletion progress check by making it only look
at lines it should care about.
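A sketch of the added synchronization (the exact string matched in
the "rndc signing -list" output is an assumption):

    # Wait for the NSEC3 chain-creation private record to disappear
    # from the zone apex before triggering ZSK deletion.
    for i in 1 2 3 4 5 6 7 8 9 10; do
        $RNDCCMD 10.53.0.3 signing -list delzsk.example |
            grep 'Creating NSEC3 chain' > /dev/null || break
        sleep 1
    done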
(cherry picked from commit e02de04e97)
For checks querying a named instance with "dnssec-accept-expired yes;"
set, authoritative responses have a TTL of 300 seconds. Assuming an
empty resolver cache, the TTLs of RRsets in the ANSWER section of
the first response to a given query will always match their
authoritative counterparts. Also note that for a DNSSEC-validating
named resolver, validated RRsets replace any existing non-validated
RRsets with the same owner name and type, e.g. those cached from
responses received while resolving CD=1 queries. TTL capping happens
before a validated RRset is inserted into the cache, RRSIG expiry
time does not impose an upper TTL bound when "dnssec-accept-expired
yes;" is set, and, as pointed out above, the original TTLs of the
relevant RRsets equal 300 seconds. The TTLs of the RRsets in the
ANSWER section of the responses to expiring.example/SOA and
expired.example/SOA queries sent with CD=0 should therefore always
be exactly 120 seconds, never a lower value. Make the relevant TTL
checks stricter to reflect that.
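A sketch of the stricter check (file and owner names are
illustrative):

    # The ANSWER-section TTL must be exactly 120; no margin needed.
    ttl=$(awk '$1 == "expired.example." && $4 == "SOA" { print $2 }' dig.out.cd0)
    [ "$ttl" -eq 120 ] || echo "unexpected TTL: $ttl"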
(cherry picked from commit a85cc41486)
Always expecting a TTL of exactly 300 seconds for RRsets found in the
ADDITIONAL section of responses received for CD=1 queries sent during
TTL capping checks is too strict since these responses will contain
records cached from multiple DNS messages received during the resolution
process.
In responses to queries sent with CD=1, ns.expiring.example/A in the
ADDITIONAL section will come from a delegation returned by ns2 while the
ANSWER section will come from an authoritative answer returned by ns3.
If the queries to ns2 and ns3 happen at different Unix timestamps,
RRsets cached from the older response will have a different TTL by the
time they are returned to dig, triggering a false positive.
Allow a safety margin of 60 seconds for checks inspecting the ADDITIONAL
section of responses to queries sent with CD=1 to fix the issue. A
safety margin this large is likely overkill, but it is used nevertheless
for consistency with similar safety margins used in other TTL capping
checks.
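A sketch of a check with that margin applied (again with
illustrative names):

    # Allow the ADDITIONAL-section TTL to have decayed by up to 60
    # seconds from its original value of 300.
    ttl=$(awk '$1 == "ns.expiring.example." && $4 == "A" { print $2 }' dig.out.cd1)
    [ "$ttl" -gt 240 ] && [ "$ttl" -le 300 ] || echo "TTL out of range: $ttl"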
(cherry picked from commit 8baf859063)
Commit c032c54dda inadvertently changed
the DNS message section inspected by one of the TTL capping checks from
ADDITIONAL to ANSWER, introducing a discrepancy between that check's
description and its actual meaning. Revert to inspecting the ADDITIONAL
section in the aforementioned check.
(cherry picked from commit a597bd52a6)
Changes introduced by commit 6b8e4d6e69
were incomplete as not all time-sensitive checks were updated to match
revised "nta-lifetime" and "nta-recheck" values. Prevent rare false
positives by updating all NTA-related checks so that they work reliably
with "nta-lifetime 12s;" and "nta-recheck 9s;". Update comments as well
to prevent confusion.
(cherry picked from commit 9a36a1bba3)
During "dlv" system test setup, the "sed" regex used for mangling the
DNSKEY RRset for the "druz" zone does not include the plus sign ("+"),
which may:
- cause the replacement to happen near the end of DNSKEY RDATA, which
can cause the latter to become an invalid Base64 string,
- prevent the replacement from being performed altogether.
Both cases prevent the "dlv" system test from behaving as intended and
may trigger false positives. Add the missing character to the
aforementioned regex to ensure the replacement is always performed on
bytes 10-25 of DNSKEY RDATA.
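An illustration of the class of bug (the expression actually used by
the test differs):

    # Broken: "+" is missing from the bracket expressions, so a "+"
    # occurring within the matched span shifts the replacement or
    # prevents it entirely.
    sed 's|\([0-9A-Za-z/]\{10\}\)[0-9A-Za-z/]\{16\}|\1XXXXXXXXXXXXXXXX|'
    # Fixed: with "+" included, the substitution reliably mangles
    # bytes 10-25 of the Base64-encoded key material.
    sed 's|\([0-9A-Za-z/+]\{10\}\)[0-9A-Za-z/+]\{16\}|\1XXXXXXXXXXXXXXXX|'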
(cherry picked from commit fd13fef299)
The "check key refreshes are resumed after root servers become
available" check may trigger a false positive for the "mkeys" system
test if the second example/TXT query sent by dig is received by ns5 less
than a second after it receives a REFUSED response to the upstream query
it sends to ns1 in order to resolve the first example/TXT query sent by
dig. Since that REFUSED response from ns1 causes ns5 to return a
SERVFAIL answer to dig, example/TXT is added to the SERVFAIL cache,
which is enabled by default with a TTL of 1 second. This in turn may
cause ns5 to return a cached SERVFAIL response to the second example/TXT
query sent by dig, i.e. make ns5 not perform full query processing as
expected by the check.
Since the primary purpose of the check in question is to ensure that
key refreshes are resumed once initially unavailable root servers
become available, the optimal solution appears to be disabling the
SERVFAIL cache for ns5: doing so still allows the check to fulfill
its purpose and is arguably more prudent than always sleeping for 1
second.
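In configuration terms, the fix is a single line in ns5's options
block; setting the SERVFAIL cache TTL to zero disables the cache:

    options {
        // ...
        servfail-ttl 0;  // do not cache SERVFAIL responses
    };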
(cherry picked from commit 7c6bff3c4e)
For consistency between all system tests, add missing setup.sh scripts
for tests which do not have one yet and ensure every setup.sh script
calls its respective clean.sh script.
Temporary files created by a given system test should be removed by its
clean.sh script, not its setup.sh script. Remove redundant "rm"
invocations from setup.sh scripts. Move required "rm" invocations from
setup.sh scripts to their corresponding clean.sh scripts.
If dots are not escaped in the "1.2.3.4" regular expressions used for
checking whether IP address 1.2.3.4 is present in the tested resolver's
answers, a COOKIE that matches such a regular expression will trigger a
false positive for the "resolver" system test. Properly escape dots in
the aforementioned regular expressions to prevent that from happening.
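For example:

    # Broken: unescaped dots match any byte, so an unrelated value
    # such as a COOKIE containing "e1b2c3d4" also satisfies the regex.
    grep '1.2.3.4' dig.out > /dev/null
    # Fixed: only the literal IP address matches.
    grep '1\.2\.3\.4' dig.out > /dev/null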
(cherry picked from commit 70ae48e5cb)
Including $SYSTEMTESTTOP/conf.sh from a system test's clean.sh
script is not needed for anything, yet it causes an error message to
be printed when "./configure" is run, because the latter invokes
"make clean" at the end. Remove the offending line to prevent the
error from occurring.
(cherry picked from commit 6602848460)
For all system tests utilizing named instances, call clean.sh from each
test's setup.sh script in a consistent way to make sure running the same
system test multiple times using run.sh does not trigger false positives
caused by stale files created by previous runs.
Ideally we would just call clean.sh from run.sh, but that would break
some quirky system tests like "rpz" or "rpzrecurse" and being consistent
for the time being does not hurt.
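The consistent pattern is for every setup.sh to start from a clean
slate, roughly:

    #!/bin/sh
    SYSTEMTESTTOP=..
    . $SYSTEMTESTTOP/conf.sh
    # Remove artifacts left behind by any previous run first.
    $SHELL clean.sh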
(cherry picked from commit a077a3ae8a)
These tests check that a key with an unsupported algorithm in
managed-keys is ignored and that, when an algorithm rollover to an
unsupported algorithm is observed, the new key is ignored as well.
When a mirror zone is verified, the 'ignore_kskflag' argument passed to
dns_zoneverify_dnssec() is set to false. This means that in order for
its verification to succeed, a mirror zone needs to have at least one
key with the SEP bit set configured as a trust anchor. This brings no
security benefit and prevents zones signed only using keys without the
SEP bit set from being mirrored, so change the value of the
'ignore_kskflag' argument passed to dns_zoneverify_dnssec() to true.
The "mirror" system test checks whether log messages announcing a mirror
zone coming into effect are emitted properly. However, the helper
functions responsible for waiting for zone transfers and zone loading to
complete do not wait for these exact log messages, but rather for other
ones preceding them, which introduces a possibility of false positives.
This problem cannot be addressed by just changing the log message to
look for because the test still needs to discern between transferring a
zone and loading a zone.
Add two new log messages at debug level 99 (which is what named
instances used in system tests are configured with) that are to be
emitted after the log messages announcing a mirror zone coming into
effect. Tweak the aforementioned helper functions to only return once
the log messages they originally looked for are followed by the newly
added log messages. This reliably prevents races when looking for
"mirror zone is now in use" log messages and also enables a workaround
previously put into place in the "mirror" system test to be reverted.
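A sketch of one of the tightened helpers (the transfer completion
string follows named's usual log format; the trailing marker message
is hypothetical, standing in for the newly added debug-level one):

    wait_for_transfer() {
        zone=$1
        for i in 1 2 3 4 5 6 7 8 9 10; do
            # The debug-level marker is only logged after the "mirror
            # zone is now in use" announcement, so finding both
            # strings guarantees the announcement has been emitted.
            if grep "transfer of '$zone/IN'.*Transfer status: success" ns3/named.run > /dev/null &&
               grep "$zone/IN.*mirror zone fully activated" ns3/named.run > /dev/null
            then
                return 0
            fi
            sleep 1
        done
        echo "timed out waiting for transfer of zone $zone"
        return 1
    }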
In the "mirror" system test, ns3 periodically sends trust anchor
telemetry queries to ns1 and ns2. It may thus happen that for some
non-recursive queries for names inside mirror zones which are not yet
loaded, ns3 will be able to synthesize a negative answer from the cached
records it obtained from trust anchor telemetry responses. In such
cases, NXDOMAIN responses will be sent with the root zone SOA in the
AUTHORITY section. Since the root zone used in the "mirror" system test
has the same serial number as ns2/verify.db.in and zone verification
checks look for the specified serial numbers anywhere in the answer, the
test could be broken if different zone names were used.
The +noauth dig option could be used to address this weakness, but that
would prevent entire responses from being stored for later inspection,
which in turn would hamper troubleshooting test failures. Instead, use
a different serial number for ns2/verify.db.in than for any other zone
used in the "mirror" system test and check the number of records in the
ANSWER section of each response.
Due to the way the "mirror" system test is set up, it is impossible for
the "verify-unsigned" and "verify-untrusted" zones to contain any serial
number other than the original one present in ns2/verify.db.in. Thus,
using the presence of a different serial number in the SOA records
of these zones as an indicator of problems with mirror zone
verification is wrong. Look for the original zone serial number
instead, as that is the
one that will be returned by ns3 if one of the aforementioned zones is
successfully verified.