Copyright (C) Internet Systems Consortium, Inc. ("ISC")

SPDX-License-Identifier: MPL-2.0

This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0.  If a copy of the MPL was not distributed with this
file, you can obtain one at https://mozilla.org/MPL/2.0/.

See the COPYRIGHT file distributed with this work for additional
information regarding copyright ownership.

Introduction
===
This directory holds a simple test environment for running bind9 system tests
involving multiple name servers.

With the exception of "common" (which holds configuration information common to
multiple tests), each directory holds a set of scripts and configuration
files to test different parts of BIND.  The directories are named for the
aspect of BIND they test, for example:

  dnssec/       DNSSEC tests
  forward/      Forwarding tests
  glue/         Glue handling tests

etc.

Typically each set of tests sets up 2-5 name servers and then performs one or
more tests against them.  Within the test subdirectory, each name server has a
separate subdirectory containing its configuration data.  These subdirectories
are named "nsN" or "ansN" (where N is a number between 1 and 8, e.g. ns1, ans2
etc.)

The tests are completely self-contained and do not require access to the real
DNS.  Generally, one of the test servers (usually ns1) is set up as a root
nameserver and is listed in the hints file of the others.


Preparing to Run the Tests
===
To enable all servers to run on the same machine, they bind to separate virtual
IP addresses on the loopback interface.  ns1 runs on 10.53.0.1, ns2 on
10.53.0.2, etc.  Before running any tests, you must set up these addresses by
running the command

    sh ifconfig.sh up

as root.  The interfaces can be removed by executing the command:

    sh ifconfig.sh down

... also as root.

The servers use unprivileged ports (above 1024) instead of the usual port 53,
so they can be run without root privileges once the interfaces have been set
up.


Note for MacOS Users
---
If you wish to make the interfaces survive across reboots, copy
org.isc.bind.system and org.isc.bind.system.plist to /Library/LaunchDaemons
then run

    launchctl load /Library/LaunchDaemons/org.isc.bind.system.plist

... as root.


Running the System Tests
===

Running an Individual Test
---
The tests can be run individually using the following command:

    sh run.sh [flags] <test-name> [<test-arguments>]

e.g.

    sh run.sh [flags] notify

Optional flags are:

    -k              Keep servers running after the test completes.  Each test
                    usually starts a number of nameservers, either instances
                    of the "named" being tested, or custom servers (written in
                    Python or Perl) that feature test-specific behavior.  The
                    servers are automatically started before the test is run
                    and stopped after it ends.  This flag leaves them running
                    at the end of the test, so that additional queries can be
                    sent by hand.  To stop the servers afterwards, use the
                    command "sh stop.sh <test-name>".

    -n              Noclean - do not remove the output files if the test
                    completes successfully.  By default, files created by the
                    test are deleted if it passes;  they are not deleted if the
                    test fails.

    -p <number>     Sets the range of ports used by the test.  A block of 100
                    ports is available for each test, the number given to the
                    "-p" switch being the number of the start of that block
                    (e.g. "-p 7900" will mean that the test is able to use
                    ports 7900 through 7999).  If not specified, the test will
                    have ports 5000 to 5099 available to it.
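
For example, the following command runs the "notify" test using ports 7900 to
7999 and leaves its servers running once the test completes:

    sh run.sh -k -p 7900 notify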

Arguments are:

    test-name       Mandatory.  The name of the test, which is the name of the
                    subdirectory in bin/tests/system holding the test files.

    test-arguments  Optional arguments that are passed to each of the test's
                    scripts.


Running All The System Tests
---
To run all the system tests, enter the command:

    make [-j numproc] test

The optional "numproc" argument specifies the maximum number of tests that can
run in parallel.  The default is 1, which means that all of the tests run
sequentially.  If greater than 1, up to "numproc" tests will run simultaneously,
new tests being started as tests finish.  Each test will get a unique set of
ports, so there is no danger of tests interfering with one another.  Parallel
running will reduce the total time taken to run the BIND system tests, but
means that the output sent to the screen by the different tests will be
interleaved.

In this case, retention of the output files after a test completes successfully
is specified by setting the environment variable SYSTEMTEST_NO_CLEAN to 1 prior
to running make, e.g.

    SYSTEMTEST_NO_CLEAN=1 make [-j numproc] test

while setting the environment variable SYSTEMTEST_FORCE_COLOR to 1 forces system
test output to be printed in color.


Format of Test Output
---
All output from the system tests is in the form of lines with the following
structure:

    <letter>:<test-name>:<message> [(<number>)]

e.g.

    I:catz:checking that dom1.example is not served by primary (1)

The meanings of the fields are as follows:

<letter>
This indicates the type of message.  This is one of:

    S   Start of the test
    A   Start of test (retained for backwards compatibility)
    T   Start of test (retained for backwards compatibility)
    E   End of the test
    I   Information.  A test will typically output many of these messages
        during its run, indicating test progress.  Note that such a message may
        be of the form "I:testname:failed", indicating that a sub-test has
        failed.
    R   Result.  Each test will result in one such message, which is of the
        form:

                R:<test-name>:<result>

        where <result> is one of:

            PASS        The test passed
            FAIL        The test failed
            SKIPPED     The test was not run, usually because some
                        prerequisites required to run the test are missing.

<test-name>
This is the name of the test from which the message emanated, which is also the
name of the subdirectory holding the test files.

<message>
This is text output by the test during its execution.

(<number>)
If present, this will correlate with a file created by the test.  The tests
execute commands and route the output of each command to a file.  The name of
this file depends on the command and the test, but will usually be of the form:

    <command>.out.<suffix><number>

e.g. nsupdate.out.test28, dig.out.q3.  This aids diagnosis of problems by
allowing the output that caused the problem message to be identified.


Re-Running the Tests
---
If there is a requirement to re-run a test (or the entire test suite), the
files produced by the tests should be deleted first.  Normally, these files are
deleted if the test succeeds but are retained on error.  The run.sh script
automatically calls a given test's clean.sh script before invoking its setup.sh
script.

Deletion of the files produced by the set of tests (e.g. after the execution of
make) can be carried out using the command:

    sh cleanall.sh

or

    make testclean

(Note that the Makefile has two other targets for cleaning up files: "clean"
will delete all the files produced by the tests, as well as the object and
executable files used by the tests.  "distclean" does all the work of "clean"
as well as deleting configuration files produced by "configure".)


Developer Notes
===
This section is intended for developers writing new tests.


Overview
---
As noted above, each test is in a separate directory.  To interact with the
test framework, the directories contain the following standard files:

prereq.sh   Run at the beginning to determine whether the test can be run at
            all; if not, we see a R:SKIPPED result.  This file is optional:
            if not present, the test is assumed to have all its prerequisites
            met.

setup.sh    Run after prereq.sh, this sets up the preconditions for the tests.
            Although optional, virtually all tests will require such a file to
            set up the ports they should use for the test.

tests.sh    Runs the actual tests.  This file is mandatory.

clean.sh    Run at the end to clean up temporary files, but only if the test
            completed successfully and the "-n" switch was not passed to
            "run.sh".  Otherwise the temporary files are left in place for
            inspection.

ns<N>       These subdirectories contain test name servers that can be queried
            or can interact with each other.  The value of N indicates the
            address the server listens on: for example, ns2 listens on
            10.53.0.2, and ns4 on 10.53.0.4.  All test servers use an
            unprivileged port, so they don't need to run as root.  These
            servers log at the highest debug level and the log is captured in
            the file "named.run".

ans<N>      Like ns<N>, but these are simple mock name servers implemented in
            Perl or Python.  They are generally programmed to misbehave in
            ways that named would not, so as to exercise named's ability to
            interoperate with badly behaved name servers.
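
For illustration, a hypothetical test directory "mytest" using these files
might be laid out as follows (only tests.sh is strictly mandatory):

    mytest/
        prereq.sh       optional prerequisite check
        setup.sh        builds configuration files from templates
        tests.sh        the tests themselves
        clean.sh        removes files created by the test
        ns1/            a "named" instance listening on 10.53.0.1
        ans2/           a mock server listening on 10.53.0.2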


Port Usage
---
In order for the tests to run in parallel, each test requires a unique set of
ports.  These are specified by the "-p" option passed to "run.sh", which sets
environment variables that the scripts listed above can reference.

The convention used in the system tests is that the number passed is the start
of a range of 100 ports.  The test is free to use the ports as required,
although the first ten ports in the block are named and generally tests use the
named ports for their intended purpose.  The names of the environment variables
are:

    PORT                     Number to be used for the query port.
    CONTROLPORT              Number to be used as the RNDC control port.
    EXTRAPORT1 - EXTRAPORT8  Eight port numbers that can be used as needed.

Two other environment variables are defined:

    LOWPORT                  The lowest port number in the range.
    HIGHPORT                 The highest port number in the range.

Since port ranges usually start on a boundary of 10, the variables are set such
that the last digit of the port number corresponds to the number of the
EXTRAPORTn variable.  For example, if the port range were to start at 5200, the
port assignments would be:

    PORT = 5200
    EXTRAPORT1 = 5201
        :
    EXTRAPORT8 = 5208
    CONTROLPORT = 5209
    LOWPORT = 5200
    HIGHPORT = 5299

When running tests in parallel (i.e. giving a value of "numproc" greater than 1
in the "make" command listed above), it is guaranteed that each
test will get a set of unique port numbers.


Writing a Test
---
The test framework requires up to four shell scripts (listed above) as well as
a number of nameserver instances to run.  Certain expectations are put on each
script:


General
---
1. Each of the four scripts will be invoked with the command

    (cd <test-directory> ; sh <script> [<arguments>])

... so that the working directory when the script starts executing is the test
directory.

2. Arguments can only be passed to the scripts if the test is being run as a
one-off with "run.sh".  In this case, everything on the command line after the
name of the test is passed to each script.  For example, the command:

    sh run.sh -p 12300 mytest -D xyz

... will run "mytest" with a port range of 12300 to 12399.  Each of the
framework scripts provided by the test will be invoked using the remaining
arguments, e.g.:

    (cd mytest ; sh prereq.sh -D xyz)
    (cd mytest ; sh setup.sh -D xyz)
    (cd mytest ; sh tests.sh -D xyz)
    (cd mytest ; sh clean.sh -D xyz)

No arguments will be passed to the test scripts if the test is run as part of
a run of the full test suite (e.g. the tests are started with make).

3. Each script should start with the following line:

    . ../conf.sh

"conf.sh" defines a series of environment variables together with functions
useful for the test scripts.


prereq.sh
---
As noted above, this is optional.  If present, it should check whether specific
software needed to run the test is available and/or whether BIND has been
configured with the appropriate options required.

    * If the software required to run the test is present and the BIND
      configure options are correct, prereq.sh should return with a status code
      of 0.

    * If the software required to run the test is not available and/or BIND
      has not been configured with the appropriate options, prereq.sh should
      return with a status code of 1.

    * If there is some other problem (e.g. prerequisite software is available
      but is not properly configured), a status code of 255 should be returned.
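
For example, a minimal prereq.sh for a test that depends on a helper program
might look like the following sketch (the check for "perl" is purely
illustrative):

    . ../conf.sh

    if ! command -v perl >/dev/null 2>&1; then
        echo_i "perl is not available"
        exit 1
    fi
    exit 0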


setup.sh
---
This is responsible for setting up the configuration files used in the test.

To cope with the varying port number, ports are not hard-coded into
configuration files (or, for that matter, scripts that emulate nameservers).
Instead, setup.sh is responsible for editing the configuration files to set the
port numbers.

To do this, configuration files should be supplied in the form of templates
containing tokens identifying ports.  The tokens have the same name as the
environment variables listed above, but are prefixed and suffixed by the "@"
symbol.  For example, a fragment of a configuration file template might look
like:

    controls {
        inet 10.53.0.1 port @CONTROLPORT@ allow { any; } keys { rndc_key; };
    };

    options {
        query-source address 10.53.0.1;
        notify-source 10.53.0.1;
        transfer-source 10.53.0.1;
        port @PORT@;
        allow-new-zones yes;
    };

setup.sh should copy the template to the desired filename using the
"copy_setports" shell function defined in "conf.sh", i.e.

    copy_setports ns1/named.conf.in ns1/named.conf

This replaces the tokens @PORT@, @CONTROLPORT@, @EXTRAPORT1@ through
@EXTRAPORT8@ with the contents of the environment variables listed above.
setup.sh should do this for all configuration files required when the test
starts.

("setup.sh" should also use this method for replacing the tokens in any Perl or
Python name servers used in the test.)
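
Putting this together, a minimal setup.sh for a test with two "named"
instances might contain no more than the following (assuming the templates
ns1/named.conf.in and ns2/named.conf.in exist):

    . ../conf.sh

    copy_setports ns1/named.conf.in ns1/named.conf
    copy_setports ns2/named.conf.in ns2/named.conf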


tests.sh
---
This is the main test file and the contents depend on the test.  The contents
are completely up to the developer, although most test scripts have a form
similar to the following for each sub-test:

    1. n=`expr $n + 1`
    2. echo_i "prime cache nodata.example ($n)"
    3. ret=0
    4. $DIG -p ${PORT} @10.53.0.1 nodata.example TXT > dig.out.test$n
    5. grep "status: NOERROR" dig.out.test$n > /dev/null || ret=1
    6. grep "ANSWER: 0," dig.out.test$n > /dev/null || ret=1
    7. if [ $ret != 0 ]; then echo_i "failed"; fi
    8. status=`expr $status + $ret`

1.  Increment the test number "n" (initialized to zero at the start of the
    script).

2.  Indicate that the sub-test is about to begin.  Note that "echo_i" instead
    of "echo" is used.  echo_i is a function defined in "conf.sh" which will
    prefix the message with "I:<testname>:", so allowing the output from each
    test to be identified within the output.  The test number is included in
    the message in order to tie the sub-test with its output.

3. Initialize return status.

4 - 6. Carry out the sub-test.  In this case, a nameserver is queried (note
    that the port used is given by the PORT environment variable, which was set
    by the inclusion of the file "conf.sh" at the start of the script).  The
    output is routed to a file whose suffix includes the test number.  The
    response from the server is examined and, in this case, if the required
    string is not found, an error is indicated by setting "ret" to 1.

7.  If the sub-test failed, a message is printed. "echo_i" is used to print
    the message to add the prefix "I:<test-name>:" before it is output.

8.  "status", used to track how many of the sub-tests have failed, is
    incremented accordingly.  The value of "status" determines the status
    returned by "tests.sh", which in turn determines whether the framework
    prints the PASS or FAIL message.

Regardless of the form the script takes, the following rules should be
observed:

a.  Use the environment variables set by conf.sh to determine the ports to use
    for sending and receiving queries.

b.  Use a counter to tag messages and to associate the messages with the output
    files.

c.  Store all output produced by queries/commands into files.  These files
    should be named according to the command that produced them, e.g. "dig"
    output should be stored in a file "dig.out.<suffix>", the suffix being
    related to the value of the counter.

d.  Use "echo_i" to output informational messages.

e.  Retain a count of test failures and return this as the exit status from
    the script.
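
Following rule e, a tests.sh script conventionally ends by reporting the
accumulated failure count and converting it into the exit status, e.g.:

    echo_i "exit status: $status"
    [ $status -eq 0 ] || exit 1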


clean.sh
---
The inverse of "setup.sh", this is invoked by the framework to clean up the
test directory.  It should delete all files that have been created by the test
during its run.
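
For example, a clean.sh matching the tests.sh fragment shown above might
contain (the exact file list depends on what the test creates):

    rm -f dig.out.test*
    rm -f ns*/named.conf ns*/named.run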


Starting Nameservers
---
As noted earlier, a system test will involve a number of nameservers.  These
will be either instances of named, or special servers written in a language
such as Perl or Python.

For the former, the version of "named" being run is that in the "bin/named"
directory in the tree holding the tests (i.e. if "make test" is being run
immediately after "make", the version of "named" used is that just built).  The
configuration files, zone files etc. for these servers are located in
subdirectories of the test directory named "nsN", where N is a small integer.
The latter are special nameservers, mostly used for generating deliberately bad
responses, located in subdirectories named "ansN" (again, N is an integer).
In addition to configuration files, these directories should hold the
appropriate script files as well.

Note that the "N" for a particular test forms a single number space, e.g. if
there is an "ns2" directory, there cannot be an "ans2" directory as well.
Ideally, the directory numbers should start at 1 and work upwards.

When running a test, the servers are started using "start.sh" (which is nothing
more than a wrapper for start.pl).  The options for "start.pl" are documented
in the header for that file, so will not be repeated here.  In summary, when
invoked by "run.sh", start.pl looks for directories named "nsN" or "ansN" in
the test directory and starts the servers it finds there.


"named" Command-Line Options
---
By default, start.pl starts a "named" server with the following options:

    -c named.conf   Specifies the configuration file to use (so by implication,
                    each "nsN" nameserver's configuration file must be called
                    named.conf).

    -d 99           Sets the maximum debugging level.

    -D <name>       The "-D" option sets a string used to identify the
                    nameserver in a process listing.  In this case, the string
                    is the name of the subdirectory.

    -g              Runs the server in the foreground and logs everything to
                    stderr.

    -m record       Turns on the "record" memory usage debugging flag.

    -U 4            Uses four listeners.

    -X named.lock   Acquires a lock on this file in the "nsN" directory, so
                    preventing multiple instances of this named running in this
                    directory (which could possibly interfere with the test).

All output is sent to a file called "named.run" in the nameserver directory.

The options used to start named can be altered.  There are three ways of doing
this.  "start.pl" checks the methods in a specific order: if a check succeeds,
the options are set and any other specification is ignored.  In order, these
are:

1. Specifying options to "start.sh"/"start.pl" after the name of the test
directory, e.g.

    sh start.sh reclimit ns1 -- "-c n.conf -d 43"

(This is only really useful when running tests interactively.)

2. Including a file called "named.args" in the "nsN" directory.  If present,
the contents of the first non-commented, non-blank line of the file are used as
the named command-line arguments.  The rest of the file is ignored.

3. Tweaking the default command line arguments with "-T" options.  This flag is
used to alter the behavior of BIND for testing and is not documented in the
ARM.  The presence of certain files in the "nsN" directory adds flags to
the default command line (the content of the files is irrelevant - it
is only the presence that counts):

    named.noaa       Appends "-T noaa" to the command line, which causes
                     "named" to never set the AA bit in an answer.

    named.dropedns   Adds "-T dropedns" to the command line, which causes
                     "named" to recognise EDNS options in messages, but drop
                     messages containing them.

    named.maxudp1460 Adds "-T maxudp1460" to the command line, setting the
                     maximum UDP size handled by named to 1460.

    named.maxudp512  Adds "-T maxudp512" to the command line, setting the
                     maximum UDP size handled by named to 512.

    named.noedns     Appends "-T noedns" to the command line, which disables
                     recognition of EDNS options in messages.

    named.notcp      Adds "-T notcp", which disables TCP in "named".

    named.nosoa      Appends "-T nosoa" to the command line, which disables
                     the addition of SOA records to negative responses (or to
                     the additional section if the response is triggered by RPZ
                     rewriting).
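
As an illustration of methods 2 and 3, a setup.sh fragment could create these
files as follows (the server directories and options are examples only):

    # Method 2: the first non-commented, non-blank line of named.args
    # becomes the complete command line for this server.
    echo "-c named.conf -d 3 -g -X named.lock" > ns1/named.args

    # Method 3: the mere presence of the file adds "-T notcp" to the
    # default command line.
    touch ns2/named.notcp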

Starting Other Nameservers
---
In contrast to "named", nameservers written in Perl or Python (whose script
file should have the name "ans.pl" or "ans.py" respectively) are started with a
fixed command line.  In essence, the server is given the address and nothing
else.

(This is not strictly true: Python servers are provided with the number of the
query port to use.  Altering the port used by Perl servers currently requires
creating a template file containing the "@PORT@" token, and having "setup.sh"
substitute the actual port being used before the test starts.)
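
For example, if a Perl server is shipped as a template (a hypothetical
"ans2/ans.pl.in" containing the @PORT@ token), setup.sh can generate the
actual script with:

    copy_setports ans2/ans.pl.in ans2/ans.pl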


Stopping Nameservers
---
As might be expected, the test system stops nameservers with the script
"stop.sh", which is little more than a wrapper for "stop.pl".  Like "start.pl",
the options available are listed in the file's header and will not be repeated
here.

In summary though, the nameservers for a given test, if left running by
specifying the "-k" flag to "run.sh" when the test is started, can be stopped
by the command:

    sh stop.sh <test-name> [server]

... where if the server (e.g. "ns1", "ans3") is not specified, all servers
associated with the test are stopped.


Adding a Test to the System Test Suite
---
Once a test has been created, the following files should be edited:

* conf.sh.common The name of the test should be added to the PARALLEL_COMMON
variable.

* Makefile.am The name of the test should be added to the TESTS variable.

(It is likely that a future iteration of the system test suite will remove the
need to edit multiple files to add a test.)


Valgrind
---
When running system tests, named can be run under Valgrind.  The output from
Valgrind is sent to per-process files that can be reviewed after the test has
completed.  To enable this, set the USE_VALGRIND environment variable to
"helgrind" to run the Helgrind tool, or any other value to run the Memcheck
tool.  To use "helgrind" effectively, build BIND with --disable-atomic.
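
For example, either of the following commands runs the full suite under
Valgrind:

    USE_VALGRIND=1 make test           # Memcheck
    USE_VALGRIND=helgrind make test    # Helgrind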


Maintenance Notes
===
This section is aimed at developers maintaining BIND's system test framework.

Notes on Parallel Execution
---
Although execution of an individual test is controlled by "run.sh", which
executes the above shell scripts (and starts the relevant servers) for each
test, the running of all tests in the test suite is controlled by the Makefile.

All system tests are capable of being run in parallel.  For this to work, each
test needs to use a unique set of ports.  To avoid the need to define which
tests use which ports (and so risk port clashes as further tests are added),
the ports are determined by "get_ports.sh", a port broker script which keeps
track of ports given to each individual system test.


Cleaning Up From Tests
---
When a test is run, up to two different types of files are created:

1. Files generated by the test itself, e.g. output from "dig" and "rndc", are
stored in the test directory.

2. Files produced by named which may not be cleaned up if named exits
abnormally, e.g. core files, PID files etc., are stored in the test directory.

If the test fails, all these files are retained.  But if the test succeeds,
they are cleaned up at different times:

1. Files generated by the test itself are cleaned up by the test's own
"clean.sh", which is called from "run.sh".

2. Files that may not be cleaned up if named exits abnormally can be removed
using the "cleanall.sh" script.