Automated Tests

The "test" sub-directory contains a collection of test scripts which can be run by a NST developer to help assure that a new build is behaving properly.

The design of the test framework is as follows:

  • Tests are run from the development machine against a remote probe (ssh is used to transfer and run test scripts).
  • Tests may be DESTRUCTIVE to the target machine! Run tests against a NST system BEFORE deployment or final installation.
  • The automated tests are good baseline checks, but they are not exhaustive. They are useful for determining whether the current development environment is ready for human testing, but they do not replace human testing.
  • Running one, many, or all tests is a simple process.
  • Creating tests is a simple process.
  • Automated tests do not need to worry about missing libraries, unknown owners, or broken symbolic links. There are system-wide tests which already perform these checks.

Running Tests

To run tests, you must have the following two systems at your disposal:

  • A Fedora-based NST development system with the NST source code checked out and configured (the same system used to build the NST ISO image).
  • A running NST system booted from either the NST ISO image OR a hard disk installation of the NST. ***WARNING*** The tests are run against this system and will change the state/configuration of the system - you will likely need to reboot, re-configure, and/or re-install after running tests against this NST system.

Hanging Tests

If a test hangs, you may need to ssh to the NST system being tested and kill the hanging processes by hand.

The nessus test has sometimes triggered this situation.
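
For example, one might locate and kill the stuck processes by hand from the development system (a rough sketch only; adjust the IP address and the process name, shown here as nessus, to match your situation):

ssh root@192.168.22.13
ps -ef | grep -i nessus        # locate the hanging test processes
kill <PID>                     # substitute the process ID; use "kill -9" if it refuses to exit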

Running The Full "probe-check"

To run the full test suite against a running NST system, use the "probe-check" make target on the development system in the top level directory as shown below (change the IP address shown to the IP address of the NST system to be checked):

make HOST=192.168.22.13 probe-check
[root@localhost nst]# make HOST=192.168.22.13 probe-check

   ... All testing is done

       - Runs "ldd" on files looking for missing modules.
       - Checks if there are any obvious ownership issues.
       - Checks for symbolic links which point nowhere.
       - Runs each of the test scripts under the "test" directory.

       Takes a long time to complete ...

[root@localhost nst]# less -R probe-check.log

   ... By passing the "-R" option to less, you should
       see color while reviewing the log file ...

[root@localhost nst]#

Running All Tests

To run all of the tests found under the: "test" sub-directory without running the ldd, ownership, and symbolic link tests, use the "test" make target on the development system as shown below (change the IP address shown to the IP address of the NST system to be checked):

make HOST=192.168.22.13 test
[root@localhost nst]# make HOST=192.168.22.13 test

   ... Runs each of the test scripts under the "test" directory.

       Takes a long time to complete ...

[root@localhost nst]# less -R test.log

   ... By passing the "-R" option to less, you should
       see color while reviewing the log file ...

[root@localhost nst]#

Running A Single Test

To quickly run a specific test found under the: "test" sub-directory, include the TEST=NAME option when invoking the "test" make target (change the IP address shown to the IP address of the NST system to be checked):

make HOST=192.168.22.13 TEST=mbrowse test
[root@localhost nst]# make HOST=192.168.22.13 TEST=mbrowse test

Running test: "mbrowse" on: "192.168.22.13"

      Script: "/root/nst/tmp/test/mbrowse/runtest"
         Log: "/root/nst/tmp/test/mbrowse/runtest.log"

  Locating: "mbrowse" ........................................ [  OK  ]
  Verify "mbrowse -help" returns expected version ............ [  OK  ]
  Success! All 2 tests passed ................................ [  OK  ]

SUCCESS!    All of the tests passed - congratulations.

[root@localhost nst]#

Running Multiple Tests

To quickly run a specific set of tests found under the: "test" sub-directory, include the TEST="NAME0 NAME1 ..." option when invoking the "test" make target (change the IP address shown to the IP address of the NST system to be checked):

make HOST=192.168.22.13 TEST="getipaddr mbrowse nfs" test
[root@localhost nst]# make HOST=192.168.22.13 TEST="getipaddr mbrowse nfs" test

Running test: "getipaddr" on: "192.168.22.13"

      Script: "/root/nst/tmp/test/getipaddr/runtest"
         Log: "/root/nst/tmp/test/getipaddr/runtest.log"

  Locating: "getipaddr" ...................................... [  OK  ]
  Locating: "wget" ........................................... [  OK  ]
  Getting public IP address .................................. [  OK  ]
  Checking: "http://nst.sourceforge.net/nst/cgi-bin/ip.cgi" .. [  OK  ]
  Checking: "http://www.networksecuritytoolkit.org/nst/cgi-bin [  OK  ]
  Checking: "http://whatismyip.org/" ......................... [  OK  ]
  Success! All 6 tests passed ................................ [  OK  ]

Running test: "mbrowse" on: "192.168.22.13"

      Script: "/root/nst/tmp/test/mbrowse/runtest"
         Log: "/root/nst/tmp/test/mbrowse/runtest.log"

  Locating: "mbrowse" ........................................ [  OK  ]
  Verify "mbrowse -help" returns expected version ............ [  OK  ]
  Success! All 2 tests passed ................................ [  OK  ]

Running test: "nfs" on: "192.168.22.13"

      Script: "/root/nst/tmp/test/nfs/runtest"
         Log: "/root/nst/tmp/test/nfs/runtest.log"

  Locating: "mount" .......................................... [  OK  ]
  Locating: "service" ........................................ [  OK  ]
  Locating: "showmount" ...................................... [  OK  ]
  Locating: "sleep" .......................................... [  OK  ]
  Locating: "sync" ........................................... [  OK  ]
  Locating: "umount" ......................................... [  OK  ]
  Saving: "/etc/exports" ..................................... [  OK  ]
  Installing custom: "/etc/exports" .......................... [  OK  ]
  Starting service: rpcbind .................................. [  OK  ]
  Starting service: nfslock .................................. [  OK  ]
  Starting service: rpcidmapd ................................ [  OK  ]
  Starting service: rstatd ................................... [  OK  ]
  Starting service: nfs ...................................... [  OK  ]
  Testing /usr/local/sbin/showmount output ................... [  OK  ]
  Mounting exported filesystem to: "/root/runtest/nfs/nfs" ... [  OK  ]
  Verifying that we can see file on NFS mount ................ [  OK  ]
  Verify that we see removal of file on NFS mount ............ [  OK  ]
  Umount of: "/root/runtest/nfs/nfs" ......................... [  OK  ]
  Restoring: "/etc/exports" .................................. [  OK  ]
  Stopping serivce: nfs ...................................... [  OK  ]
  Stopping serivce: rstatd ................................... [  OK  ]
  Stopping serivce: rpcidmapd ................................ [  OK  ]
  Stopping serivce: nfslock .................................. [  OK  ]
  Stopping serivce: rpcbind .................................. [  OK  ]
  Success! All 24 tests passed ............................... [  OK  ]

SUCCESS!    All of the tests passed - congratulations.

[root@localhost nst]#

When Tests Fail

Should a test fail, it is often desirable to dig a bit deeper into the log files produced by running the test to see what happened.

Log Files Left On The NST System

When each test is run, a temporary directory of the form: "/root/runtest/TEST_NAME" is created on the NST system.

For example, after running the getipaddr test, one should find the: "/root/runtest/getipaddr" directory on the NST system. At a minimum, one should find the following files under this directory:

runtest.log
Verbose output from the running of the test.
runtest
The fully formed test script (supporting bash functions joined with test.bash).
test.bash
The source test script.

In addition to the above files, one may also find temporary files generated by running the test under this directory. For example, the clamav test makes requests to the NST WUI and saves the HTML responses to temporary files.

NOTE: These files will only be left on the NST system if the test fails.
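
For example, one might inspect these files by hand from the development system (a rough sketch; this assumes the NST system's IP address is 192.168.22.13 and the failing test was getipaddr):

ssh root@192.168.22.13 ls -l /root/runtest/getipaddr
ssh root@192.168.22.13 cat /root/runtest/getipaddr/runtest.log | less -R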

Log Files Left On The Development System

Regardless of whether a test passes or fails, all of the files created under the "/root/runtest/TEST_NAME" directory will be transferred back to the: "tmp/test/TEST_NAME" directory on the development system. NOTE: The "tmp" directory may be at a different location if you did not use the default configuration options when configuring your development system.

For example, after running the getipaddr test, one should find the: "tmp/test/getipaddr" directory on the development system. At a minimum, one should find the following files under this directory:

make HOST=192.168.22.13 TEST="getipaddr" test
[root@localhost nst]# make HOST=192.168.22.13 TEST="getipaddr" test

Running test: "getipaddr" on: "192.168.22.13"

      Script: "/root/nst/tmp/test/getipaddr/runtest"
         Log: "/root/nst/tmp/test/getipaddr/runtest.log"

  Locating: "getipaddr" ...................................... [  OK  ]
  Locating: "wget" ........................................... [  OK  ]
  Getting public IP address .................................. [  OK  ]
  Checking: "http://nst.sourceforge.net/nst/cgi-bin/ip.cgi" .. [  OK  ]
  Checking: "http://www.networksecuritytoolkit.org/nst/cgi-bin [  OK  ]
  Checking: "http://whatismyip.org/" ......................... [  OK  ]
  Success! All 6 tests passed ................................ [  OK  ]

SUCCESS!    All of the tests passed - congratulations.

[root@localhost nst]# ls tmp/test/getipaddr
runtest  runtest.log  state-after.txt  state-before.txt  test.bash
[root@localhost nst]#

Creating Tests

Creating new automated tests is fairly easy once one gets started. The most difficult part is learning the core test functions found in: test/include/test_functions.bash.
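
A quick way to get a feel for what is available is to page through that file on the development system and list its function definitions (the grep pattern below is just one rough way to do so, assuming the usual "name() {" bash style):

less test/include/test_functions.bash
grep -n "() {" test/include/test_functions.bash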


Files Making Up The Test

To create a new test:

  • Create a directory under the: "test" directory with a name related to the test to be performed. For example, "test/snort" is the directory containing the test for snort.
  • Create a bash script under the new directory with the name: test.bash. For example, "test/snort/test.bash" is the bash script used to test snort.
  • Optionally, place any other files your test may require under the same directory. The "test/Image_Graph" test is an example of a script having additional files used during the test.

NOTE: You may use any language available on the NST system when creating tests (bash, PHP, Python, ...). HOWEVER, you MUST create the file: test.bash as the initial launch point for your test.
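
For example, the skeleton for a hypothetical new test named "mytool" (not an actual NST test) might be created and run like this (change the IP address to that of the NST system being tested):

mkdir test/mytool
vi test/mytool/test.bash                   # write the test script (see the examples below)
make HOST=192.168.22.13 TEST=mytool test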

Functions Which Can Be Used

If one examines the various "test.bash" scripts for existing tests, one will notice that they make use of bash functions not present in the "test.bash" file itself. This is a design decision which allows us to keep the "test.bash" scripts small, yet provide access to a vast number of bash functions and variables used throughout the NST.

It works in the following manner:

  • The "test/include/run_test.bash" script is used during the "make test" process and takes a "test.bash" file and prepends a large number of bash files to form a "runtest" script.
  • The newly formed "runtest" script, along with the other supporting test files, is transferred to the NST system being tested.
  • The "runtest" script is then executed.

The bash files prepended include:

"test/include/test_functions.bash"
This file contains the core set of functions and variables used when creating test scripts. It is a fairly well-documented file, and one should review its comments prior to creating one's own test cases.
"config/config.sh"
This file is created when the top level "configure" script is run and defines many variables that can be useful when creating test cases.
"src/include/functions/*.bash"
All of the functions defined in the bash files under this directory are available.
"config/TEST_NAME.sh"
If the name of the test matches one of the entries in the "/include/data/packages.tsv" file, then the contents of its generated configuration file will also be available. For example, the test for mbrowse can assume that "config/mbrowse.sh" will be included and can use variables like: ${mbrowse_VER}.
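
Conceptually, the assembly performed by "test/include/run_test.bash" amounts to concatenating these files in front of the test script. The following is only a rough sketch of that idea (not the actual implementation), using the mbrowse test as an example:

cat test/include/test_functions.bash \
    config/config.sh \
    src/include/functions/*.bash \
    config/mbrowse.sh \
    test/mbrowse/test.bash > tmp/test/mbrowse/runtest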


Simple Example

The following shows the entire script making up the original mbrowse test (test/mbrowse/test.bash).

# ${Id}
#
# Make sure mbrowse is installed

test_require GREP grep;
test_require MBROWSE mbrowse;

EXPECT_TEXT="^mbrowse ${mbrowse_VER}";

test_start "Verify \"mbrowse -help\" returns expected version";
${MBROWSE} -help 2>&1 | ${GREP} "${EXPECT_TEXT}" &>/dev/null;
test_passed_or_exit "${PIPESTATUS[1]}" \
  "Failed to find \"${EXPECT_TEXT}\" in \"mbrowse -help\" output";

# End of all tests

test_exit;

The above is an example of a very simple test and performs the following checks:

  • It runs: "test_require MBROWSE mbrowse" to verify that the mbrowse executable can be found on the NST system (if successful, it sets the variable MBROWSE to the location of the executable).
  • It then checks to see if the short help output from the mbrowse executable contains the expected version number.
  • Finally, the "test_exit" function is called for a summary report.

Complex Example

The clamav test (see: "test/clamav/test.bash") is an example of a more complex test bash script.

  • It uses the: "test_require" function to make sure the necessary executables are present on the test system.
  • It uses the: "test_require_wui" function to make sure the NST WUI is up and running.
  • It defines a couple of helper bash functions.
  • It uses the: "test_start" and "test_results" functions to help report on the success/failure of running tests.
  • It redirects output to the: "test_log" function (this will appear in the: "runtest.log" output file).
  • It uses the: "${PIPESTATUS[0]}" array to pick out the appropriate exit code when multiple commands are piped together.
  • It uses the: "test_wui_get" function to "drive" the NST WUI. By using this function, the test script is able to: Perform a virus scan, check scan results, download an infected file, peform another scan and verify the virus is detected.

The following shows the 2007-12-31 revision of: "test/clamav/test.bash":


# ${Id}
#
# Run some tests on the clamav package and the NST WUI front end to it.

test_require ELINKS elinks;
test_require GREP grep;
test_require LN ln;
test_require MKDIR mkdir;
test_require RM rm;
test_require SLEEP sleep;

# Make sure WUI is up and running for this test
test_require_wui;

# Base URL to access clamav via NST WUI
CLAMAV_URL="/nstwui/cgi-bin/system/clamscan.cgi";

# restore_to_ship_state
#
#  Restores clamav to what ships on CD (removes any virus updates)

restore_to_ship_state() {
  local CDIR="/var/lib/clamav";

  if [ -L "${CDIR}" ]; then
    test_start "Restore to shipping state";
    (${RM} -fr "${CDIR}" && ${LN} -s "/usr/local${CDIR}" "${CDIR}") 2>&1 | \
      test_log;
    test_results ${PIPESTATUS[0]};
  fi

  if [ -d "/tmp/infected" ]; then
    test_start "Removing: /tmp/infected";
    ${RM} -fr "/tmp/infected"; 
    test_results ${PIPESTATUS[0]};
  fi
}


# perform_virus_scan ECNT
#
# Function to perform a clamav scan on the infected directory and then remove
# the scan results afterwards using the NST WUI
#
# ECNT - Number of expected viruses

perform_virus_scan() {
  local ECNT="${1}";
  local SCAN_DIR="/tmp/infected";
  local START_SCAN_URL="${CLAMAV_URL}?action=startScan&path=${SCAN_DIR}&infected=nothing";
  local START_SCAN_FILE="${TEST_DIR}/start.html";

  # Create scan directory if it doesn't exist
  [ -d "${SCAN_DIR}" ] || ${MKDIR} -p "${SCAN_DIR}";

  test_start "Starting virus scan on: ${SCAN_DIR}";
  test_wui_get "${START_SCAN_URL}" "${START_SCAN_FILE}";
  if [ ! -s "${START_SCAN_FILE}" ]; then
    test_failed;
    return 1;
  fi

  test_passed;

  test_start "Verify WUI returns \"Starting up a.*Clam AntiVirus.*\"";
  ${ELINKS} < "${START_SCAN_FILE}" | \
    ${GREP} "Starting up a.*Clam.*AntiVirus" &> /dev/null;
  test_results "${PIPESTATUS[1]}";

  # Now, wait for scan to complete
  local SCAN_PROGRESS_URL="${CLAMAV_URL}";
  local SCAN_PROGRESS_FILE="${TEST_DIR}/progress.html";

  local i;
  local MAX_TRIES=20;
  for ((i=0; i <= ${MAX_TRIES}; i++)); do
    TEST_WAIT=30;
    test_start "Waiting ${TEST_WAIT} more seconds for results";
    ${SLEEP} ${TEST_WAIT};
    test_passed;

    test_start "Requesting scan progress page";
    test_wui_get "${SCAN_PROGRESS_URL}" "${SCAN_PROGRESS_FILE}";
    if [ ! -s "${SCAN_PROGRESS_FILE}" ]; then
      test_failed;
      return 1;
    fi
    test_passed;

    # "Remove All Scan Results" button doesn't show up until scan completes
    ${ELINKS} < "${SCAN_PROGRESS_FILE}" | \
      ${GREP} "Remove All Scan Results" "${SCAN_PROGRESS_FILE}" | \
      test_log;
    if [ "${PIPESTATUS[1]}" == "0" ]; then
      test_start "Scan completed";
      test_passed;
      break;
    fi

    if ((i == MAX_TRIES)); then
      test_start "Last attempt";
      test_failed;
      return 1;
    fi
  done
 
  # Now check the report
  local SCAN_RESULTS_URL="${CLAMAV_URL}";
  local SCAN_RESULTS_FILE="${TEST_DIR}/results.html";
  test_start "Getting final results file";
  test_wui_get "${SCAN_RESULTS_URL}" "${SCAN_RESULTS_FILE}";
  if [ ! -s "${SCAN_RESULTS_FILE}" ]; then
    test_failed;
    return 1;
  fi
  test_passed;

  # Check for some expected results in report file
  for i in \
      "Remove All Scan Results" \
      "${SCAN_DIR}.*${ECNT}.*Results.*Remove"; do

    test_start "Searching report for: \"${i}\"";
    ${ELINKS} < "${SCAN_RESULTS_FILE}" | \
      ${GREP} "${i}" 2>&1 | \
      test_log;
    test_results "${PIPESTATUS[1]}";

  done

  # Now remove all reports
  local SCAN_REMOVE_URL="${CLAMAV_URL}?action=removeResults&removeId=all&path=/";
  local SCAN_REMOVE_FILE="${TEST_DIR}/remove.html";
  test_start "Request removal of ALL results";
  test_wui_get "${SCAN_REMOVE_URL}" "${SCAN_REMOVE_FILE}";
  if [ ! -s "${SCAN_REMOVE_FILE}" ]; then
    test_failed;
    return 1;
  fi
  test_passed;

  test_start "Verifying removal was OK";
  ${ELINKS} < "${SCAN_REMOVE_FILE}" | \
    ${GREP} "Removing the.*entire" &>/dev/null;
  test_results "${PIPESTATUS[1]}";

}

# Scan expects to start as if just booted from ISO
restore_to_ship_state;

# Initially we should not see ANY viruses
perform_virus_scan 0;

#
# Use NST WUI to download/install infected file
#

DOWNLOAD_INFECTED_URL="${CLAMAV_URL}?action=fetch_virus&infected_file_url=http://www.eicar.org/download/eicar_com.zip";
DOWNLOAD_INFECTED_FILE="${TEST_DIR}/download-virus.html";
test_start "Downloading infected file";
test_wui_get "${DOWNLOAD_INFECTED_URL}" "${DOWNLOAD_INFECTED_FILE}";
if [ ! -s "${DOWNLOAD_INFECTED_FILE}" ]; then
  test_failed;
  return 1;
fi
test_passed;

test_start "Verifying download was OK";
${ELINKS} < "${DOWNLOAD_INFECTED_FILE}" | \
  ${GREP} "/tmp/infected/eicar_com.zip.*saved" &>/dev/null;
test_results "${PIPESTATUS[1]}";

# After download, we should see 1 virus
perform_virus_scan 1;

# Restore to ship state
restore_to_ship_state;

# End of all tests

test_exit;