4. Test identification

4.1. General information

The tests described in this section trace back to Section Requirements of the DIMS System Requirements v 2.9.0 document, as described in Section Requirements traceability.

4.1.1. Test levels

DIMS components will be tested at four distinct levels.

  1. Unit tests [U]: These are tests of individual software components at the program or library level. They are written primarily by the developers of the software to validate that low-level software elements (e.g., a library or a discrete shell command) perform their functions properly, independent of any other components.
  2. Integration tests [I]: These are tests intended to verify the interfaces between components against the software design. They identify defects in component interfaces before their impact is observed at the system level as random or systemic failures.
  3. Component interface tests [C]: These are checks of how data is processed as it enters and leaves the system. A cryptographic hash of the actual output may be compared against the hash of the expected output to determine when the actual output is malformed or otherwise deviates from expectations. Other interfaces (e.g., web application graphical user interface input/output) may be tested manually through visual inspection by a test user.
  4. System tests [S]: Also known as end-to-end tests, these are tests to determine whether the overall system meets its requirements for general data processing and function. All system components produce test results that are compiled into a single system test report, which can be compared across system test runs to detect differences or to identify specific components that may have failed somewhere within the larger complex system.

The first two levels of tests are performed on a continuous basis during development. The final two levels pertain to the FQT described in this document.
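
To illustrate the hash-comparison approach used in component interface tests (item 3 above), the following sketch compares the SHA-256 digest of actual output against a stored digest of known-good output. The file names and the choice of SHA-256 are illustrative assumptions, not part of the DIMS specification.

    import hashlib
    from pathlib import Path

    def sha256_of(path):
        """Return the hex SHA-256 digest of a file's contents."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def output_matches_expected(actual_path, expected_digest_path):
        """Compare the digest of actual output against a stored known-good digest."""
        expected = Path(expected_digest_path).read_text().strip()
        return sha256_of(actual_path) == expected

    # Example (hypothetical file names):
    # if not output_matches_expected("run.out", "run.out.sha256"):
    #     print("component interface check failed: output deviates from expectations")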

Note

These test levels are to be identified in test related code and data using the following identifiers:

  • unit [U]
  • integration [I]
  • component [C]
  • system [S]

4.1.2. Test classes

We will apply one or more of the following classes of tests to DIMS components:

  1. Expected value [EV]: values from the expected classes of the input domain will be used to test nominal performance
  2. Simulated data [SD]: simulated data for nominal and extreme operational conditions will be used to support error detection, recovery and reporting
  3. Erroneous input [EI]: sample values known to be erroneous will be used to test error detection, recovery and reporting
  4. Stress [ST]: maximum capacity of the input domain, including concurrent execution of multiple processes, will be used to test external interfaces, error handling, and size and execution time
  5. Timing [TT]: wall clock time, CPU time and I/O time will be recorded
  6. Desk check [DC]: both code and output will be manually inspected and analyzed

Note

These test classes are to be identified in test related code and data using the following identifiers:

  • expected_value [EV]
  • simulated_data [SD]
  • erroneous_input [EI]
  • stress [ST]
  • timing [TT]
  • desk_check [DC]

4.1.3. Qualification Methods

Five qualification methods [1] will be used in testing to establish conformance with requirements, as described in this section.

  1. Inspection: Visual examination, review of descriptive documentation, and comparison of the actual characteristics with predetermined criteria.
  2. Demonstration: Exercise of a sample of observable functional operations. This method is appropriate for demonstrating the successful integration, high-level functionality, and connectivity provided by the overall system.
  3. Manual Test: Manual tests will be performed when automated tests are not feasible.
  4. Automated Test: When possible, test procedures will be automated.
  5. Analysis: Technical evaluation, processing, review, or study of accumulated data.

Note

These qualification methods are to be identified in test related code and data using the following identifiers:

  • inspection
  • demonstration
  • manual_test
  • automated_test
  • analysis
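
As one possible tagging convention (a sketch only, assuming pytest is used for the automated tests; the project may record these identifiers differently), the level, class, and qualification method identifiers above could be attached to test code as markers so that results can be filtered and reported by identifier:

    import pytest

    # Hypothetical convention: tag each test with its level, class, and
    # qualification method using the identifiers defined above.
    pytestmark = [pytest.mark.component, pytest.mark.automated_test]

    @pytest.mark.expected_value
    def test_attribute_store_roundtrip():
        """Component-level, expected-value check of a simple store/retrieve cycle."""
        stored = {"user": "alice", "tlp": "amber"}   # illustrative test data
        retrieved = dict(stored)                     # stand-in for a real store call
        assert retrieved == stored

    @pytest.mark.erroneous_input
    def test_attribute_store_rejects_bad_input():
        """Component-level, erroneous-input check: malformed records are rejected."""
        with pytest.raises(ValueError):
            raise ValueError("malformed record")     # stand-in for real validation

Tests tagged this way can be selected by identifier (e.g., pytest -m "component and expected_value") and the markers carried through into the Test Report.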

4.2. General test conditions

4.2.1. Data recording, reduction, and analysis

Test results from each test will be stored and indexed so as to be retrievable and post-processed for two primary reasons:

  1. To be able to compare Test A to Test B and determine the differences in results (e.g., to identify regression errors, to identify site-specific differences that were not anticipated during development, or to uncover latent bugs related to services that are not managed properly and may not come back up after a crash or other failure condition).
  2. To be able to produce reStructuredText format files that can be inserted into a directory hierarchy for the Test Report document that can then be rendered using Sphinx to produce a deliverable HTML and/or PDF version.

This will allow developers to test code releases before they are pushed to “production” deployments, and for involved stakeholders doing independent field testing to generate test reports that can be sent back to the DIMS development team for debugging and code fixes.
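
A minimal sketch of this post-processing, assuming test results are serialized as JSON (the file layout and field names are illustrative assumptions): it reports the differences between two result sets and emits a reStructuredText fragment suitable for inclusion in the Sphinx-built Test Report.

    import json

    def load_results(path):
        """Load a results file of the form {"test_id": "PASS" | "FAIL", ...}."""
        with open(path) as f:
            return json.load(f)

    def diff_results(a, b):
        """Return test IDs whose status differs between two runs (e.g., regressions)."""
        return {t: (a.get(t), b.get(t))
                for t in set(a) | set(b)
                if a.get(t) != b.get(t)}

    def to_rst(results, title="Test Results"):
        """Render one run as a reStructuredText section for the Test Report."""
        lines = [title, "=" * len(title), ""]
        lines += ["* {}: {}".format(test_id, status)
                  for test_id, status in sorted(results.items())]
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        test_a = load_results("testA.json")   # hypothetical file names
        test_b = load_results("testB.json")
        print(diff_results(test_a, test_b))
        print(to_rst(test_b, title="Test B Results"))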

4.3. Planned tests

4.3.1. Backend Data Stores CSCI - (BDS)

Backend data stores include temporary and long-term storage of event data, user attributes, user state, indicators and observables, and other incident response related data produced during use of the DIMS system. The following sections describe the scope of formal testing for the Backend Data Stores (BDS) CSCI.

4.3.1.1. Test Levels

General testing of the Backend Data Stores CSCI will take place at the levels described in Test levels. Unit and integration levels apply to development, and the remaining levels apply to FQT.

  • Unit tests
  • Integration tests
  • Component interface tests
  • System tests

4.3.1.2. Test Classes

The following classes of tests, described in Test classes, will be performed during formal qualification testing of the Backend Data Stores CSCI:

  • Expected value testing
  • Simulated data
  • Erroneous input
  • Desk check testing

4.3.1.3. General Test Conditions

The following sub-paragraphs identify and describe the planned collections of FQT tests. Test personnel should have access to the Firefox web browser, VPN access, and a properly configured DIMS shell environment for testing.

4.3.1.3.1. Acceptance Tests

This collection of tests is run by a Tester via the User Interface to exercise the Backend Data Stores CSCI and verify that its functionality satisfies the associated requirements and user stories. Acceptance tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: System
  2. Test type or class: Expected value, simulated data, erroneous input, desk check
  3. Qualification method: Test
  4. SR reference: [[attributeStorage]] Attribute Storage, [[bdsUserStory1]] BDS User Story 1, [[bdsUserStory2]] BDS User Story 2
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)
4.3.1.3.2. Operational Tests

Tests in the Operational collection are automated tests that run when the CSCI is started and at prescribed intervals during operation. These tests report results via a log fanout and are used to verify system operation and availability. (Some of the test capabilities in this category will also be used to perform the tests described in States and Modes.)

  1. Test levels: System
  2. Test type or class: Timing, desk check
  3. Qualification method: Test
  4. SR reference: [[bdsUserStory1]] BDS User Story 1, [[bdsUserStory2]] BDS User Story 2
  5. Type of data to be recorded: Component ID, Wall clock time, other data TBD.
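
The reporting path for these checks might look like the following sketch, assuming RabbitMQ with the pika client (the exchange name, component identifier, and message fields are illustrative assumptions, not defined by this plan):

    import json
    import socket
    import time

    import pika  # assumed AMQP client library

    def report_check(exchange, component_id, status, elapsed):
        """Publish one operational-check result to a log fanout exchange."""
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.exchange_declare(exchange=exchange, exchange_type="fanout")
        body = json.dumps({
            "component_id": component_id,
            "host": socket.gethostname(),
            "status": status,                 # e.g., "PASS" or "FAIL"
            "wall_clock_seconds": elapsed,
            "timestamp": time.time(),
        })
        channel.basic_publish(exchange=exchange, routing_key="", body=body)
        conn.close()

    # Example: time a trivial availability probe and report the result.
    start = time.time()
    status = "PASS"                           # stand-in for a real availability check
    report_check("test.logs", "bds-datastore", status, time.time() - start)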

4.3.2. Dashboard Web Application CSCI - (DWA)

The Dashboard Web Application, also referred to as the DIMS Dashboard, consists of web application server (“DWA Server”) and client (“DWA Client”) components. The following sections describe the scope of testing for the Dashboard Web Application CSCI.

4.3.2.1. Test Levels

General testing of the Dashboard Web Application CSCI will take place at the levels described in Test levels. Unit and integration levels apply to development, and the remaining levels apply to FQT.

  • Unit tests
  • Integration tests
  • Component interface tests
  • System tests

4.3.2.2. Test Classes

The following classes of tests, described in Test classes, will be performed during formal qualification testing of the Dashboard Web Application CSCI:

  • Expected value testing
  • Simulated data
  • Erroneous input
  • Desk check testing

4.3.2.3. General Test Conditions

The following sub-paragraphs identify and describe the planned collections of FQT tests. Test personnel should have access to the Firefox web browser, VPN access, and a properly configured DIMS shell environment for testing.

4.3.2.3.1. User Interface Tests

The purpose of this collection is to validate the functionality of Dashboard Web Application User Interface (UI) elements. UI tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: Component interface
  2. Test type or class: Expected value, simulated data, erroneous input, desk check
  3. Qualification method: Test
  4. SR reference: [[dwaUserStory7]] DWA User Story 7
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)
4.3.2.3.2. Acceptance Tests

This collection of tests is run by a Tester via the User Interface to exercise the Dashboard Web Application and verify that its functionality satisfies the requirements expressed in user stories. Acceptance tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: System
  2. Test type or class: Expected value, simulated data, erroneous input, desk check
  3. Qualification method: Test
  4. SR reference: [[dwaUserStory1]] DWA User Story 1, [[dwaUserStory2]] DWA User Story 2, [[dwaUserStory3]] DWA User Story 3, [[dwaUserStory4]] DWA User Story 4, [[dwaUserStory5]] DWA User Story 5, [[dwaUserStory6]] DWA User Story 6, [[dwaUserStory9]] DWA User Story 9
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)
4.3.2.3.3. Operational Tests

Tests in the Operational collection are automated tests that run when the CSCI is started and at prescribed intervals during operation. These tests report results via a log fanout and are used to verify system operation and availability. (Some of the test capabilities in this category will also be used to perform the tests described in States and Modes.)

  1. Test levels: System
  2. Test type or class: Timing, desk check
  3. Qualification method: Test
  4. SR reference: [[dwaUserStory8]] DWA User Story 8
  5. Type of data to be recorded: Component ID, Wall clock time, other data TBD.

4.3.3. Data Integration and User Tools CSCI - (DIUT)

The following sections describe the scope of formal testing for the Data Integration and User Tools (DIUT) CSCI.

4.3.3.1. Test Levels

General testing of the Data Integration and User Tools CSCI will take place at the levels described in Test levels. Unit and integration levels apply to development, and the remaining levels apply to FQT.

  • Unit tests
  • Integration tests
  • Component interface tests
  • System tests

4.3.3.2. Test Classes

The following classes of tests, described in Test classes, will be performed during formal qualification testing of the Data Integration and User Tools CSCI:

  • Expected value testing
  • Simulated network failures testing
  • Stress testing
  • Timing testing

4.3.3.3. General Test Conditions

The following sub-paragraphs identify and describe the planned groups of tests for the DIUT CSCI.

4.3.3.3.1. Tupelo Whole Disk Initial Acquisition Test

This test relates to Tupelo, a whole-disk acquisition and search tool that is one component of the DIUT CSCI. The purpose of this test is to ensure that the entire contents of a test disk of arbitrary size can be uploaded to a Tupelo store component over a network.

  1. Test Levels: integration, system
  2. Test classes: expected value, timing, stress
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Copy of test disk content stored in Tupelo store.
4.3.3.3.2. Tupelo Whole Disk Subsequent Acquisition Test

This test also relates to Tupelo. The purpose of this test is to ensure that the entire contents of a test disk of arbitrary size, previously uploaded to the same store, can be uploaded again over a network. The upload time and filesystem usage at the store site should be less than for the initial upload.

  1. Test Levels: integration, system
  2. Test classes: expected value, timing
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Test log showing smaller stored disk and reduced elapsed time for disk acquisition.
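
The pass criterion for this test could be checked with a small harness along the following lines (a sketch only; the upload command is supplied by the tester, and the store path and size measurement are illustrative assumptions rather than part of Tupelo itself):

    import subprocess
    import time
    from pathlib import Path

    def store_bytes(store_path):
        """Total bytes currently used under the Tupelo store directory."""
        return sum(p.stat().st_size for p in Path(store_path).rglob("*") if p.is_file())

    def timed_upload(upload_cmd, store_path):
        """Run the tester-supplied upload command; return (elapsed seconds, store growth in bytes)."""
        before = store_bytes(store_path)
        start = time.time()
        subprocess.run(upload_cmd, check=True)
        return time.time() - start, store_bytes(store_path) - before

    # upload_cmd is whatever command the tester uses to push the test disk to the store.
    initial = timed_upload(["/path/to/initial-upload-command"], "/path/to/store")
    repeat = timed_upload(["/path/to/repeat-upload-command"], "/path/to/store")

    assert repeat[0] < initial[0], "second upload should take less time"
    assert repeat[1] < initial[1], "second upload should add less data to the store"
    print("initial:", initial, "repeat:", repeat)
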
4.3.3.3.3. Tupelo Store Tools Test

This test also relates to Tupelo. The purpose of this test is to ensure that Tupelo store-processing tools can create so-called ‘products’ from previously uploaded disk images. These products are then to be stored in the same store as the images.

  1. Test Levels: integration, system
  2. Test classes: expected value, timing
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Products of store tools to exist as supplementary files in Tupelo store.
4.3.3.3.4. Tupelo Artifact Search Test

This test also relates to Tupelo. The purpose of this test is to ensure that a search request sent to a Tupelo store (e.g., via AMQP) results in the correct response. If the search input identifies an artifact that should be found in the store, a positive result must be communicated to the search invoker; likewise, a query for an artifact that should not be found must return a negative result. The objective is to avoid both false positives and false negatives.

  1. Test Levels: integration, system
  2. Test classes: expected value, timing
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Log files generated when making test queries of the existence of various files to a Tupelo store.
4.3.3.3.5. Tupelo Sizing Test

This test also relates to Tupelo. The purpose of this test is to stress the Tupelo software by inputting a large disk image, on the order of 1 TB or even 2 TB.

  1. Test Levels: integration, system
  2. Test classes: stress, timing
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Copy of test disk content stored in Tupelo store.
4.3.3.3.6. Tupelo Network Failure Test

This test also relates to Tupelo. The purpose of this test is to assert the correctness of the Tupelo store when a disk upload is interrupted, whether by a client failure or by a network failure.

  1. Test Levels: integration, system
  2. Test classes: expected state
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Summary of Tupelo store contents before and after a whole disk upload operation interrupted by a client or network failure.
4.3.3.3.7. Tupelo Boot Media Test 1

This test also relates to Tupelo. The purpose of this test is to check that a computer can be booted from a CD/USB containing a Linux Live CD with integrated Tupelo software, and that the local hard drive(s) of that computer can be uploaded to a remote Tupelo store over the network.

  1. Test Levels: integration, system
  2. Test classes: expected state
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Observed behavior during demonstration.
  6. Special Requirements: Tupelo Boot CD
4.3.3.3.8. Tupelo Boot Media Test 2

This test also relates to Tupelo. The purpose of this test is to check that a computer can be booted from a CD/USB containing a Linux Live CD with integrated Tupelo software, and that the local hard drive(s) of that computer can be uploaded to a Tupelo store located on a locally attached external hard drive.

  1. Test Levels: integration, system
  2. Test classes: expected state
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[diutUserStory6]] DIUT User Story 6
  5. Type of Data Recorded: Disk contents of computer’s own hard drive and external hard drive.
  6. Special Requirements: Tupelo Boot CD and External Hard Drive and Cabling
4.3.3.3.9. User Interface Tests

The purpose of this collection is to validate the functionality of the Data Integration and User Tools capabilities related to general incident response and/or incident tracking or investigative activities. These tests are related to tests described in User Interface Tests in the DWA CSCI section. DIUT CSCI tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: Component interface
  2. Test type or class: Expected value, simulated data, erroneous input, desk check
  3. Qualification method: Test
  4. SR reference: [[diutUserStory2]] DIUT User Story 2, [[diutUserStory8]] DIUT User Story 8
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)
4.3.3.3.10. Acceptance Tests

This collection of tests is run by a Tester via the User Interface to exercise the Data Integration and User Tools capabilities and verify that their functionality satisfies the requirements expressed in user stories. These tests are related to tests described in Acceptance Tests in the DWA CSCI section. Acceptance tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: System
  2. Test type or class: Expected value, simulated data, erroneous input, desk check
  3. Qualification method: Test
  4. SR reference: [[incidentTracking]] Incident/Campaign Tracking, [[knowledgeAcquisition]] Knowledge Acquisition, [[aggregateSummary]] Summarize Aggregate Data, [[diutUserStory1]] DIUT User Story 1, [[diutUserStory3]] DIUT User Story 3, [[diutUserStory4]] DIUT User Story 4, [[diutUserStory5]] DIUT User Story 5, [[diutUserStory7]] DIUT User Story 7
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)
4.3.3.3.11. Operational Tests

Tests in the Operational collection are automated tests that run when the CSCI is started and at prescribed intervals during operation. These tests report results via a log fanout and are used to verify system operation and availability. (Some of the test capabilities in this category will also be used to perform the tests described in States and Modes.)

  1. Test levels: System
  2. Test type or class: Timing, desk check
  3. Qualification method: Test
  4. SR reference: [[aggregateSummary]] Summarize Aggregate Data, [[diutUserStory2]] DIUT User Story 2, [[diutUserStory4]] DIUT User Story 4, [[diutUserStory8]] DIUT User Story 8
  5. Type of data to be recorded: Component ID, Wall clock time, other data TBD.

4.3.4. Vertical/Lateral Information Sharing CSCI - (VLIS)

The following sections describe the scope of formal testing for the Vertical and Lateral Information Sharing (VLIS) CSCI.

4.3.4.1. Test Levels

General testing of the Vertical and Lateral Information Sharing CSCI will take place at the levels described in Test levels. Unit and integration levels apply to development, and the remaining levels apply to FQT.

  • Unit tests
  • Component interface tests
  • System tests

4.3.4.2. Test Classes

The following classes of tests, described in Test classes, will be performed during formal qualification testing of the Vertical and Lateral Information Sharing CSCI:

  • Expected value testing

4.3.4.3. General Test Conditions

The following sub-paragraphs identify and describe the planned groups of tests.

4.3.4.3.1. Ingest of Indicators of Compromise via STIX Documents

This test relates to stix-java and Tupelo. stix-java is a DIMS-sourced Java library for manipulating MITRE's STIX document format. STIX documents containing indicators of compromise (IOCs) in the form of file hashes and file names shall be parsed. The hashes and names shall be submitted to the DIMS Tupelo component, and all stored disks searched for the IOCs. Hit or miss results are then collected.

  1. Test Levels: component interface, system
  2. Test classes: expected value
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[structuredInput]] Structured data input
  5. Type of Data Recorded: Copy of search results, copy of input STIX documents, summary of Tupelo store state.
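
The IOC-extraction step of this test might be prototyped as follows (a sketch only: it uses plain XML parsing rather than the stix-java library named above, and the CybOX namespace URIs and element names are assumptions to be checked against the actual STIX 1.x documents used):

    import xml.etree.ElementTree as ET

    # Assumed CybOX namespaces for STIX 1.x file-object observables.
    NS = {
        "FileObj": "http://cybox.mitre.org/objects#FileObject-2",
        "cyboxCommon": "http://cybox.mitre.org/common-2",
    }

    def extract_file_iocs(stix_path):
        """Pull candidate file-name and file-hash IOCs out of a STIX 1.x XML document."""
        root = ET.parse(stix_path).getroot()
        names = [e.text for e in root.iter("{%s}File_Name" % NS["FileObj"]) if e.text]
        hashes = [e.text for e in root.iter("{%s}Simple_Hash_Value" % NS["cyboxCommon"]) if e.text]
        return names, hashes

    # names, hashes = extract_file_iocs("indicators.xml")   # hypothetical input file
    # Each name and hash would then be submitted to the Tupelo store search interface.
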
4.3.4.3.2. Authoring of Indicators of Compromise via STIX Documents

This test relates to stix-java, a DIMS-sourced Java library for manipulating MITRE's STIX document format. STIX documents containing indicators of compromise (IOCs) in the form of file hashes and file names shall be created. The hashes and names shall be auto-generated from the output of CIF feeds, from Ops-Trust email attachments, and from Tupelo whole-disk analysis results.

  1. Test Levels: component interface, system
  2. Test classes: expected value
  3. Qualification Method: Demonstration, inspection
  4. SR reference: [[structuredInput]] Structured data input
  5. Type of Data Recorded: Copy of created STIX documents, summary of Tupelo store state, CIF feed results

4.3.5. States and Modes

There are several states/modes that the DIMS system must support, including a test mode, debug mode, and a demonstration mode. The following section describes the scope of testing for these states/modes.

4.3.5.1. Test Levels

General testing of the required states/modes will take place at the System level only, as described in Test levels.

4.3.5.2. Test Classes

The following classes of tests, described in Test classes, will be performed during formal qualification testing of states/modes:

  • Desk check testing

4.3.5.3. General Test Conditions

The following sub-paragraphs identify and describe the planned collections of FQT tests. Test personnel should have access to the Firefox web browser, VPN access, and a properly configured DIMS shell environment for testing.

4.3.5.3.1. States/Modes Tests

The purpose of this collection is to validate the functionality of the defined states/modes. These tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: System level
  2. Test type or class: Desk check
  3. Qualification method: Test
  4. SR reference: [[modeToggles]] Mode toggles, [[testMode]] Test Mode, [[debugMode]] Debug Mode, [[demoMode]] Demonstration Mode
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)

4.3.6. Security and Privacy Tests

There are several security controls related to user accounts, access keys, and network access. The following section describes the scope of testing for these aspects of DIMS.

4.3.6.1. Test Levels

General testing of the required security and privacy requirements will take place at the Component interface level and System level, as described in Test levels.

4.3.6.2. Test Classes

The following classes of tests, described in Test classes, will be performed during formal qualification testing of the security and privacy requirements:

  • Expected value testing
  • Erroneous input
  • Desk check testing

4.3.6.3. General Test Conditions

The following sub-paragraphs identify and describe the planned collections of FQT tests. Test personnel should have access to the Firefox web browser, VPN access, and a properly configured DIMS shell environment for some tests, while other tests (e.g., port scanning) will be performed from external hosts without any account or credential data.

4.3.6.3.1. Security Tests

The purpose of this collection is to validate the functionality of the defined security and privacy requirements. These tests will be entered, managed, executed, and reported via JIRA. The test descriptions, steps, test data, expected results for each step, and actual results will be included in the Test Report.

  1. Test levels: Component interface level, System level
  2. Test type or class: Expected value, Erroneous Input, Desk check
  3. Qualification method: Test
  4. SR reference: [[networkAccessControls]] Network Access Controls, [[accountAccessControls]] Account Access Controls, [[secondFactorAuth]] Second-factor authentication, [[accountSuspension]] Account suspension, [[keyRegeneration]] Key Regeneration and Replacement
  5. Special requirements: Access to the DIMS JIRA tool
  6. Type of data to be recorded: Tester, Execution date, Status (Pass/Fail)
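
One automatable check in this collection, verifying that network access controls block ports that should not be reachable from an external host, might look like the following sketch (the target host and port list are placeholders supplied by the tester):

    import socket

    def port_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholders: the externally visible host and the ports that the network
    # access controls are expected to block.
    TARGET_HOST = "dims.example.org"
    BLOCKED_PORTS = [23, 3306, 6379]

    failures = [p for p in BLOCKED_PORTS if port_open(TARGET_HOST, p)]
    if failures:
        print("FAIL: ports reachable that should be blocked:", failures)
    else:
        print("PASS: all listed ports are blocked from this external host")
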
4.3.6.3.2. Operational Tests

Tests in the Operational collection are automated tests that run on demand or at prescribed intervals during normal operation. These tests report results via both the DWA CSCI components and a log fanout, and are used to verify system operation and availability. (Some of the test capabilities in this category are closely related to tests described in Operational Tests.)

  1. Test levels: System
  2. Test type or class: Timing, desk check
  3. Qualification method: Test
  4. SR reference: [[diutUserStory2]] DIUT User Story 2, [[diutUserStory4]] DIUT User Story 4, [[diutUserStory5]] DIUT User Story 5
  5. Type of data to be recorded: Component ID, Wall clock time, other data TBD.

Note

An application penetration test of DIMS components, including the Dashboard Web Application CSCI - (DWA) and the ops-trust portal (part of the Vertical/Lateral Information Sharing CSCI - (VLIS) and described in DIMS Operational Concept Description v 2.9.0, Section Ops-Trust portal Code Base), is to be performed by a professional service company.

This is a separate test from those described in this Test Plan, and the results will be reported in a separate document to be included in the final Test Report.

4.3.7. Design and Implementation Tests

A set of contractual requirements deal with the design and implementation of the internal software system and documentation. Tests in this collection are manual tests based on inspection or other observational qualification methods.

  1. Test levels: System
  2. Test type or class: Desk check
  3. Qualification method: Manual Test, Inspection
  4. SR reference: [[automatedProvisioning]] Automated Provisioning, [[agileDevelopment]] Agile development, [[continuousIntegration]] Continuous Integration & Delivery, [[leverageOpenSource]] Leveraging open source components
  5. Type of data to be recorded: Declarative statements as appropriate.

4.3.8. Software Release Tests

A set of contractual requirements deal with the public release of open source software components and documentation. Tests in this collection are manual tests based on inspection or other observational qualification methods.

  1. Test levels: System
  2. Test type or class: Desk check
  3. Qualification method: Manual Test, Inspection
  4. SR reference: [[exportControl]] Export control, [[noEncryption]] No included cryptographic elements, [[openSourceRelease]] Open source release
  5. Type of data to be recorded: Declarative statements as appropriate.
[1] Source: Automated Software Testing: Introduction, Management, and Performance, by Elfriede Dustin, Jeff Rashka, and John Paul.