# Testing Azure Kinect SDK

To run the tests, ensure the [Depth Engine](depthengine.md) is present.

## Test Categories

The Azure Kinect repo has several categories of tests:

* Unit tests
* Functional tests
  * Functional tests that depend on a single Azure Kinect device
  * Functional tests that have custom requirements
* Stress tests
* Perf tests
* Firmware tests

## Test Quality

Functional and unit tests are run every time a pull request is submitted by Azure
Kinect team members and must run reliably. Authors of new tests should verify that
their tests succeed over multiple iterations. Tests should also be thread-safe;
[gtest_parallel.py](https://github.com/google/gtest-parallel) has been a helpful
tool for ensuring that threaded and performance-sensitive tests succeed even in the
worst of conditions.

## Test Types

### Unit Tests

Unit tests are tests which run on the build machine. They must be very quick
(under ~1 second), reproducible, and must not require hardware. Unit tests are
built using the Google Test framework. For a basic example of writing a unit
test, see
[tests/UnitTests/queue_ut/queue.cpp](../tests/UnitTests/queue_ut/queue.cpp).

After compiling, unit tests can be run using `ctest -L unit` in the build
directory. Unit tests are run as part of the CI system.

**NOTE:** *These tests must succeed for a pull request to be merged.*

### Functional Tests

#### Single Device

Functional tests are tests which run on the test machine. They must be quick
(under ~10 seconds), reproducible, and may require hardware. Functional tests
are built using the Google Test framework. For a basic example of writing a
functional test, see
[tests/example/test.cpp](../tests/example/test.cpp).

After compiling, functional tests can be run using `ctest -L "^functional$"`
in the build directory. Functional tests are run as part of the CI system.

**NOTE:** *These tests must succeed for a pull request to be merged.*

#### Custom Configurations

Not everyone will have access to multiple devices, so tests with extra
dependencies have been moved under their own test label. Some tests require
OpenCV, multiple devices, or even a known chessboard. Tests that would fail
outside of automation are disabled by default in these workflows, so users
running tests locally are less likely to become complacent about failures. To
run these tests, use the command `ctest -L "functional_custom"`.

### Stress Tests

Stress tests are tests which run the same logic repeatedly to check for
crashes while the host is under heavy load. These tests will be run on a
rolling build and may require hardware. Stress tests are built using the
Google Test framework.

After compiling, stress tests can be run using `ctest -L stress` in the build
directory.

**NOTE:** *At the moment, the Azure Kinect SDK does not have any stress tests.
However, there are some 'stress-like' tests running as unit tests that run for
several seconds to detect threading and timing related issues.*

### Perf Tests

Perf tests are tests whose results are purely statistics rather than Pass/Fail.
These tests will be run on a rolling build and may require hardware. Perf
tests are built using the Google Test framework.

After compiling, perf tests can be run using `ctest -L perf` in the build directory.

**NOTE:** *These tests are run on demand.*

### Firmware Tests

Firmware tests are used to validate new firmware drops. Firmware tests are
built using the Google Test framework. After compiling, firmware tests can be
run from the build directory using
`bin\firmware_fw.exe -ff <factory firmware> -lf <lkg firmware> -tf <test firmware> -cf <candidate firmware>`.

**NOTE:** *These tests are run manually when a new firmware candidate is being
evaluated. These tests rely on a specific hardware configuration that includes a
USB connection exerciser ([Type C](https://store.mcci.com/collections/frontpage/products/model-3101-type-c-connection-exerciser)
or [Type A](https://store.mcci.com/collections/frontpage/products/hmd-exerciser)).*

## Running Tests

These tests are built using the Google Test framework. The easiest way to
invoke a single test is to run the test executable generated by the build.
This works great for rapid iteration on one test executable but does not
scale to running all tests.

To run all tests, use CTest. CTest has registered all tests and will execute
all test executables. However, CTest requires that the test binaries are run on
the same machine they were built on.

To run tests on a machine other than the build machine:

* Copy the bin folder to the target machine.
* Copy [RunTestList.py](../scripts/RunTestList.py) to the target machine.
* Use the `<xxx>_test_list.txt` files from the bin folder as the `list` input to the script.