This test class contains five different test methods:

- testOneIntendedReader - checks that exactly one reader is intended for a product; multiple intended readers can cause problems.
- testPluginDecodeQualifications - checks that the returned decode qualification is as expected and that detecting the qualification does not take too long.
- testProductIO_readProduct - simply opens the product, without reading raster data.
- testProductReadTimes - measures how long reading the data takes and logs the results.
- testReadIntendedProductContent - the most important test. It ensures that the product structure is as expected, the data is read correctly, and the geo-information is as required.
Java VM system properties are used to configure the execution.
-Dsnap.reader.tests.execute=true

This property enables the acceptance test runner. If it is not set, or set to false, all reader acceptance tests are skipped and a message is printed to the console; only unit-level tests are executed in this case.
This property defines the root directory of the test dataset. All test-product definitions are referenced relative to this root directory. If the property is not set or does not denote a valid directory, the test setup fails. The data may be located at different places on the server and on the local system, but below this base directory the structure must be the same.
By default, the reader tests fail if test data is missing. This property can be set to false to avoid that, which is helpful for developers who do not have the complete test dataset on their machine.
If a ProductReaderPlugIn class name is given, only the tests for this reader plugin are executed. It is also possible to give the package of the reader plugins: for example, if <ProductReaderPlugIn_ClassName> is "org.esa.s3tbx.dataio.landsat", the tests for all Landsat reader plugins are executed.
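Taken together, these properties might be passed to a Maven test run as sketched below. Only snap.reader.tests.execute is spelled out above; the other property names used here (snap.reader.tests.data.dir, snap.reader.tests.failOnMissingData, snap.reader.tests.class.name) are assumptions and should be verified against your SNAP version.

```shell
# Sketch of a test invocation with all four properties set.
# Only snap.reader.tests.execute is confirmed by the text above; the other
# property names are assumptions -- verify them before use.
DATA_DIR="/data/reader-tests"            # root directory of the test dataset
PLUGIN="org.esa.s3tbx.dataio.landsat"    # class name or package filter

CMD="mvn test \
  -Dsnap.reader.tests.execute=true \
  -Dsnap.reader.tests.data.dir=$DATA_DIR \
  -Dsnap.reader.tests.failOnMissingData=false \
  -Dsnap.reader.tests.class.name=$PLUGIN"

echo "$CMD"
```

Omitting the class-name property would run the tests for all reader plugins on the classpath.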
Creating a New Reader Test
In order to define a reader test, at least two files must be provided in the resource directory of the reader module. They must be placed in the same package as the implemented reader plugin. The files must follow this naming convention:
where <PRODUCT-ID> must match the product identifier that is defined within the *-data.json file.
The file L71191027_02720070313.json shows example content. How such a file can be created is explained further down, in the section 'Create the Expected Content'.
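As an illustration of how the two files relate, the sketch below writes a hypothetical *-data.json whose product id matches the name of the expected-content file. The file name and all JSON field names (testProducts, id, relativePath, description) are assumptions, not the authoritative schema; check an existing *-data.json in snap-reader-tests for the exact layout.

```shell
# Hypothetical *-data.json; field names are assumptions.
# The "id" is the <PRODUCT-ID> that the expected-content file
# L71191027_02720070313.json must match.
DATA_FILE="$(mktemp -d)/LandsatReaderPlugIn-data.json"  # hypothetical name
cat > "$DATA_FILE" <<'EOF'
{
  "testProducts": [
    {
      "id": "L71191027_02720070313",
      "relativePath": "landsat/L71191027_02720070313_B10.TIF",
      "description": "Landsat 7 ETM+ test product"
    }
  ]
}
EOF
grep '"id"' "$DATA_FILE"
```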
Defining the Dependencies
There are two different approaches for executing the reader tests:
1. Execute Tests within the Reader Project
In the first case, the dependency on the reader implementation must be added to the dependency list of the snap-reader-tests module. This approach is used by the toolboxes, such as s2tbx and s1tbx.
Remember to install the reader module into your local Maven repository after you have made changes. The dependencies of snap-reader-tests resolve to the installed jars; if the module is not installed, the tests run against the old code.
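A minimal sketch of that workflow, assuming a multi-module checkout (the module name s3tbx-landsat-reader is a made-up example):

```shell
# Step 1: install the changed reader module into the local Maven repository.
# Step 2: run the acceptance tests so snap-reader-tests uses the fresh jar.
READER_MODULE="s3tbx-landsat-reader"   # hypothetical module directory

STEP1="mvn clean install -pl $READER_MODULE"
STEP2="mvn test -pl snap-reader-tests -Dsnap.reader.tests.execute=true"

printf '%s\n%s\n' "$STEP1" "$STEP2"
```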
After the JSON files are committed, the reader tests are executed automatically on the build server.
2. Execute Tests within the SNAP Reader Test Project
For the second approach, the dependency on the snap-reader-tests project has to be added to the reader's dependency list.
This approach can be used by external reader acceptance tests, for example in ProbaVBox or SeaDAS. In this case you need to implement a test class extending 'org.esa.snap.dataio.ProductReaderAcceptanceTest' and leave its implementation empty; it only serves as something in the project that can be started.
In the dependency list of the external module, both the regular snap-reader-tests dependency and the one with type 'test-jar' must be added.
It is also possible to add snap-reader-tests as a plugin in SNAP. Once installed, a menu option is available to generate the expected content. It generates the expected content by randomly selecting some pixel values for every band, metadata fields, and so on. If pins have been placed in the product, the pixel values are collected at those points.
Instead of using the compiled plugin within a SNAP installation, the cluster which is built can also be used within the IDE. The path to the cluster must be passed via the '--cluster' parameter. See also: How to run and debug SNAP from an IDE.