The validate page is subdivided into areas for the various validation parameters. Each area has a menu with two icons at the top right. Hover over the question mark to get help for the respective area. You can use the +/- icon to fold or unfold the area.
Step 1 - Choose the data you would like to validate - including the dataset name, the version of the dataset, and one of the soil moisture variables provided in the dataset. More details on the supported datasets can be found here.
Step 2 [optional] - Choose the criteria by which you would like to filter this dataset. The filters available depend on the data contained within the chosen dataset. For example, you can filter the C3S data to include only data with no inconsistencies detected (flag = 0). Details of the filter options provided for each dataset are given on the supported datasets page here. You can also hover your mouse pointer over the question mark next to a filter to get a short explanation.
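Conceptually, such a flag filter keeps only the observations whose quality flag matches the selected criterion. The following is a minimal sketch with made-up data (the actual filtering runs on the QA4SM servers; the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical sample of a satellite soil moisture time series with a
# quality flag column; flag = 0 means "no inconsistencies detected".
data = pd.DataFrame({
    "soil_moisture": [0.21, 0.35, 0.28, 0.40],
    "flag": [0, 1, 0, 2],
})

# Keep only observations whose flag indicates clean data.
clean = data[data["flag"] == 0]
print(len(clean))  # 2
```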
Step 3 [optional] - If you want to intercompare several datasets, you can add more datasets to the validation using the + button, up to a maximum of five. Configure the settings for the additional datasets by selecting the respective tab and repeating steps 1 and 2 above.
Intercomparison: The intercomparison mode of QA4SM validates up to five satellite datasets against a common reference dataset. For each reference location (e.g. each ISMN station) it finds the nearest observation series in all selected satellite products. All observation series are then scaled (if selected) and temporally matched to the reference series. For validation, only the common time stamps (those available in all satellite products) are used to calculate validation metrics between the reference and each individual satellite product. This way, deviations in the metrics due to different temporal coverage are excluded, and validation results represent differences in the performance of the compared satellite products.
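The restriction to common time stamps can be sketched as an index intersection across the matched series (an illustration with invented dates, not QA4SM's actual matching code):

```python
import pandas as pd

# Three hypothetical satellite series on slightly different time grids.
a = pd.Series([0.20, 0.30, 0.25],
              index=pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]))
b = pd.Series([0.22, 0.27],
              index=pd.to_datetime(["2020-01-02", "2020-01-03"]))
c = pd.Series([0.19, 0.24, 0.30],
              index=pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-04"]))

# Only time stamps present in *all* series are used for the metrics,
# so differing temporal coverage cannot bias the comparison.
common = a.index.intersection(b.index).intersection(c.index)
print(len(common))  # 1 -- only 2020-01-02 occurs in all three series
```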
Step 4 - Choose the reference dataset you would like to use for the validation, including the dataset name, the version of the dataset, and one of the soil moisture variables provided in the dataset. More details on the supported datasets can be found here.
Step 5 [optional] - Choose the criteria by which you would like to filter the reference data prior to running the validation. The filters available depend on the data contained within the chosen dataset. For example, you can filter the ISMN data to include only data points where the soil_moisture_flag is "G" for "good". Details of the filter options provided for each dataset are given on the supported datasets page here. You can also hover your mouse pointer over the question mark next to a filter to get a short explanation.
Step 6 [optional] - If you want to calculate metrics from anomalies instead of absolute values, select the desired method in the "Method" drop-down menu. The options are:
Step 7 [optional] - Choose the geographic area over which the validation should be performed. You can either specify a lat/lon bounding box directly or select the area on a map by clicking the globe button. The trash button will clear all four bounding box coordinates. If you don't specify an area, a global validation will be done.
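Spatially, a bounding box simply restricts the validation to reference locations whose coordinates fall inside it. A small sketch (station names and coordinates are invented):

```python
# Hypothetical station list; bounding box as (min_lat, min_lon, max_lat, max_lon).
stations = [
    {"name": "A", "lat": 48.2, "lon": 16.4},
    {"name": "B", "lat": 34.1, "lon": -118.2},
    {"name": "C", "lat": 52.1, "lon": 11.6},
]
bbox = (40.0, 0.0, 55.0, 20.0)  # roughly central Europe

def in_bbox(station, bbox):
    """True if the station's coordinates lie inside the bounding box."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return (min_lat <= station["lat"] <= max_lat
            and min_lon <= station["lon"] <= max_lon)

selected = [s["name"] for s in stations if in_bbox(s, bbox)]
print(selected)  # ['A', 'C']
```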
Step 8 [optional] - Choose the date range over which the validation should be performed. Accepted formats are: YYYY*MM*DD or DD*MM*YYYY where * can be any of ".", "/" or "-". If you don't provide a period it will be implicitly determined through temporal matching of the data and reference selected. For the time range covered by the various datasets, see the datasets page.
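The accepted date patterns can be expressed as a small parser; this is an illustration of the format rule above, not QA4SM's actual input validation:

```python
from datetime import datetime

def parse_validation_date(text):
    """Parse YYYY*MM*DD or DD*MM*YYYY, where * is '.', '/' or '-'."""
    for sep in (".", "/", "-"):
        for fmt in (f"%Y{sep}%m{sep}%d", f"%d{sep}%m{sep}%Y"):
            try:
                return datetime.strptime(text, fmt)
            except ValueError:
                pass
    raise ValueError(f"unrecognised date: {text!r}")

print(parse_validation_date("2020-01-31"))  # 2020-01-31 00:00:00
print(parse_validation_date("31/01/2020"))  # 2020-01-31 00:00:00
```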
Step 9 [optional] - Activate Triple Collocation Analysis (TC). By default, QA4SM calculates validation metrics between dataset pairs (the reference and each candidate dataset). If more than 3 datasets are selected (including the reference), the switch to activate TC becomes available. If TC is selected, then in addition to the basic metrics between dataset pairs, TC metrics between triples of the selected datasets are calculated. The reference is included in all triples, and metrics are computed for all candidate datasets. Note that each TC metric is specific to one dataset (this is evident from the metric name) but is affected by all three datasets used in the triple. Only results from TC with independent datasets should be used.
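As background, the classical triple collocation estimator derives the error variance of one dataset from the covariances of the triple: for a triplet (x, y, z) with mutually independent errors, Var(err_x) = Var(x) − Cov(x,y)·Cov(x,z)/Cov(y,z). A sketch with synthetic data (this is the textbook formula, not QA4SM's internal implementation):

```python
import numpy as np

def tc_error_std(x, y, z):
    """Triple collocation error standard deviation of the first series,
    assuming mutually independent errors across the three datasets."""
    cov = np.cov(np.vstack([x, y, z]))
    err_var_x = cov[0, 0] - cov[0, 1] * cov[0, 2] / cov[1, 2]
    return np.sqrt(max(err_var_x, 0.0))

# Hypothetical synthetic triplet: a common signal plus independent noise
# with known standard deviations 0.1, 0.2 and 0.3.
rng = np.random.default_rng(0)
signal = rng.normal(size=5000)
x = signal + rng.normal(scale=0.1, size=5000)
y = signal + rng.normal(scale=0.2, size=5000)
z = signal + rng.normal(scale=0.3, size=5000)
print(tc_error_std(x, y, z))  # close to the true error std of x, 0.1
```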
Step 10 - Choose how the data (or reference) will be scaled before metrics calculation. The data can be scaled to the reference (default) or vice versa. Note that in an intercomparison validation (with multiple datasets), only scaling to the reference is possible. The scaling method determines how values of one dataset are mapped onto the value range of the other dataset for better comparability.
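One common scaling method is mean/standard deviation matching, which shifts and stretches the source series so its mean and spread match the reference. A minimal sketch (one possible method only; QA4SM offers several, and this is not its actual code):

```python
import numpy as np

def scale_to_reference(src, ref):
    """Map `src` onto the value range of `ref` by matching
    mean and standard deviation (mean/std scaling)."""
    return (src - src.mean()) / src.std() * ref.std() + ref.mean()

# Hypothetical candidate and reference values.
src = np.array([0.10, 0.20, 0.30, 0.40])
ref = np.array([0.25, 0.30, 0.35, 0.40])
scaled = scale_to_reference(src, ref)
# After scaling, `scaled` has the same mean and std as `ref`.
```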
Step 11 - Optionally name your validation results to make them easier to identify in the list of all your validations.
Step 12 - Run the validation process. You'll be notified via e-mail once it's finished. You don't need to keep the results window (or even your browser) open for the validation to run. The email will contain a link to your results.
The list shows all your validations, including the currently running ones, sorted by date (latest first).
Note: Your validations will be automatically removed 60 days after completion by our auto-cleanup process, unless you extend or archive them. You will be warned via email about validation expiry 7 days before deletion.
The icons in the validations' title bars indicate the following:
The buttons on the right-hand side of each validation have the following effects:
Once the validation process is finished, you can see a summary of the validation run on the results page.
The buttons at the bottom of the result overview have the following effects:
The following metrics are calculated during the validation process:
| Metric | Description |
|---|---|
| Pearson's r | Pearson correlation coefficient |
| Pearson's r p-value | p-value for the Pearson correlation coefficient |
| Spearman's rho | Spearman rank correlation coefficient |
| Spearman's rho p-value | p-value for the Spearman rank correlation coefficient |
| Root-mean-square deviation | Root-mean-square deviation |
| Bias (difference of means) | Average error |
| # observations | Number of observations |
| Unbiased root-mean-square deviation | Unbiased root-mean-square deviation |
| Mean square error | Mean square error |
| Mean square error correlation | Correlation component of the mean square error |
| Mean square error bias | Bias component of the mean square error |
| Mean square error variance | Variance component of the mean square error |
| Residual Sum of Squares | Residual sum of squares |
| TC: Signal-to-noise ratio | Triple collocation: signal-to-noise ratio |
| TC: Error standard deviation | Triple collocation: error standard deviation |
| TC: Scaling coefficient | Triple collocation: scaling coefficient |
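For reference, the basic pairwise metrics in the table can be sketched in a few lines (a simplified illustration with made-up numbers, not QA4SM's actual implementation):

```python
import numpy as np
from scipy import stats

# Hypothetical candidate/reference pair, already temporally matched.
cand = np.array([0.20, 0.25, 0.30, 0.35, 0.42])
ref = np.array([0.22, 0.24, 0.33, 0.36, 0.40])

r, p = stats.pearsonr(cand, ref)           # Pearson's r and its p-value
rho, p_rho = stats.spearmanr(cand, ref)    # Spearman's rho and its p-value
bias = np.mean(cand - ref)                 # bias (difference of means)
rmsd = np.sqrt(np.mean((cand - ref) ** 2)) # root-mean-square deviation
ubrmsd = np.sqrt(rmsd ** 2 - bias ** 2)    # unbiased RMSD
n_obs = len(cand)                          # number of observations
```

Note the identity RMSD² = ubRMSD² + bias², which is why both the biased and unbiased deviations are reported.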
Visualisations of these metrics are displayed in the Result Files section of the page: boxplots and geographical overview maps. You can select the metric shown with the left drop-down button below the graphs.
For an intercomparison validation, all boxplots are combined into one graph. The dataset displayed in the overview map can be selected with the drop-down button on the right.
You can also download a zip file of all the plots in PNG and SVG (vector) format by clicking the Download all graphs button, and the result NetCDF file with all metrics with the Download results in NetCDF button.
This feature allows you to publish the result NetCDF file of your validation to Zenodo under your own name but without creating your own Zenodo account. This gives you a DOI for your results, which you can cite in your publications to give your readers open access to your data.
Once you click the Publish button on the validation result page, you will be presented with a dialog containing the metadata the results will be published with. You can change the metadata to your liking (within some limits) and start the file upload to Zenodo by clicking Publish.
Note that we require 'qa4sm' to be one of the keywords, and that Title, Description, Keywords, and Name are mandatory fields. You don't need to give an affiliation or ORCID, though. Changes you make to your author details will not be stored to your user profile - for that, please go to the Profile page.
The upload can take a few minutes, so please be patient. If it fails, please try again a few hours later. If it still doesn't work, please email us at support (at) qa4sm.eu and include the error message you received.
Please be aware that the NetCDF file and the metadata will be stored at Zenodo under the account of the QA4SM project but with your name as the author. Zenodo is a separate website run at CERN over which the QA4SM team has no control.
Assigning a DOI to a result also means that it cannot easily be unpublished or deleted - see also Zenodo's FAQ.
If you prefer to use your own Zenodo account, you can of course do so - the QA4SM publication feature is just for convenience. Just download the NetCDF result file and upload it yourself through Zenodo's submission process with your own account. We'd ask you to use 'qa4sm' as one of the keywords so that we can easily find all QA4SM results on Zenodo with a keyword search.
If you want to email us to send comments, report errors, or ask questions, you can do so at support (at) qa4sm.eu.