
The first alternative Multi-size Product API led to the insight that actually only a single, product-level geo-coding is required when we force all raster data node image-to-model transformations to be affine. If we followed that approach, we'd have to either remove the RasterDataNode.geoCoding property entirely or remove just its setter. The getter could then look like this, after introducing an AffineWrapperGeoCoding:

public final GeoCoding getGeoCoding() {
    Product product = getProduct();
    if (product != null && product.getGeoCoding() != null) {
        return new AffineWrapperGeoCoding(product.getGeoCoding(),
                                          getImageToModelTransform());
    }
    return null;
}
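
For illustration, here is a minimal, self-contained sketch of how such a wrapper could delegate: band pixel positions are first pushed through the affine image-to-model transform and then handed to the product-level geo-coding. The GeoCoding interface below is a hypothetical stand-in reduced to a single pixel-to-geo lookup; the real SNAP interface is considerably larger.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class AffineWrapperDemo {

    /** Hypothetical, reduced stand-in for the SNAP GeoCoding interface. */
    interface GeoCoding {
        /** Maps image (pixel) coordinates to geographic coordinates. */
        Point2D getGeoPos(Point2D pixelPos);
    }

    /**
     * Wraps a product-level geo-coding: raster pixel coordinates are first
     * mapped to model (scene) coordinates via the affine image-to-model
     * transform, then delegated to the product geo-coding.
     */
    static class AffineWrapperGeoCoding implements GeoCoding {
        private final GeoCoding productGeoCoding;
        private final AffineTransform imageToModel;

        AffineWrapperGeoCoding(GeoCoding productGeoCoding, AffineTransform imageToModel) {
            this.productGeoCoding = productGeoCoding;
            this.imageToModel = imageToModel;
        }

        @Override
        public Point2D getGeoPos(Point2D pixelPos) {
            // transform the raster pixel position into model (scene) coordinates ...
            Point2D modelPos = imageToModel.transform(pixelPos, null);
            // ... then let the product-level geo-coding do the actual geo lookup
            return productGeoCoding.getGeoPos(modelPos);
        }
    }

    public static void main(String[] args) {
        // product geo-coding: identity lookup, for demonstration only
        GeoCoding productGc = pixelPos -> new Point2D.Double(pixelPos.getX(), pixelPos.getY());
        // band at half resolution: factor 2 from band pixels to scene pixels
        AffineTransform i2m = AffineTransform.getScaleInstance(2.0, 2.0);
        GeoCoding bandGc = new AffineWrapperGeoCoding(productGc, i2m);
        Point2D geo = bandGc.getGeoPos(new Point2D.Double(3.0, 4.0));
        System.out.println(geo.getX() + " " + geo.getY());  // prints "6.0 8.0"
    }
}
```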

However, we feel that such a model would be too restrictive. Also, we'd have to replace all RasterDataNode.setGeoCoding(gc) occurrences. This includes places where no affine alternative exists, e.g. the AVNIR2 bands and probably also the S-1 L1 SLC and S-3 SLSTR L1B bands (see Multi-size Products Specification).

If we drop our "affine" requirement, we face new logical as well as terminological problems in the current SNAP data model API:

  1. RasterDataNode.getSourceImage() returns a MultiLevelImage, whose getModel() method returns a MultiLevelModel
  2. MultiLevelModel has a getImageToModelTransform() method which returns an AffineTransform

So what does the model in getImageToModelTransform refer to? It can't actually be the Product.modelCRS, because in the general case - with different geo-codings per raster data node - there may be arbitrary, including non-linear, transformations between a raster data node's image coordinates and the product's scene image coordinates.

Possible problem mitigation and API clarification:

  • Product.modelCRS becomes Product.sceneCRS - this also fits better with other properties such as Product.sceneRasterSize (or better sceneImageSize?). However, remember that all vector data is then stored in scene CRS coordinates.
  • We won't use the term model in the org.esa.snap.framework API anymore.
  • The term model still makes perfect sense in the Ceres Layer and MultiLevelImage APIs.

After all, the model coordinate space is the one used by graphical layers before layers render their content (model elements) onto views that use view coordinates.

  • Image layers: In order to display an image in a view, a concatenated transformation is used, first from image to model coordinates (from the multi-level image) and then from model to view coordinates (both affine).
  • Vector data layers: In order to display vector data nodes, we'll have to generate figures for them. Figure geometries must always be in model coordinates, but vector data geometries are given in scene coordinates. In order to generate the transformed figure geometries, we need a scene-to-model transformation. Where does this come from? I.e. CRS.findTransform(product.getSceneCRS(), ???) -> 
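
The image-layer case above can be illustrated with plain java.awt.geom alone: the image-to-model and model-to-view transforms are both affine, so rendering only needs their concatenation. The concrete scale and offset values below are made up for demonstration.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class LayerTransformDemo {
    public static void main(String[] args) {
        // image-to-model: a lower-resolution level image, scaled by 4 in model space
        AffineTransform imageToModel = AffineTransform.getScaleInstance(4.0, 4.0);
        // model-to-view: zoom factor 0.5 plus a view offset of (100, 50) pixels
        AffineTransform modelToView = new AffineTransform(0.5, 0.0, 0.0, 0.5, 100.0, 50.0);

        // concatenate: image -> model -> view (applied right-to-left)
        AffineTransform imageToView = new AffineTransform(modelToView);
        imageToView.concatenate(imageToModel);

        Point2D viewPos = imageToView.transform(new Point2D.Double(10.0, 20.0), null);
        // image pixel (10, 20) -> model (40, 80) -> view (120, 90)
        System.out.println(viewPos.getX() + " " + viewPos.getY());  // prints "120.0 90.0"
    }
}
```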

Proposal: How to derive a scene-to-model-transform:

  • 1.: If a band's geo-coding is a CrsGeoCoding and the map CRS of this geo-coding equals the product's scene CRS, then the scene-to-model transform is found via CRS.findMathTransform.
  • 2.: The scene-to-model transformations are set by the readers, as the transformations are known to them.
  • 3.: If no transformation is given, an identity transform is returned.
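
The three rules above could be sketched as a single lookup method. The types below (GeoCoding, CrsGeoCoding, Band) are hypothetical, heavily reduced stand-ins for the real SNAP and GeoTools classes, and rule 1 is simplified to an identity result since equal CRSs are compared by name here rather than via CRS.findMathTransform.

```java
import java.awt.geom.AffineTransform;
import java.util.Objects;

public class SceneToModelDemo {

    /** Hypothetical stand-in for a geo-coding; only the map CRS matters here. */
    interface GeoCoding { }

    /** Hypothetical stand-in for CrsGeoCoding, carrying its map CRS by name. */
    static class CrsGeoCoding implements GeoCoding {
        final String mapCrs;
        CrsGeoCoding(String mapCrs) { this.mapCrs = mapCrs; }
    }

    /** Hypothetical, reduced band: geo-coding plus an optional reader-set transform. */
    static class Band {
        GeoCoding geoCoding;
        AffineTransform readerSetSceneToModel;  // rule 2: set by the reader, may be null
    }

    /**
     * Sketch of the proposed derivation:
     * 1. CrsGeoCoding whose map CRS equals the scene CRS -> transform derived
     *    from the CRSs (identity here, since the CRSs are equal by name)
     * 2. otherwise, use the transformation set by the reader, if any
     * 3. otherwise, fall back to the identity transform
     */
    static AffineTransform getSceneToModelTransform(Band band, String sceneCrs) {
        if (band.geoCoding instanceof CrsGeoCoding
                && Objects.equals(((CrsGeoCoding) band.geoCoding).mapCrs, sceneCrs)) {
            return new AffineTransform();  // rule 1: equal CRSs -> identity mapping
        }
        if (band.readerSetSceneToModel != null) {
            return band.readerSetSceneToModel;  // rule 2: reader-provided transform
        }
        return new AffineTransform();  // rule 3: identity fallback
    }

    public static void main(String[] args) {
        Band band = new Band();
        band.readerSetSceneToModel = AffineTransform.getScaleInstance(2.0, 2.0);
        // no CrsGeoCoding -> rule 2 applies, the reader-set transform is returned
        System.out.println(getSceneToModelTransform(band, "EPSG:4326").getScaleX());  // prints "2.0"
    }
}
```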

 

This file lists some places where the SceneRasterTransform had been used and where the upcoming scene-to-model transform might be used:
