pixel_distribution_adaptation

PixelDistributionAdaptation

Bases: ImageOnlyAlbumentation

Naive and quick pixel-level domain adaptation.

It provides pixel-level domain adaptation by fitting a simple transform (such as PCA, StandardScaler, or MinMaxScaler) to both the original and the reference image, transforming the original image with the transform fitted on it, and then applying the inverse of the transform fitted on the reference image.
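
For intuition, here is a minimal sketch of the underlying idea using scikit-learn's PCA. The adapt_pixels helper is hypothetical and omits the input normalization, value clipping, and random blending that the Albumentations transform wrapped by this Op also performs:

import numpy as np
from sklearn.decomposition import PCA

def adapt_pixels(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Treat each image as an (n_pixels, n_channels) matrix of pixel vectors.
    src = source.reshape(-1, source.shape[-1]).astype(np.float64)
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    # Fit one transform per image.
    src_pca = PCA(n_components=src.shape[-1]).fit(src)
    ref_pca = PCA(n_components=ref.shape[-1]).fit(ref)
    # Project the source pixels with their own transform, then invert with the
    # reference transform so the output inherits the reference's pixel statistics.
    matched = ref_pca.inverse_transform(src_pca.transform(src))
    return matched.reshape(source.shape)

The blend_ratio parameter below then controls how strongly the matched image is mixed back with the original.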

Parameters:

reference_images (Union[Any, Iterable[Any]], required)
    Sequence of objects that will be converted to images by read_fn. These can be either image paths or numpy arrays (depending on read_fn).

blend_ratio (Tuple[float, float], default (0.25, 1.0))
    Tuple of min and max blend ratio. The matched image will be blended with the original using a random blend factor for increased diversity of the generated images.

read_fn (Callable, default lambda x: x)
    User-defined function to read an image, tensor, or numpy array. The function should take an element of reference_images.

transform_type (str, default 'pca')
    Type of transform; "pca", "standard", and "minmax" are allowed.

inputs (Union[str, Iterable[str]], required)
    Key(s) of images to be modified.

outputs (Union[str, Iterable[str]], required)
    Key(s) into which to write the modified images.

mode (Union[None, str, Iterable[str]], default None)
    What mode(s) to execute this Op in. For example, "train", "eval", "test", or "infer". To execute regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument like "!infer" or "!train".

ds_id (Union[None, str, Iterable[str]], default None)
    What dataset id(s) to execute this Op in. To execute regardless of ds_id, pass None. To execute in all ds_ids except for a particular one, you can pass an argument like "!ds1".
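
As a usage sketch, the Op can be wired into a Pipeline as shown below. The random arrays and the NumpyDataset wrapper are placeholder choices for illustration; in practice reference_images would point at real target-domain images (or image paths, together with a suitable read_fn):

import numpy as np
import fastestimator as fe
from fastestimator.dataset.numpy_dataset import NumpyDataset
from fastestimator.op.numpyop.univariate.pixel_distribution_adaptation import PixelDistributionAdaptation

# Placeholder source-domain data and target-domain reference images.
train_data = NumpyDataset({"x": np.random.rand(8, 32, 32, 3).astype(np.float32)})
reference_images = [np.random.rand(32, 32, 3).astype(np.float32) for _ in range(4)]

pipeline = fe.Pipeline(
    train_data=train_data,
    batch_size=4,
    ops=[
        PixelDistributionAdaptation(inputs="x",
                                    outputs="x",
                                    reference_images=reference_images,
                                    blend_ratio=(0.25, 1.0),
                                    read_fn=lambda x: x,  # references are already numpy arrays
                                    transform_type="pca",
                                    mode="train")
    ])
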
Source code in fastestimator/fastestimator/op/numpyop/univariate/pixel_distribution_adaptation.py
@traceable()
class PixelDistributionAdaptation(ImageOnlyAlbumentation):
    """Naive and quick pixel-level domain adaptation.

    It provides pixel-level domain adaptation by fitting a simple transform (such as PCA, StandardScaler, or
    MinMaxScaler) to both the original and the reference image, transforming the original image with the
    transform fitted on it, and then applying the inverse of the transform fitted on the reference image.

    Args:
        reference_images: Sequence of objects that will be converted to images by read_fn. These can be either
            image paths or numpy arrays (depending on read_fn).
        blend_ratio: Tuple of min and max blend ratio. The matched image will be blended with the original
            using a random blend factor for increased diversity of the generated images.
        read_fn: User-defined function to read an image, tensor, or numpy array. The function should take an
            element of reference_images.
        transform_type: Type of transform; "pca", "standard", and "minmax" are allowed.
        inputs: Key(s) of images to be modified.
        outputs: Key(s) into which to write the modified images.
        mode: What mode(s) to execute this Op in. For example, "train", "eval", "test", or "infer". To execute
            regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument
            like "!infer" or "!train".
        ds_id: What dataset id(s) to execute this Op in. To execute regardless of ds_id, pass None. To execute in all
            ds_ids except for a particular one, you can pass an argument like "!ds1".
    """
    def __init__(self,
                 inputs: Union[str, Iterable[str]],
                 outputs: Union[str, Iterable[str]],
                 reference_images: Union[Any, Iterable[Any]],
                 mode: Union[None, str, Iterable[str]] = None,
                 ds_id: Union[None, str, Iterable[str]] = None,
                 blend_ratio: Tuple[float, float] = (0.25, 1.0),
                 read_fn: Callable = lambda x: x,  # e.g. to convert a tensor reference into a numpy array
                 transform_type: str = 'pca'
                 ):
        super().__init__(PixelDistributionAdaptationAlb(reference_images=reference_images,
                                blend_ratio=blend_ratio,
                                read_fn=read_fn,
                                transform_type=transform_type,
                                always_apply=True),
                         inputs=inputs,
                         outputs=outputs,
                         mode=mode,
                         ds_id=ds_id)