Inconsistent results: output varies between the iOS SDK and Android

Background removal on Android and iOS does not produce the same result. It gives great results on Android, but on iOS the results are not nearly as good: the iOS output has glitches (white patches) in the person detection, and that happens in 8 out of 10 conversions.

iOS Output

I’m using the latest Fritz version, 5.4.1, with FritzVisionPeopleSegmentationModelAccurate (5.3.0), and I was testing on an iPhone 6s, an iPhone 7, and an iPhone 11.

Original Image

Android Output

Hello, thanks for posting this issue here. There are a few reasons why you might get different results between iOS and Android.

  1. Differences in image pre-processing before images are passed into the model, for example center cropping versus scale-to-fit. On Android, images are automatically resized to the model input size by FritzVisionImage. On iOS, you can set scale and cropping options via the FritzVisionSegmentationModelOptions object. Try different options on iOS and see if you find one that produces the same results as Android.
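To see why the two strategies can change the model's output, here is a minimal NumPy sketch (not the Fritz SDK itself; the function names are illustrative) contrasting scale-to-fit, which squeezes the whole frame into the model input, with center cropping, which discards the edges before resizing. The same source image produces two different model inputs, so the masks can differ too.

```python
import numpy as np

def scale_fit(img: np.ndarray, size: int) -> np.ndarray:
    """Resize the whole image to size x size (nearest-neighbor),
    ignoring the aspect ratio -- edges are kept but may be distorted."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop the largest centered square, then resize it to size x size --
    the aspect ratio is preserved but content near the edges is lost."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return scale_fit(img[top:top + s, left:left + s], size)

# The same image yields different model inputs under each strategy.
img = np.arange(32).reshape(4, 8)
fitted = scale_fit(img, 2)
cropped = center_crop(img, 2)
```

If one platform crops and the other scales, a subject near the frame edge can be partially missing from one model input, which alone is enough to explain diverging masks.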

  2. Different post-processing thresholds. The model outputs a pixel mask where the value of each pixel is the probability that the pixel belongs to a given class (e.g. foreground or background). Post-processing functions convert those probabilities into an alpha mask used to remove the background of an image. You can use different thresholds to control how strict the mask is. For the image you showed above, try lowering the threshold in hopes of capturing more pixels.
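The thresholding step can be sketched in a few lines of NumPy (again an illustration, not the Fritz implementation): each probability at or above the threshold becomes fully opaque foreground, everything below becomes transparent background, so a lower threshold keeps more pixels.

```python
import numpy as np

def alpha_mask(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert per-pixel foreground probabilities (0..1) into an 8-bit
    alpha mask: pixels at/above the threshold become opaque (255),
    pixels below it become transparent (0)."""
    return np.where(probs >= threshold, 255, 0).astype(np.uint8)

# A strict threshold keeps only high-confidence pixels; a looser one
# recovers borderline pixels (e.g. the white patches in the question).
probs = np.array([[0.9, 0.6],
                  [0.4, 0.1]])
strict = alpha_mask(probs, threshold=0.7)  # 1 foreground pixel
loose  = alpha_mask(probs, threshold=0.3)  # 3 foreground pixels
```

The trade-off: lowering the threshold fills holes in the person, but past a point it starts admitting background pixels along the silhouette.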

In general, you should not expect the model to produce perfect masks with the same parameters for every image on every platform. To plan for failures, consider giving users a slider that controls the threshold discussed in item 2 above, or letting them manually fix problem areas of the mask with a brush-like editing interface.

Hope that helps!


Will try that out, thank you for your reply.