How do I use GPU-enabled models?

To run a model on the GPU, you must explicitly enable it in the predictor options.

// Enable GPU execution in the predictor options.
FritzVisionSegmentationPredictorOptions options = new FritzVisionSegmentationPredictorOptions();
options.useGPU = true;

// Load the pet segmentation model and create the predictor with those options.
SegmentationOnDeviceModel onDeviceModel = FritzVisionModels.getPetSegmentationOnDeviceModel(ModelVariant.FAST);
FritzVisionSegmentationPredictor predictor = FritzVision.ImageSegmentation.getPredictor(onDeviceModel, options);

In order for the model to run, the following conditions must be met:

  • The predictor must be initialized on the same thread that runs inference. For example, the calls to FritzVision.ImageSegmentation.getPredictor(...) and predictor.predict(visionImage) must happen on the same thread.
  • The OpenGL context must be initialized before a GPU-enabled model is loaded into a predictor.
  • You may not use the GPU for “small” model variants. Small variants already use weight quantization for fast, CPU-only calculations.
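One way to satisfy the same-thread requirement is to route both initialization and inference through a single-threaded executor. The sketch below illustrates the pattern only; the `getPredictor` and `predict` methods here are hypothetical placeholders standing in for the Fritz calls (`FritzVision.ImageSegmentation.getPredictor(...)` and `predictor.predict(visionImage)`), so the example can run without the SDK.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SameThreadPredictor {

    // Placeholder standing in for FritzVision.ImageSegmentation.getPredictor(...).
    // Records the thread it was created on so the pattern is visible.
    static String getPredictor() {
        return "predictor@" + Thread.currentThread().getName();
    }

    // Placeholder standing in for predictor.predict(visionImage).
    static String predict(String predictor) {
        return predictor + " ran on " + Thread.currentThread().getName();
    }

    public static void main(String[] args) throws Exception {
        // A single-threaded executor guarantees that initialization and
        // every subsequent prediction run on the same background thread.
        ExecutorService inferenceThread = Executors.newSingleThreadExecutor();
        Future<String> result = inferenceThread.submit(() -> {
            String predictor = getPredictor(); // initialize here...
            return predict(predictor);         // ...and predict on the same thread
        });
        System.out.println(result.get());
        inferenceThread.shutdown();
    }
}
```

On Android, a `HandlerThread` with a `Handler` posting both the initialization and the prediction work achieves the same effect.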

Running models on the GPU is currently not supported with over-the-air (OTA) model updates.