Doesn't have transform properties like the default style templates

Hi

Wondering why the SnapML template, which seems identical to the Colab template provided by the Lens Studio documentation, doesn't seem to include the transform property, making it much less usable for a Lens than the default template. I feel like this is a missing feature that most beginners might not notice, but it is needed. Also, could you let users pay more for, or have default access to, 512-1024 training sizes? 320 is inadequate for most use cases.

Hello. Can you be more specific about the transform property you are looking for?

You can change the output size of the style model inference in Lens Studio by changing the model’s input size in the ML component. The output will scale accordingly. Note that the default is 320 pixels because that is what the model is trained at. We’ve had good results moving as high as 1024, but the model will run more slowly, especially on older devices.
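To give a rough sense of why larger inference sizes slow things down, here is a back-of-envelope sketch (my own math, not Fritz or Lens Studio code): for a fully convolutional style network, the work grows roughly in proportion to the number of input pixels.

```python
# Back-of-envelope estimate, not actual Fritz/Lens Studio internals:
# cost is assumed proportional to width * height of a square input.

def relative_cost(size_px, baseline_px=320):
    """Approximate slowdown versus the 320px default."""
    return (size_px ** 2) / (baseline_px ** 2)

print(relative_cost(512))   # ~2.6x the work of 320px
print(relative_cost(1024))  # ~10.2x
```

So moving from 320 to 1024 is roughly a 10x increase in raw work, which is why older devices struggle at the high end.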

Changing the input size for training is a separate parameter that does not impact output sizes during inference. Very few style transfer training systems train on larger images. If you have a use case that requires training on larger images, let us know.
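The reason training size and inference size are independent is that a convolution's weights do not depend on the image size. Here is an illustrative pure-Python sketch (not the actual training code) showing one fixed kernel applied to inputs of different sizes:

```python
# Illustrative sketch: the same convolution kernel runs on any input
# size, which is why a model trained at 320px can infer at other
# resolutions. This is a naive 'valid' 2D convolution on a
# list-of-lists grayscale image.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

blur = [[1 / 9] * 3 for _ in range(3)]    # one fixed 3x3 kernel
small = [[1.0] * 8 for _ in range(8)]     # "trained" size
large = [[1.0] * 20 for _ in range(20)]   # larger inference size

# The same kernel works on both; only the output size changes.
print(len(conv2d_valid(small, blur)))   # 6
print(len(conv2d_valid(large, blur)))   # 18
```

Only the output dimensions change with the input; the learned weights stay the same.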

I've noticed that changing the inference size on one dimension in Lens Studio doesn't affect the computation time (see Inference Sizes in Style Transfer), but it does affect the quality quite a bit (noticeably nicer quality). I also find it strange that I can't lower the inference values on a Fritz-trained model in Lens Studio, which is why I asked my question. I think there's great value in training at a size of at least 512, because using different inference sizes has given me good results.

I think I understand your question, but let me double check:

  1. On the "transform" property, do you mean the "transform" option on the ML component? If so, the difference between the Fritz template and the Lens Studio template is that models trained with Fritz already have pre- and post-processing included in the model architecture itself. This means that it is not necessary to apply any inverse data transform to the output, and you can leave it on "none" for Fritz models. It is also not necessary to apply any position, rotation, or scale transforms in the pre-processing.

This is an image of the ML component from the official Lens Studio Style Transfer template. Note the Transform option would be set to None in our template.
Screen Shot 2020-12-16 at 1.19.30 PM

  2. Changing the resolution will always impact computation time, although at lower resolutions the calculation is so fast that the difference may be within measurement error. If you increase one dimension to, say, 2048px, you will notice a slowdown.

  3. The inference size for Fritz-trained models can be changed to any value, higher or lower, within Lens Studio. I believe Lens Studio itself has a global maximum on model input size, but within that, you can change a Fritz model's input size to 100x100 or 1080x1080. One thing to note here is that you should be changing the model's input size, not the output size. The output size is derived from the input size.

Screen Shot 2020-12-16 at 1.30.00 PM Screen Shot 2020-12-16 at 1.30.16 PM

  4. We’ve chosen a default training resolution of 320x320 to balance training time and flexibility. I’ll definitely add training size customization to our feature request backlog.
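The pre- and post-processing point in (1) can be sketched like this. The names here are hypothetical stand-ins, not the Fritz model's real internals: the idea is simply that the exported graph wraps the style core with its own normalization, so callers feed raw pixels and get raw pixels back, and the Transform option can stay on "None".

```python
# Hedged sketch of "pre/post-processing baked into the model".
# All names are hypothetical, not actual Fritz export internals.

def preprocess(pixels):
    """Map 0..255 pixel values into the -1..1 range the core expects."""
    return [p / 127.5 - 1.0 for p in pixels]

def style_core(x):
    """Stand-in for the learned style network (placeholder op)."""
    return [-v for v in x]

def postprocess(x):
    """Map -1..1 activations back to displayable 0..255 values."""
    return [(v + 1.0) * 127.5 for v in x]

def exported_model(pixels):
    """What a baked-in export behaves like: pre/post live inside the
    graph, so no inverse transform is needed outside the model."""
    return postprocess(style_core(preprocess(pixels)))

# Output is already in the displayable 0..255 range.
print(exported_model([0, 127.5, 255]))
```

Without the baked-in post-processing step, the caller (here, Lens Studio's Transform option) would have to apply the inverse mapping itself.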

Finally, do you mind sharing some comparison results between the model you trained with Fritz and the one you trained with the official SnapML Colab notebook?

Thanks for all of the feedback!