User @Remliv asked about training with images of unequal sizes.
There are straightforward solutions to this using e.g. Pillow (see the simple example here).
However, not all users are able to code in Python, and it seems likely that many would benefit from the following pre-processing capabilities:
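For users who can run Python, a minimal Pillow sketch of the scaling side of this might look as follows (the function name and target size are illustrative, not part of any PerceptiLabs API):

```python
from PIL import Image

def scale_to(im, size=(224, 224)):
    """Resize an image to a fixed size using Lanczos resampling,
    which avoids the blocky artifacts of nearest-neighbour scaling."""
    return im.resize(size, Image.LANCZOS)
```

This could be applied to every image in a folder before loading them into the data wizard.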
1/ Automatically pad images to match the maximum dimensions of any image in the set
2/ Automatically scale images with a smooth interpolation method (so as not to create confusing blocks/edges for the CNN)
These two simplest capabilities could each be controlled by a checkbox at the bottom of the data wizard column.
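Capability 1/ can be sketched in a few lines of Pillow; this is a hedged illustration of the intended behaviour (centred padding, black fill, RGB images assumed), not a proposed implementation for PerceptiLabs itself:

```python
from PIL import Image

def pad_to_max(images, fill=(0, 0, 0)):
    """Pad every image (centred) up to the maximum width and height
    found across the batch. Assumes RGB-mode images; 'fill' is the
    padding colour (black by default)."""
    w = max(im.width for im in images)
    h = max(im.height for im in images)
    padded = []
    for im in images:
        canvas = Image.new(im.mode, (w, h), fill)
        # Paste the original image centred on the padded canvas.
        canvas.paste(im, ((w - im.width) // 2, (h - im.height) // 2))
        padded.append(canvas)
    return padded
```

After this step every image in the batch shares the same dimensions, so it can be fed to a CNN without further resizing.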
This would allow PerceptiLabs to be used with images collected from diverse sources by users who are otherwise unable to pre-process them; this sounds like the sort of use case/user profile PL has identified elsewhere for support.
(Many other transforms and options are possible: crop with a fixed anchor (TL, TC, TR, ML, MC, MR, BL, BC, BR); pad with similar anchor/fill preferences; auto-convert to greyscale… but more input from users is desirable before adding anything that needs a significant UI extension.)