I’ve set up the perceptilabs library in a conda environment and tried a few models. Everything appears to run fine, but my GPU doesn’t seem to be utilized during training/validation, even though TensorFlow 1.15.0 can train on the GPU without issue in the same environment — so the CUDA-related libraries also appear to be loading correctly. My system-wide CUDA version is 11 by default, though the conda environment itself is using 9.0, as that seems to be a better fit for TF 1.15.0.
I do see the following warning, in case it’s helpful:
Let me know if any other information may be useful in resolving this, or if perhaps I’m misunderstanding something.
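In case it helps narrow things down, here is the kind of quick sanity check I can run in the environment (a minimal sketch assuming TF 1.15.x; `device_lib` is an internal TensorFlow module but is the usual way to enumerate devices on 1.x):

```python
# Sanity check: list the devices TensorFlow itself can see in this environment.
import tensorflow as tf
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
for d in devices:
    # A working GPU setup should show a '/device:GPU:0' entry alongside the CPU.
    print(d.name, d.device_type)

# Simple boolean check (available on TF 1.15; deprecated in 2.x).
print("GPU available:", tf.test.is_gpu_available())
```

In my environment this does report the GPU, which is why I suspect the issue is on the perceptilabs side rather than the CUDA setup.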