Validation & Test data sets

I was wondering about the Training, Validation, Test datasets the other day.

I can see that the Validation data set is evaluated at the end of each epoch.

I find the presentation hard to understand at times. For example, it doesn’t really make sense to show the “evolution” of loss, accuracy, etc. on the validation data set, when only the statistics after all elements of the validation set have been processed really matter; I can see that it is easy to present it this way, though. Not showing validation “progress” as a graph would avoid that confusion, and it would be nice just to have a summary text table (with a copy-to-clipboard button).
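To make the table idea concrete, here is a minimal Keras sketch (nothing to do with PerceptiLabs’ internals, and the toy data is made up) showing that validation metrics only exist as one aggregate number per epoch, so a plain text table is really all that is needed:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data, just to make the sketch runnable.
x_train, y_train = np.random.rand(800, 10), np.random.randint(0, 2, 800)
x_val, y_val = np.random.rand(200, 10), np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=5, verbose=0)

# Summary table: one row per epoch, aggregated over the whole validation set.
print(f"{'epoch':>5} {'val_loss':>10} {'val_acc':>10}")
for i, (vl, va) in enumerate(zip(history.history["val_loss"],
                                 history.history["val_accuracy"]), start=1):
    print(f"{i:>5} {vl:>10.4f} {va:>10.4f}")
```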

Moving on… if we accept that Validation data sets are used to assess and “tune” models, where in PerceptiLabs can that tuning be specified - or where will it be? I don’t think I have seen anywhere that Validation results can actually have an effect on training.
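To illustrate the kind of “effect” I mean, here is a sketch using plain Keras early stopping, reusing the model and toy data from the sketch above. This is not a claim about how PerceptiLabs does or should implement it; it is just an example of validation results feeding back into training:

```python
import tensorflow as tf

# Stop training when the aggregate validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",        # watch the per-epoch validation loss
    patience=3,                # tolerate 3 epochs without improvement
    restore_best_weights=True  # roll back to the best validation epoch
)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=50,
          callbacks=[early_stop],
          verbose=0)
```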

Finally, the Test data set should be held back to allow objective comparison between models. Currently it is used to assess a single model after all epochs of training.

Is there some way in which PerceptiLabs can truly split off a Test set, so that multiple models could be compared on data that none of them have seen before?

As I understand it, the Training/Validation/Test split is done afresh on each run, so the contents of each set will differ between runs and between different models.
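What I would like to be able to do is roughly this (a scikit-learn/pandas sketch; the file names are made up): carve off the Test set once, with a fixed seed, save it, and never let any model see it during training or tuning.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("full_dataset.csv")          # hypothetical source file

# A fixed random_state means the same rows land in the Test set every time,
# so every model is compared on identical unseen data.
train_val, test = train_test_split(data, test_size=0.15, random_state=42)
train, val = train_test_split(train_val, test_size=0.15, random_state=42)

train.to_csv("train.csv", index=False)
val.to_csv("val.csv", index=False)
test.to_csv("test.csv", index=False)            # held back for final comparison only
```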

So, to conclude: what is the recommended approach to managing datasets in PerceptiLabs, and how might that evolve with the product?