I got a great question from two YouTube viewers, “Dakota merrival” and “VCC1316” (I hope you see this, as I can’t seem to find your emails).
They asked whether normalizing target values is a good idea, and how PerceptiLabs handles normalization - specifically, whether we make sure the normalization statistics are computed only from the training data but still applied to the validation and test data.
For the first question, I found two good links here:
To summarize: scaling (not normalizing) the target values can be a good idea when the values are so large that you risk overflowing the gradients, but it won’t have much effect beyond that.
Normalizing the target values, on the other hand, can be harmful because it changes your target data’s distribution.
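To make the distinction concrete, here is a minimal sketch of the kind of scaling meant above: dividing the targets by a fixed constant so the values (and therefore the gradients) stay in a manageable range, while the shape of the distribution is untouched. The values and the constant are purely illustrative.

```python
import numpy as np

# Hypothetical regression targets with large magnitudes (e.g. house prices).
y_train = np.array([250_000.0, 480_000.0, 1_200_000.0])

# Scaling: divide by a fixed constant so the network trains on small values.
# The distribution's shape is preserved -- only the units change.
scale = 1e6
y_scaled = y_train / scale

# Predictions are mapped back to the original units with the same constant.
y_pred_scaled = y_scaled  # stand-in for the model's output
y_pred = y_pred_scaled * scale

assert np.allclose(y_pred, y_train)
```

Since the transform is a single multiplication, it is trivially invertible at prediction time, which is part of why plain scaling is the safer option for targets.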
For the second question, I checked with our devs, and we do indeed compute it only from the training data. We also automatically include it in the exported model’s pipeline, so it behaves the same when placed in production.
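The pattern described above looks roughly like this sketch (the variable names are illustrative, not PerceptiLabs’ actual internals): the mean and standard deviation come from the training targets only, and those same train-derived statistics are reused for the validation, test, and production data.

```python
import numpy as np

# Normalization statistics are computed from the TRAINING data only...
y_train = np.array([10.0, 20.0, 30.0])
mean, std = y_train.mean(), y_train.std()

y_train_norm = (y_train - mean) / std

# ...and the same (mean, std) pair is applied to validation/test data,
# and shipped with the exported model for use in production.
y_val = np.array([15.0, 25.0])
y_val_norm = (y_val - mean) / std
```

This avoids leaking information from the validation and test sets into training, and guarantees the model sees identically preprocessed inputs in production.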
I hope that answers your questions, and feel free to post any follow-ups!