Having recently noted the use of distributions for initialising dense networks, I have now got to convolutional layers.

For the former, initialisation is via
initial = tf.random.truncated_normal((n_inputs, self._n_neurons), stddev=0.1)
and for the latter it is
W = tf.compat.v1.get_variable('W', shape=shape, initializer=tf.initializers.glorot_uniform())

glorot_uniform accepts a seed parameter, and so does truncated_normal,
but they don’t seem to be used.
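For what it's worth, both calls can take the seed explicitly. A minimal sketch, assuming TF 2.x; the shapes here are hypothetical, and I've called the Glorot initializer directly rather than through get_variable just to keep it self-contained:

```python
import tensorflow as tf

# Hypothetical shapes, for illustration only.
n_inputs, n_neurons = 64, 32

# Dense-layer weights: truncated_normal takes the seed directly.
initial = tf.random.truncated_normal(
    (n_inputs, n_neurons), stddev=0.1, seed=42)

# Conv weights: glorot_uniform takes the seed at construction time,
# so the same seed should reproduce the same initial values.
init = tf.initializers.glorot_uniform(seed=42)
W = init(shape=(3, 3, 1, 16))
```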

How will reproducibility be ensured?

Unfortunately, TensorFlow is not very reproducible by default. Getting a model to run exactly the same way twice is not easy, and it is one of TensorFlow's weaknesses (this is the point where I should have a reference, but I don't have one to hand).
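The commonly recommended recipe is to seed every source of randomness at once; a sketch, assuming TF 2.x, and with no guarantee it covers everything (GPU ops in particular can stay nondeterministic):

```python
import os
import random

import numpy as np
import tensorflow as tf

# Seed every layer of randomness, not just TensorFlow's.
os.environ['PYTHONHASHSEED'] = '0'  # hash-based ordering in Python
random.seed(0)                      # Python's own RNG
np.random.seed(0)                   # NumPy (e.g. data shuffling)
tf.random.set_seed(0)               # TensorFlow's global seed

# From TF 2.8 there is also an opt-in switch that forces deterministic
# kernels (at some performance cost):
# tf.config.experimental.enable_op_determinism()
```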

But enough about TensorFlow. For PerceptiLabs, seed support is something we need to start adding in a few places; initializers and the data split are two of them, and we will be looking to add those soon.

I had a previous exchange with someone else about something you do to ensure reproducible random numbers, so I thought you had that covered (and in hindsight the seed default is zero, so one would hope the same results obtain).

If you find a reference for reproducibility, please share it (but don't spend time looking… if it becomes important I can look myself… just asking in case you recognise it from somewhere).