 # Code question: y, y_before

In `__call__` this is seen a lot…

``````python
y = tf.matmul(flat_node, W) + b
y_before = y
y = tf.nn.relu(y)
y_before = tf.nn.relu(y_before)
``````

and these then appear in the output…

``````python
self._outputs = {
'output': y,
'y_before': y_before,
'initial': tf.expand_dims(initial, axis=0),
'W': tf.expand_dims(W, axis=0),
'b': tf.expand_dims(b, axis=0),
'flat_node': tf.expand_dims(flat_node, axis=0),
}
return self._outputs
``````

Now, since `y` and `y_before` end up being exactly the same, this looks rather inefficient, so I assume that something very clever is going on here…

What exactly does this code achieve?

(And a second question: how does the output dictionary get used as the input of the next layer?)

Update
Convolution doesn’t follow the same pattern; there, `y` and `y_before` are just

``````python
y = tf.add(tf.nn.conv2d(x, W, strides=[1, self._stride, self._stride, 1], padding=self._padding), b)
y_before = y
y = tf.nn.tanh(y)
``````

Why the difference?

---

Hey @JulianSMoore,

If you enable batch normalization you should see a difference. The reason is that we want to show a preview *without* batch normalization: our previews are generated from just a single sample, and batch normalization applied to a single sample makes everything uniform, which is misleading and not very interesting.
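A quick numpy illustration of why a single-sample preview collapses under batch normalization (this is a minimal stand-in, not the project's actual code; `batch_norm` here just normalizes over the batch axis):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Batch norm computes statistics over the batch axis (axis 0).
    mean = x.mean(axis=0, keepdims=True)   # per-feature mean over the batch
    var = x.var(axis=0, keepdims=True)     # per-feature variance over the batch
    return (x - mean) / np.sqrt(var + eps)

single = np.array([[1.0, -2.0, 3.0]])      # a "batch" of one sample
# Each feature is normalized against itself: x - mean == 0 everywhere,
# so the normalized output is uniformly zero, regardless of the input.
print(batch_norm(single))
```

With a batch of one, `mean` equals the sample itself and `var` is zero, so every feature comes out as 0 (before any scale/shift), which is why the preview would look uniform.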
The code should be generated without `y_before` unless batch_norm is enabled, though; that’s a bit sloppy on our side.

---

Hi @robertl,

`y_before` should be there when batch_norm is on, and not otherwise; OK. I suspected something was amiss just because the pattern was inconsistent for no obvious reason.

---

Yeah, it looks like `y_before` was not properly hidden in the other cases, but at least it should do no harm.
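Putting the thread together, the generated dense-layer code presumably looks something like the sketch below when batch normalization is enabled (hypothetical reconstruction in numpy, not the generator's actual output; `batch_norm` and `relu` are stand-ins for the TF ops):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def batch_norm(x, eps=1e-5):
    # Minimal batch norm over the batch axis (no learned scale/shift).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
flat_node = rng.normal(size=(4, 3))   # batch of 4 samples, 3 features
W = rng.normal(size=(3, 2))
b = np.zeros(2)

y = flat_node @ W + b
y_before = y              # preview copy taken *before* normalization
y = batch_norm(y)         # this step is only emitted when batch_norm is on
y = relu(y)
y_before = relu(y_before)

print(np.allclose(y, y_before))  # the two paths now differ
```

Without the `batch_norm(y)` line, the two branches compute identical values, which is exactly the redundancy the original question noticed.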