Recurrent models taking longer and longer

I’m not sure if this is because I used return_sequences=True or if there is another reason; however, with each epoch my LSTM network is getting slower and slower.
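
For reference, this is roughly how I’d time each epoch outside of PerceptiLabs, in plain Keras; the tiny model and data here are placeholders, not my actual network:

```python
import time
import numpy as np
import tensorflow as tf

# Log how long each epoch takes, so the slowdown shows up as numbers
class EpochTimer(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        print(f"epoch {epoch}: {time.time() - self._start:.2f}s")

# Placeholder data and model: 24 timesteps, 1 feature, one value out
x = np.random.rand(1000, 24, 1).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=10, callbacks=[EpochTimer()], verbose=0)
```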

This is odd because when I look at the hardware monitor, it shows that I am not using all of my hardware resources. Is this the way it is supposed to be?

Here is my training run; notice that the CPU and GPU aren’t being fully utilised:

Here is my model:

Hey @JWalker,
Happy new year!

This could be related to a separate issue where PerceptiLabs slows down over time if there is a lot of data being processed.
We have ongoing work to improve this, but it still seems to happen for larger datasets or complex models.
Hopefully we can find a fix for it soon; I’ll keep you updated.

Thanks Robert and happy new year to you too.

I think you might be right that it is PL-related, because a 10-layer dense NN also does something similar.

Hi James

Something I discovered yesterday that might make for a useful rule of thumb, and hopefully speed things up: in extremely loose terms (the source is highly technical https://proceedings.mlr.press/v134/malach21a/malach21a.pdf and I don’t pretend to understand any more than the general thrust!), if a 3-layer model doesn’t get you “close” to what you want, adding more layers won’t help.

Do you get much of a speed improvement with fewer layers? If so, maybe tweak that (architecture, hyperparams) before refinement?

Thanks. A one-layer model is much faster but converges very slowly, and it also slows down over time, so I’m not sure if it is a false economy. The two-layer model is definitely better, but again the speed reduction hurts. What I haven’t done is simply throw a lot more input data at it; I have restricted it to 24 input points to predict one value while I try to understand how to improve the model.
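
To be concrete about what I mean by 24 input points predicting one, the windowing is roughly this (a NumPy sketch; the series itself is just a stand-in):

```python
import numpy as np

def make_windows(series, window=24):
    """Slice a 1-D series into (window -> next value) training pairs."""
    x, y = [], []
    for i in range(len(series) - window):
        x.append(series[i:i + window])   # 24 consecutive points in
        y.append(series[i + window])     # the single point after them out
    # Shape (samples, timesteps, features) for an RNN with one feature
    return np.array(x)[..., np.newaxis], np.array(y)

series = np.sin(np.linspace(0, 50, 2000))   # stand-in time series
x_train, y_train = make_windows(series)
print(x_train.shape, y_train.shape)          # (1976, 24, 1) (1976,)
```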

The three-layer model is a good tip. PL wouldn’t accept more than 2 layers when my time series had more than one feature per time point, so I only got as far as a two-layer model. At the moment, my last model gave me a NaN and I’m testing to see whether that was because I overclocked my GPU too much or because of the model.

Edit: it isn’t the overclock.
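
In case it helps anyone debugging the same thing, this is roughly the guard I’d use in plain Keras to separate a model problem from a hardware one; the layer sizes, data, and clipnorm value are placeholders rather than my actual settings:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 24 timesteps, 1 feature, one value out
x = np.random.rand(500, 24, 1).astype("float32")
y = np.random.rand(500, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 1)),
    tf.keras.layers.Dense(1),
])

# Clipping gradients is a common guard against an LSTM blowing up to NaN;
# the clipnorm value here is illustrative, not tuned
model.compile(optimizer=tf.keras.optimizers.Adam(clipnorm=1.0), loss="mse")

# TerminateOnNaN stops the run as soon as the loss goes NaN,
# which makes it easier to pin down where things went wrong
model.fit(x, y, epochs=5,
          callbacks=[tf.keras.callbacks.TerminateOnNaN()], verbose=0)
```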

Ah yes, I responded to the 10-layer dense remark, but you are really focussed on the RNN; very different cases in principle. (Plus, I have never got my head around stacked RNNs - if you have any useful info on that, I’d be interested to hear about it.)

Ok, I see now, thanks.

I am experimenting to try and find what works. RNNs ought to work in principle, but I think in the end a custom module might have to be written. I’m a long way from that at the moment in terms of skill level.
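
For what it’s worth, the stacking pattern I’ve been trying to get working is roughly this (a Keras sketch with placeholder sizes): every LSTM except the last returns the full sequence, so the next layer still sees a sequence rather than a single vector.

```python
import tensorflow as tf

# A stacked RNN: all but the last LSTM return the full sequence,
# so the following LSTM still receives (timesteps, features) input
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(24, 1)),
    tf.keras.layers.LSTM(32, return_sequences=False),  # last layer: one vector out
    tf.keras.layers.Dense(1),                           # single-value prediction
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```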