Request: please destroy the Infinity Stone

It seems (as of V0.11.15) that PL has not yet renounced the power of the Memory Stone, with which even a mere mortal can lay waste to every byte of memory and swap space simply by running the same model twice.

Alas, the CGI black screen was not the most impressive effect to demonstrate the exercise of power, but it was effective. Before I could finish casting a Task Manager spell to send PL to the Suspended Dimension, it was already too late.

However, in the end it was the tiniest of things that saved the day: Sleep. On waking the PC, there was just enough time before Chrome/PL/Python regained full consciousness to bring the training run to a halt.

Just as well… there is always some unsaved work somewhere.

Is there any mitigation for this? I was doing some semi-serious exploration and it would be nice not to have to restart the server every time I want to change a model parameter.
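(For reference, in plain TF2/Keras the usual way to keep memory from piling up when the same model gets rebuilt in one long-lived Python process is something like the sketch below; whether PL can do anything equivalent internally is of course your call, and the model here is just a hypothetical stand-in.)

```python
import gc
import tensorflow as tf

# Optional, GPU side only: stop TF from grabbing all GPU memory up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

def build_model():
    # Hypothetical stand-in for "the same model" being rebuilt each run.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

for run in range(2):
    # Drop the previous graph/session state before rebuilding, then force
    # garbage collection, so run 2 doesn't stack on top of run 1's allocations.
    tf.keras.backend.clear_session()
    gc.collect()
    model = build_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(...) would go here
```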

And, I thought you had renounced the stone ages ago…

[Config: Windows 10 Home 64-bit, build 20H2 fully updated, Google Chrome, CUDA for TF2.4 in environment. Desktop background: nice savannah image with zebras - hmmm… are the stripes the problem??]

Hey @JulianSMoore,
First of all, love the writing :smiley:

0.11.15 does unfortunately still have this issue; the new Data Wizard build should have fixed it though :slight_smile:
If this still persists in your build then we will take another look at it, otherwise the rest of the community will be granted a Memory Stone-free life in just a week or so.

OK, I’ll wait and see… not worth trying to fix that just for me! (though the level of doubt in “should have fixed this” is a bit disconcerting :wink: )

Of course the only problem with the Data Wizard is going to be that I can’t use different “training” components for my little MNIST encoder/decoder…

unless you can fix up the notebook export - now that would enable a lot of experimentation!

By the looks of the MNIST encoder/decoder you sent in another thread, I can produce a CSV file which lets you do the same thing with the Data Wizard :slight_smile:
Which MNIST dataset are you using?

The full 70k image set that you shared as mnist.npy (the normalised version). Thx :slight_smile:
Keen to see how that works: 70k 28x28 images is ~55 million pixel values, so it would surely take a ton of disk space as a CSV.
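If it ends up as one flattened 784-value row per image, I picture something like the sketch below (the mnist.npy shape, and whether labels are bundled in, are guesses on my part), which is where the disk-space worry comes from:

```python
import numpy as np

# Guessed layout: mnist.npy holding normalised images of shape (70000, 28, 28);
# whether labels are stored separately is unknown to me.
images = np.load("mnist.npy")
flat = images.reshape(len(images), -1)  # 784 values per row

# One flattened image per CSV row: at roughly 7 characters per value as text,
# that lands in the hundreds of MB, far more than the binary .npy.
np.savetxt("mnist_flat.csv", flat, delimiter=",", fmt="%.4f")
```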

NB I read the other day that there is an official 60k/10k training/test split, the 10k test images coming from a disjoint set of writers… might be nice to know whether that file was organised the same way and hence whether it could be split like that.
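If the file does turn out to be stored in the canonical order (60k training images followed by the 10k test images, which is exactly the assumption I'm asking about), splitting it would just be a slice:

```python
import numpy as np

data = np.load("mnist.npy")  # assumed shape (70000, 28, 28)

# Only valid if the array really is in the official order:
# 60,000 training images first, then the 10,000 test images.
train, test = data[:60000], data[60000:]
print(train.shape, test.shape)  # (60000, 28, 28) (10000, 28, 28)
```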