It seems (as of V0.11.15) that PL has not yet renounced the power of the Memory Stone, with which even a mere mortal can lay waste to every byte of memory and swap space simply by running the same model twice.
Alas, the CGI black screen was not the most impressive effect for demonstrating the exercise of such power, but it was effective. Before I could finish casting a Task Manager spell to send PL to the Suspended Dimension, it was already too late.
However, in the end it was the tiniest of things that saved the day: Sleep. On waking the PC, I had just enough time to bring the training run to a halt before Chrome/PL/Python regained full consciousness.
Just as well… there is always some unsaved work somewhere.
Is there any mitigation for this? I was doing some semi-serious exploration, and it would be nice not to have to restart the server every time I want to change a model parameter. (A stopgap sketch follows at the end of this post.)
And I thought you had renounced the Stone ages ago…
[Config: Windows 10 Home 64-bit, build 20H2, fully updated; Google Chrome; CUDA for TF2.4 in the environment. Desktop background: a nice savannah image with zebras - hmmm… are the stripes the problem??]
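
In the meantime, here is a minimal watchdog sketch that automates the trip to the Suspended Dimension before the machine locks up. It is a stopgap, not a fix, and it rests on my own assumptions: it needs `psutil` installed (`pip install psutil`), and `TARGET_NAMES` and `RAM_LIMIT_PERCENT` are guesses - check Task Manager for the actual name of the PL server process on your machine.

```python
import os
import time

import psutil

# Assumptions (not PerceptiLabs-specific): the PL server shows up in Task
# Manager as a python.exe process -- substitute the real name if it differs.
TARGET_NAMES = {"python.exe"}   # hypothetical process name(s) to suspend
RAM_LIMIT_PERCENT = 90          # act before swap thrashing sets in


def watchdog(poll_seconds: float = 1.0) -> None:
    """Poll system RAM and suspend matching processes once past the limit."""
    while True:
        if psutil.virtual_memory().percent >= RAM_LIMIT_PERCENT:
            for proc in psutil.process_iter(["name"]):
                try:
                    # Skip the watchdog itself, since it is also python.exe.
                    if proc.pid != os.getpid() and proc.info["name"] in TARGET_NAMES:
                        proc.suspend()  # same effect as Task Manager's Suspend
                        print(f"Suspended PID {proc.pid} ({proc.info['name']})")
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    pass
            return
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watchdog()
```

Run it in a separate console before starting a training session; when RAM crosses the limit it should leave enough headroom to save work and then resume or kill the frozen run by hand.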