BiT Transfer demo - "Task Failed"?

(V0.12.25, Windows 10 Home 21H1 64-bit, Python 3.8)

(Oh, and now I have a similarity problem with the version of this that was posted in the Show the Community category by mistake and then deleted…)

I’ve just tried to build and run the BiT demo.

Two comments: some extra advice for anyone else doing the same, and a bug.

Advice: don’t cut corners! Do this first:

  • pip install tensorflow_hub

then either copy/paste all the code at once, or, if editing line by line, keep this order (how I ended up doing the import second isn’t really relevant!):

  • import tensorflow_hub as hub, then
  • input_ = hub.KerasLayer("")(input_)

It did not work for me in the other order: I suspect an interaction between parsing the code and caching in the 0.12.25 version of PL (IIRC there is a bug in the editability of custom components such that only the first edit is accepted).

After an initial simple error of mine, attempts to rebuild the model failed unless the edits were made in the order given above, and in an incognito window where there was no longer a cache hit on (I think) the component name (the rebuilt model had a new name). Fortunately, I think the cache problem will be gone in the next release.

If you get it right the first time (pip install, then copy/paste all the code at once), it will probably be fine in this respect.


When I did manage to rebuild and run, there were no errors in the LR panel under Errors, but this cropped up in a dialog:

Traceback (most recent call last):
  File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\site-packages\flask\", line 1513, in full_dispatch_request
    rv = self.dispatch_request()
  File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\site-packages\flask\", line 1499, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\site-packages\flask\", line 83, in view
    return self.dispatch_request(*args, **kwargs)
  File "perceptilabs\endpoints\session\", line 76, in perceptilabs.endpoints.session.base.SessionProxy.dispatch_request
  File "perceptilabs\endpoints\session\", line 154, in perceptilabs.endpoints.session.threaded_executor.ThreadedExecutor.send_request
  File "perceptilabs\endpoints\session\", line 175, in perceptilabs.endpoints.session.threaded_executor.ThreadedExecutor.get_task_info
  File "perceptilabs\endpoints\session\", line 81, in perceptilabs.endpoints.session.threaded_executor.TaskCache.get
  File "perceptilabs\endpoints\session\", line 91, in perceptilabs.endpoints.session.threaded_executor.TaskCache.get
  File "perceptilabs\endpoints\session\", line 125, in perceptilabs.endpoints.session.threaded_executor.ThreadedExecutor.start_task.run_task
  File "perceptilabs\endpoints\session\", line 114, in perceptilabs.endpoints.session.utils.run_kernel
  File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\asyncio\", line 616, in run_until_complete
    return future.result()
  File "perceptilabs\endpoints\session\", line 98, in run
RuntimeError: Task failed!

Any ideas about the bug?

Hey @JulianSMoore,
Hmm, do you have the terminal logs to accompany that? They might be a bit more telling :slight_smile:

I was just going through one… now attached. But also, I quit, restarted, and the model name has gone and the custom component is messed up again (in an incognito tab, possibly the same one).

If I do a full restart and reopen in a new incognito tab, the model is… oh! On open it was initially OK, but after a few seconds it changed to dim = 1000 again.

Covid with BiT hub layer -v log.txt (469.8 KB)

Other components are now being indicated as having been edited (no auto update… reset to enable auto-settings) on Dense_1, Dense_2, and Conv_1.

I didn’t touch them :expressionless:

PS: I edited the custom component after reset, just to add the import… clicked Close, was warned that changes were not saved, chose Save, but apparently the changes weren’t saved. (?)

Thanks for the logs, I’ll take a look!

Just one quick question: shouldn’t the custom component output 1000 (which of course crashes the Conv component, as it’s getting a 1D input instead of 2D)?
I was comparing with Using Big Transfer (BiT) from TensorFlow Hub 👀 and it looks like that part is correct.

As regards the model name vanishing and other components being edited, though, that sounds strange.

Oh, yes, now that I look at Martin’s image I can see that 1000 is in fact correct… and it must be the 224x224x3 that was wrong. But the model had no errors with the latter, as far as I recall; it’s only afterwards that it doesn’t like it at all.
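
To see why the Conv component objects, here is a quick plain-Python sketch of the shape rules (my own illustration, no TensorFlow needed; the function name and batch size are made up, not part of the demo):

```python
# The BiT classification head emits rank-2 logits, (batch, 1000), while a
# 2-D convolution expects a rank-4 image tensor, (batch, height, width,
# channels). The concrete numbers below are illustrative.

def is_valid_conv2d_input(shape):
    """A Conv2D layer needs a rank-4 input: (batch, height, width, channels)."""
    return len(shape) == 4

bit_logits_shape = (32, 1000)          # what the hub.KerasLayer head outputs
image_batch_shape = (32, 224, 224, 3)  # what a Conv component can consume

print(is_valid_conv2d_input(bit_logits_shape))   # False: Conv_1 rejects this
print(is_valid_conv2d_input(image_batch_shape))  # True
```

So feeding the 1000-wide output straight into Conv_1 can never work; the convolution belongs before the hub layer, on the 224x224x3 images.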

I suspect (so don’t waste time on this right now) that the issue won’t exist in the next release, because it looks as though Martin was using that branch to create/share the image.