V0.12.3 Recommender fail

Having previously encountered the message that only two inputs are currently supported, I edited my dataset down from 10 inputs to 2.

However, the result is clearly off-target. [That said, guidance on manually fixing this model would still be welcome]

CSV also attached. For the curious, I am attempting (in both PL and TF separately) to do some cosmology: photometric redshift estimation from object magnitudes in various bands (u, g, r, i, z). The data here was taken from the ANNZ2 work by Sadeh and relies on the Baryon Oscillation Spectroscopic Survey Data Release 10 (there are more recent data sets, but Sadeh made CSVs available, which saves effort).

This is not novel work, since photo-z estimation is an established technique… but how one arrives at the best transformation ugriz -> redshift is a matter of constant interest.
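To make the intent concrete, here is a minimal sketch of the mapping I'm after, written in plain tf.keras rather than PL (the file name and the column names u, g, r, i, z and redshift are assumptions; the attached CSV may use different headers):

import pandas as pd
import tensorflow as tf

# Load the photometric data; file and column names are illustrative only.
df = pd.read_csv("boss_dr10_0_G_R_RedZ.csv")
X = df[["u", "g", "r", "i", "z"]].to_numpy(dtype="float32")
y = df["redshift"].to_numpy(dtype="float32")

# Simple regression from the 5 magnitudes to a single redshift value.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(5,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, batch_size=20, epochs=10)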

boss_dr10_0 _G_R_RedZ.zip (68.9 KB)

Hi @JulianSMoore,
Thanks for the CSV!

Multi-input is something we still need to fix, which is why it's looking like that.
Looking at your layer list, it seems you do have all the inputs; they're just not visible on the workspace.
Pressing Ctrl+A and then dragging everything down should reveal the other Input component and let you build it out.

There also seems to be an issue with the Merge component, judging from your error. Deleting and replacing the Merge component will solve that and let you keep building the model.
Here is a quick example I put together:


Hope that helps! :slight_smile:

Hi @robertl

Next day… the missing input has appeared (minus its connection to the Merge), with no action from me. But the kernel died for some reason, and when I restarted and reopened the model I saw that node for a fraction of a second before it disappeared. Ctrl+A and drag made it re-appear (still with no connection to the Merge).

I deleted the Merge per your comment and re-added it as Concatenate on Dim -1 [that will need explaining/clarifying for users in future] before the Dense; set batch size to 20, ran it, and… oops! Apart from the unsilenced TensorFlow warnings and the request for a bug report, the issue seems to be with the training component. I then saw that the Dense had only 1 neuron (because the recommender only connected 1 input initially?), so I bumped that to 2 and there is still an error. The -v=3 debug info follows the screenshot.
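For reference, the graph I'm trying to end up with is roughly the following plain tf.keras sketch (not what PL generates; input names are illustrative):

import tensorflow as tf

# Two single-value inputs, concatenated along the last axis, then one Dense output.
in_a = tf.keras.Input(shape=(1,), name="input_a")
in_b = tf.keras.Input(shape=(1,), name="input_b")
merged = tf.keras.layers.Concatenate(axis=-1)([in_a, in_b])  # shape (None, 2)
out = tf.keras.layers.Dense(1)(merged)                       # 1 neuron to match the target

model = tf.keras.Model(inputs=[in_a, in_b], outputs=out)
model.compile(optimizer="adam", loss="mse")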

Comments?

Photo-z 2 -v=3.txt (50.0 KB)

Seems things are moving around a bit for you :confused:

The Dense component in your screenshot is set to 2 (as you said), but it should be 1 to match the Output component.
What happened when trying to run it with 1 neuron?

Hi @robertl. Re the 1-neuron Dense: it didn't work, but I couldn't recall the details, so I just ran it again with Dense = 1 (it hadn't saved, so it is as created) and the message is this (which roughly explains why I tried Dense = 2):

Internal error in asyncio.events:88: Trainer raised an error on validation: InvalidArgumentError()
Traceback (most recent call last):
  File "perceptilabs\coreInterface.py", line 262, in perceptilabs.coreInterface.coreLogic._validate_trainer
  File "perceptilabs\trainer\base.py", line 63, in perceptilabs.trainer.base.Trainer.validate
  File "perceptilabs\trainer\base.py", line 218, in perceptilabs.trainer.base.Trainer._compute_total_loss
  File "c:\users\julian\anaconda3\envs\pl_tf2_main\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "c:\users\julian\anaconda3\envs\pl_tf2_main\lib\site-packages\tensorflow\python\ops\array_ops.py", line 195, in reshape
    result = gen_array_ops.reshape(tensor, shape, name)
  File "c:\users\julian\anaconda3\envs\pl_tf2_main\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8372, in reshape
    tensor, shape, name=name, ctx=_ctx)
  File "c:\users\julian\anaconda3\envs\pl_tf2_main\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8397, in reshape_eager_fallback
    ctx=ctx, name=name)
  File "c:\users\julian\anaconda3\envs\pl_tf2_main\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1 values, but the requested shape has 2 [Op:Reshape]

There were no Problems listed in the modeller.

(After that in the terminal with -v=3 we get the TF autograph warnings)
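For what it's worth, the underlying TensorFlow complaint is just a size mismatch and can be reproduced in isolation; this generic snippet is not the PL code path, only an illustration of the error text:

import tensorflow as tf

try:
    # A tensor holding 1 value cannot be reshaped to a shape requiring 2 values.
    tf.reshape(tf.constant([0.5]), [2])
except tf.errors.InvalidArgumentError as err:
    print(err)  # Input to reshape is a tensor with 1 values, but the requested shape has 2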

PS I clicked "Reset component" on the training component and it seems to do stuff, which I guess a Locked component shouldn't. "Load CSV" appeared in its panel and the preview disappeared. Now that model seems broken.

Logged out and restarted the server… model component previews now appear on all elements except the training component, so, since I didn't restart the browser/clear the cache, I guess there's a cache issue again…?

Logged out and logged in again in Incognito mode, and the Training component is still broken…

Killed the server, restarted, and logged in in an Incognito window; the Training component is still broken. So it's not a cache issue, it's a model issue.

model.json was updated at 11:42 (it’s now 11:44 and I have NOT saved the model recently)

I guess this model is broken now. I’ve attached the json for you.

model.zip (1.9 KB)

Hi @JulianSMoore,
The “Reset component” thing is a bug for sure, added it to our list.
Sounds like it completely kills the model, sorry about that!

I managed to reproduce it, and it seems you get that error when the inputs go straight to the Merge component. As you can see in your screenshot, it doesn't actually merge the inputs for some reason; the output of the Merge has dim 1 instead of 2.
Putting Dense components between the Inputs and the Merge fixes it.
I’ve added this as a bug.
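In plain tf.keras terms the workaround looks roughly like this (a sketch of the shape of the fix, not the code PerceptiLabs emits):

import tensorflow as tf

in_a = tf.keras.Input(shape=(1,), name="input_a")
in_b = tf.keras.Input(shape=(1,), name="input_b")

# Workaround: a Dense layer on each branch before the merge.
branch_a = tf.keras.layers.Dense(1)(in_a)
branch_b = tf.keras.layers.Dense(1)(in_b)

merged = tf.keras.layers.Concatenate(axis=-1)([branch_a, branch_b])  # dim 2, as expected
out = tf.keras.layers.Dense(1)(merged)

model = tf.keras.Model(inputs=[in_a, in_b], outputs=out)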

OK @robertl, thx. Still confused about the merge though. I think the recommender defaults to Addition when Concatenate is required to pass through multiple inputs (AFAICS) - ah! - you mean the length of the dimension is 2 when it should be 1? (I get confused about when Dim means the dimensions plural, when Dim means axis, and when Dim means the extent/length in that dimension/along that axis :slight_smile:)

Yes, it should be 2 :slight_smile: Curious that adding Dense does something to resolve it.
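To make the terminology concrete, here is a tiny generic TF snippet (nothing PerceptiLabs-specific) separating axis, rank, and the size along an axis:

import tensorflow as tf

a = tf.zeros([4, 1])             # batch of 4 samples, 1 feature each; rank 2
b = tf.zeros([4, 1])
c = tf.concat([a, b], axis=-1)   # concatenate along the last axis

print(a.shape)  # (4, 1)
print(c.shape)  # (4, 2): the size along axis -1 is now 2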

Very curious indeed, looking forward to when we get to it, would love to know what causes it :slight_smile: