V0.12.16 Model does not run

Update: confirmed on the latest PL version; the issue report stands.

Model JSON and -v=3 output from the run attached (the dataset was provided in another recent thread, I think). Some issue with tensor reshaping? Input welcome!

photoZ (minimalTraindDf).zip (12.9 KB)

The model looks like this: a hand-constructed multi-input regression without the categorical inputs (omitted because of tensor rank issues between the one-hot encoded categorical inputs and the merge inputs).

Additional issue: clicking on the Target or an input component (not the merge or dense components) lists two problems, both roughly "TypeError: run() takes exactly 2 positional arguments (1 given)". These problems disappear if I return to the model hub and click on the model there, forcing it to re-parse/process.

Hi @JulianSMoore,
Thanks for reporting this!

I will look into the rank issues you are having in your other thread and try to make sense of them; most likely I'll just load the data into PL and work out how to make it run from there.

PL models also can't run right now if there are individual components loose on the workspace; everything needs to be connected into a single acyclic graph for the model to run.

Hi @robertl. Oh yeah, you mentioned the single acyclic graph requirement before and I forgot. I didn't want to delete the loose components because I can't add them back (because of the data wizard). Of course, if I can save-as this model, then I can come back to it and delete those two components in the interim.

Suggestion though: isolated sub-graphs shouldn't matter, and it would be a lot easier for model builders if they could be ignored. The model to be run should be the acyclic graph whose components are connected to the target; if they're not connected, they should be ignored. Would that work?
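To illustrate the suggestion, here is a minimal sketch of how a runner could keep only the sub-graph connected to the target and silently ignore loose components. The node and edge names are hypothetical, not taken from PL's internals:

```python
from collections import deque

def runnable_subgraph(edges, target):
    """Return the set of nodes in the same connected component as `target`.

    `edges` is a list of (src, dst) pairs. Traversal is undirected, so the
    whole component around the target is kept and any isolated sub-graphs
    are simply left out.
    """
    # Build an undirected adjacency map from the edge list.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, set()).add(dst)
        adj.setdefault(dst, set()).add(src)

    seen = {target}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Hypothetical workspace: two loose categorical inputs feed a merge that
# never reaches the target, so they are excluded from the runnable graph.
edges = [
    ("numeric_input", "dense_1"),
    ("dense_1", "target"),
    ("categorical_1", "merge_1"),
    ("categorical_2", "merge_1"),
]
print(sorted(runnable_subgraph(edges, "target")))
# ['dense_1', 'numeric_input', 'target']
```

Running only this component would let loose inputs sit on the workspace without blocking the model.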

That would work and would be a nice stability upgrade :slight_smile:


Alas, those two unconnected categorical inputs can't be removed in this version of PL because they are locked (they're inputs), and I can't link them in either. I don't want to rebuild the whole thing again; there are too many obstacles to this model right now, so I need to wait until the merge is fixed, and maybe until one-hot encoding comes out with the right dimensions to concatenate.

Maybe 0.12.18?? :slight_smile: Not too long to wait!

Haha, we have a few other things getting pushed into 0.12.18 so that might be a bit optimistic, but not too long after that I hope :slight_smile:

and maybe one-hot encoding coming out with the right dimensions to concatenate.

What exactly did you mean by this one?

Well, not long after 0.12.18 then :smiley: So much is happening, has happened, and will happen that a little longer is not an issue - especially since I've already done this in pure Python and am primarily using it to exercise PL and explore the metrics, etc.

and maybe one-hot encoding coming out with the right dimensions to concatenate.

What exactly did you mean by this one?

Sorry, I thought that had been communicated and "received" earlier: connecting the categorical inputs to a merge input for concatenation causes a tensor rank mismatch error, I think, so I was guessing that the categorical output was n x 1 when the merge expected 1 x n (or vice versa). If you do attempt to reconstruct the model from the images and the CSV provided, it will probably become clearer :wink:
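For what it's worth, here is a minimal NumPy sketch of the kind of mismatch I suspect; the shapes are illustrative guesses, not taken from the actual model:

```python
import numpy as np

# A one-hot encoded categorical input as a bare vector: shape (3,), rank 1.
categorical = np.array([0.0, 1.0, 0.0])

# A numeric branch that already carries an explicit leading axis: rank 2.
numeric = np.array([[0.25, 0.75]])  # shape (1, 2)

# Concatenating a rank-1 tensor with a rank-2 tensor fails outright,
# which is the sort of error a merge component would surface.
try:
    np.concatenate([numeric, categorical], axis=1)
except ValueError as exc:
    print("merge failed:", exc)

# Reshaping the one-hot output to (1, n) makes the ranks agree.
merged = np.concatenate([numeric, categorical.reshape(1, -1)], axis=1)
print(merged.shape)  # (1, 5)
```

If the one-hot component emitted (1, n) in the first place, the concatenation would presumably just work.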

Somewhere I did say that I had identified the cause of an error on the merge as only arising due to the categorical inputs… but I may not have been detailed enough!

Ah ok thanks for the clarification :slight_smile:
Plenty of parallel forum threads combined with a lot of other things to do that day mixed it up for me.
We have it on the list of things to fix.

No worries; following these threads can be tricky when there’s no single point of focus (e.g. “the” question)! Thx :slight_smile: