Issues with Seeing in the Dark - data load, tf.python.types missing

I thought I’d already reported this, but apparently not [Update: yes I did, but via the in-tool bug report that sends to github, so it is issue 88 there]

Model source: perceptilabs github repo Seeing-in-the-Dark

Yesterday I downloaded the model (and manually edited the file paths in model.json), then imported the model. Following instructions, Local_1 data was set to Long_cropped OK, but there was an error on setting Local_2 to Short_Cropped (see issue 88)

Today (perceptilabs shutdown yesterday and restarted this morning), I reopened the model and Short_Cropped was loaded fine - but…

Now there is a new error on Convolution_1 - from the Problems panel:

Traceback (most recent call last):
  File "perceptilabs\lwcore\strategies\tf1x.py", line 50, in perceptilabs.lwcore.strategies.tf1x.Tf1xInnerStrategy.run
  File "<rendered-code: 1601056258719 [DeepLearningConv]>", line 30, in __call__

  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\python\util\lazy_loader.py", line 62, in __getattr__
    module = self._load()
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\python\util\lazy_loader.py", line 45, in _load
    module = importlib.import_module(self.__name__)
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\__init__.py", line 39, in <module>
    from tensorflow.contrib import compiler
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\compiler\__init__.py", line 21, in <module>
    from tensorflow.contrib.compiler import jit
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\compiler\__init__.py", line 22, in <module>
    from tensorflow.contrib.compiler import xla
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\compiler\xla.py", line 22, in <module>
    from tensorflow.python.estimator import model_fn as model_fn_lib
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\python\estimator\model_fn.py", line 26, in <module>
    from tensorflow_estimator.python.estimator import model_fn
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_estimator\python\estimator\model_fn.py", line 29, in <module>
    from tensorflow.python.types import core
ModuleNotFoundError: No module named 'tensorflow.python.types'

Now, I built this environment with a conda install of TF 1.15, so I would have expected all TF dependencies to be present already, or, failing that, to be brought in by perceptilabs as needed during its installation. But maybe that's not the issue…

There is a related issue on github that a qualified dev would understand (but I don't)… No module named 'tensorflow.python.types' when building estimator from master branch. Googling also throws up many similar issues.

That said, a search in site-packages found no *.python.types file. However, I think the imports are specifying folder paths: in my TF 2.3 installation there is a tensorflow > python > types folder structure, within which I do find type definitions in .py files such as core.py. I have yet to find a similar set in TF 1.15.

Update: dtypes do seem to exist in TF 1.15, but elsewhere… see

Lib\site-packages\tensorflow_core\python\framework\dtypes.py

Given that info, how could/should it be referenced to make the model work now?
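For what it's worth, a quick way to probe whether a given submodule is resolvable in an environment is importlib.util.find_spec. This is just a diagnostic sketch I put together (the stdlib names below are stand-ins; in the perceptilabs env one would probe "tensorflow.python.types" and "tensorflow_estimator" the same way):

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be resolved on the current sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A missing parent package also counts as unavailable.
        return False

# Demonstrated on stdlib names; substitute "tensorflow.python.types"
# in the actual perceptilabs environment.
print(module_available("json.decoder"))       # True
print(module_available("no_such_pkg.types"))  # False
```

My reading of the linked GitHub issue is that a tensorflow-estimator release newer than TF 1.15 could be the culprit, since newer estimator code imports tensorflow.python.types, which I only find in TF 2.x installs. If so, pinning tensorflow-estimator to a 1.15.x release might be worth trying - an assumption on my part, not a confirmed fix.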

Hope that helps, Julian


Separate but related query as a reply (to keep them separated).

Investigating types, my guess is that the reference is implicit, via dtypes, in lines such as this one, which appears in all the convolution components:

x = tf.dtypes.cast(inputs['input'], tf.float32)

The new question is: where is precision controlled in the model? Are 16-, 32-, or 64-bit floats, or even ints, possible?
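Partially answering my own question: as far as I can tell, precision is pinned wherever that cast appears, so editing tf.float32 to tf.float16 / tf.float64 in each component would change it (an assumption - I haven't found a global setting). What such a cast does to precision can be shown with the stdlib alone, by round-tripping a 64-bit Python float through 32-bit storage (to_float32 is my hypothetical helper, not a TF function):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a 64-bit Python float through 32-bit storage,
    mimicking what a cast to tf.float32 does to precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(to_float32(0.1))   # 0.10000000149011612 - the float32 nearest to 0.1
print(to_float32(0.5))   # 0.5 - exactly representable, no loss
```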

This issue also affects another published model, the textile classification model at github here

Warning: don't quote such output as plain text… dunders like __init__ get rendered as markup bold; use "preformatted text" instead.

Traceback (most recent call last):
  File "perceptilabs\lwcore\strategies\tf1x.py", line 50, in perceptilabs.lwcore.strategies.tf1x.Tf1xInnerStrategy.run
  File "<rendered-code: 1599466492197 [DeepLearningConv]>", line 30, in __call__

  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\python\util\lazy_loader.py", line 63, in __getattr__
    return getattr(module, item)
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
    module = self._load()
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow\__init__.py", line 44, in _load
    module = _importlib.import_module(self.__name__)
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\__init__.py", line 39, in <module>
    from tensorflow.contrib import compiler
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\compiler\__init__.py", line 21, in <module>
    from tensorflow.contrib.compiler import jit
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\compiler\__init__.py", line 22, in <module>
    from tensorflow.contrib.compiler import xla
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\contrib\compiler\xla.py", line 22, in <module>
    from tensorflow.python.estimator import model_fn as model_fn_lib
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_core\python\estimator\model_fn.py", line 26, in <module>
    from tensorflow_estimator.python.estimator import model_fn
  File "c:\users\julian\anaconda3\envs\perceptilabs_tf1-15_gpu\lib\site-packages\tensorflow_estimator\python\estimator\model_fn.py", line 29, in <module>
    from tensorflow.python.types import core
ModuleNotFoundError: No module named 'tensorflow.python.types'

For information: the issue with Short_Cropped is reproducible in a CPU-only environment; the other errors occurred in GPU environments.

Hello @JulianSMoore, thanks for reaching out. Can you please share your OS with us?
Can you share what error you get when you try loading the data in the TensorFlow CPU environment?
Regarding the error in the convolution layer, it seems to have occurred in the tf-gpu environment. Can you try reinstalling TF and check whether the error still exists?

Hi @mukund_s

OS is Win 10 Home 64-bit 20H2.

I'll repeat in the perceptilabs-cpu environment and let you know -> Already did; see above. The problem with the data in the CPU environment prevents me from saying anything about the conv layer, so the rest of the discussion is moot until I can load the data there. The CPU environment is already at 0.11.7, so there are no updates I could try; but I currently suspect the data rather than the code anyway.

[Re GPU environments: I currently have two perceptilabs-gpu environments - one with CUDA inside the env at 0.11.7, and one that uses the OS CUDA at 0.11.6.1.

I may try in other environments (I may also clone and perform updates, esp. of TF)]

What error do you get when you try to load the data in the CPU environment? Both data layers work in a similar way, so I'd like to know why one had no issues while the other does.

Data layer load error? The information was provided at the top of this thread; it's in GitHub issue 88 (Robert tells me that data uploaded via the perceptilabs bug-report button sits behind the otherwise short info shown; I hope it will be helpful).

That said, the error was “need more than 1 value to unpack”.

Thanks. Can you share your model.json file with us? It's hard to pin down the source of the error right now because I was able to load the model in the repo without any issues. Also, given that one layer isn't throwing any error, I'd like to see what your model looks like.

How shall I provide the JSON file? I can't upload it here.

Just realised, we can't upload files here yet. Are you by any chance on the Slack community channel?
https://perceptilabs-com.slack.com/join/shared_invite/enQtODQ5NzAwNDkxOTExLWUxODAwZDk0MzA1MmM4OTViNWE4MmVjYjc2OTQwMTQ4N2NmM2ZlYmI5NjZjOWRiYjBkYjBjMTMzNjEyMDNiNDk

Yes, I’m in perceptilabs slack… have DM’d you with the file

Any chance of a fix for whatever ails the handling of the short_cropped data?

I checked GitHub but there's been no update to the files. The environment and types issues are all resolved; just the data problem remains, AFAICT. I've looked at the data - in GIMP and in Python - and it seems perfectly reasonable. Is it just that its alpha channel causes a problem somewhere?

I know the description of the model says short_cropped is RGBA, but I guess something downstream doesn't like the 4th channel…

Internal error in asyncio.events:88: Error in create_response
Traceback (most recent call last):
  File "perceptilabs\mainInterface.py", line 236, in perceptilabs.mainInterface.Interface.create_response
  File "perceptilabs\mainInterface.py", line 330, in perceptilabs.mainInterface.Interface._create_response
  File "perceptilabs\mainInterface.py", line 557, in perceptilabs.mainInterface.Interface._get_network_data
  File "perceptilabs\lwInterface.py", line 246, in perceptilabs.lwInterface.GetNetworkData.run
  File "perceptilabs\createDataObject.py", line 338, in perceptilabs.createDataObject.subsample_data
  File "perceptilabs\createDataObject.py", line 294, in perceptilabs.createDataObject.createDataObject
  File "perceptilabs\createDataObject.py", line 240, in perceptilabs.createDataObject.create_type_object
  File "perceptilabs\createDataObject.py", line 142, in perceptilabs.createDataObject.grayscale
  File "perceptilabs\createDataObject.py", line 45, in perceptilabs.createDataObject.grayscale2RGBA
ValueError: need more than 1 value to unpack
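My reading of that last frame: if grayscale2RGBA unpacks the image's shape assuming a fixed number of dimensions, an array with an unexpected shape would trip exactly this error (the "need more than 1 value to unpack" wording suggests the unpack target received a single value). A hypothetical reconstruction of the pattern - grayscale2rgba and its internals are my guesses, not PerceptiLabs source:

```python
def grayscale2rgba(shape):
    """Hypothetical sketch: the real code presumably unpacks img.shape."""
    h, w = shape           # ValueError unless shape has exactly two entries
    return (h, w, 4)       # RGBA output adds a 4th channel

print(grayscale2rgba((256, 256)))   # (256, 256, 4)

try:
    grayscale2rgba((65536,))        # e.g. a flattened 1-D array's shape
except ValueError as err:
    print(err)                      # Python 3 wording: "not enough values to unpack"
```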

@mukund_s Gentle nudge re the above :wink: Anything to share?

Hey @JulianSMoore,
Sorry for the slow reply time on this one.

From what I have heard, we were not able to reproduce this issue :confused:
Does this only happen if you read the data into the Seeing in the Dark model, or does it also happen if you try on an empty workspace with a single data component?

Hi @robertl,

Conclusion: the model editor is not robust against rapid user entry / unexpected ordering of user actions. In the end I rebuilt the model from your nodes and re-linked them, but the relinking didn't work and had to be redone on some components. The rebuilt model is attached: zipmodel.zip (3.9 KB)

Seems like a fairly fundamental issue which, once recognised and fixed, will cure a lot of other things :smiley:

Steps…
Copied and pasted the component from Seeing in the Dark to a new model and the error came with it…

File "perceptilabs\createDataObject.py", line 45, in perceptilabs.createDataObject.grayscale2RGBA
ValueError: need more than 1 value to unpack

And then went away!

Convolutions 1, 2 & 3 copied and relinked, and still OK (can't remember if there was a transient error)
Copy/pasted the Local data for long cropped (standing in isolation for the moment) and the error recurs - and then goes away again

So, I thought I’d rebuild the whole thing by copy/paste…

NB Copying/pasting many components not only loses connections (previously reported; you noted an improvement is planned) but also loses component placement - everything became squashed together.

While relinking, it keeps processing, and it even delivered "kernel offline" on attempting to save… but that goes away once it has finished.

Still left with errors…

but after RE-linking input_2 on merge 3_1, then 2_1, then merge 1_1, and finally the output of merge 1_1 to deconv 2_1…

And finally it is complete with no errors.

So… not robust: the originally reported error should have gone away, but because of connection issues / the way the model is parsed, it was never cleared. It was probably relinking the data components to new sources that propagated an error state which was never properly resolved - as to why: no idea :slight_smile:

HTH!


New day, new server instance… I opened up yesterday's success and groaned: the error is back, though nothing has changed (I notice model.json was modified again, just not by me).

Really looking forward to the next release!