GAN: Import of image folder not working

Hello. I’ve tried loading a folder of PNG images as a dataset for a GAN. However, when I select the folder, I’m still told that there is no data in Local. Is there something else I need to do in this situation to load this folder of images?

Hello,
You can also look at these docs to make sure you are loading it correctly: https://perceptilabs.com/docs/components

Or follow the steps in this video: loading data folder tutorial

If the error still persists, can you start PerceptiLabs with the command `perceptilabs -v=3` and check the logs in your terminal? Can you send us those logs so that we can identify the source of the error?

Thank you, but 2 things I need to ask.

  1. The Components page says something about parameters. Are they something that needs to be changed manually, and if so, how?

  2. I can’t seem to view the video because I don’t have access to the Slack page.

Hi @Magenta,

Try this link instead: https://drive.google.com/file/d/1o8tU9xIsWl8gwt8a7iPy81j04F1-19ub/view?usp=sharing

The parameters mentioned on that page are just the component settings over on the right. For Data components they mostly just load the data (the Load Data button).

Thank you for the help. I’ve successfully gotten my dataset into the GAN and run the program. Unfortunately, I hit an error when attempting to begin training which I’m unsure about. Sorry, I’m new to this.

Userland error in layer 1598914700401 [DeepLearningFC]. Line: 28
ValueError("Dimensions must be equal, but are 784 and 65536 for 'DeepLearningFC_Dense_3_1/MatMul' (op: 'MatMul') with input shapes: [?,784], [65536,128].")

File "C:\Users\toops\AppData\Local\Temp/training_script.py", line 1347, in run, origin 1598990049723, line 520 [TrainGan]
    self.init_layer(graph, mode)
File "C:\Users\toops\AppData\Local\Temp/training_script.py", line 1057, in init_layer, origin 1598990049723, line 230 [TrainGan]
    random_discriminator_layer_output_tensors = build_discriminator_graph(generator_output_tensor)
File "C:\Users\toops\AppData\Local\Temp/training_script.py", line 1045, in build_discriminator_graph, origin 1598990049723, line 218 [TrainGan]
    y = dst_node.layer_instance(inputs)
File "C:\Users\toops\AppData\Local\Temp/training_script.py", line 657, in __call__, origin 1598914700401, line 28 [DeepLearningFC]
    y = tf.matmul(flat_node, W) + b
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 2754, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6136, in mat_mul
    name=name)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
    op_def=op_def)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1770, in __init__
    control_input_ops)
File "c:\users\toops\anaconda3\envs\pl\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1610, in _create_c_op
    raise ValueError(str(e))
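For context on what the traceback is saying: 784 = 28 × 28 while 65536 = 256 × 256, so the flattened tensor reaching the Dense layer is a different size than the input size its weight matrix was built for. A minimal NumPy sketch of the same mismatch (the shapes come from the error message; everything else here is illustrative, not from the actual network):

```python
import numpy as np

# The tensor reaching the Dense layer is 28x28 -> 784 values once flattened.
gen_output = np.zeros((1, 28 * 28))    # shape [1, 784]

# The Dense layer's weights were built for 256x256 = 65536 inputs.
W = np.zeros((256 * 256, 128))         # shape [65536, 128]

try:
    _ = gen_output @ W                 # inner dimensions 784 vs 65536 differ
except ValueError as e:
    print("matmul fails:", e)
```

In practice this usually means the images (or the generator's output) are a different size than the discriminator's Dense layer expects, so one of the two shapes has to change.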

Edit: Also, is there a way to retain colour in the dataset rather than having the images turned black and white?
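For background on the colour question: whether the dataset keeps its three RGB channels or gets collapsed to one depends on how the loader decodes the images. A rough NumPy illustration of what a grayscale conversion does to the channel dimension (the luminance weights are the standard Rec. 601 values; the image itself is a stand-in, not part of the dataset in this thread):

```python
import numpy as np

# A stand-in 28x28 RGB image (random values, purely illustrative).
rgb = np.random.randint(0, 256, size=(28, 28, 3), dtype=np.uint8)

# Collapsing the three channels with luminance weights is what
# "turning the dataset black and white" amounts to.
gray = (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

print(rgb.shape)   # (28, 28, 3) - colour kept
print(gray.shape)  # (28, 28)    - colour lost
```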

Hello @Magenta, great that you were able to load the dataset.
Your error seems to be network-specific.
Can you tell us more about the size and shape of your dataset, and also share the network you are trying to build?

Sorry, I forgot to mention earlier that I was able to resolve the error. Sorry for wasting your time.