GAN example: Modifying random generator for different image shapes

I recognize that this question is less specific to PerceptiLabs, but since it has been a valuable entry point into ML for me, I’m hoping to get feedback within this community.

I have a collection of images that I’d like to use with the GAN example, but their shape is 28x28x3 as opposed to the 28x28x1 of MNIST. What is the best practice for modifying the generator so that the random components will work with this set of images?

Thank you!

Hello @markhirsch, what we basically do in the generator is generate an image of the required size from a random sample. In our GAN template, we start with a random tensor of size 100 and then gradually upsample it through dense layers to size 784. Here 784 matches the flattened shape of the data we want to learn from (28 * 28 * 1), before reshaping.

Because your images have shape 28 * 28 * 3, your generator needs to produce outputs of the same total size, which we then reshape using the Reshape layer. Since 28 * 28 * 3 = 2352, we want the generated output to have that many values. Here is one way I chose to do it, in gradual steps from 100 to 2352. The 100 is also arbitrary: you can choose a random sample of any shape and size.
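To make the shape arithmetic concrete, here is a minimal pure-NumPy sketch of that gradual widening, 100 → 2352, followed by the reshape to 28x28x3. The intermediate layer sizes (256, 512, 1024) are illustrative choices of mine, not values mandated by the PerceptiLabs template, and in the tool itself these steps would be Dense components rather than raw matrix multiplies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gradual widening: 100 -> 256 -> 512 -> 1024 -> 2352,
# where 2352 = 28 * 28 * 3, the flattened target image size.
sizes = [100, 256, 512, 1024, 2352]

x = rng.normal(size=(1, sizes[0]))      # random latent sample (batch of 1)
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W = rng.normal(scale=0.02, size=(n_in, n_out))  # stand-in dense weights
    b = np.zeros(n_out)
    x = np.maximum(0.0, x @ W + b)      # dense layer followed by ReLU

image = x.reshape(1, 28, 28, 3)         # final reshape to 28x28x3
print(image.shape)                      # (1, 28, 28, 3)
```

The only hard constraint is that the last dense layer outputs exactly 2352 values so the reshape is valid; everything in between is free.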

Hope this helps you build the model successfully! Let us know if you have any more questions.



This is extremely helpful, thank you!!


Since you mention upsampling here, is there or will there be support for convolution transpose upsampling?

The Deconvolution component should do just that for you :slight_smile:
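For anyone curious what that component does under the hood: a deconvolution in this sense is a transposed convolution (what Keras calls `Conv2DTranspose`). Here is a rough single-channel NumPy sketch of the scatter-add form, where each input pixel "paints" a kernel-sized patch into a larger output; the kernel values and sizes are arbitrary and chosen only to show the upsampling:

```python
import numpy as np

def conv2d_transpose(x, kernel, stride=2):
    """Single-channel transposed convolution via scatter-add:
    each input pixel contributes a kernel-sized patch to the output."""
    h, w = x.shape
    k = kernel.shape[0]
    # Output size follows the standard formula: (n - 1) * stride + k
    out = np.zeros(((h - 1) * stride + k, (w - 1) * stride + k))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+k, j*stride:j*stride+k] += x[i, j] * kernel
    return out

x = np.ones((7, 7))              # e.g. a 7x7 feature map
kernel = np.ones((4, 4)) / 16.0  # 4x4 kernel with arbitrary weights
y = conv2d_transpose(x, kernel)  # (7-1)*2 + 4 = 16, so output is 16x16
print(y.shape)                   # (16, 16)
```

In a real generator the kernel weights are learned, and a framework implementation handles channels and padding for you.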

:man_facepalming: Ah, the “Deconvolution” cunningly labelled “Deconvolution” just to check whether I’m paying attention :laughing: :laughing: :laughing:
