Pix2Pix is a conditional GAN for image-to-image translation: it learns a mapping from an input image to an output image, transforming the input into a target style.
In this article, we will build a model that creates galaxy images from noisy sketches. An example is shown below:
I had to create a dataset from scratch: there are plenty of galaxy image datasets, but none of them really fit this purpose. After some thinking, I came up with an idea. First, I downloaded a large number of galaxy images from Google Images. For each image, I created an edge map using Canny edge detection. Depending on the threshold values, we get edge maps of different quality: with weaker thresholds, we get a noisy image that roughly sketches the outline of the galaxy.
The Canny edge image looks noisy, and we form each training pair as (Canny edge image of a galaxy, original galaxy image).
The image-to-image translation task is then to translate the noisy edge image into a realistic galaxy-style image.
To train my GAN, I followed the official TensorFlow tutorial: pix2pix: Image-to-image translation with a conditional GAN.
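The heart of that tutorial is the generator loss, which combines an adversarial term with an L1 reconstruction term. A minimal sketch of that loss (following the tutorial's formulation, with its `LAMBDA = 100` weight) looks like this:

```python
import tensorflow as tf

LAMBDA = 100  # L1 weight used in the pix2pix paper and TF tutorial
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_generated_output, gen_output, target):
    # Adversarial term: push the discriminator toward labeling
    # the generated image as real (all ones).
    gan_loss = loss_object(tf.ones_like(disc_generated_output),
                           disc_generated_output)
    # L1 term: keep the generated galaxy pixel-wise close to the target.
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    total_gen_loss = gan_loss + LAMBDA * l1_loss
    return total_gen_loss, gan_loss, l1_loss
```

The large L1 weight is what keeps the output anchored to the input sketch instead of drifting to an arbitrary realistic-looking galaxy.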
The code is embedded below but you can also find it here.
After training, the model was saved in HDF5 (.h5) format and loaded back for predictions as shown below.
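The save-and-predict round trip can be sketched as follows. The tiny one-layer network here is only a stand-in for the tutorial's U-Net generator, and the filename is illustrative; the flow (save to `.h5`, load, predict, rescale the tanh output for display) is the same:

```python
import numpy as np
import tensorflow as tf

# Stand-in for the trained pix2pix generator (the real one is a U-Net).
gen = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="tanh"),
])
gen.save("generator.h5")  # HDF5 format inferred from the .h5 suffix

loaded = tf.keras.models.load_model("generator.h5")

# A noisy edge image, normalized to [-1, 1] as in the tutorial.
edge = np.random.rand(1, 256, 256, 1).astype("float32") * 2 - 1
pred = loaded.predict(edge, verbose=0)
galaxy = (pred[0] + 1) / 2  # rescale tanh output to [0, 1] for display
```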
The code can be found here.