Here’s an abstract image of Victoria Harbour. It was created by a convolutional neural network model that composes one image in the style of another. This technique, called Neural Style Transfer, was introduced in Leon A. Gatys’ paper, A Neural Algorithm of Artistic Style.
Concretely, Neural Style Transfer takes three images: a content image, a style image, and an input image (which is randomly initialized), and blends them together. If you want to learn more about the details, you can find a step-by-step walkthrough on Medium and a useful Jupyter Notebook created by the TensorFlow team on GitHub. Here are the content image and style image for the output above.
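The blending is driven by minimizing a weighted sum of a content loss and a style loss, as described in Gatys’ paper: the content loss compares feature maps directly, while the style loss compares their Gram matrices. Below is a minimal NumPy sketch of those losses; the function names and the weights `alpha` and `beta` are illustrative, not taken from this repository’s code.

```python
import numpy as np

def gram_matrix(features):
    # features: array of shape (height * width, channels),
    # i.e. one layer's activations flattened over spatial positions.
    # The Gram matrix captures correlations between channels ("style").
    return features.T @ features

def content_loss(content_feat, input_feat):
    # Mean squared difference between the feature maps themselves.
    return np.mean((content_feat - input_feat) ** 2)

def style_loss(style_feat, input_feat):
    # Mean squared difference between the Gram matrices.
    return np.mean((gram_matrix(style_feat) - gram_matrix(input_feat)) ** 2)

def total_loss(content_feat, style_feat, input_feat, alpha=1e4, beta=1e-2):
    # Weighted sum; alpha/beta trade off content fidelity vs. style.
    return (alpha * content_loss(content_feat, input_feat)
            + beta * style_loss(style_feat, input_feat))
```

In the full algorithm this total loss is summed over several VGG-19 layers and minimized by gradient descent on the pixels of the input image, not on any network weights.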
I modified the source code provided by the TensorFlow team and developed a C# frontend for the model to make it more user-friendly. You can monitor the training process and save the output image conveniently.
You need to install Python 3.6 for the backend and .NET Framework 4.6.1 for the frontend. Python packages such as numpy, pillow, and tensorflow-gpu are also required. This guide will show you how to install tensorflow-gpu.
On first run, the model automatically downloads a pre-trained VGG-19 network, which may take a few minutes. The time spent training depends on the computing capability of your graphics card. On an Nvidia GeForce GTX 1060 6GB, training usually takes 2 or 3 minutes; on a CPU (such as an Intel Core i5-8250U), it may take a few hours.
Try it out with your own images.
Source Code: https://github.com/xAsiimov/ImageNST (MIT License)