Artistic AI paints portraits of people who aren't really there

Mike Tyka paints the portraits of people who don’t exist. The subjects of his ephemeral artwork are not born from any brush. Rather, they are sculpted — roughly — from the digital imagination of his computer’s neural network.

Faces are “interesting and we humans react strongly to them, we tend to read a lot into them,” Tyka explained over email. “I find I connect with them when I work with them, I’m curious about who they might be, if they existed.”

As such, Tyka has spent the past nine months or so developing the Portraits of Imaginary People project, which follows his earlier works, Inceptionism and The Groovik’s Cube. For Imaginary People, Tyka sought to use generative neural networks to create original portraits, much like the one Alexander Reben used to mimic Bob Ross’ speaking style.

To do so, Tyka turned to a machine learning technique known as a generative adversarial network (GAN). “I started experimenting with GANs for this installation I did with Refik Anadol, where we used the technique to generate imaginary historical documents from a large archive,” Tyka wrote. “After we finished that project (it opened in April in Istanbul), I started looking at faces again using the same techniques.”

If you want a generative model like a GAN to, say, draw you a picture of a cat, you’ll first have to get a huge data set of cat pictures and then train the model to create a picture of a cat with all the requisite features like ears, whiskers and a tail. In this case, Tyka utilized roughly 20,000 high-resolution portraits from Flickr as his base training data set.

That’s a good first step, but let’s say you want a realistic picture of a cat, not a digital doodle. To do that, you need to set up a second (adversarial) neural network, known as a discriminator, for your GAN. So while the first network (the generator) creates pictures of cats, the discriminator’s job is to compare those generated images against real-world samples (e.g., actual pictures of cats) and figure out if they’re fake or not. Based on each result, the system then goes back and tweaks the generator network’s parameters to make the output image appear more and more realistic.
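The generator/discriminator loop described above can be sketched in a few lines of PyTorch. This is a generic, minimal illustration of the GAN technique — the tiny fully connected networks, learning rates and random tensors standing in for real photos are assumptions for the sketch, not Tyka's actual setup:

```python
import torch
import torch.nn as nn

# Minimal GAN sketch: a generator maps random noise to fake "images";
# a discriminator scores images as real (1) or fake (0). Random tensors
# stand in for real photos here.
latent_dim, img_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(5):
    real = torch.rand(32, img_dim) * 2 - 1      # stand-in for real photos
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator: push scores for real samples toward 1, fakes toward 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: tweak its parameters so fakes fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each round, the discriminator's verdict is exactly the "result" used to tweak the generator's parameters: the generator's loss is low only when the discriminator scores its output as real.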

If you’re only using a single, unconditioned GAN, the output image is typically only going to be in the 128×128 to 256×256 pixel range, Tyka explained. So to increase the size of these machine-generated images, he stacked multiple, separately trained GANs on top of one another. “The second stage is a superres GAN which [sic] is conditioned on the output of the former,” Tyka said. “I.e. in addition to the discriminator loss (which tries to make it look ‘real’) there is an additional term that makes sure the output is a plausible high-res version of the respective low-res input.” This second stage effectively increases the image resolution to 768×768 or 1024×1024 pixels.
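The two-term objective Tyka describes — a discriminator loss plus a term keeping the output faithful to the low-res input — might look something like the following sketch. The function name, the L1 consistency term and the weight `lam` are illustrative assumptions, not his actual implementation:

```python
import torch
import torch.nn.functional as F

def superres_loss(disc_score_fake, fake_hr, low_res, lam=10.0):
    """Combined loss for a super-res GAN conditioned on a low-res input.

    disc_score_fake: discriminator output in (0, 1) for the generated image.
    fake_hr:  generated high-res image, shape (N, C, H, W).
    low_res:  conditioning low-res input, shape (N, C, h, w).
    """
    # Adversarial term: make the output look "real" to the discriminator.
    adv = F.binary_cross_entropy(disc_score_fake,
                                 torch.ones_like(disc_score_fake))
    # Consistency term: downsampling the output should recover the input,
    # i.e. the result must be a plausible high-res version of the low-res.
    down = F.interpolate(fake_hr, size=low_res.shape[-2:],
                         mode="bilinear", align_corners=False)
    recon = F.l1_loss(down, low_res)
    return adv + lam * recon

# Dummy tensors illustrating a 64 -> 256 pixel upscale stage.
low = torch.rand(1, 3, 64, 64)
hr = torch.rand(1, 3, 256, 256)
score = torch.sigmoid(torch.randn(1, 1))
loss = superres_loss(score, hr, low)
```

Without the consistency term, the second-stage generator could output any plausible face; the extra term anchors it to the first stage's output.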

By training the second-level (or even third-level) GAN on higher-resolution images of specific facial details like eye, hair and skin texture, it can act as an upscaler for the GANs stacked below it. Eventually Tyka wants to generate 4K-quality pictures, though he’s currently having difficulty finding a sufficiently robust data set for training such a system.

Getting the results you see here is easier said than done, however. There was plenty of work to do before the first training run ever started. “GANs are hard to train and hard to control,” Tyka explained. “Grooming the input data is important, making sure all images are high-res, don’t have artifacts and are not drawings but real photos is time consuming.”
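That grooming step amounts to filtering the candidate pool down to high-resolution, artifact-free photographs. A pure-Python sketch of the idea — the metadata dicts, field names and the 1024-pixel threshold are illustrative assumptions, not the actual Flickr pipeline:

```python
# Keep only images that are high-res, artifact-free, and real photos
# (not drawings), per the criteria Tyka describes.
MIN_SIDE = 1024  # assumed minimum for the shorter side, in pixels

def keep(img):
    return (min(img["width"], img["height"]) >= MIN_SIDE
            and not img["has_artifacts"]
            and img["kind"] == "photo")

candidates = [
    {"width": 2048, "height": 1536, "has_artifacts": False, "kind": "photo"},
    {"width": 640,  "height": 480,  "has_artifacts": False, "kind": "photo"},
    {"width": 3000, "height": 2000, "has_artifacts": True,  "kind": "photo"},
    {"width": 2048, "height": 2048, "has_artifacts": False, "kind": "drawing"},
]
dataset = [img for img in candidates if keep(img)]  # only the first survives
```

In practice each of these checks is tedious at the scale of ~20,000 images, which is why Tyka calls the grooming time consuming.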

What’s more, keeping the adversarial networks in sync requires a fair amount of trial and error. “GANs are annoying because there isn’t a global objective function. The two networks are each other’s objective functions so to speak, so the goalposts are moving,” Tyka explained. “It’s hard to compare different runs with different parameters because there isn’t a good, stable metric for how well a particular net is doing.”

Still, Tyka’s desired end result for this project does not revolve around accuracy or fidelity. “The goal, like with many art projects, is to make compelling artwork which [sic] inspires or moves or makes you think,” he concluded. “That’s hard to quantify so I just follow my gut.”

Source: Engadget

Author: Daily Tech Whip

