Pikazo Uses Artificial Intelligence to Turn Pics into Masterpieces

The Pikazo app uses a special neural algorithm to transform snapshots into works of art.

Many children are natural-born artists, creating crayon masterpieces that their parents hang on their refrigerators. As they become adults they often lose the freedom to express themselves in art. When a coworker says she’s spending the weekend painting, most people assume she’s painting her house.

That innate talent is something artificial intelligence (AI) designers built into the Pikazo app, which allows mobile phone shutterbugs to fuse their photos with masterpiece patterns to create unique pieces of digital art.


Pikazo is built on a deep neural network, a computer system loosely modeled on the human brain and nervous system. Leveraging AI and backed by substantial Intel computing performance, the app lets people reclaim their natural creativity and create artwork with the stylistic flourishes of a master.

Unlike Snapchat or Instagram images that insta-exit everyone’s memory, the works produced by Pikazo feel like art.

“People say, ‘Wow, I want to look at that some more,’” said Noah Rosenberg, co-founder of Pikazo.

What makes Pikazo unique is that it doesn’t just apply a filter to the images. Instead, the app’s neural network rearranges elements of the original image in unexpected ways — like recasting faces into abstract portraits.


Pikazo uses this neural style transfer algorithm to transfer the style of one image onto another, resulting in a brand-new work that resembles both originals but looks like nothing seen before.
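The core idea behind neural style transfer is that "style" and "content" are compared through network feature statistics rather than raw pixels: style is commonly summarized by Gram matrices (channel-to-channel correlations) of feature maps, while content is compared feature-by-feature. The following is a minimal NumPy sketch of those two loss terms, not Pikazo's actual code; the array shapes, the random stand-in activations, and the 100x style weight are illustrative assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.
    features: (channels, height, width) activation map."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)          # each row = one channel
    return flat @ flat.T / (c * h * w)         # (c, c) channel correlations

def style_loss(gen_feats, style_feats):
    """Mean squared difference between Gram matrices."""
    return np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

def content_loss(gen_feats, content_feats):
    """Direct feature-map difference preserves the subject's layout."""
    return np.mean((gen_feats - content_feats) ** 2)

# Random stand-ins for activations a real trained network would produce.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))
candidate = 0.5 * content + 0.5 * style        # a hypothetical blended result

total = content_loss(candidate, content) + 100.0 * style_loss(candidate, style)
print(f"combined loss: {total:.4f}")
```

Weighting the style term more heavily pushes the result toward the painting's texture; weighting content preserves the subject, which is the trade-off a user implicitly tunes when blending a selfie with a masterpiece.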

Art from Man and Machine – How It Works

The Pikazo app merges two images: one containing the subject to be “painted” (such as a selfie) and the other showing the style in which to paint it — be it da Vinci’s Mona Lisa or Picasso’s cubism.

“If I gave it an image of me, and I gave it an image of Starry Night, it would merge the two,” explained Rosenberg.

Original image

This is how it works. The neural network, or software modeled after the human brain, consists of layers of detectors that are trained to recognize images. When Pikazo recognizes a specific shape, it activates a particular neural pathway. The layers of detectors let it build up a deep understanding of an image.

Pikazo image

For instance, the algorithm moves from recognizing a simple shape, such as a circle, to actually identifying the specific kind of circular shape — such as the moon — and then goes further into more nuanced details, such as textures.
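The progression from simple shapes to specific objects to textures comes from stacking detectors: early layers respond to local patterns like edges, and later stages summarize those responses over larger regions. Here is a toy NumPy sketch of that idea (not Pikazo's network): a hand-built edge filter fires where brightness changes, and a pooling stage gives the coarser view deeper layers use to spot whole shapes.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Layer 1: a classic vertical-edge detector (Sobel kernel).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# A simple image: dark left half, bright right half -> one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

edges = np.abs(convolve2d(image, sobel_x))

# Layer 2: 2x2 max pooling summarizes edge responses over larger regions.
pooled = edges.reshape(edges.shape[0] // 2, 2,
                       edges.shape[1] // 2, 2).max(axis=(1, 3))

print("strongest edge response at columns:", np.argmax(edges, axis=1))
```

A real deep network learns thousands of such filters from data instead of hand-coding them, but the layering principle — local pattern, then larger pattern, then object — is the same one described above.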


“Sometimes it’ll put an eyeball where a mouth should go,” said Rosenberg. “It produces some really surprising results.”

Amateurs use Pikazo for the whimsy and creativity it affords. But Rosenberg said professional artists also use the app as a tool to help them envision their work.

“It helps them rapidly audition a bunch of different approaches to a subject,” he said.

From DeepDream to Reality

Rosenberg said it all started with DeepDream, a project Google released in 2015. A Google engineer wanted to understand how the detectors of a deep neural network worked.

The team fed new images of dogs and sheep to an existing neural network trained to recognize those animals and asked it to highlight the pixels it used to identify them.
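Highlighting the pixels a network relies on amounts to asking how much the output score changes when each input pixel changes — the gradient of the score with respect to the input. Below is a toy sketch with a tiny hand-rolled two-layer network and manual backpropagation; the random weights are placeholders, not a trained dog-and-sheep classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer "classifier": 16-pixel input -> 8 hidden units -> 1 score.
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((1, 8))
pixels = rng.standard_normal(16)

# Forward pass.
hidden = np.maximum(0.0, W1 @ pixels)      # ReLU detectors
score = float(W2 @ hidden)                 # "how dog-like is this input?"

# Backward pass: gradient of the score with respect to each pixel.
d_hidden = W2.flatten() * (hidden > 0)     # ReLU gates the gradient
saliency = np.abs(d_hidden @ W1)           # |d score / d pixel|

# The largest entries mark the pixels the network leaned on most.
top = sorted(np.argsort(saliency)[-3:].tolist())
print("most influential pixels:", top)
```

In a deep image network the same computation produces a heat map over the photo, which is the "highlight the pixels" visualization the DeepDream work explored.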


Then a researcher in Germany who saw DeepDream realized he could use the same principles to get a neural network to identify the pixels that make up brush strokes. He reworked the program into an algorithm that could “paint” a subject image in the style of another.

Pikazo’s code initially leveraged existing open-source neural network projects: the Visual Geometry Group’s (VGG) image-recognition network and Torch, a machine-learning framework that could render images on graphics processing units (GPUs).

“Pikazo starts with a seed and expands out from there; it finds out how far it can go until it discovers the edges of each of the symbols [in the original images],” Rosenberg explained. “It’s a really beautiful and intricate process.”
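The "start with a seed and expand" description matches how style transfer is typically optimized: begin from a seed image and repeatedly nudge its pixels downhill on a combined content-and-style objective. Here is a minimal gradient-descent sketch in NumPy; the quadratic stand-in loss, step size, and iteration count are illustrative assumptions in place of the real network-based objective.

```python
import numpy as np

rng = np.random.default_rng(2)
content = rng.standard_normal((16, 16))
style_target = content.mean()  # stand-in "style statistic" to match

def loss_and_grad(img):
    """Toy objective: stay near the content image while matching a
    global style statistic. Real systems use network features instead."""
    content_term = img - content                   # pull toward content
    style_term = img.mean() - style_target         # match style statistic
    loss = 0.5 * np.sum(content_term ** 2) + 50.0 * style_term ** 2
    grad = content_term + 100.0 * style_term / img.size
    return loss, grad

img = rng.standard_normal((16, 16))   # the "seed": random noise
history = []
for step in range(200):
    loss, grad = loss_and_grad(img)
    history.append(loss)
    img -= 0.1 * grad                 # gradient descent on the pixels

print(f"loss: {history[0]:.2f} -> {history[-1]:.6f}")
```

Each iteration refines the whole canvas a little, which is why the process is slow and compute-hungry — and why rendering times of 30 to 45 minutes were a real constraint, as described below.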

It was also a complex process that required the neural network to run in the cloud rather than on the user’s mobile device. And the GPUs available in the cloud limited Pikazo to a 3-megapixel image that took 30 to 45 minutes to create.

Looking for a solution to reduce the rendering time, Rosenberg came to Intel.

He said Intel helped Pikazo move its neural network code library to run on Intel Xeon processors and Intel-based cloud services.

With the software optimized, Pikazo could move away from the more expensive specialty hardware it had used to generate artwork. Reducing costs meant Rosenberg could lower the price charged for creating larger, poster-sized versions of artwork made with the app.


Pikazo creates basic images for free and sells enhancements that produce higher-resolution images. At first, a high-resolution image cost $15. Now it costs users just $1.

This helped Pikazo grow its user base and scale up artwork creation. Pikazo initially produced images that were about three inches across when printed — smaller than a smartphone screen. Now, Rosenberg said, Pikazo has the memory capacity to support much larger images.

“We can cover the Sistine Chapel,” he said.
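Producing very large outputs within fixed memory is commonly handled by processing the image in tiles, so that peak memory depends on the tile size rather than the full canvas. The article doesn't say how Pikazo achieves its scale, so the following is a generic sketch of that technique, with a cheap placeholder transform standing in for the stylization step.

```python
import numpy as np

def render_tiled(image, tile, process):
    """Apply `process` to fixed-size tiles so peak memory stays bounded
    regardless of how large the full image is."""
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = process(patch)
    return out

# Demo: "stylize" each tile with a placeholder transform (inversion).
big = np.random.default_rng(3).random((1000, 1500))
styled = render_tiled(big, tile=256, process=lambda p: 1.0 - p)

print(styled.shape)
```

Real tiled stylization usually overlaps the tiles and blends their borders so seams don't show; that detail is omitted here for brevity.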

Here’s a look at the many faces of Mona Lisa created using Pikazo.
