Getting Creative with Artificial Intelligence

Kristin Houser Writer & Editor, LA Music Blog

Computers that learn can help artists and makers take their crafts to the next level.

From drum machines to digital cameras, technology has a history of revolutionizing the ways artists create. Now, unlikely new tools from the world of tech are altering everything from music to art to filmmaking.

According to Nidhi Chappell, director of Machine Learning at Intel, artificial intelligence (AI) is a branch of computer science in which machines can sense, learn, reason, act and adapt to the real world — amplifying human capabilities, automating tedious or dangerous tasks, and solving some of society’s most challenging problems.

Similarly, she said, machine learning (ML) is a subset of AI in which computers accumulate and process huge amounts of data, enabling them to create mathematical algorithms. This allows computers to act or “think” without being explicitly directed to perform specific functions.
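That distinction — deriving a rule from data rather than hard-coding it — can be shown in a few lines. The following toy sketch (an illustration of the idea, not any specific Intel or industry system) fits a line to observed points with closed-form least squares; the relationship between input and output is never written into the program, only recovered from the data:

```python
# Toy illustration of "learning from data": fit a line y = m*x + b
# to observed points using closed-form least squares, rather than
# hard-coding the relationship between x and y.

def fit_line(points):
    """Return slope m and intercept b minimizing squared error."""
    n = len(points)
    sum_x = sum(x for x, _ in points)
    sum_y = sum(y for _, y in points)
    sum_xx = sum(x * x for x, _ in points)
    sum_xy = sum(x * y for x, y in points)
    m = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
    b = (sum_y - m * sum_x) / n
    return m, b

# The underlying rule here (double x, add one) appears nowhere in
# the code; it is recovered from the examples themselves.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
m, b = fit_line(data)
print(m, b)  # 2.0 1.0
```

Real ML systems learn far richer rules from far messier data, but the principle is the same: the behavior comes from the examples, not from explicit instructions.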

Both AI and ML technologies have been used for decades in fields such as education, finance and medicine, but as Shun Matsuzaka, Founder of McCann Millennials, put it, “It is generally believed that artificial intelligence can’t ‘create’ or be effective in the creative field.”

Not anymore.

Today, Matsuzaka and other innovators across the globe are using AI and ML tech as tools for writing music, painting pictures and much more.

“Technically, I see a wide open field for AI-enabled artistic tools. Music tools. Animation tools. Sculpture tools. Writing tools,” said Karl Stiefvater, Founder of Pikazo, an app that uses an artificial neural network to create art. “The space is very exciting.”

Matsuzaka, Stiefvater and others are merging the worlds of AI and art in surprising ways. These four projects highlight the role tech can play in this budding creative renaissance.

AI-CD β

In March, McCann Erickson Japan announced the addition of a unique member to its team: AI-CD β, an artificially intelligent creative director.

AI-CD β was created as part of the McCann Millennials project, an innovative undertaking led by members of the agency born between 1980 and 2000.

“With the McCann Millennials project being young, there was nobody who was a creative director,” said Matsuzaka. “We decided to create one ourselves.”

The team used iconic past advertisements to inform the AI, hoping it would guide them to create future works of equal caliber.

“We analyzed all of the award-winning work from the past 10 years of the ACC CM Festivals, which is Japan’s most authoritative awards show,” explained Matsuzaka. “We placed original tags on [the ads] to analyze and develop an algorithm for creating visuals that emotionally move people, then installed that into the AI.”

For its first assignment, an ad for Clorets Mint Tab, AI-CD β gave the direction to “convey ‘wild’ with a song in an urban tone, leaving an image of refreshment with a feeling of liberation.” This gave the team a great deal of room for interpretation, but like any good hire, AI-CD β’s direction should get more refined with experience.


“The AI will evaluate and learn from the result after a commercial is aired, so that its precision will be improved on future projects,” said Matsuzaka. “We will check whether the commercial achieved the client’s goal — if there was any perception change after seeing the commercial — and based on the result, we will evaluate whether the direction was correct or not.”

He continued, “We will feed that back into the AI database, and by doing so, the algorithm will be updated, and the AI will have learned from the result.”

Project Magenta

Just two months after Google’s “smart software” AlphaGo became the first computer program to beat a world champion in the game of Go, the company announced Magenta, a new project that asks, “Can we use machine learning to create compelling art and music?”

“The goal of Magenta is to build machine intelligence-powered tools to support human creativity,” said Doug Eck, Senior Staff Research Scientist for Project Magenta. “We’ve done so much in related areas like speech recognition and translation. Now we want to see what’s possible in the space of creativity.”

The project uses TensorFlow, an open-source machine learning system created by Google Brain, to develop algorithms that can learn how to create art and music. After Google’s Magenta team builds the open-source infrastructure for the project around TensorFlow, they’ll share the tools they’ve created with the public to use for music making. Eck would like to see a vibrant community emerge around Magenta.
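To get a feel for what “algorithms that learn how to create music” can mean at its simplest, here is a toy sketch — emphatically not Magenta’s actual models, which are far more sophisticated neural networks — that learns note-to-note transition patterns from example melodies and samples a new melody from them:

```python
import random
from collections import defaultdict

# Toy sketch (not Magenta's actual models): learn which notes tend
# to follow which from example melodies, then sample a new melody
# from the learned transition table.

def train(melodies):
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:
            break
        melody.append(rng.choice(choices))
    return melody

examples = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C"]]
model = train(examples)
print(generate(model, "C", 8))
```

Every note the generator emits follows a transition it actually observed in the training melodies — the same learn-from-examples principle Magenta applies at much larger scale with deep networks.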

“We want to bring in more people than just the narrow band of folks who tend to write complicated machine-learning algorithms,” Eck continued. “We want to be the glue between art and science.”

RobotArt

Teams from universities and high schools across the globe competed in the inaugural RobotArt competition, building 16 robots capable of independently creating paintings on canvas.

“My goal for the first year was just to show that there was interest in the competition, so my expectations were low,” said Andrew Conru, Founder of RobotArt. “I was really surprised, however, by the diversity of the submissions as well as the quality of many.”

Team TAIDA from National Taiwan University took home the grand prize of $30,000 with their Robot Artist, which was, according to their contest submission, “Inspired by human artists’ painting behavior.”

The robot combines the artistic technique of underpainting with non-photorealistic rendering (NPR) technology and a seven-degree-of-freedom (DoF) robot arm to mimic a given image. An external camera attached to its three-finger gripper provides visual feedback, allowing the robot to view and analyze the painting throughout the process, just like a human painter would.
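The core of that feedback loop — paint, look, compare, correct — can be sketched in miniature. This toy example (an illustration of the idea only, not Team TAIDA’s actual system) treats the canvas as a grid of tones and, after each imperfect “stroke,” looks for the spot that differs most from the target image:

```python
# Toy sketch of a paint-and-check feedback loop (not Team TAIDA's
# actual system): after each stroke, compare the canvas to the
# target image and paint wherever the mismatch is largest.

def total_error(canvas, target):
    """Sum of absolute tone differences between canvas and target."""
    return sum(abs(c - t) for row_c, row_t in zip(canvas, target)
               for c, t in zip(row_c, row_t))

def paint_with_feedback(target, strokes):
    h, w = len(target), len(target[0])
    canvas = [[0.0] * w for _ in range(h)]  # start from a blank canvas
    for _ in range(strokes):
        # "Look": find the worst-matching cell via visual feedback.
        y, x = max(((y, x) for y in range(h) for x in range(w)),
                   key=lambda p: abs(canvas[p[0]][p[1]] - target[p[0]][p[1]]))
        # Imperfect stroke: move 80% of the way toward the target tone.
        canvas[y][x] += 0.8 * (target[y][x] - canvas[y][x])
    return canvas

target = [[0.0, 1.0], [0.5, 0.2]]
canvas = paint_with_feedback(target, strokes=12)
# Each look-then-paint pass shrinks the remaining error.
```

The physical robot works with brushes, pigment and a camera rather than a grid of numbers, but the loop is the same: observe the work in progress and let the observed error drive the next stroke.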

“In order to make the robot paint more aesthetically, we consulted a Taiwanese professional artist, Chin-Yi Zhong, for some unbiased advice of the aesthetics and painting methodology,” shared Team TAIDA.

Pikazo App

Amateur artists don’t need to be able to build a robot to get creative with AI; they can just download Pikazo.

The app, which is described as “a collaboration between human, machine and our concept of art,” takes any image and re-imagines it in a chosen style. For example, a user could combine a selfie with one of the app’s Picasso styles.

Pikazo is far from just a filter app, though. It actually analyzes an image for general content (facial features, background objects) before taking in the smaller stylistic details.

“It’s a non-traditional application of neural networks where the network is used to ‘hallucinate’ a desirable image,” said Stiefvater, explaining that “desirable” is a quality that’s determined by the network based on the content it analyzes.

The app then uses this information to decide how to apply a style, paying attention to the relationships between the various elements.
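A standard way neural style-transfer systems — the family of techniques apps like Pikazo build on — capture those relationships is the Gram matrix: a table of how strongly each pair of features co-occurs, independent of where they appear in the image. This minimal sketch (assuming tiny hand-made feature maps rather than real network activations) computes one and scores the style match between two images:

```python
# Sketch of the style representation used in neural style transfer:
# a Gram matrix records how strongly each pair of features co-occurs,
# capturing relationships between elements rather than their positions.

def gram_matrix(features):
    """features: list of feature maps, each a flat list of activations."""
    n = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def style_loss(gram_a, gram_b):
    """Mean squared difference between two Gram matrices."""
    n = len(gram_a)
    return sum((gram_a[i][j] - gram_b[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

# Two hypothetical feature maps ("edges" and "texture") over a 4-pixel image.
style_feats = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
result_feats = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
print(style_loss(gram_matrix(style_feats), gram_matrix(result_feats)))  # 0.0
```

In a full system, the network iteratively adjusts the output image to drive this style loss down while a separate content loss keeps the subject — the selfie — recognizable.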

Pikazo CEO Noah Rosenberg said in an interview with The Creators Project that this phenomenon is called “training the network.”

“I think of it like the pins in a Plinko machine,” he said. “The subject image pixels go through this maze and bounce around and hopefully fall into buckets that match the style image details.”

Credit: Pikazo Lead Programmer Ryan McNeely

While these AI-focused projects can be helpful tools for artists exploring their creativity, they raise a question: Will the next “Mona Lisa” be created independently by an artist or powered by ones and zeros?

“I believe that our future artistic masterpieces will be created by using advanced – possibly artificially intelligent – software tools,” said Stiefvater, noting that creatives have long used tools to further their work.

“But I don’t think that’s much different than painters today using paints manufactured at a factory instead of mixing their own recipes as they did in the Renaissance.”
