Adam, Eve, and the Apple of Intelligence

If making—and appreciating—art makes us human, what happens when we get help making a masterpiece from something unhuman?

Playing in harmony? / Illustration by Gosia Herba

In one of Swedish artist Pierre Brassau’s paintings from 1964, whorled, cobalt blue channels on a sunset-orange canvas are bifurcated by a purposeful vertical magenta line, suggesting a craving for order in a post-war world.

“Pierre Brassau paints with powerful strokes, but also with clear determination,” declared one of many critics fawning over the four paintings by the as-yet-unseen new abstractionist at an exhibit in Gothenburg, Sweden. Just one critic remained unimpressed, claiming “only an ape could have done this.”

They were right. For Pierre was really Peter, a 4-year-old chimpanzee who lived in Sweden’s Borås Zoo. Peter reportedly preferred eating the oil paints to painting with them. His favorite color was cobalt blue, for its tart flavor.

The Peter/Pierre prank was pulled off by journalist Åke Axelsson, who wanted to test the prowess of snooty art critics claiming to be able to distinguish between good and bad abstract art.

Perhaps sheepishly, the critic who praised Pierre’s powerful strokes maintained the Brassaus were the best in the exhibition. One even sold for the equivalent of $700 in today’s dollars after the hoax was revealed.

If beauty—and art for that matter—is in the eye of the beholder, who cares if it was best captured by something other than human?

Apparently, humans care.

The artist Brassau at work. / Image courtesy Wikimedia Commons

Fast forward more than half a century and the debate has swapped apes for Apple. In a 2017 experiment out of Rutgers University, Facebook, and the College of Charleston, researchers were surprised that most participants preferred computer-generated artwork over the human-made stuff, calling it more novel, complex, and surprising. What’s more, the majority could not distinguish between pieces made by humans and those made by artificial intelligence (AI) programmed to review 80,000 paintings from the last few hundred years and then to generate new visuals and styles.

Not every participant was thrilled by the duping. “Humans have a strong bias against thinking about computers as being creative,” says app developer and Santa Clara University Computer Science and Engineering Assistant Professor Maya Ackerman in the Proceedings of the National Academy of Sciences journal. This is because art creation is seen as a strictly human venture. 

Our ability to be creative, in other words, is what makes us human. Art gives us meaning.

But Ackerman and her colleagues in the emerging field of computational creativity do not view the issue in such black-and-white terms. Instead, they envision a future where AI is not pooh-poohed in the (often snooty) art world but celebrated for its creative capacities.

Invited to speak at the UN AI for Global Good Summit in Geneva this summer, Ackerman says she focused on the positive impacts creative computers can have on society. “We always talk about AI creating a better world, but most of us imagine AI doing all the boring stuff—vacuuming, cooking, folding laundry—so we’re freed up to be artists, singers, and writers. This is so misguided,” she says. “If being human is being creative, if that’s our highest vision of how we spend our time, AI can help with that. It can help us engage in the stuff that makes us happy. It can help us be more human.”

Still, questions remain: If computers can tap into the mysterious creative process, if an algorithm can create work deemed more beautiful than its solely human-made counterparts, and if this technology becomes widely accessible, enabling more and more humans to be creative, who gets credit? Who has the talent: the computers or a bunch of primates?

<The Infinite Loop>

“On your desk, do you have an inkwell and blotting paper? No? Good lord, why not?” exclaims SCU Associate Professor of Computer Science and Engineering Ahmed Amer. In mock exasperation, Amer spotlights the anxiety humans feel over being replaceable by new technology. Using word processing software instead of a quill does not make this reporter less of a writer, he insists. “There’s this feeling many have that if anyone [or anything] can do a thing, then it’s no longer a valuable skill. I believe they’re wrong.”

Amer recounts a story from his first job out of college in which he was tasked with creating promotional materials for an event.

Picture of the artist as a young chimp. /Illustration by Gosia Herba

“Commissioning an artist would have cost us thousands of dollars and taken much longer. So instead we slapped something together in Photoshop on a shoestring budget,” he says. “It’s not going to hang in the Louvre,” but it got the job done.

To be sure, there will be casualties. Professional calligraphers, for example, aren’t getting as many commissions as they did 200 years ago. Art has always been pervious to technological advancement, but that doesn’t destroy the need for human creative input, Amer says. “When people talk about AI and creativity, the way I think about it is, we’re just creating better brushes.”

Photography is a prime example of this art-making evolution. At first, photography was dismissed because it came from a machine versus human hands. And many artists viewed it as a threat to their mediums.

Though it did replace some things, such as painted portraiture as a marker of social hierarchy, photography was ultimately hugely beneficial to the arts. It breathed new life into an old art form, allowing painters to abandon the pursuit of hyper-realism popular in the mid-1800s and explore other avenues of expression. See: the Impressionists.

Each time a new art form is introduced, there is a period of revolt before acceptance and eventual embrace. And the cycle continues ad nauseam. Photography was followed by film, which was followed by animation, which was followed by generative art. That last category is art created in part by a nonhuman autonomous system. Think of Electric Sheep, the ever-morphing neon blobs that appear when your computer goes to sleep, created by open source code developed by mathematician and software artist Scott Draves.

Electric Sheep in action. / Video courtesy electricsheep.org

“Wherever there is controversy in AI as an artistic tool, I predict the same trajectory,” writes Aaron Hertzmann, principal scientist at Adobe Research in San Francisco, in his 2018 article “Can Computers Create Art?” in the journal Arts. “Eventually, new AI tools will be fully recognized as artists’ tools.” Ultimately, Hertzmann concludes that computers, while useful tools of artistic expression, are not artists. The humans programming the computers are.

Take Harold Cohen. In the early 1970s, the classically trained British painter began displaying artwork produced by his computer program AARON. Though the artwork AARON produces has become more complex over the decades, AARON cannot change styles based on whim or learn new imagery without a human writing new code for it.

But what about the Painting Fool, a computer program started by University of London computational creativity professor Simon Colton? The Painting Fool’s website is narrated in the first-person by the program, which calls itself an “aspiring painter” aiming to be taken seriously as “a creative artist” that gets to sign its own work. It will do so, it claims, by exhibiting behaviors deemed skillful, appreciative, and imaginative in its artwork.

The field of computational creativity lingers between the lines drawn by the two algorithmic artists.

In fast-forward: See the Painting Fool at work. / Video from thepaintingfool.com

In the new textbook series Computational Synthesis and Creative Systems, published in 2019, computational creativity is defined as an emerging branch of AI that explores the potential of computers to be more than tools and become creators and co-creators in their own right.

Amer cautions against reading too much into that, though, lest you fall down a deus ex machina rabbit hole. “Have you come across the term ‘artificial general intelligence’? If you do, run the other way,” he says. “It’s talking about when computers will think and do like humans. It belittles computers, and it belittles us because we don’t know [the limit of] what we can do yet.”

<What Do We Know?>

Consider the machine learning technique within AI called a neural network. It’s an algorithm modeled after the human brain and designed to recognize patterns. But neuroscientists still know very little about the mind. Despite grand advancements in the field, there’s still a seemingly infinite number of pathways to explore.
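The pattern-recognition idea can be made concrete with a single artificial neuron, the building block of such networks. Below is a minimal Python sketch of one neuron learning to recognize a simple pattern (logical AND) via the classic perceptron rule; the data and update scheme are chosen purely for illustration, and real networks stack thousands of such units:

```python
# One artificial "neuron" learning a pattern (logical AND) with the
# classic perceptron update rule -- a toy illustration of the
# pattern-recognition idea behind neural networks.

def train_neuron(samples, epochs=20):
    """Nudge the weights toward each example the neuron misclassifies."""
    w = [0, 0]  # one weight per input
    b = 0       # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # 0 if correct; otherwise +1 or -1
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

# Four input/output examples of the AND pattern.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

After training, the neuron classifies all four examples correctly; what it has "learned" is just a weighted boundary, which is why the gap between this and human understanding remains so wide.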

So with AI, it seems, we’ve identified the treasure before plotting the map.

We don’t know which areas of the brain give rise to understanding and creativity, explains neuroscientist John E. Dowling in his 2018 book Understanding the Brain: From Cells to Behavior to Cognition. We definitely don’t know what consciousness is.

Part of the issue is our estimation of creativity—and, by association, art and beauty and aesthetic appreciation—has evolved as humans have evolved, modifying and transforming since cave people carved figures into their cave walls.

Santa Clara electrical and computer engineering professor Aleksandar Zecevic, who studies the nature of truth and beauty, as well as interdisciplinary aesthetics, says, “First of all, the whole idea of beauty being completely relative, I think, is wrong simply because all of us have certain built-in preferences.” This is because humans have collectively been exposed to certain forms occurring in nature for millions of years, so those forms resonate with us. 

There’s a famous saying about art Zecevic uses to describe this. He cannot remember who said it, but it’s along the lines of, “It’s not that I know what I like, it’s that I like what I know.” But while there are certain things most humans are drawn to, because we’re anatomically built the same way, the other half of the equation isn’t so simple. “Your impression of something also depends on your experiences and your emotional status,” he says. “So you and I can look at exactly the same painting and have completely different impressions.”

This is where computers are lagging behind humans, he says. Programmers can feed AI a bunch of information about those things that have been proven favorable or beautiful to most humans but the computer can only spit out more of the same. “Humans are going to sometimes produce something totally new and different.”

And now for something completely different: Pablo Picasso’s genius can be seen in the cubist movement, one that looks at the world in an entirely new way. Here is fellow cubist Juan Gris’s portrait of Picasso. / Image courtesy Google Art Project

Take Cubism. For hundreds of years, perspective was immovable in art. “There was this absolute point of reference from which to look at the world,” Zecevic says. Then, at the turn of the 20th century, in walks Picasso, who throws perspective in the trash. Or rather, shifts it slightly. All of a sudden, we could see “multiple views at the same time, like a broken mirror.”

There is a parallel spark in the realm of science around the same time, he says, with the rise of quantum mechanics and Einstein’s theory of relativity. Now, there are no absolutes. “There are multiple points of reference, just as in art, and they’re all equally valid.” What’s more, he says, these movements were born not of reaction but of intuition, yet another very murky, very human quality. We (and we alone) are capable of imagining what’s possible.

Pretty things can come from computers, Zecevic says—those things we’ve all agreed are beautiful over 200,000 years of modern human existence. But that only comprises the tip of the iceberg, the stuff we can see and know.

The new stuff, the stuff of creativity, is going to come from the unconscious mind that lies beneath the surface, where imagination, intuition, and shifting perspective are hidden from the light.

Not everyone sees it exactly this way, of course. Rafael Pérez y Pérez, a professor at Universidad Autónoma Metropolitana in Mexico City and the chair of the international Association for Computational Creativity from 2014 to 2019, built MEXICA, a computer program that produced the first book of short stories written completely by AI.

The tales in MEXICA: 20 years-20 stories about the God-like ancient inhabitants of Mexico are perfunctory and repetitive, though sometimes the prose can be downright elegant and lyrical. “The princess woke up while the songs of the birds covered the sky,” begins one.

Pérez y Pérez explains in the afterword that MEXICA is a tool to better understand the creative writing process. It follows a theory popularized in the late 1990s called the ER Model, or engagement and reflection. Engagement refers to idea production (i.e., characters, plot, etc.) and reflection refers to evaluation and modification. MEXICA writes stories as a sequence of actions then reflects on what it’s written to ensure the actions are justified, and the resulting narrative is novel.
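In spirit, that engagement-reflection cycle can be sketched in a few lines of code. This is a toy illustration only, assuming a hand-built action table and a crude conflict-resolution check; MEXICA’s actual knowledge structures and evaluation criteria are far richer:

```python
# Toy engagement-reflection loop in the spirit of the ER Model.
# The actions, their preconditions, and the resolution check are
# invented for illustration.

ACTIONS = {
    "princess wakes": [],
    "princess meets warrior": ["princess wakes"],
    "warrior insults princess": ["princess meets warrior"],
    "princess forgives warrior": ["warrior insults princess"],
}

def engage(story):
    """Engagement: propose the next unused action whose preconditions hold."""
    for action, needs in ACTIONS.items():
        if action not in story and all(n in story for n in needs):
            return action
    return None

def reflect(story):
    """Reflection: accept the story only once its conflict is resolved."""
    conflict = "warrior insults princess" in story
    resolution = "princess forgives warrior" in story
    return (not conflict) or resolution

def write_story():
    """Alternate engagement and reflection until the tale can end."""
    story = []
    while True:
        nxt = engage(story)
        if nxt is None:
            break
        story.append(nxt)
        if reflect(story) and len(story) >= 3:
            break
    return story
```

Running `write_story()` produces a four-beat tale that ends only after the insult is forgiven, mirroring the model’s rule that a story is finished when its conflicts are resolved.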

SCU adjunct lecturer in creative writing and author David Keaton says this model makes sense, to a point. In recording the number of times an action has been employed in previous stories, Keaton says MEXICA “does sound a lot like scouring a manuscript for redundancies, variety, etc.” Also, the ongoing rating of narrative flow and coherence, narrative structure, content, and suspense will ring familiar to anyone who’s taken an intro to creative writing class.

But for Keaton, what really separates man from machine is the ability to end the creative process. MEXICA determines a story is finished by ensuring all plot conflicts are resolved. If only Shakespeare could walk away so easily. “Maybe the unease and lack of satisfaction by the artist in the art they produce, no matter how many revisions or reflections, is the one thing that will distinguish a computer creator from a human one,” Keaton says.

<My Coworker, the Computer>

From 2013 to 2016, the executive branch of the European Union funded an international action called PROSECCO, to “Promote the Scientific Exploration of Computational Creativity.” Anchored in the belief that computers can be more than facilitators of human creativity—in the sense that Photoshop facilitates a graphic designer’s vision for a new Nike swoosh—PROSECCO envisions a future in which computers can rise to the level of co-creators that share responsibility with a human peer.

This view is not shaped by a desire to replace humans with machines, nor by a perceived lack of human creativity in the marketplace, but by the very fact that large chunks of human creativity remain unknown and therefore untapped. Researchers in the field agree that this ambitious vision will take years and years to realize.

But while the world awaits the coming of a fully computerized artist that can not only paint or write a story or play an instrument, but also evaluate its results and explain how the art was created, computational co-creativity has already arrived.

Robot jam: When AI helps make music, who gets credit? / Illustration by Gosia Herba

“I believe that in many ways co-creativity is a lot more exciting than creative machines who don’t need us,” SCU’s Ackerman says. “Collaboration with creative machines offers the opportunity to take human creativity to new heights, elevating our humanity, and bringing joy to the lives of many.” 

Coming from this belief that computers can be valuable members of the team versus machines gunning for world domination, in early 2019 Ackerman and her team at WaveAI released ALYSIA, an app that harnesses AI to write original songs. A classically trained opera singer, Ackerman came up with the idea while earning her Ph.D. in computer science and attempting to write her own music as a hobby. 

“I could sing, I could write lyrics, I learned how to be a producer, all that stuff, but the natural language of the music wasn’t gelling,” she says. “I couldn’t figure out how to create music that fit the lyrics.” Figuring she could use what she knew about AI to solve her songwriting woes, Ackerman created a machine learning model for vocal melodies. 

Fed a diet of thousands of songs—from many different artists in many different styles—the computer learned what melodies would work best with what syntax, exponentially expediting the songwriting process. Give it a phrase, and ALYSIA will spit out several melodies to choose from. 
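As a rough sketch of that interaction, and emphatically not WaveAI’s actual model, one can imagine a small table of “learned” note transitions being sampled once per word of a lyric to produce several candidate melodies; the transition table and note choices below are invented for illustration:

```python
# Toy illustration of lyric-to-melody suggestion: count the words in a
# phrase, then emit a few candidate pitch sequences by walking a tiny
# table of "learned" scale-degree transitions. The table and its
# contents are invented; ALYSIA's real model is a trained
# machine-learning system.
import random

# Invented preferences: which scale degrees tend to follow which.
TRANSITIONS = {
    0: [0, 2, 4],   # from the tonic, repeat or leap to the third/fifth
    2: [0, 2, 4],
    4: [2, 4, 5],
    5: [4, 5, 7],
    7: [5, 7],
}

def suggest_melodies(phrase, n=3, seed=42):
    """Return n candidate melodies (scale-degree lists), one note per word."""
    rng = random.Random(seed)  # seeded so suggestions are reproducible
    length = len(phrase.split())
    melodies = []
    for _ in range(n):
        melody = [0]  # start each candidate on the tonic
        while len(melody) < length:
            melody.append(rng.choice(TRANSITIONS[melody[-1]]))
        melodies.append(melody)
    return melodies
```

Calling `suggest_melodies("hope and peace will find us here")` yields three seven-note candidates for the user to choose from, echoing the pick-a-melody workflow described above.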

Speakers on! Hear AI-assisted composing in action. / Video from YouTube

Today, ALYSIA is available for free in the App Store (with an optional upgrade for a small fee to access more personalization and editing capabilities). The lyrics assistant generates editable lines based on a user’s chosen themes. Jonesing for a sappy ballad? Enter “love” and “time,” and ALYSIA might suggest, “Hope and peace will find us here,” for the first line. The melody partner then transforms those lyrics into original tunes, based on the chosen genre such as country or R&B. Once a hit is written, the user or a robot sings and records the final song.

ALYSIA cuts through the static that often fogs the human brain: self-flagellation, doubt of one’s own talent, merciless writer’s block. “Machine learning inverts everything,” Ackerman says. “You give it a song and it asks, ‘What happened here?’ It doesn’t rely on human introspection—it just looks at the results and learns the essence of writing music.”

Before releasing the app to the public, Ackerman teamed up with James Morgan, an instructor of digital media art at San Jose State University, to record an operatic aria using an Italian version of ALYSIA trained on the music of Puccini. Its name, of course, is Robocini. 

The opera takes place in the World of Warcraft, an online virtual game populated by players around the world, and is sung by a new mother who’s returning to her job as a raider. Morgan wrote the lyrics in English, which were then translated by a student who speaks Italian. Robocini composed the music notes, and Ackerman sang. 

A raider—and new mother—flying home to her family is in awe of the life she sees below her in this segment of an opera performed in World of Warcraft, sung by Maya Ackerman and composed with the help of an artificial intelligence called Robocini. / Video courtesy James Morgan

Having previously worked with human musicians to score a musical, Morgan says this process felt much more collaborative despite there being a non-human on the team. “It was like a black box. I would tell the live musicians what I’d like for one of the songs or give them lyrics, and then they would come back with this music that was completely done,” he says. “The process was so opaque. I didn’t have any real sense for what was going on.” 

But working with Robocini, he learned something. “I started to pick up kind of an intuitive sense of being able to read music a little bit.”

As for whether he ever felt his artistic vision was challenged or musical talent threatened by an unfeeling robot, Morgan maintains the opposite occurred. Robocini brought a deep level of expertise to the table, “whereas I didn’t have the time or energy to go off and study how to write an opera,” he says. “By bringing in a collaborator, and in this case a collaborator that sits and works quietly on my laptop, I won’t say [the process] was easy but it made it very light.” In other words, Robocini made him more creative.

<Democratizing Creativity>

Technology throws wide open the door to the world of art. Those who would never have had access to the tools of creation just a few decades ago now do, says Brian Smith, VR specialist and director of the SCU Imaginarium. Donning a headset and using touch controllers to “sculpt” in another dimension what appears to be an elephant head with the VR tool Quill, Smith says, “I do one virtual painting a day whereas if I was painting in oil, it would take me forever.”

Plus, these tools eliminate at least the immediate need for a physical medium, which can be hard to work with, expensive, and time-consuming. Imagine sculpting a life-sized elephant head out of clay. In this way, VR and other computer tools free up the artist to be more creative, to dare to imagine the previously impossible. “If you spend your life making one masterpiece song or painting, I’m not sure what that’s accomplishing,” Smith says, seemingly forgetting that Michelangelo painted the Sistine Chapel for years and sculpted the Pietà without the use of VR. “Using these tools helps you explore more.”

Still, he’s not wrong about the empowerment that accompanies the technological democratization of artistic tools once only available to artists with a capital A.

It’s early photography all over again. The second version of the Brownie, introduced in 1901, produced decent snapshots and cost $2. Suddenly, everyone could make photographs, whether or not they had training or “an eye” for it. Today, cameras have been replaced with the smartphone, and anyone can make a pretty good photo, thanks to easy-to-use, built-in filters.

As AI has revolutionized and leveled the playing field, humans will inevitably ask “Is it going to change our perception of [an] art form? Are we going to start rethinking the value of it?” asks David Ayman Shamma, a Bay Area computer scientist who served as director of research at Yahoo! Labs and Flickr.

When creativity is democratized, what becomes of talent—or, rather, our perception of it? If we’ve arrived in a world where anyone can take a decent photo, or write a song, or sculpt an elephant head in a virtual world, will the next Beethoven or Van Gogh find the footing to rise above?

Shamma, for one, isn’t too worried about it. “I play guitar. I’m not great but I learned enough where I can do something and be happy with it,” he says, noting that playing music or being otherwise creative is a matter of personal fulfillment. Plus, there’s no real correlation between the increased production of art and it being “good” art.


Humans, after all, don’t need computers to tell us whether we like or dislike something. We excel at doling out judgment.

The real stars will continue shining the brightest, says SCU’s Ackerman, despite democratization. “They’re just going to have a few more tools helping them out.” She points to the music software GarageBand, which upset many musicians when it first came out because it produced songs without the need for live instruments. “But then it opened up the door to new art forms. … It ultimately took us to new levels,” she says. “When technology helps you with a creative task, you can focus on other things. And people are so creative—it’s not like we’re going to stop.”

By debating the democratization of creativity, we may be losing sight of the benefits of, you know, actually creating. “A lot of times people have misconceptions about creating art and who is worthy of it,” says SCU psychology major Kyra Sjarif ’14, now an art therapist at an in-patient rehabilitation center in Philadelphia. “From my perspective, creativity is something that everyone has access to. And it’s not necessarily about production. It’s about the doing.”

Flow is something Sjarif talks a lot about—that experience of being so completely present and focused in the creative process that we lose track of time. “There’s a lot of inherent value in the process,” she says. Instead of worrying about whether this painting or song will be a masterpiece, we should relish the “intrinsic joy” that’s sparked while painting or singing.

A portrait of Edmond De Belamy created by AI, alongside those of the fictional Belamy family. / Image courtesy Christie’s

When the AI-generated “Portrait of Edmond De Belamy” was auctioned at the famed New York auction house Christie’s for a staggering $432,000 in 2018, who rejoiced? Surely not the credited artist, which signed the print of the blurry, lifeless-eyed Edmond with a line of code from its algorithm. Though the three humans who make up the Paris-based arts collective Obvious that produced Edmond were likely thrilled by the cash infusion. 

But certainly, the promise of monetary returns is not enough to fuel the desire to create. It wasn’t for Van Gogh and it wasn’t for Congo the Chimp, the London Zoo’s artistic wunderkind in the 1950s. Congo was prolific—painting more than 400 canvases in frenetic, bold, Pollock-esque splatters—and his art was beloved by famous human artists including Picasso, Miró, and Dalí.

In 2005, three of his tempera paintings were sold at auction for nearly $26,000.

Sadly, Congo died of tuberculosis in 1964, but it’s safe to assume he would not have cared about his newfound wealth.

In an old newsreel covering one of his exhibits, Congo is filmed hard at work. Sitting behind a desk, he dips a brush in a small pot of paint, carefully brings the brush to his puckered lips for a quick taste of tempera, and begins wildly painting large swaths of bright paint on a dark canvas. He looks to the camera on occasion and appears content, happy even. The primate was in the flow.
