Don’t ask if AI can make art — ask how AI can be art

If you’re yearning for a fistfight with an artist, one simple phrase should do the trick: AI can do what you do.

The recent explosion of chatbots and text-to-image generators has prompted consternation from writers, illustrators, and musicians. AI tools like ChatGPT and DALL-E are extraordinary technical accomplishments, yet they seem increasingly purpose-built for producing bland content sludge. Artists fear both monetary loss and a devaluing of the creative process, and in a world where “AI” is coming to mean ubiquitous aesthetic pink slime, it’s not hard to see the source of the concern.

But even as their output tends to be disappointing, AI tools have become the internet’s favorite game — not because they often produce objectively great things but because people seem to love the process of producing and sharing them. Few things are more satisfying than tricking (or watching someone trick) a model into doing something naughty or incompetent: just look at the flurry of interest when xAI released an image generator that could make Disney characters behave badly or when ChatGPT persistently miscounted the letter “r” in “strawberry.” One of the first things people do with AI tools is mash together styles and ideas: Kermit the Frog as the Girl With a Pearl Earring, a Bible passage about removing a sandwich from a VCR, any movie scene directed by Michael Bay.

Despite artists’ concerns about being replaced by bad but cheap AI software, a lot of these words and images clearly weren’t made to avoid paying a writer or illustrator — or for commercial use at all. The back-and-forth of creating them is the point. And unlike promises that machines can replace painters or novelists, that back-and-forth offers a compelling vision of AI-based art.

No Man’s Sky is one of countless games to amplify a human designer’s choices with non-“AI” procedural generation.
Image: Hello Games

Art by algorithm has an extensive history, from Oulipo literature of the 1960s to the procedural generation of video games like No Man’s Sky. In the age of generative AI, some people are creating interesting experiments or using tools to automate parts of the conventional artistic process. The platform Artbreeder, which predates most modern AI image generators, appealed directly to artists with intriguing tools for collaboration and fine-grained control. But so far, much of the AI-generated media that spreads online does so through sheer indifference or the novelty factor. It’s funny when a product like xAI’s Grok or Microsoft’s Bing spits out tasteless or family-unfriendly pictures, but only because it’s xAI or Microsoft — any half-decent artist can make Mickey Mouse smoke pot.

All the same, there’s something fascinating about communicating with an AI tool. Generative AI systems are basically huge responsive databases for sorting through vast amounts of text and images in unexpected ways. Convincing them to combine those elements for a certain outcome produces the same satisfying feeling as building something in a video game or feeling the solution to a puzzle click. That doesn’t mean it can or should replace conventional game design. But with deliberate effort from creators, it’s the potential foundation of its own interactive media genre — a kind of hypertext drawing on nearly infinite combinations of human thought.

In a New Yorker essay called “Why A.I. Isn’t Going to Make Art,” the author, Ted Chiang, defines art as “something that results from making a lot of choices,” then as “an act of communication between you and your audience.” Chiang points out that lots of AI-generated media spreads a few human decisions over a large amount of output, and the result is bland, generic, and intentionless. That’s why it’s so well suited for spam and stock art, where the presence of text and images — like eye-catching clip art in a newsletter — matters more than what’s actually there.

By Chiang’s definitions, however, I’d argue some AI projects are clearly art. They just tend to be ones where the art includes the interactive AI system, not simply static output like a picture, a book, or pregenerated video game art. In 2019, before the rise of ubiquitous generative AI, Frank Lantz’s party game Hey Robot prodded players to examine the interplay between voice assistants and their users, using the simple mechanic of coaxing Siri or Alexa into saying a chosen word. The same year, Latitude’s AI Dungeon 2 — probably the most popular AI game yet created — presented an early OpenAI text model fine-tuned to mimic a classic text adventure parser, capable of drawing on its source material for a pastiche of nearly any genre and subject matter.

More recently, in 2022, Morris Kolman and Alex Petros’ AYTA bot critiqued the hype around AI language models, offering a machine-powered version of Reddit’s “Am I the Asshole?” forum that would respond to any question with sets of fluent but entirely contradictory advice.

An early experience with AI Dungeon 2, which used OpenAI’s GPT-2 to build an infinite adventure game. This is a custom scenario I created in 2019.

In all of these cases, work has gone into either training a system or creating rules for engaging with it. And interactivity helps avoid the feeling of bland aimlessness that can easily define “AI art.” It draws an audience into the process of making choices, encouraging people to pull out individual pieces of a potentially huge body of work, looking for parts that interest them. The AYTA bot wouldn’t be nearly as entertaining if its creators just asked a half-dozen of their own questions and printed out the results. The bot works because you can bring your own ideas and see how it responds.

On a smaller scale, numerous AI platforms — including ChatGPT, Gemini, and Character.AI — let people create their own bots by adding commands to the default model. I haven’t seen nearly as much interesting work come out of these, but they’ve got potential as well. One of AI Dungeon’s most interesting features was a custom story system, which let people start a session with a world, characters, and an initial scenario and then turn it loose for other people to explore.
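
To make the mechanism concrete: on most of these platforms, a “custom bot” is essentially a set of standing instructions layered on top of the default model. Below is a minimal sketch of that idea in Python using OpenAI’s chat API; the model name and scenario text are illustrative placeholders, and real platforms wrap far more scaffolding around the same core loop.

```python
# A minimal sketch of a "custom bot": standing instructions (a system
# prompt) layered on top of a default chat model, in the spirit of
# AI Dungeon's custom scenarios. The model name and scenario text are
# illustrative placeholders, not anyone's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = (
    "You are the narrator of a text adventure set in a drowned city. "
    "Write in second person, end every reply by asking for the "
    "player's next action, and never break character."
)

history = [{"role": "system", "content": SCENARIO}]

while True:
    player_turn = input("> ").strip()
    if not player_turn:  # empty input ends the session
        break
    history.append({"role": "user", "content": player_turn})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Everything that makes such a bot feel distinct lives in those standing instructions, which is why platforms can offer endless customization without retraining the underlying model.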

Some output from these projects could be compelling with no larger context, but it doesn’t need to be. It’s a bit like the stories produced by tabletop game campaigns: sure, some authors have spun their Dungeons & Dragons sessions into novels, but most of these sagas work better as shared adventures among friends.

Now, is any of this true art, you might ask, or is it merely entertainment? I’m not sure it matters. Chiang dismisses the value of generative AI for either, defending the craft required for supposedly lowbrow genre work. Movements like pop art weakened the distinctions between “high” and “low” art decades ago, and many of AI art’s most vocal critics work in genres that might dismissively be dubbed “entertainment,” including web comics and mass-market fiction. Even Roger Ebert, who famously insisted the medium of video games could never be art, later confessed he’d found no great definition for what art was. “Is (X) really art?” is usually a debate about social status — and right now, we’re talking about whether AI-generated media can be enjoyable.

If some people are creating interesting interactive AI art projects, why isn’t the conversation about AI art focused on them? Well, partly because they’re also the riskiest kinds of projects — and the ones AI companies seem most hesitant to allow.

ChatGPT might have incidental game-like elements, but companies like OpenAI tend to dourly insist that they aren’t making creative or subjective human-directed systems. They represent their products as objective answer machines that will enhance productivity and maybe someday kill us all. Leaving aside the “kill us all” part, that’s not an unreasonable move. In a high-interest-rate world, tech companies have to make money, and bland business and productivity tools probably seem like a safe bet. Granted, many AI companies still haven’t figured the money part out, but OpenAI is never going to fulfill the promise of its valuation by selling a product that makes experimental art.

After years of facing little accountability for their content, tech platforms are also being held socially, if not necessarily legally, responsible for what users do with them. Letting artists push a system’s boundaries — something artists are known for — is a real reputational risk. And although current AI seems nowhere near true artificial general intelligence, the apocalyptic warnings around AGI make the risks seem higher-stakes.

Yet the upshot is that sophisticated AI models seem designed to squash the possibility of interesting, unexpected uses.

Most all-purpose chatbots and image generators have imperfect but intense guardrails: ChatGPT will refuse to explain the production of the Torment Nexus, for instance, on the grounds that a nonexistent sci-fi technology from a tweet might hurt someone. They’re geared toward producing the maximum amount of content with the least amount of effort; Chiang mentions that artists who devise painstaking ways to get fine-grained control have gotten less satisfying results over time, as companies fine-tune their systems to make sludge.

This makes sense for tools designed for search and business use. (Whether AI is any good for these things is another matter.) But big AI companies also crack down on developers who build interactive tools they deem too unsettling or risky, like game designer Jason Rohrer, who was cut off from OpenAI’s API after a user of his Project December service re-created a deceased fiancée as a chatbot. OpenAI bans (albeit often ineffectually) users from making custom GPT bots devoted to “fostering romantic companionship,” following a wave of concern about boyfriend and girlfriend bots destroying real-life romance. Open-source AI — including Stability AI’s Stable Diffusion, Meta’s Llama, and Mistral’s large language models — offers one potential solution. But many of these systems aren’t as high-profile as their closed-off counterparts and don’t offer simple starting points like custom bots.

Interactive tools might be the most interesting path for AI art, but they’re by far the riskiest

No matter what model they’re using, people making interactive tools can unintentionally end up in nightmare scenarios. Interactive art requires ceding some power to an audience, accepting the unexpected in a way the creators of novels and paintings typically don’t. Generative AI systems often push things a step further. Artists are also ceding power to their source material: the vast catalog of data used to train image and language models, typically at a scale no one human could consume.

Game designers are already familiar with the Time To Penis problem, where people in any multiplayer world will immediately rush to create… exactly what the name suggests. In generative AI systems, you’re trying to anticipate not only what unexpected things players will do but how a model — often rife with biases from its source material — will respond.

This problem proved nearly apocalyptic for AI Dungeon, which ran on OpenAI’s GPT models. The game launched with expansive options for roleplaying, including sexual scenarios. Then OpenAI learned some players were using it to create lewd scenes involving underage characters. Under threat of being shut down, Latitude struggled to exclude these scenarios in a way that didn’t accidentally ban a whole slew of other interactions. No matter how many decisions artists and designers make while creating an interactive AI tool, they have to live with the possibility of those decisions being overruled.

All the while, some AI proponents have approached the art world more like bullies than collaborators, telling creators they’ll have to use AI tools or become obsolete, dismissing concerns about AI-generated art scams, and even trying to make people give companies their private work as training data. As long as the people behind AI systems seem to revel in knocking artists down a peg, why should anyone who calls themselves an artist want to use them?

The collaborative AI platform Artbreeder, which invites artists to remix each other’s work, predates most large-scale AI image generators.
Image: @adoxa / Artbreeder

AI-generated illustrations and novels tend to feel like pale shadows of real human effort so far. But interactive tools like chatbots and AI Dungeon are producing a clearly human-directed experience that would be difficult or impossible for a human designer to manage alone. They’re the most positive future I see for artificial intelligence and art.

Given the high-profile hostility between creatives and AI companies, it’s easy to forget that the recent history of machine-generated art is full of artists: people like Artbreeder creator Joel Simon, the comedians behind Botnik Studios, and the writer / programmers participating in the annual (and still ongoing) National Novel Generation Month. They weren’t trying to make themselves obsolete; they were using new technology to push the boundaries of their fields.

And interactive AI art has one more unique benefit: it’s a low-stakes place to learn the strengths and limitations of these systems. AI-powered search engines and customer service bots promise a command of facts and logic they demonstrably can’t deliver, and the result is bizarre chaos like lawyers filing briefs full of fake cases invented by ChatGPT. AI-powered art, by contrast, can encourage people to think of these tools as experiences shaped by humans rather than mysterious answer boxes. AI needs artists — even if the AI industry doesn’t think so.
