What is the happiest dog you can imagine? Is it beaming with joy on a celestial plane or frolicking in a field of psychedelic flora?
If those images are hard to conjure, have no fear, or perhaps a healthy dose of it: Artificial intelligence can vivify even the most absurd scenarios in vibrant color, and on social media, some are seeing how far it can be pushed.
Though A.I.-generated images can often unsettle with their uncanny realism — think the pope in a Balenciaga puffer jacket — many are finding joy in a new form of low-stakes image tinkering. This fall, ChatGPT released an update that allowed people to enter prompts for more detailed images than before, and it wasn’t long before some began to push the chatbot to its limits.
In November, Garrett Scott McCurrach, the chief executive of Pipedream Labs, a robotics company, posted a digital image of a goose on social media with a proposition: “For every 10 likes this gets, I will ask ChatGPT to make this goose a little sillier.” As the post was liked tens of thousands of times, the goose went through a few growing pains.
The first update was fairly modest, giving the goose a colorful birthday hat and a broad smile befitting a Disney character. By the sixth prompt, however, it had grown a second pair of eyeballs, donned roller skates and been bathed in a collage of wavy light, brass instruments and ringed planets.
Previous versions of A.I. chatbots placed the onus on users to give detailed artistic directions. Mr. McCurrach, who uses A.I. in his work, said that using the latest iteration of ChatGPT was like “talking to someone else with the paintbrush.”
“I think that’s a really good example of where A.I. is going,” he said. “We can be a lot more vague; we can give it more of a vibe than a concrete idea. Then it can go and make the assumptions to get where it needs.”
No matter the starting point, the images all seem to end up more or less in the same place: in outer space, awash in psychedelic flourishes. While Mr. McCurrach’s extremely silly goose was among the first to take on an absurd transformation, many increasingly zany images have followed.
In one thread, a man fails to contain his awe at the power of nuclear energy, and ultimately finds himself split into dozens of clones, staring, mouth agape, on another plane of existence. Another depicts a puppy becoming so incredibly happy that it bounds into the cosmos before dissolving into a kaleidoscope of sacred geometry. In another, a chess pawn acquires such supernatural strength, and frightening sentience, that it looms over the board that once constrained it.
Space, Mr. McCurrach said, is at the outer limits of human understanding, and because A.I. is, on its surface, a collection of what we know, the edges of its imagination reflect our own.
“Look at Marvel movies,” he said. “They eventually got to outer space and time travel as the final frontiers of creativity.”
Eliezer Yudkowsky, an internet philosopher and self-taught A.I. researcher, watched as these images grew exponentially more absurd and wondered what the other extreme would look like.
Last month, he asked ChatGPT to draw him “a very normal image.” The chatbot spit out a picture of a banal suburban neighborhood. Pushed further, it produced images of a tidy desktop in a home office and then a white cup of coffee set against a blank wall. Finally, after a prompt for “terrifying normality,” it produced what it described as “a completely blank, featureless white canvas,” which it said “represents the very essence of ordinariness taken to its absolute limit.”
One takeaway, Mr. Yudkowsky said in an email, was that “the field of A.I. can’t ever walk all the way across a room without tripping over a deep question.”
Mr. Yudkowsky noticed that ChatGPT became defiant, lecturing him on the difficulty of defining “normalcy.” Mr. McCurrach hit a similar wall with the goose, with the chatbot claiming it had reached the “zenith of silliness.” They both decided on the same strategy to overcome the hurdle: argue. In each case, ChatGPT caved under pressure and ventured on.
As Mr. Yudkowsky sternly prodded the chatbot to create ever more “normal” images, commenters asked if he was being too hard on the defenseless program. (ChatGPT assures users that emotions and suffering are not part of its programming.)
“I think I wasn’t actually torturing some poor A.I. artist who could suffer,” Mr. Yudkowsky said. “But it’s not a good sign for our civilization that we don’t seem to have any way of knowing for sure.”