Have you ever wondered what would happen if you fed an AI its own generated material instead of human-made material? Here’s what happened in one study.

We have seen enormous strides in AI algorithms for generating images and text. Up to this point, AI algorithms have been trained on human-made data. But have you ever wondered what would happen if they were trained solely on their own data? It turns out that several researchers at Rice University and Stanford University wondered the same thing.

Above: Adobe Firefly Beta AI image. Robot typing on keyboard, an example of the artifacts caused by AI feeding on itself. Not for commercial use.

AI feeding on itself

It’s not much of a stretch to think that as we keep churning out more AI-generated images onto the internet, these will in turn inform future AI models. In other words, future AI models will increasingly be trained on synthetic data that they themselves created.

This creates what the researchers call a self-consuming (autophagous) loop. When that happens, biases and artifacts can be amplified. And the more AI feeds on its own output, the more biases and artifacts can be created.
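To make the idea concrete, here is a deliberately tiny simulation of such a loop, written in Python. This is my own toy sketch, not the paper’s experiment: the “model” is nothing more than a Gaussian fit to the previous generation’s samples.

import numpy as np

rng = np.random.default_rng(0)
n = 50  # small training set per generation, so the effect shows up quickly

# Start from "real" data: samples from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n)

# Fully self-consuming loop: each generation fits a toy "model" (just a
# Gaussian) to the previous generation's output, then samples its own
# training set for the next generation from that fit.
for gen in range(1, 201):
    mean, std = data.mean(), data.std()
    data = rng.normal(mean, std, size=n)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean={mean:+.3f}, std={std:.3f}")

# The fitted std tends to drift toward zero over the generations: the
# "model" slowly loses diversity even though no single step looks broken.

Each generation’s small fitting errors compound rather than average out, which is the essence of the loop the researchers studied, stripped to its simplest form.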

Upon reading this, I was reminded of how genetic inbreeding in royal families frequently led to birth defects, harmful DNA mutations, and reduced intelligence. The lack of genetic diversity was damaging.

Diversity

It turns out that diversity is truly a strength. With enough fresh real data in each generation, the quality and diversity of generative models do not degrade, even over many generations.

A self-consuming (autophagous) loop doesn’t fare so well. In fact, the Self-Consuming Generative Models Go MAD paper at one point states the following:

“The bottom line across all three autophagous loop models is that without enough fresh real data each generation, future generative models are doomed to go MAD.”

Self-Consuming Generative Models Go MAD
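That “fresh real data” condition is easy to see in the same toy sketch from above. If each generation’s training set mixes newly drawn real samples with the synthetic ones (a 50/50 split here, an assumption purely for illustration), the spread stays anchored near the truth:

import numpy as np

rng = np.random.default_rng(0)
n = 50

# Same toy Gaussian "model" as before, but each generation now trains on
# a mix of synthetic samples and fresh real data (a hypothetical 50/50
# ratio, chosen only for illustration).
data = rng.normal(loc=0.0, scale=1.0, size=n)
for gen in range(1, 201):
    mean, std = data.mean(), data.std()
    synthetic = rng.normal(mean, std, size=n // 2)
    fresh = rng.normal(0.0, 1.0, size=n - n // 2)  # newly drawn real samples
    data = np.concatenate([synthetic, fresh])
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean={mean:+.3f}, std={std:.3f}")

# With fresh real data injected every generation, the fitted std hovers
# near 1.0 instead of collapsing.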

Above: Cows in front of amusement park in India. This is a scan of a photographic print. Nikon D70 and Fuji Velvia Realia 35mm film.

MADness

The team coined the term Model Autophagy Disorder (MAD) for this condition. As you might guess, the name is also a tip of the hat to Mad Cow Disease. Their conclusion?

“Without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.”

Self-Consuming Generative Models Go MAD
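The quote’s mapping of quality to precision and diversity to recall can also be made concrete. One common way to operationalize these terms for generative models (not necessarily the paper’s exact metric) is nearest-neighbor coverage between real and generated samples; the coverage helper below is my own hypothetical stand-in:

import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for image features: real samples are spread out, while
# generated samples are high-quality but clustered (low diversity).
real = rng.normal(0.0, 1.0, size=(500, 2))
fake = rng.normal(0.0, 0.2, size=(500, 2))

def coverage(a, b, radius=0.5):
    """Fraction of points in `a` lying within `radius` of some point in `b`."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float((dists.min(axis=1) <= radius).mean())

precision = coverage(fake, real)  # are the generated samples realistic?
recall = coverage(real, fake)     # do they cover the whole real distribution?
print(f"precision={precision:.2f}, recall={recall:.2f}")

# Precision comes out near 1.0 while recall is much lower: the clustered
# fakes all look "real," but they miss most of the real distribution,
# which is exactly the loss of diversity that MAD describes.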

Society benefits from a richer palette of people, ideas, concepts, approaches, DNA, and more. It seems this is true of AI models as well.


Above: Adobe Firefly Beta AI image. Cows in field. Not for commercial use.

Telltale signs of AI-generated art

Many of us can spot some of the telltale signs in current AI-generated images. The obvious one is still the fingers. Current AI models struggle with the complexity of hands, including where to place the fingers, the length of the digits, and sometimes even the number of fingers on a hand.

There are other things, too. AI-generated art often has an almost “fantasy” feel, even when the images are supposed to be photorealistic.

And AI still struggles with other complexities, such as lettering, the keys on a piano, “dead-looking eyes,” and the intentionality of the various elements in a photo.

If AI models were fed a continual diet of their own images, artifacts such as these would continue to proliferate or possibly be amplified.


Above: Adobe Firefly Beta AI image. Robot typing on keyboard. Not for commercial use.

Additional thoughts

I have mostly written about images. However, MAD can occur with text- and video-based models as well.

I should also point out that the paper has not yet been peer-reviewed. It’s a new study, after all. Until it is peer-reviewed and others can replicate the findings with other AI models, we shouldn’t treat its conclusions as foregone.


Above: Adobe Firefly Beta AI image. Scary robots in post-apocalyptic world. Not for commercial use.

Regardless, the paper should make us question how useful AI models truly are without human input. If this paper is any indication, the answer is: not very. And for those who wonder whether AI will become sentient and go completely SkyNet on us, this may offer some measure of relief.


Above: Night selfie in a post-apocalyptic scene. This is a real long-exposure photo. However, night photography is particularly prone to people thinking it is “Photoshopped” or otherwise “fake.” And with the proliferation of AI-generated images, more people than ever will automatically assume that night photos like this are made by typing in a few prompts rather than with the skill and time it takes to create them.