Generative AI was trained on centuries of art and writing produced by humans.
But scientists and critics have wondered what would happen once AI was widely adopted and began training on its own outputs.
A new study points to some answers.
In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.
The researchers connected a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over.
No matter how varied the starting prompts were – and no matter how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose architecture and pastoral landscapes. Even more striking, the systems quickly “forgot” the starting prompt.
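The experiment's feedback loop can be sketched in a few lines of code. The sketch below is illustrative, not the researchers' code: `generate_image` and `caption_image` are hypothetical stand-ins for the real text-to-image and image-to-text models, built only to mimic the lossy, "regress to the generic" behavior each step applies.

```python
# Toy simulation of the image -> caption -> image loop from the study.
# The model calls are hypothetical stubs, not real AI systems: each one
# drops specific details and keeps only generic, easy-to-describe concepts.

GENERIC_WORDS = {"room", "building", "landscape", "light", "interior", "city"}

def generate_image(prompt: str) -> str:
    # Stand-in for a text-to-image model: the "image" is just a bag of
    # rendered concepts, and short function words are dropped first.
    words = prompt.lower().split()
    return " ".join(w for w in words if len(w) > 3)

def caption_image(image: str) -> str:
    # Stand-in for an image-to-text model: it describes only what it
    # recognizes, favoring generic concepts over specific ones.
    words = image.split()
    kept = [w for w in words if w in GENERIC_WORDS or words.count(w) > 1]
    return " ".join(kept) if kept else "a formal interior room"

def run_loop(prompt: str, steps: int = 10) -> str:
    # Iterate image -> caption -> image, returning the final caption.
    for _ in range(steps):
        image = generate_image(prompt)
        prompt = caption_image(image)
    return prompt

print(run_loop("The Prime Minister pored over strategy documents in a tense room"))
# A detailed political scene collapses to a single generic concept.
```

Even in this toy version, any elaborate starting prompt reaches a bland fixed point within a couple of iterations – the same qualitative behavior the study reports for real models.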
The researchers called the results “visual elevator music” – pleasant and polished, but devoid of any real meaning.
For example, they began with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.
After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.
A prompt that starts with a prime minister under pressure ends with an image of an empty room with fancy furniture.
Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY
As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.
The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems currently operate this way by default.
The familiar is the default
This experiment might seem irrelevant: Most people don’t ask AI systems to endlessly describe and regenerate their own images. But note that the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.
So I think the setup of the experiment can be considered a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

Pretty … boring.
Chris McLoughlin/Moment via Getty Images
This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than by humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.
The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.
Cultural stagnation or acceleration?
For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would erode diversity and innovation.
Champions of the technology have pushed back, noting that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative choices.
What has been missing from this debate is empirical evidence showing where homogenization actually begins.
The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.
This reframes the stagnation argument. The danger isn’t just that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.
Retraining would amplify this effect. But it is not its source.
This is no moral panic
Skeptics are right about one thing: Culture has always adapted to new technologies. Photography didn’t kill painting. Film didn’t kill theater. Digital tools have enabled new forms of expression.
The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the repeated text-to-image-to-text conversions.
This doesn’t mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real possibility – not a speculative fear – if generative systems are left to operate in their current form.
They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.
In my own research on creative AI, I have found that novelty requires designing AI systems with incentives to deviate from norms. Without them, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.
This pattern has already emerged in the real world: One study found that AI-generated lesson plans exhibited the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what is typical rather than what is distinctive or creative.

AI’s outputs are familiar because they revert to average representations of human creativity.
Bulgac/iStock via Getty Images
Lost in translation
Whenever you write a caption for an image, details will be lost. The same goes for generating an image from text. And this happens whether it’s being done by a human or a machine.
In that sense, the convergence that occurred isn’t a failure unique to AI. It reflects a deeper property of moving between one medium and another. When meaning passes repeatedly through two different formats, only the most stable elements persist.
But by highlighting what survives repeated translations between text and images, the authors are able to show that meaning is processed within generative systems with a quiet pull toward the generic.
The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems still strip away some details and amplify others, in ways oriented toward what is “average.”
If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. That could mean rewarding deviation and supporting less common, less mainstream forms of expression.
The study makes one thing clear: Absent such interventions, generative AI will continue to drift toward mediocre and uninspired content.
Cultural stagnation is no longer speculation. It’s already happening.