This cartoon was inspired by an article about the Rice University study comparing self-consuming AI models to mad cow disease, a topic practically begging for a cartoon. To quote one researcher:
“The problems arise when this synthetic data training is, inevitably, repeated, forming a kind of a feedback loop — what we call an autophagous or ‘self-consuming’ loop,” said Richard Baraniuk, Rice’s C. Sidney Burrus Professor of Electrical and Computer Engineering. “Our group has worked extensively on such feedback loops, and the bad news is that even after a few generations of such training, the new models can become irreparably corrupted. This has been termed ‘model collapse’ by some — most recently by colleagues in the field in the context of large language models (LLMs). We, however, find the term ‘Model Autophagy Disorder’ (MAD) more apt, by analogy to mad cow disease.”
He added that “one doomsday scenario is that if left uncontrolled for many generations, MAD could poison the data quality and diversity of the entire internet.”
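The feedback loop Baraniuk describes is easy to demonstrate in miniature. Below is a minimal toy sketch of my own (not the Rice group's actual experimental setup): a one-dimensional Gaussian "model" is fit to data, and each new generation is trained only on synthetic samples from the previous generation's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(1, 21):
    # Fit a toy Gaussian "model" to the current training set.
    mu, sigma = data.mean(), data.std()
    # The autophagous step: the next generation trains only on
    # samples from the previous generation's fitted model.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run it and watch the std column: the estimated spread wanders and, over enough generations, tends to collapse toward zero. That shrinking diversity is a toy version of the "irreparably corrupted" models the quote warns about.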