We talked about AI-generated “gray goo.” Now the broader media is getting hip
The term “gray goo” was coined more than three decades ago to describe what could happen if self-replicating nanobots got out of control and ate up everything on the planet. The only thing left would be a bubbling, gooey mass of self-replicating nanobots.
AI gray goo is not exactly the same, but it is similar. The idea is that AI is currently “trained” on the mass of content, generated mostly by humans, that exists on the Internet. But what happens when AI starts building AI on top of AI? We asked in a prior letter whether this would result in ever more refined, and better, computation, or whether it would result in a kind of intellectual hot dog meat: hyper-processed, hyper-refined computation that is less than the sum of its parts.
Gray goo, informational hot dog meat, whatever. It’s a risk.
(From Axios)
The danger to AI itself is newer and stranger. A raft of recent research papers have introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.
"Model collapse" is researchers' name for what happens to generative AI models, like OpenAI's GPT-3 and GPT-4, when they're trained using data produced by other AIs rather than human beings.
Feed a model enough of this "synthetic" data, and the quality of the AI's answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the "tail" choices that keep their output interesting.
"Model Autophagy Disorder," or MAD, is how one set of researchers at Rice and Stanford universities dubbed the result of AI consuming its own products.
"Habsburg AI" is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: "A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features."