Artificial intelligence, in the form that bothers people most these days, is a matter of consuming very large masses of text and re-packaging and re-using that text to look and sound like something new and original.
Should this worry us? Maybe not. It may be about to self-destruct. After all, more and more often, the AI algorithms are busy digesting AI-generated texts. As a group of (admittedly human) researchers noted recently, “We find that use of model-generated content in training causes irreversible defects in the resulting models.”
As the models consume their own output, they will degrade and become useless over time.
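The mechanism can be sketched in miniature. The following is a toy illustration, not the researchers' experiment: a one-parameter "model" (a fitted bell curve) is retrained each generation on samples drawn from the previous generation's model, and its spread steadily collapses. All names and parameters here are invented for the sketch.

```python
import random
import math

def fit(samples):
    """Maximum-likelihood mean and standard deviation of a 1-D sample."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

def collapse_demo(generations=200, n=20, seed=0):
    """Retrain a Gaussian 'model' on its own output, generation after generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: "human" data, a standard normal
    stds = [sigma]
    for _ in range(generations):
        # each generation trains only on the previous model's output
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu, sigma = fit(samples)
        stds.append(sigma)
    return stds

stds = collapse_demo()
# The fitted spread shrinks generation over generation: the rare,
# interesting tails of the distribution are the first thing lost.
```

In this caricature the defect is exactly the kind the quoted paper describes: nothing is added by retraining on model output, and sampling noise steadily erases the variety that was in the original data.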
One member of the group is Ross Anderson, a Cambridge University professor, who put the problem this way in a blog post: “Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale.”
That is a relief.