In recent discussions of artificial intelligence, the idea of "large language models" has become closely associated with that of AI itself.
A "large language model" is an algorithm that is fed, as a training set of data, a huge array of human-generated writing, generally internet available, that includes relevant problems (the questions that users might ask it) as well as their solutions.
Anyway, in a new paper, two computer scientists challenge just how intelligent this sort of artificial intelligence really is. The paper is called "The Wall confronting large language models," and it is the work of Peter Coveney and Sauro Succi.
The idea is that LLMs aren't some wave of the future, about to be scaled up in ways that will put people like me out of work. Rather, LLMs are hitting a wall at their current level of application. (Whew.)
LLMs, they say, are built on stochastic, non-Gaussian architectures that make them prone to error accumulation. The models can be reworked to try to filter out the errors, but the energy and costs entailed in doing so are enormous, and they increase dramatically as one tries to scale up the systems.
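To get a feel for why that kind of scaling hurts, here is a toy back-of-the-envelope sketch of my own (not taken from the paper, and the particular exponents below are hypothetical): if a model's error shrinks only as a small power of its training resources, say error proportional to N^(-alpha) with alpha well below one, then every further halving of the error requires multiplying the data and compute by an enormous factor.

# Toy illustration (mine, not the paper's): if error falls off as N**(-alpha),
# how much bigger must N get to cut the error in half?
def growth_factor_to_halve_error(alpha):
    # error(N) = C * N**(-alpha), so halving the error needs N scaled by 2**(1/alpha)
    return 2 ** (1 / alpha)

for alpha in (0.5, 0.1, 0.05):  # 0.5 is the classic Gaussian / Monte Carlo rate; the smaller values are hypothetical
    print(alpha, growth_factor_to_halve_error(alpha))
# alpha = 0.5  ->  4x more data/compute per halving of error
# alpha = 0.1  ->  about 1,024x
# alpha = 0.05 ->  about 1,048,576x

In other words, if the learning rate exponent is small, brute-force scaling quickly stops being an economical way to buy accuracy.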
You can access the paper for free through this LinkedIn posting by Srini Pagadyala.
https://www.linkedin.com/feed/update/urn:li:activity:7356789505602850817/
For the record, Peter Coveney is affiliated with University College London. Sauro Succi is with the Italian Institute of Technology. Srini Pagadyala is a business consultant who describes himself as a "digital transformation thought leader."