The Writer-Novelist in the World of AI

Yeah, they’re asymptotic. I mentioned this a few months ago. Basically:

Neural nets [which include LLMs] are universal function approximators. Very broadly, that means they can get arbitrarily close to the function being modelled, but the number of neurons required can grow without bound as the approximation error shrinks toward zero.
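
For reference, here is a rough sketch of the usual one-hidden-layer form of that result (the Cybenko/Hornik-style statement); the symbols f, K, σ, N, and ε are my own shorthand, not something quoted from anywhere in particular:

```latex
% Universal approximation, one hidden layer, sketch:
% for any continuous function f on a compact set K and any tolerance epsilon > 0,
% there exist a width N, weights w_i, biases b_i, and coefficients a_i such that
\[
  \sup_{x \in K}
  \left| \, f(x) - \sum_{i=1}^{N} a_i \, \sigma\!\left( w_i^{\top} x + b_i \right) \right|
  < \varepsilon ,
\]
% where the required width N generally grows as epsilon shrinks;
% that growth is the sense in which the improvement is "asymptotic".
```

The theorem guarantees a finite network exists for any fixed tolerance, but it says nothing about that network staying small: tighter tolerances generally demand more neurons.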