LLMs are increasingly becoming the default solution to business problems, even in domains that do not require language understanding. It seems like Transformers are indeed all we need.
Reinforcing this view, Denny Zhou, research director at Google DeepMind, recently released a new paper. Sharing the research on X, Zhou said: “We have mathematically proven that Transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed.”
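The intuition behind the claim can be sketched with a toy example (this is an illustration of the idea, not code from Zhou's paper, and `next_token` is a hypothetical stand-in for one decoding step, not an actual Transformer). Computing the parity of a bit string requires a chain of sequential XORs, but if a model is allowed to emit one intermediate token per step, each step only has to do a constant amount of work:

```python
# Toy illustration: why intermediate tokens add computational power.
# Parity of an n-bit string needs n sequential XORs. A fixed,
# constant-work "next-token" step can still solve it, provided it may
# emit n intermediate tokens, each carrying the running partial result.

def next_token(prev_token: int, bit: int) -> int:
    """One constant-work decoding step: fold a single input bit
    into the previously emitted token."""
    return prev_token ^ bit

def parity_with_intermediate_tokens(bits):
    """Emit a chain of intermediate tokens; the last one is the answer."""
    tokens = [0]  # start token: parity of the empty prefix
    for b in bits:
        tokens.append(next_token(tokens[-1], b))
    return tokens

trace = parity_with_intermediate_tokens([1, 0, 1, 1])
print(trace)      # [0, 1, 1, 0, 1]
print(trace[-1])  # final token is the parity: 1
```

The point is that the serial depth of the computation lives in the length of the generated token sequence rather than in the depth of the model itself, which is the informal core of the "as many intermediate reasoning tokens as needed" result.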
This echoes AI researcher Andrej Karpathy’s recent remarks on next-token prediction frameworks, suggesting that they could become a universal tool for solving a wide range of problems, far beyond language alone.
LLMs are not really just “language experts” anymore. According to Karpathy, the “language” label is now largely historical: these models were first trained to predict the next word in a sentence, but in reality they can work on any kind of data that’s broken down …