Meta’s FAIR (Fundamental AI Research) team on Monday unveiled a new research study that explores a ‘Chain of Continuous Thought’ technique, or COCONUT.
It aims to overcome the limitations of the Chain of Thought (CoT) technique, in which the explicit reasoning process is generated in natural language tokens.
“Chain-of-thought (CoT) reasoning involves prompting or training LLMs to generate solutions step-by-step using natural language. However, this is in stark contrast to certain human cognition results,” said the researchers.
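For readers unfamiliar with CoT, a minimal prompt of this kind looks like the sketch below (the wording and example are ours, not taken from the paper):

```python
# A typical chain-of-thought prompt: the model is nudged to spell out its
# intermediate reasoning as ordinary natural-language tokens before answering.
prompt = (
    "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "A: Let's think step by step."
)
# A CoT-style completion writes out every step as explicit text, e.g.:
# "12 pens is 4 groups of 3. Each group costs $2, so 4 * 2 = $8.
#  The answer is $8."
```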
Meta draws an analogy to human cognition, citing neuroimaging studies which show that the ‘language network – a set of brain regions responsible for language comprehension and production – remains largely inactive during various reasoning tasks.’
This points to a second issue: the amount of reasoning needed for each token varies with the complexity of the problem, yet LLMs allocate ‘nearly the same computing budget for predicting every token’.
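To see why the budget is uniform, consider a bare-bones greedy decoding loop (a PyTorch-flavoured sketch; `model` stands in for any decoder-only transformer whose output exposes logits, and all names are illustrative):

```python
import torch

def greedy_decode(model, input_ids, max_new_tokens):
    # Every iteration runs one full forward pass through the same network,
    # so a filler token costs exactly as much compute as a token that
    # carries a critical reasoning step.
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits            # one forward pass per token
        next_id = logits[:, -1, :].argmax(dim=-1)   # most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)
    return input_ids
```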
To explore reasoning in this more abstract manner, Meta modifies the CoT process. Instead of making the model …