LLMs have shown they can excel at a range of tasks, but the study uncovers flaws in how they work.
Researchers have found that, for all the progress claimed for large language models (LLMs), generative artificial intelligence (GenAI) still has a lot to learn and cannot yet be fully trusted.
The study could have serious implications for generative AI models deployed in the real world.
This is especially because an LLM that seems to perform well in one context might break down if the task or environment changes slightly.
The study was conducted by researchers from Harvard University, the Massachusetts Institute of Technology (MIT), the University of Chicago Booth School of Business, and Cornell University.
Problems with LLMs
LLMs have shown that they can excel at a variety of tasks, such as writing and generating computer programs.
This can …