Last year, Meta launched a large language model called Galactica. The model could summarise academic papers, solve maths problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. And it was launched before ChatGPT.
However, unlike ChatGPT, which set off a hype cycle, Meta’s ambitious offering didn’t survive even a week. Just three days after release, Meta realised the model was hallucinating and spitting out erroneous results, panicked, and withdrew it.
Now, the research community’s demand that the model be brought back is only getting louder.
Researchers believe that hallucinations are part of the learning process for LLMs and urge that the model be reassessed to determine whether its benefits outweigh the problems caused by occasional hallucinations.