Three reasons why we shouldn't use ChatGPT, Bard, or any other LLM-based chatbot: hallucinations, hallucinations, and hallucinations. And one of the many compelling reasons to use these platforms: hallucinations. Interestingly, trust is the only thing that makes hallucinations risky; otherwise we'd say hallucinations are good, since they make the models more creative.
Sebastian Berns, a doctoral researcher at Queen Mary University of London, is a big proponent of this quirk of chatbots that others abhor. He likes using these chatbots precisely because they hallucinate, which turns them into valuable "co-creative partners".