Chatbots are Poisoned
Also in today's edition: Star Wars Inspires, Losses Don't Matter & Bluesky Goes Crazy
It's getting harder to trust chatbots. Nobody knows whether they are telling the truth or simply hallucinating, and the problem is hard to diagnose because nobody fully understands how these models work. Last month, Google CEO Sundar Pichai admitted that Google doesn't fully understand how its AI chatbot Bard comes up with certain responses.
Now imagine how much harder the hallucination problem becomes when the very datasets these chatbots are trained on come under attack. The rise of data poisoning, in which malicious actors inject false information into training datasets, has aggravated the issue.
There are several ways to pull this off. One is to target expired domains: image URLs in large web-scraped datasets often point to domains whose registrations have lapsed, and a malicious actor can buy such a domain and replace the images it serves with poisoned content, tainting any model later trained on the dataset. This technique is known as split-view poisoning, because what a trainer downloads no longer matches what the dataset's curators originally saw. A common defence is sketched below.
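The usual countermeasure is integrity checking: record a cryptographic hash of each item when the dataset is assembled, and verify every download against it before training, so content swapped in after a domain changes hands gets rejected. Here is a minimal sketch in Python; the dataset entries, URLs, field names, and hash value are illustrative assumptions, not drawn from any real dataset.

```python
import hashlib
import urllib.request

# Hypothetical dataset entries: each pairs an image URL with the SHA-256
# hash of the content as it existed when the dataset was first assembled.
# URLs, field names, and the hash below are illustrative only.
dataset = [
    {
        "url": "https://example.com/cat.jpg",
        "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    },
]

def verify_entry(entry, timeout=10):
    """Download the URL and compare its content against the recorded hash.

    Returns True if the content is unchanged, False if it differs,
    for example because the domain expired and was re-registered by an
    attacker (the core of a split-view poisoning attack).
    """
    try:
        with urllib.request.urlopen(entry["url"], timeout=timeout) as resp:
            content = resp.read()
    except Exception:
        # Unreachable content is also excluded from training.
        return False
    return hashlib.sha256(content).hexdigest() == entry["sha256"]

clean = [e for e in dataset if verify_entry(e)]
print(f"{len(clean)}/{len(dataset)} entries passed integrity checks")
```

Any entry that fails the check is simply dropped, so a re-registered domain can no longer feed poisoned images into training. The catch, of course, is that many popular web-scale datasets were published without such hashes in the first place.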