A chatbot generating complicated chemical compositions for biological weapons takes the tech revolution to a whole new (and dangerous) level. But is it moral to make such things accessible to anyone with a few keystrokes? The answer is, of course, no.
In the discussion around responsible AI, OpenAI has shown great maturity by acknowledging the moral issues with chatbots. Taking a step further, the company recently announced plans to set up a Red Teaming Network, inviting domain experts from diverse fields to help improve the safety of its models.