AI chatbots trained to jailbreak other chatbots, as the AI war slowly but surely begins

By admin

Jan 2, 2024

While AI ethics remains the hot-button issue of the moment, and companies and world governments continue to wrangle with the moral implications of a technology we often struggle to define, let alone control, here comes some slightly disheartening news: AI chatbots are already being trained to jailbreak other chatbots, and they seem remarkably good at it.

Researchers at Nanyang Technological University in Singapore have managed to compromise several popular chatbots (via Tom’s Hardware), including ChatGPT, Google Bard and Microsoft Bing Chat, using another LLM (large language model) to do it. Once compromised, the jailbroken bots can then be made to “reply under a persona of being devoid of moral restraints.” Crikey.