NIST releases new tool to check AI models’ security

By admin

Jul 31, 2024



The guidelines outline voluntary practices developers can adopt while designing and building their models to protect them against misuse that could cause deliberate harm to individuals, public safety, and national security.

The draft offers seven key approaches for mitigating the risks that models will be misused, along with recommendations on how to implement them and how to be transparent about their implementation.

“Together, these practices can help prevent models from enabling harm through activities like developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and nonconsensual intimate imagery,” NIST said, adding that it was accepting comments on the draft until September 9.
