• Mon. Nov 18th, 2024

Microsoft unveils safety and security tools for generative AI

Byadmin

Mar 29, 2024


Microsoft is adding safety and security tools to Azure AI Studio, the company’s cloud-based toolkit for building generative AI applications. The new tools include protection against prompt injection attacks, detection of hallucinations in model output, system messages to steer models toward safe output, model safety evaluations, and risk and safety monitoring.Microsoft announced the new features on March 28. Safety evaluations are now available in preview in Azure AI Studio. The other features are coming soon, Microsoft said. Azure AI Studio, also in preview, can be accessed from ai.azure.com.Prompt shields will detect and block injection attacks and include a new model to identify indirect prompt attacks before they impact the model. This feature is currently available in preview in Azure AI Content Safety. Groundness detection is designed to identify text-based hallucinations, including minor inaccuracies, in model outputs. This feature detects “ungrounded material” in text to support the quality of LLM outputs, Microsoft said.Safety system messages, also known as metaprompts, steer a model’s behavior toward safe and responsible outputs. Safety evaluations assess an application’s ability to jailbreak attacks and to generating content risks. In addition to model quality metrics, they provide metrics related to content and security risks.Finally, risk and safety monitoring helps users understand what model inputs, outputs, and users are triggering content filters to inform mitigation. This feature is currently available in preview in Azure OpenAI Service.

Copyright © 2024 IDG Communications, Inc.



Source link