
A GRC framework for securing generative AI

By admin

Nov 21, 2024



Web-based AI tools – Web-based AI products, such as OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, are widely accessible via the web and are often used by employees for tasks ranging from content generation to research and summarization. The open and public nature of these tools presents a significant risk: data shared with them is processed outside the organization's control, which can lead to the exposure of proprietary or sensitive information. A key question for enterprises is how to monitor and restrict access to these tools, and whether the data shared with them is adequately controlled. OpenAI's enterprise features, for instance, provide some security measures for users, but these may not fully mitigate the risks associated with public models.
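One common control for the data-sharing risk described above is a DLP-style redaction step that scrubs prompts before they leave the organization. The sketch below is illustrative only: the patterns, the `redact_prompt` helper, and the placeholder format are assumptions, not part of any vendor's product; a real deployment would use the organization's own DLP rules and classifiers.

```python
import re

# Illustrative patterns only -- a real policy would come from the
# organization's DLP tooling, not a hand-rolled regex list.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

clean, hits = redact_prompt(
    "Contact jane.doe@corp.example, key sk-abcdefghij1234567890XYZ"
)
# hits -> ["email", "api_key"]; both values replaced by placeholders in clean
```

Such a filter is typically enforced at a proxy or browser-extension layer so it applies regardless of which public AI tool an employee opens.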

AI embedded in operating systems – Embedded AI products, such as Microsoft Copilot and the AI features within Google Workspace or Office 365, are tightly integrated into the systems employees already use daily. These embedded tools offer seamless access to AI-powered functionality without the need to switch platforms. However, deep integration poses a challenge for security, as it becomes difficult to distinguish safe interactions from those that may expose sensitive data. The crucial consideration here is whether data processed by these AI tools adheres to data privacy laws, and what controls are in place to limit access to sensitive information. Microsoft's Copilot security protocols offer some reassurance but require careful scrutiny in the context of enterprise use.
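The access-limiting control mentioned above is often expressed as a sensitivity-label ceiling: the embedded assistant may only read content labeled at or below a configured tier. This is a minimal policy sketch, assuming a hypothetical four-tier labeling scheme; the tier names and the `assistant_may_process` helper are illustrative, not an actual Microsoft or Google API.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    # Hypothetical label tiers; a real deployment would map these to the
    # organization's own classification scheme (e.g., its labeling platform).
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Highest label the embedded assistant is allowed to read under this policy.
ASSISTANT_CEILING = Sensitivity.INTERNAL

def assistant_may_process(doc_label: Sensitivity) -> bool:
    """Allow the embedded AI feature to touch a document only if its
    sensitivity label is at or below the configured ceiling."""
    return doc_label <= ASSISTANT_CEILING

# A CONFIDENTIAL document would be withheld from the assistant;
# an INTERNAL one would be allowed.
```

Because the label travels with the document, the same check works whether the assistant is invoked from the OS, the office suite, or a browser surface.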

AI integrated into enterprise products – Integrated AI products, like Salesforce Einstein, Oracle AI, and IBM Watson, tend to be embedded within specialized software tailored for specific business functions, such as customer relationship management or supply chain management. While these proprietary AI models may reduce exposure compared to public tools, organizations still need to understand the data flows within these systems and the security measures in place. The focus here should be on whether the AI model is trained on generalized data or tailored specifically for the organization’s industry, and what guarantees are provided around data security. IBM Watson, for instance, outlines specific measures for securing AI-integrated enterprise products, but enterprises must remain vigilant in evaluating these claims.


