
How to manage generative AI programs – governance, education, regulation

By admin

Oct 15, 2024



The CoE police: Leadership, enforcement, and automation

Policing new technology initiatives involves creating a small set of common standards that govern all participating teams. For generative AI projects, this could include consistent approaches to managing prompt recipes, developing and testing agents, and granting access to developer tools and integrations. These rules should be lightweight, so that compliance is easy to achieve, but they must also be enforced; automating the checks, as sketched below, keeps that enforcement cheap. Over time, this approach reduces deviation from the agreed standards and cuts both management overhead and technical debt.
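As a rough illustration, an automated compliance check might look like the Python sketch below, which validates a project manifest against a small set of required fields and an allow-list of tools. The field names, tool names, and rules here are hypothetical examples, not standards prescribed by the article or any specific centre of excellence.

    # Minimal sketch of an automated standards check; the required fields
    # and the tool allow-list below are hypothetical examples.

    REQUIRED_FIELDS = {"prompt_recipes_repo", "agent_test_suite", "approved_tools"}
    APPROVED_TOOLS = {"internal-llm-gateway", "prompt-registry"}  # assumed allow-list

    def check_project(project: dict) -> list[str]:
        """Return a list of standards violations for one project manifest."""
        violations = []
        missing = REQUIRED_FIELDS - project.keys()
        if missing:
            violations.append(f"missing required fields: {sorted(missing)}")
        for tool in project.get("approved_tools", []):
            if tool not in APPROVED_TOOLS:
                violations.append(f"unapproved tool: {tool}")
        return violations

    if __name__ == "__main__":
        demo = {
            "prompt_recipes_repo": "git@example.com:team/prompts.git",
            "agent_test_suite": "tests/agents/",
            "approved_tools": ["internal-llm-gateway", "shadow-saas-app"],
        }
        for v in check_project(demo):
            print("VIOLATION:", v)

A check this small can run in a CI pipeline, which is what keeps lightweight rules enforceable without adding review meetings.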

For example, these rules are essential for managing the use of data in projects. Many generative AI projects will involve handling and deploying customer data, so how should this work in practice? Customers' personally identifiable information (PII) and the company's intellectual property (IP) should be kept secure and separate from any underlying large language model (LLM), while still being usable within projects. PII and IP can supply valuable additional context via prompt engineering, but they should not be available to the LLM for re-training or retention.
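One rough sketch of that separation, in Python: customer details are injected as request-scoped context at inference time, while anything persisted (such as audit logs) is redacted first. The redaction patterns and the send_prompt() gateway below are hypothetical placeholders, not a specific vendor API.

    # Sketch of request-scoped PII use: context is built per request, and only
    # redacted text is ever written to storage. Patterns and the gateway call
    # are illustrative assumptions.

    import re

    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace PII with typed placeholders before anything is logged or stored."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    def build_prompt(customer_record: dict, question: str) -> str:
        # PII enters only as transient, request-scoped context for the model.
        context = (f"Customer tier: {customer_record['tier']}\n"
                   f"Open tickets: {customer_record['tickets']}")
        return f"{context}\n\nQuestion: {question}"

    def send_prompt(prompt: str) -> str:
        # Hypothetical gateway: in practice, route through an internal proxy
        # configured so the provider cannot retain or train on the request.
        print("AUDIT LOG:", redact(prompt))  # only redacted text is persisted
        return "(model response)"

    record = {"tier": "gold", "tickets": 2}
    print(send_prompt(build_prompt(record, "Why was jane.doe@example.com billed twice?")))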

The best approach to governance is a pragmatic one. That means picking your battles carefully: being heavy-handed or excessive in enforcing rules can hinder your teams and how they work, while driving up the cost of compliance. At the same time, there will be instances where intervention is necessary, such as shutting down experiments that endanger privacy, compromise the ethical use of data, or would cost too much to run over time. The overall aim is to avoid imposing cumbersome standards or stifling enthusiasm, and to concentrate on encouraging best practices as the default.


