
How to manage generative AI programs – governance, education, regulation

Tuesday, October 15, 2024, 11:00 AM, from InfoWorld
Generative artificial intelligence has grown extremely rapidly, attracting investment and sparking developer interest and creativity. Companies see generative AI as a route to building innovative services and revolutionizing their industries.

However, while developers are keen to build with generative AI, the challenge is how to move beyond initial tests into running at scale. They are now working through how to integrate and manage their projects so that pilot schemes can move into production. This is a common experience in new technology projects as they move from islands of experimentation to widespread integration across an organization. That proliferation can introduce governance and scalability challenges if these issues are not considered early.

Governance and generative AI

As you move beyond experimentation into production, getting the right approach to governance is essential. To embrace the transformative potential of generative AI, it’s crucial to balance enthusiasm with effective governance. While generative AI leverages the power of an organization’s data and intellectual property, its rapid growth can disrupt established processes. Without clear guidelines, advocates, and enforcers, confusion and risks can escalate.

Creating a central team that enables and coordinates work across departments is the best approach. Whether you call it a center of excellence (CoE) or a community of practice (CoP), this team will play a pivotal role in creating common rules and processes for how generative AI is used.

At the same time, your approach should involve representatives from multiple departments, so that the team is directly connected to what the business needs. This keeps its skills and support anchored in solving business problems, and stops the team from narrowing its focus to issues like privacy and security alone. Think of your generative AI CoE as a team deployed in the field helping to deliver business outcomes, not an isolated department throwing down edicts like thunderbolts from Mount Olympus.

A CoE has three main responsibilities: policing, teaching, and refereeing. Everyone should understand these three areas, so that all actions and choices are geared to the same goals.

The CoE police: Leadership, enforcement, and automation

Policing new technology initiatives involves creating a small set of common standards that govern all the teams taking part. For generative AI projects, this could include consistent approaches to managing prompt recipes, agent development and testing, and access to developer tools and integrations. These rules should be lightweight, so that compliance is easy to achieve, but they also have to be enforced. Over time, this approach reduces deviation from the standards that have been designed and cuts management overhead and technical debt.
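To make “lightweight but enforced” concrete, such a standard can live in code rather than in a policy document. The sketch below is a minimal illustration, assuming a hypothetical PromptRecipe record and validate_recipe check (neither comes from a real framework) that a CoE could run in a shared pipeline before a recipe is published:

```python
from dataclasses import dataclass, field

# Hypothetical names for illustration; not part of any real framework.
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential"}

@dataclass
class PromptRecipe:
    name: str
    owner: str                 # team accountable for the recipe
    version: str               # bumped on every change
    template: str              # prompt text with {placeholders}
    data_classification: str   # highest sensitivity of data it may touch
    tested: bool = False       # has it passed the shared evaluation suite?
    tags: list[str] = field(default_factory=list)

def validate_recipe(recipe: PromptRecipe) -> list[str]:
    """Return a list of compliance problems; an empty list means it passes."""
    problems = []
    if not recipe.owner:
        problems.append("every recipe needs an accountable owner")
    if recipe.data_classification not in ALLOWED_CLASSIFICATIONS:
        problems.append(f"unknown data classification: {recipe.data_classification!r}")
    if not recipe.tested:
        problems.append("recipe has not passed the shared evaluation suite")
    return problems
```

A check like this keeps compliance cheap for teams: they fill in a handful of fields, and the pipeline tells them immediately if a recipe is missing an owner, a data classification, or a passing test run.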

For example, these rules are necessary to manage the use of data in projects. Many generative AI projects will involve handling and deploying customer data, so how should this be implemented in practice? When it comes to customers’ personally identifiable information (PII) and the company’s intellectual property (IP), this data should be kept secure and separate from any underlying large language model (LLM), while still allowing it to be used within projects. PII and IP can provide valuable additional context via prompt engineering, but they should not be available to the LLM for retraining or retention.
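As a rough illustration of that separation, the sketch below assembles PII into the prompt at request time while keeping it out of anything the model could retain. The pii_store and llm_client objects here are hypothetical stand-ins, not a real API; check your provider’s actual data-retention controls before relying on this pattern:

```python
import logging

logger = logging.getLogger("genai_audit")

def answer_with_customer_context(question: str, customer_id: str,
                                 pii_store, llm_client) -> str:
    # PII lives in a governed store and is fetched per request; it is never
    # written into any fine-tuning or retraining dataset.
    customer = pii_store.get(customer_id)  # e.g. name, plan, account status

    # The PII travels only inside the prompt, as request-time context.
    prompt = (
        "Use the customer details below to answer the question.\n"
        f"Customer details: {customer}\n"
        f"Question: {question}"
    )

    # Assumption: llm_client is a stand-in for your provider's client,
    # configured with whatever retention/training opt-outs it actually offers.
    answer = llm_client.complete(prompt)

    # Log an audit record without the PII itself.
    logger.info("answered question for customer %s", customer_id)
    return answer
```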

The best approach to governance is pragmatism. Pick your battles carefully: being heavy-handed or excessive in enforcing rules can hinder your teams and how they work, and increase the costs associated with compliance. At the same time, there will be instances where intervention is necessary, such as closing down experiments that risk privacy, compromise the ethical use of data, or would cost too much over time. The overall aim is to avoid imposing cumbersome standards or stifling enthusiasm, and to concentrate on encouraging best practices as standard.

To make the most of generative AI, your CoE should be accessible and encourage experimentation across the business. Providing guardrails to start with can help teams gain experience in building generative AI services, prompt recipes, or automated agents. Over time, you can remove some of the stricter controls. Once teams have more experience, you can help them build their own agents and submit prompt recipe ideas. As generative AI applications tend to be modular in design, you can apply the same control, monitoring, and value assessment approach across common components too.
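One way to provide those guardrails without blocking teams is a shared wrapper that any component can opt into, so the same controls and monitoring apply across the modular parts of an application. The sketch below is a minimal, assumed example; the blocked-terms check is a placeholder for whatever policy checks your CoE actually defines:

```python
import functools
import time

BLOCKED_TERMS = {"ssn:", "credit card"}  # illustrative placeholder, not a real policy

def guardrailed(component):
    """Wrap any generative AI component in shared input checks and monitoring."""
    @functools.wraps(component)
    def wrapper(prompt: str, *args, **kwargs):
        # Input control: refuse prompts that trip a simple policy check.
        lowered = prompt.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            raise ValueError("prompt failed guardrail policy check")

        # Monitoring: record latency so cost and value can be assessed later.
        start = time.monotonic()
        result = component(prompt, *args, **kwargs)
        elapsed = time.monotonic() - start
        print(f"[guardrail] {component.__name__} took {elapsed:.2f}s")
        return result

    return wrapper

@guardrailed
def summarize(prompt: str) -> str:
    # Stand-in for a team's own generative AI component.
    return f"summary of: {prompt[:40]}..."
```

Because every team’s component passes through the same wrapper, the CoE can tighten or relax the checks in one place as teams gain experience.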

The aim for the CoE should be to provide control layers that make it easier to adopt and build, rather than stopping projects in their tracks.

The CoE teachers: Best practice and community

This is ideally where your CoE spends most of its time. Generative AI projects can offer users many different ways to interact, from lists of prompts that deliver great results from LLMs through to fully featured interactive and autonomous agents that can process complete transactions. The idea is to provide more value to users in the ways that suit them best.

To deploy these kinds of projects effectively, organizations first need to equip their teams to build the services and then to scale them up. Beyond defining the standards involved and enforcing them, your CoE should also create and share best practices and principles to guide new teams and foster knowledge sharing. Education on generative AI and its potential will be needed to support uptake and help people experiment.

It’s important to understand that these principles are not standards. While standards exist to provide a baseline for activity and how items like data are processed, principles provide a guiding framework for how to build on those standards. For example, you may have a standard for securing customer PII, but your principles will determine how you use and work with that PII data to create value for the user and the business. These principles allow different teams to experiment and try different approaches to agent development.

For the CoE, creating a role that explores the potential of generative AI and shares these best practices widely is essential. This gen AI evangelist can help teams understand the agents and tools available to them, iterate on their ideas, and share knowledge from other teams as well. Over time, this should foster a strong internal community whose members are encouraged to share their experiences and successes with each other, helping everyone progress their projects faster.

The CoE referee: Mediation and making decisions

In any area of technology, there are multiple ways to get to an end result or business objective. It’s inevitable that people will disagree on the best approach from all the options available. In generative AI, topics like using retrieval-augmented generation (RAG) versus LLM fine-tuning or content versus model tuning will stir up passionate debate.
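To make the RAG side of that debate concrete, the sketch below shows the essence of retrieval-augmented generation: knowledge lives in a retrieval index that can be updated without touching the model. The retriever and llm_client objects are hypothetical stand-ins, not a specific library’s API:

```python
def rag_answer(question: str, retriever, llm_client, k: int = 3) -> str:
    """Answer a question by retrieving context instead of retraining the model."""
    # Retrieve the k most relevant documents for this question.
    # Assumption: retriever.search is a stand-in for your vector store's API.
    docs = retriever.search(question, limit=k)
    context = "\n\n".join(doc.text for doc in docs)

    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm_client.complete(prompt)
```

With fine-tuning, the same domain knowledge would instead be baked into the model’s weights through additional training, which can improve fluency but makes updates slower and harder to audit. A CoE referee can ask each camp to benchmark both approaches against the same business problem and decide on the evidence.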

The CoE has an essential role to play in this process. Effective generative AI governance has to involve representatives from the different teams involved, enabling decision-making even in the face of differing views. The CoE can also decide quickly which approach best fits a given problem, or whether those involved need to carry out experiments and bring back more evidence to support their arguments. The main criterion is that everyone should respect the decision and work to support the business goals.

If we want everyone to respect those decisions, we need buy-in from multiple stakeholders. The CoE can be perceived as an ivory tower if it is not involved in day-to-day initiatives or doesn’t have skin in the game. To avoid this, focus on taking action and remaining consistent in your decisions, as this helps resolve disputes faster.

Investing in generative AI

Generative AI has huge potential. According to Accenture, generative AI will be used in 40% of all working hours and will reinvent how work is done. To build these projects, organizations will need support, governance, and skills development. When business leaders understand this potential, they will put huge amounts of resources in place to make projects work. The alternative is watching competitors do this work, and falling behind.

Creating a CoE to manage generative AI effectively will increase everyone’s chances of success. A CoE maximizes the value of generative AI programs and mandates participation around those objectives. By getting this approach right, policing, teaching, and refereeing generative AI programs well, the CoE can powerfully stimulate adoption and growth, aligning IT and business stakeholders.

Dom Couldwell is head of field engineering, EMEA, at DataStax.



Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

