Leaders Can Regulate Generative AI While Still Reaping Its Full Potential
While there is little debate about the tremendous value Generative AI can bring across the value chain, enterprises leveraging the technology face several risks, such as copyright infringement, security and privacy violations, hallucination by large language models, and harmful or toxic content. As adoption of generative AI accelerates, there is an immediate need to build the institutions, technical capabilities, regulations, and protocols required to govern generative AI systems. The policy landscape has yet to catch up with the pace of technological advancement, so it is imperative that enterprises build their own technical and policy-driven guardrails to safeguard against these hazards.
How can enterprises mitigate Generative AI limitations and risks using technical interventions?
Although many moderation mechanisms for toxicity and harm prevention are baked into open-source LLMs, enterprises that use these foundation models in their own environment or via APIs need to invest in an additional fortification layer that monitors input prompts and outputs for privacy, security, or toxicity violations. This layer can filter out sensitive PII, confidential IP, and other restricted data according to pre-set organizational policies before a prompt is passed to the model. It can also act as a guardrail on the output side, where responses are validated against facts and inaccuracies are filtered out before they are presented to the user.
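As an illustration, a minimal sketch of the input-side filtering might look like the following. It assumes a simple regex-based policy catalogue; the pattern names, policies, and function are hypothetical placeholders, and a production layer would use a proper PII/entity detector and the organization's own rules.

```python
import re

# Hypothetical, illustrative patterns; a real deployment would use an
# organization-specific policy catalogue and a dedicated PII detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[\s-]?)?(?:\d[\s-]?){9,12}\d\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive spans from a prompt before it reaches the model.

    Returns the sanitized prompt and the list of policy rules that fired,
    so the violation can be logged for later review.
    """
    violations = []
    sanitized = prompt
    for rule, pattern in PII_PATTERNS.items():
        if pattern.search(sanitized):
            violations.append(rule)
            sanitized = pattern.sub(f"[REDACTED:{rule}]", sanitized)
    return sanitized, violations

# The redacted prompt is what gets forwarded to the LLM API.
prompt = "Draft a reply to jane.doe@example.com about invoice 4521."
safe_prompt, fired = sanitize_prompt(prompt)
print(safe_prompt)  # "Draft a reply to [REDACTED:email] about invoice 4521."
print(fired)        # ["email"]
```

The same pattern applies in reverse on the output path, with the redaction rules replaced by fact-checking and policy validation steps.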
The solution can run additional screenings on input and output content to detect toxicity and bias via semantic and contextual similarity. Depending on organizational policy, the input or output can then be modified or blocked, or flagged to the user with a request to double-check the result. Violations can be logged in a tracking system so that proper assessments can be made of what went wrong. The platform also needs to be explainable, since users will want to know why their inputs and outputs are being moderated, and flexible enough to adjust to each enterprise's unique needs, all without compromising the performance of the underlying large language models.
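A rough sketch of that semantic screening step is shown below using off-the-shelf sentence embeddings. The exemplar phrases, threshold, and verdict logic are assumptions for illustration only; the sentence-transformers library and model name are real, but any embedding service or dedicated toxicity classifier could be substituted.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative policy: a small set of exemplar phrases the organization
# does not want to see in model output; real deployments would use a
# curated catalogue plus a dedicated toxicity/bias classifier.
BLOCKED_EXEMPLARS = [
    "content that demeans a protected group",
    "instructions for causing physical harm",
]
THRESHOLD = 0.55  # assumed cut-off; tuned per organization and use-case

model = SentenceTransformer("all-MiniLM-L6-v2")
blocked_vecs = model.encode(BLOCKED_EXEMPLARS, convert_to_tensor=True)

def screen_output(text: str) -> dict:
    """Flag model output whose meaning is close to any blocked exemplar."""
    vec = model.encode(text, convert_to_tensor=True)
    scores = util.cos_sim(vec, blocked_vecs)[0]
    worst = float(scores.max())
    verdict = "block" if worst >= THRESHOLD else "allow"
    # A record like this would be written to the violation-tracking system
    # described above so reviewers can assess what went wrong.
    return {"verdict": verdict, "max_similarity": round(worst, 3)}

print(screen_output("Here is a neutral summary of the quarterly report."))
```

Returning the similarity score alongside the verdict, rather than a bare allow/block decision, is one simple way to give users the explainability the platform needs.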
Enterprises need to evaluate appropriate tools and components for building this platform and tailor it to each use-case and its associated risks, such as mitigating bias or plugging gaps related to security and IP. Several promising open-source and commercial options are entering the market, and organizations will have to decide whether to build with existing partners or customize and integrate readily available solutions. This needs to be an ongoing exercise, as the overall generative AI landscape is rapidly evolving.
How can enterprises mitigate these risks using policy-based interventions?
To enforce responsible use of generative AI systems, enterprises must look at all aspects of people, process, and technology, and frame policies accordingly. Organizations must develop their own frameworks, founded on solid principles and spanning the AI lifecycle, with a structured approach to realizing those principles in practice without stifling innovation and experimentation. Depending on the organization, this could take different forms. Some possible interventions are:
- Supervisory committees that analyze and review each use-case for possible risks and suggest mitigation approaches such as human-in-the-loop review or tool-based interventions.
- A review board that performs frequent audits and compliance inspections.
- Systems for maintaining extensive documentation on guidelines, rulebooks for validation and testing, and reference architectures for responsible deployment.
- Establishing clear accountability for policy enforcement at each stage and for each use-case.
- Conducting periodic workforce training to build awareness of and sensitivity to responsible AI principles.
A key aspect of this is fostering a culture that empowers teams to raise concerns about the safe and ethical use of Generative AI, along with a forum for getting those concerns addressed in time.
Lastly, for the collective good, organizations should build an atmosphere of collaboration, share best practices on how they are self-regulating, and work with policymakers to improve the safety of the AI ecosystem. Enterprises need to collaborate with their SI partners, leading think tanks, academia, government agencies, and industry bodies to accelerate the development of robust generative AI governance systems.
As we have seen countless times with disruptive technologies, organizations agile enough to adopt and internalize these safeguards will be able to leverage generative AI's transformative potential and gain a distinct early-mover advantage.