General Purpose AI (GPAI): What You Need to Know
There has been a lot of buzz around generative AI tools lately, especially since the release of multiple large language models and image generators such as DALL-E and Midjourney.
These tools have once again put general purpose AI (GPAI) under the spotlight, and once again raised the question of whether GPAI should be regulated.
Before we explore the possibilities, let's first understand the concept of GPAI: what it means and when it was introduced.
What is General Purpose AI?
GPAI entered the regulatory conversation in April 2021, when the European Commission published its proposal for the AI Act. The original proposal exempted GPAI creators from a range of documentation and accountability requirements.
The reasoning was that the Act's obligations applied only to high-risk AI systems, which the proposal explicitly defined and classified according to their use and context.
Another provision, Article 28, reinforced this position by suggesting that GPAI developers would be held responsible for compliance only if they substantially modified or adapted the AI system for a high-risk use.
Now, however, according to recent reports, the European Parliament is also considering certain obligations for the original providers of GPAI systems.
The basic purpose of the EU's AI Act is to assign responsibilities across the chain of actors involved in developing and deploying an AI system.
Five considerations to guide GPAI regulation
The AI Act's approach to general-purpose AI will set the regulatory tone for addressing AI harms globally. But with the recent surge of public interest in generative AI, there is a risk that the regulatory position ends up overfitted to today's issues.
What’s surprising is that newer innovations like ChatGPT, DALL-E 2, and Bard are not even the real problem; in fact, they are just the tip of the iceberg.
GPAI is an enormous category
The first thing to understand is that GPAI is an enormous category, so it is only logical to apply the term to a wide spectrum of technologies rather than restricting it to chatbots and LLMs.
To ensure the EU AI Act is future-proof, it must operate at this larger scale. First, a proper definition of GPAI should cover the many tasks such systems can perform that can serve as the foundation for other AI systems.
The Council of the EU defines it as:
“Intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems.”
GPAI may cause wide-ranging harm
While these risks cannot be fully addressed at the application layer, we cannot ignore the fact that they affect a variety of applications and actors further down the line. Regulatory approaches to GPAI should therefore be developed with the current state of AI technology, its applications, and the way they work in mind.
For instance, GPAI models run the risk of generating anti-democratic discourse, such as hate speech directed at sexual, racial, and religious minorities. They also risk entrenching the narrow or skewed perspectives embedded in their underlying data.
GPAI should be regulated throughout the product life-cycle
For regulation to take into account the variety of stakeholders involved, GPAI must be governed throughout the product life-cycle, not only at the application layer. The first stage of development is critical: the businesses creating these models must take responsibility for the data they use and the architectural decisions they make. This includes the gathering, cleaning, and annotation of data as well as the development, testing, and evaluation of models. Because there is no regulation at the development layer, the current structure of the AI supply chain effectively allows these actors to profit from distant downstream applications while avoiding any commensurate responsibility.
A standard legal disclaimer will not be enough
GPAI creators should not be able to absolve themselves with a standard legal disclaimer. That kind of approach creates a dangerous loophole: it releases the original developers from all liability and instead places the onus on downstream actors, who are ill-equipped to manage all the risks. Yet the Council's general approach contains exactly such an exception, allowing GPAI developers to escape responsibility as long as they exclude all high-risk uses in their instructions and are confident the system will not be misused.
Enable a wider consultation involving non-industry participants, society, and researchers
Basic, uniform documentation practices for evaluating GPAI models, generative AI models in particular, across the range of possible harms are still an active field of research. To avoid superficial box-ticking exercises, regulation should prevent narrow methods of evaluation from becoming the standard.
Before being deployed or made available to the general public, GPAI systems should undergo careful scrutiny, validation, and inspection. Recent proposals to bring GPAI models within the scope of the AI Act either postpone the formulation of specific requirements for later (to be determined by the Commission) or attempt to set them out in the Act's text itself.
For instance, the distribution of possible effects across society can vary depending on whether a model is built and used for an entire community or a small one.
The EU AI Act is set to become the first broad law on artificial intelligence, and it may well serve as a template for other nations. That is why it is crucial that the Act handle this field of AI well: it could become a global standard that everyone follows.