OpenAI Announces Initial Support for ChatGPT Plugins
Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier have created the first plugins for OpenAI's ChatGPT
OpenAI, the maker of ChatGPT, has announced initial support for a series of plugins: tools that extend ChatGPT with new capabilities while guarding against misuse of its underlying models. While OpenAI's primary mission remains building smarter Artificial General Intelligence (AGI) with each iteration, the new plugins align with the growing demand for a safe and sound environment in which these generative AI capabilities are developed and promoted. The new ChatGPT plugins systematically address the transformation of existing Large Language Models (LLMs) into smarter Augmented Language Models (ALMs). These ALMs outperform most regular LLMs across several parameters, including security, performance, and real-time information retrieval. And there's more to OpenAI's decision to offer third-party and self-hosted ChatGPT plugins to users and open-source developers.
Let’s find out.
ChatGPT Unlocks a New Range of Use Cases by Offering New Plugins
LLM developers have been exploring ways to integrate OpenAI's ChatGPT into their existing machine learning models and experiments. Projects such as LangChain, Copilot X, Toolformer, and ACT-1 have managed to address some of the existing barriers in language models and generative AI. To assist further research and expand the range of applications in the field of AGI, OpenAI is following its iterative deployment approach while rolling out the new set of plugins. The first set features contributions from Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier. OpenAI is also hosting two plugins of its own: a web browser and a code interpreter.
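Under the hood, each plugin is described to ChatGPT by a small JSON manifest (served from the plugin host at /.well-known/ai-plugin.json) plus an OpenAPI specification of the plugin's endpoints. The sketch below builds a minimal manifest in Python; the field names follow OpenAI's published plugin schema, but the plugin itself and all URLs are illustrative placeholders, not a real service.

```python
import json

# Minimal sketch of a ChatGPT plugin manifest. Field names follow
# OpenAI's ai-plugin.json schema; the plugin and all URLs below are
# placeholders for illustration only.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Todo Plugin",
    "name_for_model": "todo",
    "description_for_human": "Manage your to-do list.",
    "description_for_model": "Plugin for adding, listing, and deleting to-do items.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

# This is the document ChatGPT would fetch from
# https://example.com/.well-known/ai-plugin.json
print(json.dumps(manifest, indent=2))
```

ChatGPT reads the two description fields to decide when to call the plugin, then uses the OpenAPI spec to shape its requests, which is why the model-facing description matters as much as the endpoints themselves.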
In the coming weeks, OpenAI could extend access to a larger group of LLM developers, who would test the plugins in "alpha" mode before integrating them into their existing products. This appears to be the roadmap already followed for the first plugins from Klarna, Zapier, and the others.
At the time of publishing this story, OpenAI had confirmed it would extend "alpha" access to a limited number of developers and ChatGPT users who join the waitlist.
Five Key Things ChatGPT Plugins Address in AGI Development
The AGI philosophy embedded within ChatGPT allows the model to grow at a rapid pace. To keep it from falling into abusive hands, however, the new plugins were necessary to tighten the learning loops with careful iteration. It is still early days in ChatGPT's development when it comes to understanding where open standards are needed and where they should be curtailed by users and policymakers. Allowing developers, users, and subscribers of ChatGPT to bring their own plugins would help build a decentralized repository of training data from out-of-the-box models and applications.
Here are the five things the new ChatGPT plugins would address in transforming the current set of LLMs.
Cut Out "Hallucinations"
Ever since its launch and meteoric rise in the AI domain, researchers have been skeptical about how closely ChatGPT resembles other LLMs that face the problem of "hallucination." Microsoft has invested heavily in OpenAI, and ChatGPT is in a head-on battle with its closest generative AI rival, Google Bard; it is worth noting that the concept of LLM hallucination was popularized by Google AI researchers. OpenAI has worked to curb hallucination using an advanced machine learning technique called reinforcement learning from human feedback (RLHF). The new ChatGPT plugins build on this by grounding responses in code-based calculations and real-time, evidence-based references.
The new plugins would go beyond raw LLM capabilities to address the limitations standing between ChatGPT and a trustworthy AI development platform. They would limit "bad actors" and abusive code generators from exploiting existing GPT models, reducing the chance of negative consequences arising from "mistaken or misaligned" actions. The ChatGPT team at OpenAI has addressed safety issues by carefully identifying abusive behaviors and using those findings to build a "safety-by-design" action plan for compliant users. The tracked abuses that informed the current safety mitigations include using ChatGPT for sophisticated prompt injection, sending fraudulent and spam emails, bypassing safety restrictions, and misusing information sent to a plugin.
Improving the Overall Web Citizenship Experience
This aspect of the new plugins relates to ChatGPT's ability to use web browsing data from the Bing Search API. Currently, ChatGPT's browsing plugin is a standalone service and does not operate as part of OpenAI's existing infrastructure. That means ChatGPT users may have to allow the plugin to crawl a website, or see a "click failed" message instead. Browsing currently applies to web content and is built to avoid sending excessive traffic to the websites it visits. The browsing plugin also lets users view and cite sources in ChatGPT's responses. This adds a layer of truthfulness and trustworthiness by attributing information to the original content creator, who would receive "recognition" through additional traffic to the pages where the information used in ChatGPT's responses was first published.
Improving Code Interpretation with Basic Programming Skills
A report had questioned ChatGPT's sincerity and trustworthiness in writing and executing "secure code." ChatGPT's code interpreter puts the questions about code security and visibility to rest, at least for now.
ChatGPT's Code Interpreter would reduce the need for human-level programming skills when working with LLMs. Code generation, execution, and correction would all occur within a sandboxed environment. This Python-based interpreter streamlines the programming workflows involved in data analysis, visualization, file transformation, data integration, and much more. Of course, AI-generated code may still require a certain level of supervision to remove errors; that is part of why OpenAI opened up access to third-party plugins.
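The core idea of sandboxed execution can be sketched in a few lines: run untrusted code in a separate interpreter process with a hard timeout. This is a simplified assumption-laden illustration; a production sandbox like the one behind Code Interpreter also restricts filesystem and network access, which this sketch does not attempt.

```python
import subprocess
import sys

# Illustrative sketch of sandboxed code execution: the untrusted snippet
# runs in a fresh, isolated Python process with a hard timeout, so an
# infinite loop or crash cannot take down the host. A real sandbox would
# also block filesystem and network access (not shown here).
def run_sandboxed(code: str, timeout_s: float = 5.0) -> str:
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no user site-packages
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout

print(run_sandboxed("print(sum(range(10)))"))  # prints 45
```

The supervision point from the paragraph above applies here too: even inside a sandbox, the generated code's output still needs review before it is trusted.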
Develop New Alignment Techniques to Boost AGI Research
What's the use of AI if it can't understand the needs of its human users? When an AI tool starts giving random responses or falls into an endless loop of garbled replies, that is an alignment issue. ChatGPT's reliance on self-hosted and third-party plugins could address these issues and give ALMs what they need most to develop into reliable, explainable artificial intelligence (XAI): alignment techniques.
“We want to be transparent about how well our alignment techniques actually work in practice and we want every AGI developer to use the world’s best alignment techniques.” – OpenAI Authors [Jan Leike, John Schulman, Jeffrey Wu]
With so much happening every day at OpenAI, we should expect tighter regulations and swifter alignment across OpenAI's WebGPT, InstructGPT, Codex, and Copilot.
[To share your insights with us, please write to firstname.lastname@example.org]