Why ChatGPT’s Development Is an Evolution, Not a Revolution
The release of ChatGPT, a generative AI chatbot built on a large language model and developed by OpenAI, stunned industry watchers around the world. For the first time, AI reached a tipping point of mainstream appeal: ChatGPT gained 100 million active users in January 2023, just two months after its release, making it the fastest-growing consumer application in history.
It’s not surprising that there’s significant enthusiasm around the technology. OpenAI projects it will make $1 billion in revenue by 2024 by licensing its technology. Startups are rushing to build services around ChatGPT or create copycats that automate processes for entire roles and business functions. Enterprise technology providers are embedding generative AI in productivity software and systems of record to further digitize workflows. And users are leveraging the technology to speed daily work on everything from writing emails to developing software.
So, it may seem surprising to say that ChatGPT and its ilk represent an evolution, not a revolution, in the development of AI. However, the fast pace of this tool’s development is creating confusion about what generative AI can do, as well as security risks that should be addressed now, rather than later.
AI’s Fast Growth Parallels Other Technology Developments
So, why is ChatGPT an evolution, not a revolution?
ChatGPT is a generative AI chatbot built on a large language model, and it extends the work of past AI technology, using natural language processing (NLP) to interpret data and reinforcement learning to improve over time. It leverages supercomputing to train on the massive amounts of data it’s fed, and its underlying model launched with 175 billion parameters.
Right now, ChatGPT is in a classic hype cycle. Media and analysts are promising that the technology will transform life and industry as we know it. We’ve been here before. Cell phone Internet browsers were supposed to replace computers. Instead, they’ve extended our connectivity. Facebook was supposed to transform human engagement. Now, it’s just one of many social media channels. Cloud services were set to eradicate on-premises data centers; now some leaders are retrenching due to cloud’s spiraling costs.
ChatGPT and other generative AI tools will follow a similar path, with excessive promises and dashed expectations, until the technology becomes more stable and predictable and delivers real ROI for important use cases. Time Magazine stated, “As companies hurry to improve the tech and profit from the boom, research about keeping these tools safe is taking a back seat. In a winner-takes-all battle for power, Big Tech and their venture-capitalist backers risk repeating past mistakes, including social media’s cardinal sin: prioritizing growth over safety.”
Understanding Differences Between AI, AGI, and Generative AI
ChatGPT’s prodigious talents are creating confusion about the different types of AI. Reporters and users are using terms like AI, AGI, and generative AI interchangeably, missing important nuances.
Traditional AI models are trained on domain-specific data to execute one task at a time. Think of customer service chatbots that serve up FAQs and provide rote answers. They can’t perform any duties outside their narrow focus, nor can they learn new information without extensive model retraining, as the sketch below makes concrete.
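To see why such a chatbot is so limited, consider that it amounts to little more than a lookup table. The following is a minimal sketch in Python, with invented questions and answers, of why this kind of system can’t stray outside its domain without a human re-authoring it:

```python
# A hypothetical rule-based FAQ bot: it can only return canned answers
# for topics it already knows. Nothing here learns or generates text.
FAQ = {
    "reset password": "Visit the account page and click 'Forgot password'.",
    "business hours": "Support is available Monday through Friday, 9am to 5pm.",
    "refund policy": "Refunds are available within 30 days of purchase.",
}

def answer(question: str) -> str:
    """Match the question against known topics; rote lookup, no reasoning."""
    q = question.lower()
    for topic, reply in FAQ.items():
        if topic in q:
            return reply
    # Anything outside the narrow domain fails. "Retraining" means a human
    # editing the table above, not the bot learning on its own.
    return "Sorry, I can only answer questions about my listed topics."

print(answer("How do I reset my password?"))
# -> Visit the account page and click 'Forgot password'.
```

A generative model, by contrast, composes a new answer from patterns in its training data, which is precisely what makes it both more capable and harder to constrain.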
AGI, sometimes called “strong AI,” refers to systems that would be able to “learn,” “think,” and “feel” like humans. AGI could apply common-sense reasoning to new problems to come up with logical solutions. It could learn any task that humans can, without making errors, and operate independently. At this point, AGI is still a vision that hasn’t been achieved.
Generative AI, the technology behind ChatGPT, is a major leap forward on the path to AGI. It can answer questions with detailed responses. It can also take a simple prompt and then create, whether that means producing digital art, writing copy, or generating code. While generative AI can mimic thinking and learning, it still needs further training to avoid creating problems and to learn how to act appropriately. The media has been full of stories of generative AI going off the rails: from a Bing chatbot professing love to a New York Times reporter to chatbots expressing anger and making biased statements. Generative AI, it seems, is not quite ready for prime time.
These distinctions are important because users are treating generative AI chatbots as though they were trusted, tested products, when they are actually working prototypes. As a result, AI ethicists and leaders have raised a red flag that the technology needs further work: to set guardrails for appropriate usage, to avoid creating bias, and to prevent it from becoming an echo chamber for certain voices or ideas, much as bots have done to social media.
ChatGPT Is Inspiring Important Conversations About Security
The good news is that ChatGPT is inspiring important conversations about security in the C-suite. Although the technology is just a few months old, there have already been significant security incidents involving generative AI chatbots.
ChatGPT exposed personal information belonging to some of its users in March, and Samsung detected three leaks of confidential corporate information about its semiconductor products. Already, 3.1 percent of workers have pasted confidential corporate information into the chatbot, which synthesizes it and can make that data available to others. As a result, multiple banks, including Bank of America, Goldman Sachs, Citigroup, Deutsche Bank AG, and Wells Fargo & Co., have banned the use of ChatGPT at work.
Generative AI companies are in a catch-22. Enterprise leaders hesitate to adopt their solutions because those solutions lack controls to prevent data leaks. However, for generative AI solutions to work, they need to ingest and synthesize massive amounts of data, including corporate information.
This standoff will persist until enterprise-class security measures can be put in place to allow for large-scale implementations with appropriate governance and guardrails. While this process typically takes several years, it will likely happen faster here, given the incredible interest in ChatGPT. Attention to security rising in parallel with a new technology is a first in the historical pattern of emerging tech.
In the interim, many companies may choose to develop and deploy on-premises AI solutions for key use cases. By doing so, they give users tools to work more productively and create operational efficiencies by linking these chatbots to other internal applications via APIs. Since chatbots ingest data from a variety of sources, including websites, databases, social media, and user input, and predominantly through APIs, they can open a Pandora’s box of security risks. Enterprises can protect their data by using intelligent API security solutions that provide security posture management, threat protection, and threat management across the software development lifecycle and hybrid cloud infrastructures. While it’s impossible to prevent every user from entering confidential or sensitive data into a generative AI chatbot, enterprises can control the terms on which these tools are used and ensure sensitive information doesn’t leave corporate networks, as the sketch below illustrates.
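As a rough illustration of that gatekeeping idea, here is a minimal Python sketch of an API-layer check that screens prompts before they reach an internal chatbot endpoint. The endpoint URL, response shape, and detection patterns are all hypothetical; a production deployment would rely on a dedicated DLP or API security platform rather than a handful of regexes.

```python
import re

import requests  # any HTTP client works; assumed installed

# Hypothetical chatbot endpoint hosted inside the corporate network.
INTERNAL_CHAT_URL = "https://chat.internal.example.com/v1/completions"

# Deliberately simplistic patterns, for illustration only.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
    re.compile(r"(?i)\bconfidential\b"),           # labeled documents
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # embedded credentials
]

def contains_sensitive_data(prompt: str) -> bool:
    """Return True if the prompt matches any sensitive-data pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def guarded_chat(prompt: str) -> str:
    """Forward a prompt to the internal chatbot only if it passes the check."""
    if contains_sensitive_data(prompt):
        # Block the request before anything leaves the trusted boundary.
        raise ValueError("Prompt rejected: possible confidential data.")
    response = requests.post(INTERNAL_CHAT_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    return response.json()["text"]  # assumed response shape

# Example: guarded_chat("Summarize our confidential roadmap") raises ValueError,
# while an innocuous prompt is forwarded to the internal service.
```

The design point is that the check runs at the API layer, where the enterprise already has visibility and control, rather than relying on each user to remember what not to paste.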
Generative AI and AGI will significantly transform business in the years and decades to come. However, it’s up to enterprise leaders to shape that development: to keep the technology focused on solving problems effectively, while implementing safeguards to protect corporate secrets.
Overall, large language AI models like ChatGPT have taken center stage in the media and the mainstream, but contrary to the hype, the technology is still a long way from prime time. As more individuals and enterprises embrace it, generative AI will continue to evolve for years to come.