Analyzing Generative AI and Cybersecurity Risk Across the Enterprise
This AIThority guest post highlights the growing enterprise use of Generative AI and the cybersecurity risks it poses to businesses.
Language Processing is the Most Popular
Within the artificial intelligence (AI) world, Large Language Models (LLMs) dominate the imagination of digital creators everywhere. ChatGPT and Drift, both conversational AI bots that produce human-like text, are neck and neck in popularity – with Drift inching past ChatGPT by 1%. We observed heavy AI/ML traffic from the United States and India. Typically, users leverage these AI chatbots as exploratory tools to help create content and integrate AI capabilities into other applications.
ChatGPT Making an Impact in Manufacturing…and the US
The manufacturing sector is generating massive amounts of ChatGPT transactions. In fact, manufacturing accounts for ~21% of transactions, with finance coming in second at 14%. The rapid adoption and heavy use of generative AI is potentially part of the Industry 4.0 trend – where the manufacturing sector is becoming increasingly digitized, connected, and modern.
The United States is one of the most prolific generators of AI-related transactions across all verticals, including manufacturing. The widespread adoption of AI will have a major impact on productivity and efficiency, but it also brings additional risks that threat actors will try to exploit.
Securing Transactions Using Generative AI and Cybersecurity
After manufacturing, tech and finance make up ~18% and ~15%, respectively, of all AI/ML traffic. With the technology and finance sectors being such heavy users of AI applications, it’s no surprise that they are also blocking the most AI/ML-related traffic. The most blocked AI application is a popular AI chatbot.
The majority of these policy-based blocks are instituted to prevent accidental data leaks: roughly 10% of all AI/ML-related transactions are immediately blocked using URL filtering policies, before a user has the chance to share potentially confidential information with the application.
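As a minimal sketch of the kind of URL filtering policy described above, the snippet below checks a request's hostname against a blocklist of AI-chatbot domains. The domain list and function names are hypothetical; a real deployment would rely on a secure web gateway's category feeds rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of AI-chatbot domains (illustrative only).
BLOCKED_AI_DOMAINS = {"chat.openai.com", "drift.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches or is a subdomain of a blocked AI/ML domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)
```

A proxy enforcing this kind of rule would deny the request before any prompt text leaves the network, which is what makes URL filtering an effective first line against accidental leaks.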
Recommendations for Organizations on Generative AI and Cybersecurity
It’s inevitable that AI-powered tools that help employees produce good results at a faster rate will gain a strong foothold in the corporate world. Instead of fighting this trend, organizations should embrace, customize, and secure their employees’ daily use of AI-powered tools.
Get ahead of the curve by creating guidelines on how your employees should interact with applications like ChatGPT or Drift.
For example, emphasize to employees the importance of not entering material or confidential information into conversational AI bots, and at the same time implement security controls to prevent confidential data from leaking out. Moreover, encourage employees to thoroughly review and fact-check content generated by AI tools. Represent AI applications for what they are: tools in a digital creator’s toolkit. Search engines are tools. Spell checkers are tools. Online translators are tools. These tools are all helpful, but they cannot completely replace a human being making informed and intuitive decisions – at least not in their current form.
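One way to back the "don't paste confidential data" guideline with an actual control is a lightweight pre-submission scan of prompt text. The sketch below uses two illustrative regex patterns; the pattern names and rules are assumptions for demonstration, since production data-loss-prevention tools use far richer vendor rule sets and context analysis.

```python
import re

# Hypothetical patterns for demonstration; real DLP rules are more nuanced.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk[-_][A-Za-z0-9]{16,}\b"),
}

def find_sensitive(text: str) -> list:
    """Return the names of any sensitive-data patterns detected in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
```

A chat client or proxy could call a check like this before forwarding a prompt and warn the user (or block the request) when anything matches.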
If your organization plans to create its own internal AI-powered application for employees, ensure that any AI project: follows the same secure product lifecycle framework as your other external and internal products, meets legal and ethical standards, and proactively adapts its usage and security policies to the rapidly evolving nature of AI technology.