Menlo Security: 55 Percent of Generative AI Inputs Include Sensitive Information
Menlo Labs Threat Research team finds PII the most common type of attempted data exposure and loss, even as organizational security policies for generative AI increase by 26%
Menlo Security, a leader in browser security, released its latest report, “The Continued Impact of Generative AI on Security Posture.” The report is the second installment in its series of generative AI reports, which analyze how employees’ use of generative AI is changing and the security risks those behaviors pose to organizations. In the last thirty days, over half (55%) of Data Loss Prevention (DLP) events detected by Menlo Security included attempts to input personally identifiable information (PII). The next most common type of data that triggered DLP detections was confidential documents, which accounted for 40% of input attempts.
From July to December 2023, the market for and the nature of generative AI usage transformed considerably. New platforms and features are gaining popularity, leading to a diverse and specialized market. That growth, however, has introduced new cybersecurity risks into the enterprise.
For example, according to the Menlo report, attempted file uploads to generative AI websites increased by 80%. Researchers attribute this rise in part to the many AI platforms that have added file upload features within the past six months; once the capability was available, users quickly took advantage of it. Copy-and-paste attempts to generative AI sites decreased only slightly and remain a frequent occurrence, highlighting the need for technology to control these actions. These two behaviors have the largest impact on data loss because of the ease and speed with which data, such as source code, customer lists, roadmap plans, or PII, can be uploaded or input.
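The report does not detail how these DLP detections work internally, but a browser-level control of this kind typically inspects outbound content before it reaches a generative AI site. Below is a minimal, hypothetical sketch of such a check; the PII_PATTERNS table and scan_outbound_text function are illustrative names of our own, and real DLP engines rely on far richer detectors (ML classifiers, exact data matching, document fingerprinting):

```python
import re

# Hypothetical PII patterns a DLP rule might scan for before pasted text or
# uploaded file contents reach a generative AI site. These are illustrative,
# not Menlo Security's actual detection logic.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the PII categories found in content bound for a gen-AI site."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    pasted = "Customer John Doe, SSN 123-45-6789, email john.doe@example.com"
    hits = scan_outbound_text(pasted)
    if hits:
        print(f"DLP event: blocked paste containing {', '.join(hits)}")
```

The same scan can be applied to text extracted from an uploaded file, which is why the two input paths raise similar detection events.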
“Our latest report highlights the swift evolution of generative AI, outpacing organizations’ efforts to train employees on data exposure risks and update security policies,” said Pejman Roshan, Chief Marketing Officer at Menlo Security. “While we’ve seen a commendable reduction in copy and paste attempts in the last six months, the dramatic rise of file uploads poses a new and significant risk. Organizations must adopt comprehensive, group-level security policies to effectively eliminate the risk of data exposure on these sites.”
Enterprises do recognize the risk and are increasingly focused on preventing data loss and leakage resulting from rising generative AI usage. In the last six months, the Menlo Labs Threat Research team observed a 26% increase in organizational security policies for generative AI sites. However, most organizations apply these policies on an application-by-application basis rather than across generative AI applications as a whole. With per-application policies, organizations must either constantly update their application lists or risk gaps in safeguards for the generative AI sites employees actually use. Organizations therefore need a scalable, efficient way to monitor employee behavior, adapt to evolving functionality on generative AI platforms, and address the resulting cybersecurity risks.
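To make the distinction concrete, here is a minimal sketch, assuming a hypothetical policy engine, of why per-application lists leave gaps that a group-level (category) policy closes. The domain names, KNOWN_GENAI_APPS list, and GENAI_CATEGORY set are assumptions for illustration, not Menlo Security's actual implementation:

```python
# Hypothetical sketch contrasting per-application and group-level policies.
# The org's own list inevitably lags behind the market.
KNOWN_GENAI_APPS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# A category feed would be maintained centrally and updated continuously;
# here a static superset stands in for that feed.
GENAI_CATEGORY = KNOWN_GENAI_APPS | {"brand-new-llm.example"}

def blocked_per_app(domain: str) -> bool:
    """Per-application policy: restricts only domains on the org's own list."""
    return domain in KNOWN_GENAI_APPS

def blocked_group_level(domain: str) -> bool:
    """Group-level policy: restricts anything categorized as generative AI."""
    return domain in GENAI_CATEGORY

if __name__ == "__main__":
    new_site = "brand-new-llm.example"  # launched after the org's list was written
    print(blocked_per_app(new_site))      # False -> coverage gap
    print(blocked_group_level(new_site))  # True  -> caught by the category rule
```

The design point is that a category rule keeps pace with new platforms automatically, while a per-application list only protects against sites the organization has already enumerated.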
Key findings that point toward a need for group-level security policies rather than domain-level (per-application) policies include:
- For organizations that apply security policies on a per-application basis, 92% have security-focused policies in place around generative AI usage, while 8% allow unrestricted generative AI usage
- For organizations that apply security policies to generative AI apps as a group, 79% have security-focused policies in place, while 21% allow unrestricted usage
- While most traffic is directed toward the six main generative AI sites, file uploads across generative AI as an entire category are 70% higher than for those sites alone, highlighting the unreliability of enforcing security policies on an application-by-application basis
In June 2023, Menlo Security issued its first generative AI report, analyzing generative AI interactions across a sample of 500 global organizations. This report compares those earlier findings with data collected between July and December 2023 from the same organizations.