More than 1 in 4 Organizations Banned Use of GenAI Over Privacy and Data Security Risks – New Cisco Study
The Cisco 2024 Data Privacy Benchmark Study reveals that most organizations are limiting the use of Generative AI (GenAI) over data privacy and security concerns, and 27% have banned its use, at least temporarily.
Cisco released its 2024 Data Privacy Benchmark Study, an annual review of key privacy issues and their impact on business. Published ahead of International Data Privacy Day, the findings highlight the growing privacy concerns around GenAI, the trust challenges organizations face over their use of AI, and the attractive returns on privacy investment. Drawing on responses from 2,600 privacy and security professionals across 12 geographies, the seventh edition of the Benchmark shows that privacy is much more than a regulatory compliance matter.
Growing Privacy Concerns with Generative AI
“Organizations see GenAI as a fundamentally different technology with novel challenges to consider,” said Dev Stahlkopf, Cisco Chief Legal Officer. “More than 90% of respondents believe GenAI requires new techniques to manage data and risk. This is where thoughtful governance comes into play. Preserving customer trust depends on it.”
Among the top concerns, businesses cited threats to an organization’s legal and intellectual property rights (69%) and the risk of disclosure of information to the public or competitors (68%).
Most organizations are aware of these risks and are putting in place controls to limit exposure: 63% have established limitations on what data can be entered, 61% have limits on which GenAI tools can be used by employees, and 27% said their organization had banned GenAI applications altogether for the time being. Nonetheless, many individuals have entered information that could be problematic, including employee information (45%) or non-public information about the company (48%).
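The controls described here often amount to a pre-submission policy check: an allowlist of approved GenAI tools plus a screen for data that should not be entered. The sketch below is a minimal, hypothetical illustration (not drawn from the study or from any Cisco product); the tool names, patterns, and function names are assumptions chosen for demonstration only.

```python
import re

# Hypothetical allowlist of GenAI tools approved for employee use.
APPROVED_TOOLS = {"internal-llm", "vendor-chat-enterprise"}

# Illustrative patterns for information that should not be entered into
# GenAI tools (employee records, content marked non-public, etc.).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # SSN-like numbers
    re.compile(r"\bemployee\s+id[:#]?\s*\d+\b", re.I),   # employee IDs
    re.compile(r"\bconfidential\b", re.I),               # marked non-public
]

def check_prompt(tool: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed GenAI submission."""
    if tool not in APPROVED_TOOLS:
        return False, f"Tool '{tool}' is not on the approved list."
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            return False, f"Prompt matches restricted pattern: {pattern.pattern}"
    return True, "Prompt allowed."

if __name__ == "__main__":
    ok, reason = check_prompt(
        "vendor-chat-enterprise",
        "Summarize employee id: 4821 performance notes",
    )
    print(ok, reason)  # Blocked by the employee-ID pattern
```

In practice, checks of this kind would typically sit in a proxy or browser extension in front of the GenAI service, alongside the tool allowlists and outright bans that respondents reported.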
Slow Progress on AI and Transparency
Consumers are concerned about how AI uses their data today, and yet 91% of organizations recognize they need to do more to reassure customers that their data is being used only for intended and legitimate purposes in AI. This is similar to last year’s level, suggesting that little progress has been made.
Organizations’ priorities for building consumer trust differ from those of individuals. Consumers identified their top priorities as getting clear information on exactly how their data is being used and not having their data sold for marketing purposes. When asked the same question, businesses identified their top priorities as complying with privacy laws (25%) and avoiding data breaches (23%). This suggests that additional attention to transparency would be helpful, especially with AI applications, where it can be difficult to understand how the algorithms make their decisions.
Privacy and Trust: the Role of External Certifications and Laws
Organizations recognize the need to reassure their customers about how their data is being used, and 98% said that external privacy certifications are an important factor in their buying decisions. This is the highest level we have seen over the years of the study.
“94% of respondents said their customers would not buy from them if they did not adequately protect data,” explains Harvey Jang, Cisco Vice President and Chief Privacy Officer. “They are looking for hard evidence the organization can be trusted. Privacy has become inextricably tied to customer trust and loyalty. This is even more true in the era of AI, where investing in privacy better positions organizations to leverage AI ethically and responsibly.”
Despite the costs and requirements privacy laws may impose on organizations, 80% of respondents said privacy laws have had a positive impact on them, and only 6% said the impact has been negative. Strong privacy regulation boosts consumer confidence and trust in the organizations they choose to share their data with.
Further, many governments and organizations are putting in place data localization requirements that keep certain data within a country or region. While most businesses (91%) believe their data would be inherently safer if stored within their own country or region, 86% also said that a global provider operating at scale can protect their data better than a local provider.
Privacy: a Valuable Investment
Over the past five years, privacy spending has more than doubled, benefits have trended up, and returns have remained strong. This year, 95% of respondents indicated that privacy’s benefits exceed its costs, and the average organization reports privacy benefits worth 1.6 times its spending. Further, 80% indicated getting significant “Loyalty and Trust” benefits from their privacy investments, and this is even higher (92%) for the most privacy-mature organizations.
In 2023, the largest organizations (10,000+ employees) increased their privacy spending by seven to eight percent compared with the prior year. Smaller organizations, however, invested less: businesses with 50-249 employees, for example, cut their privacy investment by a quarter on average.