Generative AI – The Under-Appreciated Consequences for Data Security
Not a week goes by without a new headline about how another business was breached, exposing data it was entrusted to protect. The new world of Generative AI (GenAI) has exacerbated data security risks in a way never seen before. We are on the precipice of a domino effect, one that could dismantle the very notion of access control for enterprise data.
With the advent of new technology comes new forms of risk. Although the dangers of deliberate exploitation are far from trivial, our focus here is not on attacks but on the collateral risks that arise when GenAI works exactly as intended – risks with significant implications for data security.
Why Is GenAI Different?
There are important characteristics that set GenAI apart from earlier AI innovations. These include:
Representation learning – Unlike the traditional machine learning (ML) paradigm, in which domain experts must define task-specific features and transform raw data accordingly, GenAI models are built on deep learning techniques whose algorithms work directly on raw data and infer the relevant features themselves.
General-purpose models – With ML, every automation task needs a dedicated modeling effort. On the other hand, GenAI models are designed to characterize distributions of input data, regardless of the eventual task that the model might be used for. Such “foundation models” are trained on large “humanity-scale” datasets, such as the entirety of content on the internet or the collection of all published works, which allows models to reflect an understanding of our world. Larger datasets (and larger model sizes) typically lead to better model accuracy.
Instruction following – The final piece is the ability for users to interact with GenAI models using natural language. Users can leverage the same general-purpose model for different tasks by providing appropriate instructions in natural language – without having to learn a new programming language or advanced skills in mathematics and statistics.
While these differences might seem just technical, they have profound implications for how GenAI can be used and by whom. GenAI democratizes intelligent automation by lowering the barriers between business teams and technology teams. It allows employees to bypass IT and technology teams to experiment with the power of AI. An empowered workforce with access to GenAI tools enables rapid cycles of experimentation and exploration, boosting adoption and innovation. However, this acceleration also magnifies organizational risk, especially for one of cybersecurity’s most fundamental principles – the principle of least privilege.
Upending the Principle of Least Privilege
Least privilege access is a security model where any entity – a user, a program, or a process – is given only the minimum level of access and permissions to resources necessary to perform its legitimate functions, and nothing more. For traditional software automating well-defined tasks, scoping permissions in this way is a tractable problem. For users and employees, it is often accomplished through role-based access controls (RBAC); in practice, however, most organizations struggle to implement these effectively.
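The idea can be made concrete with a minimal sketch. The roles, permissions, and resource names below are hypothetical examples, not drawn from any particular product:

```python
# Minimal role-based access control (RBAC) sketch illustrating least
# privilege: each role is granted only the permissions it explicitly needs.
# Role and permission names here are hypothetical.
ROLE_PERMISSIONS = {
    "analyst": {"read:sales_reports"},
    "hr_manager": {"read:employee_records", "write:employee_records"},
}

def can_access(role: str, permission: str) -> bool:
    """Deny by default: grant only what the role explicitly lists."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "read:sales_reports"))    # True
print(can_access("analyst", "read:employee_records")) # False
```

The model works precisely because each role's duties are known in advance – the property that, as discussed below, GenAI undermines.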
GenAI, however, exacerbates the struggle by upending the entire paradigm. Least privilege becomes a constraint that conflicts with the very way these systems are designed to operate: GenAI works better when it has access to more data, and enterprise GenAI tools deliver better results and higher productivity gains when given more business data and business context. And because the technology can be applied in so many ways, users keep finding new applications for it – most emerging from organic experimentation and curiosity rather than top-down, business-driven planning.
If an entity cannot be scoped by the tasks it will perform or the types of data it needs, it becomes infeasible to configure least privilege permissions for it. Moreover, a user may have legitimate access to a dataset and provide it as input to a GenAI tool. But once ingested, that data is no longer bound by the user's original permissions: it can be absorbed into the model, surfaced in future outputs, or become accessible to others using the same tool. In short, GenAI does not necessarily inherit the access controls of its input data, rendering least privilege effectively unenforceable.
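The leakage pattern described above can be sketched in a few lines. This is a deliberately naive, hypothetical tool – the file name, users, and class are illustrative only – showing how an access check performed once at ingestion fails to protect the data afterwards:

```python
# Illustrative sketch of how a shared GenAI tool can sidestep per-user
# access controls. All names (users, documents, class) are hypothetical.
DOCUMENT_ACL = {"q3_forecast.xlsx": {"alice"}}  # only alice may read this file

class NaiveGenAITool:
    def __init__(self):
        # Shared context with no provenance or ACL tracking.
        self.context = []

    def ingest(self, user: str, doc: str) -> None:
        # The ACL is checked once, at ingestion time only.
        if user in DOCUMENT_ACL.get(doc, set()):
            self.context.append(doc)

    def answer(self, user: str, question: str) -> str:
        # The original ACL is never consulted here: any user who can query
        # the tool effectively sees everything in the shared context.
        return f"Answer for {user} to {question!r} using {self.context}"

tool = NaiveGenAITool()
tool.ingest("alice", "q3_forecast.xlsx")          # legitimate access
print(tool.answer("bob", "What is the Q3 forecast?"))  # bob sees alice's data
```

Real enterprise tools are more sophisticated, but the structural gap is the same: permissions attach to the data at rest, not to the model's internal state or outputs.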
A New Layer of Enterprise Infrastructure
GenAI tools are a new layer of the enterprise infrastructure – one that needs expansive access to data and systems, and whose applications and usage cannot be planned or forecasted reliably. What are the implications for business leaders? It boils down to laying a solid foundation based on good data hygiene, starting with complete visibility of all data assets within an enterprise, understanding their sensitivity levels, and appropriately classifying them by tagging with sensitivity labels. While this has remained an unfulfilled promise of data loss prevention (DLP) systems, newer technologies and platforms have emerged (e.g., data security posture management) that leverage GenAI to help address these challenges.
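The classification step can be illustrated with a toy sketch. The patterns and labels below are hypothetical and vastly simpler than what data security posture management platforms actually use; the point is only the shape of the task – scanning assets and tagging them with sensitivity labels:

```python
import re

# Toy data-classification sketch: tag content with a sensitivity label
# based on simple patterns. Labels and patterns are hypothetical examples;
# real systems use far richer signals than regular expressions.
PATTERNS = {
    "restricted": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like number
    "confidential": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
}

def classify(text: str) -> str:
    """Return the first matching sensitivity label, defaulting to 'internal'."""
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            return label
    return "internal"

print(classify("Employee SSN: 123-45-6789"))   # restricted
print(classify("Contact: jane@example.com"))   # confidential
print(classify("Quarterly roadmap notes"))     # internal
```

An inventory built this way – every asset discovered, scanned, and labeled – is the prerequisite for any policy that governs what GenAI tools may ingest.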
Balancing Opportunity and Risk
The public conversation around GenAI has primarily focused on two themes – its operational implications, such as the path to artificial general intelligence (AGI) and potential job displacement; and its technical vulnerabilities, like prompt injection or adversarial attacks. While these debates are important, they often overshadow the more immediate and systemic challenge of data security. Unlike speculative risks about AGI timelines or niche technical exploits, the exposure of sensitive enterprise data through GenAI is happening now, at scale, and with far-reaching consequences. Without decisive action, organizations risk not only regulatory penalties and financial losses, but also the erosion of customer trust and competitive position. Business leaders who recognize this shift will be best positioned to enable innovation while safeguarding trust.
About The Authors Of This Guest Article:
This article was authored by Madhu Shashanka, Chief Scientist and Co-Founder, Concentric AI; and Lane Sullivan, Chief Information Security and Strategy Officer, Concentric AI.