Next DLP Extends Visibility and Adaptive Controls for Leading Generative AI Tools
Addressing Data Protection Challenges Across Extensive List of Generative AI Tools
Next DLP (“Next”), a leader in insider risk and data protection, announced the extension of the company’s generative AI (“GenAI”) policy templates beyond ChatGPT to include Hugging Face, Bard, Claude, DALL·E, Copy.ai, Rytr, Tome and Lumen5 within the company’s Reveal platform. This extension of visibility and control enables customers to stop data exfiltration, surface risky behavior and educate employees on their use of GenAI tools.
CISOs around the world are grappling with the proliferation of GenAI tools, including text, image, video and code generators. They worry about how to manage and control their use within the enterprise and about the corresponding risk of sensitive data loss through GenAI prompts. Researchers at Next analyzed activity from hundreds of companies during July 2023 and found that:
- 97% of companies had at least one user access ChatGPT
- 8% of all users accessed ChatGPT
- ChatGPT navigation events account for <0.01% of traffic. For comparison, Google navigation events consistently account for 5-10% of traffic.
“Generative AI is running rampant inside organizations, and CISOs have no visibility into, or protection over, how employees are using these tools,” said John Stringer, Head of Product at Next DLP. “Extending our policy templates to cover the top Generative AI tools is driving decision making within our customers’ environments about the risks these tools pose and the security their use requires.”
With these new policies, customers gain enhanced monitoring and protection of employees using the most popular GenAI tools on the market. From educating employees on the potential risks associated with these services to triggering alerts when an employee visits a GenAI tool’s website, security teams can remind employees of, and reinforce, corporate data usage protocols.
In addition, customers can set up a policy to detect the use of sensitive information such as internal project names, credit card numbers, or social security numbers in GenAI conversations, enabling organizations to take preventive measures against unauthorized data sharing. These policies are just two of many possible configurations that protect organizations whose employees are using GenAI tools.
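As a rough illustration of the kind of check such a policy performs, the sketch below scans prompt text for sensitive patterns before it reaches a GenAI tool. This is a minimal, hypothetical example assuming simple regex-based rules; the pattern names, the `Project Falcon` codename, and the `find_sensitive` helper are all illustrative and do not reflect how Next DLP’s Reveal platform is implemented.

```python
import re

# Hypothetical detection rules: each named pattern flags one class of
# sensitive data that should not appear in a GenAI prompt.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # 13-16 digit card-like number
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US Social Security number format
    "project_name": re.compile(r"\bProject\s+Falcon\b", re.I), # hypothetical internal codename
}

def find_sensitive(prompt: str) -> list[str]:
    """Return the names of every pattern that matches the prompt text."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]
```

For example, `find_sensitive("My SSN is 123-45-6789")` flags the prompt as containing an SSN, while ordinary text returns an empty list. A production DLP engine would layer context, validation (e.g. Luhn checks for card numbers), and policy actions on top of raw matching.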