CalypsoAI Launches VESPR, The Leading AI Security Tool Available for U.S. National Security Users
Artificial Intelligence (AI) is changing the world as we know it. But until now, most users could not fully trust the AI tools they use. CalypsoAI, a start-up founded by DARPA veterans, is providing a solution to that problem with the launch of VESPR, the first end-to-end AI security tool.
Created with critical input from existing national security customers and born out of years of independently funded research into adversarial machine learning, VESPR ensures that an enterprise is deploying secure, explainable, and compliant Machine Learning (ML) models.
“VESPR is an exciting next step in bringing trusted and secure AI to our customers in the national security community and other highly regulated industries,” said Neil Serebryany, CEO of CalypsoAI. “CalypsoAI looks forward to continuing to provide innovative solutions, focused on ensuring that AI technology is deployed securely and transparently across enterprises.”
Recognizing that AI is increasingly used across the defense and intelligence communities to make mission-critical decisions, VESPR accelerates AI deployment by giving operators confidence that their systems can withstand an adversarial attack.
Early adopters of VESPR include the U.S. Air Force and the Department of Homeland Security. “CalypsoAI’s VESPR product is an important tool for deploying safe and trustworthy AI solutions across various critical mission sets,” said Tony DeMartino, Chair of CalypsoAI’s National Security Advisory Board. “We look forward to scaling VESPR across the national security community to address the challenge of AI safety and security.”
VESPR provides advanced AI testing capabilities with a streamlined workflow to ensure that every machine learning algorithm put into production has been verified secure. It delivers unparalleled security and assurance across a variety of AI systems, from computer vision to natural language processing.

The VESPR process ensures testing, evaluation, verification, and validation (TEVV) throughout the secure machine learning lifecycle (SMLC), from the research and development phase through model deployment. The end result is AI systems with accurate and comprehensive monitoring and reporting on model capabilities, vulnerabilities, and performance.