DynamoFL Raises $15.1 Million Series A to Scale Privacy-Focused Generative AI for the Enterprise
DynamoFL announced that it has closed a $15.1 million Series A funding round to meet demand for its privacy- and compliance-focused generative AI solutions. Coming off a $4.2M seed round, the company has raised $19.3M to date. DynamoFL’s flagship technology, which allows customers to safely train Large Language Models (LLMs) on sensitive internal data, is already in use by Fortune 500 companies in the finance, electronics, insurance, and automotive sectors.
The round, co-led by Canapi Ventures and Nexus Venture Partners, also saw participation from Formus Capital, Soma Capital, and angel investors including Vojtech Jina, Apple’s privacy-preserving machine learning (ML) lead; Tolga Erbay, Head of Governance, Risk and Compliance at Dropbox; and Charu Jangid, product leader at Snowflake.
The need for AI solutions that preserve compliance and security has never been greater. LLMs present unprecedented privacy and compliance risks for enterprises. It has been widely shown that LLMs can memorize sensitive data from their training datasets, and malicious actors can exploit this vulnerability with carefully designed prompts to extract users’ personally identifiable information and sensitive contract values, posing a major data security risk for the enterprise. Meanwhile, the rapid pace of AI innovation and adoption is set against a fast-changing global regulatory landscape: the GDPR and the impending EU AI Act in Europe, similar initiatives in China and India, and emerging AI regulation in the US all require that enterprises document these data risks. Yet enterprises today are not equipped to detect and address the risk of data leakage.
Clearly, more needs to be done. As government agencies like the FTC explore concerns around LLM providers’ data security, DynamoFL’s machine learning privacy research team recently showed how personal information – including sensitive details about C-suite executives, Fortune 500 corporations, and private contract values – could be easily extracted from a fine-tuned GPT-3 model. DynamoFL’s privacy evaluation suite provides out-of-the-box testing for data extraction vulnerabilities and automated documentation to ensure enterprises’ LLMs are secure and compliant.
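For illustration, a data-extraction probe of this kind can be as simple as prompting a fine-tuned model with targeted prefixes and checking whether known sensitive strings appear in its completions. The sketch below is a minimal, hypothetical example, not DynamoFL’s actual tooling; the model name, probe prompts, and “canary” secrets are placeholders.

```python
# Minimal sketch of a data-extraction probe against a fine-tuned causal LM.
# MODEL_NAME, probe_prompts, and canary_secrets are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for an internally fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Prompts crafted to elicit memorized training records (illustrative only).
probe_prompts = [
    "The total contract value agreed with Acme Corp is",
    "Jane Doe's personal email address is",
]
# Known sensitive strings ("canaries") that should never appear in output.
canary_secrets = ["$4,750,000", "jane.doe@example.com"]

for prompt in probe_prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
    completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
    leaked = [s for s in canary_secrets if s in completion]
    print(f"Prompt: {prompt!r} -> leaked: {leaked or 'none detected'}")
```

In practice, evaluation suites run many such probes and report which training records a model reproduces verbatim, so that leakage can be documented and remediated before deployment.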
“We deploy our suite of privacy-preserving training and testing offerings to directly address and document compliance requirements to help enterprises stay on top of regulatory developments, and deploy LLMs in a safe and compliant manner,” said DynamoFL co-founder Christian Lau.
“Privacy and compliance are critical to ensuring the safe deployment of AI across the enterprise. These are foundational pillars of the DynamoFL platform,” said Greg Thome, Principal at Canapi. “By working with DynamoFL, companies can deliver best-in-class AI experiences while mitigating the well-documented data leakage risks. We’re excited to support DynamoFL as they scale the product and expand their team of privacy-focused machine learning engineers.”
The company’s solutions help organizations privately fine-tune LLMs on internal data while identifying and documenting potential privacy risks. Organizations can adopt DynamoFL’s end-to-end suite or implement its Privacy Evaluation Suite, Differential Privacy, and Federated Learning modules individually.
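As a rough illustration of the federated learning approach, the sketch below shows a single FedAvg round in plain PyTorch: each client trains a local copy of the model on its own private data, and only the resulting weights – never raw records – are averaged centrally. This is a generic sketch, not DynamoFL’s implementation; the model, client datasets, and hyperparameters are synthetic placeholders.

```python
# Generic FedAvg sketch: clients share weights, not data (illustrative only).
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model, client_loader, lr=0.01, epochs=1):
    """Fine-tune a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in client_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict()

def fed_avg(client_states):
    """Average client weights into a new global state dict."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        for state in client_states[1:]:
            avg[key] = avg[key] + state[key]
        avg[key] = avg[key] / len(client_states)
    return avg

# One communication round over synthetic "private" client datasets.
global_model = nn.Linear(16, 2)  # placeholder model
client_loaders = [
    DataLoader(TensorDataset(torch.randn(32, 16), torch.randint(0, 2, (32,))),
               batch_size=8)
    for _ in range(3)
]
client_states = [local_update(global_model, loader) for loader in client_loaders]
global_model.load_state_dict(fed_avg(client_states))
```

Differential privacy can be layered on top of such training loops (for example by clipping and noising per-sample gradients) to further bound what any single record can reveal.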
DynamoFL was founded by two MIT PhDs who have spent the last six years researching the cutting-edge, privacy-focused AI and ML technology that forms the basis of the company’s core offerings. The team combines expertise in the latest privacy-preserving ML research, with researchers and engineers from MIT, Harvard, and UC Berkeley, and experience deploying enterprise-grade AI applications at Microsoft, Apple, Meta, and Palantir, among other top tech companies.
“This investment validates our philosophy that AI platforms need to be built with a focus on privacy and security from day one in order to scale in enterprise use cases,” said DynamoFL CEO and co-founder Vaikkunth Mugunthan. “It also reflects the growing interest and demand for in-house Generative AI solutions across industries.”
“While AI holds tremendous potential to transform every industry, the need of the hour is to ensure that AI is safe and trustworthy. DynamoFL is set to do just that and enable enterprises to adopt AI while preserving privacy and remaining regulation-compliant,” said Jishnu Bhattacharjee, Managing Director, Nexus Venture Partners. “We are thrilled to have partnered with Vaik, Christian and team in their journey of building an impactful company.”