AiThority Interview with Jerry Caviston, CEO at Archive360
In this AiThority interview, Jerry Caviston, CEO at Archive360, discusses how AI adoption should be planned with the aim of augmenting, not merely automating, business processes:
______
Hi Jerry, tell us about your time in SaaS and your role at Archive360
I’ve been working in the storage, security, and archiving space for my entire career. Notably, at EMC I was part of the first-to-market dedicated archiving platform, Centera, where I realized the need to manage and govern data was going to grow exponentially. My experience working with archival data there and at Iron Mountain convinced me that the legacy archive model, where vendors lock customers into proprietary systems, take ownership of their data, and focus only on historical data, had to end. Enterprises needed archive solutions that enable them to easily ingest data from any source, not just from specific applications, and to efficiently govern and format that data so it can be safely and readily leveraged by AI and analytics to create value.
We’d love to know more about some of your products’ latest enhancements
The Archive360 Modern Archiving Platform empowers enterprises and government agencies to unlock the full potential of their archival assets with extensive data governance, security, and compliance capabilities, priming that data for intelligent insights. It transforms the archive from a moribund cost center into a valuable, AI-ready data cloud.
The platform ingests data from all enterprise applications, modern communications, and legacy ERP into a data-agnostic, compliant active archive that feeds AI and analytics. It enables organizations to control how AI and analytics consume information from the archive, and it simplifies the process of connecting to and ingesting data from any application, so organizations can start realizing value faster. These capabilities reduce the risks AI can pose to an organization: inadvertently exposing regulated data or company trade secrets, or simply ingesting faulty and irrelevant data. As a result, the enterprise can provide AI with the most relevant data from today alongside relevant information from the past, all while remaining in full control of data access and permissions.
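To make the idea of controlling what AI consumes from the archive concrete, here is a minimal sketch in Python. It is not Archive360's actual API; the record fields, classification tags, and policy are illustrative assumptions. It shows a governance gate that releases only the records an AI pipeline is permitted to see:

```python
from dataclasses import dataclass, field

@dataclass
class ArchiveRecord:
    # Illustrative record shape; a real archive carries far richer metadata.
    record_id: str
    source_app: str
    classification: set = field(default_factory=set)  # e.g. {"pii", "trade_secret"}
    body: str = ""

# Classifications that must never reach an AI or analytics consumer.
BLOCKED_CLASSIFICATIONS = {"pii", "trade_secret", "regulated"}

def ai_consumable(records):
    """Yield only records whose classification tags permit AI ingestion."""
    for record in records:
        if record.classification & BLOCKED_CLASSIFICATIONS:
            continue  # withhold sensitive records from the AI pipeline
        yield record

records = [
    ArchiveRecord("r1", "erp", {"regulated"}, "quarterly ledger"),
    ArchiveRecord("r2", "email", set(), "project status update"),
]
print([r.record_id for r in ai_consumable(records)])  # ['r2']
```

In practice, a gate like this would sit between the archive's query layer and any AI or analytics consumer, so access policy is enforced in one place.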
For organizations to unlock the full potential of their existing data, what practices should they follow (besides deploying supporting technologies to help them extract insights as needed)?
First, they need to understand what data they have and create a centralized means by which AI and analytics can access it. But that’s just the beginning. Not all data should be exposed to AI. Data that contains personally identifiable information, for instance, needs to be masked to comply with privacy regulations, and organizations do not want to expose regulated data or trade secrets. Organizations need a way to efficiently govern their data so they can ensure compliance and mitigate risk. Finally, IT needs an automated means of formatting data so that it’s ready for analytics and AI platforms — tackling this job manually will significantly slow time to value.
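As a concrete illustration of the masking step, here is a simplified sketch; the regex patterns are assumptions and nowhere near production-grade PII detection. It redacts common identifiers before data leaves the archive:

```python
import re

# Illustrative patterns only; production PII detection needs far more coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected identifiers with typed placeholders before AI exposure."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
# Reach Jane at [EMAIL] or [PHONE], SSN [SSN].
```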
For enterprises looking for a platform to activate their archive data for AI, what deployment tips should they keep in mind?
Instead of deploying multiple, disconnected point solutions for data from different types of applications, the organization should deploy a single platform that enables a data-centric approach, one that activates data, reduces technical debt, enhances data compliance, and accelerates AI readiness.
Ensure that the platform and associated tools have cloud-native support for enterprise databases, such as SAP, Oracle, and SQL Server. It should enable streamlined ingestion and governance of structured data alongside unstructured content to provide a unified view across the organization’s data landscape.
The archive should also come with built-in connectors to leading analytics and AI platforms such as Snowflake, Power BI, ChatGPT and OpenAI to ensure that all archived data is available for analysis.
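As a rough sketch of what "available for analysis" can look like in practice, the snippet below pushes governed archive records into Snowflake using the publicly available snowflake-connector-python package; the credentials, table name, and record shape are illustrative assumptions, not Archive360 specifics:

```python
import snowflake.connector  # pip install snowflake-connector-python

# Illustrative connection details; substitute values for your environment.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="ANALYTICS_WH", database="ARCHIVE_DB", schema="PUBLIC",
)

rows = [  # records already governed and masked upstream
    ("r2", "email", "project status update"),
]

cur = conn.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS ARCHIVE_FEED "
    "(record_id STRING, source_app STRING, body STRING)"
)
cur.executemany(
    "INSERT INTO ARCHIVE_FEED (record_id, source_app, body) VALUES (%s, %s, %s)",
    rows,
)
cur.close()
conn.close()
```

A built-in connector would hide this plumbing, but the flow is the same: governed records leave the archive already shaped for the analytics platform.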
What bottlenecks do business and enterprise teams face when storing and archiving data across functions today?
One of the biggest challenges is the siloed nature of most archiving systems. Typically, vendors’ systems only work with specific applications, and that makes it extremely difficult to govern and provide access to archive data.
Another big issue is that most archive platforms lock customers into proprietary systems, and, even worse, take ownership of their data. Moving to another system means losing access to previously archived data without paying enormous fees and spending a great deal of time and resources extracting it.
How will the mainstreaming of AI reshape business processes and the roles that support them, particularly in how organizations manage, govern, and leverage data?
From our vantage point, AI adoption isn’t just about automation—it’s about augmentation. Business processes will evolve to become more predictive, adaptive, and data-driven. For example, compliance monitoring, legal hold, and data classification—once manual and reactive—will shift to real-time, proactive, and AI-enabled processes. This will reduce human error, increase speed, and unlock new efficiencies.
Roles will also transform. New responsibilities will emerge around AI governance, data quality, and ethical oversight. We’re already seeing job titles like “AI Compliance Officer” or “Data Risk Analyst” gain traction—roles that didn’t exist a few years ago.
We see a future where the archive itself becomes intelligent—not just a place to store information, but a strategic asset that feeds secure, curated data into AI models. By doing this, businesses can ensure that the AI systems they deploy are both high-performing and compliant.
Five takeaways you’d leave with our readers to prepare for an AI-powered business future?
- The archive doesn’t have to be a cost center for storing data that never gets accessed: In the AI era, archive data can become a treasure trove for generating valuable insights.
- AI shouldn’t ingest data indiscriminately: Some archive data will fall under strict regulations governing how it can be used and exposed, with severe penalties for noncompliance. Organizations need an efficient means of governing the archive so that AI only ingests appropriate data.
- Data silos are a significant barrier to realizing value with AI: Storing data in discrete archives segregated by application makes it extremely difficult to govern that data and provide AI with access.
- Own and control the data: Data is often an enterprise’s most valuable asset. IT shouldn’t hand control and ownership of it to a vendor who will force them to spend exorbitant amounts of time and money to extract it from their platform should they choose to work with another vendor.
- Ensure data can be efficiently formatted for AI and analytics: Data isn’t useful to AI and analytics unless it’s in a format these technologies can use. Formatting should be automated to reduce costs and accelerate time to value (a minimal sketch follows this list).
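As a minimal sketch of that automation (the record fields and the JSONL target format are illustrative assumptions), the function below normalizes heterogeneous archive records into one JSON line per record, a shape most AI and analytics pipelines ingest directly:

```python
import json
from datetime import datetime, timezone

def to_jsonl(records):
    """Normalize heterogeneous archive records into one JSONL line per record."""
    lines = []
    for rec in records:
        normalized = {
            "id": str(rec.get("id", "")),
            "source": rec.get("source", "unknown"),
            # Different source systems name the content field differently.
            "text": (rec.get("body") or rec.get("message") or "").strip(),
            "archived_at": rec.get("archived_at")
                           or datetime.now(timezone.utc).isoformat(),
        }
        lines.append(json.dumps(normalized, ensure_ascii=False))
    return "\n".join(lines)

print(to_jsonl([
    {"id": 1, "source": "email", "message": "Q3 forecast attached."},
    {"id": 2, "source": "erp", "body": "PO-1042 approved."},
]))
```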
Jerry Caviston is CEO at Archive360
Archive360 is a cloud-based modern archiving platform that integrates with your data ecosystem and existing security, compliance, analytics, and AI tools.