
AiThority Interview with Jeff Denworth, VP Products and Co-Founder at Vast Data


Know My Company

How do you interact with AI and the other intelligent technologies you work with in your daily life?

We’re fortunate to be living on the cutting edge of IT. We provide systems to research environments that are pioneering the use of new Deep Learning and Machine Learning algorithms, and we get to see firsthand how these algorithms are operationalized in customers’ production computing environments for real-time inference and decisions.

Read More: AiThority Interview Series with Kobi Marenko, CEO and Co-Founder at Arbe

How did you start in this space? What inspired you to found Vast Data?

We’re a collection of storage engineers who identified two opportunities. At the infrastructure level, we saw that a set of new technologies – specifically low-cost flash, storage class memory, and NVMe-over-Fabrics storage networking – could be used to build an altogether new type of storage architecture, one that breaks decades of storage tradeoffs and forms a single tier of storage built entirely from high-performance flash media, but at the cost of an HDD-based archive. This opportunity married well with what we saw happening at the application level, where new classes of Machine Learning and Deep Learning codes were challenging the way customers had historically stored their data. These new codes require the fastest access to the largest amounts of data, which makes the 30-year-old storage tiering model essentially obsolete in the AI era.

Why are enterprises placing such a huge emphasis on moving from on-premises database servers to the Cloud?

This is not our observation. In fact, we have found that a number of customers prototype their Deep Learning training environments on a cloud platform that’s easy to spin up. However, when the investments get meaningful and the storage demands of production workloads require innovative approaches to keep GPUs active and efficient, customers start to build out their training and inference environments on-premises, first to save on time to market, but also to save significantly on cost. IDC calls this dynamic “cloud repatriation,” and the performance demands of AI infrastructure are one of the leading drivers of the build-up of on-premises capabilities.

Read More: AiThority Interview Series with Dr. Massimiliano Versace, CEO, and Co-Founder at Neurala

What is the current biggest threat to Enterprise Data?

Gravity. What we find is that customers are storing data across various independent silos of infrastructure, and as capacities grow, the weight of this data and its inability to be moved limit an organization’s ability to mine, refactor, and derive correlations and value from vast reserves of data.

With hundreds of parallel but crucial components, how easily can enterprises adapt to Vast Data’s offerings?

To stay agile, VAST builds from commodity components and infuses its architecture with many web-scale infrastructure concepts. That said, the solution is delivered as a black-box appliance that VAST can support remotely just as easily as our customers can. We routinely deliver systems measured at petabyte scale that take only a few hours to get into production.

Read More: AiThority Interview Series With Scot Marcotte, Chief Technology Officer at Buck

Which industries can benefit the most from Vast Data’s products?

We’re finding success in markets that live at the intersection of storage performance and storage capacity. For example, in financial services, we’re helping many of the world’s leading trading firms optimize their algorithmic agendas, while in the health sciences we’re helping firms advance the state of the art in genomics, proteomics, neurology, and digital pathology. Auto manufacturers build autonomous capabilities into their consumer and industrial products using VAST, and we help governments prevent threats before they happen. We enable content companies to create and distribute rich content, and we help enterprises protect their critical data assets – all with one system.

What are the biggest challenges and opportunities for other companies in dealing with rising technology prices?

Again, this is also not our observation. In the past 18 months, we’ve seen the cost of commodity flash storage decline by 75%, which is now validating our design thesis around enabling the use of hyperscale flash – without the overhead of conventional enterprise storage – to bring cost points to a level that’s competitive with mechanical (HDD-based) storage. Similarly, we’re expecting a continued decline over the next 18 months, which makes things very exciting.
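
To make the economics concrete, here is a back-of-the-envelope sketch in Python. Only the 75% decline comes from the interview; every dollar figure and the 4:1 data-reduction ratio below are illustrative assumptions, not vendor pricing.

```python
# Back-of-the-envelope flash-vs-HDD cost sketch.
# Only the 75% decline is from the interview; the $/TB figures and the
# 4:1 reduction ratio are illustrative assumptions, not quoted prices.

flash_per_tb_then = 400.0                      # assumed $/TB, 18 months ago
decline = 0.75                                 # 75% decline cited above
flash_per_tb_now = flash_per_tb_then * (1 - decline)   # -> $100/TB

hdd_per_tb = 25.0                              # assumed $/TB for raw HDD

# If data reduction and erasure-coding efficiency net an assumed 4:1
# effective-capacity advantage, flash approaches HDD-class economics:
effective_flash_per_tb = flash_per_tb_now / 4.0        # -> $25/TB

print(f"raw flash now:   ${flash_per_tb_now:.0f}/TB")
print(f"effective flash: ${effective_flash_per_tb:.0f}/TB vs HDD ${hdd_per_tb:.0f}/TB")
```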

Read More: AiThority Interview Series with Stuart Brock, Director at Seal Software

What is one interesting use case where Vast Data’s capabilities benefited an enterprise?

In the AI space, our customers have realized that Deep Learning and Machine Learning are essentially a ‘random read’ I/O problem, and that there’s no way to pre-fetch data to service a random read request. These same customers have come from a place where they built hybrid, tiered storage architectures and were not benefiting from the value of flash storage, as random read requests kept going to their HDD tier of infrastructure. We’re working with a large automotive company, helping them understand that the only way to realize consistent performance is a move to all-flash, and that VAST is the most cost-effective way to build out petabytes of training infrastructure.
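
To see why training traffic looks like random reads to the storage layer, consider this minimal Python sketch of a shuffled-epoch data loader. The class and names are illustrative stand-ins, not any specific framework’s or VAST’s API; the point is that a shuffled epoch gives the storage system no sequential pattern to prefetch.

```python
import random

class TrainingSet:
    """Stand-in for a dataset whose samples live on shared storage."""
    def __init__(self, num_samples):
        self.num_samples = num_samples

    def read_sample(self, index):
        # In a real system this is a read at an arbitrary storage offset.
        return f"sample-{index}"

dataset = TrainingSet(num_samples=1_000_000)

# Each epoch visits every sample in a freshly shuffled order, so the
# storage layer sees reads at unpredictable offsets: there is no
# sequential pattern for a prefetcher or an HDD tier to exploit.
order = list(range(dataset.num_samples))
random.shuffle(order)
for index in order[:5]:               # first few reads of the epoch
    batch = dataset.read_sample(index)
    print(batch)
```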

How does Vast Data leverage AI? Would we see Vast Data developing hardware for AI?

Internally, the system uses statistical methods to determine how to predictively place data onto flash, ensuring that low-cost drives can be deployed for up to a decade. In terms of our application focus, our original and enduring vision is to make it possible to elevate all of an organization’s data onto a scalable, affordable, and fast tier of NVMe flash, which best serves organizations looking to apply AI to their vast data reserves.
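
As a rough illustration of the idea behind predictive placement, here is a hypothetical Python sketch. The scoring heuristic, names, and threshold are all assumptions made for illustration, not VAST’s actual statistical methods; the point is simply that data predicted to be stable can be committed to low-endurance flash, stretching drive life.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    """Hypothetical unit of data being placed; fields are illustrative."""
    age_seconds: float       # time since the data was written
    overwrite_count: int     # how often this range has been rewritten

def predicted_stability(extent: Extent) -> float:
    # Assumed heuristic: old, rarely overwritten data is likely to stay
    # put, so it is safe to commit to low-endurance flash cells.
    return extent.age_seconds / (1 + extent.overwrite_count)

def place(extent: Extent, threshold: float = 3600.0) -> str:
    if predicted_stability(extent) > threshold:
        return "low-endurance flash (long-lived data)"
    return "write buffer (storage class memory)"

print(place(Extent(age_seconds=86_400, overwrite_count=0)))  # -> low-endurance flash
print(place(Extent(age_seconds=60, overwrite_count=5)))      # -> write buffer
```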

Read More: AiThority Interview Series with Jeff Epstein, VP of Product at Comm100

Which events and webinars would you suggest to our readers as the best for learning about emerging technologies?

From an infrastructure perspective, NVIDIA’s GPU Technology Conference (GTC) is a periodic state of the union on what’s going on in AI infrastructure.

What is your take on the weaponization of AI?

For decades, technology has always been weaponized – AI will be no different. Fortunately, the world is also inherently good, and technology has likewise been a tool to counterbalance bad actors and to protect us from the weaponization of technology.

Where do you see AI/Machine Learning and other smart technologies heading beyond 2025?

The rate of technology change we’re experiencing right now is so intense that any guess at this point would surely underestimate the progress we’ll make over the next five years.

Read More: AiThority Interview Series with Daniel Clark, CEO at Brain.fm

What start-ups are you keenly following?

In the past five years, there has been such a flood of AI investment that it’s impossible to track it all. Organizations are thinking of amazing ways to use AI to change the way we interact with the world. One example is a VAST Data customer, Zebra Medical, a healthcare startup working to augment the patient care path by using Deep Learning to detect medical issues from radiology scans.

Tag the one person in the industry whose answers to these questions you would love to read.

Jeffrey Dean, Google Senior Fellow in the Research Group, who leads the Google Brain project.

Thank you, Jeff! That was fun, and we hope to see you back on AiThority soon.

Jeff Denworth is the VP of Products and Marketing and a co-founder of VAST Data. Jeff has over a decade of experience with big data and cloud storage technologies, including a role at CTERA. Prior to CTERA, Jeff served as VP of Marketing at DataDirect Networks (DDN), where he oversaw marketing, business, and corporate development during a time of rapid sales growth. Before DDN, Jeff held sales roles at Cluster File Systems, Inc. and Dataram Corporation.


Headquartered in New York City, VAST Data is a storage company breaking decades-old storage tradeoffs to bring an end to complex storage tiering and HDD usage in the enterprise. With VAST, customers can now consolidate applications onto a single tier of storage that meets the performance needs of the most demanding workloads, is scalable enough to manage all of a customer’s data and is affordable enough that it eliminates the need for complex storage tiering and archiving of data to slow storage systems.
