Know My Company
How did you arrive at NVIDIA? Which technologies are you currently working on there?
The world’s data is doubling every year and holds the key to transforming every industry and our lives. AI and Machine Learning can help make sense of this data but require tremendous amounts of computing. With the end of Moore’s law, completely new approaches are needed to process the incredible amount of data available. NVIDIA solved this challenge with the invention of GPU-accelerated computing, providing a clear path forward.
In my current role, I focus on GPU-accelerated computing in the data center, helping customers transform their businesses with AI and High-Performance Computing (HPC). We’re focused on delivering a full platform, including powerful computing hardware infrastructure along with software and tools that allow developers and data scientists to make use of these offerings.
What has your journey into AI, ML, and Data Science been like?
Throughout my career, I’ve been fortunate to have had the opportunity to work on several innovative technologies that have transformed the world.
It started nearly 20 years ago in Mobile Computing. My first engineering job was to create a mobile Internet browser just as the combination of 3G and smartphones was about to revolutionize personal computing.
After that, I had the opportunity to work on Cloud Computing and applications based on it. We created products for always-on, any-device enterprise collaboration delivered as software as a service. For the first time, online meetings gave employees the freedom to collaborate with anyone, anywhere in the world, and that really inspired me.
And now I’m very excited to be working at NVIDIA just as modern AI and Deep Learning are driving the greatest technological breakthroughs of our generation.
NVIDIA recently crushed eight AI performance records. What was your role, and what skills did your team display?
Waiting isn’t fun, but that’s often what the world’s leading AI researchers and Data Scientists must do. They often wait for hours to test and retrain their models. That’s why leadership in AI demands leadership in AI infrastructure. A strong AI infrastructure helps reduce wait times, enabling researchers to rapidly train their AI models and be the first to market with their products. This is why the performance records set by NVIDIA matter.
MLPerf measures the time to train AI models in six categories: image classification, object detection (heavyweight), object detection (lightweight), translation (recurrent), translation (non-recurrent), and reinforcement learning. NVIDIA submitted for all six tests and set eight AI performance records: three at maximum scale using multiple server nodes and five measured on a per-accelerator basis.
We approached this problem holistically, making full use of our hardware, data center-scale design, and software prowess to achieve these results. Through software improvements alone, we achieved up to 80% better performance on our MLPerf submissions in just seven months. And when combined with better scaling on state-of-the-art infrastructure powered by the DGX SuperPOD, we achieved up to 5X higher performance.
How do you manage to deliver both speed and accuracy to customers and partners?
NVIDIA is focused on democratizing AI. The software used to accomplish our record-setting performance is available free of charge on the NVIDIA GPU Cloud (NGC), our hub for containerized software. It includes the optimizations NVIDIA made to deep learning frameworks such as TensorFlow, MXNet, and PyTorch, as well as training scripts that customers can further customize for their own datasets and applications.
Our AI platform is also accessible in every computing form factor from our partners. It's available in the cloud from all leading cloud providers, in servers from all major data center and edge server makers, and in deskside systems for developers.
Back to the Deep Learning realm, what is Accelerated Computing? How is it different from Cloud Computing and Edge?
Moore’s law is over. In the absence of automatic performance gains provided by Moore’s law, new approaches to computer architecture are needed to meet the insatiable computing demands of modern applications like AI and Deep Learning.
GPU-accelerated Computing, invented by NVIDIA, combines the highly specialized parallel processor called the GPU with a CPU to dramatically improve application performance. While GPU-accelerated computing starts with the GPU, it continues through system design, system software, algorithms, and optimized applications. All aspects are critical.
Since compute-intensive applications like AI and Machine Learning will be deployed everywhere, both in the Cloud and at the Edge, accelerated computing is needed everywhere. It's up to each organization to determine the right combination to best serve their customers and optimize their operations.
Which businesses are currently the fastest to adopt Accelerated Computing for AI? How are they benefiting from NVIDIA’s Accelerated Computing platform?
Pioneers of accelerated computing and AI include consumer internet companies, healthcare, manufacturing, transportation, retail, telecommunications, smart cities, and scientific computing, but virtually every industry is now adopting accelerated computing.
The NVIDIA platform supports end-to-end AI development, from data processing to training AI models and deploying them in applications.
For AI model training, the NVIDIA platform essentially provides a time machine by dramatically reducing training time. This helps data scientists and researchers iterate faster to create new AI models and solve many different types of use cases. AI models that took eight hours to train just two years ago can now be trained in just 80 seconds.
When these AI models are deployed in applications like conversational AI, video understanding, and recommendations, the NVIDIA platform enables real-time processing and serves them efficiently to billions of users.
What is the Future of Data Centers?
With 175 zettabytes of data expected by 2025, application workloads in data centers are increasingly shifting to data analysis, machine learning, and artificial intelligence. This requires the data centers of the future to be increasingly accelerated and power-efficient.
Data centers will also expand to include edge data centers, which will process data closer to the source, along with centralized Cloud and enterprise data centers. Processing data at the edge will provide better response times and optimized network bandwidth usage.
What is your opinion on “Weaponization of AI/Machine Learning”? How do you promote your ideas?
We believe in the power of AI to improve the quality of life for millions of people around the world. Our goal is to build the best AI platform so that researchers and companies can make the world a better place. See our blog at NVIDIA.com for hundreds of stories about companies and researchers using AI for good.
AI is making products and services better, and it will have an even bigger impact in the years to come. Autonomous vehicles will save thousands of lives and lower transportation costs, and AI will accelerate drug discovery and help us cure diseases.
The Crystal Gaze
What Cloud Analytics and SaaS start-ups and labs are you keenly following?
NVIDIA has a program called Inception, which nurtures exceptional start-ups that are revolutionizing industries with advances in AI and Data Science. At GTC Silicon Valley earlier this year, we showcased companies such as Kinetica and Vyasa Analytics that are doing groundbreaking work.
What technologies within AI/NLP and Cloud Analytics are you interested in?
I’m very interested in the technologies that are making conversational AI a reality. As humans, we express ourselves through language. I believe computers will soon be smart enough to understand the full meaning of our spoken words and help us appropriately. This will dramatically boost our productivity and provide new ways for brands to engage with their customers.
Another interesting area is sensor fusion. Combining sensory data from different sources and using that information to better understand context and take action is critical to addressing many of our grand challenges. Sensor fusion of data from cameras, radar, and lidar is helping make autonomous vehicles a reality. In a similar manner, sensor fusion is helping retailers reimagine how they serve their customers, improving hospital care for patients, and enhancing our interactions with smart devices.
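To make the idea concrete, a classical way to fuse two independent sensor estimates of the same quantity is inverse-variance weighting, where the more certain sensor gets the larger say. This is a minimal sketch with made-up numbers; the `fuse` helper and the camera/radar readings below are hypothetical and purely illustrative, not any particular product's implementation:

```python
def fuse(estimates):
    """Fuse independent (value, variance) estimates of one quantity
    using inverse-variance weighting: lower variance means more weight."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    # The fused variance 1/total is smaller than any single sensor's variance.
    return value, 1.0 / total

# Hypothetical range readings for the same obstacle:
# camera estimates 10.2 m (variance 0.25), radar 9.8 m (variance 0.04).
value, var = fuse([(10.2, 0.25), (9.8, 0.04)])
# value lands between the two readings, pulled toward the more
# certain radar measurement; var is roughly 1/29.
```

The fused variance being smaller than either input's is the formal sense in which combining sensors yields a better estimate of the world than any single sensor alone.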
Finally, the computational complexity of the largest AI training runs has exploded by more than 300,000x in the last six years and shows no signs of stopping. The ability to train bigger and bigger AI models on more data is becoming critical to addressing harder use cases with AI. Progress in several technological areas is needed to enable this extreme scaling of performance, including fast, low-latency interconnects between processors both within a node and across multiple nodes, direct memory access between processors working on the same problem, and software algorithms to make it all work.
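One widely used example of the kind of software algorithm that makes multi-processor training scale is ring all-reduce, in which each worker exchanges gradient chunks only with its ring neighbor, yet every worker ends up holding the global sum. The toy single-process simulation below is an illustrative sketch, not NVIDIA's implementation; the function name and the sample gradients are made up:

```python
def ring_allreduce(vectors):
    """Toy single-process simulation of ring all-reduce.

    Each of the n workers holds a vector of n values (one "chunk" per
    worker). After a reduce-scatter phase and an all-gather phase, every
    worker holds the elementwise sum, while each worker only ever
    communicates with its neighbor on the ring.
    """
    n = len(vectors)
    data = [list(v) for v in vectors]  # data[w][c]: worker w's copy of chunk c

    # Reduce-scatter: in step s, worker w passes chunk (w - s) % n to its
    # neighbor, which accumulates it. After n-1 steps, each chunk's full
    # sum lives on exactly one worker.
    for s in range(n - 1):
        for w in range(n):
            c = (w - s) % n
            data[(w + 1) % n][c] += data[w][c]

    # All-gather: circulate the completed chunks around the ring so that
    # every worker ends up with every fully reduced chunk.
    for s in range(n - 1):
        for w in range(n):
            c = (w + 1 - s) % n
            data[(w + 1) % n][c] = data[w][c]
    return data

# Three simulated workers, each with a 3-element gradient vector.
summed = ring_allreduce([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
# Every worker now holds [12.0, 15.0, 18.0].
```

Each worker transmits only 2*(n-1) chunks in total, regardless of how many workers participate, which is why ring-style collectives scale so much better than having every worker ship its full gradient vector to a central node.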
As a tech leader, which industries do you think will be the fastest to adopt Analytics and AI/ML with smooth efficiency? What are the emerging markets for these technologies?
AI is a new way to write software, one in which software writes software. This will transform every industry.
In the case of autonomous driving, these vehicles will transform the way we live, work and play to create safer and more efficient roads. To realize these benefits, the car of the future will require a massive amount of computational horsepower. NVIDIA’s platform delivers industry-leading performance to make autonomous driving a reality.
For healthcare, developers can harness AI to transform healthcare workloads, improving processing speed and image quality with the goal of detecting and diagnosing diseases more effectively.
In the area of retail, organizations can use AI across every aspect of the industry, whether to optimize the supply chain, use existing data to increase conversion, or customize shopping experiences with predictive modeling and personalization.
What’s your smartest work-related shortcut or productivity hack?
For the past few months, I’ve been focused on “deep work.” I’ve limited my notifications on mobile as well as on my laptop to try and concentrate on a single task for long stretches of time like 1-2 hours and sometimes even more. This has significantly boosted my productivity.
In general, our brains can only focus on one task at a time. When we try to multitask, we basically context-switch from one task to another, which reduces both our focus and our productivity.
Tag the one person in the industry whose answers to these questions you would love to read:
I would tag Sumit Gupta, VP of products at IBM. He has been a mentor to me and a key thought leader in the industry. I would love to have him share his thoughts around these topics.
Thank you, Paresh! That was fun, and we hope to see you back on AiThority soon.
Product management and marketing leader in deep learning, machine learning, HPC, cloud computing, and enterprise collaboration. Led several products to industry-leading positions and managed product roadmaps, global product launches, sales enablement initiatives, and strategic initiatives. Passionate builder of high-performance product management and marketing teams through mentoring and coaching. Specialties: Product Management, Product Marketing, Business Strategy, Mentoring, and Sales Enablement. Domains: Artificial Intelligence, Deep Learning, Machine Learning, High-Performance Computing, Cloud Computing, Enterprise SaaS Applications, Online Marketing, Analytics, Enterprise Collaboration.
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the next era of computing with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”