Automatic Machine Learning at Scale
H2O.ai, the open source leader in AI and machine learning (ML), unveiled the first phase of its strategic collaboration project with Intel, codenamed Project Blue Danube, focused on accelerating H2O.ai technologies on Intel platforms, including the new 2nd Gen Intel® Xeon® Scalable processors. The combination gives enterprise organizations embarking on the AI journey a highly scalable, cost-effective and faster path to insights, pairing H2O.ai’s accelerated machine learning technology with Intel’s most advanced processor and memory architecture to help them gain a competitive edge.
“AI is eating the cloud after eating software and hardware. AI is powering the most significant value creation in the datacenter and on the cloud. Open source H2O AI in partnership with Intel democratizes AI on large data sets previously impossible and scoring on the edge ubiquitously. Making it faster, cheaper and easier to train, develop and deploy AI for enterprises,” said Sri Ambati, CEO and Founder of H2O.ai. “AI is an ecosystem play and the tight-knit co-development in hardware and open software from Intel and H2O.ai nurtures a forest of innovation and not just raising a tree.”
“Advanced machine learning software like H2O.ai, powered by 2nd Gen Intel Xeon Scalable processors and Intel Optane DC persistent memory, will help companies of all sizes achieve an AI transformation,” said Lisa Davis, VP and GM of Intel’s Digital Transformation & Scale Solutions, Enterprise & Government, Data Center Group. “These AI-ready platforms can enable rapid insights for every business, and we’re happy to collaborate with software innovators like H2O.ai to achieve this goal.”
H2O.ai and Intel: Democratizing AI Together
H2O.ai and Intel are collaborating to accelerate machine learning algorithms and libraries on Intel platforms. The new 2nd Gen Intel Xeon Scalable processors and Intel® Optane™ DC persistent memory were developed to deliver agility, scale and security for AI workloads. The first phase, Project Blue Danube V1.0, unveiled today, enables the world’s leading enterprises to create highly scalable, high-performance, more secure and accelerated data science workflows on the world’s most pervasive platform. The results H2O achieved on the latest Intel platform, compared with traditional systems, include:
- Scale up and out: Capable of handling 4X the data set size of traditional memory systems
- Faster Time to Insights: H2O with optimized XGBoost delivers 4.5X improvement in training time
- Server consolidation and efficiency: Run 100GB on a single machine (Intel® Xeon® Platinum 8200 processor plus Intel® Optane™ DC persistent memory) versus a 4-node cluster (Intel® Xeon® processor E5-2600)
Accelerating Data Science Workflows
H2O AutoML is the leading open source, scalable and distributed in-memory AI and machine learning platform. H2O AutoML supports the most widely used statistical and machine learning algorithms including gradient boosted machines, generalized linear models, deep learning and more. The H2O platform is extremely popular in both the R and Python communities and is used by over 18,000 companies and hundreds of thousands of data scientists.
In addition, H2O.ai is a member of the Intel AI Builders Program, an ecosystem of industry leading independent software vendors (ISVs), system integrators (SIs), original equipment manufacturers (OEMs), and enterprise end users who have a shared mission to accelerate the adoption of artificial intelligence across Intel platforms.
New Intel Architecture is Continually Enhanced for AI Frameworks
The world’s data centers run on Intel platforms for their outstanding performance, security, scalable storage, and memory. Since their launch in July 2017, Intel® Xeon® Scalable processors have been aggressively and continually enhanced to run demanding AI applications and frameworks alongside the traditional data center and cloud applications at which they already excel, allowing companies to use the same enterprise systems for machine learning, deep learning and traditional enterprise workloads.