Hazelcast Speeds Time-to-Market for Operationalization of Machine Learning in Enterprise Applications
Hazelcast Jet adds high-scale, low-latency machine learning inference execution and more
Hazelcast, provider of the leading in-memory computing platform, announced the easiest way to deploy machine learning (ML) models into ultra-low-latency production with its support for running native Python- or Java-based models at real-time speeds. The latest release of the event stream processing engine, Hazelcast Jet, now helps enterprises unlock profit potential faster by accelerating and simplifying ML and artificial intelligence (AI) deployments for mission-critical applications.
Recent research[1] shows that 33% of IT decision-makers see ML and AI as the greatest opportunity to unlock profits; however, 86% of organizations struggle to manage these technological advances. From its recent integration as an Apache Beam Runner to the new features announced today, Hazelcast Jet continues to simplify how enterprises deploy ultra-fast stream processing to support time-sensitive applications and operations in ML, edge computing, and more.
Kelly Herrell, CEO of Hazelcast, said:
“Last year we simplified streaming by delivering the industry’s only all-in-one processing system, eliminating the need for complex IT designs built from many independent components. Now we’re moving the industry forward again by simplifying how enterprises can deploy ultra-low latency machine learning models within that efficient system.”
Herrell added, “There are millions of dollars to be won when microseconds count, and Hazelcast Jet makes that a reality faster than any alternative, especially for applications leveraging artificial intelligence and machine learning.”
Fast-moving enterprises, such as financial services organizations, often rely on resource-heavy frameworks to create ML and AI applications that analyze transactions, process information and serve customers. These organizations are burdened with infrastructural complexity that not only inhibits their ability to get value from ML and AI but also introduces additional latency throughout the application. With its new capabilities, Hazelcast Jet significantly reduces time-to-deployment through new inference runners for native Python- and Java-based models. The new Jet release also includes expanded database support and other updates focused on data integrity.
“With machine learning inferencing in Hazelcast Jet, customers can take models from their data scientists unchanged and deploy within a streaming pipeline,” said Greg Luck, CTO of Hazelcast.
Luck added, “This approach completely eliminates the impedance mismatch between the data scientist and data engineer since Hazelcast Jet can handle the data ingestion, transformation, scoring and post-processing.”
A High-Performance Platform for Real-Time Machine Learning Inference
Since Python is the leading programming language used by data scientists to develop ML models, businesses need a fast and easy way to deploy Python models into production. However, companies often struggle as they lack the optimal infrastructure to efficiently operationalize those models.
Enterprises must either convert the models to another language to run within their infrastructure or bolt on a separate subsystem, both of which result in low performance. To address that challenge, Hazelcast Jet now features an “inference runner” that allows models to be natively plugged into the stream processing pipeline.
In a major leap forward for the industry and customers, Jet allows developers to deploy Python models within a stream processing architecture, enabling enterprises to feed real-time streaming data directly into the model. This stands in stark contrast to other frameworks, which call out to external services via REST; that approach not only adds significant round-trip network latency but also imposes administrative overhead in maintaining the external services, especially for ensuring business continuity. The challenge compounds as more and more ML models are operationalized.
In Jet, Python models run local to the processing jobs, eliminating unnecessary latency and leveraging Jet’s built-in resilience to support mission-critical deployments.
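Conceptually, wiring a Python model into a Jet pipeline looks something like the following Java sketch, which assumes Jet 4.x with the hazelcast-jet-python module on the classpath. The model directory (/opt/models/fraud) and handler module name (score) are hypothetical placeholders, exact method signatures may vary by Jet version, and running it requires a Jet cluster, so treat this as an illustrative fragment rather than a definitive implementation.

```java
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;
import com.hazelcast.jet.python.PythonServiceConfig;
import com.hazelcast.jet.python.PythonTransforms;

public class PythonInferencePipeline {

    public static Pipeline build() {
        Pipeline p = Pipeline.create();
        p.readFrom(Sources.<String>list("inputRecords"))      // any Jet source works here
         .apply(PythonTransforms.mapUsingPythonBatch(         // hands batches of items to Python
                 new PythonServiceConfig()
                         .setBaseDir("/opt/models/fraud")     // hypothetical model directory
                         .setHandlerModule("score")))         // score.py exposes the entry point
         .writeTo(Sinks.list("scoredRecords"));               // collect scored output
        return p;
    }
}
```

The `score.py` module in the base directory would expose the batch entry point the Python runner calls (a function taking a list of input strings and returning a list of output strings), which is how a data scientist's model can be dropped in unchanged apart from that thin wrapper.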
This major advancement adds significantly to the benefits of the industry-leading, real-time performance from Hazelcast. ML inference jobs can be scaled up to the number of cores per Jet node and then scaled out linearly by adding more Jet nodes to the job. Combined with fault tolerance and security, Hazelcast Jet provides enterprises with a platform for executing high-speed, real-time ML deployments in production.
Expansion of Stream Processing Guarantees
Hazelcast Jet now incorporates new logic that runs a two-phase commit to ensure consistency across a broader set of data sources and sinks. This new logic expands the “exactly-once” guarantee by tracking reads and writes at the source and sink levels, ensuring no data is lost or processed twice when a failure or outage occurs. Customers can, for example, read data from a Java Message Service (JMS) topic, process the data and write it to an Apache Kafka topic with an “exactly-once” guarantee. This guarantee is critical in systems where lost or duplicate data can be costly, such as payment processing or e-commerce transaction systems.
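The JMS-to-Kafka example above might be expressed roughly as follows in Jet's Pipeline API. Broker addresses, topic names and serializer settings are hypothetical, exact connector signatures differ between Jet versions, and the fragment needs a Jet cluster plus the Kafka connector and a JMS provider to actually run; note that the exactly-once guarantee is set on the job configuration, not in the pipeline itself.

```java
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.config.ProcessingGuarantee;
import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sources;

import java.util.Properties;

public class ExactlyOncePipeline {

    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.readFrom(Sources.jmsTopic("payments",                       // JMS topic source
                () -> new org.apache.activemq.ActiveMQConnectionFactory(
                        "tcp://broker:61616")))                       // hypothetical broker
         .withoutTimestamps()
         .writeTo(KafkaSinks.kafka(kafkaProperties(), "payments-out", // Kafka topic sink
                 msg -> null,                                         // no record key
                 Object::toString));                                  // message body as value

        // Jet runs its two-phase commit across the JMS source and Kafka
        // sink when the job requests the exactly-once guarantee.
        JobConfig config = new JobConfig()
                .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
        // Jet.bootstrappedInstance().newJob(p, config);  // submit against a cluster
    }

    private static Properties kafkaProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```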
Change Data Capture Integration
To allow databases to act as streaming sources, Hazelcast Jet now includes a change data capture (CDC) integration with the open-source project Debezium. The CDC integration adds support for a number of popular databases including MySQL, PostgreSQL, MongoDB and SQL Server. Since CDC effectively creates a stream out of database updates, Hazelcast Jet is a natural fit to efficiently process the updates at high-speed for the applications that depend on the latest data.
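As a rough sketch, a MySQL CDC source built on the Debezium integration could look like this. The host, credentials, database name and builder method names are assumptions based on Jet's 4.x CDC module and may differ in a given release; a real deployment would also replace the logger sink with an application sink.

```java
import com.hazelcast.jet.cdc.ChangeRecord;
import com.hazelcast.jet.cdc.mysql.MySqlCdcSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.StreamSource;

public class CdcPipeline {

    public static Pipeline build() {
        // Debezium-based CDC source streaming row-level changes from MySQL
        StreamSource<ChangeRecord> source = MySqlCdcSources.mysql("inventory-cdc")
                .setDatabaseAddress("mysql.internal")   // hypothetical host
                .setDatabasePort(3306)
                .setDatabaseUser("debezium")
                .setDatabasePassword("dbz-password")
                .setClusterName("inventory")
                .setDatabaseWhitelist("inventory")      // capture only this database
                .build();

        Pipeline p = Pipeline.create();
        p.readFrom(source)
         .withNativeTimestamps(0)                       // use event time from the binlog
         .writeTo(Sinks.logger());                      // placeholder sink for illustration
        return p;
    }
}
```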
Hazelcast delivers the in-memory computing platform that empowers Global 2000 enterprises to achieve ultra-fast application performance – at any scale. Built for low-latency data processing, Hazelcast’s cloud-native in-memory data store and event stream processing software are trusted by leading companies such as JPMorgan Chase, Charter Communications, Ellie Mae, UBS, and National Australia Bank to accelerate data-centric applications.