Artificial Intelligence | News | Insights | AiThority
Releases New Streaming Endpoints With One-Click Deployment for Real-time Machine Learning Applications

The data science platform, which simplifies model management and brings advanced MLOps to the industry, today announced its streaming endpoints solution, a new capability for deploying ML models to production with Apache Kafka in one click. It is the first ML platform to enable one-click streaming endpoint deployment for large-scale, real-time predictions with high throughput and low latency.

85% of machine learning models never reach production due to the technical complexity of deploying a model in the right environment and architecture. Models can be deployed in several ways: batch deployment for offline inference, or a web service for more real-time scenarios. These two approaches cover most ML use cases, but both fall short in an enterprise setting that needs to scale and stream millions of predictions in real time. Enterprises require fast, scalable predictions to execute critical, time-sensitive business decisions.

The new capability deploys ML models to production with a streaming producer/consumer architecture that integrates natively with Apache Kafka and AWS Kinesis. In just one click, data scientists and engineers can deploy any kind of model as an endpoint that receives data as a stream and outputs predictions as a stream.
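A streaming endpoint of this kind can be pictured as a Kafka consumer/producer loop: the endpoint consumes feature records from an input topic, scores them with the model, and publishes predictions to an output topic. The sketch below is illustrative only; the topic names, the toy `predict` stand-in, and the kafka-python wiring are assumptions for this example, not the platform's actual API.

```python
import json

def predict(features):
    """Toy model stand-in: a real deployment would load a trained model.
    Scores a feature dict and thresholds the sum at 1.0."""
    score = sum(features.values())
    return {"score": score, "label": int(score > 1.0)}

def handle_record(raw_value):
    """Deserialize one record value, score it, and serialize the prediction."""
    features = json.loads(raw_value)
    return json.dumps(predict(features)).encode("utf-8")

def run_stream(bootstrap="localhost:9092"):
    """Attach the handler to real topics. Requires the kafka-python package
    and a running Kafka broker; defined here but not invoked."""
    from kafka import KafkaConsumer, KafkaProducer
    consumer = KafkaConsumer("features", bootstrap_servers=bootstrap)
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    for record in consumer:
        # Each consumed feature record yields one produced prediction record.
        producer.send("predictions", handle_record(record.value))
```

Because the Kafka wiring is a thin loop around `handle_record`, the scoring logic can be tested without a broker.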

Deployed models are tracked with advanced model management and monitoring, including alerts, retraining, A/B testing, canary rollouts, autoscaling and more.
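Production monitoring of the kind described often reduces to threshold checks over a rolling window of serving metrics, with an alert firing when a metric breaches its limit. The sketch below is a hypothetical illustration of such checks; the metric names and thresholds are invented for this example and are not the platform's monitoring API.

```python
from statistics import quantiles

def p95(latencies_ms):
    """95th-percentile latency over a window of observations."""
    # quantiles(..., n=100) returns 99 cut points; index 94 is the 95th.
    return quantiles(latencies_ms, n=100)[94]

def check_alerts(latencies_ms, error_count, total_count,
                 p95_limit_ms=50.0, error_rate_limit=0.01):
    """Return the list of alert names that fire for this window."""
    alerts = []
    if p95(latencies_ms) > p95_limit_ms:
        alerts.append("latency_p95")
    if total_count and error_count / total_count > error_rate_limit:
        alerts.append("error_rate")
    return alerts
```

In practice, a fired alert would feed the retraining or rollback automation mentioned above rather than just being returned to the caller.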

This new capability allows engineers to serve and predict on millions of samples in a real-time environment. The architecture is ideal for time-sensitive or event-based predictions, recommender systems, and large-scale applications that require high throughput, low latency, and fault-tolerant environments.


“Playtika has 10 million daily active users (DAU), 10 billion daily events, and over 9TB of daily processed data for our online games. To provide our players with a personalized experience, we need to ensure our models run at peak performance at all times,” says Avi Gabay, Director of Architecture at Playtika. “We were able to increase our model throughput by up to 50%, and by 30% on average, compared to RESTful APIs. The platform also allows us to monitor our models in production, set alerts, and retrain with highly automated ML pipelines.”

The new release extends the company's market footprint and builds on its previously announced NVIDIA DGX-Ready partnership and Red Hat unified control plane.

