Tecton Releases Notebook-Driven Development, Providing the Fastest Path for Data Teams to Build Batch, Streaming, and Real-Time Features for Machine Learning and Deploy Them to Production Quickly and Reliably
Tecton, the leading machine learning (ML) feature platform company, announced version 0.6 of its flagship feature platform. The release introduces new capabilities that accelerate the process of building production-ready features and expands support for streaming data.
Tecton was founded by the creators of Uber’s Michelangelo platform to make world-class ML accessible to every company. Tecton is a fully managed ML feature platform that orchestrates the complete lifecycle of features, from transformation to online serving, with enterprise-grade SLAs. The platform enables ML engineers and data scientists to automate the transformation of raw data, generate training datasets, and serve features for online inference at scale. Tecton addresses the many data and engineering hurdles that keep development times painfully high and prevent predictive applications from ever reaching production. Customers range from Fortune 500 companies across all major verticals to tech-forward innovators like Convoy, HelloFresh, Plaid, and Tide.
“We’re excited to be releasing the latest version of Tecton,” said Mike Del Balso, co-founder and CEO of Tecton. “This latest release makes it simpler than ever to build high quality features to power real-time ML applications. Tecton now integrates more deeply into the data science workflow by allowing users to develop and test production-ready features directly in notebooks using Tecton’s feature engineering framework. We’ve also improved our streaming capabilities to process and serve very fresh data, ultimately resulting in higher model accuracy and better business outcomes for our users.”
Tecton 0.6 introduces notebook-driven development for building ML features and generating training datasets. Data scientists and ML engineers can now leverage Tecton’s feature engineering framework in their core modeling workflow without ever leaving their notebook. When it comes time to productionize, feature definitions can be applied to a repository and pushed to production in a matter of minutes. This approach offers speed and flexibility in feature development while preserving the best practices of a GitOps workflow, including “features-as-code,” version control, and CI/CD.
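The workflow described above — iterate on a feature definition interactively in a notebook, then promote the same definition to a version-controlled repository for production — can be illustrated with a small, self-contained sketch. The decorator, registry, and feature names below are hypothetical stand-ins for illustration only, not Tecton’s actual API.

```python
# Conceptual sketch of notebook-driven, "features-as-code" development.
# The registry and feature_view decorator are hypothetical, not Tecton's API.

registry = {}  # stands in for a version-controlled feature repository


def feature_view(name):
    """Register a feature transformation under a stable name."""
    def wrap(fn):
        registry[name] = fn  # "applying" the definition to the repo
        return fn
    return wrap


@feature_view("user_transaction_count")
def user_transaction_count(transactions):
    """Count each user's transactions (windowing logic elided)."""
    counts = {}
    for tx in transactions:
        counts[tx["user_id"]] = counts.get(tx["user_id"], 0) + 1
    return counts


# In the notebook: iterate on the feature against sample data...
sample = [{"user_id": "u1"}, {"user_id": "u1"}, {"user_id": "u2"}]
print(user_transaction_count(sample))  # {'u1': 2, 'u2': 1}

# ...then the exact same definition is looked up and served in production.
print(registry["user_transaction_count"](sample))
```

The point of the pattern is that the artifact tested in the notebook and the artifact deployed to production are one and the same definition, which is what makes version control and CI/CD over feature code practical.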
Tecton is also introducing its Stream Ingest API, which provides more flexibility in managing streaming features. Teams can now choose either to automate their streaming pipelines with Tecton or to transform their streaming data outside of Tecton using the stream processing engine of their choice. Streaming data processed outside of Tecton can be ingested directly into the feature platform, allowing teams to standardize on a single platform to store and serve all their feature data.
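The ingest pattern described here — compute features with an external stream processor, then push the results into one shared store that serves them online — can be sketched as follows. The `FeatureStore` class and its methods are illustrative stand-ins, not the actual Stream Ingest API.

```python
# Minimal sketch of the stream-ingest pattern: feature rows computed by an
# external stream engine are pushed into a single shared online store.
# FeatureStore and its methods are hypothetical, not Tecton's API.

class FeatureStore:
    def __init__(self):
        # (feature_view, entity_key) -> feature values
        self._online = {}

    def ingest(self, feature_view, records):
        """Accept pre-transformed feature rows from any stream engine."""
        for rec in records:
            key = (feature_view, rec["user_id"])
            self._online[key] = {k: v for k, v in rec.items()
                                 if k != "user_id"}

    def get_online_features(self, feature_view, user_id):
        """Serve the latest feature values for online inference."""
        return self._online.get((feature_view, user_id))


store = FeatureStore()

# Rows already transformed by an external engine (e.g. Flink or Spark)...
transformed = [{"user_id": "u1", "txn_amount_1h": 42.5}]

# ...are ingested directly, so one platform stores and serves everything.
store.ingest("user_spend", transformed)
print(store.get_online_features("user_spend", "u1"))
```

The benefit is operational: teams keep whatever stream processor they already run, while storage, serving, and monitoring of feature data are consolidated in one place.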
Streaming features need to be processed as fast as possible to provide very fresh data for more accurate predictions. Tecton 0.6 introduces a new continuous mode for non-aggregate streaming features, allowing feature data to be processed and updated within seconds of arriving from the stream. Use cases that rely on low-latency features, like fraud detection or real-time pricing, can now make more accurate predictions.
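The freshness gain from continuous mode comes from writing each event to the online store as it arrives, rather than waiting for a micro-batch interval to flush. The sketch below contrasts the two update strategies; the function names and the batch size are illustrative assumptions, not Tecton’s API.

```python
# Contrast of micro-batch vs. continuous updates for a non-aggregate
# streaming feature. All names here are illustrative, not Tecton's API.

def micro_batch_update(store, events, batch_size=100):
    """Values only become visible once a whole batch is flushed,
    so freshness is bounded by the batch interval."""
    for i in range(0, len(events), batch_size):
        for e in events[i:i + batch_size]:
            store[e["user_id"]] = e["last_txn_amount"]

def continuous_update(store, event):
    """Each event updates the online store immediately,
    giving seconds-level freshness for low-latency use cases."""
    store[event["user_id"]] = event["last_txn_amount"]

store = {}
continuous_update(store, {"user_id": "u1", "last_txn_amount": 19.99})
print(store["u1"])  # 19.99 is servable as soon as the event arrives
```

For fraud detection or real-time pricing, this difference matters because the model scores a transaction against feature values that reflect activity from seconds ago rather than from the last completed batch.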