How Can Startups Make Machine Learning Models Production-Ready?
Today, every technology startup needs to embrace AI and machine learning to stay relevant in its business. Machine learning (ML), if implemented well, can have a direct impact on a company’s ability to succeed and raise the next round of funding. However, the path to implementing ML solutions comes with some specific hurdles for startups.
Let’s discuss the top considerations for getting ML models production-ready and the best approaches for a startup.
Availability of Data
An ML model is only as good as the data used to train it. For most startups, the biggest challenge is obtaining enough data related to the business problem they are trying to address in order to train the model sufficiently. Generic datasets are not useful when it comes to solving the unique and often complex problems that startups typically focus on.
One approach is to start with a simple ML model that can work with sparse data, refine its output with rule-based extraction techniques, and roll the model out to customers as a limited subset of the feature. Then improve the model by setting up a pipeline for collecting labeled data. Techniques such as data fingerprinting using autoencoders can also be used to develop the ML model incrementally.
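The pattern above can be sketched in a few lines. This is a hypothetical example, not a production design: the keyword scorer stands in for a simple model trained on sparse data, and the `urgent` rule stands in for a business-specific extraction rule layered on top of it.

```python
# Minimal sketch: simple model + rule-based refinement for the sparse-data phase.

def ml_score(text):
    """Stand-in for a simple ML model trained on sparse data.

    A naive keyword score plays the model's role here; in practice this
    might be a logistic regression over bag-of-words features.
    """
    keywords = {"refund", "cancel", "complaint"}
    words = set(text.lower().split())
    return len(words & keywords) / max(len(words), 1)

def rule_based_refine(text, score):
    """Rule-based extraction layer that corrects obvious model misses."""
    if "urgent" in text.lower():  # hard business rule overrides the model
        return 1.0
    return score

def classify_ticket(text, threshold=0.2):
    """Roll out the combined model + rules as a narrow customer-facing feature."""
    score = rule_based_refine(text, ml_score(text))
    return "escalate" if score >= threshold else "routine"
```

Every prediction served this way can also be logged alongside the eventual human decision, which becomes the labeled data that feeds the next model iteration.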
Choice of Model
With the surging popularity of neural networks and their success in face recognition and other object recognition problems, many startups try to apply neural networks to their business problems. But deep learning networks require even larger amounts of training data than traditional ML models, which can stall a project indefinitely.
A good starting point is selecting ML models based on regression, decision trees, and Support Vector Machine (SVM). After acquiring a large amount of training data, other deep learning models can be considered.
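As a quick illustration of how cheap it is to benchmark these simpler model families, the sketch below compares them on a small synthetic dataset (scikit-learn is assumed; the dataset and hyperparameters are toy choices, not recommendations).

```python
# Compare the simpler model families on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Small synthetic dataset standing in for a startup's limited training data.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("SVM", SVC(kernel="rbf")),
]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

All three train in milliseconds on data this size, so a startup can establish a baseline in an afternoon before deciding whether deep learning is worth the data-collection effort.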
AI explainability is a key requirement for startups, especially in the security, FinTech, and healthcare domains, due to legal and regulatory requirements. If the AI is making decisions, the way those decisions are made needs to be transparent and understandable by humans. If not, the model cannot be ‘trusted’ to perform accurately and without bias. Thus, the explainability of the model becomes relevant when the algorithm is making decisions that carry an element of risk, or when it is trying to identify the root cause of a problem.
Consider a loan processing system that uses machine learning to approve or reject a loan application. A simple decision tree model might start by looking at how complete the application is, then the credit rating of the applicant, followed by the number of applicants, and so on to reach a decision. When a similar problem is solved using a deep neural network, the layers do not necessarily map to human-recognizable features, which makes the decision hard to explain.
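The loan example can be made concrete with a shallow decision tree, whose full decision logic can be printed in human-readable form. The feature names, training rows, and approval labels below are invented for the sketch.

```python
# Toy loan-approval decision tree whose rules can be printed for auditors.
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: application_completeness (0-1), credit_rating, number of applicants.
X = [
    [1.0, 780, 1], [0.9, 720, 2], [0.3, 640, 1], [1.0, 500, 1],
    [0.8, 690, 1], [0.2, 710, 3], [1.0, 810, 2], [0.5, 560, 1],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = approve, 0 = reject

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the tree as nested if/else rules a human can audit.
print(export_text(
    tree,
    feature_names=["application_completeness", "credit_rating", "n_applicants"],
))
```

The printed rules show exactly which thresholds drive each approval or rejection, which is precisely the transparency that a deep network's hidden layers cannot offer out of the box.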
The recommended approach is to avoid starting with deep learning or neural network models, which are more complex and almost impossible to explain.
Why is a data pipeline necessary when there is no data?
Most people assume that once a model goes into production, the job is done. In reality, it’s just the beginning.
In order to measure the performance of the model in production and iteratively improve upon it, a data pipeline is required to collect data, label data, retrain the model, and validate before deployment.
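The collect/label/retrain/validate loop described above can be outlined as a small function. This is a hedged sketch: the `train` and `evaluate` callables and the accuracy gate are placeholders for whatever components a given startup uses.

```python
# Skeleton of a retrain-and-validate pipeline; all components are placeholders.

def run_pipeline(raw_events, current_model, train, evaluate, min_accuracy=0.9):
    """Collect -> label -> retrain -> validate before deployment."""
    # Keep only events that humans have labeled so far.
    labeled = [(x, label) for x, label in raw_events if label is not None]

    candidate = train(labeled)        # retrain on the fresh labels
    accuracy = evaluate(candidate)    # validate on a held-out set

    if accuracy >= min_accuracy:
        return candidate              # promote the candidate to production
    return current_model              # otherwise keep serving the old model
```

The key design point is the final gate: a retrained model only replaces the production model when it clears a validation threshold, so a bad batch of labels cannot silently degrade the live system.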
Model performance in the test and production environments can vary based on the distribution and size of the data. Often, the choice of algorithm also depends on the scale of execution. One might need to compromise on model accuracy and choose a simpler algorithm to ensure that the model scales and that costs stay under control.
The Right Expertise in Building Machine Learning Models
Does every startup need a data scientist, or can ML services such as AutoML do the job? Unfortunately, no. What these platforms provide is a set of tools for data analysis and model building. Startups still need a seasoned data scientist who can discover features, select the model, and choose the right validation method.
Every startup needs a data scientist with a deep mathematical background, problem-solving skills, and engineering expertise. Most data scientists and machine learning experts hold postgraduate degrees in mathematics and excel at building complex models, but aren’t necessarily good at implementing an incremental engineering solution. Individuals who excel at both engineering and mathematics are rare and expensive to hire.
Pairing up a data scientist with a product engineer works very well. While the product engineer helps with the data pipeline and the extraction rules, the data scientist focuses on feature engineering, model development and validation.
Take an Iterative ML Approach
A machine learning solution can take considerable time to build, and it might require a year or two to reach acceptable accuracy. It also requires IT infrastructure to store and process the data, which can turn out to be an expensive pursuit.
Startups cannot afford to wait a year to find out whether the problem can be solved using machine learning. Therefore, it is imperative to know the efficacy of the solution as early as possible. In other words, fail fast. Similar to the lean methodology for product development, startups need to adopt an iterative approach to ML model development: starting with simple models, setting up a data pipeline to collect labeled data, and moving toward more complex algorithms.