The 5 Costs Hindering Enterprise AI in 2021
Enterprise AI is a booming trend. Organizations that efficiently leverage data science and machine learning (ML) techniques are more likely to improve business operations and processes as a whole. However, many organizations still lack the basic principles needed to create value from AI at scale and, as a result, often end up with AI that generates rising costs and falling revenues. Organizations are looking to remedy this: a study by MarketMuse revealed that 80% of IT and corporate business leaders want to learn more about the cost of implementing AI technology in an enterprise.
When organizations begin to adopt Enterprise AI, the most common approach is to start with a finite list of select use cases. A study by Accenture found that companies that begin with this multi-use-case approach report nearly three times the return on AI investments compared to companies pursuing siloed proofs of concept. Naturally, when organizations find success with their first list of use cases, they repeat the process and add more. Even by the tenth or twentieth AI use case, this usually still has a positive impact on the balance sheet.
However, there is a point where the economic value of Enterprise AI decreases — when the marginal value of the next use case falls below its marginal cost. At that point, scaling further use cases becomes economically unviable, or can only be done at a detrimental loss. Moreover, it’s a mistake to think that an organization can easily generalize Enterprise AI everywhere within the business by simply taking on more AI projects throughout the company.
Each implementation requires a strategic, well-considered approach – one size doesn’t fit all. So, what are the costs, and how can an organization best manage them?
Data Cleaning and Preparation
The most difficult and time-consuming part of the data process within an organization tends to be data cleaning and wrangling. Indeed, data scientists spend about 80% of their time finding, cleaning, and preparing data. This makes data preparation a huge burden in both cost and employee time, especially when organizations repeat it for every single use case or AI project.
To avoid repeating this work across the wider organization, data scientists need to prioritize data prep efficiency and reuse. This can be achieved by implementing systems that allow data to be found, cleaned, and prepared only once. This will simultaneously reduce workload and overall costs.
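One minimal way to sketch "find, clean, and prepare only once" is to cache each cleaned dataset so later use cases load the prepared copy instead of redoing the work. The function and file names below are hypothetical illustrations, not part of any specific product:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical sketch: prepare each dataset once, cache the result on disk,
# and let later use cases load the cached copy instead of re-cleaning.
CACHE_DIR = Path("prepared_data")

def clean(rows):
    """Example cleaning step: drop rows with missing values, normalize keys."""
    return [
        {k.strip().lower(): v for k, v in row.items()}
        for row in rows
        if all(v is not None for v in row.values())
    ]

def prepare_once(name, raw_rows):
    """Clean raw_rows only if no cached version exists for this exact input."""
    CACHE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(
        json.dumps(raw_rows, sort_keys=True).encode()
    ).hexdigest()[:12]
    cache_file = CACHE_DIR / f"{name}-{digest}.json"
    if cache_file.exists():  # reuse: cleaned once, read many times
        return json.loads(cache_file.read_text())
    cleaned = clean(raw_rows)
    cache_file.write_text(json.dumps(cleaned))
    return cleaned
```

In practice this role is played by a shared feature store or curated data layer, but the principle is the same: the expensive cleaning step runs once per dataset, not once per project.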
Operationalization and Pushing to Production
With multiple workflows underway during operationalization, the first version of an ML model can take months to move from development into production. Packaging, release, and operationalization are complex, and without a consistent way to do them, they become extremely time-consuming.
This leads to a huge cost, not only in staff hours but also in lost revenue for the amount of time the ML model is not in production and able to benefit the business.
Organizations need to invest in establishing consistent processes to manage the packaging of code, release, and operationalization. By incorporating reuse from design to production, they can scale without the need to recode models and pipelines from scratch.
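A minimal sketch of such a consistent release process is to make every model ship as the same standard package: a serialized artifact plus a manifest describing it. The names and file layout here are assumptions for illustration only:

```python
import json
import pickle
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch: every model is released as the same two-file package,
# a pickled artifact plus a JSON manifest, so the release step never varies.
def package_model(model, name, version, out_dir="releases"):
    """Write a uniform release package and return its directory."""
    release = Path(out_dir) / f"{name}-{version}"
    release.mkdir(parents=True, exist_ok=True)
    with open(release / "model.pkl", "wb") as f:
        pickle.dump(model, f)
    manifest = {
        "name": name,
        "version": version,
        "packaged_at": datetime.now(timezone.utc).isoformat(),
    }
    (release / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return release

def load_model(release_dir):
    """Load any model the same way, regardless of who built it."""
    release = Path(release_dir)
    manifest = json.loads((release / "manifest.json").read_text())
    with open(release / "model.pkl", "rb") as f:
        return pickle.load(f), manifest
```

Because every model follows the same packaging contract, deployment tooling can be written once and reused, rather than rebuilt from scratch for each new pipeline.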
Data Scientist Cost and Retention
By nature, data scientists are driven by efficiency, which means they don’t like to do things twice if they don’t need to. Therefore, if they are spending too much time preparing and cleaning data or partaking in repetitive work instead of problem-solving, they tend to become dissatisfied and, in turn, the company will spend money dealing with constant turnover.
Here, reducing costs is a matter of proper tooling and providing the necessary resources for staff to capitalize on lessons learned from past projects and reuse work.
Model Maintenance
Data is constantly changing, which causes models to drift. At best, the model becomes less effective; at worst, it can actively harm the organization. On top of that, the more use cases an organization takes on, the bigger an issue maintenance becomes, driving costs even further.
MLOps can control the cost of maintenance by turning it from a one-off task handled separately for each model (usually by the original data scientist who worked on the project) into a systematized, centralized task. Part of maintenance is also ensuring easy reuse across the organization, so that people can readily access information, including data transformations and models.
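A centralized maintenance task like this often starts with automated drift checks. The sketch below is a deliberately simple illustration (not a production monitoring system): a periodic job compares live feature values against the training-time baseline and flags drift when the mean shifts too far, measured in baseline standard deviations.

```python
import statistics

# Hypothetical sketch: flag drift when a feature's live mean moves more than
# `threshold` baseline standard deviations away from its training-time mean.
def drift_score(baseline, live):
    """Shift in means, measured in baseline standard deviations."""
    base_mean = statistics.fmean(baseline)
    base_std = statistics.pstdev(baseline) or 1.0  # guard against zero spread
    return abs(statistics.fmean(live) - base_mean) / base_std

def check_drift(baseline, live, threshold=2.0):
    """Return True when live data has drifted beyond the threshold."""
    return drift_score(baseline, live) > threshold
```

Real MLOps platforms use richer statistics (for example, population stability index or KS tests), but the point stands: once the check is systematized, it runs for every model automatically instead of depending on whoever built the model remembering to look.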
Complex Technological Stacks
All technologies in the AI space are rapidly evolving, which can force costly switches from one technology to another. The challenge is heightened in large organizations, where different teams or even geographies may be using different technologies altogether.
Organizations need to operate in unison, with a centralized and governed platform that can adapt across changes in technology, ensuring resilience and letting teams capitalize on existing work.
Shifting to AI as a Revenue Center
In today’s world, it’s not enough for organizations to simply leverage Enterprise AI techniques at any price — they must do so efficiently. To achieve this, companies need to move AI from a cost to a revenue center. If organizations can address the five major costs outlined above, and decrease both marginal costs and incremental maintenance costs, they will be in a much better position to scale and profit from Enterprise AI in the future.