Applied AI Adoption: How to Get it Right

As the field of applied AI matures and more and more capabilities become available, the question is often less how to build powerful technology and more how to achieve meaningful impact with that technology.

At the end of the day, AI is a tool after all.

It is a means to an end, not an end in and of itself. In the technology space, we care about the successful adoption and deployment of AI because of the innumerable problems it can solve. Yet time and again we see companies of all shapes and sizes blindly invest in or deploy AI without a clear understanding of what problems they are looking to solve or the value they are trying to generate. The result is huge amounts of wasted time and money.

But it doesn’t have to be this way.

Regardless of the type of AI or the problem you are dealing with, there are a few key areas that are worth considering in order to deploy AI effectively:

  • Problem Identification and Definition
  • Evaluation and Expectation
  • Experimentation and Improvement

Each of these areas has its own challenges, but ignoring any of them can jeopardize entire projects or initiatives. Here is how companies and their technology teams can best approach each of these areas and drive AI adoption success.

Problem Identification and Definition

The main challenge in applied Data Science today is not how to solve a problem but knowing which problem to solve in the first place. Clearly scoping and understanding a problem is often the hardest challenge we face as technologists and business leaders. While certain problems will require varying degrees of technological complexity and innovation to solve, without an initial hypothesis it will be incredibly difficult to create impact. We often see isolated teams of data scientists given a vague mandate who go on to create technically impressive prototypes or Proofs of Concept (POCs). Their work may even result in academic publications, but what they build fails to be deployed within the business and never creates user value or impact.

There are a number of reasons for this, but solving a problem that is not a priority, or one that does not create value, is often part of the story.

Evaluation and Expectation

After problem definition, one of the most common challenges in applied AI is a misalignment of performance expectations among stakeholders. What is good enough? Which types of mistakes are acceptable, and which are unacceptable? What role do subjectivity and potential or perceived bias in data sets play? How important are trust and explainability?

These are just a few of the questions on which agreement needs to be reached and where expectations may initially be misaligned. And this misalignment isn’t limited to internal versus external teams; it is commonplace among teams “in-house” as well.

For instance, when dealing with sarcasm detection, a product manager might expect that anything below 70% accuracy will not meet customers’ expectations, while a data scientist might see 60% accuracy as a massive achievement given the complexity of the problem. Without alignment, this can lead to tension and debate over whether an AI solution is “good enough” to be deployed to customers. The risk is that you invest in building an AI solution that is then deemed not accurate enough to be used by anyone. It is imperative to align on expectations before trying to solve a problem, rather than retroactively.
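
To make this concrete, here is a minimal sketch of what encoding an agreed performance bar might look like in practice. The labels, predictions, and the 70% threshold are illustrative assumptions, not figures from a real sarcasm-detection system:

    AGREED_ACCURACY_BAR = 0.70  # threshold signed off by product and data science

    def accuracy(y_true, y_pred):
        """Fraction of predictions that match the ground-truth labels."""
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
        return correct / len(y_true)

    # Hypothetical held-out evaluation set: 1 = sarcastic, 0 = not sarcastic.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

    score = accuracy(y_true, y_pred)
    print(f"Held-out accuracy: {score:.0%}")
    if score < AGREED_ACCURACY_BAR:
        print("Below the agreed bar: iterate or renegotiate the target.")
    else:
        print("Meets the agreed bar: candidate for deployment.")

The point is not the code itself but that the bar lives in one agreed place, settled before the model is built rather than argued over afterwards.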

Once these expectations have been agreed upon, it is important that stakeholders settle on the proper metrics and methods of evaluation. This is obviously easier said than done. To start, it is key that business leaders and technology teams keep two prevailing facts in mind when building evaluation frameworks: all AI systems will eventually make a mistake; and even though AI will often outperform humans, it will sometimes make really stupid mistakes that a human would never make (e.g., an automatic camera following the bald head of a referee instead of the ball during a soccer match). Mistakes such as these can be discouraging and cause decision-makers to scrap AI projects completely, but they do not necessarily indicate that an AI solution is ineffective.
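
One hedged way to build those facts into an evaluation framework is to weight different kinds of mistakes differently, rather than collapsing everything into a single accuracy number. The cost values and labels below are illustrative assumptions:

    from collections import Counter

    # Hypothetical cost matrix: (actual, predicted) -> business cost.
    # Here a false positive (flagging neutral text as sarcastic) is deemed
    # five times worse than a false negative (missing sarcasm).
    COSTS = {
        (0, 0): 0.0,  # true negative: no cost
        (1, 1): 0.0,  # true positive: no cost
        (1, 0): 1.0,  # false negative: an acceptable mistake
        (0, 1): 5.0,  # false positive: an unacceptable mistake
    }

    def expected_cost(y_true, y_pred):
        """Average per-example cost under the agreed cost matrix."""
        return sum(COSTS[(t, p)] for t, p in zip(y_true, y_pred)) / len(y_true)

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

    print("Confusion counts:", dict(Counter(zip(y_true, y_pred))))
    print(f"Expected cost per example: {expected_cost(y_true, y_pred):.2f}")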

By keeping these two things in mind, businesses can approach evaluating AI solutions more realistically.

Experimenting and Improving in AI Adoption Stages

Once we have a clear problem to solve, with aligned expectations of performance around agreed evaluation metrics, we can start experimenting to find the best possible solution. This is arguably the step that has received the most attention in the data science and broader technology community over the years.

The well-known, and frequently ignored, advice is to start with the simplest possible solution that might work end to end. Once this solution is built, and assuming there is a clear evaluation framework to compare alternatives, we can experiment with as many approaches as we want and select the best one for our needs. Here we need to remember to be problem-driven. Sometimes an off-the-shelf solution will do the trick, and engineers with machine learning expertise can integrate it into a broader offering.
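
As a minimal sketch of this baseline-first approach, the snippet below builds the simplest possible model end to end and evaluates every candidate under one shared metric. The model, data, and scores are illustrative assumptions:

    def majority_class_baseline(train_labels):
        """Simplest possible model: always predict the most common training label."""
        most_common = max(set(train_labels), key=train_labels.count)
        return lambda _text: most_common

    def evaluate(model, test_set):
        """Shared evaluation so every candidate is judged on the same metric."""
        correct = sum(1 for text, label in test_set if model(text) == label)
        return correct / len(test_set)

    train_labels = [0, 0, 1, 0, 1, 0]
    test_set = [("great, another meeting", 1), ("thanks for the help", 0),
                ("oh, wonderful", 1), ("see you tomorrow", 0)]

    candidates = {"majority-class baseline": majority_class_baseline(train_labels)}
    # Off-the-shelf or novel models would be added here and compared
    # against the baseline under the identical metric.
    for name, model in candidates.items():
        print(f"{name}: {evaluate(model, test_set):.0%} accuracy")

Any more sophisticated approach then has to beat the baseline's number to justify its added complexity.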

Other times you may need novel research and will want to involve data scientists with deeper expertise in a given problem space. But it is important not to assume that novel research is needed without first examining the powerful solutions that already exist.

When experimenting, it is also important to remember that effective AI cannot be created in a vacuum. Without the right technological readiness, it will be challenging to build AI in the first place and then impossible to deploy and maintain it thereafter.

Machine Learning Operations, or MLOps, has adapted much of traditional DevOps to the data science life cycle, importing concepts like continuous improvement and development as well as ongoing monitoring. Such an environment is necessary to make experimentation easy, with minimum friction for testing new models and ideas. If your data scientists are able to build POCs and prototypes easily, these can be shown to clients or internally earlier in order to iterate more quickly.
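
Even something as simple as logging every run in a comparable format lowers that friction. Real teams would typically reach for a dedicated experiment-tracking tool; this stdlib-only sketch, including its file name and fields, is purely an assumption for illustration:

    import json
    import time
    from pathlib import Path

    LOG_FILE = Path("experiments.jsonl")  # hypothetical log location

    def log_experiment(name, params, metric_name, metric_value):
        """Append one run as a JSON line so runs stay easy to compare."""
        record = {
            "timestamp": time.time(),
            "experiment": name,
            "params": params,
            metric_name: metric_value,
        }
        with LOG_FILE.open("a") as f:
            f.write(json.dumps(record) + "\n")

    # Example: two candidates logged against the same agreed metric.
    log_experiment("baseline-majority", {"features": "none"}, "accuracy", 0.50)
    log_experiment("logreg-tfidf", {"ngrams": 2, "C": 1.0}, "accuracy", 0.63)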

After this initial experimentation, when a system is in production and integrated into your product, it is important to remember that AI solutions are not “fire and forget.”

Once live, they need to be monitored and potentially re-tuned or even changed completely over time. Again, this sounds obvious to anyone who has worked in software development. However, due to the relative immaturity of AI, we often do not see widespread adoption of MLOps practices like continual improvement and monitoring.
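
As a hedged sketch of what that ongoing monitoring might look like, the snippet below tracks a live model’s rolling accuracy against the agreed bar and flags when re-tuning is needed. The window size, threshold, and outcomes are assumptions:

    from collections import deque

    class AccuracyMonitor:
        def __init__(self, window_size=500, alert_threshold=0.70):
            self.window = deque(maxlen=window_size)  # most recent outcomes
            self.alert_threshold = alert_threshold   # agreed minimum accuracy

        def record(self, prediction, actual):
            """Record one labelled production outcome as it arrives."""
            self.window.append(prediction == actual)

        def check(self):
            """Return rolling accuracy and whether it fell below the bar."""
            if not self.window:
                return None, False
            acc = sum(self.window) / len(self.window)
            return acc, acc < self.alert_threshold

    monitor = AccuracyMonitor(window_size=4, alert_threshold=0.70)
    for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 0)]:
        monitor.record(pred, actual)

    acc, degraded = monitor.check()
    print(f"Rolling accuracy: {acc:.0%}; needs re-tuning: {degraded}")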

Editorial note: Miguel Martinez, Co-Founder and Chief Data Scientist at Signal AI, is a co-author of this post.
