The Three Scariest Parts of AI
Your job can be unpredictable.
One day… the AI project mandated by the board just went into production without a hitch and is exceeding its forecast bottom-line impact. You're a superhero. Your CEO is on the cover of your industry's biggest publication. You and your team have earned bonuses and promotions.
Another day… your team failed to deliver on the company's most visible, most strategic project: a board-driven game changer and the business's first foray into AI. It's not clear who will stay and who will go, but the team's reputation is mangled. Your PR team is in full-time damage-control mode, responding to brand-killing headlines.
The high priority, size and visibility of AI projects bring excitement and stress. Add Gartner's finding that over half of all enterprise AI projects fail, and it's easy to see why AI can be scary.
From a front-row vantage point, these are the three scariest things that put budgets, timelines and projects in jeopardy:
1. Underestimating the training data challenge
You need much, much more training data than you imagine. You can't buy it off the shelf in the quantities you need, with the use-case-specific annotation your algorithm requires. You almost certainly can't produce it internally, either, and you can't expect your data science team to do the job. This is the most common scenario we step into: the data science team is overwhelmed by the training data workload and the entire project is on the precipice.
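To see why in-house annotation so rarely pencils out, here is a quick back-of-envelope sketch in Python. Every number in it (examples needed, seconds per label, annotator capacity) is an illustrative assumption, not a figure from any real project:

```python
# Back-of-envelope estimate of in-house annotation effort.
# All numbers below are illustrative assumptions, not real project figures.

examples_needed = 500_000   # labeled examples a mid-size model might require
seconds_per_label = 30      # time for one careful, use-case-specific annotation
annotators = 4              # in-house staff you can realistically spare
hours_per_week = 20         # labeling hours each annotator can sustain

total_hours = examples_needed * seconds_per_label / 3600
weeks = total_hours / (annotators * hours_per_week)

print(f"Total annotation effort: {total_hours:,.0f} hours")
print(f"With {annotators} part-time annotators: {weeks:,.0f} weeks (~{weeks / 52:.1f} years)")
```

Under these assumptions, a half-million labels is roughly a year of sustained part-time labeling for a four-person team, before any quality review or re-annotation.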
2. Ignoring the value of agile
Traditional application developers learned a long time ago that the waterfall method of development — where even huge applications are planned, architected, coded and tested as a monolith — is impractical. Today, agile methodologies, where smaller chunks of functionality are built and tested iteratively, are dominant. And yet in our experience, most enterprise AI projects follow the old waterfall method. The result? The AI project is all cost and no benefit until every aspect of the model is at the required level of confidence. If that ever happens.
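What does agile look like for a model? As a rough illustration, here is a minimal Python sketch that trains on progressively larger chunks of data and evaluates after every increment, shipping as soon as a target is met instead of waiting on a monolithic final build. The dataset, model and 95% target are stand-in assumptions, not a prescribed stack:

```python
# A minimal sketch of "agile" model development: train and evaluate in small
# increments instead of one monolithic build. Dataset, model and threshold
# are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

TARGET = 0.95  # the confidence level the use case requires (assumed)

# Iterate: add a chunk of training data, retrain, measure, decide.
for n in range(200, len(X_train) + 1, 200):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    score = model.score(X_test, y_test)
    print(f"{n:5d} training examples -> accuracy {score:.3f}")
    if score >= TARGET:
        print("Target met; ship this increment and plan the next one.")
        break
```

Every increment produces a measurable result, so the project is never "all cost and no benefit" while you wait for the full model to mature.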
3. Failing to keep bias out of your model
A lot has been written about bias in machine learning, and for good reason. Models do what they are taught to do, and if they're trained on biased data, their behavior will reflect that bias. Fortunately, rooting bias out of training data is a well-established discipline, even if data scientists themselves aren't always data bias experts. Ignore this issue at your own peril: with biased data, your facial recognition algorithm makes embarrassing mistakes and your autonomous vehicles fail to distinguish white trucks from cloudy backdrops.
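One basic, well-established check is to compare a model's error rate across groups in the evaluation data. The sketch below assumes a hypothetical record format and an arbitrary five-point disparity threshold; it illustrates the idea, not any particular fairness library:

```python
# A minimal sketch of one common bias check: compare a model's error rate
# across demographic groups. The record format and the 5-point gap
# threshold are illustrative assumptions.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        errors[group] += int(y_true != y_pred)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results for a binary classifier.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

rates = error_rate_by_group(results)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}

if max(rates.values()) - min(rates.values()) > 0.05:  # assumed threshold
    print("Warning: error rates differ sharply across groups; audit the training data.")
```

A gap like the one above is the signal to go back and examine how the training data for the underperforming group was collected and labeled.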
2019 will see more AI projects coming online, and some are bound to be scary. But by anticipating your training data needs, iterating often and testing for bias, you can go into your next project confident that yours will land in the half that succeed.