To Get the Most Out of AI, You Need to ‘Boss It Around’

By Roee Barak, Upword’s CEO & Founder

In many areas, artificial intelligence is making routine what used to be impossible – and that’s perhaps especially true in business. When users first witness what AI is capable of, they are often in awe – and when they realize they can use this technology to accomplish far more than they could before, they sometimes become overly dependent on it. That’s a bad idea; despite its advanced capabilities, AI still has some serious flaws.

To be truly effective, AI needs a human “boss.” In the human-AI partnership, it’s the human who needs to come first – who needs to “lead” the AI system toward results that make sense, by reviewing the output of these amazing tools and applying experience and logic to it.

For example, AI can sometimes display “tunnel vision,” unaware of “big picture” policy issues and long-term corporate goals. It’s like a star employee who is very good at their job but lacks awareness of the company’s long-term strategy. What you want from that employee is productivity in their specific area – not an overhaul of the company based on their limited knowledge. Users need to treat AI systems as that “talented employee,” keeping in mind that the human must stay in control of the overall project strategy.

And just as a talented employee needs mentoring and guidance, so does AI. New AI tools, which are increasingly employed to do real jobs and tasks, should be treated like new hires: they need a boss to mentor and guide them, and to supervise and check their work, at least in the beginning. With AI, we are still at the beginning, and some micromanagement is often needed. With that human guidance, companies can extract the greatest value from their new “star performers”; without it, a company could find itself in big trouble.

There’s no question that AI tools have been a major boon to productivity – boosting output by nearly 500%, according to some studies – and enabling businesses to become more agile, competitive, and efficient. Now that we’ve gotten used to employing them, doing business without these tools is inconceivable. As AI tools improve, so will the efficiency and profitability businesses can extract from them. Indeed, for better or worse, business research today often begins with posing a question to an AI tool.

It is true that humans are learning more about the nuances of this process all the time. For example, it is increasingly understood that with the right question and the right prompt, most AI tools will produce results in minutes that would take far longer to generate manually. Automated AI systems will parse data in a matter of minutes, if not seconds – far faster than any human could – and present it in a logical, easily understood manner. The temptation to take those results and run with them is, understandably, very strong.
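
As a loose illustration of “asking the right question,” consider the sketch below. It contrasts a vague prompt with one that frames the task and asks the model to expose its sources and reasoning; the `ask_model` function is a hypothetical placeholder, not any vendor’s actual API, and the prompt text is invented for the example.

```python
# A minimal sketch of "asking the right question." The ask_model function
# is a hypothetical stand-in for a real AI API call; the prompt text is
# invented for illustration.

def ask_model(prompt: str) -> str:
    """Placeholder: wire this up to whatever AI provider you actually use."""
    return f"[model response to: {prompt[:40]}...]"

# A vague prompt invites a vague, hard-to-verify answer.
vague_prompt = "Tell me about our market."

# A framed prompt states the role and the task, and asks the model to
# expose its sources and reasoning so a human "boss" can check them.
framed_prompt = (
    "You are assisting a B2B software analyst.\n"
    "Task: summarize the top three competitive risks in our market.\n"
    "For each risk, name the type of source you relied on and state your "
    "reasoning in one sentence, so a human reviewer can verify it."
)

print(ask_model(framed_prompt))
```

The point of the framed version is that it bakes the reviewer’s inevitable follow-up questions – what’s the source? what’s the reasoning? – into the request itself, making the output far easier for a human to check.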

But those automated results need to be understood, reviewed, and checked for accuracy. As smart as it is, AI sometimes comes up short, producing inaccurate, incorrect, or even illogical results – and without a human supervising the data-generation process, companies could find themselves facing fines, lawsuits, and damaged reputations. Air Canada furnishes a good example of what can happen when AI runs unchecked: the company was recently ordered to pay damages to a passenger who, based on incorrect information from an AI-powered chatbot, paid full fare for tickets to a grandparent’s funeral. The company’s defense – that it could not be held responsible for the chatbot’s errors – was rejected by the court, which ordered Air Canada to refund the overcharge. Had a human reviewed the information the chatbot offered, the airline could have avoided the expense – and embarrassment – that ensued.

But it’s not just about company coffers: overreliance on automatically generated AI data can damage or even derail a career. To present effectively – whether in person or in a Zoom meeting – the presenter needs to be intimately familiar with the information they are presenting. That is difficult if the presenter is simply repeating information produced by AI. If the automated data is incorrect, they are likely to be called out on it, with the audience or stakeholders demanding to know the source of the data, the reasoning behind a statement, or the logic of an argument – and the presenter will likely not be able to answer in an effective and competent manner. A similar situation could arise even if the data is correct: listeners could very well ask follow-up questions, or want to know the source or reasoning behind it.

To use their automated results effectively – and safely – AI users need to engage in some “active learning,” evaluating the results and applying knowledge, facts, and experience to the review process. A user who follows that path can ask themselves the same questions likely to be posed to them – and find the answers before anyone else asks. But skipping that review could put them in jeopardy, making them look foolish when presenting information that appears correct on the surface but is riddled with flaws or other problems that invite hard questions.
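
One way to picture that review step is a simple gate that refuses to mark AI output as ready until a human has written answers to the very questions an audience would ask. The sketch below is a minimal illustration under assumed names – the `Draft` class and the checklist questions are invented, not any product’s workflow.

```python
# Sketch of a human review gate for AI-generated content: the draft is
# "approved" only once a person has answered every review question.
from dataclasses import dataclass, field

REVIEW_QUESTIONS = [
    "What is the source of each key claim?",
    "Does the reasoning behind each statement hold up?",
    "Do the figures match what I know from experience?",
]

@dataclass
class Draft:
    text: str
    answers: dict = field(default_factory=dict)  # reviewer's notes, per question

    @property
    def approved(self) -> bool:
        # Approved only when every review question has a non-empty answer.
        return all(self.answers.get(q, "").strip() for q in REVIEW_QUESTIONS)

draft = Draft(text="AI-generated market summary goes here.")
print(draft.approved)  # False: no review has happened yet
draft.answers[REVIEW_QUESTIONS[0]] = "Industry report, section 2."  # illustrative
print(draft.approved)  # still False until all three questions are answered
```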

Surveys suggest that more than half of Americans are concerned about AI’s effects on their lives: some fear losing their jobs to AI, some fear AI systems will compromise their privacy, and some fear the politicization of results. It’s understandable why people fear AI: it has been presented in the media as a monolithic, independent “monster” that will fundamentally change life, turning us all into its servants, if not destroy us outright. But that’s not the case. AI is just the latest in a long line of advanced tools we can use to make business, and life, easier and better. We don’t work for AI – it works for us. Users should keep this in mind when employing advanced tools to do their business research. It’s the human user who is in charge, who needs to lead – and the best way to lead is to apply experience and knowledge to ensure that the results AI tools provide are accurate, correct, and logical.
