Artificial Intelligence | News | Insights | AiThority

Why AI is Both a Risk and a Way to Manage Risk


Spending on Artificial Intelligence (AI) is expected to more than double, from $35 billion in 2019 to $79 billion in 2022, according to IDC forecasts. But as we enter the fourth industrial revolution powered by AI, technologists have divided themselves into utopian and alarmist camps. That's a false and dangerous dichotomy. We need to adopt a pragmatic mindset that sees AI as both a risk and a way to manage risk.

The Risk of Rogue AI

From killer robots to racism, today's headlines provide AI alarmists with ample fodder. The risks associated with AI grow as the technology improves and proliferates. But unlike other paradigm-shifting technologies, such as the printing press, mass production, or digital commerce, it's the invisible aspects of AI we most need to worry about: algorithms that learn from patterns can trigger costly errors and, left unchecked, can pull projects and organizations in entirely wrong directions, with catastrophic consequences.

For the first time in history, a single person can customize a message for billions of people and share it with them within a matter of days. A software engineer can create an army of AI-powered bots, each pretending to be a different person, promoting biased content on behalf of political or commercial interests or, worse, attacking vulnerable systems.


Trusted AI Can Help Manage Risk

The doomsday scenarios aren't a fait accompli, but they do underscore the need for AI systems that engage with humans in transparent ways. Every new technology introduces new challenges, safety issues, and potential hazards. When pharmaceuticals were first introduced, for example, there were no safety tests, quality standards, childproof caps, or tamper-resistant packages. AI is a new technology and will undergo a similar evolution.

To trust an AI system, we must have confidence in its decisions. Increasingly, bankers are asking important questions about how AI will affect consumers. The Defense Department has signaled that it understands the importance of empowering ethicists to guide AI technologies.

Meanwhile, we’re beginning to include AI in our long-overdue conversations about criminal justice. These are all good signs, but we need to rapidly scale our ethical inquiries by using “supervisory” AI systems to provide visibility and control over “production” AI systems.


Only Pragmatism Will Safeguard Our Values

AI systems must reflect our values. We can ensure that through investment, education, and policy. But first, we must dispense with both the utopian and the alarmist positions. Utopians assume that every AI solution will automatically improve on what came before it, and therefore miss the opportunity to address critical questions about values before deployment.

At the opposite end of the spectrum, alarmists assume the worst and therefore fail to show up to the debate. A pragmatic approach that sees AI as both a risk and a way to manage risk by pairing AI with other AI is the prerequisite mental model for grappling with the issues raised by the fourth industrial revolution.

