Artificial Intelligence | News | Insights | AiThority

Research Shows Humans Are Attacking Artificial Intelligence Systems

Online hackers are increasingly hijacking search engines and social media platforms to carry out cyber attacks, a research group led by De Montfort University Leicester (DMU) has found.

Artificial intelligence (AI) software found in commonly used search engines, social media platforms and recommendation websites is being manipulated by hackers more frequently than people realise, according to a new report.

Published by SHERPA, a European Union-funded project established to enhance the responsible development of AI and examine the impact of smart information systems (SIS) on ethics and human rights, the report states that attacks against AI systems are already occurring regularly but are not easy to identify.


“Our consortium partners found that hackers tend to focus most of their efforts on manipulating existing AI systems for malicious purposes instead of developing new attacks that use machine learning,” explained SHERPA Project Coordinator Professor Bernd Stahl from DMU.

SHERPA researchers, including representatives and consortium partners from F-Secure, a cyber security firm that builds detection and response solutions to keep businesses and people safe online, identified a number of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.

Andy Patel, a researcher with F-Secure’s Artificial Intelligence Center of Excellence, said: “Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out of control computers.


“But human attacks against AI actually happen all the time.”

The report also notes that AI has advanced to a point where it can fabricate extremely realistic written, audio, and visual content, and some AI models have even been withheld from the public to prevent them from being abused by attackers.

“At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it,” said Andy.

“AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect. And there’s many different applications for convincing, fake content, so I expect it may end up becoming problematic.”

Professor Stahl added: “Our project’s aim is to understand ethical and human rights consequences of AI and big data analytics to help develop ways of addressing these. We can’t have meaningful conversations about human rights, privacy, or ethics in AI without considering cyber security.

“And as a trustworthy source of security knowledge, F-Secure’s contributions are a central part of the project.”

