
HiddenLayer Creates a Threat Intelligence Team Focused on Thwarting ML Attacks

HiddenLayer, the developer of a unique security platform that safeguards the machine learning models enterprise organizations use behind their most important products, announced the formation of its Synaptic Adversarial Intelligence (SAI) team to raise awareness of the threats facing machine learning (ML) and artificial intelligence (AI) systems.

The SAI team’s primary mission is to educate data scientists, MLDevOps teams, and cybersecurity professionals on how to evaluate the vulnerabilities and risks associated with ML/AI so they can implement and deploy these systems in a more security-conscious way. The insights gathered by the SAI team are used to conduct risk assessments and to produce intelligence reports that expose the adversarial ML threat landscape. Collectively, the team’s multidisciplinary cybersecurity experts and data scientists bring decades of experience in security, with deep backgrounds in malware detection, threat intelligence, reverse engineering, incident response, digital forensics, and adversarial machine learning.


Until recently, most adversarial ML/AI research focused on the mathematical side: making algorithms more robust to malicious input. Now security researchers are increasingly examining ML algorithms and how models are developed, maintained, packaged, and deployed, hunting for weaknesses and vulnerabilities across the broader software ecosystem. They have uncovered a number of new attack techniques and, in turn, developed a greater understanding of how practical attacks are performed against real-world ML implementations.
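
To make the "malicious input" line of research concrete, here is a minimal, hypothetical sketch (not HiddenLayer's tooling) of an evasion attack in the style of the fast gradient sign method against a toy linear classifier; the model weights, feature count, and epsilon budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predict class 1 when w . x + b > 0 (illustrative only).
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A benign input that the model labels as class 1.
x = rng.normal(size=20)
if predict(x) == 0:
    x = -x  # flip the sample so the starting prediction is class 1

# FGSM-style evasion: for a linear model the gradient of the score w.r.t. x is w,
# so stepping each feature by -eps * sign(w) lowers the score as much as possible
# within an L-infinity budget of eps.
eps = 0.5
x_adv = x - eps * np.sign(w)

print("original prediction:   ", predict(x))             # 1
print("adversarial prediction:", predict(x_adv))          # usually flips to 0
print("max per-feature change:", np.abs(x_adv - x).max()) # bounded by eps
```

Input perturbations like this are only the starting point; the practical attacks described above also target how models are packaged, distributed, and deployed.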

“Alongside our commitment to increasing awareness of ML security, we will also actively assist in the development of countermeasures to thwart ML adversaries through the monitoring of deployed models, as well as providing mechanisms to allow defenders to respond to attacks,” said Tom Bonner, Senior Director of Adversarial Machine Learning Research at HiddenLayer. “There has been a tremendous effort from several organizations, such as MITRE and NIST, to better understand and quantify the risks associated with ML/AI. We look forward to working alongside these industry leaders to broaden the pool of knowledge, define threat models, drive policy and regulation, and most critically, prevent attacks.”


