
IEEE Computer Society Emerging Technology Fund Recipient Introduces Machine Learning Cybersecurity Benchmarks

At the virtual Backdoor Attacks and Defenses in Machine Learning (BANDS) workshop during the Eleventh International Conference on Learning Representations (ICLR), participants in the IEEE Trojan Removal Competition presented their findings and their success in effectively and efficiently mitigating the effects of neural trojans while maintaining high model performance. Evaluated on clean accuracy, poisoned accuracy, and attack success rate, the competition’s winning team from the Harbin Institute of Technology, Shenzhen, competing as HZZQ Defense, formulated a highly effective solution, achieving a 98.14% poisoned accuracy and an attack success rate of only 0.12%.
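
For readers unfamiliar with those three metrics, the sketch below shows one common way they are computed, assuming a PyTorch classifier with separate clean and trigger-carrying test loaders. The function and variable names are illustrative assumptions, not the competition’s actual evaluation harness.

```python
import torch


def evaluate_defense(model, clean_loader, poisoned_loader, target_label, device="cpu"):
    """Illustrative computation of the three metrics named above.

    clean accuracy      - accuracy on untouched test inputs
    poisoned accuracy   - accuracy on trigger-carrying inputs, judged against
                          their original (correct) labels
    attack success rate - fraction of trigger-carrying inputs classified as the
                          attacker's chosen target label
    """
    model.to(device)
    model.eval()
    clean_correct = clean_total = 0
    poison_correct = poison_hit_target = poison_total = 0

    with torch.no_grad():
        for x, y in clean_loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            clean_correct += (pred == y).sum().item()
            clean_total += y.size(0)

        for x, y_true in poisoned_loader:  # y_true holds the original labels
            pred = model(x.to(device)).argmax(dim=1).cpu()
            poison_correct += (pred == y_true).sum().item()
            poison_hit_target += (pred == target_label).sum().item()
            poison_total += y_true.size(0)

    return {
        "clean_accuracy": clean_correct / clean_total,
        "poisoned_accuracy": poison_correct / poison_total,
        "attack_success_rate": poison_hit_target / poison_total,
    }
```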


“The IEEE Trojan Removal Competition is a fundamental solution to improve the trustworthy implementation of neural networks from implanted backdoors,” said Prof. Meikang Qiu, chair of the IEEE Smart Computing Special Technical Committee (SCSTC) and full professor at the Beacom College of Computer and Cyber Science at Dakota State University, Madison, S.D., U.S.A. He was also named an IEEE Computer Society Distinguished Contributor in 2021. “This competition’s emphasis on Trojan Removal is vital because it encourages research and development efforts toward enhancing an underexplored but paramount issue.”

In 2022, IEEE CS established its Emerging Technology Fund and, for the first time, awarded $25,000 to IEEE SCSTC for the “Annual Competition on Emerging Issues of Data Security and Privacy (EDISP),” which yielded the IEEE Trojan Removal Competition (TRC ’22). The proposal offered a novel take on a cybersecurity topic: unlike most existing competitions, which focus only on backdoor model detection, this competition encouraged participants to explore solutions that enhance the security of the neural networks themselves. By developing general, effective, and efficient white-box trojan removal techniques, participants have contributed to building trust in deep learning and artificial intelligence, especially for pre-trained models in the wild, which is crucial to protecting artificial intelligence from potential attacks.
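
One widely known family of white-box removal baselines simply fine-tunes the suspect model on a small trusted clean set so that the backdoor behavior is overwritten while normal accuracy is preserved. The minimal sketch below illustrates that baseline idea only; it is not any particular entrant’s method, and all names and hyperparameters are assumptions for illustration.

```python
from torch import nn, optim


def finetune_removal(model, clean_loader, epochs=5, lr=1e-3, device="cpu"):
    """Baseline-style white-box trojan removal: fine-tune the suspect model on a
    small trusted clean set so the backdoor behavior is overwritten while normal
    accuracy is (ideally) preserved."""
    model.to(device)
    model.train()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```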

With 1,706 valid submissions from 44 teams worldwide, six groups successfully developed techniques that achieved better results than the state-of-the-art baseline metrics published in top machine-learning venues. The benchmarks summarizing the models and attacks used during the competition are being released to enable additional research and evaluation.

“We’re hoping that this benchmark provides diverse and easy access to model settings for people coming up with new AI security techniques,” shared Yi Zeng, the competition chair of the IEEE TRC ’22 and research assistant at the Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Va., U.S.A. “This competition has yielded new data sets consisting of poisoned pre-trained models that are of different architectures and trained on diverse kinds of data distributions with really high attack success rates, and now developers can explore new defense methods and get rid of remaining vulnerabilities.”

During the competition, collective participant results yielded two key findings:

  1. Many classic techniques for mitigating backdoor impacts can overcorrect, “unlearning” key elements of what the model has learned and thereby lowering model performance, because they normally do not measure their impact on poisoned accuracy, a novel metric proposed and highlighted throughout the IEEE TRC ’22 (see the sketch after this list).
  2. Many existing techniques generalize poorly; some methods are effective only on certain data sets or specific machine learning model architectures.
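
The first finding suggests that a removal loop should monitor poisoned accuracy alongside the attack success rate rather than the attack success rate alone. Below is a hedged sketch of such a stopping check, reusing the hypothetical evaluate_defense helper sketched earlier and an arbitrary removal_step routine; none of this reflects the competition’s official tooling.

```python
def remove_with_early_stop(model, clean_loader, poisoned_loader, target_label,
                           removal_step, max_steps=20, min_poisoned_acc=0.90):
    """Apply an arbitrary removal routine one step at a time, but stop if the
    poisoned accuracy drops below a floor, i.e., if the defense starts to
    'overcorrect' and unlearn useful behavior along with the backdoor."""
    for step in range(max_steps):
        model = removal_step(model)  # one round of pruning, fine-tuning, etc.
        metrics = evaluate_defense(model, clean_loader, poisoned_loader, target_label)
        if metrics["attack_success_rate"] < 0.01:
            break  # backdoor considered neutralized
        if metrics["poisoned_accuracy"] < min_poisoned_acc:
            print(f"step {step}: poisoned accuracy fell to "
                  f"{metrics['poisoned_accuracy']:.2%}; stopping to avoid overcorrection")
            break
    return model
```

The 0.90 floor and the 0.01 success-rate threshold are arbitrary illustrations; a defender would tune both against how much degradation on trigger-carrying inputs is acceptable for a given deployment.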

These findings suggest that, for the time being, relying on a single generalized approach to mitigating attacks on neural networks is not advisable. Zeng emphasized the urgent need for a comprehensive AI security solution: “As we continue to witness the widespread impact of pre-trained foundation models on our daily lives, ensuring the security of these systems becomes increasingly critical. We hope that the insights gleaned from this competition, coupled with the release of the benchmark, will galvanize the community to develop more robust and adaptable security measures for AI systems.”


“As the world becomes more dependent on AI and machine learning, it is important to deal with the security and privacy issues that these technologies bring up,” said Qiu. “The IEEE TRC ’22 competition for EDISP has made a big difference in this area. I’d like to offer a special thanks to my colleagues on the steering committee—Professors Ruoxi Jia from Virginia Tech, Neil Gong from Duke, Tianwei Zhang from Nanyang Technological University, Shu-Tao Xia from Tsinghua University, and Bo Li from the University of Illinois Urbana-Champaign—for their help and support.”

Ideas and insights coming out of the event, along with the public benchmark data, will help make the future of machine learning and artificial intelligence safer and more dependable. The team plans to run the competition for a second year, and those findings will further strengthen the security of neural networks.

“This is precisely the kind of work we want the Emerging Technology Fund to fuel,” said Nita Patel, 2023 IEEE Computer Society President. “It goes a long way toward bolstering iterative developments that will strengthen the security of machine learning and AI platforms as the technologies advance.”


