Artificial Intelligence | News | Insights | AiThority
[bsfp-cryptocurrency style=”widget-18″ align=”marquee” columns=”6″ coins=”selected” coins-count=”6″ coins-selected=”BTC,ETH,XRP,LTC,EOS,ADA,XLM,NEO,LTC,EOS,XEM,DASH,USDT,BNB,QTUM,XVG,ONT,ZEC,STEEM” currency=”USD” title=”Cryptocurrency Widget” show_title=”0″ icon=”” scheme=”light” bs-show-desktop=”1″ bs-show-tablet=”1″ bs-show-phone=”1″ custom-css-class=”” custom-id=”” css=”.vc_custom_1523079266073{margin-bottom: 0px !important;padding-top: 0px !important;padding-bottom: 0px !important;}”]

NRI Secure Launches Security Assessment Service “AI Red Team,” for Systems Utilizing Generative AI

NRI SecureTechnologies, Ltd. (NRI Secure), a leading global provider of cybersecurity services, launched a new security assessment service, “AI Red Team,” targeting systems and services using generative AI.

Vulnerabilities and Risks of AI

In recent years, the use of generative AI, especially Large Language Models (LLMs)1, has continued to grow in many fields. While expectations for LLMs have increased, LLMs have also highlighted the existence of vulnerabilities, such as prompt injection2 and prompt leaking3, as well as hallucination4, sensitive information disclosure, inappropriate content generation, and bias risk5 (see Figure). Companies utilizing LLM technologies need to be aware of these issues specific to generative AI and apply appropriate countermeasures. For this reason, the importance of security assessment specific to generative AI is now being called for, and various countries are beginning to mention the need for assessment by independent outside experts.

Recommended AI News: Riding on the Generative AI Hype, CDP Needs a New Definition in 2024

AIThority Predictions Series 2024 banner

Overview and Features of this Service

In this service, NRI Secure’s experts conduct simulated attacks on actual systems to evaluate, from a security perspective, AI-specific vulnerabilities in LLM-based services and problems in the overall system, including peripheral functions linked to the AI.

AI does not function as a service by itself; rather, it constitutes a service by linking with its peripheral functions. It is necessary not only to identify the risk of LLM alone, but also to evaluate it from a risk-based approach, which is to say, “If the risk becomes apparent, will it have a negative impact on the system or end-users?”

Recommended AI News: Smartling and Iterable Unveil Powerful Integration for Seamless Cross-Channel Localization

Therefore, this service provides a two-stage assessment: Identifying risks in the LLM alone, and evaluating the entire system, including the LLM. The results of the assessment are summarized into a report, detailing the problems found and recommended mitigation measures.

Related Posts
1 of 40,668

The two main features of this service are:

1. It performs efficient, comprehensive, and high-quality assessments using our proprietary automated tests and expert investigations

NRI Secure has developed its own assessment application that can be automatically tested by employing DAST6 for LLM. Using this application, vulnerabilities can be detected efficiently and comprehensively. Furthermore expert engineers in LLM security perform manual assessment to identify use-case-specific issues that cannot be covered by automated testing, and also investigate detected vulnerabilities in depth.

2. It assesses actual risk across the entire system and reduces countermeasure costs

Generative AI has the nature of determining its output probabilistically. Additionally, because it is difficult to completely understand internal operations, there are limits to how much of and how many vulnerabilities can be uncovered through a partial system evaluation. NRI Secure combines its long-accumulated expertise in security assessment to comprehensively assess the entire system and determine whether AI-caused vulnerabilities are apparent or not. This service also supports “OWASP Top10 for LLM,”7 which is difficult to deal with only by evaluating AI-specific problems.

If the AI itself appears to have vulnerabilities, the system will then evaluate the actual degree of risk from the perspective of the entire system, and can propose alternative countermeasures to avoid having to deal with vulnerabilities in the AI itself, which would be difficult to implement. As a result, the cost of countermeasures can be expected to be reduced.

Recommended AI News: Builder.ai’s Natasha: Now Everywhere, Your Constant Companion in Software Development

NRI Secure is developing the “AI Blue Team” service to support continuous security measures for generative AI, which will be a counterpart to this service, and will conduct regular monitoring of AI applications. The service is scheduled to launch in April 2024, and we are currently looking for companies that can participate in the PoC (Proof of Concept).

NRI Secure will continue to contribute to the realization of a safe and secure information system environment and society by providing a variety of products and services that support information security measures of companies and organizations.

[To share your insights with us, please write to sghosh@martechseries.com]

Comments are closed.