ISACA Prepares Enterprises for Managing Generative Artificial Intelligence Risk with New Guidance

As excitement around the benefits of generative artificial intelligence applications like OpenAI’s ChatGPT and Google’s Bard has grown, so have notes of caution from many in the industry, who point to a range of potential risks that could come with the technology. ISACA’s new resource, The Promise and Peril of the AI Revolution: Managing Risk, acknowledges the benefits of generative artificial intelligence (AI) but explores the rapidly evolving risk landscape and the steps that risk professionals should take to keep up with it.

The paper examines several types of potential risk that enterprises could face with generative AI, including invalid ownership, weak internal permission structures, data integrity, cybersecurity and resiliency impact, and broader societal risk. Because AI will likely affect businesses in every industry, organizations must take four important steps to maximize AI value while installing appropriate and effective guardrails as part of a continuous risk management approach (a brief illustrative sketch follows the list):

  1. Identify AI benefits.
  2. Identify AI risk.
  3. Adopt a continuous risk management approach.
  4. Implement appropriate AI security protocols.
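
As a purely illustrative sketch that is not part of the ISACA paper, the four steps can be pictured as repeated passes over a simple risk register; the class, field, and function names below are hypothetical (Python).

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        """Hypothetical record for one generative AI use case under review."""
        name: str
        benefits: list[str] = field(default_factory=list)   # Step 1: identify AI benefits
        risks: list[str] = field(default_factory=list)      # Step 2: identify AI risk
        controls: list[str] = field(default_factory=list)   # Step 4: security protocols applied

    def review_cycle(use_case: AIUseCase) -> AIUseCase:
        """Step 3: one iteration of a continuous risk management approach.

        Real assessments would feed these lists; the entries here are
        placeholders for illustration only.
        """
        use_case.benefits.append("faster drafting of customer responses")
        use_case.risks.append("data leakage through prompts")
        use_case.controls.append("acceptable use policy gate")
        return use_case

    case = AIUseCase(name="generative AI-assisted support desk")
    print(review_cycle(case))  # rerunning on a schedule keeps the register current

Repeating the cycle, rather than treating it as a one-time review, is what makes the approach continuous.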

Following these steps will help leaders strike the right balance between risk and reward as AI-enabled tools and processes are adopted across their enterprises. In addition to breaking down the four steps above, the ISACA paper details eight protocols and practices for building the AI security programs called for in the fourth step, including (a brief illustrative sketch follows the list):

  • Trust but verify.
  • Design acceptable use policies.
  • Designate an AI lead.
  • Perform a cost analysis.
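
To make the first of these concrete, a minimal hypothetical sketch of a “trust but verify” gate, in which generative AI output is released only after an automated or human check, might look like the following; none of this comes from the ISACA paper, and the function names and restricted-term list are assumptions (Python).

    def generate_draft(prompt: str) -> str:
        """Stand-in for a call to a generative AI service."""
        return f"Draft answer for: {prompt}"

    def verify(draft: str) -> bool:
        """Hypothetical check: block drafts containing restricted terms."""
        restricted = ("confidential", "internal only")
        return not any(term in draft.lower() for term in restricted)

    def trusted_output(prompt: str) -> str:
        """Trust but verify: use the model's draft only if it passes review."""
        draft = generate_draft(prompt)
        return draft if verify(draft) else "Escalated to a human reviewer."

    print(trusted_output("Summarize our public product FAQ"))

The same gating pattern extends naturally to an acceptable use policy, with the verification step checking requests against the policy before any output is returned.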

“While some leaders may prefer to wait to adopt AI tools, it can be a risk to your organization to delay the implementation of proper security and risk management plans. Managing AI risk isn’t just a precaution; it’s a necessity,” says Jason Lau, Chief Information Security Officer of Crypto.com and ISACA Board Director. “It is imperative that leaders prioritize establishing the correct infrastructure and governance processes for AI in their organizations, ensuring they align with core ethics, sooner rather than later.”
