
AI Risks Exist, But Who Will Manage Them Is a Bigger Threat!

To address the visible and unforeseen risks borne out of AI systems, breakthrough R&D and resilient universal governance are required.

Earlier this week, global leaders discussed some of the biggest AI risks on back-to-back days at the AI Safety Summit in Seoul. On the first day (21 May), South Korean President Yoon Suk Yeol and UK Prime Minister Rishi Sunak co-chaired a session, presenting the safety measures taken under the Bletchley Declaration. While there is absolutely no stopping the AI juggernaut in 2024, or in the years to come, one question lingers in everyone’s mind:

“Who will manage the large-scale AI risks that come alongside AI capabilities?”

Based on my analysis over the years, identifying and mitigating AI risks requires a deep understanding of the different personas and voices of stakeholders, current and future. It is a community-based approach, one that essentially requires the participation of AI experts, researchers, policymakers, owners, and the public. While AI risks can be discussed as a broad-spectrum issue, there is a serious need to open up the forum to address macro-level challenges related to ethics, responsiveness, bias, inclusion, ownership, and autonomy. There are nuanced layers to addressing the unseen risks from AI, and therefore the need to lay out “if-then” commitments in AI safety is a compelling proposition for global leaders representing different parts of the world.
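
As a minimal sketch of how such an “if-then” commitment might be encoded in practice, consider the snippet below. It is purely illustrative: the class, capability names, thresholds, and actions are invented for this example and are not drawn from any real framework or evaluation suite.

```python
# Illustrative sketch only: a hypothetical encoding of "if-then" safety
# commitments. If an evaluated capability crosses its trigger score, the
# pre-agreed response activates. All values here are invented.
from dataclasses import dataclass

@dataclass
class IfThenCommitment:
    capability: str        # capability being monitored
    trigger_score: float   # evaluation score at which the commitment activates
    required_action: str   # pre-agreed response once the trigger is crossed

def triggered_actions(commitments, eval_scores):
    """Return the pre-agreed actions whose triggers have been crossed."""
    return [
        c.required_action
        for c in commitments
        if eval_scores.get(c.capability, 0.0) >= c.trigger_score
    ]

commitments = [
    IfThenCommitment("autonomous self-replication", 0.2, "halt further scaling"),
    IfThenCommitment("large-scale persuasion", 0.5, "restrict deployment"),
]

print(triggered_actions(commitments, {"large-scale persuasion": 0.7}))
# -> ['restrict deployment']
```

The point of the structure is that the response is agreed before the trigger is ever crossed, which is exactly what vague proposals lack.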

According to a recent paper, the world needs to move away from vague proposals that fall short of real commitments. For instance, many countries have introduced initial guidelines and regulations to govern “frontier AI.” Yet these efforts seemingly fall short of making any concrete impact toward mitigating AI risks arising from rapid, uncontrolled, and uncertain mechanisms and applications.

Are we ready for generalist, autonomous AI systems?

According to experts, there are many open technical challenges in the safety and regulation of generalist, autonomous AI systems. To address the visible and unforeseen AI risks borne out of these systems, breakthrough R&D and resilient universal governance are required. Experts specifically pointed to realistic AI risks linked to dangerous model capabilities and their liabilities toward societal norms. The paper named the following among the dangerous capabilities of AI models (a brief sketch follows the list):

  • autonomous self-replication
  • large-scale persuasion
  • breaking into computer systems
  • developing (autonomous) weapons
  • making pandemic pathogens widely accessible
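
To make the gating idea concrete, here is a hypothetical pre-deployment check over the capabilities listed above. The threshold, scores, and function name are invented for illustration and do not reflect any real evaluation suite or regulatory requirement.

```python
# Illustrative sketch only: a hypothetical pre-deployment gate that blocks
# release when any of the dangerous capabilities named above scores past a
# threshold on internal evaluations. Thresholds and scores are invented.
DANGEROUS_CAPABILITIES = [
    "autonomous self-replication",
    "large-scale persuasion",
    "breaking into computer systems",
    "developing autonomous weapons",
    "making pandemic pathogens widely accessible",
]

def deployment_gate(eval_scores, threshold=0.1):
    """Return True only if every dangerous-capability score stays below threshold."""
    flagged = [
        c for c in DANGEROUS_CAPABILITIES
        if eval_scores.get(c, 0.0) >= threshold
    ]
    if flagged:
        print(f"Deployment blocked; flagged capabilities: {flagged}")
        return False
    return True

# Example: a model scoring high on system intrusion would be held back.
deployment_gate({"breaking into computer systems": 0.4})
```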

The US, considered the cradle of AI development, has a larger role to play than any other country in the world. Recently, U.S. Secretary of Commerce Gina Raimondo released a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI), describing the department’s approach to AI safety under President Biden’s leadership. It is a remarkable effort toward making AI leaders and organizations more accountable for identifying risks and safeguarding societal norms within legal frameworks. We should encourage the AI community to think and act jointly on AI safety regulation by bringing together the brightest minds in academia, industry, and government.

Fixing Capabilities of AI Models versus Human-led Regulations

If AI development continues at its current pace (a pace that is overwhelmingly underreported in the industry), it will become extremely difficult to control these systems’ capabilities.

To mitigate AI risks, governance mechanisms should align with liability frameworks specifically designed to keep frontier AI development and its ownership in check. AI safety should be incentivized to prevent harm and to encourage a “race to the top” on safety measures among the leading companies in the AI marketplace.

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@itechseries.com]
