Artificial Intelligence | News | Insights | AiThority

OpenAI’s Superalignment Team

What is The News About?

A member of OpenAI’s Superalignment team, which is responsible for developing methods to control and steer “superintelligent” AI systems, has revealed that the group was promised 20% of the company’s computing resources. However, the team was often unable to complete its work because requests for even a fraction of that compute were frequently denied.


Several team members resigned this week over this and other issues, among them co-lead Jan Leike, a former DeepMind researcher who worked on OpenAI’s ChatGPT, GPT-4, and InstructGPT projects.

Why Is It Important?


Leike and OpenAI co-founder Ilya Sutskever, who left the company last week, founded the Superalignment team last July. Its overarching objective was to solve the core technical challenges of controlling superintelligent AI within four years. The team included engineers and scientists from OpenAI’s former alignment division as well as from other departments across the company. Its mission was to conduct research that would help ensure the safety of models used both internally and externally, and to collaborate with other organizations in the AI industry through initiatives such as a research grant program. The Superalignment team published a corpus of safety research and channeled millions of dollars in grants to other researchers. Unfortunately, as product launches began to consume more and more of OpenAI’s leadership’s time, the team had to fight for the upfront investment that was vital to its stated goal of developing superintelligent AI for the benefit of all humanity.


1. Enhanced AI Safety: The Superalignment team’s work aimed to ensure that superintelligent AI systems are safe and beneficial for everyone.

2. Collaboration and Funding: They published important safety research and provided millions in grants to support other AI safety projects.


