Artificial Intelligence | News | Insights | AiThority

Groq Raises $640 Million to Meet Soaring Demand for Fast AI Inference

Groq, a leader in fast AI inference, has secured a $640M Series D round at a valuation of $2.8B, with the round led by BlackRock Private Equity Partners.

Groq to Scale Capacity, Add Exceptional Talent, and Accelerate the Next Gen LPU™

Groq, a leader in fast AI inference, has secured a $640M Series D round at a valuation of $2.8B. The round was led by funds and accounts managed by BlackRock Private Equity Partners with participation from both existing and new investors including Neuberger Berman, Type One Ventures, and strategic investors including Cisco Investments, Global Brain’s KDDI Open Innovation Fund III, and Samsung Catalyst Fund. The unique, vertically integrated Groq AI inference platform has generated skyrocketing demand from developers seeking exceptional speed.


“The market for AI compute is meaningful and Groq’s vertically integrated solution is well positioned to meet this opportunity. We look forward to supporting Groq as they scale to meet demand and accelerate their innovation further,” said Samir Menon, Managing Director, BlackRock Private Equity Partners.

“Samsung Catalyst Fund is excited to support Groq,” said Marco Chisari, Head of Samsung Semiconductor Innovation Center and EVP of Samsung Electronics. “We are highly impressed by Groq’s disruptive compute architecture and their software-first approach. Groq’s record-breaking speed and near-instant Generative AI inference performance leads the market.”

“You can’t power AI without inference compute,” said Jonathan Ross, CEO and Founder of Groq. “We intend to make the resources available so that anyone can create cutting-edge AI products, not just the largest tech companies. This funding will enable us to deploy more than 100,000 additional LPUs into GroqCloud. Training AI models is solved, now it’s time to deploy these models so the world can use them. Having secured twice the funding sought, we now plan to significantly expand our talent density. We’re the team enabling hundreds of thousands of developers to build on open models and – we’re hiring.”

Today, Groq also announced that Stuart Pann, formerly a senior executive at HP and Intel, has joined its leadership team as Chief Operating Officer.

“I am delighted to be at Groq at this pivotal moment. We have the technology, the talent, and the market position to rapidly scale our capacity and deliver inference deployment economics for developers as well as for Groq,” said Stuart Pann, Chief Operating Officer at Groq.


Groq also gains the world-class expertise of its newest technical advisor, Yann LeCun, VP & Chief AI Scientist at Meta.

Developers Flock to Groq


Groq has quickly grown to over 360,000 developers building on GroqCloud™, creating AI applications on openly available models such as Llama 3.1 from Meta, Whisper Large V3 from OpenAI, Gemma from Google, and Mixtral from Mistral. Groq will use the funding to scale the capacity of its tokens-as-a-service (TaaS) offering and add new models and features to GroqCloud.
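To illustrate the tokens-as-a-service model described above: GroqCloud exposes an OpenAI-compatible HTTP API, so a developer's chat-completion request for an open model boils down to a small JSON payload. This is a hedged sketch for illustration only; the endpoint URL and model identifier below are assumptions, not details taken from this announcement.

```python
import json

# Illustrative sketch only: the endpoint path and model name are
# assumptions, not confirmed by the article above.
GROQ_ENDPOINT = "https://api.groq.com/openai/v1/chat/completions"  # assumed

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Compose the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical model ID for an openly available Llama 3.1 variant.
payload = build_chat_request(
    "llama-3.1-8b-instant",
    "Summarize LPU inference in one sentence.",
)
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to the endpoint with an API key; billing in a tokens-as-a-service model is then metered on the input and output tokens of each such request.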

Mark Zuckerberg, CEO and Founder of Meta, recently shared in his letter, “Open Source AI Is the Path Forward”: “Innovators like Groq have built low-latency, low-cost inference serving for all the new models.”

Scaling Capacity

As Gen AI applications move from training to deployment, developers and enterprises require an inference strategy that meets the user and market need for speed. The tsunami of developers flocking to Groq is creating a wide range of new and creative AI applications and models, fueled by Groq’s instant speed.

To meet its developer and enterprise demand, Groq will deploy over 108,000 LPUs manufactured by GlobalFoundries by the end of Q1 2025, the largest AI inference compute deployment of any non-hyperscaler.

Mohsen Moazami, President of International at Groq and former leader of Emerging Markets at Cisco, is leading commercial efforts with enterprises and partners, including Aramco Digital and Earth Wind & Power, to build out AI compute centers globally. This will ensure developers have access to Groq technology regardless of their location.

“Aramco Digital is partnering with Groq to build one of the largest AI Inference-as-a-Service compute infrastructures in the MENA region,” said Tareq Amin, Chief Executive Officer, Aramco Digital. “Our close collaboration with Groq is transformational for both domestic and global AI demand.”

Accelerating Innovation

Groq LPU™ AI inference technology is architected from the ground up with a software-first design to meet the unique characteristics and needs of AI. This approach has given Groq an advantage in bringing new models to developers quickly, at instant speed. The investment will enable Groq to accelerate the next two generations of LPU.

Morgan Stanley & Co. LLC served as exclusive Placement Agent to Groq on the transaction.


