Microsoft To Reduce Its Reliance On Nvidia
Microsoft’s first AI processor
Codenamed “Athena,” Microsoft’s first AI processor is expected to debut at the company’s annual developers’ conference. The chip, designed specifically to run AI models, could allow the company to use fewer Nvidia-designed AI processors, which have been in short supply amid surging demand.
OpenAI, meanwhile, is reportedly weighing the development of its own AI chip. Google and Amazon already offer their own chips for both training and inference.
AI chipmaking industry
As artificial intelligence (AI) takes center stage, tech behemoths are trying to cut back on their reliance on chipmakers and increase their return on investment. One report said that Microsoft-backed OpenAI, the firm behind ChatGPT, is trying to enter the AI chipmaking industry; another claimed that Microsoft plans to introduce its first chip built to process AI models at the company’s annual developers’ conference. The chip, “a culmination of years of work,” as The Information reported, “could help Microsoft lessen its reliance on Nvidia-designed AI chips, which have been in short supply as demand for them has boomed.”
Huge generative AI models
Nvidia GPUs currently power Microsoft’s LLMs for cloud customers such as OpenAI and Intuit, as well as the AI features in Microsoft’s productivity apps. If the new chip proves successful, Microsoft may be able to minimize its reliance on Nvidia’s H100 GPU, which is reportedly in short supply due to rising demand.
The news arrives shortly after a report that OpenAI is contemplating developing its own AI processors. The company has reportedly been discussing AI chip plans for at least a year, and CEO Sam Altman has made the purchase of additional AI chips a priority.
Google uses its Tensor Processing Units (TPUs) to train large generative AI models such as PaLM 2 and Imagen, while Amazon offers custom-built hardware for training and inference, called Trainium and Inferentia, respectively.