Mobilint Introduces MLA100 MXM, an 80 TOPS NPU Module for High-Efficiency Embedded AI PCs
Mobilint, a leading South Korean fabless AI chipmaker, today announced the official launch of the MLA100 MXM, a high-performance embedded NPU module designed for ruggedized AI boxes and on-device AI systems. Delivering 80 TOPS (tera operations per second) within a 25 W power envelope, the MLA100 MXM brings powerful AI capabilities to the edge. The module’s first official live demonstration will take place at the Embedded Vision Summit 2025 during its exhibition days, May 21–22, at the Santa Clara Convention Center in California.
Built on the standardized Mobile PCI Express Module (MXM) interface, the MLA100 MXM targets specialized applications, such as embedded AI systems used in robotics and industrial automation, where space, power, and thermal efficiency are critical.
The MLA100 MXM NPU module is powered by ARIES, Mobilint’s flagship AI accelerator chip. Featuring high compute density with eight cores and broad AI model compatibility, ARIES was launched as a direct alternative to the traditional GPUs used in on-premises servers. It supports a wide range of models, including vision models and Transformer architectures such as large language models (LLMs) and vision-language models (VLMs), allowing efficient deployment of complex AI workloads at the edge.
Mobilint’s AI development ecosystem, centered on its proprietary software development kit (SDK), enables seamless integration and optimization of AI models on its hardware, including the MLA100 MXM. The SDK includes an advanced model compiler, an optimized software stack, and developer tools, which help businesses accelerate AI adoption and reduce time to market.
Industry partners, including a leading Korean edge AI solutions provider and major conglomerates, are conducting proof-of-concept tests, integrating Mobilint’s MXM NPU modules into embedded systems and products.
“Our goal with MLA100 MXM is to bring server-class inference to robotics and other edge devices,” said Dongjoo Shin, CEO and CTO of Mobilint. “Real-world applications require hardware, software, and algorithms to be co-optimized for efficient operation. Our advanced software and algorithmic stack will certainly help embedded systems developers get the most out of their AI models.”