
ModelNova™ Fusion Studio Expands Arm Ecosystem Support, Delivering End-to-End Edge AI Development for Arm-Based Silicon

Latest release introduces native ExecuTorch support for Arm Ethos-U85 and Ethos-U55 NPUs, Arm SDS Framework integration, Arm Keil MDK toolchain support, and a complete in-tool MLOps workflow — available at modelnova.com.

Embedded World 2026 — embedUR systems, a Silicon Valley–based embedded software and Edge AI engineering company, today announced a significant expansion of Arm® ecosystem support within ModelNova™ Fusion Studio, its desktop application for end-to-end Edge AI development. The announcement, made at Embedded World 2026 (Hall 4, Booth 600), introduces Fusion Studio’s deep integration with Arm’s most widely adopted Edge AI technologies.

Built for scalable Edge AI development

Edge AI development spans a diverse landscape of silicon platforms, software frameworks and toolchains. Developers building on these platforms have historically faced a fragmented workflow — assembling separate model conversion utilities, platform-specific compilers, quantization scripts, and deployment tools before they can bring an AI workload to production.

Fusion Studio is designed to address this directly. The platform integrates support for Arm Ethos™ NPU architectures, toolchains, and frameworks throughout the development lifecycle — from model training and optimization through firmware build and on-device deployment.



Arm-native capabilities in Fusion Studio

The latest release of Fusion Studio introduces several capabilities purpose-built for the Arm Edge AI ecosystem:

  • ExecuTorch Support for Arm Ethos-U85 and Ethos-U55: Native support for ExecuTorch-optimized models targeting Arm’s Ethos-U85 and Ethos-U55 NPUs, compiled through the Vela toolchain. Developers can train, quantize, and deploy NPU-optimized models directly from the desktop application without managing command-line conversion tools or framework dependencies.
  • Ensemble Series Platform Integration: Full agent and model integration with the Alif Semiconductor Ensemble Series development platforms and sensors. Developers can deploy directly to Ensemble Development Kits from within Fusion Studio, with live inference playback and real-time camera capture on target hardware.
  • Arm SDS Framework Integration: Integration with Arm’s Synchronous Data Stream (SDS) Framework enables dataset capture, playback, and inference workflows directly from Arm-based hardware targets — streamlining the data pipeline from sensor to trained model.
  • Arm Keil® MDK Integration: Support for Arm’s Keil MDK toolchain, including Fixed Virtual Platform (FVP) simulation and direct Ensemble DevKit deployment, enabling developers to build, test, and iterate without leaving the Fusion Studio environment.
  • In-Tool MLOps End-to-End Flow: A complete MLOps pipeline — from dataset capture and annotation, through model training with ExecuTorch and integrated Arm Kleidi™AI libraries, to firmware build and direct deployment on Arm-based targets — all executed locally with no cloud compute costs.
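The in-tool flow above replaces a chain of separate utilities a developer would otherwise wire together by hand. As a rough illustration only — this is not the Fusion Studio API, and the stage names below are placeholders standing in for the tools the release names (dataset capture/annotation, ExecuTorch training and quantization, Vela compilation, on-device deployment) — the end-to-end pipeline can be sketched as a single local sequence of stages:

```python
# Illustrative sketch only -- not the Fusion Studio API. Stage names are
# placeholders for the tools named in the release: capture/annotation,
# ExecuTorch training and quantization, Vela compilation, and deployment.
from dataclasses import dataclass, field

@dataclass
class PipelineRun:
    model: str
    completed: list = field(default_factory=list)

# Ordered stages of the end-to-end flow the tool automates.
STAGES = ("capture", "annotate", "train", "quantize", "compile_vela", "deploy")

def run_pipeline(model: str) -> PipelineRun:
    run = PipelineRun(model)
    for stage in STAGES:
        # A real flow would invoke the corresponding tool at each step;
        # everything executes locally, with no cloud compute involved.
        run.completed.append(stage)
    return run

run = run_pipeline("keyword_spotting")
print(run.completed)
```

The point of the sketch is the ordering: data work precedes training, quantization precedes NPU compilation, and deployment is the final step — the same sequence the bullet list describes, collapsed into one local workflow.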

Expanding on Fusion Studio

The new Arm-native capabilities build on a platform that has been available in beta since 2025. Fusion Studio currently offers a library of over 160 pre-trained and optimized AI models spanning computer vision, audio, and text generation, with 21 industry-specific starter packs designed to accelerate proof-of-concept development. The platform supports deployment to Raspberry Pi 4 and 5 with integrated camera support and real-time inference, and runs on PC (CPU/CUDA) and Apple Silicon macOS host machines.

Recent beta releases introduced LLM-powered AI Assistance that provides contextual guidance across model selection, dataset evaluation, training configuration, platform optimization, and troubleshooting — along with training checkpoint and continuation support for interrupted workflows.


