Avesha and SUSE Launch Joint AI Blueprint for Enterprises, Combining EGS and SUSE AI for Complete GPU Orchestration and Self-Service Simplicity
Collaboration delivers powerful, yet easy-to-use AI infrastructure featuring a unified self-service portal, intelligent GPU orchestration, and secure workload governance for enterprises
Avesha, a leader in dynamic AI infrastructure orchestration, and SUSE, a global leader in secure open source solutions, launched a joint AI infrastructure blueprint that combines Avesha’s Elastic GPU Service (EGS) with SUSE AI. This integrated solution provides enterprises with a production-grade AI stack that is both powerful and intuitive, enabling scalable, self-service AI across teams and projects.
The blueprint lets enterprises deploy, manage, and monitor AI workloads across hybrid cloud environments with zero friction. It includes a modern self-service portal, dynamic GPU resource allocation, and comprehensive workload observability—delivering AI infrastructure that is as easy to use as it is powerful.
“Avesha EGS was built to simplify the most complex part of AI infrastructure: GPU orchestration,” said Raj Nair, CEO of Avesha. “Our partnership with SUSE lets us leverage SUSE AI to deliver a game-changing experience for enterprise users. This partnership gives our joint customers complete control of their workloads through beautiful UI, powerful automation, and enterprise-grade security.”
Blueprint Overview: The AI Stack for Modern Enterprises
Avesha’s Elastic GPU Service (EGS)
- Dynamic GPU orchestration across clusters and clouds
- Automatic reallocation of unused GPU capacity
- Elastic bursting for rapid access to cloud GPUs from on-prem environments
- Preemption and priority-aware scheduling for mission-critical workloads
- Unified observability for usage, cost, and performance
- Project/team isolation and governance for GPU initiatives
SUSE AI
- Built on SUSE Rancher Prime for GPU-aware Kubernetes management
- GenAI and MLOps integrations (e.g., Ollama, MLflow, PyTorch)
- Full-stack security with SUSE Security runtime protection
- Impactful insights into AI workloads with AI Observability
- GitOps-driven deployment pipelines
- Enterprise-ready, hardened, and FIPS-compliant
Together, Avesha and SUSE deliver true self-service AI—empowering data scientists, ML engineers, and platform teams to collaborate and launch GPU-powered projects with ease.
Solving Enterprise Challenges in AI Infrastructure
Enterprises need to scale, secure, and govern AI—without runaway costs or complexity. The Avesha–SUSE blueprint addresses these needs by:
- Eliminating underutilized GPU resources through real-time orchestration
- Enabling project- and team-level isolation with precise resource controls
- Providing a no-code self-service interface to spin up GPU workloads
- Simplifying AI model deployment across on-prem and cloud environments
- Securing every layer with zero-trust container runtime protection
“SUSE AI gives enterprises the choice to use the right tools to innovate with confidence,” said Abhinav Puri, VP and GM of Portfolio Solutions and Services, SUSE. “Our collaboration with Avesha brings together security, scalability, and simplicity—making enterprise-grade AI infrastructure truly accessible to every team.”
Ready for Deployment
The Avesha + SUSE AI blueprint is available immediately through both companies and their partner ecosystems. Target industries include finance, healthcare, manufacturing, government, and telco, where GPU-intensive AI workloads and robust governance are mission-critical.