Napatech Accelerates Infrastructure Services Processing for Data Center Applications
Napatech, the leading provider of programmable Smart Network Interface Cards (SmartNICs) used for Data Processing Unit (DPU) and Infrastructure Processing Unit (IPU) services in telecom, cloud, enterprise, cybersecurity and financial applications worldwide, today announced a set of new SmartNIC capabilities that enable standard, unmodified applications in edge and core data centers to benefit from offloaded and accelerated compute and networking functions.
As enterprises, communications service providers and cloud data center operators deploy virtualized applications and services in edge and core data centers, they increasingly leverage workload-specific coprocessors to offload functions such as Artificial Intelligence (AI), Machine Learning (ML), storage, networking and infrastructure services from general-purpose server CPUs. This architectural approach not only maximizes the availability of server compute resources for running applications and services, but also improves system-level performance and energy efficiency by running the offloaded workloads on devices optimized for those specific tasks, such as programmable SmartNICs, also known as DPUs or IPUs.
Thanks to this offload trend, as well as an acceleration in global data center deployments, programmable SmartNICs represent the fastest-growing segment of the NIC market, with a Total Available Market (TAM) forecast to reach $3.8B per year by 2026, according to Omdia.
To maximize the portability of their software and to accelerate their time-to-market, developers of cloud applications and services incorporate industry-standard Application Programming Interfaces (APIs) and drivers within their software. Data center operators therefore need to be able to select offload solutions that are compatible with the relevant standards, to avoid having to create custom, vendor-specific versions of their software. The latest upgrade to Napatech’s Link-Virtualization software, release 4.4, addresses this challenge by incorporating networking and virtual switching features that implement full support for the relevant open standards, while delivering best-in-class performance and functionality.
Specifically, Link-Virtualization now supports a fully hardware-offloaded implementation of the Virtio 1.1 Input/Output (I/O) virtualization framework for Linux, including the standard kernel NIC interface, which means that guest Virtual Machines (VMs) do not require a custom or proprietary driver. Link-Virtualization also supports the open-standard Data Plane Development Kit (DPDK) fast path running in guest VMs to maximize the performance of functions such as Open vSwitch (OVS). Link-Virtualization is also fully compatible with OpenStack, allowing seamless integration into cloud data center environments worldwide.
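To illustrate what this standards compliance means in practice, the following is a minimal, hypothetical sketch (not Napatech code) of a DPDK application running inside a guest VM. Because the offloaded Virtio 1.1 device appears to the guest as a standard virtio-net device, the stock virtio poll-mode driver and the generic ethdev API are sufficient, and no vendor-specific driver is assumed; error handling is abbreviated.

    /* Minimal sketch: bring up the virtio-net ports visible inside a guest VM
     * using only the standard DPDK ethdev API. Illustrative, not Napatech code. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* One mbuf pool shared by the RX queues in this example. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "rx_pool", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

        struct rte_eth_conf port_conf = {0};
        uint16_t port_id;

        /* Every port enumerated here is a plain virtio-net device to the guest. */
        RTE_ETH_FOREACH_DEV(port_id) {
            if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
                continue;
            rte_eth_rx_queue_setup(port_id, 0, 512,
                                   rte_eth_dev_socket_id(port_id), NULL, pool);
            rte_eth_tx_queue_setup(port_id, 0, 512,
                                   rte_eth_dev_socket_id(port_id), NULL);
            rte_eth_dev_start(port_id);
        }
        return 0;
    }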
Other new features incorporated in Link-Virtualization include IPv6 VxLAN tunneling, RPM-based setup for OpenStack Packstack, configurable Maximum Transmission Unit (MTU), live migration on packed ring, port-based Quality of Service (QoS) egress policing and more. The software runs on Napatech’s portfolio of SmartNICs, powered by AMD (Xilinx) and Intel FPGAs, which spans 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, 50 Gbps and 100 Gbps port speeds.
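As a small, hypothetical example of one of these features in use, the sketch below shows how a guest DPDK application could exercise a configurable MTU through the standard ethdev calls. The port number and the 9000-byte jumbo value are arbitrary, and whether a given MTU is accepted depends on the underlying virtio/vhost and SmartNIC configuration.

    /* Illustrative only: request a jumbo MTU on a port via the standard
     * DPDK ethdev API, then read it back to confirm. */
    #include <stdio.h>
    #include <rte_ethdev.h>

    static int set_jumbo_mtu(uint16_t port_id, uint16_t mtu)
    {
        int ret = rte_eth_dev_set_mtu(port_id, mtu);   /* e.g. mtu = 9000 */
        if (ret != 0) {
            printf("port %u: setting MTU %u failed (%d)\n", port_id, mtu, ret);
            return ret;
        }

        uint16_t current = 0;
        rte_eth_dev_get_mtu(port_id, &current);
        printf("port %u: MTU is now %u bytes\n", port_id, current);
        return 0;
    }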
As one example of the performance delivered by Link-Virtualization, the complete offload of the OVS data path onto the SmartNIC means that only a single host CPU core is required to run the OVS control plane, while delivering industry-leading throughput of 55 million packets per second for Port-to-VM-to-Port (PVP) traffic and 130 million packets per second for Port-to-Port (PTP) traffic. Reclaiming the host CPU cores previously required to run OVS and making them available to run applications and services significantly reduces the number of servers required to support a given workload or user base, which in turn drives significant reductions in overall data center CAPEX and OPEX as well as lower system-level power consumption and improved energy efficiency for the edge or cloud data center. To aid in the estimation of cost and energy savings for specific use cases, Napatech provides an online ROI calculator, which data center operators can use to analyze their projected savings.
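As a back-of-the-envelope illustration of the kind of estimate such an ROI calculation performs, the sketch below works through the server-count savings from reclaiming OVS cores. Every input value is an assumption chosen for the example, not a Napatech figure.

    /* Illustrative server-consolidation estimate. All inputs are assumptions;
     * substitute real deployment numbers. With full OVS data-path offload,
     * one host core is assumed to remain for the OVS control plane. */
    #include <stdio.h>

    int main(void)
    {
        const int cores_per_server          = 32;   /* assumption */
        const int ovs_cores_without_offload = 8;    /* assumption */
        const int ovs_cores_with_offload    = 1;    /* control plane only */
        const int servers_today             = 100;  /* assumption */

        int app_cores_before = cores_per_server - ovs_cores_without_offload; /* 24 */
        int app_cores_after  = cores_per_server - ovs_cores_with_offload;    /* 31 */

        /* Total application capacity stays constant, so fewer servers are
         * needed once each server contributes more application cores. */
        double servers_needed =
            (double)servers_today * app_cores_before / app_cores_after;

        printf("servers needed with offload: %.1f (was %d)\n",
               servers_needed, servers_today);
        printf("approximate server reduction: %.1f%%\n",
               100.0 * (1.0 - servers_needed / servers_today));
        return 0;
    }

Under these assumed inputs the estimate works out to roughly a 23 percent reduction in servers; the online calculator performs this kind of analysis with deployment-specific numbers.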
“Napatech’s Link-Virtualization software enables data center operators to optimize the performance of their networking infrastructure in a completely standards-compatible environment, which maximizes their flexibility in selecting applications,” said Napatech CMO Jarrod J.S. Siket. “Besides full support for standard APIs, the solution also incorporates critical operational features such as Receive Side Scaling (RSS) for efficiently distributing network traffic to multiple VMs and Virtual Data Path Acceleration (vDPA), which enables the live migration of running workloads to and from any host, whether or not a SmartNIC is present.”
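For readers unfamiliar with RSS, the generic sketch below shows how receive-side scaling is requested through the standard DPDK ethdev API so that incoming flows are hashed across several receive queues. It illustrates the general mechanism only; the VM-level traffic distribution described above is performed by the SmartNIC itself, and the queue count and hash types here are arbitrary.

    /* Generic RSS illustration (not Napatech-specific): hash IP/TCP/UDP flows
     * across nb_queues receive queues using the device's default hash key. */
    #include <rte_ethdev.h>

    static int enable_rss(uint16_t port_id, uint16_t nb_queues)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_key = NULL,  /* keep the device's default key */
                    .rss_hf  = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP,
                },
            },
        };
        /* Spread incoming flows across nb_queues RX queues; mirror for TX. */
        return rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
    }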