Artificial Intelligence | News | Insights | AiThority

NTT and Red Hat Fuel AI Analysis at the Edge with IOWN Technologies

Joint solution enables real-time AI analysis of massive data sets while reducing power consumption and latency

As part of the Innovative Optical and Wireless Network (IOWN) initiative, NTT Corporation (NTT) and Red Hat, Inc., in collaboration with NVIDIA and Fujitsu, have jointly developed a solution to enhance and extend the potential for real-time artificial intelligence (AI) data analysis at the edge. Using technologies developed by the IOWN Global Forum and built on the foundation of Red Hat OpenShift, the industry’s leading hybrid cloud application platform powered by Kubernetes, this solution has received IOWN Global Forum Proof of Concept (PoC) recognition for its real-world viability and use cases.


As AI, sensing technology and networking innovation continue to accelerate, using AI analysis to assess and triage input at the network’s edge will be critical, especially as data sources expand almost daily. Using AI analysis on a large scale, however, can be slow and complex, and can carry higher maintenance and software-upkeep costs for onboarding new AI models and additional hardware. With edge computing capabilities emerging in more remote locations, AI analysis can be placed closer to the sensors, reducing latency and increasing bandwidth.


This solution consists of the IOWN All-Photonics Network (APN) and data pipeline acceleration technologies in IOWN Data-Centric Infrastructure (DCI). NTT’s accelerated data pipeline for AI adopts Remote Direct Memory Access (RDMA) over APN to efficiently collect and process large amounts of sensor data at the edge. Container orchestration technology from Red Hat OpenShift provides greater flexibility to operate workloads within the accelerated data pipeline across geographically distributed and remote data centers. NTT and Red Hat have successfully demonstrated that this solution can effectively reduce power consumption while delivering lower latency for real-time AI analysis at the edge.

The proof of concept evaluated a real-time AI analysis platform with Yokosuka City as the sensor installation base and Musashino City as the remote data center, both connected via APN. Even when a large number of cameras was accommodated, the latency required to aggregate sensor data for AI analysis was reduced by 60% compared to conventional AI inference workloads. Additionally, the IOWN PoC testing demonstrated that the power consumption required for AI analysis of each camera at the edge could be reduced by 40% compared with conventional technology. This real-time AI analysis platform allows the GPU to be scaled up to accommodate a larger number of cameras without the CPU becoming a bottleneck. A trial calculation assuming 1,000 cameras projects that power consumption can be reduced by a further 60%. The highlights of the proof of concept for this solution are as follows:
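The reported percentages compound when combined. The sketch below applies them to hypothetical baseline values purely for illustration; the announcement gives only the percentage reductions, so the baseline numbers here are placeholders, not measured figures.

```python
# Illustrative arithmetic only: the 60% latency reduction, 40% per-camera
# power reduction, and further 60% reduction at 1,000-camera scale are the
# figures reported for the PoC. The baselines below are hypothetical
# placeholders, since the announcement reports percentages, not absolutes.

def apply_reduction(baseline: float, reduction_pct: float) -> float:
    """Value remaining after reducing `baseline` by `reduction_pct` percent."""
    return baseline * (1.0 - reduction_pct / 100.0)

baseline_latency_ms = 100.0       # hypothetical conventional aggregation latency
baseline_power_w_per_cam = 50.0   # hypothetical per-camera power for AI analysis

latency_ms = apply_reduction(baseline_latency_ms, 60)        # 60% lower latency
power_w = apply_reduction(baseline_power_w_per_cam, 40)      # 40% lower power
power_w_at_scale = apply_reduction(power_w, 60)              # further 60% at scale

print(f"latency: {latency_ms:.1f} ms, per-camera power: {power_w:.1f} W, "
      f"per-camera power at 1,000-camera scale: {power_w_at_scale:.1f} W")
```

Note that the two power reductions multiply rather than add: a 40% reduction followed by a further 60% reduction leaves 0.6 × 0.4 = 24% of the original per-camera draw under these assumptions.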

  • Accelerated data pipeline for AI inference, provided by NTT, utilizing RDMA over APN to directly fetch large-scale sensor data from local sites to the memory in an accelerator in a remote data center, reducing the protocol-handling overheads in the conventional network. It then completes data processing of AI inference within the accelerator with less CPU-controlling overheads, improving the power efficiency in AI inference.
  • Large-scale AI data analysis in real time, powered by Red Hat OpenShift, which supports Kubernetes operators to minimize the complexity of integrating hardware-based accelerators (GPUs, DPUs, etc.), enabling improved flexibility and easier deployment across disaggregated sites, including remote data centers.
  • This PoC uses NVIDIA A100 Tensor Core GPUs and NVIDIA ConnectX-6 NICs for AI inference.
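On OpenShift, exposing NVIDIA GPUs to workloads like this is typically handled by an operator (such as the NVIDIA GPU Operator), whose device plugin advertises each GPU as an extended `nvidia.com/gpu` resource that pods can request. A minimal sketch of such a request follows; the pod name and container image are placeholders, not part of the PoC described here.

```yaml
# Minimal sketch: a pod requesting one NVIDIA GPU on OpenShift/Kubernetes.
# The nvidia.com/gpu resource name is advertised by the NVIDIA device
# plugin; the metadata name and image below are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: edge-ai-inference            # placeholder name
spec:
  containers:
  - name: inference
    image: example.com/edge-inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1            # request one GPU (e.g., an A100)
```

The operator pattern referenced in the bullet above means the cluster, not the application, handles driver installation and device discovery, which is what reduces the onboarding cost of new accelerator hardware.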


This solution helps set the stage for intelligent AI-enabled technologies that will help businesses sustainably scale. With this solution, organizations can benefit from:

  • Reduced overhead associated with collecting large amounts of data;
  • Enhanced data collection that can be shared between metropolitan areas and remote data centers for quicker AI analysis;
  • The ability to utilize locally available and potentially renewable energy, such as solar or wind;
  • Increased area management security with video cameras acting as sensor devices.

Supporting Quotes

Chris Wright, chief technology officer and senior vice president of Global Engineering at Red Hat and board director of IOWN Global Forum
“Over the last few years, we’ve worked as part of IOWN Global Forum to set the stage for AI innovation powered by open source and deliver technologies that help us make smarter choices for the future. This is important and exciting work, and these results help prove that we can build AI-enabled solutions that are sustainable and innovative for businesses across the globe. With Red Hat OpenShift, we can help NTT provide large-scale AI data analysis in real time and without limitations.”

Katsuhiko Kawazoe, senior executive vice president of NTT and chairman of IOWN Global Forum
“The NTT Group, in great collaboration with partners, is accelerating the development of IOWN to achieve a sustainable society. This IOWN PoC is an important step forward toward green computing for AI, which supports collective intelligence of AI. We are further improving IOWN’s power efficiency by applying Photonics-Electronics Convergence technologies to a computing infrastructure. We aim to embody the sustainable future of net zero emissions with IOWN.”

Kenichi Sakai, senior vice president of Fujitsu Limited, Infrastructure System Business Unit
“We have been contributing to the realization of a sustainable and smarter society by applying our server technologies, including PRIMERGY CDI (Composable Disaggregated Infrastructure), which enables disaggregated computing. These PoC results show that IOWN’s feasibility has increased toward commercialization in 2026 and that IOWN has potential for AI applications. Fujitsu enables higher performance and power efficiency with the composability of PRIMERGY CDI and continues to contribute to the realization of the IOWN computing infrastructure.”

Ronnie Vasishta, senior vice president of telecom, NVIDIA
“The demand for AI inferencing is growing, and telco edge has a pivotal role to play. NVIDIA has been collaborating with NTT and IOWN to combine the APN network with an accelerated data processing pipeline and AI, showcasing computer-vision and image-processing technology that’s both low latency and power efficient.”


