
Liqid Makes It Simple to Plunge into AI Workloads with ThinkTank

Liqid ThinkTank Systems Simplify CDI Adoption while Maximizing GPU Flexibility, Agility and Efficiency for Demanding Workloads

Liqid, the world’s leading software company delivering data center composability, announced general availability of Liqid ThinkTank, the first fully integrated composable disaggregated infrastructure (CDI) system, designed to address the most challenging GPU-centric workloads. Liqid ThinkTank simplifies CDI adoption for customers and partners by offering a turnkey solution for demanding AI-driven workloads. Liqid Matrix™ CDI software provides the flexibility to quickly compose precise amounts of resources into host servers and to move underutilized resources to other servers as workload needs change, enabling new levels of efficiency in the data center.
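
At a high level, composition means pairing devices from a shared pool with a host server over the fabric through software calls rather than physical reseating. The sketch below illustrates that workflow with a hypothetical REST client; the director address, endpoint paths, and payload fields are illustrative assumptions for this article, not Liqid Matrix's published API.

```python
# Illustrative sketch of a CDI composition workflow.
# NOTE: the director address, endpoints, and payload fields below are
# hypothetical placeholders, not the actual Liqid Matrix API.
import requests

CDI_DIRECTOR = "http://cdi-director.example.local/api"  # assumed address


def compose_host(host_id: str, gpu_count: int, nvme_tb: int) -> dict:
    """Ask the fabric director to attach pooled GPUs and NVMe to a host."""
    payload = {"host": host_id, "gpus": gpu_count, "nvme_tb": nvme_tb}
    resp = requests.post(f"{CDI_DIRECTOR}/compose", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()


def release_host(host_id: str) -> None:
    """Return the host's devices to the shared pool for other workloads."""
    resp = requests.post(f"{CDI_DIRECTOR}/release", json={"host": host_id}, timeout=30)
    resp.raise_for_status()


if __name__ == "__main__":
    # Example: give a training node 8 GPUs and 20 TB of NVMe, then free them later.
    compose_host("train-node-01", gpu_count=8, nvme_tb=20)
    # ... run the training job ...
    release_host("train-node-01")
```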



“GPUs are increasingly ubiquitous, yet incredibly inefficient in deployment, as well as difficult to balance against NVMe storage and next-gen accelerators like FPGA and storage-class memory, placing true AI+ML operations out of reach for many organizations, as even buying time in the cloud can be prohibitively expensive for these workloads,” said Wendell Wilson, Founder & Lead Architect at Level1Techs. “Liqid ThinkTank delivers a simple, footprint-friendly solution to these problems. The company’s software-defined CDI platform enables IT users to manage bare-metal hardware resources through software, akin to how VMware manages higher order applications up the stack. The breakthrough democratizes AI for organizations seeking to reap the benefits of these next-generation applications without incurring the costs.”

Think outside the box with Liqid ThinkTank

Liqid ThinkTank enables customers to meet the growing demand for GPU computing, quickly adapt to changing infrastructure performance requirements, and drive new levels of data center efficiency. With Liqid ThinkTank, organizations can:

  • Accelerate time-to-results for AI-driven GPU workloads at scale with a unified CDI solution;
  • Maximize GPU compute capabilities in tandem with storage and accelerators for new levels of efficiency, while extracting maximum value from hardware investments;
  • Use fewer resources to do more work, reducing physical and carbon footprints with superior resource utilization.


Unlike traditional GPU servers, which are slow to deploy and scale, constrain GPU performance, and utilize resources poorly, Liqid ThinkTank uses software to assign as many GPUs to a server as a workload needs, irrespective of whether the devices physically fit inside the server chassis. The software-defined simplicity of Liqid ThinkTank accelerates time-to-results by allowing IT teams to quickly deploy and scale GPUs and other resources for the most challenging workloads across the AI workflow, from data preparation and analytics to training and inference.
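
The efficiency argument comes from treating GPUs as a fabric-wide pool rather than per-server fixtures: devices one server is not using can be reassigned to another that needs them. The minimal Python model below illustrates that accounting; the stage names and GPU counts are illustrative assumptions, not Liqid's actual scheduler.

```python
# Minimal model of pooled GPU accounting across AI workflow stages.
# Stage names and GPU counts are illustrative, not Liqid's scheduler.
class GpuPool:
    def __init__(self, total_gpus: int):
        self.free = total_gpus
        self.assigned: dict[str, int] = {}

    def compose(self, host: str, gpus: int) -> None:
        """Move GPUs from the shared pool to a host."""
        if gpus > self.free:
            raise RuntimeError(f"only {self.free} GPUs free in the pool")
        self.free -= gpus
        self.assigned[host] = self.assigned.get(host, 0) + gpus

    def release(self, host: str) -> None:
        """Return a host's GPUs to the pool once its stage completes."""
        self.free += self.assigned.pop(host, 0)


pool = GpuPool(total_gpus=16)          # e.g. a large ThinkTank configuration
pool.compose("prep-node", gpus=2)      # light GPU needs for data preparation
pool.compose("train-node", gpus=12)    # heavy allocation for model training
pool.release("train-node")             # training done: GPUs return to the pool
pool.compose("infer-node", gpus=4)     # re-compose some GPUs for inference
print(pool.free, pool.assigned)        # 10 {'prep-node': 2, 'infer-node': 4}
```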

Liqid ThinkTank Delivers an Integrated CDI System that Works Smarter to Solve Hard Problems

Liqid ThinkTank ships in rack-scale, small, medium, and large systems. A typical large system consists of the following:

  • Liqid Matrix CDI software (software licenses and director)
  • Up to 16x GPUs from Nvidia or AMD
  • Up to 60 TB of NVMe storage
  • Up to 4x PCIe expansion chassis
  • Liqid host bus adapters (HBAs)
  • Liqid PCIe fabric switching
  • ioDirect Peer-to-Peer technology for GPU-to-GPU and GPU-to-Storage
  • Ubuntu/Linux AI software stack with Liqid CDI enhancements

“With ThinkTank, Liqid simplifies CDI by delivering an easy-to-deploy system that tackles the most challenging AI workloads quickly and efficiently,” said Ben Bolles, Executive Director, Product, Liqid. “Liqid Matrix software ensures valuable GPU and other resources are not locked in the box, but rather shared across the fabric to perfectly match the needs of each workload. The Liqid ThinkTank delivers a turnkey, software-defined solution that will accelerate the adoption of composability for organizations of all sizes that require the type of performance needed to achieve fastest time-to-value, efficiently.”


