MemVerge and Penguin Computing Introduce World’s First Big Memory Computing Solutions
Configurations with Intel® Optane™ Persistent Memory and Memory Machine™ software from MemVerge are tested and supported with HPC, AI/ML, and Data Center Apps.
MemVerge, the pioneer of Big Memory software, and Penguin Computing, a subsidiary of SMART Global Holdings, Inc. and a leader in HPC, AI, and enterprise data center solutions, have announced a partnership to deliver the world’s first Big Memory Computing solutions. Big Memory Computing virtualizes DRAM and persistent memory into massive pools of software-defined memory, meeting the exploding demand from real-time applications for memory infrastructure that scales cost-effectively to many terabytes and remains highly available.
“The move to in-memory computing is creating significant demand for MemVerge technology across a broad spectrum of our technical disciplines and vertical markets,” said Kevin Tubbs, Senior VP, Strategic Solutions Group at Penguin Computing. “We specialize in designing and packaging critical emerging technologies for our AI and HPC customers in highly tuned, well-supported reference architectures.”
“Big Memory will change data center infrastructure everywhere,” said Charles Fan, Co-Founder and CEO at MemVerge. “MemVerge is committed to working with select strategic partners to lead this next wave of innovation. The combination of Penguin Computing’s expertise in HPC, AI/ML, and data center applications, Intel’s innovative Optane persistent memory, and our Memory Machine software enables our customers to enjoy a new software-defined memory service without compromises.”
By 2024, almost a quarter of all data created will be real-time data, and two-thirds of Global 2000 corporations will have deployed at least one real-time application considered mission-critical, according to IDC. Big Memory solutions from Penguin Computing let applications harness the power of a memory-centric architecture by providing access to persistent memory (PMEM) without changes to the applications themselves. The combined memory pool runs as fast as, or faster than, DRAM alone, and the memory infrastructure becomes highly available.
Penguin Computing Big Memory Solutions
Without Big Memory software, PMEM runs somewhat slower than DRAM. With it, DRAM and PMEM work together as fast as, or faster than, DRAM alone. Memory Machine software enables IT organizations to implement server refreshes with memory that costs significantly less, without sacrificing performance.
Once DRAM and PMEM are virtualized, Big Memory Computing solutions make PMEM appear as DRAM, allowing any application to plug and play with the pool of memory. With a single Memory Machine software virtualization layer, an entire data center full of applications can have quick and easy access to a lower-cost pool of DRAM and PMEM.
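The tiering idea described above can be sketched in a few lines. This is a hypothetical illustration, not the Memory Machine API: allocations are served from a fast DRAM tier first and spill over transparently to a larger PMEM tier, so the application sees one uniform pool and never needs to know which tier backs a given allocation.

```python
# Hypothetical sketch of software-defined memory tiering (illustrative
# names only, not MemVerge's actual implementation).
class MemoryTier:
    """One physical tier with a fixed capacity, tracked in GB."""

    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def try_alloc(self, size_gb: int) -> bool:
        # Accept the allocation only if it fits in the remaining capacity.
        if self.used_gb + size_gb <= self.capacity_gb:
            self.used_gb += size_gb
            return True
        return False


class TieredMemoryPool:
    """One logical pool built from a DRAM tier plus a PMEM tier."""

    def __init__(self, dram_gb: int, pmem_gb: int):
        # Ordered fastest-first: DRAM is tried before PMEM.
        self.tiers = [MemoryTier("DRAM", dram_gb), MemoryTier("PMEM", pmem_gb)]

    def alloc(self, size_gb: int) -> str:
        # The caller just asks for memory; tier placement is hidden.
        for tier in self.tiers:
            if tier.try_alloc(size_gb):
                return tier.name
        raise MemoryError("software-defined memory pool exhausted")


pool = TieredMemoryPool(dram_gb=128, pmem_gb=512)
print(pool.alloc(100))  # served from DRAM
print(pool.alloc(100))  # DRAM is now full, so this spills to PMEM
```

The point of the sketch is the `alloc` interface: the application calls one function and never branches on memory type, which is what "plug and play with the pool of memory" implies.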
During the last 50 years of DRAM history, memory has served as a super-fast data storage tier, but a fragile one, lacking the data protection services that are common in SSD and disk systems. Big Memory solutions transform memory into a high-availability tier with the world’s first memory data services, based on ZeroIO snapshots.
Traditional snapshots of large in-memory databases to storage are disruptive to the business and therefore used sparingly: hundreds of gigabytes take minutes to snapshot and hours to recover. Memory Machine ZeroIO snapshots from DRAM to PMEM happen at memory speed, so even a terabyte-scale in-memory database can be snapshotted and recovered in 1-2 seconds. Because ZeroIO snapshots are so fast, they are non-disruptive, which encourages frequent snapshots and reduces recovery time.
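The mechanism behind memory-speed snapshots can be illustrated with a toy example. This is an assumption-laden sketch, not the ZeroIO implementation: the "snapshot" is simply a copy of live state into a second memory region (standing in for PMEM), with no file I/O on the critical path, which is why capture and restore complete in memory-copy time rather than storage time.

```python
# Toy illustration of a memory-to-memory snapshot (names are illustrative,
# not the Memory Machine API).

def snapshot_in_memory(state: bytearray) -> bytes:
    # Capture: one memory copy, DRAM -> PMEM in the real system.
    return bytes(state)

def restore(snapshot: bytes) -> bytearray:
    # Recover: another memory copy back into the live region.
    return bytearray(snapshot)

state = bytearray(b"in-memory database contents")
snap = snapshot_in_memory(state)      # capture at memory speed
state[0:9] = b"corrupted"             # simulate a crash or bad write
state = restore(snap)                 # recover with a single memory copy
print(state == bytearray(b"in-memory database contents"))  # True
```

Contrast this with a storage snapshot, where the same capture and restore would each traverse the I/O stack and a disk or SSD, turning a memory copy into minutes or hours of I/O for large datasets.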
This unprecedented speed and stability opens myriad opportunities for IT organizations supporting HPC and AI/ML applications. They can run real-time analytics that cost-effectively analyze hundreds of terabytes of data, accelerating processing and analysis. A Big Memory server deployment lets AI/ML neural networks analyze datasets of hundreds of terabytes for faster, more accurate fraud detection. It can also boost the productivity of video applications by allowing them to recover their massive in-memory footprints in seconds after a system crash.