Artificial Intelligence | News | Insights | AiThority

Verge.io Unveils Shared, Virtualized GPU Computing to Cut Complexity and Cost

Virtualized GPU resources simply and affordably support AI/ML, remote desktop, and other performance-intensive workloads

Verge.io, the company with a simpler way to virtualize data centers, has added significant new features to its Verge-OS software to give users the performance of GPUs as virtualized, shared resources. This creates a cost-effective, simple and flexible way to perform GPU-based machine learning, remote desktop and other compute-intensive workloads within an agile, scalable, secure Verge-OS virtual data center.



Verge-OS abstracts compute, network, and storage from commodity servers into pools of raw resources that are simple to run and manage. These pools support feature-rich infrastructures for environments and workloads such as clustered HPC in universities; ultra-converged and hyperconverged enterprises; DevOps and test/dev; compliant medical and healthcare; remote and edge computing, including VDI; and xSPs offering hosted services such as private clouds.

Current methods for deploying GPUs systemwide are complex and expensive, especially for remote users. Rather than supplying GPUs throughout the organization, Verge.io lets users and applications with access to a virtual data center share the computing resources of a single GPU-equipped server. Users and administrators can ‘pass through’ an installed GPU to a virtual data center simply by creating a virtual machine with access to that GPU and its resources.
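For readers unfamiliar with the pass-through concept, the sketch below shows the generic Linux/KVM mechanism: the host unbinds the GPU from its normal driver, rebinds it to the VFIO stack, and hands the whole device to one VM. These are illustrative stock-Linux commands, not Verge-OS commands, and the PCI address is a hypothetical example; the commands are built as strings rather than executed, since running them requires root and real hardware.

```shell
# Illustrative only: the generic Linux/KVM mechanism behind GPU
# pass-through, not Verge-OS commands. The PCI address is hypothetical.
GPU="0000:01:00.0"

# The commands an administrator would run (built as strings, not
# executed here, because they need root and a physical GPU):
UNBIND="echo $GPU > /sys/bus/pci/devices/$GPU/driver/unbind"
OVERRIDE="echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override"
ATTACH="qemu-system-x86_64 -device vfio-pci,host=$GPU"

printf '%s\n' "$UNBIND" "$OVERRIDE" "$ATTACH"
```

With pass-through, one VM gets the entire physical GPU; the vGPU approach described next instead slices one GPU among several VMs.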

Alternatively, Verge.io can manage the virtualization of the GPU and serve up vGPUs to virtual data centers. This allows organizations to easily manage vGPUs on the same platform as all other shared resources.
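On stock Linux/KVM, the analogous vGPU mechanism is mediated devices (mdev): a vGPU-capable driver advertises profile types under sysfs, and writing a UUID to a profile's create node carves out a vGPU slice that a VM can attach. The sketch below is again illustrative only, not Verge-OS; the PCI address, profile name, and UUID are hypothetical, and the commands are built as strings rather than executed.

```shell
# Illustrative only: generic Linux mediated-device (mdev) vGPU creation,
# not Verge-OS commands. PCI address, profile, and UUID are hypothetical.
GPU="0000:01:00.0"
PROFILE="nvidia-63"   # one of the types listed under mdev_supported_types
UUID="f0f0f0f0-1234-5678-9abc-def012345678"

LIST="ls /sys/bus/pci/devices/$GPU/mdev_supported_types/"
CREATE="echo $UUID > /sys/bus/pci/devices/$GPU/mdev_supported_types/$PROFILE/create"
ATTACH="qemu-system-x86_64 -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID"

printf '%s\n' "$LIST" "$CREATE" "$ATTACH"
```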

According to Darren Pulsipher, Chief Solution Architect of Public Sector at Intel, “The market is looking for simplicity, and Verge-OS is like an ‘Easy Button’ for creating a virtual cloud that is so much faster and easier to set up than a private cloud. With Verge-OS, my customers can migrate and manage their data centers anywhere and upgrade their hardware with zero downtime.”


“The ability to deploy GPU in a virtualized, converged environment, and access that performance as needed, even remotely, radically reduces the investment in hardware while simplifying management,” said Verge.io CEO Yan Ness. “Our users are increasingly needing GPU performance, from scientific research to machine learning, so vGPU and GPU Passthrough are simple ways to share and pool GPU resources as they do with the rest of their processing capabilities.”

Verge-OS is ultra-thin software (fewer than 300,000 lines of code) that is easy to install and scale on low-cost commodity hardware and uses AI/ML to self-manage. A single license replaces separate hypervisor, networking, storage, data protection, and management tools, simplifying operations and downsizing complex technology stacks.

Secure virtual data centers based on Verge-OS include all enterprise data services, such as global deduplication, disaster recovery, continuous data protection, snapshots, long-distance sync, and auto-failover. They are ideal for creating honeypots, sandboxes, cyber ranges, air-gapped computing, and secure compliance enclaves to meet regulations such as HIPAA, CUI, SOX, NIST, and PCI. Nested multi-tenancy lets service providers, departmental enterprises, and campuses assign resources and services to groups and sub-groups.


