IBM Unveils On-Chip Accelerated Artificial Intelligence Processor
New chip design unlocks deep learning inferencing on high-value transactions and is designed to greatly improve the ability to intercept fraud, among other use cases
At the annual Hot Chips conference, IBM unveiled details of the upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads to help address fraud in real time. Telum is IBM’s first processor to contain on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, this new on-chip hardware acceleration is designed to help clients achieve business insights at scale across banking, finance, trading, and insurance applications, as well as customer interactions. A Telum-based system is planned for the first half of 2022.
According to recent Morning Consult research commissioned by IBM, 90% of respondents said that being able to build and run AI projects wherever their data resides is important. IBM Telum is designed to enable applications to run inference efficiently where the data resides, helping to overcome traditional enterprise AI approaches that tend to require significant memory and data-movement capacity to handle inferencing. Because the Telum accelerator sits in close proximity to mission-critical data and applications, enterprises can conduct high-volume inferencing for real-time, latency-sensitive transactions without invoking off-platform AI solutions, which can impact performance. Clients can also build and train AI models off-platform, then deploy them and run inference on a Telum-enabled IBM system for analysis.
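As a purely illustrative sketch of that train-off-platform, infer-on-platform pattern, the snippet below scores a single transaction with a locally deployed model using ONNX Runtime. The model file, feature layout, and decision threshold are hypothetical, and the snippet does not show any Telum-specific tooling or the on-chip acceleration path itself.

```python
# Illustrative sketch only: score a transaction with a model deployed next to
# the transaction-processing application. The model file, feature layout, and
# threshold are hypothetical; no Telum-specific tooling is shown here.
import numpy as np
import onnxruntime as ort

# Model trained and exported off-platform (e.g., to ONNX), then deployed
# alongside the application that processes transactions.
session = ort.InferenceSession("fraud_model.onnx")
input_name = session.get_inputs()[0].name

def score_transaction(features: list[float]) -> float:
    """Return a fraud-risk score for one transaction, in the request path."""
    batch = np.asarray([features], dtype=np.float32)
    outputs = session.run(None, {input_name: batch})
    # Assumes the model's first output is a numeric risk score for the batch.
    return float(np.ravel(outputs[0])[0])

# Hypothetical features: amount, merchant category code, hour of day, distance from home (km).
if __name__ == "__main__":
    risk = score_transaction([125.40, 5411.0, 23.0, 812.0])
    print("decline" if risk > 0.9 else "approve", f"(risk={risk:.3f})")
```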
Innovations across banking, finance, trading, insurance
Today, businesses typically apply detection techniques to catch fraud after it occurs, a process that can be time-consuming and compute-intensive given the limitations of current technology, particularly when fraud analysis and detection run far away from mission-critical transactions and data. Due to latency requirements, complex fraud detection often cannot be completed in real time, meaning a bad actor could have already successfully purchased goods with a stolen credit card before the retailer is aware that fraud has taken place.
According to the Federal Trade Commission’s 2020 Consumer Sentinel Network Databook, consumers reported losing more than $3.3 billion to fraud in 2020, up from $1.8 billion in 2019. Telum can help clients move from a fraud detection posture to a fraud prevention posture: rather than catching many cases of fraud after the fact, they can potentially prevent fraud at scale, before a transaction is completed, without impacting service level agreements (SLAs).
The new chip features an innovative centralized design, which allows clients to leverage the full power of the AI processor for AI-specific workloads, making it ideal for financial services workloads like fraud detection, loan processing, clearing and settlement of trades, anti-money laundering and risk analysis. With these new innovations, clients will be positioned to enhance existing rules-based fraud detection or use machine learning, accelerate credit approval processes, improve customer service and profitability, identify which trades or transactions may fail, and propose solutions to create a more efficient settlement process.
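One hedged illustration of pairing existing rules-based fraud detection with a machine learning score in the transaction path follows. The rules, thresholds, and the score_transaction() helper (reused from the sketch above) are assumptions for illustration, not IBM APIs, and a production system would tune them against its own SLAs.

```python
# Hypothetical illustration: combine cheap rule checks with a model score
# before a transaction completes. Rules, thresholds, and score_transaction()
# (from the earlier sketch) are assumptions, not IBM APIs.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    merchant_category: float
    hour_of_day: float
    distance_from_home_km: float

def rules_flag(txn: Transaction) -> bool:
    """Deterministic checks that run on every transaction."""
    return txn.amount > 10_000 or (
        txn.hour_of_day < 5 and txn.distance_from_home_km > 500
    )

def decide(txn: Transaction) -> str:
    """Approve, review, or decline before the transaction is completed."""
    if rules_flag(txn):
        return "decline"
    # Model inference augments the rules; the goal of on-chip acceleration is
    # to keep this call inside the transaction's latency budget.
    risk = score_transaction([
        txn.amount, txn.merchant_category, txn.hour_of_day, txn.distance_from_home_km,
    ])
    if risk > 0.9:
        return "decline"
    if risk > 0.6:
        return "review"
    return "approve"
```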
Telum and IBM’s Full Stack Approach to Chip Design
Telum follows IBM’s long heritage of innovative design and engineering, with hardware and software co-creation and integration spanning silicon, systems, firmware, operating systems, and leading software frameworks.
The chip contains 8 processor cores with a deep super-scalar, out-of-order instruction pipeline, running at a clock frequency of more than 5GHz and optimized for the demands of heterogeneous enterprise-class workloads. The completely redesigned cache and chip-interconnection infrastructure provides 32MB of cache per core and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire on 17 metal layers.
Leadership in semiconductors
Telum is the first IBM chip with technology created by the IBM Research AI Hardware Center. In addition, Samsung is IBM’s technology development partner for the Telum processor, which is developed in the 7nm EUV technology node.
Telum is another example of IBM’s leadership in hardware technology. IBM Research, among the world’s largest industrial research organizations, recently announced scaling to the 2 nm node, the latest benchmark in IBM’s legacy of contributions to silicon and semiconductor innovation. In Albany, NY, home to the IBM AI Hardware Center and Albany Nanotech Complex, IBM Research has created a leading collaborative ecosystem with public-private industry players to fuel advances in semiconductor research, helping to address global manufacturing demands and accelerate the growth of the chip industry.