Tips for Building an Enterprise AI Strategy
Today, businesses are under a lot of pressure to get their Artificial Intelligence (AI) strategies right. But when it comes to implementation, they are facing serious challenges around the technology and infrastructure required to support it. AI technology requires an immense amount of processing power and the ability to transfer large amounts of data. As such, it’s become clear that businesses need the right environment to deploy these applications with both latency and cost considerations in mind. Further, businesses are seeing that scaling and operating efficient AI deployments require high-density compute infrastructure and associated power and cooling capacities.
That said, most of today’s enterprises do not have the means to support these accelerated computing operations and are not building their own modern data centers. This holds especially true in highly competitive markets, including the financial services, healthcare, retail, media and entertainment, manufacturing and automotive industries.
While these challenges persist, there are important tips to keep in mind to build a successful enterprise AI strategy:
Prioritize the Cloud and Security
A successful enterprise AI strategy and implementation is highly dependent on the cloud. Simply put, AI cannot be done without the cloud. As companies increasingly integrate a variety of AI-driven technologies across voice, vision, language, Machine Learning (ML) and Deep Learning (DL), which is sometimes described as ML on steroids, in order to transform their businesses and gain a competitive edge, cloud technologies are deeply woven into this process.
Think about it: AI feeds off data, and the more data it accesses, the more intelligent it can become. Since cloud environments can support enormous volumes of data, they can help provide AI systems with the information they need to learn. These systems can also lean on the cloud for data storage, analytics and computational power. For example, Monte Carlo simulation consumes vast amounts of data to help people account for risk, analyzing nearly every possible scenario to produce a range of possible outcomes and the probability that each will occur for any chosen course of action. Sustaining this computerized mathematical technique demands an unprecedented amount of compute, and the cloud is a central part of supplying it.
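To make the Monte Carlo idea concrete, here is a minimal sketch in Python. All parameters (starting value, mean return, volatility, trial count) are illustrative assumptions, not figures from any real risk model:

```python
import random

def simulate_portfolio(initial=100_000, years=10, mean=0.07,
                       stdev=0.15, trials=10_000):
    """Monte Carlo sketch: compound a random annual return for each
    year of each trial, then return the sorted final values so the
    caller can read off a range of outcomes and their likelihoods."""
    finals = []
    for _ in range(trials):
        value = initial
        for _ in range(years):
            value *= 1 + random.gauss(mean, stdev)  # one random year
        finals.append(value)
    finals.sort()
    return finals

results = simulate_portfolio()
# Percentiles summarize the spread of possible outcomes.
p5 = results[int(0.05 * len(results))]    # pessimistic scenario
p50 = results[len(results) // 2]          # median scenario
p95 = results[int(0.95 * len(results))]   # optimistic scenario
```

Each trial is cheap on its own, but risk teams run millions of trials across thousands of instruments, which is exactly the kind of elastic, bursty compute demand the cloud absorbs well.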
Connectivity is also important. The ingestion and transport of data from enterprise on-premises and edge locations through to core processing across multiple public cloud providers requires a secure and performant hybrid-cloud architecture.
As these businesses evolve their AI strategies, many are realizing that they do not have the right facilities to build the powerful, connected and highly-performant environments that are needed to support these accelerated computing operations.
Understand your Processing Needs
AI technology requires an immense amount of processing power and the ability to transfer large amounts of data. In addition, AI, DL and ML increasingly use powerful high-performance graphics processing unit (GPU) based servers such as NVIDIA DGX systems. These environments can draw a very large amount of power per individual cluster (up to 40 kW per rack), and to maximize the return on AI and DL, you need multiple clusters.
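A rough back-of-envelope calculation shows why that density strains ordinary facilities. The rack count and PUE (power usage effectiveness, the cooling and facility overhead multiplier) below are assumed figures for illustration only:

```python
# Illustrative capacity math for high-density GPU racks.
KW_PER_RACK = 40   # upper-end per-rack draw cited for dense GPU clusters
RACKS = 8          # hypothetical multi-cluster deployment
PUE = 1.5          # assumed power usage effectiveness (cooling overhead)

it_load_kw = KW_PER_RACK * RACKS        # raw IT load: 320 kW
total_facility_kw = it_load_kw * PUE    # with cooling: 480 kW
kwh_per_month = total_facility_kw * 24 * 30  # monthly energy draw
```

Even this modest hypothetical deployment demands facility-level power and cooling far beyond what a typical enterprise server room was designed to deliver.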
As a result, the old ways of scaling horizontally won’t cut it anymore, and proprietary data centers often fall short as well, for a couple of reasons. First, enterprise data centers aren’t built to support large high-density environments. Second, CIOs don’t want to manage on-premises facilities: they are costly to run and maintain, and CIOs would rather free up that valuable real estate if they can. Applications need to scale vertically to meet these growing data demands, and organizations need to ensure the data center hosting this data is powerful enough to take on this type of power consumption.
For businesses to realize the full benefits of DL and AI, they need to be processing huge amounts of training data almost constantly (95-100% utilization), and that data then needs to go somewhere. Here the cloud falls short: it is not cost-effective at sustained high utilization, and data egress fees add up quickly. To put this into perspective, the cloud is a good option as long as the data being processed never needs to leave the cloud, or for cases where there are peaks and troughs in activity.
However, in this AI scenario, where enterprises need to constantly transfer their data out of the cloud, the process becomes very expensive. The cloud can still be effective for DevOps and for some production environments: DL creates new models and algorithms, and these can then be moved into the cloud. What this means is that AI (including DL and ML) will likely require hybrid environments. When evaluating AI strategies, it’s important to be aware of this cost challenge and fully understand your processing needs to find the best strategy for your organization.
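The egress-cost trap described above is easy to see with simple arithmetic. The per-gigabyte rate, monthly transfer volume, and interconnect fee in this sketch are hypothetical assumptions, not any provider’s actual pricing:

```python
# Illustrative comparison: metered cloud egress vs. a flat-rate
# direct interconnect from a colocation facility.
EGRESS_RATE_PER_GB = 0.09   # assumed cloud egress price, USD per GB
TB_MOVED_PER_MONTH = 50     # training data pulled out of the cloud monthly
COLO_LINK_FLAT_FEE = 1_500  # assumed flat monthly interconnect cost, USD

egress_cost = TB_MOVED_PER_MONTH * 1_000 * EGRESS_RATE_PER_GB
# At steady high volume, the metered egress bill outruns the flat fee.
cheaper = "colocation link" if COLO_LINK_FLAT_FEE < egress_cost else "cloud egress"
```

The point is not the specific numbers but the shape of the curve: metered egress scales linearly with data moved, while a direct interconnect is roughly flat, so constant high-volume transfer favors the hybrid, colocation-based approach.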
Evaluate Options that are Best for your Enterprise
While it may seem obvious, every enterprise is different. Many enterprises are looking to get rid of their own data centers because those facilities are not set up for the high-performance configurations that AI infrastructure requires; another driver is that IT teams simply do not want to manage data centers. High-performance data centers, however, are crucial for these configurations and for furthering enterprise AI innovation.
In other cases, the sheer abundance of data means organizations may not be able to store all of it in the cloud. Data that remains outside the cloud will need to sit in close proximity to the data in the cloud in order to deliver the speed and performance that businesses are vying for. Taking the location of their data into consideration helps ensure the fast access to data, the high levels of connectivity and bandwidth, and the minimal latency that businesses need to compete today.
Carrier- and cloud-neutral colocation environments can help simplify and strengthen an AI strategy by giving businesses a better choice for connecting to the cloud. Businesses gain the benefit of low-latency connectivity through colocation data centers that offer direct interconnections to multiple cloud service providers within the same facility, which ultimately translates to better service for customers, even as businesses grow.