AiThority Interview with Joe Fernandes, VP and GM, AI Business Unit at Red Hat

In this quick chat, Joe Fernandes, VP and GM of the AI Business Unit at Red Hat, shares his perspective on generative AI, adoption challenges, the evolution of cloud computing, and more…

———-

How does generative AI enhance Red Hat’s open-source solutions, and what unique value does it bring to your hybrid cloud and Kubernetes technologies?

Gen AI can provide value to new and experienced users alike. New and less experienced users can get guidance on how to work with our open source solutions, better access to documentation, and access to Red Hat knowledge as part of Red Hat’s subscription value. For experienced users, it accelerates access to detailed technical information and shortens the time required for an expert to troubleshoot issues or perform more complex tasks, augmenting the product experience and amplifying their own expertise.

For Red Hat, gen AI allows our extensive knowledge of our open source technologies, and of how customers use them, to be delivered in a human-language-accessible form, which in turn creates a better user experience with Red Hat solutions like Red Hat Ansible Lightspeed, Red Hat OpenShift Lightspeed, and Red Hat Enterprise Linux Lightspeed.

  • Red Hat Ansible Lightspeed uses gen AI to help Ansible users quickly convert their automation ideas into Ansible automation playbooks and to create, adopt, and maintain their Ansible Automation Platform content more efficiently.
  • With OpenShift Lightspeed, users have a generative AI-based virtual assistant integrated into Red Hat OpenShift. Developers can use OpenShift Lightspeed for tasks such as automatically deploying an application, debugging issues, and leveraging more advanced features, helping them bring applications to market more quickly. For OpenShift administrators and platform engineers managing the platform, Lightspeed can help them better navigate installation, integrate the platform into their application deployment pipelines, and manage day-2 operations. For example, an administrator can ask, “How do I install the Red Hat OpenShift Virtualization operator?” and it will provide step-by-step guidance.
  • Similarly, Red Hat Enterprise Linux Lightspeed will apply gen AI to simplify how enterprises deploy and maintain their Linux environments, helping RHEL administration teams do more, faster.

Highlight key challenges enterprises face when adopting generative AI.

First, an enterprise needs a clear view of its use cases and desired outcomes. This could be as simple as prioritizing a specific use case to start with and deciding how to evaluate the impact of integrating gen AI capabilities against the outcome it is trying to achieve.

The opportunity around gen AI is tremendous. Every customer we work with is at some stage of evaluating key use cases for their business, and we see an increasing number of enterprises moving these use cases from the pilot and proof of concept phase into production deployments.

In these conversations with our customers, we’ve learned that many of them struggle with gen AI model costs at scale, with the complexity of aligning models with their private data and use cases, and with deployment constraints.

  • Model costs: Many customers evaluating gen AI use cases start with large frontier model services. For many, however, this becomes cost-prohibitive as they scale their use cases in production and expand to additional ones. This has inspired our work with IBM Research on small language models as an alternative to larger models, including the open source Granite model family.
  • Alignment complexity: Regardless of the models being used, customers also struggle with the complexity of aligning them with their enterprise’s private data and use cases. Most customers address this today by deploying a frontier model with a RAG-based solution, maintaining their data sets outside the model in a vector database (a minimal sketch of the retrieval pattern follows this list). Red Hat believes that incorporating fine-tuning, to customize a model with your data, and integrating that with RAG is a more effective approach. This strategy has driven our work on InstructLab to make model tuning and customization more accessible.
  • Deployment constraints: In addition to affordability and usability, customers need the flexibility to deploy models anywhere they need to run. As the number of gen AI use cases continues to increase, customers won’t be constrained to a single cloud environment. They need to deploy their models wherever their data lives, whether in multiple public clouds, in their on-premises data centers, or at the edge. Just as we’ve seen in the cloud-native application space, this is driving demand for the flexibility of a hybrid AI platform.
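
The RAG pattern described above keeps enterprise data outside the model and retrieves it at query time. Below is a minimal, self-contained Python sketch of that retrieval step; the documents, the word-overlap scoring, and the prompt template are illustrative stand-ins, not Red Hat’s implementation (a production system would use an embedding model and a vector database rather than bag-of-words similarity).

```python
from collections import Counter
import math

# Toy in-memory "vector store": real deployments embed documents with a
# model and index them in a vector database.
documents = [
    "To renew a support subscription, open the customer portal and select Renew.",
    "Granite models can be fine-tuned on private data with InstructLab.",
    "OpenShift clusters are upgraded through the cluster settings page.",
]

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = bag_of_words(query)
    return sorted(documents, key=lambda d: cosine(q, bag_of_words(d)), reverse=True)[:k]

query = "How do I renew my subscription?"
context = "\n".join(retrieve(query))
# The retrieved context is prepended to the prompt sent to the model;
# fine-tuning would instead bake domain knowledge into the model weights.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```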

At the end of the day, customers are looking for models that are cost efficient, aligned to their unique data and use cases, and able to run anywhere.

How is Red Hat helping businesses navigate these complexities?

Red Hat’s approach to AI focuses on providing enterprise organizations with a platform that delivers greater trust, expanded choice, and more consistency. Since AI maturity varies widely for each organization, Red Hat provides a comprehensive AI portfolio to support each stage of the AI adoption journey. We focus on helping organizations overcome the challenges of getting started quickly and scaling AI deployments consistently with enterprise-ready, curated open source AI innovation. Our investments in the AI open source community prioritize ease of use, transparency, and interoperability.

  • Red Hat Enterprise Linux AI is a foundation model platform to consistently develop, test, and run Granite family large language models (LLMs) to power enterprise applications. The solution, including Granite LLMs and InstructLab model alignment tools, is packaged as a bootable Red Hat Enterprise Linux server image deployable across the hybrid cloud.
  • Red Hat OpenShift AI provides an integrated MLOps platform for building, training, deploying, and monitoring AI-enabled applications and predictive and foundation models at scale across hybrid cloud environments (a sketch of querying a served model follows this list). The solution accelerates AI/ML innovation, drives operational consistency, and promotes transparency and flexibility when implementing trusted AI solutions across the organization.
  • Red Hat Ansible Lightspeed with IBM watsonx Code Assistant is a generative AI service designed to help individuals and teams create, adopt, and maintain Ansible content with greater ease and efficiency. Red Hat Ansible Lightspeed takes natural-language prompts and generates code recommendations utilizing IBM watsonx Code Assistant, which is infused with a specially trained, automation-specific foundation model.
  • Red Hat OpenShift Lightspeed is a generative AI-based virtual assistant integrated into Red Hat OpenShift. It applies gen AI to how teams learn and work with OpenShift, enabling users to be more accurate and efficient while freeing IT teams to drive greater innovation. Using an English natural-language interface, users can ask the assistant questions related to OpenShift. It can assist with troubleshooting and investigating cluster resources by applying Red Hat’s extensive knowledge and experience in building, deploying, and managing applications across the hybrid cloud.
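
To illustrate the serving side of the portfolio, the snippet below queries a model exposed behind an OpenAI-compatible endpoint, the interface offered by vLLM-based serving runtimes such as those available in Red Hat OpenShift AI. The route, token, and model name are hypothetical placeholders, not a documented Red Hat endpoint.

```python
import requests  # pip install requests

# Hypothetical values: substitute the inference route, bearer token, and
# deployed model name of your own environment.
BASE_URL = "https://my-model.apps.example.com"
API_KEY = "changeme"

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",  # OpenAI-compatible API, as served by vLLM
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "granite-8b-instruct",  # illustrative model name
        "messages": [{"role": "user", "content": "Summarize our SLA policy."}],
        "max_tokens": 128,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```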

How do you see AI and cloud computing evolving together, and what impact will this have on enterprise operations and innovation?

We view the hybrid cloud as a necessity for the continued evolution and success of AI strategies. An AI-powered application typically needs to run wherever the data lives – whether that’s at the network’s edge, in a private datacenter, or in one or more public clouds. This is where a hybrid cloud approach excels, letting enterprises use standardized tools and technologies across any environment – and this applies to AI as well.

There may also be a difference between the best place to tune and customize a model and the best place to serve it for inference. Customers may leverage a public cloud environment for model tuning, for example, but then need to deploy that model on premises or at the edge for inference. This means AI may become one of the workloads that benefits most from hybrid cloud.

Enterprises are leveraging AI to scale their cloud infrastructure more efficiently. What, in your view, are the best practices for optimizing this process?

There are many different ways customers are leveraging AI to manage their cloud infrastructure. Customers should continue to leverage AIOps-enabled solutions to streamline IT operations and service delivery. We also believe that leveraging AI to expand the use of IT automation and enhance infrastructure and platform administration is key. This is what Red Hat is driving with Lightspeed across Ansible, OpenShift, and RHEL.

A few thoughts on the future—how do you see AI shaping the next generation of enterprise software and cloud solutions?

I think the future of AI resides in smaller, purpose-built AI models that are optimized for efficient inference in production. These small models can be tuned on proprietary enterprise data to execute business-specific tasks. We believe customers will use a combination of RAG and fine-tuning to get the best performance. We’ll continue to see synthetic data generation evolve, enabling users to generate data at the scale required to tune their models. Synthetic data generators, such as those included in the InstructLab project, use a handful of approved examples to create additional data sets, which are then tuned into the model to produce built-to-purpose LLMs.
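
To make the seed-and-expand idea concrete, here is a heavily simplified Python sketch of synthetic data generation: a handful of approved question-and-answer examples prompt a “teacher” model for new pairs. The teacher_model function is a stub; InstructLab’s actual pipeline (taxonomy files, teacher-model prompting, filtering, then tuning) is considerably more involved.

```python
# Seed examples an expert has approved.
seed_examples = [
    {"question": "How do I restart the web service?",
     "answer": "Run: systemctl restart httpd"},
    {"question": "Where are audit logs stored?",
     "answer": "Under /var/log/audit/ by default."},
]

def teacher_model(prompt: str) -> str:
    # Stub standing in for a call to a large "teacher" LLM.
    return "Q: How do I check the web service status?\nA: Run: systemctl status httpd"

def generate_synthetic(seeds: list[dict], n: int) -> list[dict]:
    """Prompt the teacher with approved seeds and collect new Q/A pairs."""
    shots = "\n".join(f"Q: {s['question']}\nA: {s['answer']}" for s in seeds)
    synthetic = []
    for _ in range(n):
        raw = teacher_model(f"Write one new Q/A pair in the style of:\n{shots}")
        q, a = raw.split("\nA: ", 1)
        synthetic.append({"question": q.removeprefix("Q: "), "answer": a})
    # Real pipelines filter and deduplicate before tuning the model on this data.
    return synthetic

print(generate_synthetic(seed_examples, n=2))
```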

We’ll see the role of optimization technologies in AI grow, exemplified by projects like vLLM and techniques like model quantization and sparsification, which make more efficient and optimized model serving and inference a reality.
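
For readers unfamiliar with vLLM, the sketch below uses its offline inference API. The model path is hypothetical, and the quantization argument assumes a checkpoint that has already been quantized with AWQ; running it requires a GPU and the vllm package.

```python
from vllm import LLM, SamplingParams

# Hypothetical local path to a checkpoint already quantized with AWQ;
# vLLM also serves unquantized models if the argument is omitted.
llm = LLM(model="/models/granite-8b-awq", quantization="awq")

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Explain model quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```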

We’ll continue to see the evolution from chatbots and copilots to agentic AI workflows that enable autonomous task execution.
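
A bare-bones illustration of that shift: a chatbot returns text, while an agentic workflow lets a model choose and execute tools in a loop until a task completes. Everything below (the planning stub, the tool registry) is schematic rather than any specific product’s API.

```python
# Schematic agent loop: the "model" picks a tool, the runtime executes it,
# and the observation is fed back until the model declares the task done.

def check_disk(path: str) -> str:
    return f"{path}: 92% full"          # stubbed tool result

def clean_tmp(path: str) -> str:
    return f"removed 3 GB from {path}"  # stubbed tool result

TOOLS = {"check_disk": check_disk, "clean_tmp": clean_tmp}

def model_decide(history: list[str]) -> tuple[str, str]:
    """Stub standing in for an LLM call that plans the next action."""
    if not any("clean_tmp" in h for h in history):
        return ("check_disk", "/tmp") if not history else ("clean_tmp", "/tmp")
    return ("done", "")

history: list[str] = []
while True:
    tool, arg = model_decide(history)
    if tool == "done":
        break
    observation = TOOLS[tool](arg)       # autonomous task execution
    history.append(f"{tool}({arg}) -> {observation}")

print(history)
```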

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

Joe Fernandes is Vice President and General Manager, AI Business Unit at Red Hat. Most recently, Joe was Vice President and General Manager of Red Hat’s Generative AI Foundation Model Platforms business, where he led Product Management, Product Marketing, and Technical Marketing for Red Hat Enterprise Linux AI (RHEL AI) and InstructLab. Joe has also served as Vice President and General Manager for Hybrid Cloud Platforms, which includes Red Hat OpenShift, Red Hat OpenStack, and Virtualization. Joe began his Red Hat career as the Product Manager for the first OpenShift release, built the product management organization, and helped grow OpenShift into a billion-dollar business.

Prior to Red Hat, Joe was the Director of Product Management for Application Quality Management solutions at Oracle and served as the Director of Product Management and Marketing for Empirix’s Web business unit prior to its acquisition by Oracle.

Joe holds a Bachelor’s degree in Electrical and Computer Engineering from WPI and a Master’s degree in Business Administration from Boston College.

Red Hat is the leading provider of enterprise open source software—including Linux, hybrid cloud, AI, and automation technologies. We work alongside a community of contributors, customers, and partners to build technology that unlocks opportunities for innovation, everywhere.
