
IBM watsonx Now Available for Enterprise AI Stacks

IBM watsonx's generative AI capabilities will help enterprises scale their machine learning and foundation model workloads.

Generative AI has emerged as one of the biggest technology catalysts of the post-pandemic era. It is already considered the next frontier in workforce productivity and business optimization. As businesses scramble to set up infrastructure that can accommodate generative AI technologies, it remains to be seen how CIOs and CISOs will answer the prevailing questions around AI’s accountability and trustworthiness. A lack of AI governance could derail the train of innovations arriving on the back of recent developments. A recent report cites the economic impact of generative AI on society: within a year, generative AI capabilities could add up to $4 trillion to the global economy, more than the total annual GDP of the UK. And it does not stop there; each year, AI’s impact on the global economy could increase by 15 to 40 percent, depending on the specific industrial use cases. To compete in an intensely competitive marketplace, businesses will need a technology partner that can assist with generative AI technology stack development at an enterprise level. IBM has positioned its new capabilities to meet these requirements.

Today, IBM watsonx has arrived to help organizations scale their AI efforts on an AI-ready data platform. The new platform consists of the following components (an illustrative code sketch follows the list):

  • watsonx.ai studio
  • watsonx.data fit-for-purpose data store
  • watsonx.governance toolkit (arriving later this year)
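
For a concrete sense of how a developer might call a hosted foundation model through the watsonx.ai studio, here is a minimal Python sketch. The IAM token exchange shown is the standard IBM Cloud mechanism; the generation endpoint path, request fields, model identifier, and environment variable names are illustrative assumptions rather than an authoritative reference to IBM's API, so consult the watsonx.ai documentation for exact details.

```python
# Minimal sketch: prompt a hosted foundation model on watsonx.ai over REST.
# The IAM token exchange is standard IBM Cloud; the generation endpoint
# path, payload fields, response shape, and model ID below are ASSUMPTIONS
# for illustration -- check IBM's watsonx.ai documentation for the exact API.
import os
import requests

IBM_CLOUD_API_KEY = os.environ["IBM_CLOUD_API_KEY"]      # assumed env var name
WATSONX_PROJECT_ID = os.environ["WATSONX_PROJECT_ID"]    # assumed env var name

# 1. Exchange the API key for a short-lived IAM bearer token.
iam_resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": IBM_CLOUD_API_KEY,
    },
    timeout=30,
)
iam_resp.raise_for_status()
access_token = iam_resp.json()["access_token"]

# 2. Send a prompt to a foundation model (endpoint and fields are illustrative).
gen_resp = requests.post(
    "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-07-07",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "model_id": "google/flan-ul2",        # example model identifier
        "project_id": WATSONX_PROJECT_ID,
        "input": "Summarize the key benefits of enterprise AI governance.",
        "parameters": {"decoding_method": "greedy", "max_new_tokens": 200},
    },
    timeout=60,
)
gen_resp.raise_for_status()
print(gen_resp.json()["results"][0]["generated_text"])  # assumed response shape
```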

Why IBM watsonx?

IBM watsonx is available on IBM Cloud, a world-class AI infrastructure designed to handle ever-growing enterprise data pipelines and analytics workloads. This AI-optimized framework covers the entire lifecycle of AIOps and cloud computing in a remote environment, governed by trustworthy guardrails based on local automated policy enforcement.

IBM users can tap the company’s expanded AI capabilities, along with AI development toolkits from the Hugging Face community, for a range of generative AI innovations. watsonx is pre-trained on numerous use cases supporting simple and complex NLP tasks such as AI-generated content, code generation, text analytics, classification, conversational AI, summarization, and more. Organizations can scale their AI projects with a variety of IBM-trained data sets and machine learning foundation models for complex tasks across different use cases. So far, generative AI capabilities have been extensively used and tested in customer service, marketing, sales, software engineering, and R&D, but their usefulness is not limited to one or two industries. From manufacturing to life sciences to banking and finance, every industry can adopt generative AI LLMs to solve specific business challenges with measurable outcomes. Moreover, generative AI will reorganize the global job market, ensuring workforce productivity and organizational efficiencies go hand in hand with AI-enabled processes.
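
To illustrate the kind of community tooling referenced above, here is a short, generic Python example of one of the listed NLP tasks (summarization) using the open-source Hugging Face transformers library. It is a minimal community-style sketch, not watsonx-specific code, and the model choice is only an example.

```python
# Generic summarization example with the open-source Hugging Face
# transformers library (community tooling, not IBM watsonx code).
from transformers import pipeline

# The model choice is only an example; any summarization-capable
# checkpoint from the Hugging Face Hub would work here.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Generative AI has emerged as a major technology catalyst in the "
    "post-pandemic era, and enterprises are racing to build governed, "
    "trustworthy AI stacks that can scale foundation models across "
    "customer service, marketing, software engineering, and R&D."
)

result = summarizer(article, max_length=45, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```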

At the time of this announcement, I interviewed Jay Limburn, Vice President & Distinguished Engineer – Product Management, IBM Data & AI, to gather more insights on the role of IBM watsonx in the current context.

Here’s the full transcript of the interview with Jay.

Sudipto: Hi Jay, could you please tell us what guardrails supervise IBM watsonx’s AI governance and trustworthiness capabilities?

Jay:

“If a company building AI tools can’t state clear principles they follow to promote ethics and responsibility, or if they don’t have practices in place to live up to those principles – their technology has no place on the market.”

“We are committed to the responsible stewardship of powerful technologies like AI. That means:

Principles – making clear that the purpose of AI we build is to augment human expertise, judgement and intelligence – not replace them – and that AI must be transparent, fair and explainable

Practices – infusing a culture of AI ethics throughout all stages of building and deploying AI systems, and structuring our products – including an entire aspect of our new AI platform – watsonx.governance (coming later this year) – to provide precisely the level of explainability and transparency that we feel must be standard in these systems.

Policies – advocating for public policy, including the regulation of AI, that promotes fairness, explainability and transparency, that regulates use cases – where technology actually meets people, and that places the greatest regulatory control on use cases with the greatest risk of societal harm.”

“The goal is simple: trust. Our principles, our practices, and the policies we advocate are all focused on promoting trust in AI. Because if it’s not trusted, society will never fully realize its benefits.”

Sudipto: What kind of IT infrastructure should an enterprise have to successfully derive results from IBM’s generative AI technology stack?

Jay:

“For any enterprise, whether large or small, compute infrastructure is the largest cost associated with foundation models. The bigger the model, the bigger the cost to process input and provide output – this is called “usage” or “inference” cost.

That is why IBM announced, back in May, a new GPU offering on IBM Cloud to help clients deploy these foundation models and AI workloads. It’s understandable that not every enterprise will be able to support foundation model workloads, and the need for performance-intensive computing as a service (PICaaS) will ultimately fall to the cloud/IT provider.

IBM acknowledges these hurdles and, to mitigate their impact, introduced Vela, IBM’s first AI-optimized, cloud-native supercomputer hosted on IBM Cloud. IBM Research designed Vela to scale up at will and readily deploy similar infrastructure into IBM Cloud data centers.

Vela is now our go-to environment for IBM researchers creating our most advanced AI capabilities, including our work on foundation models, and where we collaborate with partners to train many kinds of models.

By offering end-to-end performance-intensive computing as a service, IBM is ensuring that its foundation models are built on infrastructure that delivers the resiliency, performance, and security that our clients demand.”
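
To make the inference ("usage") cost point above concrete, here is a small back-of-the-envelope Python sketch. Every number in it is a hypothetical placeholder chosen for illustration, not IBM or watsonx pricing; the only takeaway is that spend scales with model usage and token volume.

```python
# Back-of-the-envelope inference ("usage") cost estimate. Every number
# below is a hypothetical placeholder -- NOT IBM or watsonx pricing.
def monthly_inference_cost(
    requests_per_day: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    price_per_1k_tokens_usd: float,
) -> float:
    """Estimate monthly inference spend for a hosted foundation model."""
    tokens_per_request = avg_input_tokens + avg_output_tokens
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1000 * price_per_1k_tokens_usd


# Example: 50,000 requests/day, ~500 input + ~200 output tokens each,
# at a hypothetical $0.002 per 1,000 tokens -> roughly $2,100/month.
print(f"${monthly_inference_cost(50_000, 500, 200, 0.002):,.2f}")
```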

Sudipto: With generative AI and no-code computing trends on the rise, where do you see the AIOps market heading with self-service AI workloads?

Jay:

“AIOps is just another part of the puzzle for generative AI. You may have just seen that IBM recently acquired Apptio Inc., a leader in financial and operational IT management and optimization (FinOps) software, to bolster our performance optimization and observability capabilities. We’re excited about this acquisition because Apptio, together with IBM’s IT automation software and its watsonx AI platform, will help businesses around the world manage and optimize enterprise IT spend and derive tangible financial value and operational improvement.

And as the data has shown, leveraging generative AI to inform strategic decisions is a growing focal point among CEOs. So, by pairing the impressive compute power and inference that a generative AI platform such as watsonx can bring with the observability and resource management that an AIOps solution can offer, enterprises may now be able to have a full, 360-degree view of their business operations.”

In its official blog, IBM has confirmed its commitment to developing watsonx for broader NLP applications. This would activate foundation models with more than 100 billion parameters for bespoke and targeted use cases within the enterprise. IBM also intends to strengthen its overall AI governance capabilities across end-to-end lifecycle development and implementation, equipping AI engineers with better risk mitigation capabilities in a compliant environment.
