Artificial Intelligence | News | Insights | AiThority

F5 Study: Enterprises Plowing Ahead with AI Deployment Despite Gaps in Data Governance and Security Concerns

F5 (NASDAQ: FFIV) today released a new report that provides a unique view into the current state of enterprise AI adoption. F5’s 2024 State of AI Application Strategy Report reveals that while 75% of enterprises are implementing AI, 72% report significant data quality issues and an inability to scale data practices. Data and the systems companies put in place to obtain, store, and secure it are critical to the successful adoption and optimization of AI.

“AI is a disruptive force, enabling companies to create innovative and unparalleled digital experiences. However, the practicalities of implementing AI are incredibly complex, and without a proper and secure approach, it can significantly heighten an organization’s risk posture,” said Kunal Anand, EVP and CTO at F5. “Our report highlights a concerning trend: many enterprises, in their eagerness to harness AI, overlook the need for a solid foundation. This oversight not only diminishes the effectiveness of their AI solutions but also exposes them to a multitude of security threats.”

As enterprises build out a new stack to support the widening array of AI-powered digital services, the study highlights challenges they face across the infrastructure, data, model, application services, and application layers that must be overcome for widespread scalable adoption.


The Promise and Reality of Generative AI

Organizations are enthusiastic about the prospects of generative AI’s business impacts. Respondents named it the most exciting technology trend of 2024. However, only 24% of organizations say they have implemented generative AI at scale.

Although the use of generative AI is on the rise, the most common use cases often serve less strategic functions. The use cases respondents say they have already deployed most often are copilots and other employee productivity tools (in use by 40% of respondents) and customer service tools such as chatbots (36%). However, respondents named workflow automation tools (36%) as their highest-priority AI use case.

Roadblocks to Scaling AI in Infrastructure and Data Layers


As enterprise leaders examine challenges to deploying AI-based applications at scale, they cite three main concerns encountered at the infrastructure layer:

  • 62% cite the cost of compute as a major concern to scaling AI
  • 57% cite model security as a primary concern. To address this, enterprise leaders expect to spend 44% more on security over the next few years as they scale deployments
  • More than half of respondents (55%) cite performance across all aspects of the model as a concern

At the data layer, data maturity is a more immediate and potentially bigger challenge impacting the widespread implementation of AI:

  • 72% of study respondents cite data quality and an inability to scale data practices as the top hurdles to scaling AI
  • 53% cite the lack of AI and data skillsets as a major impediment
  • Although 53% of enterprises state that they have a defined data strategy in place, over 77% of organizations surveyed state they lack a single source of truth for their data
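The data-maturity hurdles above often surface as batches of training data that silently fail basic quality checks. As a minimal sketch of the kind of automated gate a data strategy might include, the snippet below validates a batch of records before it reaches a training pipeline; the field names and thresholds are hypothetical and not drawn from the F5 report.

```python
# Illustrative data-quality gate for training data. The required fields
# and the 5% null-rate threshold are hypothetical assumptions.

REQUIRED_FIELDS = {"customer_id", "timestamp", "label"}
MAX_NULL_RATE = 0.05  # reject the batch if more than 5% of values are missing

def audit_batch(records):
    """Return (ok, issues) for a batch of dict records destined for training."""
    issues = []
    if not records:
        return False, ["empty batch"]
    # Schema check: every record must carry the required fields.
    missing = [i for i, r in enumerate(records) if not REQUIRED_FIELDS <= r.keys()]
    if missing:
        issues.append(f"{len(missing)} records missing required fields")
    # Null-rate check across all values in the batch.
    values = [v for r in records for v in r.values()]
    null_rate = sum(v is None for v in values) / len(values)
    if null_rate > MAX_NULL_RATE:
        issues.append(f"null rate {null_rate:.1%} exceeds {MAX_NULL_RATE:.0%}")
    return not issues, issues

ok, issues = audit_batch([
    {"customer_id": 1, "timestamp": "2024-01-01", "label": None},
    {"customer_id": 2, "timestamp": "2024-01-02", "label": 1},
])
```

A gate like this is one small piece of the "single source of truth" problem the study highlights: without agreed schemas and thresholds, each team ends up auditing data differently.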

Cybersecurity Remains a Key Concern and Consideration

According to the study, cybersecurity is a principal concern for those tasked with delivering AI services. Factors such as AI-powered attacks, data privacy, data leakage, and increased liability rank among the top AI security concerns.

When asked how they plan to defend against these threats to secure AI implementations (or are already doing so), respondents are focused on app services such as API security, monitoring, and DDoS and bot protection:

  • 42% state they are using or planning to use API security solutions to safeguard data as it traverses AI training models
  • 41% use or plan to use monitoring tools for visibility into AI app usage
  • 39% use or plan to use DDoS protection for AI models
  • 38% use or plan to use bot protection for AI models
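To make the app-services controls above concrete, here is a minimal sketch of two of them combined at an AI inference endpoint: an API-key check plus a sliding-window rate limit of the sort used for bot and DDoS mitigation. The token set, window size, and request budget are hypothetical; the report names these control categories but does not prescribe an implementation.

```python
import time
from collections import defaultdict, deque

# Illustrative app-layer controls for an AI inference endpoint:
# API-key validation plus a per-client sliding-window rate limit.
# All constants below are hypothetical assumptions.

VALID_TOKENS = {"demo-token"}        # stand-in for real API-key validation
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30         # per-client budget against bot/DDoS abuse

_request_log = defaultdict(deque)    # client_id -> timestamps of recent calls

def authorize(client_id, token, now=None):
    """Return (allowed, reason) for one request to the model endpoint."""
    now = time.monotonic() if now is None else now
    if token not in VALID_TOKENS:
        return False, "invalid token"
    window = _request_log[client_id]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False, "rate limit exceeded"
    window.append(now)
    return True, "ok"
```

In production these checks would live in an API gateway or WAF in front of the model, not in application code, but the decision logic is the same: authenticate first, then enforce a per-client request budget.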
