Generative AI is Over-hyped: Bias and Security Risks Biggest Barriers
Generative AI tools like ChatGPT have emerged as some of the most disruptive technologies of the post-COVID era. They have the potential to transform the way humans use machines and software to perform tasks and communicate with each other. But is all the buzz around Generative AI worth the attention? According to the latest Salesforce survey of senior IT executives, generative AI is largely over-hyped and risks producing biased outcomes. Skepticism about Generative AI grows when organizations fail to build the strong tech infrastructure and data strategy needed to capture the technology's real potential. While implementing Generative AI remains the top priority for senior IT leaders (33%) over the next 18 months, perennial operational challenges, security risks, and latent biases in the technology could douse the fiery intent of high-growth companies scaling with it.
On closer inspection, going ahead with Generative AI may become a double-edged sword, especially when biases and security risks crop up.
Let’s elaborate on this.
Game-changer for Customer-centric Organizations
No doubt, Generative AI is a powerful tool shaping the digital transformation journeys of many industries. If a company has a strong data strategy and a robust business analytics infrastructure, it can turn Generative AI into a tool that serves customers more proficiently and coherently. In fact, 57% of the IT leaders surveyed by Salesforce call Gen AI a game-changer, and even among those who felt it is an over-hyped phenomenon, 80% agree on Gen AI's capabilities in marketing communication and customer service.
Security Risks and Biases Could Overshadow Gen AI's Benefits
79% of IT leaders see security risks in operating with Gen AI, while 73% feel these tools are too biased to deliver any real benefit in the long run. While inaccuracies surface now and then when AI is used for different purposes, it is bias in the outputs that causes the most friction in the system. Another reason Gen AI could become a burden for organizations is its growing carbon footprint: a recent post estimated that the computing behind ChatGPT consumed 1,287 MWh of energy and released 500 tons of CO2 into the atmosphere.
There are global repercussions to using Gen AI tools, but IT leaders are more concerned about security risks and biases, Salesforce reported.
Approach with Caution and Finesse
If done right, Gen AI could become a pillar of your business analytics. Companies like Salesforce have come up with ethical frameworks and guidelines to work through the finer details of using Generative AI within their existing portfolios, and 30% of the businesses surveyed by Salesforce agree on moving forward with embedded ethics in place. Much of the hype is generated by discussions happening across public and private spaces, and IT leaders are therefore looking to combine data sources from both to train Generative AI algorithms. Clara Shih, CEO of Service Cloud at Salesforce, said, “Generative AI represents a step change in how organizations across industries will analyze data, automate processes, and empower sales, service, marketing, and commerce professionals to grow customer relationships — but it comes with new risks and challenges.”