Artificial Intelligence | News | Insights | AiThority

Generative AI: 5 Guidelines to Ensure Safe, Ethical & Accurate Use

William Faulkner, the renowned American writer, once said, “You cannot swim for new horizons until you have the courage to lose sight of the shore.” Profound as it is, this statement sits quite well with the emerging potential of Generative AI and its many associated risks. Generative AI is constantly evolving and, along the way, presenting mankind with limitless opportunities.

The biggest and most recent example in this segment is OpenAI’s ChatGPT. The chatbot has been touted as one of the most visited websites, with a whopping 672 million visits in January alone.

According to a report by the Financial Times, more than $2 billion has been invested in Generative AI, a jump of 425% since 2020.

Generative AI can be defined as a class of machine learning models that generate new content across a host of forms, such as text, images, and code. But like many technological advancements, Generative AI comes with its share of risks. Today, while many organizations are rushing to implement it, it is equally important to ensure responsible innovation and development.

In this article, we focus solely on guidelines for Generative AI’s responsible development, so that employees and customers can use the technology ethically and accurately.

Recommended: Salesforce’s EinsteinGPT – What’s in Store?

Generative AI’s Role at Salesforce

Generative AI’s scope at Salesforce, particularly in enterprise technology, is enormous. It is already embedded in the Customer 360 platform alongside the company’s coveted Einstein AI technologies, which are slated to make roughly 200 billion predictions daily across Salesforce’s business applications. Generative AI is used at four basic levels:

  • Sales: AI insights are used to discover the appropriate next steps to close deals.
  • Service: Enabling human-like conversations and offering answers to standard, repetitive questions and tasks, allowing agents to focus on more critical requests.
  • Marketing: To understand customer behavior and personalize marketing activities by focusing on the timings and target audience.
  • Commerce: Customized shopping experiences and intelligent e-commerce.

Needless to say, Generative AI plays a vital role in customer engagement, offering a personalized experience across different verticals including sales, marketing, IT interactions, customer service, and commerce.

Recommended: Squashing Long Wait Time & Call Friction – AI’s Role in Transforming Contact Centers

Guidelines for Trusted Generative AI


Salesforce believes in embedding ethical guidance across all products and innovations, enabling responsible innovation for customers and anticipating potential issues before they occur. Let’s take a look at its guidelines for the responsible development and implementation of Generative AI, all of which are still a work in progress.

Safety

Salesforce believes in going the extra mile to reduce bias and toxicity through explainability work, red teaming, and assessments. The idea is to guard any personally identifiable information (PII) present in the training data and to create guardrails that prevent further harm, for instance, ‘force publishing code to a sandbox rather than automatically pushing to production’.
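The two safeguards above can be sketched in a few lines. This is a minimal illustration, not Salesforce’s implementation: the function names, PII patterns, and deployment labels are assumptions made for the example.

```python
import re

# Illustrative PII patterns; a real system would use a much broader set.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-style numbers
]

def scrub_pii(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace PII-like substrings before text enters a training corpus."""
    for pattern in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def deployment_target(source: str) -> str:
    """Guardrail: AI-generated code is always routed to a sandbox,
    never pushed straight to production."""
    return "sandbox" if source == "ai_generated" else "production"
```

For example, `scrub_pii("contact jane@example.com")` yields `"contact [REDACTED]"`, and any code tagged `"ai_generated"` is forced into the sandbox regardless of its content.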

Honesty

While collating data to train and assess the models, give importance to securely recording data provenance and ensuring that you have valid permission to use the data (e.g., open-source or user-provided data). Always maintain transparency when AI-created content is delivered autonomously. This includes disclosing that a chatbot response to a consumer was AI-generated, or using watermarks.
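The disclosure idea can be sketched as a reply object that always carries an explicit AI label when rendered, analogous to a watermark on generated media. The class and label below are illustrative assumptions, not a Salesforce API.

```python
from dataclasses import dataclass

@dataclass
class ChatReply:
    """A chatbot reply that discloses whether it was AI-generated."""
    text: str
    ai_generated: bool

    def render(self) -> str:
        # Transparency guardrail: autonomous AI content is always labeled.
        label = "[AI-generated] " if self.ai_generated else ""
        return f"{label}{self.text}"
```

A consumer-facing reply such as `ChatReply("Your order ships Monday.", ai_generated=True)` would render with the `[AI-generated]` prefix, so the user always knows the content’s origin.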

Accuracy

Help customers train models on their own data, and deliver validated results that balance accuracy and precision. The prime focus should be to communicate clearly whenever there is any ambiguity about the authenticity of AI responses, and to empower users to verify them. This means citing sources, explaining why the model gave a specific response, highlighting areas that need to be double-checked, and creating guardrails that prevent certain tasks from being fully automated.
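One simple way to realize this guidance is to package every answer with its citations and a verification hint. The function, field names, and confidence threshold below are illustrative assumptions for the sketch.

```python
# Illustrative threshold: answers below it are flagged for double-checking.
CONFIDENCE_THRESHOLD = 0.8

def package_answer(text: str, sources: list[str], confidence: float) -> dict:
    """Bundle a model answer with citations and a verification flag."""
    return {
        "answer": text,
        "sources": sources,                          # lets users verify claims
        "needs_review": confidence < CONFIDENCE_THRESHOLD,
    }
```

A low-confidence answer comes back with `needs_review` set to `True`, signaling the user to double-check it against the cited sources before acting on it.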

Recommended: Transforming Customer Experience: The Superpowers of AI ML in Content Marketing

Empowerment

In some cases, fully automating a process is the best approach; in others, AI should play a supporting role to a human, or human intervention is required. The correct way is to find the balance that enhances human capabilities, and to make the solutions accessible to all, e.g., by generating ALT text for images.
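That balance is often implemented as human-in-the-loop routing: routine, high-confidence requests are automated, while everything else is escalated to a person. The threshold and labels here are illustrative assumptions, not a described Salesforce mechanism.

```python
# Illustrative threshold: only very confident, routine requests are automated.
ESCALATION_THRESHOLD = 0.9

def route_request(intent: str, confidence: float) -> str:
    """Automate high-confidence routine intents; escalate the rest to a human."""
    if intent == "routine" and confidence >= ESCALATION_THRESHOLD:
        return "automated"
    return "human_agent"
```

This keeps agents free for critical requests (as described in the Service bullet above) while guaranteeing that ambiguous or sensitive cases always reach a person.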

Sustainability

Above all, it is important to develop right-sized models where possible to reduce the carbon footprint. In artificial intelligence, a bigger model does not necessarily mean a better model: in some scenarios, smaller, better-trained models have outperformed larger, more sparsely trained ones.

[To share your insights with us, please write to sghosh@martechseries.com]. 
