
Twitter Updates its Responsible Machine Learning Initiative

Responsible Machine Learning development is essential to extracting positive outcomes from AI and Machine Learning initiatives. These initiatives empower AI engineers, data scientists and end users to build, analyze and utilize AI and ML applications ethically. Almost every major technology innovator evangelizes the importance of Responsible Machine Learning development. One of them is Twitter.

Twitter has regularly provided updates on its ongoing AI and Machine Learning projects. In its latest blog post, the microblogging platform, well known for its social media listening technology, announced a collaborative Responsible Machine Learning initiative. With this announcement, Twitter has reinforced its commitment to building and promoting ethical AI practices, taking “responsibility for our algorithmic decisions.”

What is Responsible Machine Learning?

Like many current applications of AI and Machine Learning, Responsible ML is surrounded by ambiguity, leaving innovators without a practical definition of its scope.

Twitter has tried to define its Responsible ML initiative around the following pillars:

  • Taking responsibility for [AI ML] algorithmic decisions
  • Equity and fairness of outcomes
  • Transparency about decisions and how they were arrived at
  • Enabling agency and algorithmic choice

But, what exactly is Responsible ML?

After evaluating a large number of resources on AI and ML development projects, we concluded that Responsible ML is the practice of developing actionable best practices for data scientists (People), computing and data management (Processes), and security (Information Technology), so that organizations can build ML applications in an ethically responsible manner.


Based on this definition, we can understand the “8 Principles of Responsible ML” for an organization; a brief code sketch of one of these principles follows the list below.

The 8 principles include:

  1. Human Augmentation
  2. Bias Evaluation
  3. Explainability
  4. Replacement ability
  5. Displacement strategy
  6. Practicality
  7. Trust and Privacy
  8. Data Risk and Assessment
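
To make one of these principles concrete, the following is a minimal, illustrative Python sketch of a Bias Evaluation check. The predictions, group labels and the 0.1 threshold are hypothetical assumptions for demonstration only and are not drawn from Twitter’s or any vendor’s systems; the sketch simply compares positive-prediction rates across two demographic groups, a demographic-parity style check.

import numpy as np

# Hypothetical model predictions (1 = positive outcome) and group labels.
# These arrays are illustrative only; a real bias evaluation would use a
# held-out dataset with carefully curated demographic attributes.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

def positive_rate(preds, grps, group):
    """Share of positive predictions received by one demographic group."""
    mask = grps == group
    return preds[mask].mean()

rate_a = positive_rate(predictions, groups, "A")
rate_b = positive_rate(predictions, groups, "B")

# Demographic parity difference: a large gap suggests the model favours one group.
parity_gap = abs(rate_a - rate_b)
print(f"Positive rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {parity_gap:.2f}")

# A team might flag models whose gap exceeds an agreed threshold (assumed 0.1 here).
if parity_gap > 0.1:
    print("Potential bias detected: review features, data balance and thresholds.")

In practice, teams extend checks like this to several fairness metrics and run them as part of model review before deployment.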

Companies like Google and Microsoft use Responsible ML practices to better understand, protect and control their data, models and processes so they can build trusted solutions. Google AI, for instance, provides deep insight into how its recommendation ML models work and offers guidance on examining the efficacy of ML models.
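
As a rough illustration of what “understanding how a model works” can look like in code, here is a minimal sketch that uses scikit-learn’s permutation importance on a synthetic dataset. The dataset, model and feature names are assumptions made for this example and do not represent Google’s or Microsoft’s internal tooling.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic, illustrative data: a real Responsible ML review would use
# production features and an audited evaluation set.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature drives predictions,
# which helps teams explain and audit model behaviour.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx, importance in enumerate(result.importances_mean):
    print(f"feature_{idx}: importance = {importance:.3f}")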

Back to Twitter: What Is It Doing with Responsible ML?

Twitter is using ML to reduce the negative impact of its technology. Millions of users are on Twitter every day, and the platform generates vast amounts of new content and insights every minute. The ML Twitter deploys shapes how users interact with the platform and how Twitter analyzes behavioral data to make the platform safer, more ethical and more transparent. The intended outcome: make Twitter better for people.

Currently, the Twitter ML team is working on its Responsible ML initiative, involving its technical, research, trust and safety, and product teams. Jutta Williams [@williams_jutta] is a key member of this group. She is a Staff Product Manager on ML Ethics, Transparency & Accountability (META), a dedicated group of engineers, researchers, and data scientists collaborating across the company. Their mission is to “assess downstream or current unintentional harms in the algorithms that Twitter uses and to help Twitter prioritize which issues to tackle first.”

From removing racist posts to restricting political innuendo, the team is constantly researching and assessing the impact of ML decisions on content discovery and recommendations.

Twitter is also relying on feedback from its customers, the public, to strengthen its Responsible ML initiative, and has started a Twitter campaign to collect more feedback. If you have any questions about Responsible ML or the work META is doing, you can ask using #AskTwitterMETA.
