
LSU Professor Sun Working to Address Discrimination in AI/ML Systems

Artificial intelligence (AI) and machine learning (ML) technologies play an increasing role in our society, including in high-stakes decision-making such as lending, employment screening, and criminal justice sentencing.

However, a growing challenge with AI and ML systems is avoiding the unfairness they can introduce, which can lead to discriminatory decisions. Finding a solution to that problem is the aim of a project by LSU Computer Science Associate Professor Mingxuan Sun, University of Iowa Computer Science Associate Professor Tianbao Yang, and University of Iowa Associate Professor of Business Analytics Qihang Lin.


The work is supported by grants from the National Science Foundation ($500,000) and Amazon ($300,000). Yang serves as principal investigator on the project, and Sun and Lin are co-principal investigators.

The researchers’ objectives are to design new fairness measures and to develop numerical algorithms that solve the resulting optimization problems with fairness guarantees. More specifically, they will develop scalable stochastic optimization algorithms for optimizing a broad family of rank-based, threshold-agnostic objectives.
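To make the idea of a rank-based objective with a fairness guarantee concrete, here is a minimal sketch in Python. It is not the team's actual formulation: the squared-hinge pairwise ranking surrogate, the mean-score fairness penalty, and all variable names are illustrative assumptions.

import numpy as np

def pairwise_rank_loss(scores_pos, scores_neg):
    # AUC-style surrogate: penalize negatives that are not ranked
    # at least a margin of 1 below positives.
    diffs = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(np.maximum(0.0, 1.0 - diffs) ** 2)

def fairness_penalty(scores, group):
    # Illustrative penalty: squared gap in mean predicted score
    # between demographic group 0 and group 1.
    gap = scores[group == 0].mean() - scores[group == 1].mean()
    return gap ** 2

def objective(w, X, y, group, lam=1.0):
    # Rank-based loss plus a fairness term weighted by lam.
    scores = X @ w
    rank_loss = pairwise_rank_loss(scores[y == 1], scores[y == 0])
    return rank_loss + lam * fairness_penalty(scores, group)

In practice such an objective would be minimized with a stochastic optimizer; the point of the sketch is only that fairness enters as part of the objective rather than as a post-hoc threshold adjustment.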

Learning to rank selects the top-k answers or items with the highest scores under a given scoring function. Ranking algorithms have many applications, such as selecting the top-k job candidates, predicting the top-k crime hotspots, and recommending the top-k items.
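The top-k selection step itself is simple; a short Python sketch follows, with a placeholder linear scoring function and made-up candidate data used purely for illustration.

import numpy as np

def top_k(items, features, score_fn, k=10):
    # Score every item with the given scoring function and
    # return the k items with the highest scores.
    scores = np.asarray([score_fn(x) for x in features])
    order = np.argsort(-scores)[:k]
    return [(items[i], float(scores[i])) for i in order]

# Example: rank four candidates by a linear score (weights are illustrative).
weights = np.array([0.6, 0.4])
candidates = ["A", "B", "C", "D"]
feats = np.array([[0.9, 0.2], [0.4, 0.8], [0.7, 0.7], [0.1, 0.3]])
print(top_k(candidates, feats, lambda x: x @ weights, k=2))

The fairness question the project targets arises one step earlier, in how the scoring function is learned, since whatever bias the scores carry is passed straight through to the top-k list.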



“Most current machine-learning approaches are based on optimizing traditional objectives, such as accuracy in the training data, which are insufficient for addressing the minority bias of training data,” Sun said. “In many domains, the data is highly skewed over different classes. For example, an historical data bias or stereotype exists that most software engineers are young males. An unfair ML system would recommend a software engineer position to young males only.”

Sun added that the project will also include integrating the research team’s techniques into education analytics to address fairness and ethical concerns of predictive models, in particular, the “perpetuating biases toward under-represented minority students, first-generation college students, and female students in STEM courses.”

“Our goal is to ensure more fairness between different demographic groups in applications such as recommendations, top-k hotspot predictions, and students’ performance predictions.”


