
Google Engineer ‘Pays Price’ for Claiming That Its Artificial Intelligence Is Sentient

The issue was raised by the Google engineer months ago, as he repeatedly disputed the company’s managers and maintained that LaMDA, the company’s Language Model for Dialogue Applications, had consciousness and even a soul.

Google, the global search engine giant, has sidelined one of its employees for raising an alarm over its Artificial Intelligence technology and claiming that it is sentient, thus setting the stage for criticism of the company’s most advanced technology.


Blake Lemoine, a seasoned engineer at Google’s Responsible AI organisation, recently told the media that he was placed on paid leave after he flagged what he called glaring gaps in the state-of-the-art technology. Lemoine said he has submitted documents to a US senator’s office, claiming they provide evidence that Google and its technology engaged in religious discrimination.

Addressing the issue, Google spokesperson Brian Gabriel said in a statement, “Our team has addressed and reviewed Blake’s concerns in accordance with our AI guidelines, and nowhere did we find that the evidence supports his claims.”


According to reports, this is not a new issue for the company. Lemoine raised it months ago, repeatedly disputing the company’s managers and maintaining that LaMDA, Google’s Language Model for Dialogue Applications, had consciousness and even a soul. Google countered his claims, saying that several researchers and engineers had reviewed LaMDA and none had reached the conclusion Lemoine did. Most AI experts have concurred with Google’s position that Artificial Intelligence is far from achieving human-like sentience.


AI researchers, however, have not ruled out the emergence of a closer alignment between the technology and human behaviour. They believe a synergy could soon develop in which the two come to understand each other.

In the last few years, these IT giants have taken on the enormous task of building neural networks. Known as ‘large language models’, the technology is used to summarize articles, answer questions and even write long blog posts. However, these models come with their own shortcomings and risks and remain deeply flawed. Sometimes they produce good content, sometimes extremely bad. Overall, they have been found to reproduce patterns from their training data quite convincingly, but they cannot think and act like humans.
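The kind of ‘large language model’ usage described above can be illustrated with a short sketch. This is a minimal example, assuming the open-source Hugging Face transformers library and a publicly available summarization checkpoint; neither is mentioned in the article, and the model shown is an illustrative choice, not Google’s LaMDA.

# Minimal sketch: summarizing a short news passage with a pretrained language model.
# Assumes the Hugging Face "transformers" library is installed (pip install transformers).
from transformers import pipeline

# Load a general-purpose summarization model (illustrative checkpoint choice).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article_text = (
    "Google placed engineer Blake Lemoine on paid leave after he claimed that "
    "the LaMDA conversational model is sentient. Google says reviews by its "
    "researchers and engineers found no evidence to support that claim."
)

# Large language models can condense a longer passage into a short summary.
summary = summarizer(article_text, max_length=30, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

The output is a brief machine-written summary of the passage, the same kind of capability (summarizing, answering, writing) that the article attributes to these models.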


[To share your insights with us, please write to sghosh@martechseries.com]
