Artificial Intelligence | News | Insights | AiThority

FLINT Systems Releases The First Linguistic Based AI Chat Detection Tool

FLINT Systems is releasing the first linguistic tool designed to detect whether a document was written by its attributed author. Unlike other AI detection tools, FLINT technology applies forensic linguistic methodologies to create a digital linguistic fingerprint of an individual's writing style. It then creates a linguistic fingerprint of the document in question and compares the two. In testing, when documents were created by anyone other than the individual who submitted them, FLINT Systems correctly identified the mismatch in over 80% of cases.

“For several decades we have been developing and applying forensic linguistic methodologies to identify the authors of documents,” says leading expert, FLINT Systems Chief Linguistic Officer, Professor Robert Leonard, a pre-eminent forensic linguist whose clients include the FBI Behavioral Analysis Unit, Joint T******** Task Force, Director of National Intelligence, NYPD, the UK’s MI5 and Secret Intelligence Service (MI6), Apple, Facebook, and the Prime Minister of Canada. Leonard’s methodologies have been accepted by federal and state courts across the nation. “We have been working for several years to automate these well-established methodologies to correctly identify whether a text was authored by a machine or by a specific human.”


AI authorship tools, such as ChatGPT, Jasper, ParagraphAI, or Toolbaz, are making it easier for individuals to generate documents and texts automatically, and then masquerade as the authors of those documents. A recent survey by Intelligent.com found that 30% of college students have already used tools like ChatGPT on their homework assignments. Even more alarming, over 60% said they plan to use these tools.

Some AI-based authorship detection tools, such as GPTZero, have already been developed to counter this phenomenon. Their approach is complicated, however, by the high likelihood that authors will edit the AI-generated text to some degree, making this approach to detection more error-prone.



By applying linguistic fingerprinting technology, the FLINT System can correctly identify when an individual did not author a document, regardless of whether human-written passages are interwoven into the AI-generated text.

“We all have our individually unique writing style: inserting a single space or multiple spaces between sentences, repeating typos, capitalizing sentences differently,” explains Jonathan Barsade, Co-CEO of FLINT Systems. Barsade continues: “Applying hundreds of linguistic features, statistical analysis, and technology processes, we are able to create a linguistic fingerprint of individuals. Just like police who compare fingerprints to identify individuals who were at a crime scene, FLINT Systems applies the linguistic fingerprint to identify whether a document was authored by the individual, an AI bot, or some mixture.”
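To make the idea concrete, here is a minimal sketch of how stylometric fingerprinting can work in general. This is not FLINT's actual system, which the company says uses hundreds of features; the handful of surface-level features below (sentence length, double-spacing, capitalization rate) and the cosine-similarity comparison are illustrative assumptions only.

```python
import math
import re

def stylometric_fingerprint(text):
    """Build a small vector of surface-level style features from a text.

    These five features are illustrative stand-ins for the hundreds of
    linguistic features a production system would use.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    words = text.split()
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return [
        len(words) / n_sents,                               # mean sentence length (words)
        text.count("  ") / n_sents,                         # double spaces per sentence
        sum(w[0].isupper() for w in words) / n_words,       # capitalized-word rate
        sum(1 for w in words if "," in w) / n_words,        # comma rate
        sum(len(w) for w in words) / n_words,               # mean word length
    ]

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Compare a known writing sample against a questioned document:
# a low similarity score suggests the questioned text may not match
# the known author's style.
known = stylometric_fingerprint(
    "I write short sentences.  I use double spaces.  Always."
)
questioned = stylometric_fingerprint(
    "This lengthy passage flows quite differently, with long winding "
    "clauses, single spacing, and a noticeably different rhythm."
)
score = cosine_similarity(known, questioned)
```

In practice a system would calibrate a decision threshold on labeled samples rather than eyeballing a single similarity score, and would weight features by how stable they are for a given writer.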

“The unique strength of the FLINT toolbox is in the integration of multiple methodologies and technologies,” states Ted Gutierrez, Co-CEO of FLINT Systems.


