
Maize Analytics Founder and CEO Presents at HHS “Data Min(d)ing: Privacy and Our Digital Identities” Conference


Dr. Daniel Fabbri, Founder & CEO of Maize Analytics, spoke at the U.S. Department of Health and Human Services (HHS) conference on “Data Min(d)ing: Privacy and Our Digital Identities,” a meeting featuring a slate of privacy thought leaders from academia, government, and industry.

Dr. Fabbri’s presentation, entitled “Risks of ‘Black Box’ Machine Learning in Compliance and Privacy Programs,” focused on how to use machine learning to effectively monitor accesses to patient data for inappropriate use. Machine learning algorithms have the potential to automate the detection of snooping, identity theft, and other threats by learning the characteristics of good, bad, and anomalous access patterns.
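The article does not specify which algorithms Maize Analytics uses, but the idea of flagging anomalous access patterns can be illustrated with a minimal statistical sketch (the user IDs, counts, and threshold below are hypothetical):

```python
# Hypothetical sketch: flagging anomalous record-access counts with a
# simple z-score threshold. A production system would learn from far
# richer features of access behavior, not just daily counts.
from statistics import mean, stdev

def flag_anomalies(access_counts, z_threshold=1.5):
    """Return user IDs whose access count deviates strongly from the norm."""
    counts = list(access_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return [user for user, count in access_counts.items()
            if sigma > 0 and abs(count - mu) / sigma > z_threshold]

# Daily chart accesses per user; "u5" accesses far more records than peers.
accesses = {"u1": 12, "u2": 9, "u3": 11, "u4": 10, "u5": 95}
print(flag_anomalies(accesses))  # ['u5']
```

A learned model would replace the fixed threshold, but the output is the same kind of signal a compliance team would triage.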


The risk Dr. Fabbri warned about is known as ‘black box’ machine learning: algorithms that do not allow privacy officers to decipher what they are doing. He stated, “If you cannot extract what the machine learning algorithm is doing, how can you define what your policy is, know it is correct, or defend it to regulators?”

Dr. Fabbri argued that machine learning algorithms may be better applied if they keep the compliance and privacy officer “in the loop.” This is achieved when the machine learning algorithms leverage large-scale data analytics to identify trends and patterns in access data, then recommend a policy (or reason for appropriate or inappropriate access) to the compliance officer. The compliance officer then has the opportunity to accept or reject the policy, and the system applies the approved policy going forward.
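The recommend-review-apply loop described above can be sketched as follows. This is a hypothetical illustration of the workflow, not Maize Analytics’ implementation; the class names, policy descriptions, and approval interface are all assumptions:

```python
# Hypothetical sketch of a "human in the loop" policy workflow: the system
# recommends policies mined from access data, and only policies the
# compliance officer approves are enforced going forward.
from dataclasses import dataclass, field

@dataclass
class Policy:
    description: str       # e.g. a reason an access is appropriate
    approved: bool = False

@dataclass
class PolicyReviewSystem:
    active_policies: list = field(default_factory=list)

    def recommend(self, description: str) -> Policy:
        # In practice, this recommendation would come from large-scale
        # analytics over access logs, not a hand-written string.
        return Policy(description)

    def review(self, policy: Policy, officer_accepts: bool) -> None:
        # The compliance officer stays in the loop: a rejected policy
        # is never enforced.
        if officer_accepts:
            policy.approved = True
            self.active_policies.append(policy)

system = PolicyReviewSystem()
p1 = system.recommend("treating physician accessed own patient's chart")
p2 = system.recommend("any after-hours access is appropriate")
system.review(p1, officer_accepts=True)
system.review(p2, officer_accepts=False)
print([p.description for p in system.active_policies])
```

The key design point matches Dr. Fabbri’s argument: the model proposes human-readable policies rather than opaque scores, so the officer can inspect, defend, and veto what the system enforces.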


Dr. Fabbri concluded, “Leveraging new automation tools are critical for covered entities to efficiently audit accesses to patients’ records, but to do so effectively they must pull back the curtain to ensure systems enforce their policies and do not make inappropriate assumptions on their behalf.”
