
Three Things to Consider in the Emerging AI and ML Cybersecurity Landscape

Cyber threats continue to escalate in both sophistication and volume, and traditional approaches to threat detection are no longer sufficient to ensure protection. In response, machine learning (ML) has proven highly effective at identifying and warding off cyber attacks.

Machine learning’s power is the result of three factors: data, compute power and algorithms. Due to its very nature, the cyber field produces substantial amounts of data.


For example, a corporate network might see billions of daily IP packets, millions of DNS queries, resolved URLs and executed files, and perhaps hundreds of millions of events (processes, connections, I/Os) on its endpoint devices. Vast amounts of computing power are required to extract, clean and process this data, which fortunately is easily, scalably and affordably available through a variety of cloud-based platforms. Equally, increasingly powerful open-source ML algorithms are available that abstract away the complicated underlying math to enable the development, tuning and training of sophisticated models. Together, these factors give cybersecurity vendors capabilities that would have been unthinkable in the past.

Typically cybersecurity vendors train their ML models using live customer data, “honeypots” designed to attract attackers, and through the sharing of data within the cyber community.

This enables a more comprehensive view of the threat landscape, for example, creating model features that might include a file’s recency, prevalence and frequency of usage across the entire customer universe. Vendors also train their models with corpora of known types of malware as well as legitimate files. The training includes determining if a file is malicious or not, but also often tries to classify the type of malware, which is vital in determining how to remediate or remove the malware.
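To make the feature idea above concrete, here is a minimal pure-Python sketch of how reputation signals such as a file's recency, prevalence and usage frequency might feed a simple maliciousness score. All feature names, weights and metadata values are hypothetical illustrations, not any vendor's actual model.

```python
import math

def extract_features(file_meta):
    """Map raw file metadata to a roughly normalized feature vector:
    recency, prevalence across the customer base, and usage frequency."""
    return (
        file_meta["days_since_first_seen"] / 365.0,
        file_meta["machines_seen_on"] / 1_000_000.0,
        file_meta["daily_executions"] / 10_000.0,
    )

def score(features, weights=(-0.8, -1.5, -0.5), bias=1.0):
    """Logistic score: files that are new, rare and little-used score
    as more suspicious (weights here are illustrative, not learned)."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

rare_new_file = {"days_since_first_seen": 1,
                 "machines_seen_on": 3,
                 "daily_executions": 2}
common_old_file = {"days_since_first_seen": 900,
                   "machines_seen_on": 400_000,
                   "daily_executions": 5_000}

s_rare = score(extract_features(rare_new_file))
s_common = score(extract_features(common_old_file))
# The rare, newly seen file scores as more suspicious than the
# widely deployed, long-established one.
```

A production system would learn the weights from labeled corpora rather than hand-set them, but the shape of the pipeline (metadata in, features out, score out) is the same.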


The applications of ML are wide-ranging, including anti-malware, bot detection, anti-fraud and privacy protection. At the same time, several compelling challenges are emerging in the use of ML within cybersecurity, making it an exciting field with tremendous potential.

Adversarial AI and the Role of ML.

The democratization of AI through the accessibility of large data sets, the rapidly falling cost of compute at scale and the open-source availability of powerful algorithms has been a boon for the cybersecurity industry, but it has equally made ML a vital tool in the cyber adversary's arsenal.

For example, generative adversarial models are used to develop strategies that reduce the risk of an attack being identified by cybersecurity tools. ML-based behavioral anomaly detection systems learn normal behavior to quickly identify unusual and potentially malicious activity, but adversaries are likewise developing malware that learns normal user and system behavior in order to impersonate it and minimize the risk of detection.
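The baseline-learning idea can be sketched in a few lines. This toy example (invented login-hour data, a simple z-score threshold rather than a real ML model) shows both halves of the point above: a learned baseline flags gross deviations, and an adversary who mimics the baseline slips through.

```python
import statistics

# "Normal" behavior observed during training: hours at which a user
# typically logs in. These numbers are purely illustrative.
normal_login_hours = [9.0, 9.5, 10.0, 9.2, 9.8, 10.1, 9.4, 9.7]

mu = statistics.mean(normal_login_hours)
sigma = statistics.stdev(normal_login_hours)

def is_anomalous(hour, threshold=3.0):
    """Flag an event when it deviates from the learned baseline by
    more than `threshold` standard deviations."""
    return abs(hour - mu) / sigma > threshold

print(is_anomalous(3.0))   # 3 a.m. login: flagged as anomalous
print(is_anomalous(9.6))   # login that mimics normal behavior: not flagged
```

The second call is the adversarial lesson: malware that stays close to the learned baseline generates no anomaly signal, which is exactly why mimicry is an attractive evasion strategy.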


An ML cybersecurity system’s efficacy can be significantly affected by the cleanliness of the data used to train the model. Adversaries can take advantage of this fact through a “poisoning” attack that seeks to inject bad training data to influence the model to learn incorrectly. This can happen in various ways, from the generation of fake traffic patterns to poisoning of commercial or open-source malware sample datasets.
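A poisoning attack can be illustrated with a deliberately simple detector. In this toy sketch (a nearest-centroid classifier over a single invented feature, here a connection rate), injecting mislabeled "benign" samples drags the benign centroid toward malicious territory until a genuinely malicious sample is misclassified.

```python
def train_centroids(samples):
    """samples: list of (feature_value, label) pairs. Returns the mean
    feature value (centroid) per label."""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: sum(v) / len(v) for label, v in by_label.items()}

def classify(value, centroids):
    """Assign the label whose centroid is closest to the value."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

clean_training_set = [(1.0, "benign"), (2.0, "benign"),
                      (9.0, "malicious"), (10.0, "malicious")]
centroids = train_centroids(clean_training_set)
print(classify(8.0, centroids))      # classified as malicious

# The attacker injects high-rate samples mislabeled as benign:
poisoned_set = clean_training_set + [(8.0, "benign")] * 10
centroids_p = train_centroids(poisoned_set)
print(classify(8.0, centroids_p))    # now classified as benign: evasion
```

Real poisoning attacks are subtler (they must evade data-sanitization checks), but the mechanism is the same: corrupt the training distribution and the learned decision boundary moves with it.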

Adversaries have been able to leverage ML models designed to prevent false positives as a way to avoid detection. Attackers, for instance, learned that by embedding specific patterns into malware, they could trick a popular anti-malware product into whitelisting the code (flagging the code as legitimate) even though it was malware.

Another interesting adversarial example is the use of ML to model human communication patterns and thereby craft more realistic and effective phishing attacks. The state of the art in natural language processing and natural language generation (OpenAI's GPT-3, for example) means it may soon become extremely challenging to discriminate between real and synthetic communications.

ML and Deep Reinforcement Learning.

Conventional ML techniques have been applied in cybersecurity with considerable success, especially in detecting unknown attacks (also referred to as Zero Day attacks). These techniques work exceptionally well in static, linear environments. By contrast, today's sophisticated adversary scenarios are dynamic, multi-vector and sequentially non-linear in character. Merely relying on an ML cybersecurity system to reactively identify one part of the attack sequence is insufficient.

Deep Reinforcement Learning (DRL) is one of the most exciting topics in ML as it combines deep learning techniques (such as convolutional neural networks) with reinforcement learning. This is the core approach behind DeepMind’s AlphaZero breakthrough. The application of DRL to cybersecurity is a crucial step forward in tackling sophisticated threats.

DRL systems learn somewhat like a human. They explore their environment (in the case of cybersecurity, an event space) and learn by receiving feedback and rewards based on the actions that they take. This autonomous approach has been demonstrated to be well suited to complex adversarial scenarios, with superior efficacy, generalizability and adaptability.
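The explore-and-learn-from-rewards loop can be shown with tabular reinforcement learning; note this is a deliberately tiny sketch, not deep RL, and the "environment" (benign or malicious events, allow or block actions, invented rewards) is our own toy construction.

```python
import random

random.seed(0)

actions = ["allow", "block"]
events = ["benign", "malicious"]
# Q-values: the agent's learned estimate of each action's worth per event.
Q = {(e, a): 0.0 for e in events for a in actions}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

def reward(event, action):
    """+1 for blocking malicious or allowing benign traffic, -1 otherwise."""
    return 1.0 if (event == "malicious") == (action == "block") else -1.0

for _ in range(2000):
    event = random.choice(events)
    if random.random() < epsilon:                       # explore
        action = random.choice(actions)
    else:                                               # exploit policy
        action = max(actions, key=lambda a: Q[(event, a)])
    # One-step update (no successor state in this stateless toy problem).
    Q[(event, action)] += alpha * (reward(event, action) - Q[(event, action)])

policy = {e: max(actions, key=lambda a: Q[(e, a)]) for e in events}
print(policy)  # learned: allow benign, block malicious
```

Deep RL replaces the lookup table with a neural network so the agent can generalize across a vast, high-dimensional event space, but the feedback loop (observe, act, receive reward, update) is exactly the one shown here.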

ML Cybersecurity and Internet of Things (IoT).

Tens of billions of new connected devices come online every year, with more to come. However, many of these IoT devices have limited compute or storage capacity, cannot run endpoint cybersecurity software and are built on proprietary firmware. These devices also tend to be “headless,” with limited ability for users to access or update the software running on the device. For these reasons, IoT devices are distinctly vulnerable to cyber attack.

The natural solution to this problem is to run IoT cybersecurity at the network level and/or in the cloud. Traditional signature-based network security technologies, however, aren’t designed to address the IoT device security problem. Moreover, most IoT cybersecurity products are currently little more than re-packaged IDS, URL reputation or hardened DNS services. Still, cutting-edge work is happening in the application of ML to this field. Sophisticated models have been designed that can identify infected devices through inspection of just a few packets of data, enabling proactive detection and blocking of threats.
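One way to build intuition for the "few packets" idea: the sizes of a flow's first packets form a rough fingerprint that can be compared against learned traffic profiles. This is a heavily simplified sketch with invented packet sizes and only two hand-written profiles; real systems learn many profiles from labeled traffic and use far richer features (timing, ports, TLS metadata).

```python
# Hypothetical learned profiles: first-five-packet sizes (bytes) for a
# clean firmware-update flow versus short, repetitive C2 beaconing.
profiles = {
    "clean":    [60, 1514, 1514, 66, 1514],
    "infected": [60, 90, 90, 90, 90],
}

def distance(a, b):
    """Euclidean distance between two packet-size sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify_flow(first_packets):
    """Label a new flow by its nearest learned profile."""
    return min(profiles, key=lambda p: distance(first_packets, profiles[p]))

print(classify_flow([60, 88, 92, 90, 89]))      # matches the beaconing profile
print(classify_flow([60, 1500, 1514, 66, 1500]))  # matches the clean profile
```

Because this inspection happens at the network level, it needs no agent on the device itself, which is precisely what makes the approach attractive for constrained, headless IoT hardware.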

As is often the case, the most substantial innovations occur at the intersection between adjacent fields of endeavor. It is an exciting time in both the ML and cybersecurity fields. We’re seeing the power of ML harnessed to drive important innovations in the cyber field — innovations that will ultimately help keep all of us safer.
