History of Intrusion Detection & Prevention Systems

Intrusion Detection vs Intrusion Prevention vs Next Generation IPS vs Next Generation Firewall vs NDR

Intrusion detection system (IDS) and intrusion prevention system (IPS) products have improved considerably over the past several decades, yet they remain quite similar to their original incarnation, which traces back to an academic paper written in 1986. The basic fundamentals of IDS/IPS are still used today in traditional IDS/IPS products, in next-generation intrusion prevention systems (NGIPS) and in next-generation firewalls (NGFW). This is a look at the beginnings of intrusion detection and intrusion prevention, their challenges over the years, and expectations for the next iteration, network detection and response (NDR).

Where It All Started

IDS and IPS started with an academic paper written by Dorothy E. Denning titled "An Intrusion-Detection Model," which led Stanford Research Institute (SRI) to develop the Intrusion Detection Expert System (IDES). That system used statistical anomaly detection, signatures, and profiles of users and host systems to detect nefarious network behavior. IDES took a dual approach: a rule-based expert system[1] to detect known types of intrusions, plus a statistical anomaly detection component based on profiles of users, host systems, and target systems. For example, it could detect that a protocol like HTTP or FTP was being misused, as well as Denial of Service (DoS) attacks in which a single IP address flooded the network.
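The published IDES design is not reproduced here, but the two detection styles it combined can be sketched conceptually. The hypothetical Python snippet below pairs a tiny rule base with a per-source statistical profile; the rule, field names, and threshold are illustrative assumptions, not IDES internals.

```python
import statistics

# Hypothetical rule base: known-bad behaviors expressed as simple predicates.
EXPERT_RULES = [
    ("ftp_to_sensitive_host", lambda ev: ev["proto"] == "ftp" and ev["dst"] == "10.0.0.5"),
]

def rule_based_alerts(event):
    """Rule-based component: flag events matching known intrusion patterns."""
    return [name for name, rule in EXPERT_RULES if rule(event)]

def anomaly_alert(history, current, k=3.0):
    """Statistical component: flag a source whose packet rate deviates from
    its own historical profile by more than k standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return current > mean + k * stdev

# Example: a source that normally sends ~100 packets/min suddenly floods the network.
profile = [95, 102, 99, 101, 98, 100]
print(anomaly_alert(profile, 5000))                            # True -> possible DoS flood
print(rule_based_alerts({"proto": "ftp", "dst": "10.0.0.5"}))  # ['ftp_to_sensitive_host']
```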

2000 - 2005: Intrusion Detection Preferred Over Prevention

In the early 2000s, IDS started becoming a security best practice. Prior to then, firewalls had been very effective at countering the threat landscape of the 1990s. Firewalls process traffic quickly because they perform no "deep packet inspection," meaning they have no visibility into the content and context of network traffic; they can only react based on port, protocol and/or IP address. In the early 2000s, new threats like SQL injection and cross-site scripting (XSS) attacks were becoming popular, and these attacks would pass right by the firewall. This was the real beginning of putting the IDS into use. The popularity of IPS would come later.
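To make that limitation concrete, here is a minimal, hypothetical Python sketch contrasting a port/protocol rule with the kind of payload inspection an early firewall lacked. The rule set and the SQL-injection pattern are simplified illustrations, not production detection content.

```python
import re

# Classic firewall rule: allow web traffic based only on protocol and port.
def firewall_allows(packet):
    return packet["proto"] == "tcp" and packet["dst_port"] in (80, 443)

# Payload inspection an early-2000s firewall did not perform (illustrative pattern only).
SQLI_PATTERN = re.compile(r"('|%27)\s*or\s*1=1", re.IGNORECASE)

def ids_flags(packet):
    return bool(SQLI_PATTERN.search(packet["payload"]))

attack = {
    "proto": "tcp",
    "dst_port": 80,
    "payload": "GET /login?user=admin' OR 1=1-- HTTP/1.1",  # decoded request for clarity
}
print(firewall_allows(attack))  # True -> the firewall lets it through
print(ids_flags(attack))        # True -> payload inspection catches it
```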

During the early 2000s, few organizations had an IPS because they were concerned that it could block harmless anomalous traffic from prospects. An IPS works by sitting "in-line" between an organization's network and the internet. When an event of interest (EOI) enters the network, the IPS immediately blocks it, terminating the sender's connection to the organization's network. However, an EOI isn't always an attack; it may simply be out-of-the-ordinary activity on the sender's connection. Rather than risk dropping EOI traffic from a prospect that was actually harmless, most companies used an IDS rather than an IPS. Instead of sitting between an organization's network and the internet, an IDS sits off to the side and receives a mirrored copy of all traffic entering the network. When it sees traffic it perceives may be malicious, the IDS sends an alert to the organization's administrator so that an analyst can review the log activity and decide whether or not it is malicious.
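A minimal sketch of that deployment difference, assuming a placeholder detection check: the same logic can only alert when run against a mirrored copy of traffic (IDS) but drops traffic when run inline (IPS).

```python
def looks_malicious(packet):
    # Placeholder detection; a real sensor would use signatures, anomaly
    # detection, protocol analysis, and more.
    return b"evil" in packet

def ids_on_mirrored_copy(packet):
    """Passive IDS: sees a mirrored copy of traffic, so it can only raise an alert."""
    if looks_malicious(packet):
        print("ALERT: suspicious traffic logged for analyst review")
    # The original packet has already been delivered; nothing is blocked.

def ips_inline(packet):
    """Inline IPS: sits in the traffic path and decides forward vs. drop."""
    if looks_malicious(packet):
        print("BLOCKED: connection terminated")
        return None           # drop the packet, ending the sender's session
    return packet             # forward harmless traffic

ids_on_mirrored_copy(b"GET /evil HTTP/1.1")
print(ips_inline(b"GET /index.html HTTP/1.1"))
```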

During this time, signatures were written to detect exploits, not vulnerabilities. For any given vulnerability there could be many ways to exploit it. Once criminals discovered a vulnerability, they could create more than 100 different ways to exploit it, forcing IDS vendors to write 100 or more different exploit signatures. When traffic matching one of these known exploit signatures appeared on the network, the IDS would send an alert to an administrator. Back then, IDS vendors would brag about how many signatures they had in their databases, thinking that the more signatures they had, the better they were compared to their competitors. However, signature count was not really an accurate gauge of the best IDS, as vendors also used other methods to detect threats, including pattern matching, string matching, anomaly detection and heuristic-based detection.

The Adoption of IPS, 2005

When IPS adoption began to grow in the latter part of 2005, more vendors began supporting it. As vendors competed for IPS business, they stopped bragging about the number of signatures in their databases. Because the IPS is inline, customers worried that all those signatures would slow down the network, since every connection would have to be checked against each exploit-based signature. IPS vendors instead began writing a single signature to cover each vulnerability, no matter how many exploits were affiliated with it. Vendors had discovered that loading more than roughly 3,500 signatures onto an IPS or IDS tended to degrade its performance. Today, vendors still pick the most relevant signatures, addressing current threats as well as older threats that hackers still use.
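The shift from exploit-based to vulnerability-based signatures can be illustrated with a hypothetical example. In the Python sketch below, the exploit signatures and the single vulnerability-oriented pattern are invented for illustration; the point is that one signature describing the vulnerable condition can catch exploit variants that per-exploit signatures miss.

```python
import re

# Exploit-based approach: one signature per known exploit variant (could be 100+).
EXPLOIT_SIGNATURES = [
    re.compile(rb"EXPLOIT_VARIANT_A\x90\x90"),
    re.compile(rb"EXPLOIT_VARIANT_B\xcc\xcc"),
    # ... dozens more variants targeting the same flaw ...
]

# Vulnerability-based approach: one signature describing the condition that
# triggers the flaw itself, e.g. an oversized field the vulnerable parser
# cannot handle, regardless of which exploit produced it.
VULN_SIGNATURE = re.compile(rb"USER\s+.{256,}")   # overly long USER argument

def matches_any(payload, signatures):
    return any(sig.search(payload) for sig in signatures)

payload = b"USER " + b"A" * 300    # a new, never-before-seen exploit variant
print(matches_any(payload, EXPLOIT_SIGNATURES))  # False -> missed by exploit signatures
print(bool(VULN_SIGNATURE.search(payload)))      # True  -> caught by the vulnerability signature
```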

To this day, intrusion detection and prevention systems (IDS/IPS or the combined IDPS) are changing and will likely continue to change as threat actors change the tactics and techniques they use to break into networks.

2006 - 2010: Adoption of Faster Combined Intrusion Detection and Prevention Systems

Security companies that offered IDS/IPS solutions stepped up the competition by taking IPS from 1 or 2 Gbps to 5 Gbps, providing the ability to monitor more network segments, including the DMZ, web farms (multiple servers hosting an application so that no single server becomes overloaded with traffic), and the area just inside an organization's perimeter, before an attack has an opportunity to reach the main network. An IPS operating at 5 Gbps had greater capacity to handle throughput on the device, allowing it to monitor more network segments, and later in this period many IPS platforms could provide up to 40-60 Gbps of protected throughput. Customers began switching from IDS to IPS, and IPS adoption was seeing double-digit revenue growth year over year.

When the Payment Card Industry Data Security Standard (PCI DSS) began requiring organizations that accept payment cards (credit cards) to install either an IDS or a web application firewall, many of those organizations purchased an IDS/IPS. By then, IPS technology had been more finely tuned and was much better at not blocking harmless traffic, so people began running these devices in IPS mode. Meanwhile, botnets were proliferating.

One way attackers were gaining control of users' computers and adding them to botnets was by planting malware on popular websites. If a user's browser plugin, such as Java or Adobe's Flash or PDF reader, contained a vulnerability, malware could be silently downloaded when the user clicked on a document or link that used that plugin. Additionally, in 2008 hackers were using iframe redirects on popular websites, like news sites, to send visitors to the hacker's site. If end users had vulnerabilities in their applications or web browser when they landed on one of those popular sites, the iframe code would redirect them to a malicious website. This required IDS/IPS vendors to provide additional countermeasures. In addition to pattern matching, string matching, anomaly detection and heuristic-based detection, vendors added intelligence to block known malicious command-and-control IP addresses as well as websites known to host malware, reducing the time it took to detect threats.
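A minimal sketch of that kind of reputation-based blocking, assuming hypothetical threat-intelligence lists of command-and-control IP addresses and malware-hosting domains; real products consume continuously updated feeds.

```python
# Hypothetical threat-intelligence lists (real products update these continuously).
KNOWN_C2_IPS = {"203.0.113.10", "198.51.100.77"}
KNOWN_MALWARE_HOSTS = {"malicious-redirect.example", "drive-by.example"}

def reputation_verdict(dst_ip, hostname):
    """Return a block/allow decision based purely on destination reputation."""
    if dst_ip in KNOWN_C2_IPS:
        return "block", "destination is a known command-and-control server"
    if hostname in KNOWN_MALWARE_HOSTS:
        return "block", "destination is known to host malware"
    return "allow", "no reputation match"

# An iframe on a compromised news site silently redirects the browser here:
print(reputation_verdict("203.0.113.10", "malicious-redirect.example"))
# ('block', 'destination is a known command-and-control server')
```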

2011 - 2015: Next Generation Intrusion Prevention Systems

The years between 2011 and 2015 were a massive turning point for IDS/IPS vendors as they began creating next-generation intrusion prevention systems (NGIPS), which included features such as application and user control. A traditional IPS inspects network traffic looking for known attack signatures and either alerts on the traffic or stops it from proceeding into your network, depending on how it has been deployed. An NGIPS does the same thing while providing broader coverage of network protocols to detect a wider range of attacks. It also provides application control to limit which parts of an application users can and cannot use (e.g., users may be able to post on Facebook but not upload photos), and user control, which allows only certain people to access a given application.
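A minimal, hypothetical sketch of application and user control expressed as a simple policy table; actual NGIPS products use their own policy languages, so the application names, actions, and groups here are illustrative only.

```python
# Hypothetical policy: which user groups may use which sub-features of an application.
POLICY = {
    ("facebook", "post"):   {"marketing", "support"},
    ("facebook", "upload"): {"marketing"},            # only marketing may upload photos
    ("ssh",      "login"):  {"it-admins"},
}

def is_permitted(app, action, user_groups):
    """Allow the action only if one of the user's groups is explicitly permitted."""
    allowed_groups = POLICY.get((app, action), set())
    return bool(allowed_groups & set(user_groups))

print(is_permitted("facebook", "post",   ["support"]))    # True
print(is_permitted("facebook", "upload", ["support"]))    # False -> photo upload blocked
print(is_permitted("ssh",      "login",  ["marketing"]))  # False -> user control denies access
```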

Another addition to IDS/IPS came about after the 2011 breach of RSA, the security company widely known for its two-factor authentication product. The media referred to the attack as an advanced persistent threat (APT) and later reported that the breach resulted from a phishing attack that carried a document tainted with malware. Organizations began asking security vendors whether they could protect them from a document or executable with embedded malware when the vendor had no signature for that malware. Most security vendors with an IDS/IPS offering could not. Customers were clamoring for an APT remedy, so vendors decided the best fix would be to add sandboxing and/or emulation capabilities to the IDS/IPS, but redesigning the hardware would take anywhere from 12 to 18 months.

Meanwhile, companies like FireEye and Fidelis were growing. They were delivering a device with sandboxing or emulation capability that no other network security vendor had at the time. The sandbox was a whole new technology category aimed at finding zero-day malware. Traffic that contained documents or executables, arriving via web or email, was sent to a breach detection system, where the documents and executables would automatically be opened in the sandbox. When anything was found to be malicious, the sandbox would send an alert to an administrator. Although IDS/IPS vendors would not be able to build that type of feature for at least a year, they began using MD5/SHA checksums of known bad files. Each file has a unique checksum. Checksums (also known as hashes) are fixed-length strings of letters and numbers computed from a file's contents, which can be used to verify the integrity of files and messages. If a file entering the network matched a checksum the vendor had on file, the IDS/IPS would alert the organization that malware had just entered the network. Back then, this was a milestone for protecting networks.
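A minimal sketch of that checksum-based approach: hash a file and compare the result against a known-bad list. The hash entry and file path below are placeholders; real deployments use MD5/SHA feeds of confirmed malware.

```python
import hashlib

# Hypothetical feed of checksums for files already confirmed malicious.
KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder entry; real feeds contain hashes of confirmed malware
}

def sha256_of_file(path):
    """Compute the SHA-256 checksum of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path):
    return sha256_of_file(path) in KNOWN_BAD_SHA256

# Example: check an email attachment before it reaches the user (path is illustrative).
# print(is_known_malware("/tmp/invoice.pdf"))
```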

Although Gartner coined the term "next-generation firewall" in 2003 and predicted that such devices would include IPS features and be available by 2006, NGFWs were not widely adopted until 2013. At that point they began including IDS/IPS functionality, such as using signatures to identify known attacks and looking for anomalies and protocol deviations in the packet flow.

2016 - 2020: Next Generation Firewalls and an Evolving Landscape

Major attacks like WannaCry in 2017 and the SolarWinds breach in 2020 highlighted the need for more advanced solutions. Most enterprises had adopted NGFWs by the latter half of the 2010s, and vendors continued to expand features like real-time prevention, identity management, and sandboxing.

Real-time prevention helps eliminate risk, damage, and cost to the organization by blocking malware and other threats before they enter the network. Rather than controlling network traffic based only on source and destination addresses, NGFWs operate dynamically and immediately to detect and mitigate threats as they occur.

Identity management provides robust identification and access control for the distributed workforce by differentiating between legitimate and illegitimate users, applications, and devices. Full visibility into entities inside an organization's environment, regardless of location or platform (on-prem, cloud, mobile, endpoints, or IoT devices), provides more robust security across all network traffic.

As covered earlier, sandboxing became a popular technique earlier in the decade, and as machine-learning algorithms became more reliable, NGFWs grew more powerful, moving beyond static rule-based approaches to analyze patterns, detect anomalies, and even predict potential attacks based on historical data and continued model tuning.
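As a rough illustration of moving beyond static rules, the following sketch trains scikit-learn's IsolationForest on a handful of hypothetical network-flow features and flags an out-of-profile flow as anomalous. The features, data, and parameters are invented for illustration and are far simpler than what an NGFW actually uses.

```python
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [bytes_sent, packets, distinct_ports_contacted]
normal_flows = [
    [1200, 10, 1], [900, 8, 1], [1500, 12, 2], [1100, 9, 1],
    [1300, 11, 1], [1000, 9, 2], [1400, 12, 1], [950, 8, 1],
]

# Learn a profile of "normal" flows rather than matching static signatures.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow that contacts hundreds of ports (e.g., a scan) scores as anomalous (-1).
print(model.predict([[1250, 11, 1], [80000, 600, 450]]))   # e.g. [ 1 -1]
```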

While NGFWs and other network security controls had previously focused on perimeter defenses, the proliferation of IoT devices (often lacking built-in security) and then the shift to remote work accelerated by the pandemic necessitated a shift in general network security approaches. Secure remote access became more of a priority, with greater visibility, control, and automated response for both IoT and remote devices.

2021 - Onward: Machine Learning, AI and Network Detection and Response (NDR)

Organizations today continue to experience the same pain points that necessitated network security solutions in previous decades, and in some ways those pains have been amplified by the rise of cloud and remote work. Network teams are limited, and few organizations have 100% up-to-date coverage on their endpoints. Traditional firewall alerts that require manual action cannot respond to threats fast enough. This is why organizations are not only investing in NDR solutions but also seeking ways to integrate the data and automation into a unified security operations platform like extended detection and response (XDR).

In addition, most security solutions now integrate advanced machine learning algorithms to improve detection rates and reduce false positives. The use of artificial intelligence (AI) also allows for more adaptive and proactive responses to detected threats. Signature-based detection and blocking alone is no longer sufficient to ensure network security threats are detected and responded to effectively. Cybercrime is a constantly evolving threat, and organizations have responded by evolving their IT and security infrastructures. As remote work has become common and more applications move to the cloud, network traffic volumes are high, and threat actors can easily mask their behavior in the sheer volume of traffic.

The result has been an overwhelming volume of alerts, and traditional firewalls that require manual response actions are not able to respond fast enough. Security teams are overwhelmed, while security and business leaders are looking to maximize their resources as their organizations struggle to keep up.

Prevention is a fundamental defense against network security threats, but as threat actors develop more sophisticated methods for evading detection and enabling breaches, network security solutions must evolve to include robust response capabilities. Network Detection and Response (NDR) solutions provide visibility into network activity, identifying threats with advanced detection capabilities and machine learning algorithms. Automated response capabilities can reduce the workload for security teams by relieving the burden of manual follow-up.

NDR solutions are also being tightly integrated with other security solutions, such as security information and event management (SIEM) systems, XDR platforms, and threat intelligence services. This integration can provide a more holistic view of an organization's security posture and enable more coordinated incident response efforts. It also includes adapting to monitor and protect assets in cloud and virtual environments, which could mean new deployment models optimized for cloud architectures and the ability to monitor inter-container traffic in virtualized environments.

As a result, NDR solutions are growing in popularity, though not all have the same robust features. Secureworks® Taegis™ NDR goes beyond traditional network security by combining powerful network detection and response capabilities with advanced threat prevention and included device management. Integrated threat prevention and response, including automated threat blocking across both east-west and north-south network traffic, can greatly reduce an organization's risk. Included AI-powered threat detection analyzes network traffic for anomalous application and port usage, identifying potential internal and external threats before they can cause harm, such as data exfiltration or ransomware attacks. Taegis NDR also eliminates the burden of device management, so limited internal resources can be deployed elsewhere.

Learn more about Taegis NDR or contact us directly.

[1] https://en.wikipedia.org/wiki/Expert_system