Is watching network traffic obsolete?

Check for bad effects instead of suspicious traffic

Having been part of the security industry for many years, we loved watching all of the traffic coming into and going out of a network, or even individual servers. There was never enough data, and as security folks we wanted to watch every inbound and outbound packet because, well, it could be malicious. We'd copy traffic whenever necessary, sift through it, and try to find events that could be the work of attackers. We'd even throw piles of costly hardware at the problem just to review traffic in real time. All of this was done to achieve one goal: find attacks before they entered the network and did damage.

Specifically, we would place sensors at key choke points in the network, or anywhere we could see a great deal of the traffic. The sensors could run in what was called detection mode, where a copy of the data was sent to the sensor, or in prevention mode, where all of the traffic passed through the device. Either way, the sensor would review each packet, collection of packets, or user session against a set of known-bad “signatures,” or simply try to identify anomalous behavior. The theory was very similar to anti-virus technology: signatures for known-bad packet flows would be written and propagated to all customers. The industry got so good at this that it generated tens of thousands of signatures.
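
To make the signature idea concrete, here is a minimal sketch in Python of what that style of inspection boils down to. The pattern names and byte sequences are made up for illustration; real IDS/IPS engines use far richer rule languages, protocol decoding, and stream reassembly.

```python
# Minimal illustration of signature-based inspection (hypothetical patterns,
# not a real IDS rule set): scan each payload for known-bad byte sequences.
SIGNATURES = {
    "fake-exploit-shellcode": b"\x90\x90\x90\x90\xcc",   # placeholder pattern
    "fake-sql-injection":     b"' OR '1'='1",            # placeholder pattern
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of any signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# A sensor in detection mode would run this over a copy of every packet;
# in prevention mode it would drop the packet when the list is non-empty.
if __name__ == "__main__":
    sample = b"GET /login?user=admin' OR '1'='1 HTTP/1.1"
    print(match_signatures(sample))  # ['fake-sql-injection']
```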

While this worked well in the early days of the industry, over time the approach lost steam. There were, and still are, fundamental issues with watching all of the traffic and pattern matching against it (and today’s cloud-based environment only exacerbates the problem).

The first issue is accuracy. As anyone in the security industry knows, the rate of false positives with this approach is incredibly high. More time is spent chasing down false alerts than investigating legitimate attacks.

The second issue with the traffic analysis approach is that it generally cannot detect what are called “zero-day” attacks—attacks for which there are no known signatures. For many organizations with valuable data, this becomes a non-starter as they are under tremendous direct, sustained attack. They need to know that their security technology will find the attacks.

Third, there’s simply too much traffic to watch these days, especially when most of the resulting events are false alarms.

A final significant issue is that this approach doesn’t work well with the cloud. Cloud providers own the network, and placing a “sensor” in the direct path of traffic is not so easy. Most cloud providers don’t allow you to place hardware appliances into their environment, and software solutions often can’t keep up with the explosion of traffic that many organizations have experienced. In short, a once useful approach to detecting issues has become cumbersome and outdated.

A new approach that many organizations are now focused on is to generate telemetry events from servers that show evidence of compromise. Top-tier DevOps and IT pros look for the tell-tale signs of an incident rather than searching for it in the haystack of traffic. This focuses attention on systems that exhibit the behavior of a compromise, such as changes to critical files or registry settings, outbound connections to suspicious servers, and unusual user logins and activity. Each of these “telemetry” events can be correlated with others to surface serious incidents.
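
As a rough sketch of what one such telemetry source could look like, consider a host agent that hashes a few critical files and emits an event whenever a hash drifts from a stored baseline. The file paths and the baseline.json format here are hypothetical, chosen only to show the shape of the data; they are not any specific product's format.

```python
# Sketch of a file-integrity telemetry check (hypothetical paths and baseline
# format): hash each critical file and emit an event when it differs from the
# recorded baseline.
import hashlib
import json
from pathlib import Path

CRITICAL_FILES = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative paths

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def integrity_events(baseline: dict[str, str]) -> list[dict]:
    """Compare current hashes to a stored baseline and return change events."""
    events = []
    for path in CRITICAL_FILES:
        current = sha256_of(path)
        if baseline.get(path) != current:
            events.append({"type": "critical_file_changed", "path": path,
                           "expected": baseline.get(path), "observed": current})
    return events

if __name__ == "__main__":
    baseline = json.loads(Path("baseline.json").read_text())  # hypothetical file
    for event in integrity_events(baseline):
        print(json.dumps(event))  # ship these to your correlation pipeline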

Because these events represent actual conditions on a system, there are few false positives. For example, the presence of a malicious file on a system is a binary event: either it is there or it isn’t, and if it is, it represents a threat. A user login from an overseas IP address while the employee is standing next to you is also a significant event. A malicious command running on a server is likewise a telemetry event that denotes a problem. By focusing on actual events on a system, DevOps and IT admins can spend their energy on real threats rather than chasing down false positives.
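
For instance, the login-location check described above might look something like the following sketch. The helper names, the expected-country directory, and the stubbed geolocation lookup are all assumptions for illustration; a real agent would pull identity data from a directory service and IP geolocation from a GeoIP database.

```python
# Sketch of a login-location telemetry check (hypothetical helper names):
# flag any login whose source country differs from the user's expected country.
EXPECTED_COUNTRY = {"alice": "US", "bob": "US"}  # illustrative directory data

def login_events(logins, geolocate):
    """logins: iterable of (user, source_ip); geolocate: ip -> country code."""
    events = []
    for user, ip in logins:
        country = geolocate(ip)
        if EXPECTED_COUNTRY.get(user) and country != EXPECTED_COUNTRY[user]:
            events.append({"type": "anomalous_login", "user": user,
                           "source_ip": ip, "country": country})
    return events

# Usage with a stubbed geolocation lookup (a real agent would query a GeoIP DB):
if __name__ == "__main__":
    fake_geo = {"203.0.113.7": "RU", "198.51.100.2": "US"}
    print(login_events([("alice", "203.0.113.7"), ("bob", "198.51.100.2")],
                       fake_geo.get))
```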

As organizations move further into the cloud, watching traffic for security events is becoming even more difficult, time-consuming, and ineffective. In short, it’s obsolete. There’s just too much data with a very low signal-to-noise ratio. Changing the approach to look at real indicators of compromise eliminates the majority of the noise and drives toward accurate detection of issues. The future involves rapid, accurate detection of compromises followed by quick remediation. See whether this approach works better in your environment; it could save you significant time and resources.
