On average, an IT executive has only seven minutes to determine whether their organization is under attack. That figure comes from a survey of more than 400 IT executives in the UK, France, Germany and Hungary, which asked respondents about their ability to process and act on the information in security alerts.

Today, most organizations keep log files that record numerous security events from the operating system, applications and users’ actions. In fact, an astonishing 198 million logs are collected per day on average from IDS, DLP, SIEM and other user monitoring systems. For organizations trying to sift through that volume and separate genuine threats from false alerts, it’s like finding the proverbial needle in a haystack.

Good data is needed for good analysis, so it is crucial to collect all relevant logs from all possible platforms in order to structure and classify them for further analytics.

Good News and Bad News

Organizations need to collect and store all log messages so that a forensic investigation is possible in the event of a breach. Processing and analyzing logs at this scale, however, would require a huge task force and be quite costly, so organizations need processes in place to help them determine which log messages are actually relevant.

According to the aforementioned survey, organizations are only able to process about a third (31%) of their log messages, so they need to decide where to focus their efforts. One option is fine-tuning the alerting systems and continuously updating their rules and patterns. This yields fewer alerts, but a higher proportion of them will be relevant.

On the other hand, if log analysis is tuned too aggressively to reduce false positives, the rate of false negatives (real attacks that go unflagged) is likely to increase. It is important to strike the right balance rather than blindly optimizing for a low false-positive rate.
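The trade-off can be made concrete with a toy example. Suppose each event carries an anomaly score and an alert fires whenever the score crosses a threshold; the scores and attack labels below are made-up data, but they show how tightening the threshold trades one error type for the other.

```python
# Toy data: (anomaly_score, is_real_attack). Entirely fabricated for illustration.
events = [
    (0.95, True), (0.90, True), (0.70, True), (0.60, False),
    (0.55, True), (0.40, False), (0.30, False), (0.20, False),
]

def rates(threshold: float) -> tuple[int, int]:
    """Count false positives and false negatives at a given alert threshold."""
    false_positives = sum(1 for score, attack in events
                          if score >= threshold and not attack)
    false_negatives = sum(1 for score, attack in events
                          if score < threshold and attack)
    return false_positives, false_negatives

# A loose threshold floods analysts with false positives...
print(rates(0.25))  # (3, 0)
# ...while an over-tightened one silently misses real attacks.
print(rates(0.80))  # (0, 2)
```

Neither extreme is acceptable on its own, which is why tuning has to weigh the analyst time wasted on false positives against the cost of a missed attack.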