A Journey to Full Cyber Event Response

The journey towards automating detection and containment or eradication is an ongoing one. While many organizations aim to automate much of their containment capability, it's important to emphasize that full automation isn't necessarily desirable, and it certainly doesn't happen overnight; any level of automation has its pros and cons. The journey starts with gaining a comprehensive understanding of how a given automation component, such as host isolation, can be effectively integrated into an existing process, and with carefully weighing that component's benefits against its risks. Predicting and prioritizing every potential issue without human oversight is a complex endeavor, so we should not proceed with any automation project where the assessed risk exceeds our risk tolerance. Regardless of the level of automation employed, the security operations team still needs to review the alerts raised by the SIEM and SOAR systems.
To further elaborate, automation entails several steps, starting with fine-tuning data ingestion and correlation rules and refining playbooks. This involves configuring each tool to respond to a broad range of potential anomalies and alerts. The level of automation that can be achieved is contingent on several factors, including the architecture of the environment, the maturity of the organization's security processes, and its risk tolerance. The aim is to strike a balance that fits the organization's specific requirements. While automation can handle many routine tasks effectively, there will always be scenarios where human expertise and oversight play a vital role. The prudent approach is to progressively increase automation while maintaining the flexibility to adapt to unforeseen situations and evolving threats.
So how are threats remediated or contained? Such activities are carried out through the SOAR's extensive playbook capabilities. For example, LogPoint provides over 2,000 distinct correlation rules and playbooks. A significant number of these playbooks can be used as a baseline, automating a large number of responses (such as isolating a host). In addition, a SOAR provides a graphical interface through which every playbook can be further fine-tuned for a given organization.
Here are various types of playbooks that can be leveraged:
Detect: These playbooks enrich the data needed to determine the details of an alert. It is during this stage that additional context is gathered to either mark an alert as a false positive or escalate it for additional attention.
Investigate: These playbooks gather relevant information to facilitate an investigation. For example, they can detail any known history of a suspect IP address based on internal or public threat intelligence.
Respond: These playbooks carry out an action. For example, they can disable a user account or drop a connection.
Custom: Using drag and drop, a trained operator can create custom playbooks. This is where a strong partnership with your SOAR vendor makes a real difference, as it lets you build, in a short time frame, the volume of custom playbooks your operating environment and needs demand. An example is a playbook that interfaces directly with the ERP, EMR, or SIS to look up a person's email address.
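To make the Detect/Respond split concrete, here is a minimal sketch of a host-isolation playbook. Everything in it is illustrative: the alert fields, the risk threshold, and the `StubEDR` integration are hypothetical stand-ins, not a real SOAR or EDR API; in practice these steps would be assembled graphically from the vendor's built-in actions.

```python
# Hypothetical playbook sketch: enrich an alert (Detect stage), then decide
# whether to isolate the host (Respond stage) or escalate to an analyst.
# Field names, thresholds, and the EDR client are illustrative assumptions.

def enrich_alert(alert, known_bad_ips):
    """Detect-stage step: add threat-intel context to the raw alert."""
    alert = dict(alert)
    alert["known_bad_ip"] = alert.get("src_ip") in known_bad_ips
    return alert

def run_isolation_playbook(alert, edr, known_bad_ips, risk_threshold=70):
    """Respond-stage step: isolate the host only when the risk justifies it."""
    alert = enrich_alert(alert, known_bad_ips)
    risk = alert.get("risk_score", 0) + (30 if alert["known_bad_ip"] else 0)
    if risk >= risk_threshold:
        edr.isolate(alert["host"])       # automated containment action
        return "isolated"
    return "escalate_to_analyst"         # below threshold: human review

class StubEDR:
    """Illustrative stand-in for an EDR integration."""
    def __init__(self):
        self.isolated = []
    def isolate(self, host):
        self.isolated.append(host)
```

A low-risk alert falls through to `escalate_to_analyst`, which reflects the earlier point: automation handles the clear-cut cases, while humans retain oversight of the ambiguous ones.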
In addition to the above, experience using unsupervised ML models to detect anomalous behavior is important. These models are packaged, together with classification logic, into detectors. The unsupervised models plot activity in a multi-dimensional coordinate system, cluster similar activity, and measure the distance between expected and observed behavior; the models used are primarily peer-grouping models. Further AI models assign risk scores to observations and classify the risks. By examining the sequence of alerts assigned to a case and evaluating how unlikely that specific sequence is, combined with human-authored detection rules, the system's ability to assign risk scores to cases increases drastically.
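The distance idea behind peer grouping can be sketched in a few lines. This is a toy version under stated assumptions: real detectors use richer feature sets and learned clusters, whereas here the peer group's centroid stands in for "expected" behavior and the score is a plain Euclidean distance measured in standard-deviation units.

```python
# Toy peer-grouping score: how far does one entity's activity vector sit
# from its peer group's centroid, normalized per feature? Feature choices
# (e.g. [logins per day, MB downloaded]) are illustrative assumptions.
import math
from statistics import mean, pstdev

def peer_group_score(peer_activity, observation):
    """Return a z-score-like distance of `observation` from the peer centroid.

    peer_activity: list of feature vectors for the peer group
    observation:   feature vector for the entity being scored
    """
    dims = len(observation)
    centroid = [mean(v[i] for v in peer_activity) for i in range(dims)]
    # Per-feature spread; fall back to 1.0 if a feature never varies.
    spread = [pstdev(v[i] for v in peer_activity) or 1.0 for i in range(dims)]
    return math.sqrt(sum(((observation[i] - centroid[i]) / spread[i]) ** 2
                         for i in range(dims)))
```

An observation close to its peers scores near zero; an entity that suddenly downloads orders of magnitude more data than its peer group scores far out, which is exactly the "unusualness" signal the risk-scoring models consume.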
AI models are retrained as new threats are uncovered. When analyst feedback indicates a good or bad classification, or security researchers uncover new attacks or new ways to detect them, that knowledge can be fed back into the models. Notably, such models are applicable across multiple SIEM technologies; for example, we tested transferring learning between LogPoint and Splunk instances and vice versa.
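The shape of that feedback signal can be illustrated with a deliberately simple sketch: analyst verdicts nudge a per-rule confidence weight up or down. This is an assumption-laden toy, not how any particular vendor retrains its models; production systems would fold verdicts into full model retraining rather than a single scalar weight.

```python
# Illustrative feedback loop: analyst verdicts ("true_positive" /
# "false_positive") adjust a per-rule confidence weight. The step size,
# the 0.5 neutral start, and the rule IDs are all hypothetical.

def apply_feedback(weights, rule_id, verdict, step=0.1):
    """Return an updated weights dict after one analyst verdict on `rule_id`."""
    w = dict(weights)
    current = w.get(rule_id, 0.5)           # new rules start at neutral confidence
    if verdict == "true_positive":
        current = min(1.0, current + step)  # reward rules that find real threats
    elif verdict == "false_positive":
        current = max(0.0, current - step)  # demote noisy rules
    w[rule_id] = round(current, 3)
    return w
```

Over many verdicts, consistently noisy rules drift toward zero and can be surfaced for tuning, while reliable rules earn more weight in the case scoring.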
When selecting a vendor, it is worth checking whether they offer detection logic based on AI. Since the ML models assign a degree of "unusualness" to behaviors, classification logic can assign a final score to each observation, greatly reducing the analysts' triage time.
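One way to picture that classification layer: bucket the ML unusualness score, then weight it by how critical the affected asset is, yielding a single triage score. The thresholds, buckets, and criticality multipliers below are illustrative assumptions, not any vendor's actual logic.

```python
# Sketch of classification logic layered on an ML "unusualness" score
# (assumed to be normalized to 0-1). All cut-offs and multipliers are
# hypothetical values chosen for illustration.

def triage_score(unusualness, asset_criticality):
    """Map an unusualness score and asset criticality to a 0-100 triage score."""
    if unusualness < 0.3:
        base = 10   # expected behavior: low priority
    elif unusualness < 0.7:
        base = 50   # somewhat unusual: worth a look
    else:
        base = 80   # highly unusual: investigate now
    multiplier = {"low": 0.8, "medium": 1.0, "high": 1.25}[asset_criticality]
    return min(100, round(base * multiplier))
```

The payoff is the triage reduction mentioned above: analysts sort their queue by one number instead of re-deriving context for every raw anomaly.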