Advanced Defense Posture Assessment

Advanced Defense Posture Assessment: Analytical Tradecraft to Evolve Detection Capability and Precision

NG-OPS Advanced Defense
Defensible Security Posture
APT Detection Framework

Multi-dimensional targeted threats continue to evolve, exploiting vulnerabilities and causing significant loss of data and resources for organizations of every region and size. These attacks are very much today’s news. They represent a danger to an organization’s intellectual property, financial assets and reputation.

Advanced Defense Posture
The era of advanced threats calls for a new approach to information security. When dedicated cyber adversaries have the means and methods to elude commonly used defenses, such as signature-based detection, it is clear that conventional approaches are no longer sufficient.

“Breaches happen in hours but often go undetected for weeks or even months.”

Advanced targeted threats present challenges that are distinct from traditional security risks. There are too many entry points in today’s virtual enterprise, too many individual endpoint security solutions triggering alerts, too much security threat intelligence to process in real-time. More importantly, there are too few trained personnel who can spot and respond to advanced threats.

The pain points all cry out for a common holistic solution: Advanced Defense, based upon Actionable Intelligence and ever-evolving Analytical Tradecraft to continually improve detection capability and precision.

Detection Precision versus Cost

This blog is part of the new Smart Practices Series, complemented by the NG-OPS Advanced Security Series, which will drill into greater detail on the methodology and concepts behind these proposed advanced best practices. Advanced Defense takes your organization to the next level of detection capability.

Potential Benefits

  • Baseline and Validate Defensible Security Posture
  • Benchmark against Advanced Defense Reference Architecture (see NG-OPS Advanced Defense series)
  • Identify Gaps in Detection Capability, Visibility, Precision
  • Develop Advanced Defense Strategy & Roadmap with Continuous Analytical Improvement

Features

  • Leverage Intrusion Kill Chain
  • Advanced Defense Reference Architecture
  • APT Detection Framework
  • Defensible Actions Matrix
  • Develop Advanced Defense Strategy & Roadmap

Advanced Defense

Today’s cyber security paradigm is a reactive cycle: when a threat is exposed, it is analyzed and a counter-solution is designed with response times varying from weeks to years. The trouble is that attackers can easily reuse pieces of previous malware, modify them, and create a brand new threat, bypassing the newly updated security measures.

In today’s threat environment the only constant is change. In fact, everything is changing – the way our users work, the types of adversaries we face, and the techniques hackers use to infiltrate our networks. These threats are more sophisticated than ever, bringing new risks and uncertainties that demand greater operational visibility.

The Attacker versus Defender View

The need for an Advanced Defense mindset is evident across the industry. Technologies will continue to improve, but in parallel we need to evolve and improve our detection capability and precision, our processes, and our invaluable resources and skills.

“Attackers are constantly evaluating their methods and improvising new techniques.
Defenders must think in those same fluid terms to keep pace.”

Advanced Defense Posture (ADP) Assessment

An ADP assessment evaluates your organization’s evolving ability to detect, contain, investigate and respond to a targeted or advanced threat. The assessment methodology is designed to help organizations:

  • Understand defensible security posture
  • Benchmark and validate ability to address stealthy targeted threats
  • Take proactive actions to continually improve detection capability and precision
  • Use a set of indicators or behaviors to enhance situational awareness.

Leveraging the Intrusion Kill Chain

The Advanced Defense Posture assessment makes use of the Intrusion Kill Chain. In any targeted attack there is typically a pre-defined set of phases that acts as a ‘signature’. What matters is not that this is a linear flow – some phases may occur in parallel, and the order of earlier phases can be interchanged – but rather how far along an adversary has progressed, so that the organization can quickly detect, contain and respond.

Simplified View of Intrusion Kill Chain

The intrusion kill chain becomes a model for actionable intelligence to help align organizational defensive capabilities to the specific processes an adversary undertakes to target your organization.

The end goal is to analyze the data for patterns of attack methods, behaviors of distinct hostile actors, and other indicators that can inform the development of unique, adaptive and agile responses. The assessment addresses key questions, for example:

  • What scenarios do we need to be able to detect?
  • What are our options for detecting them?
  • What are the strengths and weaknesses of our detection program today?
  • What is our detection stance against specific actors?
  • What is our overall plan for detection across our enterprise?

ADP Assessment Methodology

The ADP assessment process should include:

Advanced Defense Posture Assessment Methodology
  • Baseline Current Defensive Posture
    • Conduct discovery sessions to clearly identify defensible architecture, key assets/services and posture
    • Document the baseline across the Intrusion Kill Chain using the APT Detection Framework
  • Reference Architecture Analysis
    • Identify tools, tactics and techniques gaps and improvements in detection capability/precision, using the Advanced Defense Reference Architecture to establish the goal (see NG-OPS Advanced Defense series)
  • Identify Defensible Actions Matrix
    • Determine the detection toolset, i.e., the tactics, techniques and procedures to Detect, Deny, Contain, Disrupt, Eradicate, Deceive or Recover (a simple sketch of such a matrix follows this list)
  • Develop Advanced Defense Strategy & Roadmap
    • Develop an Advanced Defense Strategy & Roadmap to remediate gaps, deploy improvements and leverage continuous improvement (see NG-OPS Advanced Defense series)

Conclusion

Recent incidents clearly demonstrate that cybercriminals can conduct operations involving intrusion, lateral movement, and data exfiltration in complex networks secured to current best practices. Attackers can adapt their techniques to the unique circumstances of the targeted environment.

This level of resourcefulness points to the realization that current best practices and regulatory compliance are a necessary minimum baseline, but are not sufficient alone. There is an increasing need for organizations to progressively evolve from their current security posture to a more defensible, advanced defense program with visibility, validation and vigilance.

My solutions include the adoption of a security architecture and design foundation that compartmentalizes breaches into managed zones on networks and on endpoints. Strategically leverage the Adaptive Zone Defense series of blogs to develop an innovative architectural foundation with well-organized applications and services, managed communications, and good visibility into the flows and logs that can actually detect cyber kill chain activity and stop the breach.

This requires an ongoing lifecycle process, with evolving actionable intelligence and analytical tradecraft, that consolidates the now-legacy, rapidly deployed and complex infrastructure into a new core foundation based on the architecture/design blueprint, while continually evolving that blueprint to reflect new business requirements, technology solutions and regulatory requirements. For more information see: Adaptive Security Lifecycle.

Coming Soon

  • APT Detection Indicators – Part 4: Behavioral Indicators Lifecycle
  • APT Threat Analytics – Part 3: Targets, Threat Actors, Scenarios & Modeling
  • NG-OPS Advanced Defense – Part 2: Analytical Tradecraft Practices
  • NG-OPS Advanced Defense – Part 3: Network Profiling and Validation

Thanks for your interest!

Nige the Security Guy.

APT Detection Indicators – Part 3

APT Detection Indicators – Part 3: Command & Control Channels

APT Detection Indicators – Part 1
APT Detection Indicators – Part 2

When securing a network, most organizations are more concerned with controlling inbound traffic than outbound traffic. However, outbound traffic is a significant risk, used by malware and targeted attackers as channels for Command and Control (C&C) as well as data exfiltration.

Understanding C&C and C&C channels is critical to effectively detect, contain, analyze, and remediate targeted malware incidents. Malware allows attackers to remotely control infected computers via C&C channels. These activities pose a threat to organizations and can be mitigated by detecting and disrupting C&C channels on the network.

This APT Detection Indicators – Part 3 blog covers the following:

  • Risks associated with Outbound Traffic
  • Typical Command and Control Channels
  • Techniques used to circumvent controls
  • Methods for detecting and preventing evasion techniques

There is no way to eliminate all risk associated with outbound traffic short of closing all ports, since attackers are very creative in hiding their activities, testing for available protocols to tunnel over and leveraging various obfuscation techniques. However, a good understanding of the techniques and risks should enable organizations to detect abnormalities (also see: APT Anomaly Detection) and make informed decisions on improving and fine-tuning egress policy.

It is vital to practice heightened operational awareness around critical data and assets. Organizations should segment and wrap critical data within the deeper protection of well-monitored infrastructure (also see Adaptive Zone Defense). In addition, layered defensive tactics (multiple layers and means of defense) can prevent security breaches and buy an organization time to detect and respond to an attack, reducing the consequences of a breach.

A Recap on Malware

Malicious software, also known as malware, has existed for almost as long as computers have been around. A lot of effort has been put into stopping malware over the years, but it remains a growing pandemic: every day, a huge amount of new malware is released.

Command and Control Channel Establishment

Botnets

Botnets consist of computers infected with malware which are called bots. These bots connect to a C&C infrastructure to form a bot network or botnet. The C&C infrastructure allows the attacker to control the bots connected to it. Bots can be instructed to steal user data, (financial) credentials or credit card details from the infected computers. A large group of bots can be used to perform a Distributed Denial of Service (DDoS) attack and bring down a server. Criminals also sell bot access to other criminals.

Targeted Attacks

In the case of a targeted attack the attacker wants to infect a specific organization. This is quite different from the regular botnets described above, where the criminal is not interested in which machines they infect. The goal of a targeted attack can be to steal certain data from the target or sabotage target systems.

This is achieved by infecting one or just a few computers with malware which contacts a C&C server. The C&C server allows the attacker to remotely control the infected computers. The control functionality can be used to infect other computers or to search for documents the attacker is interested in. After the data of interest has been found, the attacker gives instructions to exfiltrate it. The exfiltration usually happens via a channel separate from the C&C channel.

Detecting targeted attacks is much harder than detecting untargeted attacks. The malware is only sent to a few targets, making anti-virus detection unlikely, as antivirus vendors are unlikely to obtain a sample of the malware. Detecting the C&C traffic also becomes harder as Intrusion Detection System (IDS) signatures for malware are unlikely to be available and the C&C infrastructure is less likely to appear on any blacklists.

Simple malware may be caught by sandboxes, which are useful pieces in Solving the APT Defense Puzzle. But in the case of targeted attacks, the malware authors test their attacks before releasing them, making it more difficult to detect, classify, and attribute APT threats via sandbox-based methods. Detection of targeted attacks therefore relies heavily on heuristics or human inspection as the last line of defense.

Malware C&C Network Protocol Usage

Command and Control channels can vary widely in their complexity, ranging from simple HTTP requests to a malicious domain up to more complicated approaches involving resilient peer-to-peer technologies that lack a centralized server and are consequently harder to analyze. A small group of malware uses TLS to encrypt (some of) its communication. It is interesting to note that almost all of this TLS traffic is described as HTTPS traffic. Furthermore, most of the known samples fail to complete the TLS handshake, which may indicate that the malware does not actually implement TLS, but merely communicates on a port normally used for TLS connections – a very typical tactic.

Advanced Threat Actor using C&C Channel Example

C&C Channel Detection Techniques

The following are some examples of C&C channels and the techniques used to detect them. We will explore this topic in greater detail in future blogs together with the use of open-source tools.

Blacklisting

A simple technique to limit access to C&C infrastructure is to block access to IP addresses and domains which are known to be used by C&C servers.

Signature based

A popular technique for detecting unwanted network traffic is to use a signature-based Intrusion Detection System (IDS). The advantage of signature-based detection is that known bot traffic can be easily detected if malware researchers have created a signature. The disadvantage is that bots often obfuscate or encrypt their traffic, which makes it much harder or even impossible to write a signature.

DNS protocol based

Malware needs to know the IP address of the C&C infrastructure to communicate. This address can be hard-coded, or it can be retrieved from a domain name. Using a domain name provides more flexibility, as it allows the attacker to change the IP address easily. The infected computer doesn’t even need outbound connectivity, as long as it can resolve the host name through a local DNS server that performs recursive lookups on the Internet. DNS has been involved in two recent large-scale breaches that resulted in the compromise of millions of accounts.

Network administrators should look for the following (a short sketch of two of these checks appears after the list):

  • DNS responses with a low to very low TTL (time to live) value, which is somewhat unusual
  • DNS responses containing a domain that belongs to one of a long list of dynamic DNS providers
  • DNS queries issued more frequently by the client than would be expected given the TTL for that hostname
  • DNS requests for a hostname outside of the local namespace that are answered with a resource record pointing to an IP address within 127.0.0.0/8, 0.0.0.0/32, RFC1918 IP space, or anywhere inside the public or private IP space of the organization
  • Consecutive DNS responses for a single unique hostname that contain only a single resource record, but where that record changes more than twice every 24 hours
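
The sketch below illustrates two of these checks (a very low TTL, and an external name resolving into loopback or RFC1918 space) applied to already-parsed DNS responses. The record layout and the local domain suffix are assumptions about your log pipeline, not a standard format.

```python
import ipaddress

# Hedged sketch of two DNS checks from the list above. The (hostname,
# ttl, answer_ip) layout and local_suffix are assumptions about your
# environment and log pipeline.

INTERNAL_RANGES = [
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def flag_response(hostname, ttl, answer_ip, local_suffix=".corp.example.com"):
    """Return the reasons (if any) this DNS response looks suspicious."""
    reasons = []
    if ttl < 300:  # very low TTL is somewhat unusual for legitimate zones
        reasons.append("low TTL (%d s)" % ttl)
    ip = ipaddress.ip_address(answer_ip)
    external_name = not hostname.endswith(local_suffix)
    if external_name and any(ip in net for net in INTERNAL_RANGES):
        reasons.append("external name resolves to internal/loopback space")
    return reasons

print(flag_response("updates.evil-dyndns.example", ttl=30, answer_ip="127.0.0.1"))
# -> ['low TTL (30 s)', 'external name resolves to internal/loopback space']
```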

Maintaining a DNS server and C&C server at a fixed address increases the chance that it will be taken down. Therefore, attackers have started using fast-flux domains: domains for which the owner rapidly changes the IP address to which the domain points and, optionally, the IP address of the DNS server as well.

IRC protocol based

First-generation botnets used Internet Relay Chat (IRC) as the channel for a central command and control mechanism. The bots connect to IRC servers and channels selected by the attacker and wait for commands. Although IRC botnets are easy to use, control and manage, they suffer from a central point of failure.

Peer to peer protocol based

To overcome the IRC issue, a peer-to-peer architecture is used in the second generation of botnets: instead of contacting a central C&C server, the attacker sends a command to one or more bots, and they deliver it to their neighbors. Increasingly, the peer-to-peer (P2P) protocol is being used for C&C channels.

Examples include Zeus v3, TDL v4 (Alureon), and ZeroAccess. A roughly 10x increase in the number of malware samples has been observed using P2P in the past 12 months.

P2P C&C channels are often easily identified by DNS, reverse DNS or passive DNS analysis, as they generally do not try to hide. Typically, all members of a malware P2P swarm have been compromised with the same malware: detect one and you will quickly identify hundreds of compromised assets.

HTTP protocol based

Implementing a second-generation P2P botnet is difficult and complex. Therefore, attackers have begun to use the centralized C&C model once again, using the HTTP protocol to publish commands on certain web servers.

The vast majority of malware examined uses HTTP as the C&C protocol. According to Mandiant, 83% of all backdoors used by APT attackers are outgoing sessions to TCP port 80 or 443. However, only a few samples use TLS to communicate with the C&C server, and all of the TLS malware allows connections to servers with invalid certificates. If the servers indeed use invalid certificates, this property could be used to detect these cases. Similarly, the double connection attempt in the case of an invalid certificate might trigger detection.

The majority of the examined malware uses HTTP-based C&C channels. The HTTP requests generated by these samples are usually GET requests with a spoofed User-Agent, most often imitating the installed Internet Explorer version. Thus, detecting spoofed User-Agents might provide a method for C&C channel detection.

Here are some indicators that can be used to detect C&C channel sessions simply by passively looking at network traffic (a sketch applying them follows the list):

  • The certificate isn’t signed by a trusted CA
  • The domain names are random (i.e. don’t really exist)
  • The validity period is stated to be exactly one month
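
Here is a small sketch of how these three indicators might be tested against certificate fields already extracted from traffic by a TLS-aware sensor. The field names, and the crude “random domain” heuristic, are assumptions for illustration.

```python
from datetime import datetime

# Hedged sketch of the passive certificate indicators listed above.
# Field names are assumptions about what a TLS-aware sensor extracts.

def cert_indicators(issuer_trusted, subject_cn, not_before, not_after):
    reasons = []
    if not issuer_trusted:
        reasons.append("certificate not signed by a trusted CA")
    if "." not in subject_cn:  # crude stand-in for "random, non-existent domain"
        reasons.append("implausible subject CN: %s" % subject_cn)
    validity = not_after - not_before
    if 28 <= validity.days <= 31:  # "exactly one month" validity
        reasons.append("validity period of ~one month")
    return reasons

print(cert_indicators(
    issuer_trusted=False,
    subject_cn="xkqzt",
    not_before=datetime(2014, 3, 1),
    not_after=datetime(2014, 3, 31),
))
# -> all three indicators fire
```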

Temporal based

A bot regularly has to send traffic to the C&C server in order to be able to receive new commands. Such traffic is sent automatically, usually on a regular schedule. User-generated traffic is much less regular, so bots may be detected by measuring this regularity, as in the sketch below.
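
A minimal sketch of that idea: compute the coefficient of variation of the intervals between a host’s connections to a given destination. Near-constant intervals suggest automated beaconing rather than human browsing; the threshold is illustrative.

```python
from statistics import mean, stdev

# Hedged sketch: a low coefficient of variation in connection intervals
# suggests automated, scheduled beaconing. Threshold is illustrative.

def beacon_score(timestamps):
    """timestamps: sorted connection times (seconds) for one src/dst pair."""
    if len(timestamps) < 4:
        return None  # too few samples to judge regularity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return stdev(intervals) / mean(intervals)  # 0.0 == perfectly regular

times = [0, 300, 601, 899, 1200, 1502]  # ~5-minute beacon with jitter
cv = beacon_score(times)
print("CV %.3f -> %s" % (cv, "beacon-like" if cv < 0.1 else "irregular"))
```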

Anomaly detection

Anomaly detection is based on the assumption that it is possible to build a model of legitimate traffic content. Anomaly detection of network traffic can be a very powerful tool for detecting command and control channels. Unfortunately, to be most effective, the baselining (defining what is “good” about the network) should take place before the first compromise. However, some forms of anomaly detection still add tremendous value:

  • Develop a quick set of signatures to ensure that each TCP session on port 80 and 443 consists of valid HTTP or SSL traffic, respectively. Use a tool such as FlowGrep, or review proxy logs for failures. This is a useful exercise in general for all traffic that is not relayed through an application proxy and is not blocked from direct access to Internet resources.
  • Persistent connections to HTTP servers on the Internet, even outside regular office hours, should be exceptions rather than the rule, so valid exceptions can be filtered out, making this a potent mechanism to identify compromises (a short sketch follows this list). Is the attacker operating from the same time zone as your organization?
  • Persistent requests for the same file on a remote web server, but using a different parameter, can indicate data smuggling over HTTP.
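
A small sketch of the second check, assuming proxy logs already parsed into (timestamp, source, destination) tuples; the office-hours window and threshold are illustrative.

```python
from collections import defaultdict
from datetime import datetime

# Hedged sketch: flag internal hosts repeatedly talking to the same
# external HTTP server outside office hours. The log tuple layout is
# an assumption about your proxy log pipeline.

OFFICE_HOURS = range(8, 19)  # 08:00-18:59 local time; adjust per site

def off_hours_talkers(events, min_events=5):
    counts = defaultdict(int)
    for ts, src, dst in events:
        if ts.hour not in OFFICE_HOURS:
            counts[(src, dst)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_events}

events = [(datetime(2014, 5, 6, 2, m), "10.1.2.3", "203.0.113.9")
          for m in range(6)]
print(off_hours_talkers(events))  # {('10.1.2.3', '203.0.113.9'): 6}
```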

Correlation based

One method to reduce the number of false positives in bot detection is to require several correlated events before raising an alert. This allows the system to use events which by themselves have a high false-positive rate; by requiring multiple events, the system is able to filter out most false positives. The events may be correlated for a single host or for a group of hosts.

The advantage of using correlation to detect bots is that there are fewer false positives compared to using just the individual events. At the same time, this can be a disadvantage, because stealthy bots that generate just one or two events may not be detected. A minimal sketch follows.
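
The sketch below illustrates the correlation idea: individually noisy events only raise an alert when several distinct event types fire for the same host. The event names are illustrative.

```python
from collections import defaultdict

# Hedged sketch: alert only when `required` distinct weak event types
# have fired for the same host, filtering out one-off false positives.

def correlate(events, required=2):
    """events: iterable of (host, event_type) pairs."""
    seen = defaultdict(set)
    for host, event_type in events:
        seen[host].add(event_type)
    return [host for host, types in seen.items() if len(types) >= required]

events = [
    ("10.0.0.5", "dns-low-ttl"),
    ("10.0.0.5", "beacon-like-timing"),
    ("10.0.0.9", "dns-low-ttl"),  # single weak event: no alert
]
print(correlate(events))  # -> ['10.0.0.5']
```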

C&C Channel Detection Techniques

Social Networks

In order to defeat social-network-based botnets, organizations must think ahead of the attackers. Regardless of the channel, provider, or account, social network messages are text; if malware wants to use social networks for C&C, it must encode its commands textually. Just as legitimate messages may include web links, so might C&C messages (e.g., links for downloading a payload).

Web-based Attack/Detection Characteristics

By using an HTTP connection as a communication channel, a web-based malware attack can avoid detection by a firewall and increase the threat of the attack. One of the attack’s characteristics is its small traffic signature, which also fits well within the normal traffic flow. Since most firewalls do not filter HTTP traffic, it is not easy to detect the abnormal behavior.

In addition, the fast-flux domain technique allows a fully qualified domain name (FQDN) to point to multiple IP addresses. These IP addresses can be scattered all over the world, making a malicious domain difficult to track and analyze. Attackers can make a fast-flux domain constantly associate with new IP addresses.

However, a fast-flux domain’s need for numerous IPs is itself a useful characteristic. Detection of fast-flux techniques, together with the connection-regularity check described earlier, can be used as the basis for web-based detection; in addition to enhancing the accuracy of detection, it can also be used to detect different types of botnets and malware. A small sketch follows.
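
A small sketch of the fast-flux heuristic: a domain resolving to many distinct IP addresses within a short observation window is a fast-flux candidate. The threshold is illustrative and would need tuning against legitimate CDN traffic.

```python
from collections import defaultdict

# Hedged sketch: count distinct resolved IPs per FQDN in one window.
# Legitimate CDN-hosted domains also rotate IPs, so the threshold and
# any whitelisting are site-specific tuning decisions.

def fast_flux_candidates(dns_answers, threshold=10):
    """dns_answers: iterable of (fqdn, answer_ip) seen in the window."""
    ips = defaultdict(set)
    for fqdn, ip in dns_answers:
        ips[fqdn].add(ip)
    return {fqdn: len(a) for fqdn, a in ips.items() if len(a) >= threshold}

answers = [("bad.example", "198.51.100.%d" % i) for i in range(12)]
answers += [("www.example", "192.0.2.10")] * 5  # stable, single IP
print(fast_flux_candidates(answers))  # -> {'bad.example': 12}
```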

Conclusion

By using the results of malware analysis to hone C&C channel detection capabilities, an organization can begin remediating a malware incident. Any identified C&C channels serve as helpful indicators of compromise (IOCs) that can be used to detect other instances of the same or similar malware. IOCs related to C&C include domain names, IP addresses, protocols, and even patterns of bytes seen in network communications, which could represent commands or encoded data. Matt Jonkman’s team (Emerging Threats) regularly publishes updated signatures for known Command and Control channels. If setting up such a system sounds like a bit of work, have a look at BotHunter.

Coming Soon

In APT Detection Indicators – Part 4 we will add detail to this introduction to C&C channel detection techniques, and integrate it with the earlier APT Detection Indicators – Part 2 discussion of free and open source software (FOSS) tools, with some hands-on examples of developing and using Indicators of Compromise. While the likes of Security Onion, as good as it is, don’t provide the same level of functionality one might expect from a commercial product, they still offer some of the features inherent to those products.

Many commercial vendors are now supplementing detection and alerting with visualization techniques, and FOSS tools have been meeting the needs of security visualization practitioners for years. Security Onion includes Squert, which in turn makes use of AfterGlow and the graphviz libraries to provide on-demand visualizations of captured traffic. Given the premise of an attacker scanning from a beachhead host (pivoting laterally), the related scanning traffic from the pivot host presents itself in a tidy visualization.

Thanks for your interest!

Nige the Security Guy.

APT Anomaly Detection – Part 1

APT Anomaly Detection – Part 1: Eliminating the Noise

The rapid discovery of a breach is key to minimizing the damage of a targeted attack. Context-aware anomaly detection improves an organization’s security accuracy and efficiency by bringing relevant suspect events into focus, helping eliminate distracting noise.

Improve security analyst efficiency, reduce operational overhead and cost by eliminating noise

In APT Anomaly Detection – Part 1 we present a primer on the various options for Network Behavior Analysis as a complement to other core technologies and tools, adding to the capability to detect and investigate targeted attacks. The series then focuses on improving the accuracy of events through triage, to sharpen detection precision and eliminate the noise.

Signal to Noise Ratio

A lot of time is typically wasted analyzing false positives generated by technology that has not been correctly baselined, customized, tuned and optimized. Depending upon the environment, false positives can be numerous and very difficult to verify, costing analysts valuable time determining whether or not an event is something to worry about.

Security Event Signal to Noise Ratio

Organizations today are exposed to a greater volume and variety of network attacks than ever before. Adversaries are exploiting zero-day vulnerabilities, taking advantage of risks introduced by cloud and mobile computing, and applying social engineering tactics to compromise user accounts. Advanced attackers are both patient and clever, evading detection at the network level. Security professionals wrestle with efficiently detecting these threats and effectively resolving them.

Reportedly, Neiman Marcus experienced 60,000 alerts during its latest breach, and Target was flooded with alerts. In both cases the alerts failed to generate proper action. Relying on a tool (or tools) for alerts is useless if it generates too much noise and not enough signal; too many alerts without the proper context fail to guide the right response.

Insider attacks are on the rise. To monitor and act on internal abuse, as well as comply with data protection regulations, organizations need to tie network security events to local systems and user credentials. Correlating threat information from intrusion prevention systems with actual user identities (logged on to local systems) allows security professionals to identify breaches of policy and fraudulent activity more accurately within the internal network.

Context-Aware Security

Traditional defenses, such as signature-based anti-malware tools and stateful inspection firewall technology, are less and less effective against new threats: they have no knowledge of the applications in use, normal traffic patterns, or user activity in the context of the network’s normal behavior. New approaches to security, such as those focusing on context awareness and security intelligence, will provide the next generation of technology to cope with evolving threats.

Leveraging Context-Aware Security

Context-aware security is the use of supplemental information to improve security decisions at the time they are made. The goal? More accurate security decisions capable of supporting more dynamic business and IT environments, as well as providing better protection against advanced threats.

If possible, all information security infrastructure must become context-aware – endpoint protection platforms, access control systems, network firewalls, intrusion prevention systems (IPS), security information and event management (SIEM) systems, secure web gateways, secure email gateways, data loss prevention (DLP) systems, and so on.

The shift to incorporate “application awareness”, “identity awareness”, “virtualization awareness”, “location awareness”, “content awareness” and so on are all facets of the same underlying shift in information security infrastructure to become context-aware.

Why Context-Aware Security is Needed

To understand contextual security, organizations should understand the signature of a typical attack. A common type of APT attack involves embedding Trojan horse code in PDF documents delivered as email attachments. When the unsuspecting recipient clicks on the attachment, malicious code is unleashed, but it doesn’t immediately execute, delaying until any anti-malware program is no longer watching. When the Trojan finally executes, it discreetly begins collecting data and sending GET requests to commonly visited sites to test network connectivity. If it detects an active network connection, the Trojan initiates a status beacon message to a command-and-control node.

The Signature of an Advanced Targeted Threat

As malware authors continue to introduce new antivirus evasion techniques, organizations must learn how to detect attacks that have slipped through the net and are living on the network. As the Mandiant APT1 report illustrated to the security community, attackers are capable of staying inside an organization’s network for years if organizations lack robust measures to detect and remediate attacks.

Network Baseline and Behavior Analysis

Network Behavior Anomaly Detection (NBAD) techniques were originally developed to monitor network traffic thresholds for shifts or changes that could indicate attacks or signal specific issues with systems. Over time, NBAD evolved into Network Behavior Analysis (NBA) which focuses on the establishment of a comprehensive network baseline. This overall baseline is then continually monitored for deviations or exceptions that may alert an analyst to an issue.

Behavior-based Anomaly Detection

There are three main components of a network behavior monitoring strategy for use in information security and network operations:

  • Traffic Flow Patterns and Data: Network flow data such as NetFlow, sFlow, and J-Flow.
  • Network Performance Data: Simple Network Management Protocol (SNMP) events and Quality of Service (QoS) data for system and network performance.
  • Passive Traffic Analysis: Passive analysis tools can continually monitor traffic for protocol anomalies, tunneled protocols, use of non-standard ports and protocol field values, etc.

Ultimately, these tools can also provide a much higher degree of visibility into what systems and applications are communicating with one another and where they are on the network, which provides intrusion prevention systems with much needed environmental context.

Forensic NetFlow and IPFIX analysis tools are ideal security layers with which to detect and investigate APTs. Network flows provide a complete account of all network activity, both at the perimeter of the network and at the network core. Advanced flow analysis solutions trigger alarms by monitoring for suspect behavioral patterns within the network flows. Identifying suspicious traffic patterns involves automated correlation of different types of contextual information, then deciphering the associated intent and danger.

One of the best ways to detect whether internal hosts are communicating with external APT launch points is to compare NetFlow data against a host reputation list. By sending NetFlow from the Internet-facing routers to a NetFlow collector that can compare all flows to the host reputation database, internal machines talking with known compromised Internet hosts can be identified, as sketched below.
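
Here is a minimal sketch of that comparison, assuming flow records already exported to a collector and parsed into dictionaries; the field names and reputation file format are assumptions, not any particular collector’s schema.

```python
import csv

# Hedged sketch: match parsed NetFlow records against a host reputation
# list. Field names and the CSV format are assumptions about your
# collector, not a standard schema.

def load_reputation(path):
    """Reputation list assumed to be a CSV with an 'ip' column."""
    with open(path) as f:
        return {row["ip"] for row in csv.DictReader(f)}

def flag_flows(flows, bad_ips):
    for flow in flows:
        if flow["src_ip"] in bad_ips or flow["dst_ip"] in bad_ips:
            yield flow

bad = {"203.0.113.66"}  # in practice: load_reputation("reputation.csv")
flows = [{"src_ip": "10.1.2.3", "dst_ip": "203.0.113.66", "bytes": 48210}]
for hit in flag_flows(flows, bad):
    print("internal %(src_ip)s talking to known-bad %(dst_ip)s" % hit)
```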

Getting started with Contextual Security

In order to combat these increasingly common scenarios, organizations must implement four lines of defense.

Rule Sets: Usually in conjunction with an intrusion detection system such as Snort.

Formulating effective rule sets is a fundamental part of the contextual approach to network security. Rule sets are typically reactionary, since they are usually only formulated after an attack vector has been identified, but they are still an important tool.

Also see the APT Detection Indicators series, which discusses Indicators of Compromise (IoCs) that can be used to develop and correlate rules.

Statistical Correlation: Utilize statistical and correlation methods to analyze the latest trends in malware.

This is the key that ties all of the other methods together, since it meshes rule sets, log examination and data exfiltration monitoring. Correlation methods examine whatever alerts are currently configured and look for relationships between the alerts that are triggered. These relationships can be with regard to alert type, port number or any other selector configured by the security analyst. Statistical methods do not rely on prior knowledge of an attack vector, but rather on the time and frequency of a set of alerts.

Monitoring: Monitor for unusual data exfiltration attempts.

Examining and blocking data exfiltration attempts is the most important portion of a context-aware security paradigm and the last line of defense when combating APT attacks. It is important for an organization to know what should and should not be leaving the network.

Log Analysis: Strongly emphasize the need to manually examine logs.

Automating log reviews with tools such as Splunk is a popular technique, and when operating a highly trafficked network, automation is indeed a necessity. However, when attempting to discover new attacks against a network, nothing is as effective as human observation and intuition. Human intuition, along with informed experience, should alert the security administrator to any site that looks suspicious, which can then spawn a new network monitoring rule to block that avenue of attack in the future.

Context Reduces Noise

As attackers become better at hiding out on networks, organizations need to be aware of the context surrounding security events to better sniff out APTs and reduce the noise. This means setting up the right kind of alerts, based on Indicators of Compromise (IoCs) as well as previous attack vectors, and correlating the information between triggered alerts. Most importantly, it means having human eyes monitoring data leaving the network and looking over logs, to become familiar with the network and spot interesting traffic that may not yet be coded into triggers.

If an organization cannot connect all the dots across its network, it will be unable to fend off a new breed of persistent, stealthy malware. The organization needs to consider whether to build and operate this capability in-house because security is mission-critical; whether to partner with a consultancy or service to co-source both monitoring and skilled resources in a hybrid SOC; or whether to outsource completely to a managed service because security is simply not a core competency – though the last option requires strong process integration for contextual awareness of internal business operations, as well as strict SLAs to ensure preparedness to respond.

Conclusion

Protecting an organization’s data from APT invasion is an ongoing, daily task. Vigilance and healthy paranoia are a good defense against possible insurgence. Many experts combating APTs suggest that organizations always be on the alert – that is, assume an APT is always present or already underway, and operate defensively rather than passively. Use Red Teams (see: APT Red Teams) to keep skills current and hone capabilities and response.

Holistic Logging

Improving communications visibility with evolving contextual anomaly detection is one of the best ways to detect internal malware that has circumvented the traditional firewalls. Many APTs have no trouble sneaking right past even the best security appliances; however, they have a habit of exhibiting the same suspicious behaviors. See Defensible Security Posture for details on the signature of an APT and the Cyber Kill Chain.

In APT Anomaly Detection – Part 2 we will expand upon the above topics in more detail, discuss options for adding contextual sources, and fine-tune and improve detection precision to improve analyst efficiency and reduce operational overhead and cost. This post is complemented by the APT Detection Indicators blog series, which discusses Indicators of Compromise (IoCs) as well as useful open source tools and techniques to detect APTs.

Thanks for your interest!

Nige the Security Guy.

APT Strategy Guide

APT Strategy Guide: Navigating the APT Strategy Series

Continuous monitoring is an important part of an organization’s cyber security efforts. But without establishing an effective security framework first, those efforts may be misspent.

Strengthening IT infrastructure begins with establishing a sound cyber security threat and risk management governance process. Next, organizations must manage the complexity of their IT infrastructures by using enterprise architecture to consolidate, standardize and optimize the current inventory of IT assets, as well as by developing “threat aware” mission and business processes.

Organizations must also develop and integrate into their enterprise architecture a security architecture that guides the effective allocation of security controls to their information systems. And finally, organizations must initiate continuous monitoring of all of the above activities to ensure ongoing effectiveness of cyber security and risk management governance, mission/business processes, enterprise and security architectures, and security controls deployed within the enterprise.

APT Strategy Guide

This blog provides an overview of the site to facilitate navigation and to place the various topic threads in context. The following APT Strategy Guide Map graphic provides an at-a-glance, simplified summary of the blogs to help understand how they fit together to form a building-block picture.

APT Strategy Guide Map

APT Strategy Framework

APT Response Strategy

How do you implement a Plan C? Organizations are starting to recognize the ever-increasing importance of rapid, well-orchestrated incident response capabilities as a key component of their defense-in-depth strategy.

Increased complexity and frequency of attacks, combined with reduced effectiveness of preventative security controls, elevate the need for enterprise-scale security incident response. The APT Response Strategy (aka APT-IR) series of blogs discusses the development and evolution of an effective rapid incident response program – a call to action.

APT Red Teams

How do you prevent an APT? The APT Red Teams blog defines core components used by successful red teams and proposes an approach for categorizing and implementing red teams to enable continuous improvement and optimization from counter-intuitive sources and help mitigate advanced threats.

Advanced Persistent Threats (APTs), initiated by an ever-growing population of highly capable cyber criminals who are focused, determined and stealthy, are an ever-increasing risk. While many organizations have the basic tenets of security in place and regularly test their industry-standard best-practice procedures, they are caught off guard by exposed vulnerabilities, risks and threats not previously identified and formally escalated to resolution.

A Red Team approach provides a proven method of risk analysis which uncovers risks and quickly adapts to real world approaches taken by adversaries.

APT Threat Analytics

How can you predict emerging threats? Threat intelligence and analytics continue to dominate the headlines and the attention of organizations seeking viable options in their escalating battle against advanced threat actors.

Threat intelligence is evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging threat to assets. It is used to inform decisions regarding response in terms of prevention enhancements with mitigating controls or detection adjustments (rules or triggers).

This blog discusses best practices to identify internal and external data sources as well as help overcome many operationalization challenges to take an emerging program to the next level as a strategic component.

APT Detection Indicators

In a world where organizations need to be watching or monitoring their networks continuously, knowing what to look out for is critical. In this blog we discuss how to detect incidents early by identifying “attack precursors” and other leading indicators that help protect your organization from compromise and can stop an attack in its tracks.

Advanced Persistent Threats (APT) typically exhibit recognizable attributes and patterns that can be monitored by readily available, open source tools. These tools can enable early detection of APT behavior, especially by monitoring unusual inter-zone or outbound traffic.

APT Detection Framework

The last decade has seen a lot of research on intrusion detection with intelligent data analysis, and its implementation in commercial products, but some challenges remain. Effectiveness, accuracy and scalability are key considerations in the technical design, implementation and operation of an IDS.

This blog presents a proposed APT Detection Framework that can be used both for advanced multi-step threat scenario analysis and for detection system design – to identify an organization’s gaps based on new threats and/or indicators of compromise and make iterative improvements.

Defensible Security Posture

The purveyors of Fear, Uncertainty and Doubt (FUD) assert that preventing today’s advanced threats is unrealistic and that internal compromise is inevitable – a FUD factor reinforced by more and more reports of malware and advanced attacks penetrating insufficient security controls. However, it’s not all doom and gloom. Although experts concede that stopping 100% of attacks is a technical impossibility, there are ways for organizations to avoid becoming the next devastated victim.

The basic idea of a Defensible Security Posture is that you aren’t striving for an absolute, but rather for a position (or posture) that can be defended even when it’s infiltrated.

Adaptive Zone Defense

Limiting and intelligently managing communications between services and systems on an organization’s network helps contain an infection or compromise, keeping malware or a persistent threat from running rampant. In addition, business needs, regulations and other compliance requirements impact security architecture and design, and thus may impose mandated separation or additional boundary controls.

This blog offers guidance to organizations seeking to develop a modular and scalable network segmentation design.

There is a lot of hype and yet no silver bullet. However, there is much an organization can do to extend and evolve its defense-in-depth strategy, to improve its detection and containment capability and to gain the key visibility needed to rapidly respond to a compromise or attempt. There is also a win-win: many of the best practices, controls and detection techniques needed for APT defense also help address the insider threat.

Thanks for your interest!

Nige the Security Guy.

APT Threat Analytics – Part 2

APT Threat Analytics – Part 1

In today’s threat environment, rapid communication of pertinent threat information is vital to quickly detecting, responding to and containing targeted attacks. The key to improving an organization’s reaction time is a workflow and set of tools that allow threat information to be analyzed and actionable results communicated across the organization rapidly.

In this APT Threat Analytics – Part 2 blog, we discuss the options for threat intelligence collaboration and sharing, together with a current snapshot of the available tools, standards and developments, to help determine whether your organization can benefit from an emerging dedicated in-house threat intelligence program.

Given the escalating threat landscape, improving threat detection is key for most organizations, specifically developing an intelligence-driven approach. This requires collecting and consolidating threat intelligence data from internal sources (e.g. systems and applications) and external sources (e.g. government or commercial threat feeds) and using analytics to spot attack indicators or anomalous behavior patterns.

The core team should have the capabilities to achieve good visibility and situational awareness across the organization.

  • People: Internal and external networking for developing good sources of intelligence; communication skills for developing reports and presenting intelligence briefings
  • Process: Designing an end-to-end intelligence process including obtaining and filtering data, performing analysis, communicating actionable results, making a risk decision, and taking action
  • Technology: Analysis (drawing connections between seemingly disconnected data) and data analytics techniques

What is Threat Intelligence?

Threat intelligence means different things to different organizations, so it’s important to first define what threat intelligence means to your organization. The ultimate goals of threat intelligence gathering and sharing should be as follows:

  • Develop actionable intelligence
  • Better intelligence translates to better protection
  • Increased protection translates to less fraud and decreased revenue loss
  • Collective intelligence is far more effective than individual silos
  • Both external and internal sources are needed to address targeted threats

In-House Threat Intelligence

With the increase in advanced, multidimensional threats, organizations can no longer depend solely on existing automated perimeter gateway tools to weed out malicious activity. More and more organizations are considering development of an in-house threat intelligence program, dedicating staff and other resources to network baselines, anomaly detection, deep inspection and, correlation of network and application data and activity.

With the advanced, blended, multidimensional, targeted cyber attacks being levied today, your organization still needs an experienced set of human eyes analyzing the data collected – not to mention protecting its reputation.

Performing in-house threat intelligence need not be complex or expensive. Such a program can be as simple as IT staff being trained to pay closer attention to data. In other cases, threat intelligence might mean having a team of people performing deep content inspection and forensics on a full-time basis. Where an organization falls in that range depends on various factors such as critical assets/data, risk tolerance, core competency, and so on. The organization may elect a DIY in-house model or a co-sourced partner.

One of the biggest benefits of taking control of internal threat intelligence is that it forces organizations to:

  • Develop a deep understanding of –
    • How systems are used
    • How data is accessed
  • Recognize traffic and usage patterns
  • Pay attention to log data and correlate it with a known baseline of how users interact with data, applications and servers
  • Consolidate and manage log sources

With all of this data at an analyst’s fingertips, organizations can recognize the subtle anomalies that may indicate an attack – the main goal of your threat intelligence effort. For more information see the forthcoming blog: APT Anomaly Detection, as well as the new NG Security Operations (SoC V2) series.

What would constitute a subtle anomaly? One example is inappropriate remote access to critical servers in your environment. Many organizations don’t bother to actively audit remote desktop access to critical servers, but what if, out of the blue, you detect repeated Remote Desktop Protocol (RDP) sessions and failed logins to a domain controller from a new system in your environment? Your gateway tools won’t help here, and this activity would certainly warrant investigation, because it could indicate the beginning (or middle) of an advanced persistent threat (APT). A minimal detection sketch follows.
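
A minimal sketch of that example, assuming authentication events already parsed from logs into simple tuples; the event layout and threshold are illustrative, not a particular product’s schema.

```python
from collections import defaultdict

# Hedged sketch: flag sources with repeated failed remote-interactive
# logons to domain controllers. The tuple layout is an assumption
# about your parsed log data.

DOMAIN_CONTROLLERS = {"10.0.0.10", "10.0.0.11"}

def suspicious_rdp(events, threshold=5):
    """events: iterable of (src_ip, dst_ip, logon_type, success)."""
    failures = defaultdict(int)
    for src, dst, logon_type, success in events:
        if dst in DOMAIN_CONTROLLERS and logon_type == "RemoteInteractive" \
                and not success:
            failures[(src, dst)] += 1
    return {pair: n for pair, n in failures.items() if n >= threshold}

events = [("10.9.8.7", "10.0.0.10", "RemoteInteractive", False)] * 6
print(suspicious_rdp(events))  # -> {('10.9.8.7', '10.0.0.10'): 6}
```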

Picking the Right Tools

There are a few major hurdles when it comes to performing comprehensive cyber threat intelligence (CTI) in-house. Organizations need a core set of security tools to provide the essential foundational elements for performing threat intelligence. They should certainly consider leveraging external sources and service providers to fill gaps in their defenses.

Some of the frameworks, tools, standards and working groups to consider are as follows:

  • OpenIOC – Open Indicators of Compromise framework
  • VERIS – Vocabulary for Event Recording and Incident Sharing
  • CybOX – Cyber Observable eXpression
  • IODEF – Incident Object Description and Exchange Format
  • TAXII – Trusted Automated eXchange of Indicator Information
  • STIX – Structured Threat Information Expression
  • MILE – Managed Incident Lightweight Exchange
  • TLP – Traffic Light Protocol
  • OTX – Open Threat Exchange
  • CIF – Collective Intelligence Framework

IODEF

Incident Object Description and Exchange Format (IODEF) is a standard defined by Request For Comments (RFC) 5070. IODEF is an XML-based standard used to share incident information between Computer Security Incident Response Teams (CSIRTs). The IODEF data model includes over 30 classes and subclasses used to define incident data.

IODEF provides a data model to accommodate most commonly exchanged data elements and associated context for indicators and incidents.
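
To give a feel for the format, here is a sketch that emits a skeletal IODEF-style document, loosely following the RFC 5070 structure; a conformant document has more required classes (e.g. Assessment and Contact) and should be validated against the schema.

```python
import xml.etree.ElementTree as ET

# Hedged sketch of a skeletal IODEF-style incident report. Real IODEF
# documents require additional classes and schema validation; values
# here are illustrative.

NS = "urn:ietf:params:xml:ns:iodef-1.0"  # RFC 5070 namespace
doc = ET.Element("{%s}IODEF-Document" % NS, version="1.00")
incident = ET.SubElement(doc, "{%s}Incident" % NS, purpose="reporting")
ET.SubElement(incident, "{%s}IncidentID" % NS,
              name="csirt.example.com").text = "INC-2014-0042"
ET.SubElement(incident, "{%s}ReportTime" % NS).text = "2014-05-06T14:00:00Z"

print(ET.tostring(doc, encoding="unicode"))
```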

OpenIOC

OpenIOC was introduced by Mandiant. It is used in Mandiant products and tools such as RedLine, but has also been released as an open standard. OpenIOC provides definitions for specific technical details including over 500 indicator terms. It is an XML-based standardized format for sharing Threat Indicators.

  • Derived from years of “What Works” for Mandiant
  • Indicator Terms
    • Artifacts on Hosts and Networks
  • Logical Comparisons
    • Groupings, Conditions
  • Ability to Store & Communicate Context
  • Continues to be developed and improved

OpenIOC Process Flow

VERIS

The Vocabulary for Event Recording and Incident Sharing (VERIS) framework was released by Verizon in March 2010. As the name implies, VERIS provides a standard way of defining and sharing incident information. VERIS is an open and free set of metrics designed to provide a common language for describing security incidents (or threats) in a structured and repeatable manner.

  • DBIR participants use the VERIS framework to collect and share data
  • Enables case data to be shared anonymously to RISK Team for analysis

Vocabulary for Event Recording and Incident Sharing (VERIS) Overview

CIF

The Collective Intelligence Framework (CIF) is a client/server system for sharing threat intelligence data. CIF was developed out of the Research and Education Network Information Sharing and Analysis Center (REN-ISAC). CIF includes a server component which collects and stores threat intelligence data – IP addresses, ASN numbers, email addresses, domain names, uniform resource locators (URLs) and other attributes – and the data can be accessed via various client programs.

  • CIF is a cyber threat intelligence management system
  • Can combine known malicious threat information from many sources
  • Use information for action: identification (incident response), detection (IDS) and mitigation

CybOX

CybOX is used for defining details regarding measurable events and stateful properties. The objects defined in CybOX can be used in higher-level schemas like STIX. The goal of CybOX is to enable automated sharing of security information such as CTI. It does this by providing over 70 defined objects that can be used to describe measurable events or stateful properties. Examples of objects are File, HTTP Session, Mutex, Network Connection, Network Flow and X509 Certificate.

STIX

The Mitre-developed Structured Threat Information Expression (STIX) is used for defining threat information, including threat details as well as the context of the threat. STIX is designed to support four cyber threat use cases:

  • Analyzing cyber threats
  • Specifying indicator patterns
  • Managing response activities
  • Sharing threat information

It uses XML to define threat-related constructs such as campaign, exploit target, incident, indicator, threat actor and TTP.

Structured Threat Information Expression (STIX) Architecture

STIX provides a common mechanism for addressing structured cyber threat information across a wide range of use cases:

  • Analyzing Cyber Threats
  • Specifying Indicator Patterns for Cyber Threats
  • Managing Cyber Threat Response Activities
    • Cyber Threat Prevention
    • Cyber Threat Detection
    • Incident Response
  • Sharing Cyber Threat Information

TAXII

The Mitre-developed Trusted Automated eXchange of Indicator Information (TAXII) supports sharing of threat intelligence data. The Mitre definition states that TAXII “defines a set of services and message exchanges for exchanging cyber threat information”. TAXII supports several sharing models (such as hub-and-spoke, source/subscriber and peer-to-peer), allowing for push or pull transfer of CTI data. The models are supported by four core services:

  • Discovery
  • Feed management
  • Inbox
  • Poll

OTX

Open Threat Exchange (OTX) is a publicly available service created by AlienVault for sharing threat data. OTX cleanses, aggregates, validates and publishes threat data streaming in from a broad range of security devices across a community of more than 18,000 OSSIM and AlienVault deployments. OTX is a centralized system for collecting threat intelligence; it interoperates with AlienVault’s free Open Source SIEM (OSSIM), where SIEM stands for Security Information and Event Management. OSSIM users can configure their systems to upload their threat data to OTX.

MILE

The Managed Incident Lightweight Exchange (MILE) Working Group develops standards for exchanging incident data. The group works on data formats to define indicators and incidents, as well as on standards for exchanging that data. It has defined a package of standards for threat intelligence which includes the Incident Object Description and Exchange Format (IODEF), IODEF for Structured Cyber Security Information (IODEF-SCI) and Real-time Inter-network Defense (RID).

Structured Cyber Security Standards

TLP

The Traffic Light Protocol (TLP) is a very straightforward and simple protocol used to control what can be done with shared information. Shared information is tagged with one of four colors: white, green, amber or red. The color designates what may be done with the information. Information tagged white can be distributed without restriction. Information tagged green can be shared within the sector or community, but not publicly. Information tagged amber may only be shared within the recipient’s own organization. Information tagged red may not be shared further. Given its simplicity, TLP can be used verbally, by email, or incorporated into an overall system; a tiny sketch of TLP as code follows.
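
This small sketch encodes those four handling rules so that a sharing tool could refuse out-of-policy distribution; the rule strings simply restate the definitions above.

```python
# Hedged sketch: encode the TLP colors described above as simple
# handling rules a sharing tool can enforce.

TLP_RULES = {
    "white": "unlimited distribution",
    "green": "sector/community only, not public",
    "amber": "recipient's own organization only",
    "red":   "do not share further",
}

def may_publish(tlp_color):
    """Only TLP:WHITE material may be published without restriction."""
    return tlp_color.lower() == "white"

for color, rule in TLP_RULES.items():
    print("TLP:%-5s -> %s (publishable: %s)"
          % (color.upper(), rule, may_publish(color)))
```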

Conclusion – KISS

The simplest way to mine threat intelligence is to leverage the information already on your systems and networks. Many organizations don’t fully mine logs from their perimeter devices and public-facing web servers for threat intelligence. For instance, organizations could review access logs from their web servers and look for connections coming from particular countries or IP addresses that could indicate reconnaissance activity. Or they could set up alerts when employees with privileged access to high-value systems attract unusual amounts of traffic, which could then be correlated with other indicators of threat activity to uncover signs of impending spear-phishing attacks.

Many of the standards are a good fit for organizations with specific needs. If an organization wants to share incident data and be part of the analysis of a broad data set, then VERIS would be the best choice. If an organization wants to share indicator details in a completely public system, then OTX would be a reasonable choice. If an organization is using tools that support OpenIOC, then of course OpenIOC would be the best choice. If an organization is looking for a package of industry standards then the MILE package (IODEF, IODEF-SCI, RID) or the Mitre package (CybOX, STIX, TAXII) would be suitable. Both have the capability to represent a broad array of data and support sharing of that data.

As a recipe for success, it is important to start small, start simple, prototype, and evolve as the organization gains confidence and familiarity, growing the threat sources, communication, and collaboration so that there are valid data, analysis, and actionable results. Focus on and validate the process.

Next Steps

In future parts we will delve further into some practical use cases with examples and implementations, review external feed sources (pros and cons), and discuss triage, building context, validating data, and performing analysis, as well as the development of organizational threat profiles based on a risk management process.

In addition, we will be publishing the APT Anomaly Detection blog that complements this work, using network flows to baseline a network and detect anomalous activity so that there isn’t reliance on just one methodology. We will also kick off the Next Generation Security Operations series that seeks to tie all of the APT Series threads into a holistic architecture and defensive strategy (SOC v2).

Thanks for your interest!

Nige the Security Guy.

APT Red Teams – Part 2

APT Red Teams – Part 1
APT Red Teams – Part 3

Have you tested your resilience to attack? Addressing security more aggressively and working to identify areas of weakness is a more sensible and, ultimately, more effective approach than working to build a “bigger wall” that you hope attackers can’t get through.

APT Red Teams 2

The Paralyzing Polarization Conundrum

We are at a fascinating point in the evolution of what we now call cyber security defense. Massive data losses, theft of intellectual property, credit card breaches, denial of service – these have become a way of life for all of us. Ironically, as defenders we have access to an extraordinary array of security tools, technology and resources. But all of this technology, information, and oversight has become a veritable “Fog of More” [Source: CSC5]: competing options, priorities, opinions, and claims that can paralyze or distract an organization from vital action.

What are the most critical areas we need to address, and how should an enterprise take the first step toward maturing its risk management program? Rather than chase every new exceptional threat and neglect the fundamentals, how can we get on track with a roadmap of fundamentals and guidance to measure and improve? Which defensive steps have the greatest value?

APT Red Teams – Part 2 discusses the importance of the CSIS Top 20 Critical Security Controls, in particular Control 20: Penetration Tests and Red Team Exercises. There are many parallels between APTs and Red Teams. APTs advance on a “continuous area of attack,” leveraging blended, multi-step, determined targeted attacks, while technology is ubiquitous, constantly being implemented, and often filled with flaws as soon as it is deployed.

Red Team Goals

The goal of Red Teams is to enhance decision making by challenging assumptions and exploring new ideas, typically from the perspective of an adversary or a competitor. A Red Team, for example, might play the role of an attacker and test the security of a system. Alternatively, a red team might review and assess the assumptions of a strategic plan.

Red Teams are also well suited to business strategy and planning, where they focus on operational processes in order to validate and evolve them. Whether a Red Team adopts a specific perspective, method, or toolkit depends on the nature of the problem and the circumstances of the Red Team. A Red Team that performs a given type of task repeatedly is likely to develop a process framework and an associated toolkit.

Thinking Outside the Box

Think Outside Box

Ideally, Red Teams should:

  • View the target problem from a systems perspective
  • Shed the cultural biases of the decision maker and, as appropriate, adopt the cultural perspective of the adversary or competitor
  • Employ a multidisciplinary range of skills, talents, and methods
  • Understand how things work in the real world
  • Avoid absolute and objective explanations of behaviors, preferences, and events
  • Question everything (to include both their clients and themselves)
  • Break the “rules”

One can argue that the best Red Teams are born, not trained. It seems that some people have an instinctive ability to Red Team, while others—despite extensive training—can never escape the secure but confining convention box.

Be Prepared: Test and Validate

Test and Validate

In January 2014 the Ponemon Institute conducted a Cyber Security Incident Response study. In the study they surveyed 674 IT and IT security professionals in order to determine the level of preparedness of their Computer Security Incident Response Teams (CSIRT).

Most respondents agreed that the best thing that their organizations could do to mitigate future breaches is to improve their incident response capabilities.

How frequently does your organization assess the readiness of your incident response teams
(for instance, through tabletop exercises, red teams, or other means)?

IR Readiness

Assessing Incident Response Readiness

Leveraging Red Teams can help both in testing an organization’s true resilience to attack and in gauging its preparedness to respond to and investigate an incident. A vital win-win.

SANS Top 20 Critical Security Controls

The SANS Top 20 Critical Security Controls focus on prioritizing security functions that are effective against the latest Advanced Targeted Threats, with a strong emphasis on “what works”. They prioritize and focus on a smaller number of actionable controls with high payoff, aiming for a “must do first” philosophy. Since the Controls were derived from the most common attack patterns and were vetted across a very broad community of government and industry, with very strong consensus on the resulting set of controls, they serve as the basis for immediate high-value action.

CSC 20: Penetration Tests and Red Team Exercises

Test the overall strength of an organization’s defenses (the technology, the processes, and the people) by simulating the objectives and actions of an attacker.

Why is SANS Control #20 Critical?

In a complex environment where technology is constantly evolving, and new attacker tradecraft appears regularly, organizations should periodically test their defenses to identify gaps and to assess their readiness. Attackers often exploit the gap between good defensive designs and intentions and implementation or maintenance.

SANS Top 20 Controls

Examples include:

  • The time window between announcement of a vulnerability, the availability of a vendor patch, and actual installation on every machine
  • Well-intentioned policies that have no enforcement mechanism (especially those intended to restrict risky human actions)
  • Failure to apply good configurations and other practices to the entire enterprise, or to machines that come in and out of the network
  • Failure to understand the interaction among multiple defensive tools, or with normal system operations that have security implications

Red Team exercises take a comprehensive approach across the full spectrum of organization policies, processes, and defenses in order to improve organizational readiness, improve training for defensive practitioners, and inspect current performance levels. Independent Red Teams can provide valuable and objective insights about the existence of vulnerabilities and the efficacy of defenses and mitigating controls, both those already in place and those planned for future implementation.

How to Implement This Control

SANS Control 20

Rules of Engagement

Attackers penetrate networks and systems through social engineering, via phishing attacks and by exploiting vulnerable software and hardware. Once they get access, they often burrow deep into target systems and broadly expand the number of machines over which they have control. For more details on the signature of an advanced targeted threat (APT) see Defensible Security Posture.

A problem that many companies face is that they don’t fully understand the threats against their organizations. Creating a threat profile is a key step in understanding what threats a company faces and the potential impact if an attack were to be successful. A threat profile can also help organizations prioritize resources in order to successfully defend sensitive data. For more information see APT Threat Analytics.

Many organizations do not exercise their defenses, so they are uncertain about their capabilities and unprepared to identify and respond to attacks.

Network Security Evaluation

Developing Scenarios to Exercise and Validate Defenses

Organizations should define a clear scope and rules of engagement for penetration testing and Red Team analyses. The scope of such projects should include, at a minimum, systems with the organization’s highest value information and production processing functionality. Other lower-value systems may also be tested to see if they can be used as pivot points to compromise higher-value targets. The rules of engagement for penetration tests and Red Team analyses should describe, at a minimum, times of day for testing, duration of tests, and the overall test approach.

Rules of Engagement differ from exercise to exercise. The end result is designed to incrementally strengthen an organization’s security posture, assisting in the identification of weak areas and highlighting strengths, in order to improve and evolve the minimum security baseline. Some initial avenues used to identify security weaknesses are, for example (see the sketch after this list):

  • Vulnerability scans
  • Device configuration assessment
  • Web application security
  • Firewall rule assessment
  • Segmentation / Zone controls
  • System decomposition and risk analysis
  • Social engineering
  • Malware
  • Specialized tools
  • Passive attacks correlated to back-end logging/alerts
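
As an illustration of the first avenue, here is a minimal TCP connect-scan sketch. The target host and port list are hypothetical, and any such scanning must be covered by the written rules of engagement.

```python
# Minimal TCP connect-scan sketch for the vulnerability-scan avenue
# above. Only run against systems covered by written rules of
# engagement; the target host and port list are hypothetical.
import socket

TARGET = "10.0.0.5"                       # hypothetical in-scope host
PORTS = [21, 22, 23, 25, 80, 443, 3389]   # common services to check

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1.0)
    try:
        if sock.connect_ex((TARGET, port)) == 0:
            print(f"{TARGET}:{port} open")
    finally:
        sock.close()
```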

Each test should have a specific goal and provide different pertinent information. No single test provides an overall picture of the current security state of the target system or network, but when all the areas are evaluated and put together, organizations will have a better overall picture of their security posture.

Red Team efforts can identify multiple areas of concern such as:

  • System vulnerabilities
  • Design flaws or weaknesses
  • Personnel complacency
  • Security monitoring flaws
  • Response procedures

Using the above items, it’s possible to conduct a root cause analysis to help shore up the target system or network. Each area tested is designed to mimic the actual multi-step methods and tools used by a would-be attacker seeking to penetrate deeper or move laterally.

APT TRUE Risk

Evaluating and Understanding the TRUE Defensible Posture and Risk

The assessment team should be skilled and able to identify weaknesses and use them to gain access to the target system or network. This provides the opportunity to identify weaknesses, gauge response capabilities, and correct shortcomings. The overall goal of an assessment is to ensure the organization is as secure as possible and is prepared for future incidents.

Conclusion

Nearly every organization can benefit from some form or degree of Red Teaming. Whether the “red team” is a highly structured formal unit or a self-appointed devil’s advocate, almost every idea, concept, design, or plan benefits from healthy opposition and testing. Aiming a seasoned Red Team at a problem or system at the right time, with the proper mandate, can steer a decision maker away from an otherwise pending catastrophe, help validate the strategy and direction, and provide useful feedback and lessons learned for other projects.

In APT Red Teams – Part 3 we will develop this foundation further and discuss some actual example Red Team exercises, as well as techniques, tools, and tests to validate a defensible security posture.

Thanks for your interest!

Nige the Security Guy.

Security Strategy Retrospective

Building a strong foundation, enabling budget and support. For most organizations, justifying an ongoing investment in information security has not been easy. How do you measure the value added to the organization versus something more tangible, like implementing a new business function? What is the vision? What is the risk? What is the plan?

Security Retrospective

Back to the Future

This Security Strategy Retrospective blog takes readers back to basic principles. It is an interesting reflection on the challenges that security practitioners faced in the mid-’90s, which are very familiar and applicable … even today. Organizations are currently concerned about advanced threats with blended multi-step attacks, yet many of the practices and solutions from the past are still valid and very helpful as a key security strategy foundation.

The blog takes you on a journey back to the future to tie today’s technology, solutions, and requirements to well-proven founding principles. The magazine feature article below was originally published in June 1997 and is reproduced here in part. It is quite dated, yet it discusses persistent behavior patterns and shares key, still-valid principles. The more things change, the more they stay the same. We have been here before, so let us learn from the past.

In future blogs we will take the principles and practices described below to update for a 21st century security strategy and operations framework, in the age of the APT.

Intranet Security: An Investment in the Enterprise (circa 1997)

Simply because data on an Intranet is behind the company Firewall doesn’t mean it’s safe. Here are some issues to consider when securing your Intranet applications.

Creating a Security Strategy

Organizations are changing the way they do business. The promise of reaching new markets and easily creating new inter-enterprise relationships through Internet technology is sparking unprecedented interest in going online and moving toward electronic commerce. An organization-wide security strategy becomes vital when business plans extend that same communications model to suppliers, customers, and partners via extended networks. Internal security implemented consistently across the enterprise is therefore fundamental to both the successful evolution of the business and its ability to compete in the global marketplace.

So how do you create a secure extended infrastructure? Security is a discipline that’s daily becoming more complex as information systems migrate from mainframes to distributed client-server architectures. As stated above, the extended network adds yet another level of complexity: the need to integrate one more set of technologies into what’s already a complex heterogeneous world.

Implemented correctly, the extended network can be the ‘holy grail’ that affords the middleware to hook thin clients to legacy systems, and provides, once and for all, that elusive single sign-on user environment. The illustration below shows a simplified security architecture model that organizations can use as a framework for developing a comprehensive program of organization-wide security.

Security Architecture Framework (circa 1997)

Steps to Success

1. Create a Living Security Policy and Standards. (circa 1997)

The information security policy is the first step in the evolution toward a coherent and cohesive security architecture. Just like a development project, a security policy explicitly defines requirements, and together with strong management, change control and quality process, it provides the framework that bounds security within the organization.

What are security policy and standards?

Closed Loop Policy

Security Policy - Procedures, Standards, Process

It’s a set of rules stating what is and is not permitted in a system’s normal operations. Written in general terms, it describes the security requirements that systems must adhere to. Organizations must address a number of policy issues, for example (a toy policy-as-data sketch follows the list):

  • Who is allowed to use the resources?
  • What is the proper use of the resources?
  • Who is authorized to grant access and approve usage?
  • Who has system administration privileges?
  • What are the user’s rights and responsibilities?
  • How is sensitive information protected?
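
The questions above can also be captured as machine-checkable policy data. Here is a toy policy-as-data sketch; the roles and resources are hypothetical.

```python
# Toy policy-as-data sketch for the questions above: who may use which
# resources, and who may grant access. Roles and resources are
# hypothetical placeholders.
POLICY = {
    "finance-db":  {"users": {"finance"}, "approvers": {"ciso"}},
    "hr-records":  {"users": {"hr"}, "approvers": {"ciso", "hr-director"}},
    "public-wiki": {"users": {"finance", "hr", "engineering"},
                    "approvers": {"it-ops"}},
}

def is_permitted(role: str, resource: str) -> bool:
    """Check a role's access against the written policy."""
    entry = POLICY.get(resource)
    return entry is not None and role in entry["users"]

print(is_permitted("finance", "finance-db"))      # True
print(is_permitted("engineering", "hr-records"))  # False -> review/log
```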

A security policy and its implementation should be as unobtrusive as possible. If the policy is too restrictive, or improperly explained, it’s likely to be violated. Organizations need to be able to detect and review violations so that they can validate the policy’s effectiveness and appropriateness to address risk and fine-tune it; otherwise the policy cannot be used successfully to communicate requirements and support audits.

Opportunity Matrix Summary

Security Program Best-Practices: Opportunity Matrix

For more information, see the Security Governance Series: Security Program Best-Practices – Part 1 and onwards (circa 2013).

2. Engage in Information and Resource Asset Classification. (circa 1997)

How much money, inconvenience, and time should be spent to counter an exposure or class of exposure? Taking an inventory of all infrastructure assets facilitates the identification of all potential targets. At the rate some organizations are growing today, it’s no surprise to discover unknown active equipment on an internal network. Finding this equipment and determining ownership establishes responsibility and accountability for what occurs on the equipment.

Classification involves assessing what resources or assets an attacker might want to steal. The inventory forms a clearer picture of exactly which data is critical, and thus which applications and servers need the most protection, monitoring, and auditing. This step lets the organization focus energy – its resources and budget – on the critical and sensitive resources at the optimal level (see below), establishing a priority and minimum required level of protection so those resources can be managed correctly.

Detection Precision versus Cost

The Security Cost Function

By determining which resources need the highest level of protection, you can minimize the cost and the danger involved with putting your data on the net.
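
One way to make this cost trade-off concrete is a simple exposure score per asset (value multiplied by likelihood of attack), spending protection budget from the top of the ranking down. The figures below are hypothetical.

```python
# Hypothetical asset-classification sketch: rank assets by a simple
# exposure score (asset value x threat likelihood) so protection budget
# goes to the most critical resources first.
assets = [
    # (name, value 1-10, likelihood of attack 0.0-1.0)
    ("customer-db",   10, 0.6),
    ("source-repo",    8, 0.4),
    ("intranet-wiki",  3, 0.5),
    ("test-lab",       2, 0.7),
]

ranked = sorted(assets, key=lambda a: a[1] * a[2], reverse=True)

for name, value, likelihood in ranked:
    print(f"{name:14s} exposure={value * likelihood:4.1f}")
```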

3. Maintain an Evolving Security Standard. (circa 1997)

The previous two steps establish a baseline for the enterprise security plan. Together, the security policy and information / resources classification define the minimum standard of security: what to protect and how well to protect it. Because this standard can rarely be implemented within one budget cycle, organizations should develop an annual security program plan as part of the overall strategic roadmap.

Sample Roadmap

Security Program Roadmap

For more information, see Security Architecture Primer, Security Architecture Baseline, and Develop Security Architecture (circa 2013).

4. Conduct Security Audits and Risk Assessments. (circa 1997)

Regular security audits aren’t just for large companies. They’re for organizations of any size that have data to protect from becoming publicly available and that need to preserve confidentiality, integrity, and availability. Security audits should be carried out at least annually, depending upon how quickly the infrastructure evolves and expands. They enable the organization to measure its compliance with the security policy and determine the current situation. A risk assessment identifies and quantifies all potential exposures and threats.

Security Model Framework Interaction

At this stage, the consolidation of the current security status with the identification of potential threats allows the definition of security objectives. These objectives are the level of security (protection) the organization has decided on to protect both its own assets and those of organizational units and partners.

In the first year, the security program should schedule the protection of those resources identified as most critical or at risk. In subsequent years, the security plan should document other desired protection measures, as defined in the objectives, and any newly identified critical resources and threats.

In a perfect world, a three year security plan would allow the organization to attain its defined security objectives. However, in reality, any environment continually evolves, and the requirements for protective measures to deal with potential threats need to evolve with it. Implementation of security measures should progress on a sliding scale, ideally in pace with the objectives and threat/risk.

For more information, see Risk Assessment and Roadmap, ISO 27002 Security Benchmark, Security Health Check, and Adaptive Security Lifecycle (circa 2013).

5. Take Security Measures. (circa 1997)

Traditional security is passive and reactive. Extended network security must be aggressive, holistic and proactive. It must be focused on protecting information – the data contents of the system, not the device itself. That paradigm shift changes the rules.

A robust security solution consists of three types of measures:

  • Protection measures
  • Monitoring measures
  • Validation measures

An array of monitoring and detection solutions, such as auditing and tracking products, anti-virus software, intrusion-detection schemes, and event-logging strategies, can help security managers head off potential security threats before they evolve into serious problems.

Protective Measures

There are three categories of protective measures: prevention, detection, and recovery. Together they “Lock the Door”.

A security prevention mechanism applies and enforces security controls during the system’s operation based upon policy, classification and risk to prevent a security violation from occurring. For example, this protection could include a mechanism restricting physical access to a system or the use of logical access control mechanisms based on encryption to prevent unauthorized users from accessing resources. For an update, see Adaptive Zone Defense – Part 1 and APT Threat Analytics – Part 1.

A detection mechanism detects both attempts to violate security and successful security incidents, when or after they’ve occurred. Alarms can detect unauthorized physical access, and audit trails can detect unusual system activities after they occur. For an update, see Defensible Security Posture, APT Detection Framework and APT Detection Indicators – Part 1.

A recovery mechanism is used after a security violation has been detected, restoring the system to a pre-violation state. Examples include backup tapes and the addition of redundant hardware to a system. For an update, see APT Response Strategy – Part 1.

Validation Measures

Validation measures are proactive: “The lock is checked”. Examples are: active network probing, which involves using commercial and open-source tools to ensure that all unauthorized ports are locked down; compliance testing, using automated processes to validate the exposure to platform-specific administration vulnerabilities; and audit reviews, which entail conducting internal reviews to measure compliance with security policy. For an update, see: APT Red Teams – Part 1.

Monitoring Measures

Monitoring measures are reactive: “The security camera detects an intruder”. Examples of these include audit trails, activity monitoring, and policy breach detection.

It’s important to have monitoring and detection solutions that do most of the monitoring work, then send alarms that alert the network security team to threats. In addition, the best buys in monitoring and detection products are those that can analyze data, rather than just record it. There’s an emerging market for detect-and-alert technologies that correlate persistent behavior patterns before the damage is done.
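
As a toy example of analysis rather than mere recording, the following sketch correlates repeated authentication failures per source within a sliding time window and raises an alert; the event stream, window, and threshold are hypothetical.

```python
# Toy monitoring sketch: analyze events rather than just record them by
# correlating failed logins per source IP inside a sliding window; the
# event stream, window, and threshold are hypothetical.
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # 5-minute sliding window
THRESHOLD = 5          # failures before alerting

recent = defaultdict(deque)  # source IP -> timestamps of recent failures

def on_failed_login(source_ip: str, timestamp: float) -> None:
    window = recent[source_ip]
    window.append(timestamp)
    # Drop failures that fell out of the window
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= THRESHOLD:
        print(f"ALERT: {len(window)} failed logins from {source_ip} "
              f"in {WINDOW_SECONDS}s")

# Simulated event stream
for t in range(0, 120, 20):
    on_failed_login("203.0.113.7", float(t))
```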

For an update, see the upcoming APT Operational Maturity and APT Intelligent Operations blogs.

6. Keep Track of Security Violations. (circa 1997)

A key security function is the centralized logging of application and system accesses. All the security in the world won’t do any good if there’s no way of identifying, investigating, and pinpointing breaches.

Holistic Logging

In any organization, violations of security policy will inevitably occur. Often, a violation can be classified as caused by an individual’s negligence, an accident or mistake, ignorance of the current policy, or deliberate disregard of the policy. Security vulnerabilities can also easily arise from system administrator mistakes that introduce weaknesses, never mind the myriad patches needed. Obviously, effective audit capabilities must be in place for an organization to fully investigate a violation and determine its classification, what damage has occurred, and what action to take.

But who watches the administrators? Many organizations are tailoring products to audit the administrator, as a hierarchical super-manager overseeing a large distributed base. Network administrators can unwittingly create security holes during routine maintenance and administration duties. The rule of “least privilege” with “separation of duties” is the best practice, where no one person has the keys to the kingdom in all domains, and administration functions, like access control, are shared.

7. Be Prepared. (circa 1997)

The final layer of security is the preparedness to act immediately and correctly when an incident occurs, and to capture and process any perceived threat or incident. This requires the establishment of a reporting process, a response team, and precise well-defined procedures. The need for proper training of staff can’t be over-emphasized, and is a wise investment, saving the organization a great deal of trouble and money in the medium to long term. Tiger teams are very popular ways of honing skills and gaining expertise with security tools. They comprise internal teams of system and network administrators whose purpose it is to test security monitoring and protection measures.

For more information see APT Response Strategy and APT Red Teams (circa 2014).

APT Incident Response

Conclusion (circa 1997)

For most IT departments, justifying an ongoing investment in information security has not been easy. How do you measure the value added to the organization versus something more tangible, like implementing a new business process? Companies often view security measures as insurance, and the risks the measures address are often never quantified and, as such, accepted. Thus, security is a low priority item and the first task to suffer in a budget cut.

However, since the advent of Internet technology and extended networks, the return on investment is very clear. A secure extended network is the key to enabling technology that facilitates the ability to exchange goods, services, and information electronically in a stable, well-managed infrastructure. This provides interaction with customers, partners, and employees promoting loyalty and allowing the enterprise to gain a competitive edge.

Coming Soon

In the upcoming APT Operational Maturity and APT Intelligent Operations blogs we will build upon this dated but valid Security Strategy Retrospective foundation with a similar but updated building-block layered security model and an evolving maturity scale. We will discuss the need for a continuously evolving next-generation SIEM, risk management processes, and network behavior anomaly detection that enable organizations to take security operations and situational awareness to the next level, depending upon various factors including threat/risk profile.

Thanks for your interest!

Nige the Security Guy.