Defensible Security Posture – Part 2

Defensible Security Posture – Part 1

How can you leverage the Defensible Actions Matrix? A defensible actions matrix defines processes and procedures that can impact an attacker’s capability at various stages of the cyber kill chain.

Cyber Kill Chain

In Defensible Security Posture – Part 1 we introduced the concept of the Cyber Kill Chain. As a recap, a “kill chain” describes the progression an attacker follows when planning and executing an attack against a target. Understanding the signature of an APT helps align defensive capabilities, i.e., to identify security controls and actions that can be implemented or improved to detect, deny, and contain an attack scenario.

The APT Signature and Cyber Kill Chain

A complex incident may involve multiple kill chains with different objectives that map to various phases of the attack. For example, reconnaissance is performed to assess target feasibility and develop an attack plan. Attackers may also perform further reconnaissance after gaining an initial foothold in the internal network, to revise their strategy for lateral movement and persistence.

Defensible Actions Matrix

In this Defensible Security Posture – Part 2 blog we provide a case study that makes use of the Defensible Actions Matrix and offer some practical defensive best-practices. The basic idea of a Defensible Security Posture is that you are not striving for an absolute, but rather for a position (or posture) that can be defended even when it is infiltrated. Common factors associated with APT attacks include the following:

  • Sudden increases in network traffic, outbound transfers
  • Unusual patterns of activity, such as large transfers of data outside normal office hours or to unusual locations
  • Repeated queries to dynamic DNS names
  • Unusual searches of directories and files of interest to an attacker, e.g., searches of source code repositories
  • Unrecognized, large outbound files that have been compressed, encrypted, or password-protected
  • Detection of communications to/from bogus IP addresses
  • External accesses that do not use local proxies or requests containing API calls
  • Unexplained changes in the configurations of platforms, routers or firewalls
  • Increased volume of IDS events/alerts

The more detailed detection framework presented in APT Detection Framework is used to analyze potential attack scenarios based upon a threat/risk profile, to more definitively identify the above factors as well as any detection gaps, while the Defensible Actions Matrix summarizes them in terms of actions and controls.

The following example depicts a sample actions matrix using the actions of detect, deny, disrupt, degrade [optional], deceive [optional], and contain. Documenting the capabilities defenders can employ in this matrix as a worksheet enables organizations to quickly assess their Defensible Security Posture as well as identify any gaps or needed compensating controls.
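To make the worksheet idea concrete, here is a minimal sketch of how such a matrix could be captured as data and checked for gaps. The phase names follow the kill chain discussed above; the specific controls and the Python representation are illustrative assumptions, not a prescribed format.

```python
# Hypothetical worksheet: kill-chain phases mapped to defensive actions and
# the controls an organization believes cover them. Empty lists reveal gaps.
KILL_CHAIN_PHASES = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]
ACTIONS = ["detect", "deny", "disrupt", "contain"]

# Example entries only -- populate from your own control inventory.
matrix = {
    "Delivery": {
        "detect": ["Network IDS", "Email gateway alerts"],
        "deny": ["Ingress firewall rules"],
        "disrupt": ["Inline IPS"],
        "contain": ["Network segmentation"],
    },
    "Command and Control": {
        "detect": ["DNS sinkhole logs", "NetFlow anomaly alerts"],
        "deny": ["Outbound proxy allow-list"],
        "disrupt": [],
        "contain": ["Zone-based default-deny policy"],
    },
}

def find_gaps(matrix):
    """Return (phase, action) pairs with no documented control."""
    gaps = []
    for phase in KILL_CHAIN_PHASES:
        for action in ACTIONS:
            if not matrix.get(phase, {}).get(action):
                gaps.append((phase, action))
    return gaps

for phase, action in find_gaps(matrix):
    print(f"Gap: no control to {action} at the {phase} phase")
```

Even this toy version makes the exercise useful: every printed line is either a missing control or a missing piece of documentation, and both are worth knowing about.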

Example Defensible Actions Matrix

Evolving Industry Best-Practices

To provide a foundation for the Defensible Actions Matrix, we offer a sampling of practical, APT-aligned defensive best-practices as an example. These will be expanded into a more complete APT best-practices framework in future blogs.

Organizations should use a comprehensive, programmatic approach; no single technology will stop advanced attacks, even products specifically targeted at advanced forms of attack. Ongoing integration and sharing of security intelligence among disparate security technologies and with external organizations should be a security program goal (see: APT Threat Analytics).

As such, organizations should review existing technologies and increasingly use the advanced features in the latest products or services to keep up with changes in the threat landscape. This should be done with a mindset of integrating and unifying security processes across technologies, so that a coordinated response to threats is possible and breach events are detected and reduced as a result.

Security Governance

Social Media and Information Sharing: Attackers often leverage publicly available information on websites and social media to find information about an organization that can be useful in planning an attack. Information sharing and social media policies should define how material should be handled and exposed via public channels.

Configuration Management: Configuration standards define templates to consistently configure applications and systems based on role, hardening them, removing unnecessary services, and eliminating defaults. Define processes that are enforced with change control to protect infrastructure integrity and limit the ability of attackers to exploit infrastructure to deliver malicious software to targeted systems.

Privileged Access Management: Organizations face significant security exposure in the course of routine IT operations. For example, dozens of system administrators may share passwords for privileged accounts on thousands of devices. When system administrators move on, the passwords they used during their work often remain unchanged, leaving organizations vulnerable to attack by former employees and contractors.

Organizations should grant user and system accounts, using role-based access, the least privilege needed to perform the job. Processes to create, audit, and remove accounts and access levels should be well-defined.

Shared Service Accounts: Organizations should ensure that service accounts, including default credentials provided with third-party software, are properly secured with defaults removed and provided only to those who need them to perform their job function.

Database Account Security: Organizations should manage and audit database accounts as part of a larger account management process. This includes ensuring that accounts are granted only the necessary level of access. Many organizations do not fully leverage the security built into databases; DBAs often use default admin accounts and users are given full admin rights.

Two-factor Authentication: Two-factor authentication is increasingly mandatory for networks or zones with critical data and/or servers. It reduces the effectiveness of password stealing and cracking attempts.

Network Security Components

Threat Intelligence: A threat intelligence capability leveraging internal and/or external sourced visibility can provide an indication that threat actors are focusing on specific types of attacks and indicators to detect these attacks. For more information see APT Threat Analytics.

Network Zones / Segmentation: Limiting and intelligently managing communications between services and systems on an organization’s network helps contain an infection or compromise and keeps malware or a persistent threat from running rampant. Ensure proper zoning and segmentation is performed in the internal network environment, not just the DMZ, and that proper firewall logging and inspection is performed between high- and low-security segments. Treat every system as untrusted. For more information see the Adaptive Zone Defense blog.

Advanced Threat Protection: Evaluate and deploy a network-based advanced threat detection/prevention technology to reduce the potential impact of zero-day malware and other targeted attacks. Review existing advanced threat detection/prevention technology and ensure that the prevention capabilities are validated, tested and fully leveraged.

Context-Awareness: Evaluate and leverage context-aware security capabilities of security platform providers. Security platforms must become context-aware — identity, application, content, location, geo-location and so on — in order to make better information security decisions regarding APTs.

Firewall Rules/ACLs: Review and, if necessary, adjust ingress network firewall rules on a regular basis in order to ensure only critical inbound services are permitted to enter the network. This also includes geographical blocking or filtering at the country level where possible based on business need. For more information on Firewall Rule Lifecycle Management, see Security Program Best-Practices – Part 5.

Egress / Outbound Filtering: Egress filtering enables a managed perimeter with a focus on well-defined outbound policy. It declares the acceptable protocols and destination hosts for communication with internal systems, with a focus on any systems with critical or regulatory data, such as PII, ePHI, PCI and so on.

Network activity associated with remote control can be identified, contained, and disrupted through the analysis of outbound network traffic, using open source software tools.
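As a rough illustration of that kind of outbound analysis, the following sketch flags large transfers to unapproved destinations or outside office hours. The CSV column names, thresholds, and allow-list are assumptions; adapt them to whatever your firewall, proxy, or NetFlow exporter actually produces.

```python
# Minimal sketch: flag large outbound transfers to destinations outside an
# approved egress allow-list or outside office hours. The log layout
# (timestamp, src_ip, dst_ip, dst_port, bytes_out) is an assumption.
import csv
from datetime import datetime

ALLOWED_DESTINATIONS = {"203.0.113.10", "203.0.113.11"}   # example placeholders
LARGE_TRANSFER_BYTES = 50 * 1024 * 1024                   # tune to your baseline
OFFICE_HOURS = range(7, 19)                               # 07:00-18:59 local

def review_outbound(log_path):
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            ts = datetime.fromisoformat(row["timestamp"])
            dst = row["dst_ip"]
            sent = int(row["bytes_out"])
            off_hours = ts.hour not in OFFICE_HOURS
            unusual_dst = dst not in ALLOWED_DESTINATIONS
            if sent >= LARGE_TRANSFER_BYTES and (off_hours or unusual_dst):
                print(f"review: {row['src_ip']} -> {dst}:{row['dst_port']} "
                      f"{sent} bytes at {ts.isoformat()}")

# review_outbound("outbound_connections.csv")
```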

Remote Access / VPN: Implement internal inspection devices, such as intrusion prevention system (IPS) and network behavior analysis (NBA) technologies between any VPN termination device and the internal network environment so that attacks or behaviors can be discovered or prevented within the remote access network infrastructure.

DNS Sinkholes: A DNS-based sinkhole monitors for name resolution attempts against known malicious or suspicious domains. The resolution response is modified to point to an internal sinkhole server, where the malicious or suspicious traffic is routed for further analysis and containment. In addition, block “uncategorized” web sites at proxies and employ split-DNS and split-routing where possible.
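The sketch below shows one hedged way to approach the monitoring half of a sinkhole: scanning DNS query logs for lookups of blocklisted domains that should be answered with an internal sinkhole address. The log format, file names, and sinkhole IP are assumptions; a production sinkhole is usually implemented in the resolver itself (for example, BIND response policy zones) rather than in a script.

```python
# Minimal sketch, not a production resolver: scan DNS query logs for
# resolution attempts against known-bad domains so they can be redirected
# to an internal sinkhole for analysis.
SINKHOLE_IP = "10.10.10.10"          # internal analysis host (example)

def load_blocklist(path):
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def check_queries(query_log, blocklist):
    """Log lines assumed as: '<timestamp> <client_ip> <queried_domain>'."""
    hits = []
    for line in open(query_log):
        parts = line.split()
        if len(parts) < 3:
            continue
        ts, client, domain = parts[:3]
        domain = domain.rstrip(".").lower()
        # match the domain itself or any parent zone on the blocklist
        labels = domain.split(".")
        candidates = {".".join(labels[i:]) for i in range(len(labels) - 1)}
        if candidates & blocklist:
            hits.append((ts, client, domain))
            print(f"{ts} {client} asked for {domain} -> answer with {SINKHOLE_IP}")
    return hits

# hits = check_queries("dns_queries.log", load_blocklist("malicious_domains.txt"))
```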

Network IPS: Network intrusion prevention systems (IPS) can actively block network traffic matching patterns associated with malware command-and-control (C2) communication and data exfiltration. Review NG-IPS features and ensure the platform provides host and traffic anomaly detection (for example, by processing NetFlow data) and has capabilities to prevent, or at minimum detect and alert on, anomalous traffic exiting through your perimeter devices.

Network IDS: A network intrusion detection system (IDS) can identify traffic patterns matching network-based scanning, malware C2 mechanisms, and data exfiltration. For more information see: APT Detection Indicators.

Network Security Monitoring: Validate that monitoring controls are in place and appropriate levels of logging are performed off-device in centralized log servers. Deploy security information management systems so that attacks can be detected or analyzed through additional analysis or correlation of incoming events.

Make sure that network visibility extends into virtualized environments either by tapping internal virtual switch traffic out for external inspection or by virtualizing IPS capabilities and running directly within the virtualized environment.

Form a Security Operations Center (SOC) or designate specific individuals to operate as a security operations center in order to properly monitor and respond as well as perform initial triage status for security events. When suspicious anomalies or alerts are received by the security operations center, invoke the incident response process.

Incident Response: Organizations should have a response plan for handling incidents as well as periodically review and test the plan. For more information on response readiness and preparedness please see APT Response Strategy.

Application Security

Web Application Firewalls: Review Web application firewall configuration and implement vendor-recommended prevention settings. Prefer application firewalls that have the capability to share intelligence via reputation feeds, offer fraud detection services, and offer the capability to perform browser and endpoint security and spyware infection assessment.

Endpoint Protection: Host-based malware protection solutions, including antivirus software, host intrusion prevention systems, and advanced malware protection solutions, help identify, alert on, and block malicious software.

File Integrity Monitoring: File integrity monitoring involves monitoring system files for unauthorized changes and is often deployed as part of a larger software change management process.
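A minimal sketch of that idea follows, assuming a SHA-256 baseline stored as JSON; the monitored paths and baseline file name are placeholders. Mature FIM products add real-time hooks, exclusions, and tamper protection that this illustration omits.

```python
# Minimal file integrity monitoring sketch: build a SHA-256 baseline of
# monitored files, then re-scan and report additions, deletions, and changes.
import hashlib, json, os

MONITORED = ["/etc", "/usr/local/bin"]      # adjust to the systems you protect
BASELINE = "fim_baseline.json"

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan():
    state = {}
    for root_dir in MONITORED:
        for root, _dirs, files in os.walk(root_dir):
            for name in files:
                path = os.path.join(root, name)
                try:
                    state[path] = hash_file(path)
                except OSError:
                    continue          # unreadable files are skipped, not fatal
    return state

def compare(old, new):
    for path in new.keys() - old.keys():
        print(f"added:    {path}")
    for path in old.keys() - new.keys():
        print(f"removed:  {path}")
    for path in old.keys() & new.keys():
        if old[path] != new[path]:
            print(f"modified: {path}")

# First run:  json.dump(scan(), open(BASELINE, "w"))
# Later runs: compare(json.load(open(BASELINE)), scan())
```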

Application Whitelisting: Application whitelisting defines a limited set of software that can be run on a system. Application whitelisting requires continual management of the list of allowed software to keep up with application and operating system updates.

Data Loss Prevention (DLP): Data loss prevention solutions use information tagging, packet inspection, and network monitoring to identify the potential movement of sensitive data outside the network. In addition, organizations can implement policies to manage the use of removable storage devices such as USB drives, limiting their use in stealing sensitive information.

Security controls can have various impacts based on their purpose and implementation. Ultimately, the goal of a security control is to detect malicious activity, deny it access to targeted assets, disrupt malicious activity that is actively in progress, or contain malicious activity to an area where damage can be mitigated.

The matrix illustrated below provides a partial example worksheet that applies the above best-practices and organizes the controls according to whether their primary goal is to detect, deny, disrupt, or contain.

Best-Practice Defensible Actions Matrix Use Case [Partial]

Conclusion

Recent incidents clearly demonstrate that cybercriminals can conduct operations involving intrusion, lateral movement, and data exfiltration in complex networks secured to current best-practices. Attackers adapt their techniques to the unique circumstances of the targeted environment.

This level of resourcefulness points to the realization that current best-practices and regulatory compliance are a necessary minimum baseline but are not sufficient alone. Today there is an increasing need for organizations to progressively evolve from their current security posture to a more defensible and advanced program with visibility, validation, and vigilance.

Our solution, and the prime basis for this site, is to adopt a security architecture and design foundation that compartmentalizes breaches into small zones on networks and on endpoints: strategically leverage the Adaptive Zone Defense blog to develop an architecture with well-organized applications and services, managed communications, and good visibility into the flows and logs that can actually detect cyber kill chain activity and stop the breach.

This requires an ongoing lifecycle process to take the legacy, rapidly deployed, and chaotic infrastructure at the edge (innovation) and consolidate it into the core foundation based on the architecture/design blueprint, while continually evolving the blueprint based on new business requirements, technology solutions, and regulatory requirements. For more information see: Adaptive Security Lifecycle.

Coming Soon

In this series we will discuss advanced APT-focused best-practices that enable organizations to take their security to the next level, building from Basic to Augmented, through APT-specific Countermeasures, to Advanced Security, depending upon various factors including assets and threat/risk profile.

Evolution Lifecycle using Security Best-Practices

In the upcoming APT Operational Maturity and APT Intelligent Operations blogs we will also discuss the need for a continuously evolving next-generation SIEM, risk management processes, and network behavior anomaly detection that enable organizations to take security operations and situational awareness to the next level, depending upon various factors including threat/risk profile.

References

This Defensible Security Posture – Part 2 blog is also a part of the APT Strategy Series and Security Architecture Series. For a complete listing of all NigeSecurityGuy blogs see the Security Series Master Index.

Thanks for your interest!

Nige the Security Guy.

APT Threat Analytics – Part 1

APT Threat Analytics – Part 2

How can you predict emerging threats? Threat intelligence and analytics continues to dominate the headlines and attention of organizations seeking viable options in their escalating battle against advanced threat actors.

Threat intelligence is evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging threat to assets. It is used to inform decisions regarding response in terms of prevention enhancements with mitigating controls or detection adjustments (rules or triggers).

This APT Threat Analytics – Part 1 blog discusses best practices to identify internal and external data sources, and to help overcome the many operationalization challenges in taking an emerging program to the next level as a strategic component.

Value Proposition

The reality of continuously evolving, persistent, sophisticated, and targeted attacks is wreaking havoc. Many enterprises have only fragmented or partial security monitoring in place, lacking complete visibility into what is happening in their environments. Detecting this stealthy activity, often aimed at specific targets, requires the ability to monitor and compare anomalous behavior over time, connecting the dots among multiple events.

The effective use and sharing of actionable threat intelligence has the potential to fundamentally change the way organizations prepare for and respond to attacks. Gathering threat intelligence from various sources and performing risk-based analytics enables mitigating controls to be proactively identified – applied to the APT Detection Framework to assess coverage and gaps – and put in place to provide better protection against emerging threats and vulnerabilities.
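As a hedged illustration of “gather intelligence from various sources and perform risk-based analytics”, the sketch below merges several indicator feeds and ranks indicators by the number and weight of the sources reporting them. Feed names, weights, and the one-indicator-per-line format are assumptions for illustration.

```python
# Hedged sketch of feed aggregation: merge indicators from several sources,
# track who reported each one, and prioritize indicators seen by multiple or
# higher-confidence sources.
from collections import defaultdict

FEED_WEIGHTS = {"internal_ids": 0.9, "commercial_feed": 0.7, "osint_list": 0.4}

def load_feed(path):
    """One indicator (IP, domain, or hash) per line."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def aggregate(feeds):
    """feeds: mapping of feed name -> iterable of indicators."""
    scores = defaultdict(float)
    sources = defaultdict(set)
    for name, indicators in feeds.items():
        weight = FEED_WEIGHTS.get(name, 0.5)
        for ind in indicators:
            scores[ind] += weight
            sources[ind].add(name)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [(ind, round(scores[ind], 2), sorted(sources[ind])) for ind in ranked]

# example: aggregate({name: load_feed(f"{name}.txt") for name in FEED_WEIGHTS})
```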

Terminology Definition

Risk management processes and frameworks are an important component of threat analytics, needed to make sense of the feeds and gain actionable, tangible benefit. The importance of these fundamentals will become clearer as we progress through this blog.

Risk is defined as the likelihood that a particular threat using a specific attack will exploit a particular vulnerability of a system that results in undesirable consequences.
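A simple, illustrative way to turn that definition into a working number is to score the threat likelihood, the vulnerability exposure, and the impact of the undesirable consequence, then combine them. The 1-5 scales and the multiplication below are assumptions standing in for whichever methodology your risk framework prescribes.

```python
# Illustrative scoring only -- not a formal methodology. It mirrors the
# definition above: the likelihood of a threat exploiting a vulnerability,
# combined with the impact of the undesirable consequence.
def risk_score(threat_likelihood, vulnerability_exposure, impact):
    """Each input is 1 (low) to 5 (high); result is 1-125, higher = riskier."""
    for value in (threat_likelihood, vulnerability_exposure, impact):
        if not 1 <= value <= 5:
            raise ValueError("scores must be between 1 and 5")
    return threat_likelihood * vulnerability_exposure * impact

# e.g. a capable threat actor (4) against an unpatched internet-facing
# service (5) holding regulated data (5):
print(risk_score(4, 5, 5))   # 100 -> treat as a top remediation priority
```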

Risk Framework

Risk Management – The process concerned with the identification, measurement, control and minimization of security risks in information systems to a level commensurate with the value of the assets protected.

Threats – Threats exist simply as the potential for an event to occur. Attacks occurring to others, but not to us, are threats, while threats that have materialized as attacks on us have moved out of the threat phase and into the attack phase.

Catalyst – A catalyst serves as the trigger that causes a threat to be realized as an attack. Business conditions, financial gain, and political motivation may all serve as triggers. A catalyst moves the flow of risk from existing as a threat to occurring as an attack.

Attack – Attacks are threats that have become actualized events that may have an adverse impact. Attacks may be intentional or non-intentional and may occur as directed or non-directed.

Vulnerability – Vulnerabilities can allow an attack to: pose as another user or entity, execute commands or access data that is contrary to the specified access restrictions, reduce availability of an asset, or to hide activities.  Vulnerabilities may also be considered as a point of entry that an attack may attempt to use to gain access to the system or data.

Back to Basics: Placing Threats in Context

Anything that may increase risk according to security policy, standards or guidelines may also be considered a vulnerability.  While vulnerabilities may have many causes, some of the most critical are apathy and ignorance. Lack of awareness of security issues is a common cause of vulnerabilities.

Impact – The final phase of risk is actual impact. Loss of integrity, compromise of confidentiality, or reduction of availability of any asset has an adverse effect on the business. The implications of a threat becoming an attack that succeeds in causing disclosure, destruction, or alteration of business assets range from financial loss to reputation damage.

Once risk causes impact, the role of security solutions is to reduce that impact, prevent additional impact, and/or institute recovery processes.

Threat Profile

A problem that many companies face is that they don’t fully understand the threats against their organizations. Creating a threat profile is a key step in understanding what threats a company faces and the potential impact if an attack were to be successful. A threat profile can also help companies prioritize resources in order to successfully defend sensitive data.

There are different ways that organizations can create threat profiles; we will cover examples of applicable risk management frameworks and threat profiles in future blogs. The best option is to use or evolve established risk management processes or frameworks.

Risk Management Methodology Example

Threat profiles based on typical risk management methodology include scenarios about known sources of threats and typical outcomes. This valuable information enables organizations to move from being vulnerability-centric, with no real prioritization or protection of assets, to a more threat-centric posture, including strategies for defense and incident response.

Threat Intelligence Sources

The goal of threat intelligence is to reduce the danger of zero-day malware and other unknown attacks by providing early detection and near-real-time alerts.

There is certainly no shortage of threat intelligence sources and the staggering volume of data can be paralyzing. When it comes to intelligence, less is more. It is much better to have a smaller number of high-value sources than to be inundated with low-value ones.

In APT Threat Analytics – Part 2 we will discuss both commercial and open source projects. Vendors compete head-to-head on product features, functionality, and usability. The key point is that they should also compete on the overall accuracy, timeliness, and scope of their security intelligence – as well as on standard APIs for open security intelligence integration with standardized data formats.

By their very nature, open source solutions are extensible and offer the ability to act on intelligence. Ideally, CISOs should become active lobbyists for an open security intelligence market. This could help avoid the coming deluge of security intelligence, as the right feeds are selected that align with the organization and the intelligence is used for its intended purposes – to lower risk and accelerate incident detection and response.

Threat Data Sources – Internal

For many organizations that consume threat intelligence, a key challenge is how to consume and act on that intelligence. Organizations should start small with just a few threat sources: begin with a smaller internal prototype to deal with variances in data quality and relevancy, work out the process and the kinks to derive useful, actionable results, then expand externally in iterative steps.

Once an analyst has gone through the intelligence cycle and has a qualified threat indicator, they must be able to apply it to the organization’s security controls.

Threat Data Sources - Internal

Threat Data Sources – Global

Security teams need even greater coverage of malicious activity from multiple sources to have a better understanding of what’s going on globally—as opposed to just the network under their control. They are turning to external systems that provide worldwide data correlation and analysis.

Most enterprises don’t have the staff or resources to do their own external threat intelligence gathering, so it makes sense to subscribe to a service that provides prepackaged threat intelligence data. This can be used in conjunction with managed security devices or fed into in-house sensors to better understand developing threats.

Threat Data Sources - Global

For organizations that leverage APT Red Teams, the red team becomes another invaluable threat source and a key validation of the organization’s current security posture, enabling continuous improvement and optimization from counter-intuitive sources and helping mitigate advanced threats.

The Importance of Analytics

To make threat intelligence usable within your organization, you must rein in all the data and make it scalable. Analysts need a platform that enables the prudent integration of both internal and external intelligence.

Intelligence isn’t all about sources and indicators … it is about analysis

Threat indicators typically include data items like IP addresses, fully qualified domain names, MD5/SHA-1 hashes, SSL certificates, filenames, mutexes, and registry keys. Much of today’s threat intelligence comes in the form of simple indicators. Unfortunately, attacks aren’t simple, and lists of indicators alone don’t always offer value to the organization. Please see APT Detection Indicators for more information on developing and using Indicators of Compromise (IoCs).
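For illustration, the sketch below normalizes a mixed list of such indicators into IPs, domains, and hashes and checks observed values against them. The classification heuristics are assumptions; real feeds usually arrive with explicit types in STIX, CSV, or JSON.

```python
# Minimal sketch: normalize a mixed bag of indicators (IPs, domains, hashes)
# and check observed values from logs against them.
import ipaddress, re

MD5_OR_SHA1 = re.compile(r"^[0-9a-f]{32}$|^[0-9a-f]{40}$")

def classify(indicator):
    value = indicator.strip().lower()
    try:
        ipaddress.ip_address(value)
        return "ip", value
    except ValueError:
        pass
    if MD5_OR_SHA1.match(value):
        return "hash", value
    return "domain", value.rstrip(".")

def build_index(indicators):
    index = {"ip": set(), "domain": set(), "hash": set()}
    for raw in indicators:
        kind, value = classify(raw)
        index[kind].add(value)
    return index

def match(observed_kind, observed_value, index):
    return observed_value.strip().lower() in index.get(observed_kind, set())

iocs = build_index(["198.51.100.7", "bad-cdn.example",
                    "d41d8cd98f00b204e9800998ecf8427e"])
print(match("domain", "BAD-CDN.example", iocs))   # True
```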

From Feeds through Analysis to Governance Actions

Correlation is key to turning all this data into intelligence, so that defenses can be updated and the gaps and window of exposure closed more quickly. Chaining together multiple events and alerts makes it possible to identify sequences of activity that equate to a far more severe incident. Correlation also helps validate the severity of threats by lowering the rate of false positives and noise.
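A minimal sketch of that chaining idea: group alerts by internal host and flag hosts that exhibit several distinct kill-chain stages within a time window. The event fields, stage labels, and thresholds are assumptions.

```python
# Hedged sketch of event chaining: several distinct stages on one host inside
# a window is far stronger evidence than any single alert.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
MIN_DISTINCT_STAGES = 3

def correlate(events):
    """events: list of dicts with 'time' (ISO 8601), 'host', and 'stage'."""
    by_host = defaultdict(list)
    for ev in events:
        by_host[ev["host"]].append((datetime.fromisoformat(ev["time"]), ev["stage"]))
    incidents = []
    for host, items in by_host.items():
        items.sort()
        for i, (start, _) in enumerate(items):
            stages = {stage for ts, stage in items[i:] if ts - start <= WINDOW}
            if len(stages) >= MIN_DISTINCT_STAGES:
                incidents.append((host, sorted(stages)))
                break
    return incidents

events = [
    {"time": "2014-03-01T09:05:00", "host": "ws-042", "stage": "delivery"},
    {"time": "2014-03-01T09:20:00", "host": "ws-042", "stage": "c2-beacon"},
    {"time": "2014-03-01T16:40:00", "host": "ws-042", "stage": "lateral-movement"},
]
print(correlate(events))   # [('ws-042', ['c2-beacon', 'delivery', 'lateral-movement'])]
```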

Analysts need robust capabilities to query and interact with raw intelligence to determine the implications for their organization. The ability to pivot and uncover relationships that would otherwise remain obscure is key. Visualization should also be a core capability, making analysts’ jobs easier.

Feeds and Feedback

Once the organization’s threat profile has been created (from above) and the consumers of the threat feeds have been identified, the data has to get to the right places so it can be acted on. It is easiest to get this information to the security operations group, because the majority of threat intelligence data will go directly into the SIEM.

Information Security Management Ecosystem

An important, and sometimes overlooked, step in the threat intelligence consumption process is providing feedback on the relevance of the data received and the effectiveness of action taken.

Conclusion

The rise of threat intelligence services is helping enterprises gain more insight and visibility into global and industry-specific threats. But organizations have to figure out how to take that information, make it actionable and put it in the right hands so it gets implemented in a timely manner.

The key feature of leveraging threat intelligence with analytics is that it produces predictive threat warnings and mitigation advice by monitoring security events from a wide and diverse variety of sources. Using heuristics and correlation techniques to analyze millions of global events, organizations look to uncover malicious activities.

 

Instead of using traditional signature-based analysis at the network perimeter, threat management (or SIEM) platforms use IP, URL, and file reputation services; contextual analysis; and behavioral rule sets to uncover and block access to malicious content, with some even adjusting or changing their security strategies in real time.

The big advantage is that, done right, they consolidate threat, vulnerability, risk, fraud, spam, phishing, attacker, and network intelligence information, overcoming the problem of information that is fragmented, disparate, and stored in silos.

Coming Soon

In APT Threat Analytics – Part 2 we touch upon commercial, government and open source threat intelligence sources. We will focus on interesting emerging standards, such as Structured Threat Information Expression (STIX) as well as Trusted Automated Exchange of Indicator Information (TAXII) which are beginning to be embraced by startups such as Confer.

For open source, Bro Network Security Monitor, OSSIM, Snort, and YARA are just a few examples of projects that enable you to share threat intelligence across your ecosystem. The Security Onion packages many open source projects into a Linux distribution (for more information see APT Detection Indicators).

In future parts we will also delve further into the process of reviewing external feed sources (pros and cons), triage, building context, validation of data, and performing analysis as well as discuss the development of organization threat profiles based on risk management process.

References

This APT Threat Analytics – Part 1 blog is also a part of the APT Strategy Series and Security Architecture Series. For a complete listing of all NigeSecurityGuy blogs see the Security Series Master Index.

Thanks for your interest!

Nige the Security Guy.

APT Response Strategy – Part 1

APT Response Strategy – Part 2
APT Response Strategy – Part 3

How do you implement a Plan C? Organizations are starting to recognize the ever-increasing importance of rapid, well-orchestrated incident response capabilities as a key component of their defense-in-depth strategy.

Increased complexity and frequency of attacks, combined with the reduced effectiveness of preventative security controls, elevate the need for enterprise-scale security incident response.

The APT Response Strategy (aka APT-IR) series of blogs discusses the development and evolution of an effective rapid incident response program: a call to action.

There is no silver bullet for advanced persistent threats (APT). Preventative controls such as next-generation firewalls (NG-FW), endpoint protection platforms (EPP), intrusion prevention systems (IPS), data loss prevention (DLP) tools, Web application firewalls (WAF), application whitelisting, and sandboxes are all key components, but none of them is a guarantee.

Misplaced trust in the ability of preventative controls to stop an attack results in an excessive focus on them in security planning and operations versus response.

“If you go back to the definition of security being protection, detection and response, this feels like the last area that needs work, and the idea of incident response coordination and working on a response is really important and something that isn’t there,”
Bruce Schneier.

Organizations need to stop treating response as just a backup plan if things go wrong and start making it a core, active part of their operational strategy, building the capability, preparedness, and readiness to deploy it rapidly.

Plan A: Prevention

This is one of the most important challenges facing the security industry, if not the most important. Prevention is crucial, and we cannot lose sight of that goal, but we must accept that no barrier is impenetrable; detection and response represent an extremely critical line of defense.

The range of possible breach scenarios is immense and rapidly growing; protecting against them all is challenging, if not impossible. It requires identifying the right controls and resources and where to allocate them to counter the threats. Leveraging APT Red Teams to study target selection and attack difficulty is a useful way of understanding and addressing the more likely threat vectors, driving continuous improvement.

Bottom line: prevention and preventative security controls will fail. Plan B depends upon the ability to detect and contain.

The Defensive Line: Incident Response

Plan B: Detection

In the Cisco 2014 Annual Security Report, Cisco found that of a sample of 30 large firms it studied, 100 percent had traffic going to DNS locations pointing at websites hosting malware, 96 percent had traffic going to compromised servers, 88 percent to suspicious FTP servers, and 79 percent had PCs tunneling connections using VPNs.

Malicious actors are using trusted applications and connections to exploit gaps in perimeter security. Indicators of compromise suggest network penetrations may be undetected over long periods.

Timespan of Detection and Response Rates

As the Verizon DBIR 2013 illustrates, most breaches are not discovered internally or contained for weeks, months or even longer. While it can be difficult to detect, positively identify, and respond to an intrusion within minutes or hours, our ability to do so should ostensibly increase the longer attackers persist on internal networks.

Even the most advanced cyber-attacks follow a multi-step process that begins with network reconnaissance and eventually ends with data being exfiltrated. This gives organizations multiple opportunities to detect an attack and prevent it from spreading across the entire network and impacting critical data.

The APT Detection Framework blogs help identify detection gaps while the APT Detection Indicators blogs provide a tutorial on indicators of compromise.

Bottom line: detection also fails on a frequent basis. What remains is Plan C, which depends upon effective incident response.

Plan C: Response

Today, being well prepared for incident response is one of the most cost-effective security measures any organization can take: since security incidents are effectively inevitable, preparedness limits the damage and reduces the incident’s impact and costs.

Organizations must elevate incident response as a key component of their overall business strategy, making sure that all the right components are in place to deal with unforeseen incidents.

Effective incident response means that even when a host or account is compromised, it is short-lived and has a near-zero impact on the business. In order to achieve this, organizations must develop the right mix of people, processes and tools. This grows more and more important as threat actors creatively exploit vulnerabilities and advance their methods.

Incident Response Goals

The goals of an effective and rapid incident response include:

  • Discover: Verify that an incident has occurred
  • Triage: Rapidly contain and reduce the impact of the incident
  • Stabilize: Maintain or restore Business Operations/Continuity
  • Diagnostics: Determine attack vectors and how the incident occurred
  • Immunize: Prevent future attacks / incidents
  • Debrief: Improve the organization’s security posture
  • Communications: Keep management informed and follow proper chain of command procedures

Ideally, APT-IR focused organizations should seek to balance people, process, and technology. Solid incident responders with solid technology, but without well-designed, validated, and established processes and procedures, will leave their organizations less than optimally defended. An effective and properly maturing APT-IR capability requires all three dimensions – people, process, technology – to work well and improve together.

It is important that organizations create lightweight policies, plans, and procedures related to incident response, with management buy-in, to effectively protect the organization against incidents and cyber security attacks. From our experience and research, a snapshot of current incident response capability maturity is as follows:

Incident Response Program Capability Maturity

In order to prepare for incidents organizations need to know:

  • How to manage and organize for effective incident response (IR)
  • What tools, techniques and practices are needed
  • How to investigate and respond to attacks from persistent attackers
  • How to use the results of IR processes to better secure the enterprise
  • How to ensure that a similar attack is not successful a second time

Recommendations

On a high level, to organize for effective security incident response, organizations need to:

  • APT-IR –
    • Charter:
      • Define enterprise APT-IR mission and capability (in terms of scope, resources, skills, contacts, escalation)
    • Organization:
      • Make it a priority to build an APT-IR team consisting of experienced, full-time members
      • Involve cross-functional multi-disciplinary areas of the organization in the process
    • Rules of Engagement:
      • Create clearly defined rules of engagement for the incident response team
      • Define incidents, handling and escalation to clearly distinguish and prioritize
    • Tools/Techniques:
      • Invest in technologies that support the collection of information to identify indicators and potential threats
      • Understand compliance and threat trends (hone awareness and skills)
    • Preparedness:
      • Prepare templates / tools to manage and respond (engage in test drills and readiness)
      • Assess the readiness of incident response team members on an ongoing basis
    • Effectiveness Metrics:
      • Develop usable operational metrics to measure overall effectiveness and evolve
    • IoC Sharing:
      • Consider sharing threat indicators with third-party organizations to foster collaboration

Incident Response Life Cycle

In APT Response Strategy – Part 2 we will present and discuss the Incident Response Life Cycle, showing how to effectively and rapidly handle incidents and attacks and mitigate risk across the organization, together with incident response maturity.

Incident Response Life Cycle

As a preview of Part 2, the incident response plan (IRP) should cover all steps throughout the life cycle, including:

  • Preparation
  • Detection
  • Analysis
  • Containment
  • Eradication
  • Remediation/Recovery
  • Post-Incident activities

Stay tuned for a lightweight yet effective APT Response Strategy that focuses more upon necessary triage steps and actions.

Conclusion

Organizations are facing a greater attack surface, the growing proliferation and sophistication of attack models, and increasing complexity within the hyper-extended network. Many are struggling to solidify a security vision supported by an effective strategy that uses new technologies, simplifies their architecture and operations, and strengthens their security teams. The NigeSecurityGuy blog can help.

Our Adaptive Zone Defense blog establishes a key foundation to help organize and structure applications, systems, and data, both to simplify operations and to limit and manage communications, enabling protection, detection, and containment.

The Defensible Security Posture blog leverages a threat-centric defensible security model to enable defenders to address the full attack continuum, across all attack vectors, and to respond at any time, all the time, in a continuous fashion—before, during, and after an attack.

In a world where organizations need to watch or monitor their networks continuously, knowing what to look out for is critical. APTs typically exhibit recognizable attributes and patterns that can be monitored with readily available, open source tools. The APT Detection Indicators blog series helps identify indicators of compromise (IoCs).

In order to begin to understand and defend against targeted attacks, the APT Detection Framework blog series presents a detection matrix that provides visibility and analysis and ensures that all threat scenarios are considered with no gaps in defense. The APT Red Teams blog series seeks to enable continuous improvement and optimization via security posture validation and testing from counter-intuitive sources, to help mitigate advanced threats.

References

This APT Response Strategy – Part 1 blog is also a part of the APT Strategy Series and Security Architecture Series. For a complete listing of all NigeSecurityGuy blogs see the Security Series Master Index.

Thanks for your Interest!

Nige the Security Guy.

Adaptive Zone Defense – Part 3

Adaptive Zone Defense – Part 1
Adaptive Zone Defense – Part 2
Adaptive Zone Defense – Part 4

How do you contain an advanced threat? We continue our series focused on limiting and intelligently managing communications between services and systems on an organization’s network, to establish a baseline and be able to identify anomalies.

In Adaptive Zone Defense – Part 1 we presented the business case and security justification for limiting and managing traffic flows together with presenting the conceptual high-level Adaptive Zone Defense design as a foundation.

In Adaptive Zone Defense – Part 2 we presented another key foundation, the Application Architecture Taxonomy, which provides a framework for application and system placement, organization, and management within the proposed zones.

In Adaptive Zone Defense – Part 3 we build upon this foundation and framework to provide a practical use case that illustrates a more detailed implementation of the design with inter-zone flows, together with some practical deployment and migration considerations.

Adaptive Zone Design Recap

As we discussed in Part 1, the conceptual Adaptive Zone Defense design proposes a set of foundational layers, or zones: the Untrusted Zone for assets not under the organization’s control; the Semi-Trusted (DMZ) Zone for assets that are shared externally (either publicly or with 3rd parties); the Trusted Zone for internal systems; the Restricted Zone for high-risk and/or mission-critical systems; a Management Zone for network services and management systems; and an Audit Zone to isolate and protect security logging and monitoring.

There is also the concept of a Sub-Zone, which is basically a zone within a zone that enables special cases, such as regulatory-mandated segmentation.
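To illustrate how the zone model can be captured and enforced as data, the sketch below expresses a default-deny inter-zone policy in which only explicitly listed flows are permitted. The zone names mirror the design above; the example flows and service labels are placeholders for the output of the Application Architecture Taxonomy.

```python
# Minimal sketch of the zone model as data: a default-deny policy where only
# explicitly listed (source zone, destination zone, service) flows are allowed.
ZONES = {"untrusted", "dmz", "trusted", "restricted", "management", "audit"}

ALLOWED_FLOWS = {
    ("untrusted", "dmz", "https"),
    ("dmz", "trusted", "app-tier"),
    ("trusted", "restricted", "db-tls"),
    ("management", "trusted", "ssh"),
    ("trusted", "audit", "syslog"),
}

def is_permitted(src_zone, dst_zone, service):
    """Default deny: anything not explicitly allowed (or not a known zone) is blocked."""
    if src_zone not in ZONES or dst_zone not in ZONES:
        return False
    return (src_zone, dst_zone, service) in ALLOWED_FLOWS

print(is_permitted("dmz", "restricted", "db-tls"))      # False -- DMZ may not reach Restricted directly
print(is_permitted("trusted", "restricted", "db-tls"))  # True
```

Keeping the policy in a reviewable, data-driven form like this also makes it straightforward to diff changes and to test proposed flows before firewall rules are touched.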

High-Level Zone Design

Data Classification Mapping to Zones

Regulations and other legal and compliance requirements impact security requirements and thus may impose mandated separation or additional boundary controls. The following graphic presents an example of how data classification can map to zone placement, impacting policy and controls based also upon risk assessment.

Data Classification Mapped to Zones

This clearly illustrates that the Application Architecture Taxonomy is key: you cannot create and deploy zones without a holistic analysis based upon several impacting criteria (see the Example Application Architecture Taxonomy Matrix in Part 2). All too often a set of DMZs is designed and deployed without a complete analysis or strong governance and management, and they devolve into a level of complexity that introduces insecurity, because the design is not faithfully employed and/or exceptions increasingly become the norm and are never addressed.

Applications and systems are a subset of services and often have complex interfaces and connectivity; applications and their relationships, as well as users and their relationships, need to be considered both to design the zones and to decide upon deployment placement, policy, and controls. If this has never been done before in an open network it is like “untangling the spider’s web” of connectivity, but it is necessary in order to understand anomalies and intelligently manage traffic.

Virtualization is a key component and driver in terms of requirements, data points, and constraints for the application and system zone design. As much as possible there needs to be a holistic and consistent approach across physical, virtualized, and 3rd-party security controls (as needed for high-risk, high-bandwidth, or mission-critical systems) in the hyper-extended enterprise (insourced, co-sourced, managed).

Example Zone Use Cases

The following section applies a typical deployment use case to the high-level design so that the organization can begin to model and validate the design before committing to it in deployment. In Adaptive Zone Defense – Part 4 we will further discuss how to model and validate the zone design as it is prototyped, checked in a proof-of-concept, and continually validated and evolved as the migration of applications rolls out in phased stages, with any exceptions or new requirements addressed.

Adaptive Security Zones enable:

  • Protection:
    • A ‘Managed Boundary’ for all user access to applications and systems
    • Implement granular controls on traffic, users and assets
    • Enforce policy and regulations
  • Detection:
    • Gain visibility of traffic, users and assets
  • Containment:
    • Control communications and resources on both inbound and outbound requests
    • Set a default deny policy on all inter-segment connections

Zone Design Use Case – External View

Example Zone Design Use Case - External

The above Example Zone Design Use Case – External presents the following zone placeholders (which depends on the analysis from the Application Architecture Taxonomy):

  • External User Community
    • Public (Internet-Accessible), Remote (IPsec/SSL VPN users), Partner (Site-to-Site etc.)
  • DMZ Zone
    • Public Servers (Internet Accessible)
    • Remote Access Services
    • Infra Services (DNS, SMTP, etc.)
    • B2B Partner connections
  • Trusted Zone (Internal)
    • B2B Services (Limited Access)
    • Dev.-Test Platforms (Controlled Environ)
    • Load Balancing, Web, App Services
    • Production Databases
  • Restricted Zone
    • High Risk / Critical Servers
    • High Risk / Critical Databases
  • Management Zone
    • Infrastructure Services (Internal)
    • Network Management
    • Virtualization Management
    • Backup
  • Audit Zone
    • Telemetry (DLP, NetFlow, PCAP, etc.)
    • SIEM
    • Analytics, Event Correlation
    • Central Logging

Zone Design Use Case – Internal View

Example Zone Design Use Case - Internal

The above Example Zone Design Use Case – Internal presents the internal view, where open access is progressively limited by leveraging consistent access controls into the environment, as follows:

  • Internal User Community
    • Internal, Developer, Privileged, IT Admin users
  • Secure Business Unit
    • Enables business units where regulations, legal and, compliance requirements impose mandated separation or additional boundary controls
  • Data Center Perimeter
    • VPN Gateway
    • Access Gateway

Secure Access Anytime from Any Device

The rationale behind limiting all access consistently is that the internal network is just a transport, like the Internet, wireless, and mobile networks, with a mix of users (employees, contractors, temps), and thus should not be implicitly trusted. As users become more and more mobile, it becomes increasingly challenging to support an open internal access security model alongside separate remote access, vendor access, and partner access security models. The complexity and fragmentation introduce significant risk.

Global Services Data Center Security Model

It is far simpler, in our opinion, to treat the data center(s) as the server object and wrap a consistent security perimeter around them, enabling a Secure Access Anytime from Any Device (client object) model. This model also facilitates the flexible, secure movement of services across co-sourced, outsourced, and re-insourced hybrid clouds and service platform providers, reducing risk and complexity.

The following example presents a basic but typical security snapshot and roadmap that helps organizations evolve toward Adaptive Zone Defense and a Secured Services Data Center security model on the various needed fronts in a phased program.

Secured Services Data Center Roadmap

Deployment & Migration Considerations

  • Governance & Process Integration – It is key to establish a strong governance structure with process integration for IT deployments
    • Strategy, Organization, Education
    • Standard Operating Procedures (SOP)
  • Zone Data Center Deployment
    • Network Security Tiger Team – Leverage a cross-functional team (see below) to target batches of applications for analysis and migration
      • Perform initial heavy-lifting
      • Development and refinement
      • Process integration
      • Documentation and Operational transition
    • Phased: Zones, Application Controls, User Access Control, Role-Based Access, Secure Access Anywhere (Mobility)
  • Application Architecture Taxonomy – See Adaptive Zone Defense – Part 4 for more details
    • Application Security Profile Form
    • Application Security Profile Catalog
  • Enterprise Role Management – Evolve toward Identity Management to enable granular access controls with zero trust
    • Role-Based Access (Organize Users)

Conclusion

Security zone isolation for managed communications is a lot of work, at least initially, but it offers a tangible Return on Security Investment (RoSI): it helps stop a bad end-user, a weak remote office, a malware infection, or a persistent attacker from compromising the whole network. In that regard the advanced threat defense benefits are … priceless!

Add to that the benefits of the Secured Services Data Center, with consistent access across the whole user community and the ability to place services in-house, co-sourced, or outsourced with agility and flexibility, and it becomes a modular, plug-and-play win-win.

In Adaptive Zone Defense – Part 4 we will tie everything together with the previous taxonomy, tables, and worksheets (Part 2) to describe the rationale and process flow for examining a new application or system, reviewing the data classification, risk assessment, and criteria, and identifying the placement, policy, and controls.

References

This Adaptive Zone Defense – Part 3 blog is also a part of the APT Strategy Series and Security Architecture Series. It complements and builds upon the Adaptive Zone Defense and the Defensible Security Posture blogs.

Thanks for your interest!

Nige the Security Guy.

APT Detection Indicators – Part 2

APT Detection Indicators – Part 1
APT Detection Indicators – Part 3
APT Detection Indicators – Part 4

Advanced Persistent Threats (APT) typically exhibit recognizable attributes and patterns that can be monitored by readily available, open source tools. These tools can enable early detection of APT behavior, especially by monitoring unusual inter-zone or outbound traffic.

While there is no silver bullet in the fight against concerted and targeted attacks, a holistic framework that includes varied methodology and tools while embracing layered defensive tactics can prove very helpful. This post is a complement to the blog on APT Red Teams – Part 1 to enable a continuous improvement and optimization cycle from counter-intuitive sources and help mitigate advanced threats.

Monitoring to Detect APT Behavior

Monitoring a combination of network data and host file integrity data can be key to detecting APTs. A combination of open source tools such as Snort, Splunk, Sguil, and Squert is well suited to monitoring patterns of activity in data over time to reveal a potential attack.

The information contained in this blog, together with the APT Detection Framework and APT Red Teams series, represents only a starting point for observing anomalous activity on hosts and networks; it is not meant as a complete APT program. Attack vectors are constantly changing, and it is up to the reader to stay abreast of conditions that may warrant changes in APT strategy, or to partner for expertise and best-practice help.

The APT Malware Signature

  • APT malware hides in plain sight –
    • Avoids detection by using common network ports, process injection and Windows service persistence
    • APT malware initiates outbound network connections
    • Monitor outbound network traffic
      • Identifies APT outbound beaconing attempts (see the beaconing sketch after this list)
  • Avoids anomaly detection through –
    • Outbound HTTP connections
    • Process injection
    • Service persistence
  • APT Communication –
    • 100% of APT backdoors made only outbound connections
    • 83% used TCP port 80 or 443; many were proxy-aware
    • Simple malware signatures such as MD5 hashes, filenames, and traditional anti-virus methods usually yield a low rate of true positives
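Because beaconing over ports 80/443 is such a consistent trait, one hedged detection approach is to look for hosts contacting the same external destination at near-regular intervals. The sketch below assumes a feed of (timestamp, source, destination, port) tuples drawn from proxy logs, NetFlow, or Bro connection logs; the thresholds are illustrative.

```python
# Hedged sketch: flag candidate beacons -- the same internal host contacting
# the same external destination over 80/443 at near-regular intervals.
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(connections, min_count=6, max_jitter_ratio=0.2):
    """connections: iterable of (epoch_seconds, src_ip, dst_ip, dst_port)."""
    series = defaultdict(list)
    for ts, src, dst, port in connections:
        if port in (80, 443):
            series[(src, dst, port)].append(ts)
    beacons = []
    for key, times in series.items():
        if len(times) < min_count:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio:
            beacons.append((key, avg))
    return beacons

conns = [(t, "10.1.2.3", "198.51.100.9", 443) for t in range(0, 3600, 300)]
print(find_beacons(conns))   # flags the regular 300-second beacon
```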

Addressing the Gap: Detect the Breach

Factors associated with APT attacks include the following:

  • Sudden increases in network traffic, outbound transfers
  • Unusual patterns of activity, such as large transfers of data outside normal office hours or to unusual locations
  • Repeated queries to dynamic DNS names
  • Unusual searches of directories and files of interest to an attacker, e.g., searches of source code repositories
  • Unrecognized, large outbound files that have been compressed, encrypted, or password-protected
  • Detection of communications to/from bogus IP addresses
  • External accesses that do not use local proxies or requests containing API calls
  • Unexplained changes in the configurations of platforms, routers or firewalls
  • Increased volume of IDS events/alerts

APT Detection & Analysis Tools

An APT depends on remote access and control; as such, the network activity associated with remote control can be identified, contained, and disrupted through the analysis of inter-zone and outbound network traffic. Techniques for detecting APTs can be implemented through open source software tools, such as:

  • Snort: an open source network-based intrusion detection and prevention system (IDS/IPS) originally developed by Martin Roesch. Snort employs signature-, protocol-, and anomaly-based inspection
  • Scapy: a packet manipulation program. Scapy can create packets for a wide range of protocols, send and receive packets, and match requests and replies. It is extensible via Python scripts and can be used for a variety of detective measures (a short capture sketch follows this list)
  • OSSEC: a host-based open source IDS, as opposed to Snort. Its correlation and analysis engine provides log analysis, file integrity checking, Windows registry monitoring, rootkit detection, and time-based alerting as well as active response and can support most operating systems
  • Splunk: a search, monitoring and reporting tool integrating logs and other data from applications, servers and network devices. The data repository is indexed and can be queried to create graphs, reports and alerts
  • Sguil: includes an intuitive GUI that provides access to real-time events, session data, and raw packet captures. Sguil facilitates the practice of network security monitoring and event driven analysis
  • Squert: a web application used to query and view event data stored in a Sguil database. Through the use of metadata, time series representations, weighted and logically grouped result sets it provides additional context to events
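As a small example of using one of these tools programmatically, the following Scapy sketch counts outbound TCP SYNs per external destination so that rare or unexpected destinations stand out for review. It assumes capture privileges, a 10.x internal address space, and a fixed capture window; flag-handling details can vary slightly between Scapy versions.

```python
# Hedged Scapy sketch (run with sufficient privileges and libpcap available):
# count outbound TCP SYNs per external destination.
from collections import Counter
from scapy.all import sniff, IP, TCP

INTERNAL_PREFIX = "10."          # adjust to your internal address space
destinations = Counter()

def track(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:  # SYN bit set
        src, dst = pkt[IP].src, pkt[IP].dst
        if src.startswith(INTERNAL_PREFIX) and not dst.startswith(INTERNAL_PREFIX):
            destinations[(src, dst, pkt[TCP].dport)] += 1

# Capture for a fixed window, then review the least-contacted destinations,
# which often deserve the most scrutiny.
sniff(filter="tcp", prn=track, store=0, timeout=300)
for (src, dst, dport), count in destinations.most_common()[-10:]:
    print(f"{src} -> {dst}:{dport} seen {count} time(s)")
```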

Analyze NIDS alerts with Snorby

These tools fit into the general category of network security monitoring (NSM), as described in several books by Richard Bejtlich, see “The Practice of Network Security Monitoring, Understanding Incident Detection and Response” for more details.

Intrusion detection (IDS) alone is sub-optimal; organizations really need a combination of tools, operated as an NSM practice, to deal with the various data types. Sguil is the de facto implementation of NSM.

The Security Onion

The Security Onion (SO) is a Linux distro for intrusion detection, network security monitoring, and log management. It’s based on Ubuntu and contains Snort, Suricata, Bro, Sguil, Squert, Snorby, ELSA, Xplico, NetworkMiner, and many other security tools. The easy-to-use Setup wizard allows organizations to build an army of distributed sensors for the enterprise in minutes. For more information see the Security Onion Blog.

Use Squert to analyze NIDS/HIDS alerts and other data

Security Onion is configured for immediate use by default. Ideally, it should be installed on dedicated hardware, but in the short term it can be deployed via virtual machines or systems booting from LiveCD OS media. The first step in enhancing SO for optimal correlation functionality is to add Splunk. A commercial license is required for full action-based alerting; however, the limited community release can suffice with a small data footprint and console-only monitoring. Splunk installation is remarkably straightforward.

Conclusion

It is vital to practice an elevated operational awareness around critical data and assets, for example sensitive data (PII, PHI, CHD), source code, and intellectual property. Segment and wrap critical data in managed secure zones within the deeper protection of well-monitored, defensible infrastructure (see Defensible Security Posture), with network taps, network flows (NetFlow), and logging tools to increase visibility and enable on-demand packet capture.

Pivot Data Types Sguil Send Pcaps

Pivot between multiple data types with Sguil, send pcaps to Wireshark & Network Miner

It is impossible to protect everything equally. Incremental efforts targeted at protecting high-value data, typically through smaller, protected network segments (see Adaptive Zone Defense), provide far greater gains than broader, less focused efforts on lower-value targets. Similarly, layered defensive tactics (multiple layers and means of defense) can contain security breaches and buy time to detect and respond to an attack, reducing the consequences of a breach.

It is all about starting with a Back to Basics: Defensible Architecture that is well organized and managed, based upon asset and data classification and a risk assessment that defines policy, placement, and controls.

Focus on Detect, Contain, Investigate, Respond, Eradicate, Recover

Even the best monitoring mindset and methodology may not guarantee discovery of the actual APT attack. Instead, use more comprehensive analysis and correlation to discover behaviors indicative of APT-related attacks, lateral movement, and data exfiltration.

References

This APT Detection Indicators – Part 2 blog is a part of the APT Strategy Series and complements and builds upon the Adaptive Zone Defense, Defensible Security Posture and the APT Red Teams blogs.

In APT Detection Indicators – Part 3 we will discuss the use of the above tools in more detail together with a methodology to integrate and leverage events and data. In this series we will share more Indicators of Compromise (IoC) examples and scenarios together with posts on APT Defensive Actions and Adaptive Response Strategy.

Thanks for your interest!

Nige the Security Guy.

APT Red Teams – Part 1

APT Red Teams – Part 1

APT Red Teams – Part 2
APT Red Teams – Part 3

How do you prevent an APT? The APT Red Teams blog defines core components used by successful red teams and proposes an approach for categorizing and implementing red teams to enable continuous improvement and optimization from counter-intuitive sources and help mitigate advanced threats.

APT Red Teams

Advanced Persistent Threats (APTs), initiated by an ever-growing population of highly capable cyber criminals who are focused, determined, and stealthy, are an ever-increasing risk. While many organizations have the basic tenets of security in place and regularly test their industry-standard best-practice procedures, they are caught off guard by exposed vulnerabilities, risks, and threats not previously identified and formally escalated to resolution.

A Red Team approach provides a proven method of risk analysis that uncovers risks and quickly adapts to the real-world approaches taken by adversaries.

There are many parallels between APTs and Red Teams. APTs advance across a continuously changing attack surface because technology is ubiquitous, constantly being implemented and refreshed, and often contains new, unrealized flaws as soon as it is deployed. Information security must find a way to leverage the similarity between Red Team services and APTs to protect against this new threat, much as DevOps leverages fast-track developer sprint cycles.

What is a Red Team?

A Red Team is typically an independent internal or third-party group that challenges an organization to validate and improve its effectiveness. The testers assess organizational security, often unbeknownst to IT security staff. This type of Red Team provides a more realistic picture of security readiness than exercises, role playing, or announced assessments. The Red Team may trigger active controls and countermeasures within a given operational environment, validating both the security defenses and the ability of Security Operations resources (in-house, co-sourced, or third-party managed) to detect and respond.

Authorized Red Teams

Red Team Scope

In order to succeed the Red Team needs to be an “authorized, adversary-based assessment for defensive purposes.”

  • Authorized
    • A Representative with legal control of the facility, system, or entity to be red teamed has agreed to the process
  • Adversary-based
    • Activity is centered on what one or more adversaries would do if they were attacking the target
    • Takes into account the adversaries’ knowledge, skills, commitment, resources, and culture, which are finite
  • Assessment
    • Comparison of the capability maturity and state of the target with respect to actions by the adversary and the ability to detect, contain, and respond

Ideally, Red Team assessments should be performed throughout the infrastructure and system lifecycle, but especially in the design and development phase, where cooperative Red Team assessments cost less and critical vulnerabilities can be uncovered and mitigated more easily. Many organizations use Red Teams to validate a new service or production platform pre-deployment.

Red Team assessments are a flexible tool that program managers and sponsors use to identify critical vulnerabilities, understand threats, deliver effective and secure components, systems, and plans, and consider alternative strategies and courses of action.

Red Team Value Proposition

Red teamers are different from regular security personnel. They take a fresh approach, providing creative perspectives and applying concepts in new ways to determine what information is valuable.

The general objective of a successful Red Team is to present a safe but real-world example of infiltration and data breaches, identify what information is vulnerable based on threat scenarios, and then evaluate the usefulness of that information to an adversary. Some typical strengths of successful Red Teams, which are not typical of other security professionals, are:

  • Uncover low-hanging fruit missed by regular or current procedures
  • Uncover temporal risks
  • Inform the Defense and/or Response Teams
  • Redefine how to discover adversaries or indicators of compromise
  • Review high impact targets for potential concerns or threats
  • Provide an unbiased opinion (which may conflict with management or audit viewpoints)
  • Provide feedback, lessons learned, and methods to mitigate risk or to improve and fine-tune processes and procedures

Red Team analysis can also be described as an alternative to conventional security reviews. It rethinks threat analysis using the following approaches:

Creative Red Teams

  • Creative
    • Thinking like the enemy, rather than simply resorting to previous audit reports and threat matrices, results in new approaches to detection, response, or remediation
  • Continual
    • A new mindset and ongoing analysis identify new risks and re-evaluate the evolving threat
  • Collaborative
    • Multiple people working on the same problem bring different viewpoints and offer diversity to the team
  • Counter-intuitive
    • Red Teams and personnel bring value by finding threats before others know they exist
    • Assets previously identified as low risk can be re-assessed by a Red Team

How do you prevent APT?

Most groups suggest a similar approach to Red Team exercises. By using a Red Team it is possible to anticipate APT threats by analyzing predictive threat intelligence and developing threat scenarios based on the core target infrastructure and critical data and assets. From these threat scenarios, canned infections and exploits can be developed and tested to determine and validate whether they are blocked, detected, or contained (a simple tracking sketch follows below). By gaining an understanding of the enemy it is possible to prevent, detect, isolate, or remove them.
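
As a rough illustration of tracking those validation results, the toy Python sketch below records, per kill-chain phase, whether a Red Team scenario was blocked, detected, or contained, and highlights any gaps. The phase names and example results are placeholders.

    # Toy worksheet, not a product: record per kill-chain phase whether a
    # scripted Red Team scenario was blocked, detected, or contained.
    # Phase names and example results below are placeholders.
    PHASES = ["information gathering", "attack and compromise", "command and control",
              "credential theft", "manual exploitation", "data exfiltration",
              "maintain presence"]

    # Outcomes observed during one exercise: phase -> set of outcomes.
    exercise_results = {
        "attack and compromise": {"detected"},
        "command and control": {"detected", "contained"},
        "data exfiltration": set(),                # nothing fired: a gap
    }

    for phase in PHASES:
        outcomes = exercise_results.get(phase, set())
        status = ", ".join(sorted(outcomes)) if outcomes else "GAP - no control fired"
        print(f"{phase:25s} {status}")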

APT Defense Flow

Often, the exploits used to execute an APT are not particularly advanced or complex. Instead, the APT attacker researches the target and chooses exploits appropriately. As suggested by multiple sources, an APT executes much like a Red Team, using the following phases:

  1. Planning and information gathering,
  2. Attack and compromise,
  3. Establish command and control,
  4. Authorization and credential theft,
  5. Manual exploitation and information gathering,
  6. Data exfiltration,
  7. Maintain presence.

Each phase of an APT attack offers the Red Team a set of test scenarios and gives the defense a chance to catch or remove the threat, evolving its detection and response capabilities and their maturity. While not exhaustive or indicative of a complete security program, consider the following prevention and detection capabilities as basic examples that can be used at each phase to prevent, identify, or remove access:

1. Planning and Information Gathering

While no single method can prevent this activity, solid security basics in terms of data classification policy, training, and security awareness may help organizations and their employees understand what information should not be released. This is especially true if the training focuses on real-world attacks with a Red Team approach.

Information about internal vulnerabilities is extremely useful knowledge for adversaries and is often disclosed publicly, for example in social media. A company sending out a press release that it has just signed a contract with ABC company to provide security hands a huge advantage to an attacker.

The APT or Red Team research focus is now on ABC's products. Even if a vulnerability does not exist today, it could in the future, and information like this should be recognized by personnel and appropriately safeguarded where possible.

2. Attack and Compromise

Traditional defense mechanisms may help prevent a successful attack. Testing responsiveness to attacks performed by a Red Team, which appears to defenders as a real adversary, is an important element of detection and response. Since no patch system can prevent the unknown (or zero-day), detecting the intrusion as soon as possible can provide clues about what allowed the access and how it can be removed and prevented in the future.

A strong security architecture foundation helps a great deal: a Defensible Security Posture, together with segmentation using the Adaptive Zone Defense strategy, supports rapid Detection, Containment, Response, Investigation, Eradication, and Recovery.

3. Establish Command and Control

A typical defense-in-depth strategy can detect anomalous data points. While an intrusion detection system might be able to spot the traffic, Red Teams can assist in testing and increasing detection capabilities, such as outbound traffic analysis or Indicators of Compromise. For more information see the APT Detection Indicators blog.
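
One hedged example of such outbound traffic analysis, sketched in Python below, flags internal hosts whose connections to a single external address recur at near-constant intervals, a common beaconing pattern for command and control. The input records and thresholds are assumptions, not any product's logic.

    # Sketch of a beaconing heuristic over (timestamp, internal src, external dst)
    # records, e.g. extracted from flow logs; thresholds are illustrative.
    from collections import defaultdict
    from statistics import mean, pstdev

    events = [
        (0, "10.0.0.5", "203.0.113.10"), (300, "10.0.0.5", "203.0.113.10"),
        (600, "10.0.0.5", "203.0.113.10"), (901, "10.0.0.5", "203.0.113.10"),
    ]

    by_pair = defaultdict(list)
    for ts, src, dst in events:
        by_pair[(src, dst)].append(ts)

    for (src, dst), times in by_pair.items():
        if len(times) < 4:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Low jitter relative to the average gap suggests automated check-ins.
        if mean(gaps) > 0 and pstdev(gaps) / mean(gaps) < 0.1:
            print(f"Possible beacon: {src} -> {dst}, ~{mean(gaps):.0f}s interval")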

4. Authorization and Credential Theft

Organizations can prepare for this attack by restricting the opportunities for privilege escalation, reducing account access, regularly changing credentials, and leveraging privilege management, role-based access, identity management, multi-factor authentication, and so on. ISO 27001/2 helps considerably as an internationally recognized security best-practice standard (see: ISO 27002 Security Benchmark).

Red Teams test the assumption that unneeded accounts have been removed by leveraging unchecked service accounts or privileged accounts given to staff. If access is logged and monitored appropriately, it may be possible to spot anomalous activity. If the defense team does not notice the credential use, it may be necessary to apply additional information security controls to provide visibility.
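
As a sketch of what "logged and monitored appropriately" might mean in practice, the illustrative Python below checks a parsed authentication log for interactive use of service accounts and for privileged logons outside a baseline window. The account names, record fields, and baseline hours are assumptions.

    # Illustrative checks over parsed authentication events:
    # (timestamp, account, logon_type). Names and baseline are placeholders.
    from datetime import datetime

    SERVICE_ACCOUNTS = {"svc_backup", "svc_sql"}   # hypothetical inventory
    PRIVILEGED = {"domadmin1", "domadmin2"}
    BASELINE_HOURS = range(7, 20)                  # 07:00-19:59 local time

    auth_events = [
        ("2014-03-02T02:14:00", "svc_backup", "interactive"),
        ("2014-03-02T03:05:00", "domadmin1", "remote"),
    ]

    for ts, account, logon_type in auth_events:
        hour = datetime.fromisoformat(ts).hour
        if account in SERVICE_ACCOUNTS and logon_type == "interactive":
            print("ALERT service account used interactively:", account, ts)
        if account in PRIVILEGED and hour not in BASELINE_HOURS:
            print("ALERT privileged logon outside baseline hours:", account, ts)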

5. Manual Exploitation and Information Gathering

Additional information about an attack can be disclosed through logs or other monitoring techniques in line with defense in depth. Red Teams can help create realistic expectations and real world experience of the footprint left in this stage.

Logs indicating successful access are difficult to review appropriately, which highlights the importance of identifying an issue before this phase of the attack. It is all about developing threat scenarios and sequences of indicators of compromise that can be correlated to detect suspicious or anomalous activity.

6. Data Exfiltration

Monitoring data access can help prevent excessive information harvesting. Red Teams test this stage of protection by executing the data exfiltration techniques used by real-world attackers; a successful Red Team exfiltration can show, for example, which information was copied to network shares.
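
A minimal sketch of such monitoring, assuming flow records with timestamps and outbound byte counts are available, is shown below; the volume threshold and office hours are placeholders rather than recommended values.

    # Flag unusually large outbound transfers and transfers outside office
    # hours from (timestamp, src, dst, bytes_out) records. Values are examples.
    from datetime import datetime

    OFFICE_HOURS = range(8, 19)            # 08:00-18:59
    VOLUME_THRESHOLD = 500 * 1024 * 1024   # 500 MB per flow, illustrative

    flows = [
        ("2014-03-02T23:40:00", "10.0.2.14", "198.51.100.25", 900 * 1024 * 1024),
    ]

    for ts, src, dst, bytes_out in flows:
        after_hours = datetime.fromisoformat(ts).hour not in OFFICE_HOURS
        reasons = []
        if bytes_out > VOLUME_THRESHOLD:
            reasons.append("large transfer")
        if after_hours:
            reasons.append("outside office hours")
        if reasons:
            print(f"Review {src} -> {dst}: {bytes_out} bytes ({', '.join(reasons)})")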

Without a prior data discovery project, it can be difficult to know exactly where sensitive information resides. Red Teams help show how that information can be gathered and exploited, which can drive projects that more granularly restrict access and remove unneeded information that has propagated onto open file shares or onto less protected areas of the intranet.

7. Maintain Presence

At this point, only advanced methods can detect the intruder unless another indicator becomes more visible, perhaps through another component of defense in depth. Red Teams can often maintain a presence for a long period of time without being spotted. However, with training and realistic expectations of how a Red Team could be stopped, it is possible to increase the chances of detecting and removing a persistent presence; see the APT Detection Framework and APT Detection Indicators blogs. There will also be a future blog on Outbound Traffic Analysis to Discover APTs.

Conclusion

In APT Red Teams – Part 1 we have provided an introduction to the concept, discussed its scope, and shown how Red Teams can assist in continuous improvement and optimization from counter-intuitive sources, together with validation of APT defenses by testing the organization's ability to Detect, Contain, Respond, Investigate, Eradicate, and Recover.

APT Red Teams - Part 1

In APT Red Teams – Part 2 we will explore this topic further by discussing how to create Red Teams, their roles and responsibilities, risk assessment, and the tools and techniques used to calibrate and validate detection and response in a continuous improvement cycle.

References

This APT Red Teams – Part 1 blog is a part of the APT Strategy Series and complements and builds upon the Adaptive Zone Defense and the Defensible Security Posture blogs.

Thanks for your interest!

Nige the Security Guy.

Adaptive Zone Defense – Part 2

Adaptive Zone Defense – Part 2

Adaptive Zone Defense – Part 1
Adaptive Zone Defense – Part 3
Adaptive Zone Defense – Part 4

Network segmentation is commonly used in network design to increase network performance, to create individually segmented networks that simplify network management, and to divide networks into separate security zones. Because the APT is a “communicable disease,” any part of a network involved with critical data can become infected.

Zone Defense 2

By limiting and intelligently managing communications between services and systems, organizations can contain an infection or compromise and keep malware or a persistent threat from running rampant.

Adaptive Zone Design Template

In Adaptive Zone Defense – Part 1 we proposed a conceptual and high-level security zone design template (see graphic) that we will use as a foundation in this series and build upon with some detailed use cases and traffic flows in Part 3.

Conceptual Zone Design

Conceptual Zone Design Template

In Part 2 we first develop another foundation, known as the Application Architecture Taxonomy, which addresses application and system placement, organization, and management within the proposed zones.

Application Architecture Taxonomy

The goal of the Application Architecture Taxonomy is to identify the requirements, dependencies and profile criteria needed to determine the application and system zone policy, placement, and controls.

The Application Architecture Taxonomy provides a correlation of application flows as an overlay to the network architecture in order to understand how traffic is delivered from the various user communities to the servers supporting the applications.

It provides an understanding of who is talking to what and which portions of the network are used to deliver the services, expressed as a set of Applications and their relationships together with Users and their relationships.

The information discovered in the application taxonomy phase will be used to profile Application Use Cases as they are migrated, in order to:

  • Define the types of data that exist and how best to establish risk classification categories (High, Medium, Low Risk) that meet the organization’s business and risk requirements
  • Determine the requirements for application and system zone placement, as well as the applicable security policy and controls needed to ensure that the network is deployed with the appropriate security (a minimal placement sketch follows this list).
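
The placement sketch referenced above is shown below in Python. It is a toy helper, not the worksheet itself: the zone names, profiling criteria, and minimum control sets are invented for the example.

    # Toy mapping from an application's profile (data classification, external
    # exposure) to a candidate zone and minimum control set. All values are
    # illustrative, not a recommended policy.
    ZONE_RULES = [
        (("high", True),   ("restricted-dmz",  {"waf", "ips", "file-integrity", "netflow"})),
        (("high", False),  ("restricted-core", {"ips", "file-integrity", "netflow"})),
        (("medium", True), ("dmz",             {"waf", "ids"})),
    ]
    DEFAULT_PLACEMENT = ("general", {"ids"})

    def place(classification, external_exposure):
        for (cls, exposed), placement in ZONE_RULES:
            if cls == classification and exposed == external_exposure:
                return placement
        return DEFAULT_PLACEMENT

    zone, controls = place("high", True)
    print("Zone:", zone, "| minimum controls:", ", ".join(sorted(controls)))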

The following table presents a list of criteria useful in profiling an application, identifying security requirements for risk assessment, and determining system placement and protection.

Example Application Architecture Taxonomy Matrix 1

Example Application Architecture Taxonomy Matrix

Example Application Architecture Taxonomy Matrix 2

Security Control Solution Set

The Security Control Solution Set identifies the list of tools, technologies, and solutions that can be used as options to protect, log, and monitor applications and systems within Zones and between Zones, both physical and virtualized. An example table is illustrated below.

Security Control Solution Set

Example Security Control Solution Set

Risk Rating Controls Matrix

The Risk Rating Controls Matrix maps the risk, as determined by application or system criteria in terms of data classification, risk assessment, and applicable law, regulation, and/or policy, to the minimum set of controls and restrictions required for that rating and/or zone placement.

Risk Rating Controls Matrix

Risk Rating Controls Matrix

Security Zones, Community & Controls

The Security Zones, Community and Controls worksheet enumerates all of the Security Zones and identifies the User Community (exposure) as well as the Controls applicable to managing each Zone Perimeter. This worksheet is developed, refined, and maintained over time to describe and manage both the design and the current deployment.

Security Zones Community Controls

Security Zones, Community and Controls Worksheet

Security Zones, Perimeters & Protocols

The Security Zones, Perimeters, and Protocols worksheet identifies the connectivity between Zones across Perimeters and enumerates the allowed protocols between Zones as well as connection origination. This worksheet is likewise developed, refined, and maintained over time to describe the design and deployment (a minimal sketch of the worksheet as data follows the example below).

Security Zones Perimeters Protocols

Security Zones, Perimeters, & Protocols
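
The sketch referenced above shows one way the worksheet could be expressed as data and queried when reviewing a proposed flow; the zone and protocol names are placeholders, not a recommended policy.

    # The worksheet as data: allowed protocols keyed by (originating zone,
    # destination zone). Zone and protocol names are placeholders.
    ALLOWED = {
        ("web-dmz", "app-zone"):   {"https"},
        ("app-zone", "data-zone"): {"tds", "ldaps"},
        ("mgmt-zone", "web-dmz"):  {"ssh", "snmp"},
    }

    def flow_permitted(src_zone, dst_zone, protocol):
        return protocol in ALLOWED.get((src_zone, dst_zone), set())

    # One compliant flow and one that should be flagged for review.
    print(flow_permitted("web-dmz", "app-zone", "https"))    # True
    print(flow_permitted("web-dmz", "data-zone", "tds"))     # False, not in worksheet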

Conclusion

This blog provides a key foundation, describing a set of tables and worksheets that can be used to document the criteria needed to profile applications and systems and to manage their security posture in terms of Zones, perimeters, placement, protocols, policy, community, and controls.

The worksheets enable best-practice management and organization, preventing DMZs or Zones that evolve organically over time until they are no longer compliant with the original security design and posture. They enable the inventory, classification, and organization of critical data and assets, with a focus on mapping Applications and their relationships together with Users and their relationships, across and within Zones.

In Adaptive Zone Defense – Part 3 we expand upon the conceptual, high-level security zone design and application architecture taxonomy introduced in Parts 1 and 2 to provide practical use cases and more detailed examples of implementation, with traffic flows and inter-zone relationships.

High Level Zone Design

We will also discuss application deployment integration, i.e., ensuring that this zone design and process is integrated into the greenfield deployment (or refresh) of applications and systems, as well as a waiver process to manage temporary exceptions.

Coming Soon

In Adaptive Zone Defense – Part 4 we will tie everything together, using the above taxonomy, tables, and worksheets to describe the rationale and process flow: examining a new application or system, reviewing the data classification, risk assessment, and criteria, and identifying the placement, policy, and controls.

System Placement Process

References

This Adaptive Zone Defense – Part 2 blog is also a part of the APT Strategy Series and complements and builds upon the Adaptive Zone Defense – Part 1 and the Defensible Security Posture blogs.

Thanks for your interest!

Nige the Security Guy.