Defensible Security Posture – Part 2

Defensible Security Posture – Part 1

How can you leverage the Defensible Actions Matrix? A defensible actions matrix defines processes and procedures that can impact an attacker’s capability at various stages of the cyber kill chain.

Cyber Kill Chain

In Defensible Security Posture – Part 1 we introduced the concept of the Cyber Kill Chain. As a recap, a “kill chain” describes the progression an attacker follows when planning and executing an attack against a target. Understanding the signature of an APT helps align defensive capabilities, i.e., to identify security controls and actions that can be implemented or improved to detect, deny, and contain an attack scenario.

The APT Signature and Cyber Kill Chain

A complex incident may involve multiple kill chains with different objectives that map to various phases of the attack. For example, reconnaissance is performed to assess target feasibility and develop an attack plan. Attackers may also perform further reconnaissance after gaining an initial foothold in the internal network to revise their strategy for lateral movement and persistence.

Defensible Actions Matrix

In this Defensible Security Posture – Part 2 blog we provide a case study that makes use of the Defensible Actions Matrix and offer some practical defensive best-practices. The basic idea of a Defensible Security Posture is that you aren’t striving for an absolute, but rather for a position (or posture) that can be defended even when it is infiltrated. Common factors associated with APT attacks include the following (a detection sketch for the first indicator appears after the list):

  • Sudden increases in network traffic, outbound transfers
  • Unusual patterns of activity, such as large transfers of data outside normal office hours or to unusual locations
  • Repeated queries to dynamic DNS names
  • Unusual searches of directories and files of interest to an attacker, e.g., searches of source code repositories
  • Unrecognized, large outbound files that have been compressed, encrypted, or password-protected
  • Detection of communications to/from bogus IP addresses
  • External accesses that do not use local proxies, or requests containing API calls
  • Unexplained changes in the configurations of platforms, routers, or firewalls
  • Increased volume of IDS events/alerts
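
To make the first indicator concrete, here is a minimal sketch of flagging sudden increases in outbound volume against a rolling per-host baseline. It assumes flow records have already been reduced to per-interval byte counts per host; the record shape, window size, and threshold are illustrative assumptions, not a vetted detection rule.

```python
# Sketch: flag hosts whose outbound byte volume suddenly exceeds a rolling
# baseline. Input is assumed to be a dict of host -> outbound bytes per
# interval -- adapt to whatever flow/log source you actually have.
from collections import defaultdict, deque

WINDOW = 24        # number of past intervals forming the baseline
THRESHOLD = 3.0    # alert when volume exceeds the baseline mean by this factor

history = defaultdict(lambda: deque(maxlen=WINDOW))

def check_interval(volumes):
    """volumes: {host: outbound_bytes} for the current interval."""
    alerts = []
    for host, bytes_out in volumes.items():
        past = history[host]
        if len(past) == WINDOW:
            baseline = sum(past) / len(past)
            if baseline > 0 and bytes_out > THRESHOLD * baseline:
                alerts.append((host, bytes_out, baseline))
        past.append(bytes_out)
    return alerts

# Example: a host that normally sends ~1 MB per interval suddenly sends 50 MB.
for _ in range(WINDOW):
    check_interval({"10.0.0.5": 1_000_000})
print(check_interval({"10.0.0.5": 50_000_000}))   # -> one alert for 10.0.0.5
```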

The more detailed Detection Framework, presented in APT Detection Framework, is used to analyze potential attack scenarios based upon a threat/risk profile and to identify the above factors more definitively, as well as any detection gaps, while the Defensible Actions Matrix summarizes the result in terms of actions/controls.

The following example depicts a sample actions matrix using the actions of detect, deny, disrupt, degrade [optional], deceive [optional], and contain. Documenting the capabilities defenders can employ in this matrix as a worksheet enables organizations to quickly assess their Defensible Security Posture as well as identify any gaps or needed compensating controls.

Example Defensible Actions Matrix
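
As a worksheet, the matrix is easy to capture in machine-readable form so that empty cells (gaps) can be reported automatically. The following is a minimal sketch; the phase and action names follow the matrix above, while the specific controls listed are hypothetical placeholders.

```python
# Sketch: a Defensible Actions Matrix as a machine-readable worksheet.
# Phases follow the kill chain; actions follow detect/deny/disrupt/contain.
# The controls below are illustrative placeholders, not a complete worksheet.
PHASES = ["Reconnaissance", "Weaponization", "Delivery", "Exploitation",
          "Installation", "Command and Control", "Actions on Targets"]
ACTIONS = ["detect", "deny", "disrupt", "contain"]

matrix = {
    ("Delivery", "detect"): ["Email gateway alerts", "Proxy logs"],
    ("Delivery", "deny"): ["Attachment filtering"],
    ("Command and Control", "detect"): ["DNS sinkhole hits", "NetFlow anomalies"],
    ("Command and Control", "contain"): ["Egress ACLs"],
}

def gaps(matrix):
    """Yield (phase, action) cells with no documented control -- candidate gaps."""
    for phase in PHASES:
        for action in ACTIONS:
            if not matrix.get((phase, action)):
                yield phase, action

for phase, action in gaps(matrix):
    print(f"GAP: no '{action}' control documented for phase '{phase}'")
```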

Evolving Industry Best-Practices

In order to provide a foundation for the Defensible Actions Matrix we offer a sampling of APT-aligned defensive best-practices as an example. These will be expanded into a more complete APT Best-Practices framework in future blogs.

Industry Best Practices

Organizations should use a comprehensive, programmatic approach; no single technology will stop advanced attacks, not even products specifically targeted at advanced forms of attack. Ongoing integration and sharing of security intelligence among disparate security technologies and with other external organizations should be a security program goal (see: APT Threat Analytics).

As such, organizations should review existing technologies and increasingly utilize advanced features in the latest products or services to keep up with changes in the threat landscape. This should be performed with the mindset of integrating and unifying security processes across technologies, so that an effective, coordinated response to threats is possible and breach events are detected earlier and reduced.

Security Governance

Social Media and Information Sharing: Attackers often leverage publicly available information on websites and social media to find information about an organization that can be useful in planning an attack. Information sharing and social media policies should define how material should be handled and exposed via public channels.

Configuration Management: Configuration standards define templates to consistently configure applications and systems based on role: hardening them, removing unnecessary services, and eliminating defaults. Define processes that are enforced with change control to preserve infrastructure integrity and limit the ability of attackers to exploit the infrastructure to deliver malicious software to targeted systems.

Privileged Access Management: Organizations face significant security exposure in the course of routine IT operations. For example, dozens of system administrators may share passwords for privileged accounts on thousands of devices. When system administrators move on, the passwords they used during their work often remain unchanged, leaving organizations vulnerable to attack by former employees and contractors.

Organizations should grant user and system accounts the least privilege needed to perform the job, leveraging role-based access. Processes to create, audit, and remove accounts and access levels should be well-defined.

Shared Service Accounts: Organizations should ensure that service accounts, including default credentials provided with third-party software, are properly secured with defaults removed and provided only to those who need them to perform their job function.

Database Account Security: Organizations should manage and audit database accounts as part of a larger account management process. This includes ensuring that accounts are only granted the necessary level of access. Many organizations do not fully leverage the security built into databases; DBAs often use default admin accounts, and users have full admin rights.

Two-factor Authentication: Two-factor authentication is increasingly mandatory for networks or zones with critical data and/or servers. It reduces the effectiveness of password stealing and cracking attempts.
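
To illustrate the second factor, here is a minimal time-based one-time password (TOTP, RFC 6238) sketch using only the Python standard library. It is for illustration only; a real deployment would use a vetted implementation and properly enrolled tokens.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library. It shows
# why a stolen static password is insufficient by itself: the code changes
# every 30 seconds and derives from a shared secret the attacker lacks.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = base64.b32encode(b"shared-secret-12345").decode()   # demo secret only
print("current one-time code:", totp(secret))
```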

Network Security Components

Threat Intelligence: A threat intelligence capability leveraging internal and/or external sourced visibility can provide an indication that threat actors are focusing on specific types of attacks and indicators to detect these attacks. For more information see APT Threat Analytics.

Network Zones / Segmentation: Limiting and intelligently managing communications between services and systems on an organization’s network helps contain an infection or compromise, keeping malware or a persistent threat from running rampant. Ensure proper zoning and segmentation is performed in your internal network environment, not just the DMZ, and that proper firewall logging and inspection is performed between high- and low-security segments. Treat every system as untrusted. For more information see the Adaptive Zone Defense blog.

Advanced Threat Protection: Evaluate and deploy a network-based advanced threat detection/prevention technology to reduce the potential impact of zero-day malware and other targeted attacks. Review existing advanced threat detection/prevention technology and ensure that the prevention capabilities are validated, tested and fully leveraged.

Context-Awareness: Evaluate and leverage context-aware security capabilities of security platform providers. Security platforms must become context-aware — identity, application, content, location, geo-location and so on — in order to make better information security decisions regarding APTs.

Firewall Rules/ACLs: Review and, if necessary, adjust ingress network firewall rules on a regular basis in order to ensure only critical inbound services are permitted to enter the network. This also includes geographical blocking or filtering at the country level where possible based on business need. For more information on Firewall Rule Lifecycle Management, see Security Program Best-Practices – Part 5.

Egress / Outbound Filtering: Egress filtering enables a managed perimeter with a focus on well-defined outbound policy. It declares the acceptable protocols and destination hosts for communication with internal systems, with a focus on any systems with critical or regulatory data, such as PII, ePHI, PCI and so on.

Network activity associated with remote control can be identified, contained, and disrupted through the analysis of outbound network traffic, which can be implemented with open-source software tools.
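
A minimal sketch of that idea: compare observed outbound connections against a declared egress policy and report everything not explicitly allowed. The flow tuple format and the policy entries are assumptions for illustration.

```python
# Sketch: compare observed outbound connections against a declared egress
# policy. Anything not explicitly allowed is reported for investigation.
import ipaddress

# Declared policy: allowed (destination network, port) pairs.
EGRESS_POLICY = [
    (ipaddress.ip_network("203.0.113.0/24"), 443),   # partner API over HTTPS
    (ipaddress.ip_network("198.51.100.10/32"), 25),  # outbound mail relay
]

def violations(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) seen at the perimeter."""
    for src, dst, port in flows:
        dst_addr = ipaddress.ip_address(dst)
        if not any(dst_addr in net and port == p for net, p in EGRESS_POLICY):
            yield src, dst, port

observed = [("10.0.0.5", "203.0.113.20", 443),
            ("10.0.0.9", "192.0.2.99", 4444)]   # unexpected destination/port
for v in violations(observed):
    print("egress violation:", v)
```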

Remote Access / VPN: Implement internal inspection devices, such as intrusion prevention system (IPS) and network behavior analysis (NBA) technologies between any VPN termination device and the internal network environment so that attacks or behaviors can be discovered or prevented within the remote access network infrastructure.

DNS Sinkholes: A DNS-based sinkhole monitors for name resolution attempts of known malicious or suspicious domains. The resolution response is modified to point to an internal sinkhole server, where the malicious or suspicious traffic is routed for further analysis and containment. In addition, block “uncategorized” web sites at proxies, and employ split-DNS and split-routing where possible.
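
A small sketch of the log-driven side of a sinkhole, assuming a simple "client qname" query-log format and a hypothetical internal sinkhole address: scan queries for known-bad domains and emit hosts-file-style overrides pointing those names at the sinkhole server.

```python
# Sketch: scan DNS query logs for known-bad domains and emit hosts-file style
# overrides that point those names at an internal sinkhole server.
# The "<client> <qname>" log format and the sinkhole IP are assumptions.
BAD_DOMAINS = {"evil-updates.example", "c2.bad-domain.test"}
SINKHOLE_IP = "10.99.99.99"   # hypothetical internal analysis server

def scan_queries(log_lines):
    """Yield (client, qname) for queries that should be sinkholed."""
    for line in log_lines:
        client, qname = line.split()
        if qname.rstrip(".").lower() in BAD_DOMAINS:
            yield client, qname

def zone_entries(qnames):
    """Hosts-file style overrides redirecting bad names to the sinkhole."""
    return [f"{SINKHOLE_IP} {name}" for name in sorted(qnames)]

logs = ["10.0.0.5 evil-updates.example.", "10.0.0.7 www.example.com."]
hits = list(scan_queries(logs))
print("sinkhole hits:", hits)
print("\n".join(zone_entries({q.rstrip(".") for _, q in hits})))
```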

Network IPS: Network intrusion prevention systems (IPS) can actively block network traffic matching patterns associated with malware command-and-control (C2) communication and data exfiltration. Review NG-IPS features and ensure the product provides host and traffic anomaly detection (for example, by processing NetFlow data) and has capabilities to prevent, or at minimum detect and alert on, anomalous traffic exiting through your perimeter devices.

Network IDS: A network intrusion detection system (IDS) can identify traffic patterns matching network-based scanning, malware C2 mechanisms, and data exfiltration. For more information see: APT Detection Indicators.

Network Security Monitoring: Validate that monitoring controls are in place and appropriate levels of logging are performed off-device in centralized log servers. Deploy security information management systems so that attacks can be detected or analyzed through additional analysis or correlation of incoming events.

Make sure that network visibility extends into virtualized environments either by tapping internal virtual switch traffic out for external inspection or by virtualizing IPS capabilities and running directly within the virtualized environment.

Form a Security Operations Center (SOC), or designate specific individuals to operate as one, in order to properly monitor, respond, and perform initial triage for security events. When suspicious anomalies or alerts are received by the security operations center, invoke the incident response process.
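
As a small illustration of the kind of correlation a SOC would automate, here is a sketch of one classic rule: several failed logins followed by a success from the same source within a short window. The event shape is an assumption for illustration.

```python
# Sketch: a tiny correlation rule of the kind a SIEM/SOC runs -- N failed
# logins followed by a success from the same source within a time window.
from collections import defaultdict

FAILS_REQUIRED = 5
WINDOW_SECONDS = 300

recent_failures = defaultdict(list)   # src_ip -> [failure timestamps]

def ingest(event):
    """event: dict with 'ts' (epoch seconds), 'src', and 'outcome'."""
    src, ts = event["src"], event["ts"]
    fails = [t for t in recent_failures[src] if ts - t <= WINDOW_SECONDS]
    recent_failures[src] = fails                 # drop entries outside window
    if event["outcome"] == "failure":
        fails.append(ts)
    elif event["outcome"] == "success" and len(fails) >= FAILS_REQUIRED:
        return f"ALERT: {src} succeeded after {len(fails)} recent failures"
    return None

events = [{"ts": i, "src": "198.51.100.7", "outcome": "failure"} for i in range(5)]
events.append({"ts": 10, "src": "198.51.100.7", "outcome": "success"})
for e in events:
    alert = ingest(e)
    if alert:
        print(alert)
```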

Incident Response: Organizations should have a response plan for handling incidents as well as periodically review and test the plan. For more information on response readiness and preparedness please see APT Response Strategy.

Application Security

Web Application Firewalls: Review Web application firewall configuration and implement vendor-recommended prevention settings. Prefer application firewalls that can share intelligence via reputation feeds, offer fraud detection services, and perform browser and endpoint security and spyware infection assessment.

Endpoint Protection: Host-based malware protection solutions, including antivirus software, host intrusion prevention systems, and advanced malware protection solutions, help identify, alert on, and block malicious software.

File Integrity Monitoring: File integrity monitoring involves monitoring system files for unauthorized changes and is often deployed as part of a larger software change management process.

Application Whitelisting: Application whitelisting defines a limited set of software that can be run on a system. Application whitelisting requires continual management of the list of allowed software to keep up with application and operating system updates.
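
The core check a whitelisting agent performs can be sketched simply: hash the binary and look it up in the allowlist. The enforcement hook (OS policy engine, kernel driver) is out of scope here, and the hash value shown is a placeholder.

```python
# Sketch: hash-based application allowlisting -- only binaries whose SHA-256
# digest appears on the allowlist may execute. Enforcement is out of scope.
import hashlib

ALLOWLIST = {
    # sha256 digests of approved binaries -- placeholder value for illustration
    "0f4c3254a6e1b7a6f0e2f7f3f3b8c9d0e1f2a3b4c5d6e7f8091a2b3c4d5e6f70": "tool-a",
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):   # stream large files
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str) -> bool:
    return sha256_of(path) in ALLOWLIST

# Hypothetical usage:
# if not may_execute("/usr/local/bin/tool-a"):
#     raise PermissionError("binary not on the allowlist")
```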

Data Loss Prevention (DLP): Data loss prevention solutions use information tagging, packet inspection, and network monitoring to identify the potential movement of sensitive data outside the network. In addition, organizations can implement policies to manage the use of removable storage devices such as USB drives, preventing these devices from being used to steal sensitive information.

Security controls can have various impacts based on their purpose and implementation. Ultimately, the goal of a security control is to detect malicious activity, deny it access to targeted assets, disrupt malicious activity that is actively in progress, or contain it to an area where damage can be mitigated.

The matrix illustrated below provides a partial example worksheet that applies the above best-practices and organizes the controls according to whether their primary goal is to detect, deny, disrupt, or contain.

Best-Practice Defensible Actions Matrix Use Case [Partial]

Conclusion

Recent incidents clearly demonstrate that cybercriminals can conduct operations that involve intrusion, lateral movement, and data exfiltration in complex networks secured to current best-practices. Attackers adapt their techniques to the unique circumstances of the targeted environment.

This level of resourcefulness points to the realization that current best-practices and regulatory compliance are a necessary minimum baseline but are not sufficient on their own. Today there is an increasing need for organizations to progressively evolve from their current security posture to a more defensible and advanced program with visibility, validation, and vigilance.

Our solution, and the prime basis for this site, is to adopt a security architecture and design foundation that compartmentalizes breaches into small zones on networks and endpoints: strategically leverage the Adaptive Zone Defense blog to develop an innovative architecture with well-organized applications and services, managed communications, and good visibility into the flows and logs that can actually detect cyber kill chain activity and stop the breach.

It requires an ongoing lifecycle process to take the legacy, rapidly deployed, and chaotic infrastructure on the edge (innovation) and consolidate it into the core foundation based on the architecture/design blueprint, while continually evolving the blueprint based on new business requirements, technology solutions, and regulatory requirements. For more information see: Adaptive Security Lifecycle.

Coming Soon

In this series we will discuss advanced APT-focused best-practices that enable organizations to take their security to the next level, building from Basic to Augmented, through APT-specific Countermeasures, to Advanced Security, depending upon various factors including assets and threat/risk profile.

Evolution Lifecycle using Security Best-Practices

In the upcoming APT Operational Maturity and APT Intelligent Operations blogs we will also discuss the need for a continuously evolving next-generation SIEM, risk management processes, and network behavior anomaly detection that enable organizations to take security operations and situational awareness to the next level, depending upon various factors including threat/risk profile.

Intelligent Operations Evolution

References

This Defensive Security Posture – Part 2 blog is also a part of the APT Strategy Series and Security Architecture Series. For a complete listing of all NigeSecurityGuy blogs see the Security Series Master Index.

Thanks for your interest!

Nige the Security Guy.

APT Defense Puzzle – Best-Practice & Controls

APT Strategy Series

Defensible Security Posture
Advanced Threat Defense
APT Detection Framework
APT Detection Indicators
APT Red Teams

In the last few years, protecting business assets has become much more difficult as the “bad guys” continue to evolve their attacks to evade IT defenses. When you add employee-owned mobile devices (BYOD) and cloud-based services into the mix, which require networks to be more dynamic than in years past, traditional network security controls and practices are simply no longer enough to ensure protection.

A 2013 study by the Ponemon Institute found that 67 percent of organizations admit that their current security activities are insufficient to stop a targeted attack. Trend Micro found that 55 percent are not even aware of intrusions, and fewer know the extent of the attack or who exactly is behind it.

The Advanced Persistent Threat now more closely approximates the “Average” Persistent Threat, and the average organization is going to have to learn how to protect itself from this new and different form of cyber threat. Over the last few years, three factors have combined to attract organized criminal elements to hacking:

  1. There’s real money to be made –– in several different ways.
  2. There’s a very low risk of getting caught.
  3. There are readily-available hacking tools that anyone can modify to suit their purposes.

This APT Defense Puzzle blog, part of the APT Strategy Series, is a living and evolving blog that will be continually updated and extended with practical best-practices and controls that organizations can leverage to manage and defend against the real and increasing threat of APT. It complements the APT Threat Defense blog, which focuses on top-down architecture and strategy, with a bottom-up focus on quick fixes and changes that enable tangible improvements in security posture.

Solving the APT Defense Puzzle

The blog is also complemented by a LinkedIn group entitled Solving the APT Defense Puzzle, which brings together a virtual community of security professionals to share practical best-practices, controls, and tools, together with analysis of APT attacks in the wild and actionable steps.

See: Solving the APT Defense Puzzle Announcement

Defensible Posture Recap

As stated in the Defensible Security Posture blog, the basic idea of a Defensible Security Posture is that you aren’t striving for an absolute, but rather for a position (or posture) that is able to be defended even when it’s infiltrated.

“He who tries to defend everything defends nothing.” – Frederick II

Kill Chain Actions 2

There are a few basic things we need to understand:

  1. Defensible does not mean secure
  2. There are more things to defend than there are resources to defend
  3. Sometimes your defenses can become your weakness
  4. Defensibility requires understanding of what critical assets you’re defending
  5. Defensibility focuses on what, why, how, when and from whom

There is no silver bullet or single solution. APT attackers continually demonstrate their capability to compromise systems by using social engineering techniques, customized malware, and zero-day exploits that intrusion detection, anti-virus and patching cannot always detect or mitigate. Responses to APT intrusions require an evolution in analysis, process, visibility and technology.

A Call to Action

While there is no APT silver bullet there is much an organization can do with a well-designed and managed defensible posture. Given the inherently porous nature of richly inter-connected systems, it is quite likely that determined attackers will penetrate virtually any system. This does not mean there is no defense. It means there is a need to change the concept of defense from walling off the system to detecting, monitoring and mitigating attacks on the system.

The reality is that organizations actually have much more control over cyber attackers when attackers are inside their system than when attackers are on the outside selecting access points into it. Moreover, most cyber attacks are not successful merely by penetrating the system.

Success for the attacker does not occur until they gather valuable information and then exit the system with it. If an enterprise can detect an unwelcome entity within the system, for example, and block its pathway back out, it can successfully mitigate the attack even if the system has been successfully breached.

Practical Best-Practice & Controls

Developing a Defensible Security Posture is very similar to solving a complex 5,000+ piece jigsaw puzzle, except that organizations do not yet have the complete picture or always know which pieces are actually missing, given the complexity. As we are all painfully aware, security is only as good as your weakest link, so a missing piece or two enables the APT attacker to compromise the environment and establish a base camp.

Defense Puzzle

Based upon experience conducting hundreds of network architecture assessments, vulnerability assessments, penetration tests, and social engineering assessments: what are the common gaps in best-practices, controls, and tools … the typical missing pieces or unseen flaws that are easily fixable yet enable a successful APT attack?

Based upon analysis of successful APT attacks and compromises in the wild: what techniques of exploitation and persistence were used, and what were the lessons learned? What easily actionable fixes can organizations make to prevent a similar attack and to evolve and improve their posture?

The devil is in the details. This blog presents an ever-evolving list of missing pieces and/or validation checks to complete a defensible security posture. Do you have that control or option configured? Is it configured correctly? Did the organization miss a gotcha that caught others?

Conclusion

IT security threats continue to become more targeted and more dangerous, security challenges are getting even more complex, and the costs of security failures keep going up. Business as usual can no longer protect enterprise networks against these threats –– much less what’s coming tomorrow.

IT needs to act now to address the challenges of ubiquitous mobile device access, for-profit hacking, Advanced Persistent Threats, application vulnerabilities and complex multi-vendor hyper-extended network management.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices – Part 5

Security Program Best-Practices – Part 1
Part 2
Part 3
Part 4
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments. In this blog we will discuss the final and most critical Gap 10 – Develop Firewall Rule Lifecycle Management.

Firewall Rule Lifecycle

Gap 10: Firewall Rule Lifecycle Management

Business Problem

Firewalls are the first and continued line of defense for enterprises today, handling vast amounts of traffic across the network. On the perimeter alone firewalls filter millions of packets daily. The organizational security policy implemented in these firewalls requires the definition of hundreds and often thousands of rules and objects. Objects may include groups of servers, user machines, sub-networks in the data center, and networks in company branch offices or DMZs. The firewall rules define which type of applications and which network services are allowed to traverse between networks and which should be blocked.

Firewalls are Organic

Since business needs are dynamic, firewall policies are constantly being changed and modified. This continuous flux causes the firewall configuration to grow dramatically over time. A huge and consequently complex firewall configuration is hard to manage and may require lengthy research in order to add or change a rule. Moreover, the complexity of the configuration decreases the firewall’s performance and may lead to potential security breaches. For example, a rule may be created to allow a temporary service for a limited time, but if the administrator fails to delete the rule after the task is finished, it introduces real security risk.

Complex Rules

Finding unused rules that have not matched any traffic, duplicate rules, and rules that are covered by other rules is a complex manual task for the firewall administrator. It may take days of investigating just to locate such rules in huge firewall configurations, while at the same time the firewall is continuing to change daily due to user requests.

Firewall Policy Vulnerability

Gartner noted in a recent research note that …

“Through 2018, more than 95% of firewall breaches will be caused by firewall misconfigurations, not firewall flaws.”

Organizations need to develop a Firewall Rule Lifecycle Management process to clean up their firewall policies, easing the network security administrator’s job while boosting firewall performance and eliminating security holes.

Organizations need to identify and address the following:

  • Unused rules: Rules that have not matched any packet during a specified time. Cisco ACL hit counters, central syslog logging, or commercial tools can be used to analyze the firewall logs and compare the actual traffic to the rules in the policy. Unused rules are ideal candidates for removal; often the application has been decommissioned or the server has been relocated to a different address.
  • Covered or duplicated rules: Rules that can never match traffic because a prior rule or a combination of earlier rules prevents traffic from ever hitting them. During firewall cleanup such covered rules can be deleted since they will never be used. Covered and duplicated rules cause the firewall to waste processing time and decrease its performance.
  • Disabled rules: Rules that are marked “disabled” and are not in operation. Disabled rules are ideal candidates for removal, unless the administrator keeps them for occasional use or for historical record.
  • Time-inactive rules: Rules that were active for a specified time window in the past that has since expired. Rules that were active for a specific period can become active again at the same time next year; retaining such rules may create security holes.
  • Rules without logging: Rules that are defined not to generate logs. Security best-practice guidelines usually dictate logging everything, but since log information consumes a large amount of disk space, administrators often configure highly used rules that control low-risk traffic not to generate logs. Listing the rules without logs helps the administrator verify that the lack of audit for these rules does not contradict policy.
  • Least used rules and most used rules: Rules that matched the smallest or largest number of packets over a predefined, configurable period of time. Rule usage statistics help the administrator in the cleanup process and with performance improvement: most used rules can be repositioned higher in the configuration and least used rules lower, and rules with a zero hit count may be removed.
  • Rules with empty comments: Rules that are not documented, i.e., without a text explanation or a reference to the original change management request. Policy often requires an explanation for each rule, so rules without comments are a violation of the policy. Some companies require entering a ticket number from the help desk trouble-ticketing application into the rule comment.
  • Unattached objects: Objects that are not attached to any rule, or unattached global objects.
  • Empty objects: Objects that do not contain any IP address or address range.
  • Duplicate objects: Objects that already exist but are recreated, contributing to policy “bloat”.
  • Unused objects: Objects whose address ranges did not match any packet during a specified time, or unused global objects.

By removing unnecessary rules and objects, the complexity of the firewall policy is reduced. This improves manageability, increases performance, and removes potential security holes.
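
A minimal sketch of the first check above, assuming the policy is a set of rule IDs and each firewall log entry records the ID of the rule it matched:

```python
# Sketch: find candidate-unused rules by comparing rule IDs present in the
# policy against rule IDs that actually appear in firewall logs over the
# review period. Log and policy formats are assumptions for illustration.
import re

policy_rule_ids = {101, 102, 103, 104, 110}   # taken from the firewall config

def used_rule_ids(log_lines):
    """Extract rule IDs from log entries like '... rule_id=103 ...'."""
    pattern = re.compile(r"rule_id=(\d+)")
    return {int(m.group(1)) for line in log_lines for m in pattern.finditer(line)}

logs = ["accept src=10.0.0.5 dst=203.0.113.7 rule_id=101",
        "drop   src=192.0.2.9 dst=10.0.0.8  rule_id=110"]
unused = policy_rule_ids - used_rule_ids(logs)
print("candidates for removal (no matches in period):", sorted(unused))
# Caveat from the text: rules configured not to log must be listed separately.
```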

Cleanup Phase 1: Validation

The Validation phase involves reviewing the firewall rules and performing a static analysis, either manually or with the use of public domain or commercial tools such as AlgoSec or Tufin.

AlgoSec Example

Items to be reviewed in this step are as follows:

  • Unattached Object / Unattached VPN User-Group – An object that:
    • Does not appear in any rule
    • Belongs only to groups that do not appear in any rule
    • In any policy on any firewall
  • Empty Objects:
    • Do not refer to any IP address
  • Unattached VPN Users:
    • Do not appear in any user group and have no access
  • Unattached access-list (Cisco):
    • Not connected to any interface
  • Expired VPN users:
    • No longer have access
  • Disabled Rules:
    • Maybe it’s time to delete them?
  • Time-Inactive rules:
    • Timed rules are active on certain days of the month, days of the week, or times of the day…
    • … but you cannot set a year.
    • Identify the expired rules before they become active again next year.
  • Duplicate Rules (see the sketch after this list):
    • Firewalls process the rules in order, “first match”
    • If “early” rules match every packet that a “late” rule could match, the “late” rule is covered (useless clutter!)
    • Easy cases: a single rule covers another rule, or the object names match exactly
  • Duplicate Objects:
    • Most firewall vendor consoles cannot answer the question “does this definition already exist under another name?”
    • Result: administrators often define the same object (host, subnet, or group) multiple times
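
A minimal sketch of the “easy cases” of duplicate and covered rules, with rules simplified to (source network, destination network, port, action) tuples for illustration:

```python
# Sketch: flag exact duplicates and single-rule coverage -- an earlier rule
# whose source/destination/service fully covers a later rule makes the later
# rule unreachable clutter. Rules are simplified tuples for illustration.
import ipaddress

def rule(src, dst, port, action):
    return (ipaddress.ip_network(src), ipaddress.ip_network(dst), port, action)

def covers(a, b):
    """True if rule a matches every packet rule b could match."""
    return (b[0].subnet_of(a[0]) and b[1].subnet_of(a[1])
            and a[2] == b[2] and a[3] == b[3])

rules = [
    rule("10.0.0.0/8", "0.0.0.0/0", 443, "accept"),    # rule 1
    rule("10.1.2.0/24", "0.0.0.0/0", 443, "accept"),   # rule 2: covered by 1
    rule("10.0.0.0/8", "0.0.0.0/0", 443, "accept"),    # rule 3: duplicate of 1
]

for j, later in enumerate(rules):
    for i, earlier in enumerate(rules[:j]):
        if covers(earlier, later):
            kind = "a duplicate of" if earlier == later else "covered by"
            print(f"rule {j + 1} is {kind} rule {i + 1} -- candidate clutter")
            break
```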

Cleanup Phase 2: Unused Rules

The Unused Rules phase involves usage-based analysis, i.e., focusing on what has changed recently, ensuring that the firewall rules are kept up-to-date, and flagging and/or removing rules that are no longer needed so that the firewall does not become unwieldy and risk conflicts or duplicates.

Rules Cleanup

This step allows us to identify key and useful data, as follows:

  • Unused Rules:
    • have not matched traffic in the last NNN days
  • Unused Objects:
    • Do not belong to any rule that matched traffic in the last NNN days
  • Most / Least used rules
  • Last date that rule was used
    • Even if it is listed as “unused” due to logging configuration settings

These considerations and notes should be borne in mind for this step, as follows:

  • Over time:
    • Applications are discontinued
    • Servers are relocated to other IP addresses
    • Test environments move to production
    • Business partnerships change
    • Networks are re-architected
    • Routing is changed
  • Result: Firewalls still have the rules – but the traffic is gone
  • Idea: Track and flag rules and objects that have not been used “recently”
  • Firewalls can log each matched packet
  • Log includes rule number, timestamp, and more
  • Basic approach:
    • 1) Filter the logs based on rule number
    • 2) Find the missing rule numbers and delete those rules
  • Challenge #1: Logging is configured per rule
    • Some rules are not configured to produce logs
  • Solution #1: List rules that do not produce logs separately
  • Challenge #2: Rule Insertions & Deletions change the rule numbers!
    • Which rule corresponds to what used to be called rule 101 in Nov ’07?
    • Makes long-term statistics unreliable
  • Solution #2: Vendor attaches a unique “rule_id” to each rule, such that it is:
    • Reported to the log
    • Retained with the rule through any rule add/remove/modify
  • Cisco Firewalls & Routers maintain a per-rule hit-counter
  • Advantages:
    • Unrelated to logging: un-logged rules are counted too
    • Rule insertions & deletions do not affect the hit-counters
  • Challenge:
    • Hit-counters are reset to zero when device reboots
  • Solution:
    • Take periodic snapshots
    • Attach pseudo rule_uids, homogenize the snapshots
    • Make sure not to double-count (see the sketch after this list)
  • Some rules only work occasionally or rarely
    • High-shopping season
    • Disaster recovery rules – tested semi-annually
    • Need usage information of many months
  • Challenge:
    • Log files can become huge – querying extended historical data can have a real impact on the production log server
    • Logs are discarded or rotated
    • Hit-counters are occasionally set to 0
  • Solution:
    • Process the raw usage information frequently (daily)
    • … But keep concise summaries available (forever)
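
The hit-counter approach above can be sketched in a few lines: merge time-ordered per-rule counter snapshots into cumulative usage, treating a counter that goes backwards as a reboot so hits are not double-counted. The snapshot format is an assumption for illustration.

```python
# Sketch: turn periodic per-rule hit-counter snapshots into cumulative usage,
# tolerating the reset to zero that happens when the device reboots.

def accumulate(snapshots):
    """snapshots: time-ordered list of {rule_id: hit_counter} dicts."""
    totals, previous = {}, {}
    for snap in snapshots:
        for rule_id, count in snap.items():
            prev = previous.get(rule_id, 0)
            # If the counter went backwards, the device rebooted and the whole
            # current value is new hits. Otherwise only the delta is new.
            delta = count if count < prev else count - prev
            totals[rule_id] = totals.get(rule_id, 0) + delta
            previous[rule_id] = count
    return totals

snaps = [{101: 50, 102: 0}, {101: 80, 102: 0}, {101: 5, 102: 0}]  # reboot before 3rd
print(accumulate(snaps))   # {101: 85, 102: 0} -- rule 102 unused all period
```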

Cleanup Phase 3: Performance Optimization

In order to provide a measurable attribute of firewall performance that shows the improvement from policy optimization, there is a metric called Rules Matched Per Packet (RMPP).

Rule Optimization

RMPP is simply a calculation of the average number of rules the firewall tested until it reached the rule that matched a packet (including the matched rule). For example:

If the firewall policy consists of only one rule (allow or deny all) that matches everything – RMPP will be 1. If the firewall policy consists of 100 rules, such that rule #1 matches 20% of the packets, rule #10 matches 30% and rule #100 matches 50% of the packets:

RMPP = 1 * 20% + 10 * 30% + 100 * 50% = 0.2 + 3 + 50 = 53.2

Firewalls do in fact test the rules in sequence, one after another, until they reach the matching rule, and each tested rule contributes to the firewall’s CPU utilization. Therefore, optimizing the policy to decrease the RMPP score will decrease the firewall CPU utilization and greatly improve overall performance.

Building on the previous example, if rule #100 (that matches 50% of the packets) can be relocated to position #50 – without modifying the firewall policy decisions – the RMPP will be reduced significantly:

RMPP = 1 * 20% + 10 * 30% + 50 * 50% = 0.2 + 3 + 25 = 28.2

This simple change, which can be achieved by reordering the rules, can produce a 47% improvement in firewall performance.
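
The RMPP arithmetic is easy to reproduce; this small sketch recomputes both examples above and the resulting improvement.

```python
# Sketch: compute Rules Matched Per Packet (RMPP) and show the gain from
# relocating a heavily used rule, reproducing the numbers in the text.

def rmpp(rule_hits):
    """rule_hits: list of (rule_position, fraction_of_packets_matched)."""
    return sum(pos * frac for pos, frac in rule_hits)

before = [(1, 0.20), (10, 0.30), (100, 0.50)]
after  = [(1, 0.20), (10, 0.30), (50, 0.50)]   # rule 100 moved to position 50

print(f"RMPP before: {rmpp(before):.1f}")   # 53.2
print(f"RMPP after:  {rmpp(after):.1f}")    # 28.2
print(f"improvement: {1 - rmpp(after) / rmpp(before):.0%}")   # ~47%
```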

Conclusion

Firewall administrators can achieve significant and measurable performance improvements for their complex firewalls by using these cleanup, lifecycle management, and policy optimization (with rule reordering) techniques. There are many commercial tools available that help in policy cleanup by identifying rules that are unused, covered, or disabled and should ideally be removed, in addition to unattached, empty, duplicate, and unused objects. The tools help eliminate security risks and keep the firewall policy well managed by alerting administrators.

The more veteran firewall audit vendor list includes: Tufin Software Technologies, AlgoSec, Secure Passage and Athena Security — and then RedSeal Systems and Skybox Security, which are primarily risk-mitigation tools, and so go beyond firewall audit to feature risk-assessment and risk-management capabilities.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices – Part 4

Security Program Best-Practices – Part 1
Part 2
Part 3
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments.

As a refresher, in Part 1 we shared some typical gaps, deficiencies or, need for improvements summarized in the Opportunity Matrix below. The Opportunity Matrix can be used as a capability maturity assessment and iterative planning tool to present proposed next steps to executive management for approval and funding.

Opportunity Matrix Summary

Part 1 through Part 3 of the Security Program Best-Practices series covered an overview as well as Gap 01 through Gap 07 inclusive. In this blog we will discuss Gap 08 – Integrate Central Security Logging and Gap 09 – Establish Network Security Operations, per the summary below.

  • GAP 01 – Identify Requirements: Security Policy, Regulation and, Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 08: Integrate Central Security Logging

Business Problem

To enable and deploy a defensible security posture, pervasive and mission-critical information technology and hyper-extended networks must be more scrupulously monitored to detect anomalies and threats. High traffic volumes are also associated with higher threat levels, making automated network monitoring, alerting, and response indispensable. Automated monitoring improves system security, performance, and availability by allowing management by fact. Automation also frees the IT team to focus on exceptions, which in turn simplifies holistically managing large amounts of event data.

Vulnerability Types

Being able to monitor various instrumentation telemetry data sources and event logs gives an administrator a substantial advantage in identifying threats early on – rather than investigating them after the fact. A sound logging strategy is the centerpiece in any organization’s “big picture – big data” network security plan. The presence of event monitoring within its log strategy helps distinguish a proactive plan from a reactive plan.

It is well established among network security professionals that the greatest threats to network security are in fact internal – they often originate in the same building, the same floor perhaps, and often right down the hall. The source may be a disgruntled employee, a curious staff member in the payroll department, or a bored sales representative. For several years, this threat was overlooked for the sexier external threat – the hackers working in dark home offices late at night or a competitor’s agent of corporate espionage.

To a network security administrator, event logs are like a history book or the gauges of an automobile. Event logs allow administrators to look back at the recent history of a server or network device and see trends, failures, successes, and other vital information to the organization.

Botnet Army

Our richly interconnected online world has faced an ever-increasing volume of malware and worm variants — even botnets. They exploit vulnerabilities in, for example, the Windows operating system and systematically reproduce across the organization. All the while, servers, routers, and other network devices quietly log these events across LANs and WANs. For administrators, these log files give a snapshot of a window (excuse the pun) in time that shows when, where, and most of the time how the infection or compromise entered their controlled space.

Event logs also hold potentially valuable forensic evidence. In the aftermath of a network security breach, event logs hold all of the information about the breach: how it happened, when it happened, and, in the end, the keys to preventing another breach. This data is key to enabling the ability to Detect, Contain, and Eradicate, as well as to investigate the root cause, address it, and prevent recurrence in the future.
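
A first practical step toward central logging is normalizing heterogeneous device logs into a common event structure before storage, so later correlation can work across sources. A minimal sketch follows; the syslog regex and field names are assumptions for illustration.

```python
# Sketch: normalize device logs into a common event structure before central
# storage. Unparsed lines are kept rather than dropped, preserving evidence.
import re

SYSLOG = re.compile(r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) "
                    r"(?P<proc>[\w/-]+)(\[\d+\])?: (?P<msg>.*)$")

def normalize(line):
    m = SYSLOG.match(line)
    if not m:
        return {"raw": line, "parsed": False}   # never discard unparsed lines
    return {"parsed": True, **m.groupdict()}

line = "Mar  1 12:00:01 fw01 kernel: DROP src=192.0.2.9 dst=10.0.0.8"
print(normalize(line))
```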

Gap 09: Establish Network Security Operations

Business Problem

The problem with network security is not the lack of good security tools; it is the management of those tools and the exposure to human error. Large networks generate an overwhelming amount of logs and security events. Firewalls, intrusion detection systems, web servers, authentication devices, and many other network elements contribute more and more logs, which need to be analyzed to produce actionable information.

Holistic Logging

There is a lot of noise at first, and false positives need to be resolved and addressed by profiling network traffic and tuning network security technologies to the organization’s business – to detect anomalies and realize the true potential and value of the technology. Too many organizations deploy the solution out-of-the-box and stop there, disappointed by all of the noise and overwhelmed by the task at hand.

However, this ongoing effort and optimization can reduce the volume of alerts from thousands per day to dozens. When a correlation occurs, a simple message that says a particular server has been attacked with a technique that is likely to succeed can be sent to system owners, operations people, and other stakeholders. The operator starts to realize value from the technology and its automation, focusing on those alerts and events that need action and determining whether they represent a breach requiring further investigation.

Attackers typically create a smoke screen: a set of decoys that obscures the actual compromise or infection so that it is lost in the noise, leaving security operators so overwhelmed that they do not detect the stealthy attack. This is validated by the recent spate of DDoS attacks that not only seek to deny normal service but also to compromise servers under cover of the attack. Many SIEM solutions generate a lot of noise out-of-the-box and need tuning to weed out false positives.
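
One of the simplest tuning steps that cuts alert volume from thousands to dozens is collapsing repeated identical alerts into a single summarized alert per window. A minimal sketch, with an assumed alert shape:

```python
# Sketch: collapse repeated identical alerts into one summarized alert per
# review window, grouped by (signature, source, destination).
from collections import Counter

def summarize(alerts, min_count=1):
    """Group alerts by (signature, src, dst) and report each group once."""
    groups = Counter((a["sig"], a["src"], a["dst"]) for a in alerts)
    for (sig, src, dst), n in groups.most_common():
        if n >= min_count:
            yield f"{sig} {src} -> {dst} (x{n})"

raw = [{"sig": "SQLi attempt", "src": "192.0.2.9", "dst": "10.0.0.8"}] * 500
raw += [{"sig": "C2 beacon", "src": "10.0.0.5", "dst": "203.0.113.7"}]
for line in summarize(raw):
    print(line)   # two summary lines instead of 501 raw alerts
```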

Detection and Response

Systems fail and intrusions occur. At some point compromise is inevitable. Therefore, detection and containment is imperative. The earlier an intrusion or infection is detected, the greater the ability of the organization to mitigate the risk. Intrusion detection is considered the second line of perimeter defense, after the firewall. Intrusions can lead to malicious acts such as: identity theft; compromise of confidential information; and unauthorized changes in files, systems, and device configurations.

Threat Landscape

An organization’s ability to detect and prevent intrusions adds more depth to its defensive security posture. Organizations must be aware that intrusion detection alone will not mitigate the risk of an intrusion; mitigation can only occur with a timely and appropriate response. A prudent response program incorporates people and processes in addition to technology, and starts with the creation of a computer security incident response team (CSIRT) that will be the initial responder when an incident is identified. In addition to the CSIRT, policies must be developed to guide the organization and team in responding to an event. Types of events and the specific procedures to be followed also need to be defined. The development of an incident response program is typically mandated by regulation, international standards, or industry best-practices.

The timely detection of an intrusion, coupled with being prepared to respond, is vital to minimizing financial, production, and operational losses. Specific actions and responsibilities need to be pre-assigned and the appropriate training provided. In addition, containment and restoration strategies need to be outlined that address the isolation of the compromised system; increased monitoring; collection and preservation of evidence; and notification to law enforcement, regulators, and other affected parties.

Continuous Improvement

Monitoring and updating the security program is essential to maintaining the effectiveness of the program. A static program will be ineffective over time and can leave the organization with a false sense of security. Monitoring should include both non-technical as well as technical issues.

Plan Do Check Act

Non-technical issues would include changes in business processes, policies and procedures, locations, sensitivity of data, key personnel, and organizational changes.

Technical issues include monitoring for vulnerabilities, changes in systems, service providers, configuration, users, products, and services. When changes do occur, it is imperative that they are reviewed for accuracy and legitimacy and the program is adjusted to reflect the changes and ensure continued security and operational success.

Accidental changes can be just as damaging as malicious or fraudulent change activities, resulting in increased costs for remediation and potential losses or a negative effect on the organization’s top-line revenue. Best practices mandate the monitoring of all changes, intended and unintended, creating an audit trail that details when, what, and how the change occurred. The use of automated change control and audit tools will also enhance operational efficiency by increasing the effectiveness and productivity of your security personnel.

Each change can potentially create a vulnerability or weakness in the security program if not properly evaluated, tested, and deployed. Therefore, strong change control procedures and monitoring are critical to reduce the exposure to financial losses, reputation damage, and loss of productivity.

Validation: Trust but Verify

To assure that its security strategies are adequate, each organization must test its controls against the risk events that were identified through its formal assessment of risks. The higher the probability and negative effect of a risk event, the greater the need to validate the effectiveness of the security controls. The type of test to perform and the frequency should also be based on risk.

Risk Management

Prior to testing, detailed test plans need to be developed to ensure testing is appropriate and controls are established to reduce the risk to data integrity and confidentiality and to ensure availability. Test results need to be measurable and traceable to provide assurance that the security strategy is meeting security objectives.

There are a variety of testing methodologies and tools available, many of which can be automated to improve efficiency and enable independence. Independent diagnostic tests include penetration tests, audits, and gap assessments that are performed by credible individuals who are independent of the design, installation, maintenance, and operation of the test subject area. Examples of resources that help support and streamline the testing effort include: log and audit files generated via security event management systems, change management reports, automated audit tools coupled with penetration testing, prior security gap assessment findings and recommendations, and internal IT audit findings and recommendations from prior audits.

No one control or solution can ever guarantee 100 percent security. High-performing organizations understand that business and technology risk management best practices mandate a defense-in-depth security approach that includes multiple controls and can be validated with internal and external audit resources. When properly aligned with the organization’s risk profile, all of the controls discussed above help to establish a practical and prudent risk-based security posture.

Balancing Security

When properly aligned with the organization’s business goals, audit personnel and tools can validate the appropriateness of these controls and help to ensure operational excellence and a secure infrastructure.

Coming Soon

Security Program Best-Practices – Part 5 will complete this Security Governance Series with a significant topic that warrants its own blog, Gap 10 – Firewall Rule Lifecycle Management for discussion and helpful advice on key components.

Thanks for your interest!

Nige the Security Guy.

Defensible Security Posture – Part 1

Defensible Security Posture – Part 2

The purveyors of Fear, Uncertainty and Doubt (FUD) assert that preventing today’s advanced threats is unrealistic, internal compromise is inevitable and – that FUD factor is reinforced by more and more reports of malware and advanced attacks penetrating insufficient security controls. However, it’s not all doom and gloom. Although the experts concede that stopping 100% of attacks is a technical impossibility, there are ways for organizations to avoid becoming the next devastated victim.

Unfortunately ‘secure‘ is still the target of many CISOs and company leadership. From painful experience many security practitioners collectively know that ‘secure‘ is a mythical goal and doesn’t actually exist. The leap in logic proposed by this blog is that we move to something that’s ‘defensible‘.

The basic idea of a Defensible Security Posture is that you aren’t striving for an absolute, but rather for a position (or posture) that is able to be defended even when it’s infiltrated. The analogy that I like to use is the human immune system since security and advanced attacks are organic in nature and can come from various sources of infection. There are a few basic things we need to understand:

  1. Defensible does not mean secure
  2. There are more things to defend than there are resources to defend
  3. Sometimes your defenses can become your weakness
  4. Defensibility requires deep understanding of what critical assets you’re defending
  5. Defensibility focuses on what, why, how, when and from whom

Advanced Persistent Threats

The US National Institute of Standards and Technology (NIST) defines an APT as:

An adversary that possesses sophisticated levels of expertise and significant resources which allow it to create opportunities to achieve its objectives by using multiple attack vectors (e.g., cyber, physical, and deception). These objectives typically include establishing and extending footholds within the information technology infrastructure of the targeted organizations for purposes of exfiltrating information, undermining or impeding critical aspects of a mission, program, or organization; or positioning itself to carry out these objectives in the future. The advanced persistent threat: (i) pursues its objectives repeatedly over an extended period of time; (ii) adapts to defenders’ efforts to resist it; and (iii) is determined to maintain the level of interaction needed to execute its objectives.

Attacks from APTs are growing in scope, increasing in frequency, and improving in effectiveness – establishing an insider base camp and covering their tracks. Current strategies are not well-suited to mitigating prolonged and determined attackers leveraging a growing collection of stealthy techniques. The traditional perimeter-and-prevention response to threats is no longer realistic. Organizational resources need to shift the focus instead onto Detection, Containment, Eradication, and Recovery.

There is no silver bullet or single solution. Most organizations continue to focus on defending against zero-day exploits by relying on commercial security products to block bad sites and software and by patching systems to correct vulnerabilities in installed software. While these approaches are effective against some threats, they fail to stop the advanced attacks and provide no knowledge of what an adversary does once the network is penetrated.

APT attackers continually demonstrate their capability to compromise systems by using social engineering techniques, customized malware, and zero-day exploits that intrusion detection, anti-virus and patching cannot always detect or mitigate. Responses to APT intrusions require an evolution in analysis, process, visibility and technology. This blog describes an intelligence-driven, threat-focused approach.

Intelligence-driven Network Defense

Organizations may use a number of active techniques to detect attacks that can circumvent passive defenses. One approach uses honeypots to attract adversaries and look for patterns of behavior. Organizations may employ a number of active defense techniques within their own enterprises to detect and track adversaries as they explore networks. If a honeypot is set up with a number of different types of documents, organizations can watch to see which documents the adversary chooses to try to exfiltrate.
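
A minimal honeytoken sketch of that approach: seed the honeypot with decoy documents that each carry a unique token, then watch egress or proxy logs for those tokens. The file names, token scheme, and log format here are hypothetical.

```python
# Sketch: seed a honeypot share with decoy documents, each carrying a unique
# token, then watch egress/proxy logs for those tokens appearing in transit.
import uuid

def make_decoys(names):
    """Write one decoy file per name; return {token: filename}."""
    tokens = {}
    for name in names:
        token = uuid.uuid4().hex
        with open(name, "w") as f:
            f.write(f"CONFIDENTIAL draft - ref {token}\n")
        tokens[token] = name
    return tokens

def token_hits(tokens, log_lines):
    """Yield (decoy_name, log_line) where a decoy token appears in egress logs."""
    for line in log_lines:
        for token, name in tokens.items():
            if token in line:
                yield name, line

decoys = make_decoys(["q3-acquisition-draft.txt"])        # hypothetical decoy
first_token = next(iter(decoys))
fake_log = [f"POST http://203.0.113.7/upload body-contains={first_token}"]
for name, line in token_hits(decoys, fake_log):
    print("decoy exfiltration attempt involving:", name)
```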

Intelligence-driven Network Defense is a risk management strategy that addresses the threat component of risk, incorporating analysis of adversaries, their capabilities, objectives, doctrine, and limitations. This is necessarily a continuous process, leveraging indicators to discover new activity. It requires a new understanding of the intrusions themselves, not as singular events, but rather as a phased progression.

The benefit of Intelligence-driven Network Defense is a more resilient security posture. After all, APT attackers are persistent and attempt intrusion after intrusion, adjusting their operations based on the success or failure of each attempt. Once a compromise is achieved, the APT attacker deploys backdoors for contingency and covers any tracks.

The Signature of an APT

In any Advanced Persistent Threat (APT) attack there is typically a pre-defined set of phases that acts as a signature, as follows:

APT Evolution

The important point is not that this is a linear flow – some phases may occur in parallel, and the order of earlier phases can be interchanged – but rather how far along an adversary has progressed in his or her attack, the corresponding damage, and the investigation that must be performed. (A small sketch that maps alerts to these phases follows the phase list below.)

APT Attack Kill Chain 2

  • Reconnaissance – Research, identification and selection of targets, often represented as crawling Internet websites such as social networks, organizational conferences and mailing lists for email addresses, social relationships, or information on specific technologies.
  • Weaponization – Coupling a remote access trojan with an exploit into a deliverable payload. Increasingly, application data files such as PDFs or Microsoft Office documents serve as the weaponized deliverable.
  • Delivery – Transmission of the weapon to the targeted environment via, for example, email attachments, websites, and USB removable media.
  • Exploitation – After payload delivery to victim host, exploitation triggers intruders’ code. Exploitation targets an application or operating system vulnerability or leverages an operating system feature that auto-executes code.
  • Installation – Installation of a remote access trojan or backdoor on the victim system allows the adversary to maintain persistence inside the environment.
  • Command and Control – APT malware typically establishes remote command and control channels so that intruders have “hands on the keyboard” access inside the target environment.
  • Actions on Targets – Typically the prime objective is data exfiltration, which involves collecting, encrypting, and extracting information from the victim environment. Intruders may only seek access to a victim box for use as a jump point to compromise additional systems and move laterally inside the network, or to attack other partner organizations.
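
As referenced above, here is a coarse sketch of mapping incoming alerts onto these phases so responders can judge how far an intrusion has progressed. The keyword-to-phase hints are illustrative assumptions, not a real taxonomy.

```python
# Sketch: tag alerts with the kill-chain phase they most likely indicate,
# so responders can judge how far along an intrusion has progressed.
PHASE_HINTS = {
    "port scan": "Reconnaissance",
    "malicious attachment": "Delivery",
    "exploit": "Exploitation",
    "new service installed": "Installation",
    "beacon": "Command and Control",
    "large outbound transfer": "Actions on Targets",
}

def classify(alert_text):
    text = alert_text.lower()
    return [phase for hint, phase in PHASE_HINTS.items() if hint in text]

print(classify("Periodic beacon to 203.0.113.7 observed"))
# ['Command and Control'] -- a late phase: prioritize containment
```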

Actionable Intelligence and the Intrusion Kill Chain

Cyber ‘kill chain’ methodology is the latest in a series of security strategies targeted especially at APTs, based on a more proactive and visible model of real-time network monitoring, analysis, and mitigation. The formal concept of the cyber ‘kill chain’ was first developed by a group of scientists at Lockheed Martin in a paper titled “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains”.

The intrusion kill chain becomes a model for actionable intelligence where practitioners align organizational defensive capabilities to the specific processes an adversary undertakes to target that organization. The end goal is to analyze the data for patterns of attack methods, behaviors of distinct hostile actors, and other indicators which can inform the development of unique responses. Fundamentally, this approach is the essence of an Intelligence-driven Network Defense security posture: basing security decisions and measurements on a keen understanding of the adversary.

Defensible Actions Matrix

The following is an example of a table that depicts a course of action matrix using the actions of detect, deny, disrupt, degrade, deceive, and contain. Documenting the capabilities defenders can employ in this matrix as a tool enables the reader to assess their Defensible Security Posture as well as identify any gaps or needed compensating controls. The matrix includes traditional systems like network intrusion detection systems (NIDS) and firewall access control lists (ACLs), system hardening best practices like audit logging, and also vigilant users themselves, who can detect suspicious activity.

Kill Chain Actions 2

Intelligence-driven Network Defense is a necessity in light of advanced persistent threats. As conventional, vulnerability-focused processes are insufficient, understanding the threat itself (its intent, capability, doctrine, and patterns of operation) is required to establish resilience. The intrusion kill chain provides a structure to analyze intrusions, identify indicators, and drive defensive courses of action. This model prioritizes investment for capability gaps, and serves as a framework to measure the effectiveness of the defenders’ actions. When defenders consider the threat component of risk to build resilience against APTs, they can turn the persistence of these actors into a liability, decreasing the adversary’s likelihood of success with each intrusion attempt.

Evolving Towards a Defensive Posture

If your organization does not already have visibility with proactive monitoring built into your environment, this may seem like a major challenge. Implementing an Intelligence-driven Network Defense with a Cyber Kill Chain should be based on a prototype-then-iterate approach, evolving in capability and sophistication. Start with a basic framework that you can comfortably build and operate, then make progress from there.

Cyber Kill Chain

Perform a Security Health Check with a focus on the organization’s web presence and external perimeter to see what information it could give an attacker – or leverage a 3rd party professional. Implement layered security to decrease the possibility that threats will slip through unnoticed. Create a policy for dealing with malware events. Educate staff about what to do with unexpected, suspicious emails and attachments.

With each step taken, you’ll get more information about your environment. And the more information you have, the more likely you will be able to identify anomalous behavior.

Next Steps

In Defensible Security Posture – Part 2 we discuss a case study that leverages the Defensible Actions Matrix and provides some recommended APT-focused best-practices.

The Defensible Security Posture series using an Intelligence-driven Network Defense will be built upon in future blogs. In the APT Operational Maturity and APT Intelligent Operations blogs we will discuss the need for a continuously evolving next-generation SIEM, risk management processes, and network behavior anomaly detection that enable organizations to take security operations and situational awareness to the next level, depending upon various factors including threat/risk profile.

The defensible architecture foundation uses Adaptive Zone Defense to segment critical assets from general-purpose infrastructure to enable containment, and includes Application Architecture Taxonomy, which discusses the analysis, placement, policy, and controls for assets based upon classification and risk. There will also be a blog that takes a deeper dive into Risk Management Practices.

Thanks for your interest!

Nige the Security Guy.