Architecture Case Study – Part 1

In the Security Architecture Series of blogs we have shared all of the steps involved, from requirements gathering and baselining, through product and solution selection, to realizing the architecture. This blog presents an Architecture Case Study that applies those principles and recommendations as a practical example. The illustration provides a simplified conceptual view of the program use case.

Defense in Depth Part 1 (this blog) takes the reader from Architecture development through to the Technical Recommendation; Part 2 takes the reader from Design to Deployment strategy, with Implementation and Migration.

Program Overview

The overall goal of the project was to standardize across the organization and all of its 20+ business units. The business units are primarily autonomous, with different types of technology and infrastructure, and at varying degrees of maturity and security. The status quo presented a series of risks to the organization as well as to each of the business units.

The cost/benefit case was multifaceted, not just in terms of standardization but also in total cost of ownership (TCO) and Return on Security Investment (ROSI), purchasing technology at a volume discount while gaining increased visibility and support from the vendor(s). However, the primary goal and benefit was to establish and foster a spirit of collaboration, sharing, and cross-pollination, working together towards a common vision.

Architecture Case Study

The overall high-level approach is defined as follows:

  • Develop Architecture
  • Requirements
    • Specify Functional Requirements (RFI)
    • Request Information from the vendor community (Distribute RFI)
    • Review RFI responses
    • Select vendors for product/solution evaluation
  • Vendor/Product Selection
    • Conduct bake-off testing with business unit participation
    • Review evaluation scorecard results
    • Conduct pilot of highest ranked solution
    • Review pilot results
    • Technical Recommendation
  • Develop Design
  • Develop Implementation Program
    • 4 Phases
    • Alpha Test
    • Beta Test
  • Deploy/Execute
    • 9 Step Program

Baseline Network Standard Architecture

I worked on the project as a consultant in the role of Program Technical Architect, as part of the overall Architecture Governance and Steering Committee. My role was both to guide the direction and act as technical lead, and to perform much of the detailed work to develop the actual deliverables based on collaboration and interaction.

Network Security Working Group (WG)

The first step was to form a Network Security Working Group (WG) that included stakeholders from the various business units to contribute at two levels, as follows:

  • Level 1 – A small representative sample of core members who were involved in the brainstorming sessions to represent their business unit and contribute input on both the architecture as well as unique requirements
  • Level 2 – A stakeholder from every business unit who was involved in monthly or quarterly (as appropriate) review and approval of the emerging work product and progress to enable consensus and buy-in.


Collaboration was key to the success of the project. We wanted to involve stakeholders in every stage of the process and to ensure that their contribution was captured and recorded. Brainstorming sessions were used extensively at various locations, with preparation to seed and stimulate the discussion, a facilitator to guide it, and scribes to record and document the output.

Architecture Draft Review

A series of review cycles was used with a progressively broader audience to ensure that the architecture aligned with both the current and future strategy and needs of the business units. The architecture document contains the following sections:

  • Architectural Principles
  • Network Models
  • Physical Layer Design
  • Supported Protocols
  • Network Performance Architecture
  • Network Security Architecture
    • Areas
    • Perimeters
    • Zones
    • Controls
    • Management
  • Network Management Architecture
  • Enabling Services
  • Appendix
    • Profiling BU Network Traffic
    • Modeling Steps
    • Example of Modeling a BU Network

The finalization and ratification of the Network Baseline Standard Architecture was a major accomplishment for the organization: not only did it lay the groundwork for the success of this specific program, it also provided the framework for future projects across initiatives such as Wireless and Evolving Security.

Requirements Specification (RFI)

The RFI – Network Security Functional Requirements document was developed next by the Network Security Working Group. The team worked closely together to identify the functional requirements and assign a relative priority of High, Medium, or Low.

RFI Evaluation Criteria

Once the RFI was completed and reviewed, the Network Security Working Group convened a meeting to establish the RFI response evaluation criteria and the scorecard to be used for the analysis of responses from bidding vendors. The functional requirements originally identified as High were further examined, and 19 requirements were selected and rated as MUST by the group. A MUST designation implied that the associated security device would be eliminated from further consideration if it did not comply.

RFI Requirements Sample

All functional requirements were then weighted by priority, with a maximum possible score of 10, 5, or 3 for High, Medium, and Low respectively.
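As a sketch of how such a weighting might be applied (Python, with a hypothetical 0.0 to 1.0 compliance scale; the High=10 / Medium=5 / Low=3 mapping is inferred from the priorities above):

```python
# Maximum possible score per requirement, keyed by its assigned priority.
# (Assumed mapping: High=10, Medium=5, Low=3, per the weighting above.)
MAX_SCORE = {"High": 10, "Medium": 5, "Low": 3}

def score_requirement(priority, compliance):
    """Scale a vendor's compliance (0.0-1.0) by the requirement's weight."""
    return MAX_SCORE[priority] * compliance

# Example: a vendor fully meets a High requirement, partially meets a Low one.
total = score_requirement("High", 1.0) + score_requirement("Low", 0.5)
print(total)  # 11.5
```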

Evaluators and Decision-makers

To ensure that the RFI responses were analyzed in an independent and objective manner, the Network Security Working Group assigned an Evaluation Team composed primarily of consultants. The Evaluation Team was solely responsible for conducting the RFI response analysis to select vendors and solutions, and also performed the network security equipment testing. However, the team did not participate in any decision-making and acted only as advisors. The Decision-maker Team was comprised of members of the extended Network Security Working Group (Level 2).

RFI Evaluation Scorecard

The Evaluation Team developed an Evaluation Scorecard that took all response format files from bidding vendors and consolidated them into the Consolidated Vendor Response Form file. This consolidated file contained macros to process the entries from all of the bidding vendors and to create two worksheets, as follows:

  • Product Stack Ranking – summary of scores based upon device category
  • Vendor Stack Ranking – summary of scores by vendor
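The two stack-ranking worksheets are essentially group-by aggregations over the consolidated responses. A minimal sketch in Python, using made-up vendors and scores rather than the actual macro logic:

```python
from collections import defaultdict

# Hypothetical consolidated entries: (vendor, device_category, score)
responses = [
    ("VendorA", "Firewall", 82), ("VendorA", "IPS", 74),
    ("VendorB", "Firewall", 78), ("VendorB", "IPS", 88),
]

def stack_rank(entries, key_index):
    """Sum scores by vendor (key_index=0) or by device category (key_index=1)."""
    totals = defaultdict(int)
    for entry in entries:
        totals[entry[key_index]] += entry[2]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(stack_rank(responses, 0))  # Vendor Stack Ranking
print(stack_rank(responses, 1))  # Product Stack Ranking
```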

Vendor Evaluation

RFI Evaluation Methodology

The Evaluation Team adopted an objective method of evaluation focused on the functional requirements as defined by the Network Security Working Group and communicated to the vendors in the Security RFI. This led to the following procedure:

  1. Evaluate ‘Best of Breed’ responses to derive the top 3 vendors in each of the following four categories – Firewall, VPN, IPS, and Management.
  2. If possible, select the 4 vendors that appear most frequently across these rankings for inclusion in the network security equipment testing.
  3. Evaluate ‘Integrated Portfolio’ responses, if any, from all remaining vendors to derive the top 3 portfolio vendors.
  4. Select the best vendor from this ranking for inclusion in the network security equipment testing.
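The selection steps above can be sketched as follows (Python, with hypothetical category rankings; "most frequently appearing" is interpreted as the vendors occurring most often across the four top-3 lists):

```python
from collections import Counter

# Hypothetical top-3 'Best of Breed' rankings per category (step 1).
top3 = {
    "Firewall":   ["VendorA", "VendorB", "VendorC"],
    "VPN":        ["VendorB", "VendorA", "VendorD"],
    "IPS":        ["VendorC", "VendorB", "VendorE"],
    "Management": ["VendorA", "VendorC", "VendorB"],
}

# Step 2: pick the 4 vendors appearing most often across the category rankings.
appearances = Counter(v for ranking in top3.values() for v in ranking)
selected = [vendor for vendor, _ in appearances.most_common(4)]
print(selected)
```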

For quality control, a Conformance Check was also conducted to ensure that all ‘Yes’ or ‘Partial’ responses had an associated supporting Response Reference Section and/or comment to back up the vendor’s statement.
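A conformance check of this kind is easy to automate. A minimal sketch, assuming hypothetical response rows with `req`, `answer`, and `reference` fields:

```python
# Hypothetical response rows: each 'Yes'/'Partial' answer must cite evidence.
rows = [
    {"req": "FR-01", "answer": "Yes",     "reference": "Sec 3.2"},
    {"req": "FR-02", "answer": "Partial", "reference": ""},
    {"req": "FR-03", "answer": "No",      "reference": ""},
]

def conformance_failures(rows):
    """Flag 'Yes'/'Partial' responses lacking a supporting reference or comment."""
    return [r["req"] for r in rows
            if r["answer"] in ("Yes", "Partial") and not r["reference"].strip()]

print(conformance_failures(rows))  # ['FR-02']
```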

Vendor / Product Solution Selection

Of the fifteen network security equipment manufacturers that responded to the RFI, the five vendors that best met the functional and operational requirements were invited to participate in the bake-off. Each vendor brought and installed equipment in the lab to allow members of the working group to conduct technical evaluations.

Bake-Off Testing Methodology

The primary goal of the bake-off testing was to further measure the fit of the proposed solutions, with a focus on holistic integration against the functional requirements documented in the Network Security RFI. Interestingly, most vendors acknowledged that this was the first time they had deployed and integrated their solutions holistically and operated them in a real-world scenario; most had previously participated only in one-off point solution evaluations.

The bake-off testing objectives sought to identify, test, and select one or more manufacturers of network security solutions (Firewalls, IDS/IPS, VPN, and Management/Monitoring) to proactively meet the following goals:

  • Secure core business unit network infrastructure devices
  • Add network security components to protect and segregate critical assets
  • Integrate security components to current network management systems (NMS)
  • Develop a Network Security Management System (NSMS).

The Evaluation Team designed and deployed a deliberately insecure Network Security Evaluation Lab that simulated a typical business unit network and provided both the network areas and the security zones that need to be protected, along with sample assets. The testing viewed potential threats and attacks from External (outside the border perimeter) as well as Internal/Insider (business unit networks) sources towards Data Center Zone and Management Zone(s) targets.

Test group scenarios were developed that made use of various typical threat and attack categories (e.g., signature based, anomaly based, DoS). The controlled attacks were initiated by a penetration tester from both external and internal sources. In addition, a traffic generation load/stress testing tool was utilized to exercise functionality and simulate normal traffic (client connections and sessions).

Network Security Evaluation

The above diagram provides a simplified illustration of the test groups and targets across the Functional Requirement categories, as follows:

  • Detection
  • Response
  • Alert / Logging
  • Correlation
  • Reporting
  • Management

These are the functional requirement categories that are documented in the Network Security RFI and the reference codes refer to the specific line item requirements.

The goal of the bake-off testing was to initiate multiple discrete sets of tests as ‘triggers’ that set off a sequence of events flowing through the functional requirement categories and elements in the diagram. The Evaluation Team asked each manufacturer to demonstrate if/how the proposed security solution detected, responded, alerted, logged, correlated (where appropriate), and reported as a consequence of these sequences, and how any generated events were managed. The testing also evaluated the utility of the solution as well as factors such as integration, management, and monitoring. Decoys and scans were used to generate noise while stealthy attacks were employed.

The Evaluation Team was restricted to performing the testing and to providing an objective report to the Network Security WG attendees, in addition to an independent and objective report from the penetration tester and traffic generation tester. The attendees made use of an Evaluation Scorecard and each stakeholder contributed a score.

Evaluation Scorecards

The Evaluation Team developed a set of scorecards to be used by Network Security WG and business unit stakeholder attendees at the bake-off sessions. Two scorecards were used across two days, as follows:

  • Objective Scorecard – Validate Compliance to Requirements
  • Business Unit Scorecard – Validate Fit to Business Unit

For the Objective Scorecard, the Evaluation Team reviewed both the tests and the vendor demo, together with Q&A, to determine whether the functional requirements were met by the implemented solution as cited by the vendor in their RFI response. The team referenced the Vendor Summary and RFI Response sheet.

For the Business Unit Scorecard, the attendees individually assessed how well the solution satisfied the requirements and fit the needs of their business unit, and determined the total category score per section.

RFI Scorecard

Technical Recommendation

The technical evaluation consisted of more than 100 evaluation criteria and over 50 repeatable tests conducted on four network security components: Firewall, VPN, IDS/IPS, and Management/Monitoring. The RFI was designed to allow the option of selecting either the best suite of tools from a single manufacturer or the ‘best of breed’ (the best components from multiple vendors).

The Evaluation Team’s technical scores were tallied and submitted as a Technical Recommendation to complement the financial total cost of ownership analysis.  The Technical Recommendation and Financial Analysis were provided as a summary of the findings and recommendations as an outcome of this evaluation process.

Implementing network security equipment on a business unit network is a large and challenging proposition. The group recommended Alpha and Beta deployments to validate technical elements and to further understand the integration complexity of the proposed solution. In addition, the Alpha and Beta process helped to develop a deployment methodology that allows for a deliberate approach to addressing important business unit and organization-wide security concerns.

Next Steps

Architecture Case Study – Part 2 will continue this series, taking the reader from the Technical Recommendation into the Baseline Network Standard Design as well as the deployment strategy, with the Implementation and Migration process.

Think You’re Secure? Think Again.
Security Architecture Primer
Security Architecture Baseline
Risk-Aware Security Architecture
Develop Security Architecture
Product and Solution Selection
Security Architecture Implementation
Adaptive Security Lifecycle

Thanks for your interest!

Nige the Security Guy.


Threat and Vulnerability Management

The best way to ensure a fighting chance of discovering and defeating information exploitation and theft is to take a disciplined, programmatic approach to discovering and mitigating threats and vulnerabilities. Threat and Vulnerability Management is the cyclical practice of identifying, assessing, classifying, remediating, and mitigating security weaknesses, combined with root cause analysis to address potential flaws in policy, process, and standards, such as configuration standards.

Vulnerability Management

Vulnerability assessment and management is an essential part of managing overall IT risk, because:

  • Persistent Threats
    • Attacks exploiting security vulnerabilities for financial gain and criminal agendas continue to dominate headlines.
  • Regulation
    • Many government and industry regulations, such as HIPAA-HITECH, PCI DSS V2 and Sarbanes-Oxley (SOX), mandate rigorous vulnerability management practices.
  • Risk Management
    • Mature organizations treat it as a key risk management component.
    • Organizations that follow mature IT security principles understand the importance of risk management.

Properly planned and implemented threat and vulnerability management programs represent a key element in an organization’s information security program, providing an approach to risk and threat mitigation that is proactive and business-aligned, not just reactive and technology-focused. Threat and vulnerability management programs include the following four major elements:

  • Baseline
  • Assess
  • Remediate
  • Lifecycle Management

Each of these elements individually benefits the organization in many ways, but together they form interlocking parts of an integrated, effective threat and vulnerability management program.



The threat and vulnerability management life cycle begins with the definition of policies, standards and specifications that define access restrictions, and includes configuration settings that harden the IT infrastructure against external or internal threats. Security configuration policies and specifications should be based on industry-recognized best practices such as the Center for Internet Security (CIS) benchmarks or National Institute of Standards and Technology (NIST) recommendations.

The development of security configuration policies and specifications is an iterative process that starts with industry standards and best practices as a desired state. However, many organizations may also need to define exceptions in order to accommodate specific applications or administrative processes within their environment and track them for resolution.
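The audit loop implied above can be sketched in a few lines. This is a hedged illustration, using a hypothetical CIS-style baseline and a tracked exception, not a real benchmark:

```python
# Hypothetical hardening baseline (CIS/NIST-style) and per-host exceptions.
baseline = {"ssh_root_login": "disabled", "telnet": "disabled", "ntp": "enabled"}
exceptions = {("legacy-app-01", "telnet"): "Ticket OPS-114: vendor app, review Q3"}

def audit(host, observed):
    """Return (setting, expected, actual) findings not covered by an exception."""
    findings = []
    for setting, expected in baseline.items():
        actual = observed.get(setting)
        if actual != expected and (host, setting) not in exceptions:
            findings.append((setting, expected, actual))
    return findings

# The telnet deviation is excused by a tracked exception; root login is not.
print(audit("legacy-app-01",
            {"ssh_root_login": "enabled", "telnet": "enabled", "ntp": "enabled"}))
```

Tracking exceptions as first-class data, rather than silently skipping checks, is what keeps them visible for later resolution.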

Closed Loop Policy

Organizations should also consider a mapping of organization-specific configuration policies and operational processes to industry-recognized control frameworks and best practices. Organizations that take the extra step of mapping the policies that are implemented by vulnerability management to control standards and best practices can strengthen their posture with auditors and reduce the cost of compliance reporting through automation. The mapping enables compliance reporting from configuration assessments.

Asset Inventory

To protect information, it is essential to know where it resides. The asset inventory must include the physical and logical elements of the information infrastructure. It should include the location, business processes, data classification, and identified threats and risks for each element.

This inventory should also include the key criteria of the information that needs to be protected, such as the type of information being inventoried, classification for the information and any other critical data points the organization has identified. From this baseline inventory pertinent applications and systems can be identified to iteratively develop and update an Application Security Profile Catalog. It is important to begin to understand application roles and relationships (data flows, interfaces) for threat and risk analysis since a set of applications may provide a service or business function. This will be discussed in more detail in a future blog.
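As an illustration, an inventory record capturing the criteria above might be modeled like this (Python, with hypothetical fields and sample data):

```python
from dataclasses import dataclass, field

# Hypothetical inventory record: location, business process, classification,
# and identified threats, per the key criteria described above.
@dataclass
class Asset:
    name: str
    location: str
    business_process: str
    data_classification: str      # e.g., Public / Internal / Confidential
    identified_threats: list = field(default_factory=list)

inventory = [
    Asset("erp-db-01", "DC-East", "Order Processing", "Confidential",
          ["SQL injection", "credential theft"]),
]

# Queries over the inventory drive later risk analysis, e.g. all
# Confidential assets:
confidential = [a.name for a in inventory if a.data_classification == "Confidential"]
print(confidential)  # ['erp-db-01']
```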

Service Dependency Mapping

Classification of assets according to the business processes that they support is a crucial element of the risk assessment that is used to prioritize remediation activities. Assets should be classified based on the applications they support, the data that is stored and their role in delivering crucial business services. The resource mapping and configuration management initiatives within the IT operations areas can begin to provide the IT resource and business process linkage that is needed for security risk assessment.

IT operational areas need service dependency maps for change impact analysis, to evaluate the business impact of an outage, and to implement and manage SLAs with business context. IT operations owns and maintains the asset groupings and asset repositories needed to support service dependency mappings.

Asset Discovery and Catalog

This information is typically stored in an enterprise directory, asset management system or a CMDB. Further details will be provided in the forthcoming Application Architecture Taxonomy blog.

The security resource needs the same information in order to include business context in the risk assessment of vulnerabilities, to prioritize security incidents, to publish security metrics with business context and to publish compliance reports that are focused on the assets that are in scope for specific regulations.

Security resources should engage IT application operations areas to determine the sources for IT service dependency maps and should configure security assessment functions to dynamically access or import this data for risk analysis, security monitoring and compliance reporting functions. The security team should also participate in CMDB projects as a stakeholder and supporter.

Configuration Standards by Device Role

A vulnerability management program focusing only on vulnerability assessment is weak regarding a crucial vulnerability management program objective — making the environment more secure. Although vulnerability assessment excels at discovering security weaknesses, its reporting isn’t optimized for the mitigation work performed by operations areas. Chasing individual vulnerabilities often does not eliminate the root cause of the problem. A large percentage of vulnerabilities results from underlying configuration issues (missing patches, ports that shouldn’t be open or services that shouldn’t be running).

Infrastructure Integrity

The security resource should work with IT operations to define security configuration standards, and should use the security configuration assessment capability within their incumbent vulnerability assessment tool (if the vulnerability assessment tool provides it) to drive implementation of security configuration standards in desktop, network and server provisioning processes.

Threat and Vulnerability Analysis

To perform threat analysis effectively, it is important to employ a consistent methodology that examines the business and technical threats to an application or service. Adversaries use a combination of skills and techniques to exploit and compromise a business process or application, so it is necessary to have in place a similarly multipronged approach to defend against them that decomposes and analyzes systems.

Vulnerability Assessment

The next step is to assess the environment for known vulnerabilities, and to assess IT components using the security configuration policies (by device role) that have been defined for the environment. This is accomplished through scheduled vulnerability and configuration assessments of the environment.

Network-based vulnerability assessment (VA) has been the primary method employed to baseline networks, servers and hosts. The primary strength of VA is breadth of coverage. Thorough and accurate vulnerability assessments can be accomplished for managed systems via credentialed access. Unmanaged systems can be discovered and a basic assessment can be completed. The ability to evaluate databases and Web applications for security weaknesses is crucial, considering the rise of attacks that target these components.

Database scanners check database configuration and properties to verify whether they comply with database security best practices.

Web application scanners test an application’s logic for “abuse” cases that can break or exploit the application. Additional tools can be leveraged to perform more in-depth testing and analysis.

All three scanning technologies (network, application and database) assess a different class of security weaknesses, and most organizations need to implement all three.

Risk Assessment

Larger issues should be expressed in the language of risk (e.g., ISO 27005), specifically expressing impact in terms of business impact. The business case for any remedial action should incorporate considerations relating to the reduction of risk and compliance with policy. This forms the basis of the action to be agreed between the relevant line of business and the security team.

Risk Analysis

“Fixing” the issue may involve acceptance of the risk, shifting of the risk to another party or reducing the risk by applying remedial action, which could be anything from a configuration change to implementing a new infrastructure (e.g., data loss prevention, firewalls, host intrusion prevention software).

Elimination of the root cause of security weaknesses may require changes to user administration and system provisioning processes. Many processes and often several teams may come into play (e.g., configuration management, change management, patch management). Monitoring and incident management processes are also required to maintain the environment.

For more details on threat and risk assessment best-practices see the blogs: Risk-Aware Security Architecture as well as Risk Assessment and Roadmap.

Vulnerability Enumeration

CVE – Common Vulnerabilities and Exposures

Common Vulnerabilities and Exposures (CVE®) is a dictionary of common names (i.e., CVE Identifiers) for publicly known information security vulnerabilities. CVE’s common identifiers make it easier to share data across separate network security databases and tools, and provide a baseline for evaluating the coverage of an organization’s security tools. If a report from one of your security tools incorporates CVE Identifiers, you may then quickly and accurately access fix information in one or more separate CVE-compatible databases to remediate the problem.
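Because CVE Identifiers follow a fixed pattern, they can be pulled out of tool reports with a simple regular expression; the sketch below allows sequence numbers of four or more digits (the variable-length form introduced by the 2014 CVE-ID syntax change):

```python
import re

# CVE identifiers: "CVE-" + 4-digit year + "-" + sequence number (4+ digits).
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

report = "Hosts affected by CVE-2014-0160 (Heartbleed) and CVE-2017-5754."
print(CVE_RE.findall(report))  # ['CVE-2014-0160', 'CVE-2017-5754']
```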

CVSS – Common Vulnerability Scoring System

The Common Vulnerability Scoring System (CVSS) provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. Its quantitative model ensures repeatable accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores. Thus, CVSS is well suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability impact scores.
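As an example of how CVSS scores are typically consumed, the sketch below maps a base score to the qualitative severity bands commonly used with CVSS v2 (Low below 4.0, Medium below 7.0, High from 7.0 up):

```python
def cvss_v2_severity(score):
    """Map a CVSS v2 base score (0.0-10.0) to the commonly used
    Low/Medium/High qualitative severity bands."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    return "High"

print(cvss_v2_severity(9.9))  # High
```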

CWE – Common Weakness Enumeration

The Common Weakness Enumeration Specification (CWE) provides a common language of discourse for discussing, finding and dealing with the causes of software security vulnerabilities as they are found in code, design, or system architecture. Each individual CWE represents a single vulnerability type. CWEs are used as a classification mechanism that differentiates CVEs by the type of vulnerability they represent. For more details see: Common Weakness Enumeration.

Remediation Planning


Vulnerability and security configuration assessments typically generate very long remediation work lists, and this remediation work needs to be prioritized. When organizations initially implement vulnerability assessment and security configuration baselines, they typically discover that a large number of systems contain multiple vulnerabilities and security configuration errors. There is typically more mitigation work to do than the resources available to accomplish it.


The organization should implement a process to prioritize the mitigation of vulnerabilities discovered through vulnerability assessments and security configuration audits, and to prioritize the responses to security events. The prioritization should be based on an assessment of risk to the business. Four variables should be evaluated when prioritizing remediation and mitigation activities:

  • Exploit Impact  – the nature of the vulnerability and the level of access achieved.
  • Exploit Likelihood – the likelihood that the vulnerability will be exploited.
  • Mitigating Controls – the ability to shield the vulnerable asset from the exploit.
  • Asset Criticality – the business use of the application or data that is associated with the vulnerable infrastructure or application.
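One illustrative way to combine the four variables above into a single priority score for the work list; the 1-to-5 scales and the formula are assumptions for the sketch, not a standard:

```python
# Hypothetical weighting: each variable is rated 1 (low) to 5 (high), and
# stronger mitigating controls *reduce* the resulting priority.
def remediation_priority(exploit_impact, exploit_likelihood,
                         mitigating_controls, asset_criticality):
    raw = exploit_impact * exploit_likelihood * asset_criticality
    return raw / mitigating_controls  # shielding lowers urgency

findings = {
    "unpatched-edge-vpn": remediation_priority(5, 4, 1, 5),  # 100.0
    "intranet-printer":   remediation_priority(2, 3, 4, 1),  # 1.5
}
worklist = sorted(findings, key=findings.get, reverse=True)
print(worklist)  # ['unpatched-edge-vpn', 'intranet-printer']
```

Whatever formula is chosen, the point is that the ordering is driven by business risk, not by raw vulnerability counts.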


Security is improved only when mitigation activity is executed as a result of the baseline and monitoring functions. Remediation is facilitated through cross-organizational processes and workflow (trouble tickets). Although the vulnerability management process is security-focused, the majority of mitigation activities are carried out by the organization’s IT operations areas as part of the configuration and change management processes.

Separation of duties dictates that security teams should be responsible for policy development and assessment of the environment, but should not be responsible for resolving the vulnerable or noncompliant conditions. Information sharing between security and operations teams is crucial to properly use baseline and monitoring information to drive remediation activities.

For more details on remediation planning and execution see complementary blog: Vulnerability Assessment Remediation

Vulnerability Lifecycle Management

Vulnerability management uses the input from the threat and vulnerability analysis to mitigate the risk posed by the identified threats and vulnerabilities. A vulnerability management program is a continuous lifecycle, as follows:

Vulnerability Lifecycle

Monitor Baseline

While a threat and vulnerability management program can make an IT environment less susceptible to an attack, assessment and mitigation cannot completely protect the environment. It is not possible to immediately patch every system or eliminate every application weakness. Even if this were possible, users would still do things that allowed malicious code on systems.

In addition, zero-day attacks can occur without warning. Since perfect defenses are not practical or achievable, organizations should augment vulnerability management and shielding with more-effective monitoring. Targeted attacks take time to execute, and the longer a breach goes unnoticed, the greater the damage. Better monitoring is needed to detect targeted attacks in the early stages, before the final goals of the attack are achieved. Use security information and event management (SIEM) technologies or services to monitor, correlate and analyze activity across a wide range of systems and applications for conditions that might be early indicators of a security breach.

Root Cause Analysis

It is important to analyze security and vulnerability assessments in order to determine the root cause. In many cases, the root cause of a set of vulnerabilities lies within the provisioning, administration and maintenance processes of IT operations or within their development or the procurement processes of applications. Elimination of the root cause of security weaknesses may require changes to user administration and system provisioning processes.


In 2012, less than half of all vulnerabilities were easily exploitable, down from approximately 95 percent in 2000. In addition, fewer high severity flaws were found. The number of vulnerabilities with a score on the Common Vulnerability Scoring System (CVSS) of 7.0 or higher dropped to 34 percent of reported issues in 2012, down from a high of 51 percent in 2008.

Unfortunately, there are more than enough highly critical flaws to go around. In 2012, more than 9 percent of the publicly reported vulnerabilities had both a CVSS score of 9.9 and a low attack complexity, according to NSS Labs. Vulnerabilities disclosed in 2012 affected over 2,600 products from 1,330 vendors. New vendors who had not had a vulnerability disclosure accounted for 30% of the total vulnerabilities disclosed in 2012. While recurring vendors may still represent the bulk of vulnerabilities reported, research shows that the vulnerability and threat landscape continues to be highly dynamic.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices – Part 2

Security Program Best-Practices – Part 1
Part 3
Part 4
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices, drawn from a broad sample of assessments. As a refresher, the typical gaps, deficiencies, and needed improvements are summarized in the Opportunity Matrix, which is used as a planning tool.

Opportunity Matrix Summary

In Part 1 of the series we covered an overview as well as Gap 01 – Identify Requirements and Gap 02 – Develop Security Governance Program. In this blog we will discuss Gaps 03 through 05, per below.

  • GAP 01 – Identify Requirements: Security Policy, Regulation and, Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 3: Network Security Organization

Business Problem

Over the past few years, as security organizations have had to grapple with an increasingly complex threat landscape and a much more visible role in the organization, the expectations of the business have also significantly increased. The business expects security to take on these additional responsibilities while keeping headcount almost static. As a result, there is often a disconnect between what a security organization can realistically deliver and what the business perceives it can deliver. Security organizations today must be agile and high-performing, capable of addressing a multitude of responsibilities and needs simultaneously.

Security Alignment

According to Forrester, maintaining existing systems and applications consumes 73 percent of the IT budget, leaving only 27 percent available for new project investment. This finding is corroborated by a study from AT Kearney, which reports that 70 percent of business executives believe that technology innovation is critical, yet 80 percent of actual IT expenditures are spent on infrastructure and core operations. Forty-five percent of business executives strongly agree that IT groups focus on day-to-day IT requirements at the expense of strategic goals. Add to this burden the voluminous security, regulatory, and legal issues that enterprises now face—and IT is stretched to the limit.

Recommended Solution

When it comes to data breaches, hackers and organized crime garner most of the headlines, but most data breaches are caused by human error and system glitches: application failures, inadvertent data dumps, logic errors in data transfer, and more. Organizations with a strong security posture and incident response plan experienced breach costs 20 percent lower than others, so the importance of a well-coordinated, holistic approach is clear.

Many organizations have resources wearing too many hats, who simultaneously govern, manage, engineer, operate, and support the network security infrastructure. This lack of checks and balances increases the risk of human error, in that the same administrator can review, approve, implement, test, and monitor a policy. Security governance, management, and operations have very different functions, and clarity among them is fundamental to the performance of each.

Security Organization

A key part of security governance is ensuring that business and security processes have sufficient internal segregation of duties (SoD) to avoid conflicts of interest. Organizations should carefully develop the charter and membership of the security governance team so that it does not become mired in operational issues but provides the necessary direction and oversight. The governance team should have sufficient separation from security management and operations that conflicts of interest are avoided.

When companies perceive GRC as one team’s responsibility, it undermines the real value that a coordinated program can deliver; risk and compliance professionals can’t possibly identify and measure all risks or enforce all policies across the organization. They need to rely on their colleagues for support, which means enterprises must lay out clear expectations for every user. Conversely, enterprises must explain the benefits users should expect based on their active involvement.

GRC Overview

Organizations should adopt a process-driven approach to security governance, management and operations that includes formally defined process flows, responsibility charts and decision accountabilities. At a high-level the organization should support, as follows:

  • Strategy: Develop GRC readiness by assessing maturity against peers through key use cases, identify gaps and build roadmaps; rationalize and prioritize GRC initiatives by tightly integrating information and infrastructure imperatives with business obligations.
  • Design: Design GRC programs and governance models and align with policies; quantify and classify exposures and weaknesses and compare to well-defined metrics, develop treatment options to manage risk and optimize rewards.
  • Implement: Implement processes, policies, controls and technology solutions that monitor operations and key metrics. Measure exposures in people, processes and technology controls in the context of IT infrastructure interdependencies.
  • Operate: Treat exposures by continuously enforcing policies; detect violations and measure gains against desired states; continuously improve processes to maximize synergies and move up the maturity curve.

Best-practices set expectations that all employees in the organization will play a part in managing risk and meeting compliance obligations.

Security Roles

All systems have critical processes that, if subverted through human error or malicious intent, will significantly impact the objectives they enable. No one person should have absolute control over a critical network security process, asset or, system. Instead, processes should be segregated into discrete tasks that can then be assigned to parties who do not have a conflict of interest with safeguarding the sub-process. Through segregation of duties, an engineer cannot readily disrupt production by mistake or intent.
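As a concrete illustration of segregation of duties, the check below flags changes where the same person holds two roles that should be separated. The workflow roles and names are assumptions for the sketch, not a prescribed model:

```python
# Sketch: enforcing segregation of duties (SoD) on a change workflow.
# The role fields and sample names are illustrative assumptions.

def sod_violations(change):
    """Return the role pairs performed by the same person."""
    conflicting = [("requested_by", "approved_by"),
                   ("approved_by", "implemented_by"),
                   ("implemented_by", "verified_by")]
    violations = []
    for a, b in conflicting:
        if change.get(a) and change[a] == change.get(b):
            violations.append((a, b))
    return violations

change = {"requested_by": "alice", "approved_by": "bob",
          "implemented_by": "bob", "verified_by": "carol"}
print(sod_violations(change))  # [('approved_by', 'implemented_by')]
```

A change that trips the check would be routed back for reassignment before implementation, so no single engineer can approve and deploy their own policy.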

Gap 4: Security Collaboration WG

Business Problem

In a rapidly developing organization it is easy to get out of touch, and for groups to develop at different paces in different directions, working in silos and generating fragmented security. While hybrid, distributed security organizations with dotted-line reporting relationships are a best-practice, it is also key to collaborate closely towards a common goal: integrate the security architecture, seek compliance with policy and regulation, and automate processes and systems.


Security governance requires a set of oversight processes to ensure that reasonable and appropriate actions are taken to protect the organization’s information resources in the most effective and efficient manner, aligned to business goals. The role of security governance within the cross-organizational and cross-functional Collaboration Working Group (WG) is to work closely with all stakeholders, including senior executives, line-of-business managers, the IT organization and others, to establish the following:

  • Establish Effective Governance Framework
  • Develop Meaningful Risk Assessments
  • Focus on Enterprise Risk Management
  • Establish Measurable Controls
    • Map to all relevant regulations and standards

The Security Collaboration WG is a critical component in setting the overall direction of the security program implemented by the CISO, taking into account the strategic needs of the business, the risk appetite of the organization, other non-IT and information security issues (such as physical and personnel security), and broader IT and information initiatives beyond the security realm.

Practical Security

The responsibilities of a Security Collaboration WG may include:

  • Acting as a steering committee for significant projects
  • Tracking the progress of remediation on risk items (audit report findings)
  • Reviewing metrics reporting
  • Monitoring operational performance
  • Enabling the CISO to guide security efforts within business units
  • Establishing and maintaining effective lines of accountability, responsibility and authority for protecting information assets
  • Acting as a mediation or arbitration body for reconciling conflicting security requirements

A Security Collaboration WG that connects the various organizational silos and integrates with governance in terms of policy, compliance and, internal audit enables the alignment of controls and measurements with an evolving baseline security standard so that the various parties work together in lock step. There is also a high return on security investment through collaboration and sharing, generating ideas for improvement via cross-pollination, and so on.

Gap 5: Network Security Standards

Business Problem

A new model of assurance has emerged as the foundation for an enterprise information integrity, security, and compliance strategy. This domain is infrastructure integrity enabled by configuration management (assessment and change auditing). Change auditing ensures the integrity of all infrastructures in a network — in essence ensuring that the infrastructure remains in a “desired secure state” throughout the implementation of the changes necessary to keep pace with the dynamic demands of the business.

ITIL Basics

Infrastructure integrity is the foundation or anchor upon which IT infrastructures should be built. When there is no infrastructure integrity, the internal process controls put in place to manage this infrastructure fail. Like a structure built upon sand, when the ground underneath shifts, the building will crack. In essence, without infrastructure integrity, an enterprise’s investment in operations management and information security technologies can be compromised at best and wasted at worst.

Infrastructure integrity results in operational efficiency

Security Hardening

Network security baseline standards are key to translating applicable but often vague regulations and security policy into actionable statements that network security technologies can apply and verifiably enforce, in support of compliance. The standards also allow the security organization to define, review, and approve the ‘technical policy’ so that it is sanctioned and conforms to the risk tolerance of the organization. Finally, standards provide a measurable baseline that can be used to ensure infrastructure integrity and to audit against, so that the security posture is known.

Closed Loop Policy


The stakes are too high for organizations to ignore anchoring their IT infrastructures by maintaining infrastructure integrity. The infrastructure is too complex, too critical to business success, and too vulnerable to attack. For these reasons, IT asset configurations must be standardized and closely controlled, though controlling the infrastructure has challenged IT management and administrators in companies both large and small. Hoping for success is an exercise in futility when grounded in an environment in which the core information assets and the infrastructure lack integrity; if the integrity of the core information assets, infrastructure, and procedures is in question, so too is the overall confidence in the security system. In IDC surveys, over half of IT professionals and managers at large enterprises are only somewhat confident, or not confident, about their companies’ enterprise security systems.

Coming Soon

Security Program Best-Practices – Part 3 will continue this Security Governance Series with the next set of gaps for discussion and helpful advice on key components.


Thanks for your interest!

Nige the Security Guy.

Guest Partner Blogger

Become a Guest or Partner Blogger

The objective for this Blog is to bring together a virtual community of like-minded security practitioners to discuss, debate, and share cyber security observations and practical best-practices from the trenches, towards improving infrastructure security to our mutual benefit.

Join Our Team

NigeSecurityGuy blog is inviting articles and topics from like-minded security contributors, from those just getting started who bring a fresh new perspective through to veteran experts. Get in touch if you want to become a guest contributor and/or partner in the blog, do not want to operate your own blog, or just have a one-off topic, idea, or experience that you want to share. As Margaret Mead once said:

“Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.”

Sharing and Reciprocity

Collaboration and sharing are sophisticated skills that ask people who work together to look beyond personal interests towards outcomes benefiting the whole. They are a great way to address complex challenges, since they can tap communal creativity, unleash true innovation, and earn genuine buy-in.

Collaboration, at the conceptual level, involves:

  • Awareness – We become part of a working entity with a shared purpose
  • Motivation – We drive to gain consensus in problem solving or development
  • Participation – We participate in collaboration and we expect others to participate
  • Mediation – We negotiate and we collaborate together and find a middle point
  • Reciprocity – We share and we expect sharing in return through reciprocity
  • Reflection – We think and we consider alternatives
  • Engagement – We proactively engage rather than wait and see

Together we can build a safer and more secure environment. A car with good brakes and integrated systems can drive a lot faster; likewise, a business with good security and integrated systems can develop with agility.

“Training often gives people solutions to problems already solved. Collaboration addresses challenges no one has overcome before.”
― Marcia Conner

Rules and Submission

If you are interested in submitting an article or becoming a Blog Partner and regular contributor, simply complete the contact form below and get in touch. All articles will be reviewed before publication to ensure they are consistent with the mission and goals of this blog, as stated above. If there are any concerns and/or proposed corrections, the author will be contacted and offered feedback before publication. Full credit and kudos will go to the author, who will be recognized as a Guest Blogger or Partner Blogger depending upon regularity.

This post will be a living document and will be updated if and when clarification or additional information is needed.

Thanks for your interest!

Nige the Security Guy.


Security Program Best-Practices – Part 1


This ‘Back to Basics’ blog series highlights foundational best-practices and recommendations, as part of the capability maturity model, to enable evolution towards an ever-improving, more comprehensive and proactive cyber security governance and management program. These recommendations are drawn from a broad sample of assessments in which these were the typical gaps, deficiencies, or needed improvements.

Security Governance Series

The Opportunity Matrix

The diagram below presents an Opportunity Matrix: a series of articulated next-step opportunities for organizations to build an adaptive, holistic, integrated, and proactive Security Program that enables iterative process improvement towards a common, sanctioned goal. The opportunities are graphically illustrated across three dimensions:

  • Impact and Benefit
  • Initiative and Effort
  • Relative Cost Estimate

The Opportunity Matrix reflects the above parameters and can be used both as a planning aid within the Adaptive Security Lifecycle methodology and as a tool to seek support, budget, and resources from executive management.

Security Opportunity Matrix

These opportunities are ‘low-hanging fruit’ best-practice initiatives, listed for brevity as ‘gaps’, that organizations can progressively develop and evolve in a capability maturity model based on available budget, resources, and priorities.

  • GAP 01 – Identify Requirements: Security Policy, Regulations, and Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 1: Identify Requirements

Business Problem

Information security is a business issue, not just a technology issue. Information should be protected for sound business purposes. Corporate knowledge and data are arguably the most important assets of any organization.

Security Policies provide several benefits. They provide a standard baseline of security policy modules and checks, based on the organization’s information security policies and standards. They establish a solid, scalable basis for an enterprise-wide deployment. Policies heighten the security awareness of company personnel. They also help organizations demonstrate their commitment to protecting their vital information assets.

Security policy is not static. It should be a living document and process, which is enforced, implemented, and updated to reflect the latest changes in the enterprise infrastructure and service requirements. It evolves to guard against perceived threats and changing system architectures. A properly designed and implemented security policy is an absolute requirement for all types of enterprises and has to be owned by one group.

The security policy must clearly identify the enterprise resources at risk and the corresponding threat mitigation methodologies. It should define which users or classes of users have access to which resources. The policy must also define the use of audit trails and verification to help identify and discover violations and the appropriate responses.

Common Compliance Framework

Policies, procedures and standards must be formally documented and reviewed with employees. Without documenting, communicating, enforcing, and raising awareness of corporate standards, security and process improvement efforts will not achieve the intended goals. If employees are not aware of how to properly execute their duties and responsibilities according to these standards, they will be ineffective.

If at all possible, take the time to craft a Common Compliance Framework that includes any targeted standards (such as COBIT, ISO 27002, NIST, etc.), all applicable regulations (such as PCI DSS, HIPAA, SOX, etc.), and the security policy the business needs. All too often organizations assess compliance in silos, which increases complexity and proves costly, and also misses the overlaps and synergies between frameworks. Many groups offer mapping matrices between the various standards and regulations, making it possible to combine them as needed. We will cover the Common Compliance Framework in more detail in a future blog.

Security Governance

When developing policies and procedures, document only what can realistically be done. This can be accomplished by involving the process owners in the generation and maintenance of the policies and procedures. Moreover, there must be triggers in the system development life cycle and change management processes to ensure that policies and procedures are updated as the computing environment and risk environment changes. At a minimum, they should be reviewed annually.

Policy Management

Closed-loop policy management is implemented using the network security architecture, and includes configuration management of network devices, enforcement of policies in the network, and verification of network functionality via audit trails. Verification and audit trails close the loop on policy management, and result in updates to the policy to reflect corrective actions.

Closed Loop Policy

Gap 2: Security Governance Program

Business Problem

Contrary to some beliefs, companies cannot eliminate all risks for two reasons. First, the internal and external threats that create risk are very dynamic. Second, control investments eventually result in diminishing returns. Instead of focusing time and resources on eliminating risk, a realistic goal should be to reduce risk via Risk Management to a level that is acceptable to senior management and the board.

Balancing Security

Benefits of Managing Risk, Compliance and, Audits

Proactive risk management is a process that must be embedded in the organization’s culture to reap significant benefits. It can be used to constantly “tune” the control environment to ensure that correct controls are present and mitigating risks. Without using risk data input, the organization may have the wrong controls—or no controls—in place, which could leave the organization exposed to significant threats, material findings, and excessive costs.

Information technology personnel and resources play an integral risk management role, aiding in threat prevention, detection, and reconciliation. With the help of technology resources, business units and senior managers can work to execute a cost-effective program that continuously identifies, prioritizes, and manages risks.

Risk Management

When designing controls to proactively mitigate risks, consideration must be given to using tiers of controls in the same manner that a castle builder uses multiple walls to protect a fortress. Security personnel call this approach “defense in depth,” wherein layers of controls are used to protect against known and unknown threats that can originate from inside or outside of the organization. Controls must be risk-based and validated.

In some cases, the use of several relatively inexpensive and simple controls may generate more reliable outcomes than one expensive or complicated control. Layered controls also create a “safety net,” in the event that a previous control layer fails. The objective is to have sufficient capabilities to reduce risks to an acceptable level. When auditors review controls and determine that there has been a failure, they will look for compensating controls to offset the level of deficiency. In other words, if one layer fails, they will look to see if any other controls are present to detect or reduce the impact of the failure of the first level. If other controls are in place, the auditor can use his or her judgment to reduce the severity of the noted deficiency.

Risk Framework

Continuous and proactive risk assessment and risk management are quickly becoming the norm. Auditing is management’s tool to make sure the entire organization has the resources, systems, and processes for delivering efficient, effective, and secure operations. Audits also are designed to identify key goals, issues, and challenges facing an organization and evaluate its progress against important initiatives. In leading organizations, internal auditors provide independent assurance to key stakeholders while identifying any areas for improvement that management should consider.

For each organization there are different goals and objectives, and certainly different issues and challenges. Therefore, there is no one-size-fits-all audit process, nor one audit approach that fits all situations. There are, however, some common and strategic audit-focused questions that must be addressed in most organizations:

    • Is the organization addressing regulatory compliance requirements adequately?
    • Is the organization investing in operational excellence?
    • Has continuous improvement been studied for applicability and implemented in some form in your IT function? In your various business units? In audit?
    • Do governance and risk management practices reflect today’s operating climate?
    • Is performance meeting the needs of your users and partners?
    • Is management forward-looking, or is it just investing in solving current and past problems?

Compliance ensures that an organization’s governance processes are effective and its primary risks are being managed. Compliance is founded on effective controls: those structures, activities, processes, and systems that help management effectively mitigate risk.

A dedicated, independent, and effective internal audit brings a systematic, disciplined approach to assessing the effectiveness of internal controls and risk management processes.

Because internal auditors are experts in understanding organizational risks and the internal controls available to mitigate these risks, they assist management in understanding these topics and provide recommendations for improvements. At the same time, data gathered from an audit can also help an organization improve its operations across the enterprise.

Tying it all Together

In general, a proactive technology function, compliance function, and management team will study the strategic direction of the organization, implement plans to meet its long-term goals, and continually communicate progress toward those goals in partnership with the auditors, together with an agreed yet evolving baseline audit standard.

Audit and IT need to work in lock-step, seeking solutions and evolving in concert, rather than audit randomly selecting controls that make IT look bad as the goalposts continually shift. An effective internal audit activity understands the organization, its culture, operations, and risk profile. This makes audit a valuable resource for management, the board, and its designated audit committee. The objectivity, skills, and knowledge of competent internal auditors can significantly add value to an organization’s internal control, risk management, and governance processes.

Coming Soon

Security Program Best-Practices – Part 2 will continue this Security Governance Series with another set of gaps for discussion and helpful advice on key components.


Thanks for your interest!

Nige the Security Guy.

Vulnerability Assessment Remediation


The external threat environment has become quieter and much more dangerous. Today’s attacks target specific companies, individuals and data. A typical targeted attack will exploit multiple security weaknesses to achieve the ultimate goal — usually, to steal sensitive data, compromise a specific account or disrupt operations.

Organizations need to present a hardened, defensible target to an attacker, in addition to the ability to detect and contain. This requires a combination of vulnerability assessment and management processes to find and fix security weaknesses in systems and applications, and the implementation of compensating controls or shielding technologies to protect legacy systems and applications that will have long-standing vulnerabilities.

Vulnerability Process

Within the Security Assessment Series, this blog discusses tactical and reactive vulnerability assessment and remediation, while the next blog in the series will cover strategic, more proactive vulnerability and risk management.

Vulnerability Assessment versus Penetration Tests

Over the past 25 years I have performed hundreds of security assessments for organizations, many of them vulnerability assessments and penetration tests. In my experience, organizations often seek a penetration test when they really need a vulnerability assessment, and should address those findings before progressing to a penetration test. I am often asked to perform a penetration test only to discover that the environment is not hardened and suffers from many basic flaws. While a penetration test sounds exciting, there are key differences.

Vulnerability Maze

A Vulnerability Assessment is designed to identify all potential vulnerabilities, validate them, prioritize them based on scores such as the Common Vulnerability Scoring System (CVSS) assigned to CVE entries by the National Vulnerability Database, and create a prioritized list for remediation. The scope is broad, spanning many external and/or internal systems. Root-cause analysis may be warranted to understand why the vulnerabilities exist and which processes need addressing to resolve them fully. These are largely tool-based tasks that seek to identify as many potential issues as possible.

Vulnerability assessments typically follow these general steps:

  1. Catalog assets and resources in a system
  2. Assign value and importance to the resources
  3. Identify vulnerabilities or potential threats
  4. Mitigate or eliminate most serious vulnerabilities

A Penetration Test is focused either on a hardened, locked-down environment or on a specific platform or service for which an organization wants validation that nothing was missed and that no as-yet-undiscovered flaw or vulnerability exists. Penetration tests are often used as pre-production validation for sensitive systems, or to assess what can be achieved with a mix of advanced threats and social engineering. They leverage more manual methods and report on the path the attacker took and the creative exploits used to ‘capture the flag’.

Penetration tests typically follow these steps:

  1. Determination of scope and target(s)
  2. Information gathering or reconnaissance
  3. Exploitation attempts for access and escalation
  4. Sensitive data collection and ex-filtration testing
  5. Clean up, evidence collection and reporting

As part of an emerging and evolving network security program, organizations should deploy a vulnerability assessment and phased remediation strategy that makes practical sense for the current, tactical vulnerability landscape and, in parallel, evolve it towards a more comprehensive and proactive vulnerability and risk management strategy.

Vulnerability Landscape

During a vulnerability assessment it is possible to discover many vulnerabilities and the sheer volume of data can quickly become overwhelming. This blog proposes a practical and simplified process to get you started, focused on asset inventory and classification to profile the vulnerabilities and thus enable the appropriate prioritization and scheduling of remediation actions.

Remediation Steps

Inventory

Taking a complete inventory of the basic makeup of the organization’s network is a critical first step. The second step is to inventory all mission-critical and/or enterprise applications in use. I typically recommend developing an Asset Inventory that contains a list of all approved assets, services, interfaces, connections, and so on. While this may not be possible in all cases, it is extremely important for externally accessible services as well as mission-critical or core services. This will be covered more fully in a future blog on Application Architecture Taxonomy.

  • Inventory –
    • Inventory Network Infrastructure
    • Inventory Applications and Services
      • Identify Device Roles/Groups
      • Service Dependency Mapping
        • Organization Function
        • Service Group
          • Applications
          • Ports
          • Relationships

Classification

Asset classification, based on criticality and sensitivity, enables the determination and prioritization of security configuration standards and remediation actions. For example, assets in production should conform to all applicable security standards as part of deployment and maintenance, and critical and/or sensitive assets are prioritized. A future blog will take a deep dive into data classification.

  • Classification –
    • Category –
      • R&D
      • Staging
      • Production
      • Mission Critical
    • Classification –
      • Restricted
      • Trusted
      • Internal
      • Public
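The category and classification labels above can be combined into a simple remediation priority. The numeric weights below are assumptions for illustration, not a standard scale:

```python
# Sketch: turning category and classification labels into a priority.
# The weights are assumed for illustration only.
CATEGORY = {"R&D": 1, "Staging": 2, "Production": 3, "Mission Critical": 4}
CLASSIFICATION = {"Public": 1, "Internal": 2, "Trusted": 3, "Restricted": 4}

def priority(category, classification):
    """Higher value means remediate sooner."""
    return CATEGORY[category] * CLASSIFICATION[classification]

print(priority("Mission Critical", "Restricted"))  # 16
print(priority("Staging", "Internal"))             # 4
```

A mission-critical restricted asset thus outranks a staging server handling internal data, which matches the intent of prioritizing production and sensitive systems.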


Ideally, the identification phase should occur during the architecture and design phase (e.g., via security sign-off criteria), immediately prior to the equipment becoming operational (e.g., as a step in the release management process), and at regular intervals throughout the operational life of the infrastructure. Identification and, to some extent, assessment come from activities such as vulnerability scanning and penetration testing, for which this blog provides a foundation and an opportunity to develop a practical methodology and process.

Nessus Sample
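
As one example of feeding scan output into this process, a Nessus export can be filtered down to findings above a severity floor. The snippet below assumes the `.nessus` v2 XML layout (`ReportHost`/`ReportItem` elements with a `severity` attribute); the embedded sample, hostnames and plugin names are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative excerpt of a Nessus v2 export; real files carry
# many more attributes per ReportItem.
SAMPLE = """<NessusClientData_v2><Report name="scan">
  <ReportHost name="web01">
    <ReportItem port="443" severity="3" pluginID="12345"
                pluginName="Example TLS Weakness"/>
    <ReportItem port="22" severity="0" pluginID="10267"
                pluginName="SSH Server Type and Version"/>
  </ReportHost>
</Report></NessusClientData_v2>"""

def findings(xml_text, min_severity=1):
    """Yield (host, port, severity, plugin) at or above a severity floor."""
    root = ET.fromstring(xml_text)
    for host in root.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            sev = int(item.get("severity", "0"))
            if sev >= min_severity:
                yield (host.get("name"), item.get("port"), sev,
                       item.get("pluginName"))

results = list(findings(SAMPLE))
print(results)  # [('web01', '443', 3, 'Example TLS Weakness')]
```

Filtering out informational (severity 0) items early keeps the assessment and remediation steps focused on actual weaknesses.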


Assessment requires knowledge of the technical implications of the weakness, as well as the business implications of its exploitation. The risk owner must then make a decision: accept the risk, take remedial action to fix the weakness, or transfer the risk to another party.

  • Assessment –
    • Evaluate Vulnerability Risk
      • Accept
      • Avoid
      • Mitigate
      • Transfer
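
The accept/avoid/mitigate/transfer decision can be sketched as a small decision aid. The thresholds here are example policy only, not a standard; a real risk owner weighs many more factors:

```python
# Hypothetical decision aid for the four risk-treatment options above.
def treatment(likelihood, impact, mitigation_cost):
    """likelihood/impact on a 1-5 scale; cost relative to expected loss."""
    risk = likelihood * impact
    if risk <= 4:
        return "Accept"          # low risk: document and monitor
    if mitigation_cost > risk:   # cheaper to insure/outsource than to fix
        return "Transfer"
    if likelihood >= 4 and impact >= 4:
        return "Avoid"           # retire or isolate the exposed service
    return "Mitigate"            # patch, reconfigure, add controls

print(treatment(1, 3, 2))    # Accept
print(treatment(5, 5, 10))   # Avoid
print(treatment(3, 3, 20))   # Transfer
print(treatment(3, 3, 5))    # Mitigate
```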

Vulnerability Profile

It is normal practice to rationalize a set of reported vulnerabilities into groups: those that are accepted but documented, those that have mitigating controls and those that are mitigated with a solution.


Remediation may range from detailed bottom-up technical measures, such as the application of patches or changes to the configuration of firewalls or other network-based vulnerability protection infrastructure, through changes to custom-built applications, right up to very high-level measures, such as changes to governing policy, processes, procedures or configuration standards.

  • Remediation –
    • Create Remediation Task Map
      • Action Plan:
        • Budget
        • Resources
        • Priority
        • Timing
          • Immediate
          • 30 Days
          • 6 Months
          • Future
      • Typical Actions:
        • Patch
        • Upgrade
        • Configuration Standards Rollout [by Role]
        • Infrastructure Refresh
        • New Deployment
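
A remediation task map built from the elements above might look like the following sketch; the task fields, teams and figures are hypothetical:

```python
from dataclasses import dataclass

TIMING = ("Immediate", "30 Days", "6 Months", "Future")  # buckets from above

@dataclass
class RemediationTask:
    asset: str
    action: str     # Patch | Upgrade | Configuration Standards Rollout | ...
    priority: int   # 1 = highest
    budget: float
    resources: str
    timing: str

    def __post_init__(self):
        if self.timing not in TIMING:
            raise ValueError(f"unknown timing bucket: {self.timing}")

plan = [
    RemediationTask("hr-db", "Patch", 1, 0.0, "DBA team", "Immediate"),
    RemediationTask("branch-fw", "Configuration Standards Rollout", 2,
                    5000.0, "Network team", "30 Days"),
    RemediationTask("legacy-app", "Infrastructure Refresh", 3,
                    120000.0, "Project team", "Future"),
]

# Task map grouped by timing, ordered by priority within each bucket
task_map = {t: sorted((x for x in plan if x.timing == t),
                      key=lambda x: x.priority) for t in TIMING}
print([t.asset for t in task_map["Immediate"]])  # ['hr-db']
```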



Reporting metrics should include the frequency of identification exercises (e.g., regular vulnerability reports) and results from identification and assessment, including the number of issues and accumulated risks, as well as the tracking of remediation actions. The future Risk Management blog will expand this to include the use of a risk register and tracking to report upwards to executive management.

  • Reporting –
    • Metrics and Trends:
      • Number of issues
      • Accumulated risks
      • Tracking remediation actions
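
Trend reporting over these metrics can be as simple as month-over-month deltas. The monthly figures below are invented for illustration:

```python
# Hypothetical monthly scan summaries; the field names are illustrative.
monthly = [
    {"month": "Jan", "open": 120, "accepted": 10, "remediated": 30},
    {"month": "Feb", "open": 95,  "accepted": 12, "remediated": 55},
    {"month": "Mar", "open": 70,  "accepted": 12, "remediated": 80},
]

def trend(reports, field):
    """Month-over-month deltas (negative = improving for 'open' issues)."""
    values = [r[field] for r in reports]
    return [b - a for a, b in zip(values, values[1:])]

print(trend(monthly, "open"))        # [-25, -25]
print(trend(monthly, "remediated"))  # [25, 25]
```

A steadily falling "open" trend alongside a rising "remediated" trend is the kind of simple signal worth reporting upwards.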


The next blog in the Security Assessment Series will develop this theme further to cover more proactive Vulnerability Management Strategy. Once the organization becomes more secure it can evolve to stay more secure.

Thanks for your interest!

Nige the Security Guy.

Defensible Security Posture

Defensible Security Posture – Part 1

Defensible Security Posture – Part 2

The purveyors of Fear, Uncertainty and Doubt (FUD) assert that preventing today’s advanced threats is unrealistic and that internal compromise is inevitable – a message reinforced by ever more reports of malware and advanced attacks penetrating insufficient security controls. However, it’s not all doom and gloom. Although experts concede that stopping 100% of attacks is a technical impossibility, there are ways for organizations to avoid becoming the next devastated victim.


Unfortunately, ‘secure’ is still the target of many CISOs and company leadership. From painful experience, security practitioners collectively know that ‘secure’ is a mythical goal that doesn’t actually exist. The leap in logic proposed by this blog is that we move to something that’s ‘defensible’.

The basic idea of a Defensible Security Posture is that you aren’t striving for an absolute, but rather for a position (or posture) that is able to be defended even when it’s infiltrated. The analogy that I like to use is the human immune system since security and advanced attacks are organic in nature and can come from various sources of infection. There are a few basic things we need to understand:

  1. Defensible does not mean secure
  2. There are more things to defend than there are resources to defend
  3. Sometimes your defenses can become your weakness
  4. Defensibility requires deep understanding of what critical assets you’re defending
  5. Defensibility focuses on what, why, how, when and from whom

Advanced Persistent Threats

The US National Institute of Standards and Technology (NIST) defines an APT as:

An adversary that possesses sophisticated levels of expertise and significant resources which allow it to create opportunities to achieve its objectives by using multiple attack vectors (e.g., cyber, physical, and deception). These objectives typically include establishing and extending footholds within the information technology infrastructure of the targeted organizations for purposes of exfiltrating information, undermining or impeding critical aspects of a mission, program, or organization; or positioning itself to carry out these objectives in the future. The advanced persistent threat: (i) pursues its objectives repeatedly over an extended period of time; (ii) adapts to defenders’ efforts to resist it; and (iii) is determined to maintain the level of interaction needed to execute its objectives.

Attacks from APTs are growing in scope, increasing in frequency and improving in effectiveness – establishing an insider base camp and covering tracks. Current strategies are not well-suited to mitigating prolonged and determined attackers leveraging a growing collection of stealthy techniques. The traditional perimeter-and-prevention response to threats is no longer realistic. Organizational resources need to shift the focus instead onto Detection, Containment, Eradication and Recovery.

Defensible Logo

There is no silver bullet or single solution. Most organizations continue to focus on defending against zero-day exploits by relying on commercial security products to block bad sites and software and by patching systems to correct vulnerabilities in installed software. While these approaches are effective against some threats, they fail to stop the advanced attacks and provide no knowledge of what an adversary does once the network is penetrated.

APT attackers continually demonstrate their capability to compromise systems by using social engineering techniques, customized malware, and zero-day exploits that intrusion detection, anti-virus and patching cannot always detect or mitigate. Responses to APT intrusions require an evolution in analysis, process, visibility and technology. This blog describes an intelligence-driven, threat-focused approach.

Intelligence-driven Network Defense

Organizations may employ a number of active defense techniques within their own enterprises to detect attacks that circumvent passive defenses and to track adversaries as they explore networks. One approach uses honeypots to attract adversaries and look for patterns of behavior. For example, if a honeypot is seeded with a number of different types of documents, organizations can watch to see which documents the adversary chooses to exfiltrate.

Intelligence-driven Network Defense is a risk management strategy that addresses the threat component of risk, incorporating analysis of adversaries, their capabilities, objectives, doctrine and limitations. This is necessarily a continuous process, leveraging indicators to discover new activity. It requires a new understanding of the intrusions themselves, not as singular events, but rather as phased progression.

The benefit of Intelligence-driven Network Defense is a more resilient security posture. After all, APT attackers are persistent, attempting intrusion after intrusion and adjusting their operations based on the success or failure of each attempt. Once a compromise is achieved, the APT attacker deploys backdoors for contingency and covers any tracks.

The Signature of an APT

In any Advanced Persistent Threat (APT) attack there is typically a pre-defined set of phases that acts as a signature, as follows:

APT Evolution

The important point is not that this is a linear flow – some phases may occur in parallel, and the order of earlier phases can be interchanged – but rather how far along an adversary has progressed in his or her attack, the corresponding damage, and the investigation that must be performed.

APT Attack Kill Chain 2

  • Reconnaissance – Research, identification and selection of targets, often represented as crawling Internet websites such as social networks, organizational conferences and mailing lists for email addresses, social relationships, or information on specific technologies.
  • Weaponization – Coupling a remote access trojan with an exploit into a deliverable payload. Increasingly, application data files such as PDFs or Microsoft Office documents serve as the weaponized deliverable.
  • Delivery – Transmission of the weapon to the targeted environment via, for example, email attachments, websites, and USB removable media.
  • Exploitation – After payload delivery to victim host, exploitation triggers intruders’ code. Exploitation targets an application or operating system vulnerability or leverages an operating system feature that auto-executes code.
  • Installation – Installation of a remote access trojan or backdoor on the victim system allows the adversary to maintain persistence inside the environment.
  • Command and Control – APT malware typically establishes remote command and control channels so that intruders have “hands on the keyboard” access inside the target environment.
  • Actions on Targets – Typically the prime objective is data exfiltration, which involves collecting, encrypting and extracting information from the victim environment. Intruders may only seek access to a victim host for use as a jump point to compromise additional systems and move laterally inside the network, or to attack other partner organizations.
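
Given observed indicators, the phases above let an analyst express how far along the chain an intrusion has progressed. The indicator-to-phase mapping below is a made-up example:

```python
# The seven phases above, in chain order
KILL_CHAIN = ["Reconnaissance", "Weaponization", "Delivery", "Exploitation",
              "Installation", "Command and Control", "Actions on Targets"]

# Example indicators collected during an investigation (illustrative values)
observed = {
    "Delivery": ["phishing email with weaponized PDF"],
    "Exploitation": ["exploit trigger in PDF reader"],
    "Command and Control": ["beacon to known C2 domain"],
}

def deepest_phase(observations):
    """Return the furthest phase along the chain with observed indicators."""
    reached = [KILL_CHAIN.index(p) for p in observations]
    return KILL_CHAIN[max(reached)] if reached else None

print(deepest_phase(observed))  # Command and Control
```

The deeper the phase, the greater the corresponding damage and the wider the investigation that must follow.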

Actionable Intelligence and the Intrusion Kill Chain

Cyber ‘kill chain’ methodology is the latest in a series of security strategies, targeted especially at APTs, based on a more proactive and visible model of real-time network monitoring, analysis, and mitigation. The formal concept of cyber ‘kill chain’ methodology was first developed by a group of scientists at Lockheed Martin in a paper titled “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains”.

The intrusion kill chain becomes a model for actionable intelligence where practitioners align organizational defensive capabilities to the specific processes an adversary undertakes to target that organization. The end goal is to analyze the data for patterns of attack methods, behaviors of distinct hostile actors, and other indicators that can inform the development of unique responses. Fundamentally, this approach is the essence of an Intelligence-driven Network Defense security posture: basing security decisions and measurements on a keen understanding of the adversary.

Defensible Actions Matrix

The following is an example of a course of action matrix using the actions detect, deny, disrupt, degrade, deceive, and contain. Documenting defender capabilities in this matrix enables the reader to assess their Defensible Security Posture and identify any gaps or needed compensating controls. The matrix includes traditional systems like network intrusion detection systems (NIDS) and firewall access control lists (ACLs), system-hardening best practices like audit logging, but also vigilant users themselves, who can detect suspicious activity.

Kill Chain Actions 2
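
A course-of-action matrix can be held as a simple phase-by-action table and checked for gaps. The capability entries below are partial examples, not an authoritative mapping:

```python
# Actions from the course-of-action matrix described above
ACTIONS = ["Detect", "Deny", "Disrupt", "Degrade", "Deceive", "Contain"]

# Partial matrix: kill-chain phase -> {action: defensive capability}
matrix = {
    "Delivery":            {"Detect": "Vigilant users", "Deny": "Mail filtering"},
    "Exploitation":        {"Detect": "NIDS", "Deny": "Patching"},
    "Command and Control": {"Detect": "NIDS", "Deny": "Firewall ACL",
                            "Contain": "Zone segmentation"},
}

def gaps(m):
    """List the phase/action cells with no documented defensive capability."""
    return [(phase, action) for phase, caps in m.items()
            for action in ACTIONS if action not in caps]

for phase, action in gaps(matrix):
    print(f"gap: no '{action}' capability for {phase}")
```

Each empty cell is a candidate for investment or a compensating control, which is exactly how the matrix prioritizes capability gaps.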

Intelligence-driven Network Defense is a necessity in light of advanced persistent threats. As conventional, vulnerability-focused processes are insufficient, understanding the threat itself, its intent, capability, doctrine, and patterns of operation is required to establish resilience. The intrusion kill chain provides a structure to analyze intrusions, identify indicators and drive defensive courses of actions. This model prioritizes investment for capability gaps, and serves as a framework to measure the effectiveness of the defenders’ actions. When defenders consider the threat component of risk to build resilience against APTs, they can turn the persistence of these actors into a liability, decreasing the adversary’s likelihood of success with each intrusion attempt.

Evolving Towards a Defensive Posture

If your organization does not already have visibility with proactive monitoring built into your environment, this may seem like a major challenge. Implementing an Intelligence-driven Network Defense with a Cyber Kill Chain should be based initially on a prototype-then-iterate approach to evolve in capability and sophistication. Start with a basic framework that you can comfortably build and operate, then make progress from there.

Cyber Kill Chain

Perform a Security Health Check with a focus on the organization’s web presence and external perimeter to see what information it could give an attacker – or leverage a third-party professional to do so. Implement layered security to decrease the possibility that threats will slip through unnoticed. Create a policy for dealing with malware events. Educate staff about what to do with unexpected, suspicious emails and attachments.

With each step taken, you’ll get more information about your environment. And the more information you have, the more likely you will be able to identify anomalous behavior.

Next Steps

In Defensible Security Posture – Part 2 we discuss a case study that leverages the Defensible Actions Matrix and provides some recommended APT-focused best-practices.

The Defensible Security Posture series using an Intelligence-driven Network Defense will be built upon in future blogs. In the APT Operational Maturity and APT Intelligent Operations blogs we will discuss the need for a continuously evolving next-generation SIEM, risk management processes and network behavior anomaly detection that enable organizations to take security operations and situational awareness to the next level, depending upon various factors including threat/risk profile.

The defensible architecture foundation uses Adaptive Zone Defense to segment critical assets from general-purpose infrastructure and enable containment, and includes the Application Architecture Taxonomy blog, which discusses the analysis, placement, policy and controls for assets based upon classification and risk. There will also be a blog that takes a deeper dive into Risk Management Practices.

Thanks for your interest!

Nige the Security Guy.