vCISO Smart Practices – Part 1

vCISO Smart Practices – Part 1: Enabling Success via Collaboration Infrastructure

The Internet of Things offers a tremendous opportunity for businesses to truly transform themselves by realizing the potential of data that sits, untapped, in existing infrastructures. The key to unlocking that data is the evolution towards a Secure Collaboration Infrastructure.

vCISO Smart Practices

This blog introduces our vCISO Smart Practices series, which kicks off with a fundamental discussion of the importance and value of human collaboration and teamwork as the foundation of a cross-discipline, cross-functional ‘Architecture Team’. We also introduce the blog author, Nige the Security Guy (@NigeSecurityGuy).

This blog series will later address a truly distributed security architecture that supports the Collaboration Infrastructure, applying Smart Practices to it as we evolve rapidly towards the new and exciting, yet challenging, IoT.

“Training often gives people solutions to problems already solved.
Collaboration addresses challenges no one has overcome before.” Marcia Conner

Sharing and Reciprocity

Collaboration and sharing are sophisticated skills that ask people who work together to look beyond personal interests towards outcomes benefiting the whole. They are a great way to address complex challenges, since they have the potential to tap communal creativity, unleash true innovation, and earn genuine buy-in.

Collaboration

Collaboration, at the conceptual level, involves:

  • Awareness – We become part of a working entity with a shared purpose
  • Motivation – We drive to gain consensus in problem solving or development
  • Participation – We participate in collaboration and we expect others to participate
  • Mediation – We negotiate and we collaborate together and find a middle point
  • Reciprocity – We share and we expect sharing in return through reciprocity
  • Reflection – We think and we consider alternatives
  • Engagement – We proactively engage rather than wait and see

Together we can build a safe and increasingly more secure environment …

“Security done right is a business enabler that dramatically reduces total cost of ownership (TCO), providing a tangible Return on Security Investment (ROSI).

Replacing IT complexity and fragmentation with an adaptive, modular, and flexible architecture enables agility and improves your competitive edge — so the business can refocus quickly as new opportunities emerge.” Nigel P. Willson

People Process Policy Technology

A critical success factor in successfully deploying a collaboration infrastructure is orchestrated policy, focused resources, and well-defined process that fully leverages and unlocks technology. As a creative, solutions-focused, charismatic, and passionate security evangelist, Nigel Willson is available to consult as a Trusted Security Services Partner, collaboratively assisting organizations to iteratively improve and optimize their security as a virtual team member in the role of vCISO, IT Security Strategist, and Architect.

Nige the Security Guy: Professional Profile


Nigel P. Willson
Principal Security Architect
AT&T Security Solutions

Nigel Willson is a Principal Security Architect at AT&T with 30 years of experience in Security Operations, Management, Research, Development, and Security Services, providing thought leadership, architecture/design, and practical strategy.

Nigel has responsibilities as Security SME for AT&T complex cyber security solutions across the portfolio of security consulting, managed security services and mobile security solutions.

He specializes in collaboration as both a Trusted Advisor and Virtual Chief Information Security Officer (vCISO) helping companies to evolve and improve their security capability maturity and posture in the following areas:

  • IT Security Governance, Strategy, Roadmap
  • Security Architecture & Design (including adaptive security architecture lifecycle)
  • Security Operations (including advanced threats, detection frameworks, defensible posture)
  • Threat Intelligence & Risk Management (focused on business processes)
  • Security Research & Analysis
  • Regulatory Compliance

AT&T Security Solutions is the customer-facing security opportunity team within AT&T Advanced Enterprise Solutions. Nigel's participation is consistently solicited by AT&T teams and AT&T customers, as both a Trusted Advisor and Security SME, in both the private and public sectors. He joined AT&T as a Practice Director, Security via the acquisition of Callisma (AT&T Consulting Solutions) in 2005.

Prior to joining AT&T, Nigel worked as a Practice Director, Security for Avaya Converged Security as well as TCS America responsible for the development of discrete security consulting services and leading teams of security consultants. He previously worked as the Director, Security for The Walt Disney Company focused on global Internet Security for 27 business units including ABC, Disney On-Line, and ESPN.

Nigel is a former assembler programmer and reverse engineer (ethical hacker) with a diverse international background. He has worked on U.S. DoD projects developing security products and technology for the World-Wide Military Command and Control System (WWMCCS) and Military Airlift Command Deployment Flow (MACDF).


He is a published author of many security guides, books, and magazine articles, and currently operates the security-focused NigeSecurityGuy blog, providing impartial, practical advice and methodology on security architecture, assessments, and advanced persistent threats (APTs). Nigel also operates the ‘Solving the APT Defense Puzzle’ group on LinkedIn, a reference library of useful research and topics.

Nigel was recently selected as a finalist in the InfoSec Europe 2014 Security Bloggers awards and was invited to publish an article on Leveraging Security as a Business Enabler.

Nigel’s passion is taking blog readers Back to Basics to focus on key security principles to develop a strong architectural foundation (Security Architecture Series) and from that add advanced threat defense (APT Strategy Guide) as well as security operations optimization (NG-OPS Strategy Guide).

“Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.” Margaret Mead

Background Summary

  • International Background — 30 years international experience gained as a security expert across England, Australia, France, and the USA.
  • Strategic Architect – Cloud-Orientated Architecture, BYOD, Mobile, Security Operations, Risk, Intelligence, Analytics, Metrics, Visualization – Situational Awareness: Detect, Contain, Investigate, Eradicate, Recover
  • Director, Security @ Disney – Establish strategic architecture team, develop successful proactive security management program.
  • Published Author – Author and co-author of many security guides, books, and magazine articles.
  • Security Consultant — 15 years thought leadership and strategy experience consulting to Fortune 500 companies.
  • Security Engineer – Developed new security protocols and products for U.S. Department of Defense (DoD), e.g., MACDF and WWMCCS. Used in Gulf War. Worked on multi-level security and covert channel prevention.
  • Reverse Engineer – Original assembler programmer, ethical hacker and reverse engineer who could analyze code and manipulate any technology, protocol or system.
  • Awards Plaque: AT&T April 2008: In Recognition of Unwavering Commitment, Steadfast Leadership and Outstanding Performance on the California State University ITRP program.


Thanks for your interest!

Nige the Security Guy.


Holiday – Thank You

As a busy yet productive 2013 draws to a close and we enter the holiday period, I wanted to take this opportunity to thank our readers and followers for making the Nige the Security Guy blog a great success.

We could not do what we do without your support, sharing and feedback. Thank you.


Please do continue to share the Nige the Security Guy blog with your peers and connections as we start 2014 with an exciting array of practical and thought-provoking blogs on various security topics. Happy holidays!

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices 5

Security Program Best-Practices – Part 5

Security Program Best-Practices – Part 1
Part 2
Part 3
Part 4
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments. In this blog we will discuss the final and most critical Gap 10 – Develop Firewall Rule Lifecycle Management.

Firewall Rule Lifecycle

Gap 10: Firewall Rule Lifecycle Management

Business Problem

Firewalls are the first and continued line of defense for enterprises today, handling vast amounts of traffic across the network. On the perimeter alone firewalls filter millions of packets daily. The organizational security policy implemented in these firewalls requires the definition of hundreds and often thousands of rules and objects. Objects may include groups of servers, user machines, sub-networks in the data center, and networks in company branch offices or DMZs. The firewall rules define which type of applications and which network services are allowed to traverse between networks and which should be blocked.

Firewalls are Organic

Since business needs are dynamic, firewall policies are constantly changed and modified. This continuous flux causes the firewall configuration to grow dramatically over time. A huge and consequently complex firewall configuration is hard to manage and may require lengthy research in order to add or change a rule. Moreover, the complexity of the configuration decreases the firewall's performance and may lead to potential security breaches. For example, a rule created to allow a temporary service for a limited time introduces real security risk if the administrator fails to delete it after the task is finished.

Complex Rules

Finding unused rules that have not matched any traffic, duplicate rules, and rules that are covered by other rules is a complex manual task for the firewall administrator. It may take days of investigating just to locate such rules in huge firewall configurations, while at the same time the firewall is continuing to change daily due to user requests.

Firewall Policy Vulnerability

Gartner noted in a recent research note that …

“Through 2018, more than 95% of firewall breaches will be caused by firewall misconfigurations, not firewall flaws.”

Organizations need to develop a Firewall Rule Lifecycle Management process to clean up their firewall policies, easing the network security administrator’s job while boosting firewall performance and eliminating security holes.

Organizations need to identify and address, as follows:

  • Unused rules: Rules that have not matched any packet during a specified time. Cisco ACL hit counters, central syslog logging, or commercial tools can be used to compare the actual traffic in the firewall logs against the rules in the policy. Unused rules are ideal candidates for removal; often the application has been decommissioned or the server has been relocated to a different address.
  • Covered or duplicated rules: Rules that can never match traffic because a prior rule, or a combination of earlier rules, prevents traffic from ever reaching them. During firewall cleanup such covered rules can be deleted since they will never be used. Covered and duplicated rules waste the firewall's processing time and decrease its performance.
  • Disabled rules: Rules that are marked “disabled” and are not in operation. Disabled rules are ideal candidates for removal, unless the administrator keeps them for occasional use or for historical record.
  • Time-inactive rules: Rules that were active for a specified period in the past that has since expired. Such rules can become active again at the same time next year, so retaining them may create security holes.
  • Rules without logging: Rules that are defined not to generate logs. Security best-practice guidelines usually dictate logging everything; however, since log information consumes a large amount of disk space, administrators often configure highly used rules that control low-risk traffic not to generate logs. Listing the rules without logs will help the administrator verify that the lack of audit for these rules does not contradict policy.
  • Least used rules and most used rules: Rules that matched the smallest or largest number of packets over a predefined, configurable period of time. Usage statistics help the administrator clean up for performance improvement: the most used rules can be repositioned higher in the configuration and the least used rules lower. Rules with a zero hit count may be removed.
  • Rules with empty comments: Rules that are not documented, i.e., without a text explanation or reference # to the original change management request. Policy often requires an explanation for each rule, so defining rules without comments is a violation of the policy. Some companies require entering the ticket number from the help desk trouble-ticketing application into the rule comment.
  • Unattached objects: Objects that are not attached to any rule or unattached global object.
  • Empty objects: Objects that do not contain any IP address or address range.
  • Duplicate objects: Objects that already exist but are recreated, contributing to policy “bloat”.
  • Unused objects: Objects whose address ranges didn’t match any packet during a specified time or unused global object.

By removing the unnecessary rules and objects, the complexity of the firewall policy is reduced. This improves management, increases performance, and removes potential security holes.
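To make these categories concrete, here is a minimal sketch in Python of how an administrator might bucket rules into cleanup candidates. The `Rule` data model and its field names are hypothetical, purely for illustration; real exports from a firewall console or commercial tool will differ.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical rule model; the fields mirror the cleanup categories above
# (hit counts, last match time, disabled flag, logging flag, comment).
@dataclass
class Rule:
    rule_id: str
    hit_count: int
    last_match: Optional[datetime]
    enabled: bool
    logging: bool
    comment: str = ""

def cleanup_candidates(rules, window_days=90, now=None):
    """Bucket rules into cleanup categories: unused, disabled,
    not logging, and undocumented (empty comment)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    report = {"unused": [], "disabled": [], "no_logging": [], "undocumented": []}
    for r in rules:
        if not r.enabled:
            report["disabled"].append(r.rule_id)
            continue
        if r.hit_count == 0 or (r.last_match and r.last_match < cutoff):
            report["unused"].append(r.rule_id)
        if not r.logging:
            report["no_logging"].append(r.rule_id)
        if not r.comment.strip():
            report["undocumented"].append(r.rule_id)
    return report
```

Each bucket then becomes a worklist: disabled and zero-hit rules are removal candidates, while un-logged and undocumented rules go back to the change-management process for review.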

Cleanup Phase 1: Validation

The Validation phase involves manually (or with the use of public domain or commercial tools such as AlgoSec or Tufin) reviewing the firewall rules and performing a static analysis.

Algosec Example

Items to be reviewed in this step are, as follows:

  • Unattached Object / Unattached VPN User-Group – An object that:
    • Does not appear in any rule
    • Every group it belongs to does not appear in any rule
    • In any policy on any firewall
  • Empty Objects:
    • Do not refer to any IP address
  • Unattached VPN Users:
    • Do not appear in any user group and have no access
  • Unattached access-list (Cisco)
    • Not connected to any interface
  • Expired VPN users
    • No longer have access
  • Disabled Rules:
    • Maybe it’s time to delete them?
  • Time-Inactive rules:
    • Timed rules are active on certain days of the month, days of the week, or times of the day…
    • … but you cannot set a year.
    • Identify expired rules before they become active again next year.
  • Duplicate Rules
    • Firewalls process the rules in-order “first match”
    • If “early” rules match every packet that a “late” rule could match – the “late” rule is covered (== useless clutter!)
    • Easy cases: a single rule covers another rule, and the object names match exactly
  • Duplicate Objects:
    • Most FW Vendor consoles cannot answer the question “does this definition already exist with another name?”
    • Result:  Administrators often define the same object (Host, Subnet, or Group) multiple times

Cleanup Phase 2: Unused Rules

The Unused Rules phase involves usage-based analysis, i.e., focusing on what has changed recently, ensuring that the firewall rules are kept up-to-date, and flagging and/or removing rules that are no longer needed so that the firewall does not become unwieldy and risk conflicts or duplicates.

Rules Cleanup

This step allows us to identify key and useful data, as follows:

  • Unused Rules:
    • have not matched traffic in the last NNN days
  • Unused Objects:
    • Do not belong to any rule that matched traffic in the last NNN days
  • Most / Least used rules
  • Last date that rule was used
    • Even if it is listed as “unused” due to logging configuration settings

These considerations and notes should be borne in mind for this step, as follows:

  • Over time:
    • Applications are discontinued
    • Servers are relocated to other IP addresses
    • Test environments move to production
    • Business partnerships change
    • Networks are re-architected
    • Routing is changed
  • Result: Firewalls still have the rules – but the traffic is gone
  • Idea: Track and flag rules and objects that have not been used “recently”
  • Firewalls can log each matched packet
  • Log includes rule number, timestamp, and more
  • Basic approach:
    • 1) Filter the logs based on rule number
    • 2) Find the missing rule numbers and delete those rules
  • Challenge #1: Logging is configured per rule
    • Some rules are not configured to produce logs
  • Solution #1: List rules that do not produce logs separately
  • Challenge #2: Rule Insertions & Deletions change the rule numbers!
    • Which rule corresponds to what used to be called rule 101 in Nov ’07?
    • Makes long-term statistics unreliable
  • Solution #2: Vendor attaches a unique “rule_id” to each rule, such that:
    • Reported to log
    • Remains with rule through any rule add/remove/modify
  • Cisco Firewalls & Routers maintain a per-rule hit-counter
  • Advantages:
    • Unrelated to logging: un-logged rules are counted too
    • Rule insertions & deletions do not affect the hit-counters
  • Challenge:
    • Hit-counters are reset to zero when device reboots
  • Solution:
    • Take periodic snapshots
    • Attach pseudo rule_uids, homogenize the snapshots
    • Make sure not to double-count …
  • Some rules only work occasionally or rarely
    • High-shopping season
    • Disaster recovery rules – tested semi-annually
    • Need usage information of many months
  • Challenge:
    • Log files can become huge – querying extended historical data can have a real impact on the production log server
    • Logs are discarded or rotated
    • Hit-counters are occasionally set to 0
  • Solution:
    • Process the raw usage information frequently (daily)
    • … But keep concise summaries available (forever)
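The “process raw logs daily, keep concise summaries forever” idea can be sketched in a few lines of Python. The JSON log format with `rule_id` and `timestamp` fields is an assumption for illustration; real firewall logs need vendor-specific parsing, and the per-rule `rule_id` must survive rule insertions and deletions as described above.

```python
import json

def update_summary(summary, daily_log_lines):
    """Fold one day of firewall logs into a concise per-rule summary,
    so the raw logs can be rotated while usage history is kept forever.
    Assumes each line is JSON with 'rule_id' and an ISO-8601 'timestamp'
    (ISO-8601 strings compare correctly as plain strings)."""
    for line in daily_log_lines:
        event = json.loads(line)
        rid, ts = event["rule_id"], event["timestamp"]
        entry = summary.setdefault(rid, {"hits": 0, "last_seen": ts})
        entry["hits"] += 1
        entry["last_seen"] = max(entry["last_seen"], ts)
    return summary

def unused_rules(all_rule_ids, summary, cutoff_iso):
    """Rules with no recorded hits at all, or none since the cutoff date.
    Remember to list rules configured not to log separately."""
    return sorted(
        rid for rid in all_rule_ids
        if rid not in summary or summary[rid]["last_seen"] < cutoff_iso
    )
```

Running `update_summary` once per day keeps the summary small and immune to log rotation, and a long cutoff window (many months) accommodates rules that fire only occasionally, such as disaster-recovery or seasonal rules.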

Cleanup Phase 3: Performance Optimization

To provide a measurable attribute of firewall performance that shows the improvement from policy optimization, we use a metric called Rules Matched Per Packet (RMPP).

Rule Optimization

RMPP is simply a calculation of the average number of rules the firewall tested until it reached the rule that matched a packet (including the matched rule). For example:

If the firewall policy consists of only one rule (allow or deny all) that matches everything – RMPP will be 1. If the firewall policy consists of 100 rules, such that rule #1 matches 20% of the packets, rule #10 matches 30% and rule #100 matches 50% of the packets:

RMPP = 1 * 20% + 10 * 30% + 100 * 50% = 0.2 + 3 + 50 = 53.2

Firewalls do in fact test the rules in sequence, one after another, until they reach the matching rule, and each tested rule contributes to the firewall’s CPU utilization. Therefore, optimizing the policy to decrease the RMPP score will decrease the firewall CPU utilization and greatly improve overall performance.

Building on the previous example, if rule #100 (that matches 50% of the packets) can be relocated to position #50 – without modifying the firewall policy decisions – the RMPP will be reduced significantly:

RMPP = 1 * 20% + 10 * 30% + 50 * 50% = 0.2 + 3 + 25 = 28.2

This simple change, which can be achieved by reordering the rules, can produce a 47% improvement in firewall performance.
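The RMPP arithmetic above can be verified with a few lines of Python. The rule positions and traffic shares are taken directly from the example in the text:

```python
def rmpp(rule_traffic):
    """Rules Matched Per Packet: the average number of rules the firewall
    tests per packet, i.e. sum of (rule position * fraction of traffic
    matched at that position), including the matching rule itself."""
    return sum(position * share for position, share in rule_traffic)

# Before optimization: rule #1 matches 20%, #10 matches 30%, #100 matches 50%.
before = rmpp([(1, 0.20), (10, 0.30), (100, 0.50)])   # 53.2
# After moving the 50% rule from position #100 to position #50.
after = rmpp([(1, 0.20), (10, 0.30), (50, 0.50)])     # 28.2
improvement = (before - after) / before               # ~0.47, i.e. ~47%
```

Note that the reorder is only valid if it does not change policy decisions, i.e., the relocated rule must not jump over any rule it overlaps with.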

Conclusion

Firewall administrators can achieve significant and measurable performance improvements for their complex firewalls by using these cleanup, lifecycle management, and policy optimization (with rule reordering) techniques. Many commercial tools help in policy cleanup by identifying rules that are unused, covered, or disabled and should ideally be removed, in addition to unattached, empty, duplicate, and unused objects. These tools help to eliminate security risks and keep the firewall policy well managed by alerting administrators.

The more veteran firewall audit vendor list includes: Tufin Software Technologies, AlgoSec, Secure Passage and Athena Security — and then RedSeal Systems and Skybox Security, which are primarily risk-mitigation tools, and so go beyond firewall audit to feature risk-assessment and risk-management capabilities.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices 4

Security Program Best-Practices – Part 4

Security Program Best-Practices – Part 1
Part 2
Part 3
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments.

As a refresher, in Part 1 we shared some typical gaps, deficiencies, or needs for improvement summarized in the Opportunity Matrix below. The Opportunity Matrix can be used as a capability maturity assessment and iterative planning tool to present proposed next steps to executive management for approval and funding.

Opportunity Matrix Summary

Part 1 through Part 3 of the Security Program Best-Practices series covered an overview as well as Gap 01 through Gap 07 inclusive. In this blog we will discuss Gap 08 – Integrate Central Security Logging through Gap 09 – Establish Network Security Operations, per summary below.

  • GAP 01 – Identify Requirements: Security Policy, Regulation, and Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 08: Integrate Central Security Logging

Business Problem

To enable and deploy a defensible security posture, pervasive and mission-critical information technology and hyper-extended networks must be more scrupulously monitored to detect anomalies and threats. High traffic volumes are also associated with higher threat levels, making automated network monitoring, alerting, and response indispensable. Automated monitoring improves system security, performance, and availability by allowing management by fact. Automation also frees the IT team to focus on exceptions, which in turn simplifies holistically managing large amounts of event data.

Vulnerability Types

Being able to monitor various instrumentation telemetry data sources and event logs gives an administrator a substantial advantage in identifying threats early on – rather than investigating them after the fact. A sound logging strategy is the centerpiece in any organization’s “big picture – big data” network security plan. The presence of event monitoring within its log strategy helps distinguish a proactive plan from a reactive plan.

It is well established among network security professionals that the greatest threats to network security are in fact internal – they often originate in the same building, the same floor perhaps, and often right down the hall. The source may be a disgruntled employee, a curious staff member in the payroll department, or a bored sales representative. For several years, this threat was overlooked for the sexier external threat – the hackers working in dark home offices late at night or a competitor’s agent of corporate espionage.

To a network security administrator, event logs are like a history book or the gauges of an automobile. Event logs allow administrators to look back at the recent history of a server or network device and see trends, failures, successes, and other vital information to the organization.

Botnet Army

Our richly interconnected online world has faced an ever increasing volume of malware and worm variants — even botnets. They exploit vulnerabilities in, for example, the Windows operating system and systematically reproduce across the organization. All the while, servers, routers, and other network devices quietly log these events across LANs and WANs. For administrators, these log files give a snapshot of a window (excuse the pun) in time that shows when, where, and most of the time, how the infection or compromise entered their controlled space.

Event logs also hold potentially valuable forensic evidence. In the aftermath of a network security breach, event logs hold all of the information about the breach: how it happened, when it happened, and, in the end, the keys to preventing another breach. This data is key to the ability to Detect, Contain, and Eradicate, as well as to investigate the root cause, address it, and prevent recurrence in the future.
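As a starting point for central security logging, most Unix-like hosts and many network devices can forward syslog to a collector. Below is a minimal rsyslog sketch; the collector address 192.0.2.10 is a placeholder (TEST-NET), and the facility filters, port, and retention should be adapted to your environment and policy.

```
# /etc/rsyslog.d/50-central.conf -- minimal forwarding sketch.
# '@@' forwards over TCP (more reliable than UDP's single '@').
*.* @@192.0.2.10:514

# Keep a local copy so forensic evidence survives a collector outage.
*.info /var/log/central-copy.log
```

Centralizing logs this way is also what makes the log-driven firewall rule usage analysis in Gap 10 practical, since all rule-match events land in one searchable place.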

Gap 09: Establish Network Security Operations

Business Problem

The problem with network security is not the lack of good security tools; it is the management of those tools and the exposure to human error. Large networks generate an overwhelming amount of logs and security events. Firewalls, intrusion detection systems, web servers, authentication devices, and many other network elements contribute to more and more logs which need to be analyzed and produce actionable information.

Holistic Logging

There is a lot of noise at first, and false positives that need to be resolved and addressed through profiling network traffic and tuning network security technologies to the organization's business – to detect anomalies and leverage the true potential and value of the technology. Too many organizations deploy the solution out-of-the-box and stop there, disappointed by all of the noise and overwhelmed by the task at hand.

However, this ongoing effort and its optimization can reduce the number of alerts from thousands per day to dozens. When a correlation occurs, a simple message saying that a particular server has been attacked with a technique which is likely to succeed can be sent to system owners, operations people, and other stakeholders. The operator starts to realize value from the technology and its automation, focusing on those alerts and events that need action – that need further investigation to determine whether they represent a breach or not.

Attackers typically create a smoke screen, a set of decoys that obscure the actual compromise or infection so that it is lost in the noise and security operators are so overwhelmed that they do not detect the stealthy attack. This is validated by the recent spate of DDoS attacks that not only seek to deny normal service but also seek to compromise servers under the cover of the attack. Many SIEM solutions generate a lot of noise out-of-the-box and need tuning to weed out false positives and optimize.

Detection and Response

Systems fail and intrusions occur. At some point compromise is inevitable. Therefore, detection and containment is imperative. The earlier an intrusion or infection is detected, the greater the ability of the organization to mitigate the risk. Intrusion detection is considered the second line of perimeter defense, after the firewall. Intrusions can lead to malicious acts such as: identity theft; compromise of confidential information; and unauthorized changes in files, systems, and device configurations.

Threat Landscape

An organization's ability to detect and prevent intrusions adds more depth to its defensive security posture. Organizations must be aware that intrusion detection alone will not mitigate the risk of an intrusion. Mitigation can only occur with a timely and appropriate response. A prudent response program incorporates people and processes in addition to technology, and starts with the creation of a computer security incident response team (CSIRT) that will be the initial responder when an incident is identified. In addition to the CSIRT, policies must be developed to guide the organization and team in responding to an event. Types of events and the specific procedures to be followed also need to be defined. The development of an incident response program is typically mandated by regulation, international standards, or industry best-practices.

The timely detection of an intrusion coupled with being prepared to respond is vital to minimizing financial, production, and operational losses. Specific actions and responsibilities need to be pre-assigned and the appropriate training provided. In addition, containment and restoration strategies need to be outlined that address: isolation of the compromised system; increased monitoring; collection and preservation of evidence; and notification to law enforcement, regulators, and other affected parties.

Continuous Improvement

Monitoring and updating the security program is essential to maintaining the effectiveness of the program. A static program will be ineffective over time and can leave the organization with a false sense of security. Monitoring should include both non-technical as well as technical issues.

Plan Do Check Act

Non-technical issues would include changes in business processes, policies and procedures, locations, sensitivity of data, key personnel, and organizational changes.

Technical issues include monitoring for vulnerabilities, changes in systems, service providers, configuration, users, products, and services. When changes do occur, it is imperative that they are reviewed for accuracy and legitimacy and the program is adjusted to reflect the changes and ensure continued security and operational success.

Accidental changes can be just as damaging as malicious or fraudulent change activities – resulting in increased costs for remediation and potential losses or a negative effect on the organization's top-line revenue. Best practices mandate the monitoring of all changes, intended and unintended, creating an audit trail that details when, what, and how the change occurred. The use of automated change control and audit tools will also enhance operational efficiency by increasing the effectiveness and productivity of your security personnel.

Each change can potentially create a vulnerability or weakness in the security program if not properly evaluated, tested, and deployed. Therefore, strong change control procedures and monitoring are critical to reduce the exposure to financial losses, reputation damage, and loss of productivity.

Validation: Trust but Verify

To assure that its security strategies are adequate, each organization must test its controls against the risk events that were identified through its formal risk assessment. The higher the probability and negative effect of a risk event, the greater the need to validate the effectiveness of the security controls. The type of test to perform and its frequency should also be based on risk.

Risk Management

Prior to testing, detailed test plans need to be developed to ensure testing is appropriate and controls are established to reduce the risk to data integrity, confidentiality, and ensure availability. Test results need to be measurable and traceable to provide assurances that the security strategy is meeting security objectives.

There are a variety of testing methodologies and tools available, many of which can be automated to improve efficiency and enable independence. Independent diagnostic tests include penetration tests, audits, and gap assessments that are performed by credible individuals who are considered independent of the design, installation, maintenance, and operation of the test subject area. Examples of resources that will help support and streamline the testing efforts include: log and audit files generated via security event management systems, change management reports, automated audit tools coupled with penetration testing, prior security gap assessments findings and recommendations, and internal IT audit findings and recommendations from prior audits.

No one control or solution can ever guarantee 100 percent security. High-performing organizations understand that business and technology risk management best practices mandate a defense-in-depth security approach that includes multiple controls and can be validated with internal and external audit resources. When properly aligned with the organization’s risk profile, all of the controls discussed above help to establish a practical and prudent risk-based security posture.
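The layered-controls intuition can be illustrated with a back-of-the-envelope calculation; the independence of the layers and the 20% miss rate are simplifying assumptions for illustration, not real-world figures:

```python
def residual_risk(miss_rates):
    """Probability an attack evades every layer, assuming independent layers.

    Each element of miss_rates is the fraction of attacks that one
    control fails to stop (0.0 = perfect control, 1.0 = no control).
    """
    p = 1.0
    for m in miss_rates:
        p *= m
    return p

# One control missing 20% of attacks leaves a 20% residual;
# three such layered controls leave roughly 0.8% residual.
single = residual_risk([0.2])
layered = residual_risk([0.2, 0.2, 0.2])
```

Real controls are rarely independent (a shared misconfiguration can defeat several at once), which is exactly why validation with internal and external audit resources matters.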

Balancing Security

When properly aligned with the organization’s business goals, audit personnel and tools can validate the appropriateness of these controls and help to ensure operational excellence and a secure infrastructure.

Coming Soon

Security Program Best-Practices – Part 5 will complete this Security Governance Series with a topic significant enough to warrant its own blog: Gap 10 – Firewall Rule Lifecycle Management, with discussion and helpful advice on its key components.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices – Part 3

Security Program Best-Practices – Part 1
Part 2
Part 4
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments.

As a refresher, in Part 1 we shared some typical gaps, deficiencies, or needed improvements, summarized in the Opportunity Matrix below. The Opportunity Matrix can be used as a capability maturity assessment and iterative planning tool to present proposed next steps to executive management for approval and funding.

Opportunity Matrix Summary

Part 1 and Part 2 of the Security Program Best-Practices series covered an overview as well as Gap 01 through Gap 05 inclusive. In this blog we will discuss Gap 06 – Develop Network Security Architecture and Gap 07 – Develop Network Security Roadmap, per the summary below.

  • GAP 01 – Identify Requirements: Security Policy, Regulations, and Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 06: Develop Network Security Architecture

Business Problem

From the earliest days of networking, security manifested itself in strong information security perimeter defenses. As long as the perimeter was secure, the assets being protected didn’t need to be monitored or managed: the command-and-control environment gave people assurance that core data was safe, since unauthorized access was prevented.

Today’s hyper-extended connected enterprise faces a security paradox. The very openness and ubiquity that make the Internet such a powerful business tool also make it a tremendous liability. The Internet was designed to share, not to protect. The ports and portals that welcome remote sites, mobile users, and business partners into the trusted internal network also potentially welcome cyber-thieves, hackers, and others who would misappropriate network resources for personal gain.

Most companies didn’t design their current security architecture; rather, they built it over time, based on need: a firewall here, an intrusion prevention system there.

As a result, many businesses rely on a bewildering collection of stand-alone security systems. That’s a problem in two ways. First, without a clear understanding of how all your defenses fit together, it’s impossible to know whether they provide complete protection. Second, managing and integrating all those systems costs time and money. Integrating security into a holistic architecture that enables, yet manages, role-based access is critical to success.

Chart Course

That’s why many organizations desire ways to simplify their security architectures.

“Security done right is the key to Anywhere Anytime by Any Device Access”

Developing a Framework

Network security architecture is defined as the desired structure of an enterprise’s technology components and technical safeguards. With network security architecture in place, an enterprise has a framework for more informed decision making and a guide for ongoing planning, design, and implementation activities.

Establish Coordinates –

  • Pinpoint your Business Requirements and Vision
  • Analyze Current State in terms of Infrastructure and Services

Harmonize –

  • Establish a Holistic yet Defensible Network Security Architecture
  • Identify Organization Stakeholders and Seek Consensus

Chart your Course –

  • Develop a Security Roadmap (Adaptive Iterative Evolution)
  • Deliver Prioritized Action Plans

A Defensible Network Security Architecture provides a conceptual, physical, and procedural framework of best recommendations and solutions for network security. It serves as an important reference guide for IT professionals responsible for designing and implementing secure networks.

blueprint

An architecture typically provides the following:

  • A way to evaluate applicability of new technologies, products, and services
  • A blueprint for future applications and infrastructure growth
  • A framework for security technology decision making
  • A framework that guides the security implementation
  • Decomposition into modular, flexible components (enabling reuse of proven modules as the organization grows, e.g., a remote-office module)
  • A method of cost avoidance
  • A macro view of security-relevant systems and components
  • A method for creating and documenting consensus
  • A statement of direction for IT

A Defensible Network Security Architecture is realistic.

It assumes that all components of an IT infrastructure are targets … that even internal users could be network threats … that attacks are inevitable … that network performance cannot be compromised by processing-intensive security measures … and that IT budgets are constrained.

The Network Security Architecture should consider and include the following:

  • Business Requirements
  • Regulatory Requirements
  • Security Policy Requirements
  • Current Network Security Architecture
  • Goal-State Network Security Architecture
  • High-level gap assessment

The Defensible Network Security Architecture promotes a process, rather than an endpoint. Effective security is not achieved through a one-time initiative. This architecture outlines measures for strong ongoing policy management, reflecting both human and technical factors. For more details see the Security Architecture Series, referenced below.

The above set of blogs takes the reader through a detailed step-by-step development of a network security architecture with the latter blogs presenting an Architecture Realization Case Study. Future blogs will present network architecture and design templates that make use of security zones to enable Access Anywhere Anytime by Any Device.

Gap 07: Develop Network Security Roadmap

Business Problem

“A good plan executed today is better than a perfect plan executed at some indefinite point in the future.”

—General George S. Patton Jr.

Computers and the environments in which they operate are dynamic. System technology and users, data and information in the systems, risks associated with the system, business drivers, and security requirements are ever-changing. Many types of changes affect security: technological developments (whether adopted by the system owner or available for use by others); connection to external networks; a change in the value or use of information; or the emergence of a new threat.

Sample Roadmap

In addition, security is never perfect when a system is implemented. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security. Changes in the system or the environment can create new vulnerabilities. Strict adherence to procedures is rare, and procedures become outdated over time.

These issues make it necessary to periodically reassess the security architecture and its supporting plans in a feedback loop that both addresses tactical exceptions and refreshes the vision and objectives.

Adaptive Security Roadmap

What is required is a systematic business risk approach to security that integrates and drives security into the network as an operational service. This is accomplished with an Adaptive Security Roadmap and iterative lifecycle process that refreshes the architecture on an annual or quarterly basis to establish, implement, operate, monitor, review, maintain, and improve network security.

The first step in the process is to develop the current state (see figure below). The results of the security baseline and assessment (the current infrastructure environment) are analyzed. Factors such as the network security perimeter, Virtual Private Networks (VPNs), intranet, extranet, partner connections, remote access, and access to assets are considered to develop the current state and security-risk profile.

Adaptive Lifecycle

The network security architecture (from Gap 06) creates the goal state. This process takes the current state and security-risk profile and adds the business drivers, prioritized requirements, policy, legal constraints, and so on. From this step, an updated and finalized network security architecture is developed and shared with the stakeholders to gain consensus.

The final step is to compare the current state with the goal state and identify the projects required to transition the current infrastructure and realize the architecture goals. In a migration strategy workshop with the business units and stakeholders, the viable projects are selected based upon their dependencies, priorities, available resources, and budgets, forming the annual plan of infrastructure improvements.
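The compare-and-prioritize step can be sketched as a simple capability gap analysis; the capability names and maturity levels below are hypothetical:

```python
# Maturity level 0-5 per capability: assessed current state vs. goal state
current = {"segmentation": 2, "central logging": 1, "remote access": 3}
goal    = {"segmentation": 4, "central logging": 3, "remote access": 3}

def gap_projects(current, goal):
    """Capabilities below the goal state become candidate projects,
    ordered largest maturity gap first."""
    gaps = {cap: goal[cap] - current.get(cap, 0) for cap in goal}
    return [cap for cap, g in sorted(gaps.items(), key=lambda kv: -kv[1]) if g > 0]

projects = gap_projects(current, goal)
# "remote access" already meets its goal, so it generates no project
```

In the workshop, this raw gap ordering would then be adjusted for dependencies, available resources, and budget before becoming the annual plan.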

During the next planning year, the process is repeated and the architecture updated with new business requirements, new technologies, new solutions, and so on. A follow-on assessment of the current infrastructure captures improvements together with any new threats, vulnerabilities, and exposures, and documents the new current state and security-risk profile. Performing a gap analysis and migration strategy planning workshop contrasting the new current state and goal state allows an updated plan to be developed for that year.

Architecture Evolution

Over time (see figure above), the security architecture serves as a baseline for consensus and direction while remaining active and open to update. This process allows the security architecture to adapt to the needs of the business: it evolves and sets future objectives.

At the same time, the annual plan sets the stage for the projects that need to occur that year, and the improvements begin to track with the architecture. Finally, with the proactive asset, risk, and policy management and infrastructure improvements, the security-risk profile is also managed, resulting in risk reduction. In this manner, not only does the security architecture drive the IT and network infrastructure direction, but it also enables the illustration of tangible results, winning continued support for the program.

Technology Roadmap

What is needed is an organizational approach to network security, with the appropriate network security architecture, governance, policy, standards, compliance verification, and audit.

From an operations perspective, time is money. This is why project management is an important process domain. It helps ensure that the outcomes of information technology projects are on time, within budget, and deliver the expected outcomes.

For more details on developing a Network Security Roadmap together with lifecycle management process see the Adaptive Security Lifecycle blog.

Coming Soon

Security Program Best-Practices – Part 4 will complete this Security Governance Series with the next set of gaps for discussion and helpful advice on key components.

Thanks for your interest!

Nige the Security Guy.

Architecture Case Study – Part 2

Architecture Case Study – Part 1

The Security Architecture Series shares the steps involved in requirements gathering, baseline, and product and solution selection, through to realizing the architecture. This blog continues an Architecture Realization Case Study that applies those principles and recommendations as a practical example.

The case study clearly demonstrates that a focus on the details, i.e., thorough preparation and planning, enables and ensures a smooth and successful implementation and migration.

Security RDA Evolution

In Part 1 the blog took readers through the highly collaborative use-case program to develop an Architecture, identify requirements in an RFI, map responses and short-list vendor solutions, and conduct a network security evaluation to select and contrast best-of-breed versus integrated solutions. The steps presented were as follows:

  • Develop Architecture
  • Requirements
    • Specify Functional Requirements (RFI)
    • Request Information from the vendor community (Distribute RFI)
    • Review RFI responses
    • Select vendors for product/solution evaluation
  • Vendor/Product Selection
    • Conduct bake-off testing with business unit participation
    • Review evaluation scorecard results
    • Conduct pilot of highest ranked solution
    • Review pilot results
    • Technical Recommendation

In Part 2 this blog discusses the development of the network security design based on the selected solutions, the development of a repeatable deployment program, and the phased implementation and migration across 20+ business units. In this blog we will present the following:

  • Develop Design
  • Develop Implementation Program
    • 4 Phases
    • Alpha Test
    • Beta Test
  • Deploy/Execute
    • 9 Step Program

Based on the Technical Recommendation developed in Part 1, a best-of-breed solution sourced from a single vendor was identified that closely mapped to the RFI requirements and tested best in all 4 categories: Firewall, IPS, VPN, and Management. The selected solution was deployed in the business unit simulation lab as a proof of concept to further validate it and to provide practical hands-on study as a basis to inform the design phase.

Baseline Network Standard Design

For the Design phase the program team re-convened the Network Security Working Group to work together to develop the design, the same technique that was employed for the Architecture and RFI.

The Design was already defined at a conceptual level by the Architecture, and the Functional Requirements (RFI) added a lot of bounding detail. In this phase we wanted to refine these into a more detailed design and fully leverage the functionality and capability of the selected solutions, while being cognizant of any constraints or caveats.

Our goal was to establish a practical and sensible baseline design: something that could be deployed across all 20+ business units easily and then evolved over time once the prototype foundation was in place. We did not want to over-complicate things; the aim was to keep things simple, roll out the solutions, migrate, and then evolve in discrete phases of capability maturity.

The team developed the Baseline Network Standard Design while using the lab both as a proof of concept and to validate the design, finalizing it per the approval processes used for the Architecture.

Deployment Program Development

The major goal for the development of the deployment program was that it needed to be programmatic and repeatable, since the solution components (Firewall, VPN, IPS, Management) were being deployed to 20+ business units with varying current technologies and maturity levels. The steps for this process were identified as follows:

  • Draft Stage
    • developed the initial framework with a technical approach, proof of concept, initial templates and tools, and dry run
  • Alpha Stage
    • partnered with an alpha business unit to initially validate the technical design, deployment approach, templates and tools
  • Development Stage
    • fully developed and baked the Reference Guides, Templates, Tools, Bill of Materials (BOM) template, and Kick-Off presentation
  • Beta Stage
    • partnered with two business units to more precisely verify the deployment program to work out any glitches and ensure repeatability
  • Deployment Stage (Production)
    • program developers handed off to 3 deployment teams and fell back to a tier-support role to monitor and refine the process
    • deployment teams partnered with early-adopter business units and rolled out to 20+ business units on schedule

Architecture Program Development

The technical deployment aspects of the program were complemented with a plan that broke the 4 solution components (Firewall, VPN, IPS, Management) into a phased deployment, as illustrated below.

Security Program

Each Phase was then further defined to describe the timeline with pre-deployment planning, official project kick-off, deployment and then migration and closeout, as illustrated below.

Security Plan

Once the Alpha and Beta stages were completed and any lessons learned were factored into the program, the team worked with the business units to identify the flow of deployments based on candidate selection criteria and to develop a program deployment schedule using the 3 deployment teams.

Phased Deployment and Migration

Once the technical and process components of the deployment program were developed, they were validated and tested with the above methodology, through Alpha, Beta, Early Adopters, and Rollout. Each step built upon the last, both validating and refining the technical design and process towards the ‘production’ rollout with the 3 deployment teams. Deployment consisted of 4 high-level categories with 9 discrete and repeatable tasks, as follows:

Deployment Program

Kick-Off (Initiation)

In this phase the deployment teams engaged with the business unit to perform Discovery: gathering data to profile the business unit in terms of current state, business requirements, and so on as input into the design task. A formal kick-off meeting was also held to present the project to stakeholders, address any questions, and discuss and validate the project plan timeline.

Planning

In this phase the detailed design was developed using the program Design Reference Guide to populate the business unit template, capturing custom options, specifics of the network architecture, and the detailed configuration. The business unit design was then reviewed and approved.

The Approved Equipment List was used to identify and size the components required (Firewalls, VPN, IPS, Management) for the target business unit in order to develop the Bill of Materials (BOM) and place the order for devices to be ready for staging and then deployment.

Based upon the design, preparation planning was performed to discuss and document the logistics and maintenance windows for the Execution phase. Two deployment maintenance windows were scheduled to provide a backup in case an unforeseen problem occurred. The second window was never needed: the key to success was focused and thorough preparation and planning, which enabled flawless execution. A lot of contingency was built into the program.

Execution

In this phase the team split the work into 4 discrete tracks: Site Readiness, Implementation, Migration, and Decommission. These ran in parallel, each with a different focus.

Site Readiness focused on the Network Infrastructure Layer. The team had designed a Transition Zone that enabled the new components to be deployed and tested without affecting the production network. It also enabled rapid, seamless cutover to the new environment for validation, and fallback if needed. This track covered technical and logistical preparation and configuration.

Implementation focused on the Network Security Layer, validating the new components in terms of their base functionality as well as their integration into network services such as authentication, logging, and network management. Functional testing was also performed to ensure that the environment was operating correctly in terms of stateful failover, remote access, and so on.

Migration focused on the Application Layer: translation and conversion of existing policy and rules (ACLs, firewall rules) to the new solution. There were two options, expedited, where all new rules were applied and tested at once, or incremental, depending upon business unit size and/or complexity. A comprehensive application testing plan was used to ensure that during cutover all key applications and communications operated correctly and no issues arose.

Deployment Comparison

Training

During project initiation, training was provided on the technology solutions, in-scope functionality, technical design, future direction, operational best practices, and network security management using the lab. The business unit stakeholders were part of the project team, intimately involved in the design, implementation, migration, and cutover, to make them as hands-on as possible and capable of taking over at transition.

Closeout

Once the deployment was complete, a project closeout meeting was held to ensure that the business unit was comfortable with operations and that the training program and knowledge sharing had prepared it for ongoing operations and support. The project was closed out, and the deployment team stepped back to provide tier support and subject matter expertise (SME) as needed.

Conclusion

This Architecture Case Study can be seen as a ‘Cadillac’ use case: an atypically large organization with 20+ business units that had the resources, time, and budget to develop a comprehensive and repeatable program that crosses all the t’s and dots all the i’s. However, it is shared with readers as an example from which specific best practices and/or subsets of the program can be successfully leveraged by any organization to improve or add to their current deployment or migration processes. Further details are available upon request.

Think You’re Secure? Think Again.
Security Architecture Primer
Security Architecture Baseline
Risk-Aware Security Architecture
Develop Security Architecture
Product and Solution Selection
Security Architecture Implementation
Adaptive Security Lifecycle

Thanks for your interest!

Nige the Security Guy.

Architecture Case Study – Part 1

Architecture Case Study – Part 2

In the Security Architecture Series of blogs we have shared all of the steps involved in requirements gathering, baseline, and product and solution selection, through to realizing the architecture. This blog presents an Architecture Case Study that applies those principles and recommendations as a practical example. The illustration provides a conceptual, simplified view of the program use case.

Defense in Depth

Part 1 (this blog) takes the reader from Architecture development through to the Technical Recommendation; Part 2 takes the reader from Design to Deployment strategy, with Implementation and Migration.

Program Overview

The overall goal of the project was to standardize across the organization and all of its 20+ business units. The business units are primarily autonomous, with different types of technology and infrastructure at varying degrees of maturity and security. The status quo presented a series of risks to both the organization and each of the business units.

The cost/benefits were multiple, not just in terms of standardization but also total cost of ownership (TCO) and Return on Security Investment (ROSI) from purchasing technology at volume discount while gaining increased visibility and support from the vendor(s). However, the primary goal and benefit was to establish and foster a spirit of collaboration, sharing, and cross-pollination: to work together towards a common vision.

Architecture Case Study

The overall high-level approach was defined as follows:

  • Develop Architecture
  • Requirements
    • Specify Functional Requirements (RFI)
    • Request Information from the vendor community (Distribute RFI)
    • Review RFI responses
    • Select vendors for product/solution evaluation
  • Vendor/Product Selection
    • Conduct bake-off testing with business unit participation
    • Review evaluation scorecard results
    • Conduct pilot of highest ranked solution
    • Review pilot results
    • Technical Recommendation
  • Develop Design
  • Develop Implementation Program
    • 4 Phases
    • Alpha Test
    • Beta Test
  • Deploy/Execute
    • 9 Step Program

Baseline Network Standard Architecture

I worked on the project as a consultant in the role of Program Technical Architect, as part of the overall Architecture Governance and Steering Committee. My role was to guide the direction and act as technical lead, as well as to perform much of the detailed work developing the actual deliverables through collaboration and interaction.

Network Security Working Group (WG)

The first step was to develop a Network Security Working Group (WG) that included stakeholders from the various business units, contributing at two levels, as follows:

  • Level 1 – A small representative sample of core members who were involved in the brainstorming sessions to represent their business unit and contribute input on both the architecture and any unique requirements
  • Level 2 – A stakeholder from every business unit who was involved in monthly or quarterly (as appropriate) review and approval of the emerging work product and progress to enable consensus and buy-in.

Collaboration

Collaboration was key to the success of the project. We wanted to involve stakeholders in every stage of the process and to ensure that their contributions were captured and recorded. Brainstorming sessions were used extensively at various locations, with preparation to seed and stimulate the discussion, a facilitator, and scribes to record and document.

Architecture Draft Review

A series of review cycles were used with a broader and broader audience to ensure that the architecture aligned with both current and future strategy and needs for the business units. The architecture document contains the following sections:

  • Architectural Principles
  • Network Models
  • Physical Layer Design
  • Supported Protocols
  • Network Performance Architecture
  • Network Security Architecture
    • Areas
    • Perimeters
    • Zones
    • Controls
    • Management
  • Network Management Architecture
  • Enabling Services
  • Appendix
    • Profiling BU Network Traffic
    • Modeling Steps
    • Example of Modeling a BU Network

The finalization and ratification of the Network Baseline Standard Architecture was a major accomplishment for the organization: not only did it lay the groundwork for the success of this specific program, it also laid the framework for future projects across initiatives such as Wireless and Evolving Security.

Requirements Specification (RFI)

The RFI – Network Security Functional Requirements document was developed next by the Network Security Working Group. The team worked closely together to identify the functional requirements and assign a relative priority of High, Medium, or Low.

RFI Evaluation Criteria

Once the RFI was completed and reviewed, the Network Security Working Group convened a meeting to establish the RFI response evaluation criteria and scorecard to be used for the analysis of responses from bidding vendors. The functional requirements originally identified as High were further examined, and 19 were selected and rated as MUST by the group. A MUST designation implied that the associated security device would be eliminated from further consideration if it did not comply.

RFI Requirements Sample

All functional requirements were then assigned a maximum possible score of 10, 5, or 3, respectively.
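A sketch of how such a scorecard might work; the requirement names, the MUST set, and the mapping of 10/5/3 weights to priorities are illustrative assumptions, not the actual RFI contents:

```python
MUST = {"stateful-failover", "central-mgmt"}    # hypothetical MUST requirements
WEIGHT = {"high": 10, "medium": 5, "low": 3}    # maximum score per priority

def score_vendor(responses, priorities):
    """responses: requirement -> points awarded for that vendor's answer.
    priorities: requirement -> 'high' | 'medium' | 'low'.
    A vendor failing any MUST requirement is eliminated (returns None)."""
    for req in MUST:
        if responses.get(req, 0) == 0:
            return None
    # Cap each award at the maximum allowed for the requirement's priority
    return sum(min(pts, WEIGHT[priorities[req]])
               for req, pts in responses.items())

priorities = {"stateful-failover": "high", "central-mgmt": "high", "ipv6": "low"}
vendor_a = {"stateful-failover": 10, "central-mgmt": 8, "ipv6": 3}
vendor_b = {"stateful-failover": 0, "central-mgmt": 10, "ipv6": 3}  # fails a MUST

a_score = score_vendor(vendor_a, priorities)    # 21
b_score = score_vendor(vendor_b, priorities)    # None: eliminated
```

Sorting the surviving scores per device category yields the product stack ranking; summing per vendor yields the vendor stack ranking.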

Evaluators and Decision-makers

To ensure that the RFI responses were analyzed in an independent and objective manner, the Network Security Working Group assigned an Evaluation Team comprised primarily of consultants. The Evaluation Team was solely responsible for conducting the RFI response analysis to select vendors and solutions, and also performed the network security equipment testing. However, the team did not participate in any decision-making and acted only as advisors. The Decision-maker Team was comprised of members of the extended Network Security Working Group (Level 2).

RFI Evaluation Scorecard

The Evaluation Team developed an Evaluation Scorecard that took all response format files from bidding vendors and consolidated them into the Consolidated Vendor Response Form file. This consolidated file contained macros to process the entries from all of the bidding vendors and to create two worksheets, as follows:

  • Product Stack Ranking – summary of scores based upon device category
  • Vendor Stack Ranking – summary of scores by vendor

Vendor Evaluation

RFI Evaluation Methodology

The Evaluation Team adopted an objective method of evaluation focused on the functional requirements as defined by the Network Security Working Group development team and communicated to the vendors in the Security RFI. This led to the following stepwise procedure.

  1. Evaluate ‘Best of Breed’ responses to derive the top 3 vendors in each of the following four categories – Firewall, VPN, IPS, and Management.
  2. If possible, select the 4 vendors appearing most often across these rankings for inclusion in the network security equipment testing.
  3. Evaluate ‘Integrated Portfolio’ responses, if any, from all remaining vendors to derive the top 3 portfolio vendors.
  4. Select the best vendor from this ranking for inclusion in the network security equipment testing.

For quality control, a Conformance Check was also conducted to ensure that every ‘Yes’ or ‘Partial’ response had an associated supporting Response Reference Section and/or comment to back up the vendor’s statement.
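A conformance check like this is easy to automate; a minimal sketch, where the requirement IDs and response format are hypothetical:

```python
def conformance_issues(responses):
    """Flag 'Yes'/'Partial' answers that lack a supporting reference."""
    return [req for req, (answer, reference) in responses.items()
            if answer in ("Yes", "Partial") and not reference]

responses = {
    "REQ-001": ("Yes", "Section 3.2"),
    "REQ-002": ("Partial", ""),        # claim with no evidence -> flagged
    "REQ-003": ("No", ""),             # a 'No' needs no reference
}
issues = conformance_issues(responses)
```

Any flagged items would go back to the vendor for clarification before scoring is finalized.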

Vendor / Product Solution Selection

Of the fifteen network security equipment manufacturers that responded to the RFI, the five vendors that best met the functional and operational requirements were invited to participate in the bake-off. Each vendor brought and installed equipment in the lab to allow members of the working group to conduct technical evaluations.

Bake-Off Testing Methodology

The primary goal of the bake-off testing was to further measure the fit of the proposed solutions, with a focus on holistic integration against the functional requirements that were documented in the Network Security RFI. It is interesting to note that most vendors acknowledged that this was the first time ever they had deployed and integrated their solutions holistically and operated them in a real-world scenario. Most vendors had only participated in one-off point solution evaluations.

The bake-off testing objectives sought to identify, test, and select one or more manufacturers of network security solutions (Firewalls, IDS/IPS, VPN, and Management/Monitoring) to proactively meet the following goals:

  • Secure core business unit network infrastructure devices
  • Add network security components to protect and segregate critical assets
  • Integrate security components to current network management systems (NMS)
  • Develop a Network Security Management System (NSMS).

The evaluation team designed and deployed an inherently insecure Network Security Evaluation Lab that simulated a typical business unit network and provided both the network areas and security zones that needed to be protected, with sample assets. The testing viewed potential threats and attacks from External (outside the border perimeter) as well as from Internal/Insider (business unit networks) sources towards Data Center Zone and Management Zone(s) targets.

Test group scenarios were developed that made use of various typical threat and attack categories (e.g., signature based, anomaly based, DoS). The controlled attacks were initiated by a penetration tester from both external and internal sources. In addition, a traffic generation load/stress testing tool was utilized to exercise functionality and simulate normal traffic (client connections and sessions).

Network Security Evaluation

The above diagram provides a simplified illustration of the test groups and targets across the Functional Requirement categories, as follows:

  • Detection
  • Response
  • Alert / Logging
  • Correlation
  • Reporting
  • Management

These are the functional requirement categories that are documented in the Network Security RFI and the reference codes refer to the specific line item requirements.

The goal of the bake-off testing was to initiate multiple discrete sets of tests as ‘triggers’ that set off a sequence of events flowing through the functional requirement categories and elements in the diagram. The Evaluation Team asked each manufacturer to demonstrate if and how the proposed security solution detected, responded, alerted, logged, correlated (where appropriate), and reported as a consequence of these sequences, and how any generated events were managed. The testing also evaluated the utility of the solution as well as factors such as integration, management, and monitoring. Decoys and scans were used to generate noise while stealthy attacks were employed.

The Evaluation Team was restricted to performing the testing and to providing an objective report to the Network Security WG attendees, in addition to an independent and objective report from the penetration tester and traffic generation tester. The attendees made use of an Evaluation Scorecard and each stakeholder contributed a score.

Evaluation Scorecards

The Evaluation Team developed a set of scorecards to be used by Network Security WG and business unit stakeholder attendees at the bake-off sessions. Two scorecards were used across two days, as follows:

  • Objective Scorecard – Validate Compliance to Requirements
  • Business Unit Scorecard – Validate Fit to Business Unit

For the Objective Scorecard, the Evaluation Team reviewed both the tests and the vendor demo, together with Q&A, to determine whether the functional requirements were met by the implemented solution as cited by the vendor in their RFI response. The team referenced the Vendor Summary and RFI Response sheet.

For the Business Unit Scorecard, the attendees individually assessed how well the solution satisfied the requirements and fit the needs of their business unit, and determined the total category score per section.
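One simple way the per-section scores might be totaled per vendor is sketched below; the vendor names, sections, and scores are purely illustrative.

```python
def tally_scorecards(scorecards):
    """Total Business Unit scorecard scores per vendor.

    `scorecards` maps vendor -> list of per-stakeholder dicts of
    section -> score (hypothetical sections and scales).
    """
    totals = {}
    for vendor, stakeholders in scorecards.items():
        # Each stakeholder contributes a score; sum every section of every card.
        totals[vendor] = sum(sum(sections.values()) for sections in stakeholders)
    return totals

cards = {
    "VendorA": [{"Detection": 8, "Management": 7}, {"Detection": 9, "Management": 6}],
    "VendorB": [{"Detection": 6, "Management": 9}, {"Detection": 7, "Management": 7}],
}
totals = tally_scorecards(cards)
```

In practice each category would likely carry its own weight, but a flat sum is enough to show the aggregation.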

RFI Scorecard

Technical Recommendation

The technical evaluation consisted of more than 100 evaluation criteria and over 50 repeatable tests conducted on four network security components: Firewall, VPN, IDS/IPS and Management/Monitoring. The RFI was designed to allow the option to select either the best suite of tools from a single manufacturer or the ‘best of breed’ (the best components from multiple vendors).

The Evaluation Team’s technical scores were tallied and submitted as a Technical Recommendation to complement the financial total cost of ownership analysis.  The Technical Recommendation and Financial Analysis were provided as a summary of the findings and recommendations as an outcome of this evaluation process.

Technical Recommendation

Implementing network security equipment on a business unit network is a large and challenging proposition. The group recommended Alpha and Beta deployments to validate technical elements and further understand the integration complexity of the proposed solution. In addition, the Alpha and Beta process helped to develop a deployment methodology that allows for a deliberate approach to addressing important business unit and organization-wide security concerns.

Next Steps

Architecture Case Study – Part 2 will continue this series to take the reader from the Technical Recommendation on into Baseline Network Standard Design as well as the Deployment strategy with Implementation and Migration process.

Think You’re Secure? Think Again.
Security Architecture Primer
Security Architecture Baseline
Risk-Aware Security Architecture
Develop Security Architecture
Product and Solution Selection
Security Architecture Implementation
Adaptive Security Lifecycle

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices 2

Security Program Best-Practices – Part 2

Security Program Best-Practices – Part 1
Part 3
Part 4
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments. As a refresher, the typical gaps, deficiencies, or needed improvements are summarized in the Opportunity Matrix, which is used as a planning tool.

Opportunity Matrix Summary

In part 1 of the series we covered an overview as well as Gap 01 – Identify Requirements and Gap 02 – Develop Security Governance Program. In this blog we will discuss Gaps 03 through 05, per below.

  • GAP 01 – Identify Requirements: Security Policy, Regulations, and Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 3: Network Security Organization

Business Problem

Over the past few years, as security organizations have had to grapple with an increasingly complex threat landscape and a much more visible role in the organization, the expectations of the business have also significantly increased. The business expects that security will do all this and take on additional responsibilities while keeping its headcount almost static. As a result, there is often a disconnect between what a security organization can realistically deliver and what the business perceives it can deliver. Security organizations today must be agile and high-performing — capable of addressing a multitude of responsibilities and needs simultaneously.

Security Alignment

According to Forrester, maintaining existing systems and applications consumes 73 percent of the IT budget, leaving only 27 percent available for new project investment. This finding is corroborated by a study from AT Kearney, which reports that 70 percent of business executives believe that technology innovation is critical, yet 80 percent of actual IT expenditures are spent on infrastructure and core operations. Forty-five percent of business executives strongly agree that IT groups focus on day-to-day IT requirements at the expense of strategic goals. Add to this burden the voluminous security, regulatory, and legal issues that enterprises now face—and IT is stretched to the limit.

Recommended Solution

When it comes to data breaches, hackers and organized crime garner most of the headlines, but most data breaches are caused by human error and system glitches – application failures, inadvertent data dumps, logic errors in data transfer, and more. Organizations with a strong security posture and incident response plans experienced breach costs 20 percent lower than others, so the importance of a well-coordinated, holistic approach is clear.

Many organizations have resources who wear too many hats and may govern, manage, engineer, operate, and support the network security infrastructure. This also results in a lack of checks and balances, increasing the risk of human error, in that the same administrator can review, approve, implement, test, and monitor a policy. Security governance, management, and operations all have very different functions, and clarity among them is fundamental to the performance of each.

Security Organization

A key part of the role of security governance is to ensure that business and security processes have sufficient internal segregation of duties (SOD) to avoid a conflict of interest. Organizations should carefully develop the charter and participation of a security governance team so that it does not become mired in operational issues but provides the necessary direction and oversight. The security governance team should have sufficient separation from security management and operations so that conflicts of interest are avoided.

When companies perceive GRC as one team’s responsibility, it undermines the real value that a coordinated program can deliver; risk and compliance professionals can’t possibly identify and measure all risks or enforce all policies across the organization. They need to rely on their colleagues for support, which means enterprises must lay out clear expectations for every user. Conversely, enterprises must explain the benefits users should expect based on their active involvement.

GRC Overview

Organizations should adopt a process-driven approach to security governance, management, and operations that includes formally defined process flows, responsibility charts, and decision accountabilities. At a high level, the organization should support the following:

  • Strategy: Develop GRC readiness by assessing maturity against peers through key use cases, identify gaps and build roadmaps; rationalize and prioritize GRC initiatives by tightly integrating information and infrastructure imperatives with business obligations.
  • Design: Design GRC programs and governance models and align with policies; quantify and classify exposures and weaknesses and compare to well-defined metrics, develop treatment options to manage risk and optimize rewards.
  • Implement: Implement processes, policies, controls and technology solutions that monitor operations and key metrics. Measure exposures in people, processes and technology controls in the context of IT infrastructure interdependencies.
  • Operate: Treat exposures by continuously enforcing policies; detect violations and measure gains against desired states; continuously improve processes to maximize synergies and move up the maturity curve.

Best-practices set expectations that all employees in the organization will play a part in managing risk and meeting compliance obligations.

Security Roles

All systems have critical processes that, if subverted through human error or malicious intent, will significantly impact the objectives they enable. No one person should have absolute control over a critical network security process, asset, or system. Instead, processes should be segregated into discrete tasks that can then be assigned to parties who do not have a conflict of interest in safeguarding the sub-process. Through segregation of duties, an engineer cannot readily disrupt production by mistake or intent.
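Segregation of duties can even be checked programmatically. This minimal sketch assumes a hypothetical duty model in which certain pairs of duties conflict when held by the same person.

```python
# Hypothetical conflicting duty pairs; any one person holding both sides
# of a pair violates segregation of duties.
CONFLICTING_PAIRS = [("review", "approve"), ("approve", "implement"),
                     ("implement", "monitor")]

def sod_violations(assignments):
    """Return (person, duty_a, duty_b) tuples that break segregation of duties."""
    violations = []
    for person, duties in assignments.items():
        for a, b in CONFLICTING_PAIRS:
            if a in duties and b in duties:
                violations.append((person, a, b))
    return violations

team = {
    "alice": {"review", "monitor"},
    "bob":   {"approve", "implement"},  # conflict: bob both approves and implements
}
violations = sod_violations(team)
```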

Gap 4: Security Collaboration WG

Business Problem

In a rapidly developing organization it is easy to get out of touch, and for groups to develop at different paces in different directions, working in silos and generating fragmented security. While hybrid distributed security organizations with dotted-line reporting relationships are a best-practice, it is also key to collaborate closely, working towards a common goal: integrate the security architecture, seek compliance with policy and regulation, and automate processes and systems.

Collaboration

Recommendation

Security governance requires a set of oversight processes to ensure that reasonable and appropriate actions are taken to protect the organization’s information resources in the most effective and efficient manner, aligned to business goals. The role of security governance within the cross-organizational and cross-functional Collaboration Working Group (WG) is to work closely with all stakeholders, including senior executives, line-of-business managers, the IT organization, and others, to:

  • Establish Effective Governance Framework
  • Develop Meaningful Risk Assessments
  • Focus on Enterprise Risk Management
  • Establish Measurable Controls
    • Map to all relevant regulations and standards

The Security Collaboration WG is a critical component in setting the overall direction of the security program implemented by the CISO, taking into account the strategic needs of the business, the risk appetite of the organization, other non-IT and information security issues (such as physical and personnel security), and broader IT and information initiatives beyond the security realm.

Practical Security

The responsibilities of a Security Collaboration WG may include:

  • Acting as a steering committee for significant projects
  • Tracking the progress of remediation on risk items (audit report findings)
  • Reviewing metrics reporting
  • Monitoring operational performance
  • Enabling the CISO to guide security efforts within business units
  • Establishing and maintaining effective lines of accountability, responsibility and authority for protecting information assets
  • Acting as a mediator or arbitrator for reconciling conflicting security requirements

A Security Collaboration WG that connects the various organizational silos and integrates with governance in terms of policy, compliance, and internal audit enables the alignment of controls and measurements with an evolving baseline security standard, so that the various parties work together in lock step. There is also a high return on security investment through collaboration and sharing, generating ideas for improvement via cross-pollination, and so on.

Gap 5: Network Security Standards

Business Problem

A new model of assurance has emerged as the foundation for an enterprise information integrity, security, and compliance strategy. This domain is infrastructure integrity enabled by configuration management (assessment and change auditing). Change auditing ensures the integrity of all infrastructures in a network — in essence ensuring that the infrastructure remains in a “desired secure state” throughout the implementation of the changes necessary to keep pace with the dynamic demands of the business.
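A minimal sketch of such change auditing, assuming device configurations are available as text: baseline a cryptographic hash of each approved configuration and flag any device whose running configuration no longer matches. Device names and rules are illustrative.

```python
import hashlib

def fingerprint(config_text):
    """Hash a device configuration so any change is detectable."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def detect_drift(baseline, current):
    """Return devices whose running config no longer matches the approved baseline."""
    return [dev for dev, cfg in current.items()
            if fingerprint(cfg) != baseline.get(dev)]

# Approved 'desired secure state' fingerprints, keyed by device name.
approved = {"core-fw": fingerprint("permit tcp any host 10.0.0.5 eq 443\ndeny ip any any")}
# Running configs pulled during a periodic audit; core-fw has been changed.
running  = {"core-fw": "permit tcp any host 10.0.0.5 eq 443\npermit ip any any"}
drifted = detect_drift(approved, running)
```

Real configuration assessment tools also normalize configs and report exactly which lines changed, but the hash comparison captures the essence of verifying the desired secure state.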

ITIL Basics

Infrastructure integrity is the foundation or anchor upon which IT infrastructures should be built. When there is no infrastructure integrity, the internal process controls put in place to manage this infrastructure fail. Like a structure built upon sand, when the ground underneath shifts, the building will crack. In essence, without infrastructure integrity, an enterprise’s investment in operations management and information security technologies can be compromised at best and wasted at worst.

Infrastructure integrity results in operational efficiency

Security Hardening

Network security baseline standards are key to translating applicable but often vague regulations and security policy into actionable statements that can be applied by network security technologies and verifiably enforced, working towards and supporting compliance. The standards also allow the security organization to define, review, and approve the ‘technical policy’ so that it is sanctioned and in conformance with the risk tolerance of the organization. Finally, standards provide a measurable baseline that can be used to ensure infrastructure integrity and to audit against those standards – so that the security posture is known.
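As an illustration of a measurable baseline, a standard can be expressed as required and forbidden configuration statements and audited mechanically; the entries below are illustrative, not a real hardening standard.

```python
# A baseline standard expressed as required and forbidden configuration
# statements (hypothetical entries for illustration).
BASELINE = {
    "required":  ["service password-encryption", "logging host 10.1.1.10"],
    "forbidden": ["ip http server"],
}

def audit_config(config_text, baseline=BASELINE):
    """Measure a device config against the baseline standard; return findings."""
    lines = {line.strip() for line in config_text.splitlines()}
    findings = []
    findings += [f"missing: {r}" for r in baseline["required"] if r not in lines]
    findings += [f"present: {b}" for b in baseline["forbidden"] if b in lines]
    return findings

cfg = "service password-encryption\nip http server\n"
findings = audit_config(cfg)  # no logging host, and the HTTP server is enabled
```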

Closed Loop Policy

Conclusion

The stakes are too high for organizations to ignore anchoring their IT infrastructures by maintaining infrastructure integrity. The infrastructure is too complex, too critical to business success, and too vulnerable to attack. For these reasons, IT asset configurations must be standardized and closely controlled. Controlling the infrastructure has presented challenges for IT management and administrators in both large and small companies. Hoping for success is an exercise in futility if grounded in an environment in which the core information assets and the infrastructure do not have integrity. If the integrity of the core information assets, infrastructure, and procedures is in question, so too is the overall confidence in the security system. In IDC surveys, over half of IT professionals and managers at large enterprises are only somewhat confident or not confident about their companies’ enterprise security systems.

Coming Soon

Security Program Best-Practices – Part 3 will continue this Security Governance Series with the next set of gaps for discussion and helpful advice on key components.

Security Program Best-Practices 1
Part 3
Part 4

Thanks for your interest!

Nige the Security Guy.

Guest Partner Blogger

Become a Guest or Partner Blogger

The objective of this blog is to bring together a virtual community of like-minded security practitioners to discuss, debate, and share cyber security observations and practical best-practices from the trenches – towards improving infrastructure security to our mutual benefit.

Join Our Team

The NigeSecurityGuy blog invites articles and topics from like-minded security contributors, from those just getting started who bring a fresh new perspective to veteran experts. Get in touch if you want to become a guest contributor and/or partner in the blog, do not want to operate your own blog, or just have a one-off topic, idea, or experience that you want to share. As Margaret Mead once said:

“Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.”

Sharing and Reciprocity

Collaboration and sharing is a sophisticated skill that asks people who work together to look beyond personal interests towards outcomes benefiting the whole. Collaboration and sharing is a great way to address complex challenges, since it has the potential to tap communal creativity and unleash true innovation and earn genuine buy-in.

Teamwork

Collaboration, at the conceptual level, involves:

  • Awareness – We become part of a working entity with a shared purpose
  • Motivation – We drive to gain consensus in problem solving or development
  • Participation – We participate in collaboration and we expect others to participate
  • Mediation – We negotiate and we collaborate together and find a middle point
  • Reciprocity – We share and we expect sharing in return through reciprocity
  • Reflection – We think and we consider alternatives
  • Engagement – We proactively engage rather than wait and see

Together we can build a safer and more secure environment. A car with good brakes and integrated systems can drive a lot faster; likewise, a business with good security and integrated systems can develop with agility.

“Training often gives people solutions to problems already solved. Collaboration addresses challenges no one has overcome before.”
― Marcia Conner

Rules and Submission

If you are interested in submitting an article or becoming a Blog Partner and regular contributor, simply complete the contact form below and get in touch. All articles will be reviewed before publication to ensure that they are consistent with the mission and goal of this blog, as stated above. If there are any concerns and/or proposed corrections, the author will be contacted and offered feedback before publication. All credit and kudos will be given to the author, who will be recognized as a Guest Blogger or Partner Blogger depending upon regularity of contribution.

This post will be a living document and will be updated if and when clarification or additional information is needed.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices

Security Program Best-Practices – Part 1

Security Program Best-Practices – Part 2
Part 3
Part 4
Part 5

This ‘Back to Basics’ blog series highlights some foundational best-practices and recommendations – as part of the capability maturity model – to enable evolution towards an ever-improving, more comprehensive, and proactive cyber security governance and management program. These recommendations are drawn from a broad sample of assessments in which these are the typical gaps, deficiencies, or needed improvements.

Security Governance Series

The Opportunity Matrix

The diagram below presents an Opportunity Matrix, a series of articulated next-step opportunities for organizations to build an adaptive, holistic, integrated, and proactive Security Program that enables an iterative process-improvement evolution towards a common sanctioned goal. The opportunities are graphically illustrated across three dimensions:

  • Impact and Benefit
  • Initiative and Effort
  • Relative Cost Estimate

The Opportunity Matrix can be used as a tool to reflect the above parameters as an aid for both planning as part of the Adaptive Security Lifecycle methodology as well as to seek support, budget and resources from executive management.

Security Opportunity Matrix

These opportunities are considered ‘low-hanging fruit’ best-practice initiatives and are listed for brevity as ‘gaps’ that organizations can progressively develop and evolve in a capability maturity model, based on available budget, resources, and priorities.

  • GAP 01 – Identify Requirements: Security Policy, Regulations, and Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management
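The three Opportunity Matrix dimensions lend themselves to a simple prioritization sketch; the gap scores below are hypothetical 1–5 ratings, not assessment data.

```python
# Score each gap across the Opportunity Matrix dimensions (hypothetical 1-5
# scales): higher benefit and lower effort/cost float to the top.
gaps = {
    "GAP 01 Identify Requirements": {"benefit": 5, "effort": 2, "cost": 1},
    "GAP 06 Security Architecture": {"benefit": 5, "effort": 5, "cost": 4},
    "GAP 08 Central Logging":       {"benefit": 4, "effort": 3, "cost": 3},
}

def prioritize(matrix):
    """Rank gaps by benefit relative to combined effort and cost."""
    def score(gap):
        d = matrix[gap]
        return d["benefit"] / (d["effort"] + d["cost"])
    return sorted(matrix, key=score, reverse=True)

ranked = prioritize(gaps)  # GAP 01 ranks first: high benefit, low effort and cost
```

Any ratio like this is only an aid to discussion, which is exactly how the matrix is intended to be used when seeking support, budget, and resources.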

Gap 1: Identify Requirements

Business Problem

Information security is a business issue, not just a technology issue. Information should be protected for sound business purposes. Corporate knowledge and data are arguably the most important assets of any organization.

Security Policies provide several benefits. They provide a standard baseline of security policy modules and checks, based on the organization’s information security policies and standards. They establish a solid, scalable basis for an enterprise-wide deployment. Policies heighten the security awareness of company personnel. They also help organizations demonstrate their commitment to protecting their vital information assets.

Security policy is not static. It should be a living document and process, which is enforced, implemented, and updated to reflect the latest changes in the enterprise infrastructure and service requirements. It evolves to guard against perceived threats and changing system architectures. A properly designed and implemented security policy is an absolute requirement for all types of enterprises and has to be owned by one group.

The security policy must clearly identify the resources in the enterprise that are at risk and the resulting threat mitigation methodologies. It should define which users or classes of users have access to which resources. The policy must define the use of audit trails and verification to help identify and discover violations and the appropriate responses.

Common Compliance Framework

Policies, procedures and standards must be formally documented and reviewed with employees. Without documenting, communicating, enforcing, and raising awareness of corporate standards, security and process improvement efforts will not achieve the intended goals. If employees are not aware of how to properly execute their duties and responsibilities according to these standards, they will be ineffective.

If at all possible, take the time to craft a Common Compliance Framework that includes any targeted standards (such as COBIT, ISO 27002, NIST, etc.), all applicable regulations (such as PCI DSS, HIPAA, SOX, etc.) and the needed security policy for the business. All too often organizations struggle to assess compliance in silos, which increases complexity and proves costly, while missing the overlaps and synergies. Many groups offer mapping matrices between various standards and regulations, making it possible to combine them as needed. We will cover the Common Compliance Framework in more detail in a future blog.

Security Governance

When developing policies and procedures, document only what can realistically be done. This can be accomplished by involving the process owners in the generation and maintenance of the policies and procedures. Moreover, there must be triggers in the system development life cycle and change management processes to ensure that policies and procedures are updated as the computing environment and risk environment changes. At a minimum, they should be reviewed annually.

Policy Management

Closed-loop policy management is implemented using the network security architecture, and includes configuration management of network devices, enforcement of policies in the network, and verification of network functionality via audit trails. Verification and audit trails close the loop on policy management, and result in updates to the policy to reflect corrective actions.
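The closed loop described above can be expressed as a simple control loop; the `enforce`, `verify`, and `update` callables below are placeholders for whatever configuration management, enforcement, and audit tooling an organization actually uses.

```python
def policy_loop(policy, enforce, verify, update):
    """One iteration of closed-loop policy management.

    Enforce the policy in the network, verify functionality via audit
    trails, and feed corrective actions back into the policy.
    """
    enforce(policy)
    findings = verify(policy)   # audit trails close the loop
    # Corrective actions result in updates to the policy.
    return update(policy, findings) if findings else policy

# Hypothetical stand-ins: a policy as a list of rules, a verifier that
# reports one corrective action, and an updater that appends it.
applied = policy_loop(["deny-all default"],
                      enforce=lambda p: None,
                      verify=lambda p: ["add: log denied packets"],
                      update=lambda p, findings: p + findings)
```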

Closed Loop Policy

Gap 2: Security Governance Program

Business Problem

Contrary to some beliefs, companies cannot eliminate all risks for two reasons. First, the internal and external threats that create risk are very dynamic. Second, control investments eventually result in diminishing returns. Instead of focusing time and resources on eliminating risk, a realistic goal should be to reduce risk via Risk Management to a level that is acceptable to senior management and the board.

Balancing Security

Benefits of Managing Risk, Compliance, and Audits

Proactive risk management is a process that must be embedded in the organization’s culture to reap significant benefits. It can be used to constantly “tune” the control environment to ensure that correct controls are present and mitigating risks. Without using risk data input, the organization may have the wrong controls—or no controls—in place, which could leave the organization exposed to significant threats, material findings, and excessive costs.

Information technology personnel and resources play an integral risk management role, aiding in threat prevention, detection, and reconciliation. With the help of technology resources, business units and senior managers can work to execute a cost-effective program that continuously identifies, prioritizes, and manages risks.

Risk Management

When designing controls to proactively mitigate risks, consideration must be given to using tiers of controls in the same manner that a castle builder uses multiple walls to protect a fortress. Security personnel call this approach “defense in depth,” wherein layers of controls are used to protect against known and unknown threats that can originate from inside or outside of the organization. Controls must be risk-based and validated.

In some cases, the use of several relatively inexpensive and simple controls may generate more reliable outcomes than one expensive or complicated control. Layered controls also create a “safety net,” in the event that a previous control layer fails. The objective is to have sufficient capabilities to reduce risks to an acceptable level. When auditors review controls and determine that there has been a failure, they will look for compensating controls to offset the level of deficiency. In other words, if one layer fails, they will look to see if any other controls are present to detect or reduce the impact of the failure of the first level. If other controls are in place, the auditor can use his or her judgment to reduce the severity of the noted deficiency.
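The layered-controls argument can be made concrete with a little arithmetic, assuming (idealistically) independent control layers; the failure rates below are invented for illustration.

```python
def layered_failure_probability(failure_probs):
    """Probability that every layer fails simultaneously, assuming
    independent control layers (an idealized illustration)."""
    p = 1.0
    for prob in failure_probs:
        p *= prob
    return p

# Three cheap controls each failing 10% of the time beat one
# expensive control failing 5% of the time.
three_cheap = layered_failure_probability([0.1, 0.1, 0.1])  # 0.1^3 = 0.001
one_strong  = layered_failure_probability([0.05])           # 0.05
```

Real control failures are rarely independent, which is why auditors look at compensating controls case by case, but the safety-net intuition holds.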

Risk Framework

Continuous and proactive risk assessment and risk management are quickly becoming the norm. Auditing is management’s tool to make sure the entire organization has the resources, systems, and processes for delivering efficient, effective, and secure operations. Audits also are designed to identify key goals, issues, and challenges facing an organization and evaluate its progress against important initiatives. In leading organizations, internal auditors provide independent assurance to key stakeholders while identifying any areas for improvement that management should consider.

For each organization there are different goals and objectives, and certainly different issues and challenges. Therefore, there is no one-size-fits-all audit process, nor one audit approach that fits all situations. There are, however, some common and strategic audit-focused questions that must be addressed in most organizations:

    • Is the organization addressing regulatory compliance requirements adequately?
    • Is the organization investing in operational excellence?
    • Has continuous improvement been studied for applicability and implemented in some form in your IT function? In your various business units? In audit?
    • Do governance and risk management practices reflect today’s operating climate?
    • Is performance meeting the needs of your users and partners?
    • Is management forward looking? Or are they just investing in solving current and past problems?

Compliance ensures that an organization’s governance processes are effective and its primary risks are being managed. Compliance is founded on effective controls— those structures, activities, processes, and systems that help management effectively mitigate risk.

A dedicated, independent, and effective internal audit brings a systematic, disciplined approach to assessing the effectiveness of internal controls and risk management processes.

Because internal auditors are experts in understanding organizational risks and the internal controls available to mitigate these risks, they assist management in understanding these topics and provide recommendations for improvements. At the same time, data gathered from an audit can also help an organization improve its operations across the enterprise.

Tying it all Together

In general, a proactive technology function, compliance function, and management team will study and learn the strategic direction of the organization, implement plans to meet its long-term goals, and continually communicate progress toward those goals in partnership with the auditors — together with an agreed yet evolving baseline audit standard.

Audit and IT need to work together in lock-step, seeking solutions and evolving in concert, rather than randomly selecting controls that make IT look bad as the goalposts continually shift. An effective internal audit activity understands the organization, its culture, operations, and risk profile. This makes audit a valuable resource for management, the board, and its designated audit committee. The objectivity, skills, and knowledge of competent internal auditors can significantly add value to an organization’s internal control, risk management, and governance processes.

Coming Soon

Security Program Best-Practices – Part 2 will continue this Security Governance Series with another set of gaps for discussion and helpful advice on key components.

Security Program Best-Practices 2
Part 3
Part 4

Thanks for your interest!

Nige the Security Guy.