Security Architecture Series Guide

Security Architecture Series Guide: Navigating Security Architecture Strategy & Roadmap

The primary purpose of creating an enterprise security architecture is to ensure that business strategy and IT security are aligned. As such, enterprise security architecture allows traceability from the business strategy down to the underlying technology.


Business Value Proposition

A security architecture is a design document describing the security components that will protect the enterprise, and the ways they relate and interact with each other. It represents a strategic planning horizon and guide that defines the desired state of an organization’s infrastructure.

The architecture sets the context for planning, design, and implementation. It enables a company to evolve and to become agile, multi-functional, and competitive, allowing the seamless adoption of new capabilities and applications into a common infrastructure. Security architecture also facilitates budgeting for security solutions and personnel.

In summary, the security architecture provides:

  • A way to evaluate applicability of new technologies, products, and services
  • A framework for technology decision-making
  • A macro view of IT systems and components, from the security perspective
  • A statement of direction for IT
  • A way to reduce and manage risk in the most cost-effective manner
  • A way to facilitate compatibility and easier administration of systems
  • A blueprint for future network growth
  • A way to create and document consensus
  • A methodology to force consideration of all design factors
  • A guide for the creation of an enabling infrastructure for unforeseen new applications

This Security Architecture Series Guide blog provides an overview of the series to help readers navigate it. The series includes the following detailed topics:


Think You’re Secure? Think Again.

Today, with the advent of APTs, attackers are laser-focused on multi-pronged exploits that steal data or wreak havoc. Security is horizontal: it covers all IT infrastructure. The result is that security infrastructure becomes much more complex and fragmented. Attackers don't discriminate and will take advantage of any gap in protection to reach their end goal. The bad guys continually evolve and innovate, so all potential threat vectors need to be examined and addressed.

PC vs. Network vs. Mobile

The secret to success in security is typically simplicity: a well-designed and organized infrastructure that provides the appropriate layers of controls while giving users a consistent, policy-managed experience regardless of location, transport, or device. The challenge is in achieving that goal. Stay tuned for more information on lessons learned and experience from the field, success stories, and practical case studies.

Think You’re Secure? Think Again.

Security Architecture Primer

This primer explains how an enterprise security architecture ensures that business strategy and IT security are aligned, providing traceability from the business strategy down to the underlying technology.

Technology Foundation

However, many IT organizations have moved away from formal security architecture governance in favor of rapid deployment cycles and tactical changes, which over time risk diverging into complexity and fragmentation – with unresolved security exceptions. Complexity not only leads to insecurity and an increasing potential for human error but also to increased cost of operations.

Security Architecture Primer

Security Architecture Baseline

Once distributed roles and responsibilities are identified and established for the security architecture project team, the next important step is to add to that foundation with a security architecture project baseline.

This blog in the series will enable organizations to create that baseline by defining and reviewing applicable regulations, security policy and standards, identifying and classifying information assets and resources, and conducting a risk and threat analysis.

Security Architecture Baseline

Risk-Aware Security Architecture

This installment of the series develops an ongoing threat analysis and risk management process – a key requirement for guiding architectural direction as well as design and implementation. Identified risks can be mitigated via compensating controls or countermeasures, transferred to other parties, accepted as a business risk (via an exception process), or avoided.
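The treatment options above (mitigate, transfer, accept, avoid) can be sketched as a minimal risk register that scores each threat by likelihood and impact. This is an illustrative example only – the scales, thresholds, and entries are hypothetical, not a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str
    likelihood: int              # 1 (rare) .. 5 (almost certain)
    impact: int                  # 1 (negligible) .. 5 (severe)
    treatment: str = "mitigate"  # or: transfer / accept / avoid

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

risks = [
    Risk("Phishing leads to credential theft", likelihood=4, impact=4),
    Risk("Unpatched legacy server", likelihood=3, impact=5),
    Risk("Key vendor outage", likelihood=2, impact=3, treatment="transfer"),
]

# The highest-scoring risks drive architectural direction first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.rating:6} {r.score:2}  {r.treatment:8}  {r.threat}")
```

The exception process maps naturally onto the "accept" treatment: a risk is recorded with its score and owner rather than silently ignored.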

Risk Management Process

Risk-Aware Security Architecture

Develop Security Architecture

The next step is to build the security architecture and migration strategy. This strategy lays the foundation for a successful deployment and the ongoing integration of additional applications and services. We cannot emphasize enough that the quality of up-front planning is one of the biggest factors determining the success and degree of payoff from a security project.

Security Services

This section enables organizations to assemble and align the pieces necessary to develop, update, or validate a modular and flexible security architecture.

Develop Security Architecture

Product and Solution Selection

The security architecture and migration strategy (which now embodies your approved and prioritized requirements) may recommend specific products, or it may recommend going through a competitive process to select products. In either case, partner selection isn’t final until costs and schedules are nailed down, funding approved, and contracts signed.

Vendor Comparison

The architecture is an important foundation for selecting the right vendors, partners, and approaches. However, additional tools are required during product evaluation and procurement. Relatively informal Requests for Information (RFIs) can bring the team up to speed on the advantages and disadvantages of various products. Formal Requests for Proposals (RFPs) should form the final basis for vendor selection and tasking.

Product and Solution Selection

Security Architecture Implementation

The security architecture defines and justifies a number of solution implementation, integration, and/or improvement projects each year, based on budget, resources, and priority. As such, a master project plan should be created that takes into account identified dependencies, integration points, and any parallel tasks.

Security Plan

To plan implementation of a security solution, you must identify where project execution resources will come from, develop an implementation plan, obtain buy-in for the implementation plan, and create a detailed design for the configuration and deployment of the security infrastructure.

Security Architecture Implementation

Adaptive Security Lifecycle

Infrastructures and the environments in which they operate are dynamic, continually evolving over time, especially in our rapid-deployment world. Many fast-tracked organizations start out with a well-designed, orchestrated, and secure architecture, but organically – much like firewall rule sets – it devolves and diverges into increasing levels of complexity and fragmentation.

Adaptive Lifecycle

Applications and systems grow exponentially, creating increasingly complex connectivity and relationships that result in a spider's web of interfaces across domains. Complexity leads to insecurity, increased risk of human error, and a substantial increase in the cost of operations and maintenance. The result dramatically impacts the organization's ability to deploy rapidly and efficiently and to move forward with agility.

Security done right is a business enabler that dramatically reduces total cost of ownership (TCO), providing a tangible Return on Security Investment (ROSI).

Replacing IT complexity and fragmentation with an adaptive, modular, and flexible architecture enables agility and improves your competitive edge — so the business can refocus quickly as new opportunities emerge.

“Security is a process, not just a product or technology issue.”

Nigel P. Willson

Adaptive Security Lifecycle

Architecture Case Study – Part 1 & 2

In the Security Architecture Series of blogs we have shared all of the steps involved – requirements gathering, baseline, product and solution selection, through to realizing the architecture. This blog presents an Architecture Case Study that applies those principles and recommendations as a practical example. The illustration provides a simplified conceptual view of the program use case.

Defense in Depth: Part 1 takes the reader from Architecture development through to the Technical Recommendation; Part 2 takes the reader from Design to Deployment strategy, with Implementation and Migration.

Architecture Case Study – Part 1

Architecture Case Study – Part 2

Thanks for your interest!

Nige the Security Guy.

vCISO Smart Practices – Part 1

vCISO Smart Practices – Part 1: Enabling Success via Collaboration Infrastructure

The Internet of Things offers a tremendous opportunity for businesses to truly transform themselves by realizing the potential of data that is sitting, untapped, in existing infrastructures. The challenge to unlocking that data is the evolution towards a Secure Collaboration Infrastructure.

vCISO Smart Practices

This blog introduces our vCISO Smart Practices series, which kicks off with a fundamental discussion of the importance and value of human collaboration and teamwork as a foundational cross-discipline, cross-functional ‘Architecture Team’. We also offer an introduction to the blog author, Nige the Security Guy (@NigeSecurityGuy).

This blog series will later address a truly distributed security architecture that supports the Collaboration Infrastructure, applying Smart Practices as we evolve rapidly towards the new and exciting, yet challenging, IoT.

“Training often gives people solutions to problems already solved.
Collaboration addresses challenges no one has overcome before.” Marcia Conner

Sharing and Reciprocity

Collaboration and sharing are sophisticated skills that ask people who work together to look beyond personal interests towards outcomes benefiting the whole. They are a great way to address complex challenges, since they have the potential to tap communal creativity, unleash true innovation, and earn genuine buy-in.

Collaboration

Collaboration, at the conceptual level, involves:

  • Awareness – We become part of a working entity with a shared purpose
  • Motivation – We drive to gain consensus in problem solving or development
  • Participation – We participate in collaboration and we expect others to participate
  • Mediation – We negotiate and we collaborate together and find a middle point
  • Reciprocity – We share and we expect sharing in return through reciprocity
  • Reflection – We think and we consider alternatives
  • Engagement – We proactively engage rather than wait and see

Together we can build a safe and increasingly secure environment …

 “Security done right is a business enabler that dramatically reduces total cost of ownership (TCO) providing a tangible Return on Security Investment (ROSI).

IT complexity and fragmentation replaced by an adaptive modular and flexible architecture enables agility and improves your competitive edge — so the business can refocus quickly as new opportunities emerge.” Nigel P. Willson

People Process Policy Technology

A critical success factor in successfully deploying a collaboration infrastructure is orchestrated policy, focused resources, and well-defined process that fully leverages and unlocks technology. As a creative, solutions-focused, charismatic, and passionate security evangelist, Nigel Willson is available to consult as a Trusted Security Services Partner, collaboratively assisting organizations to iteratively improve and optimize their security as a virtual team member in the role of vCISO, IT Security Strategist, and Architect.

Nige the Security Guy: Professional Profile

Architect Blueprint

Nigel P. Willson
Principal Security Architect
AT&T Security Solutions

Nigel Willson is a Principal Security Architect at AT&T with 30 years of experience in Security Operations, Management, Research, Development and Security Services providing thought leadership, architecture/design and practical strategy.

Nigel has responsibilities as Security SME for AT&T complex cyber security solutions across the portfolio of security consulting, managed security services and mobile security solutions.

He specializes in collaboration as both a Trusted Advisor and Virtual Chief Information Security Officer (vCISO) helping companies to evolve and improve their security capability maturity and posture in the following areas:

  • IT Security Governance, Strategy, Roadmap
  • Security Architecture & Design (including adaptive security architecture lifecycle)
  • Security Operations (including advanced threats, detection frameworks, defensible posture)
  • Threat Intelligence & Risk Management (focused on business processes)
  • Security Research & Analysis
  • Regulatory Compliance

AT&T Security Solutions is the AT&T Advanced Enterprise Solutions customer-facing security opportunity team. His participation is consistently solicited by AT&T teams and AT&T customers as both a Trusted Advisor and Security SME in both the private and public sectors. Nigel joined AT&T as a Practice Director, Security via the acquisition of Callisma (AT&T Consulting Solutions) in 2005.

Prior to joining AT&T, Nigel worked as a Practice Director, Security for Avaya Converged Security as well as TCS America responsible for the development of discrete security consulting services and leading teams of security consultants. He previously worked as the Director, Security for The Walt Disney Company focused on global Internet Security for 27 business units including ABC, Disney On-Line, and ESPN.

Nigel is a former assembler programmer and reverse engineer (ethical hacker) with a diverse international background. He has worked on U.S. DoD projects developing security products and technology for the World-Wide Military Command and Control System (WWMCCS) and Military Airlift Command Deployment Flow (MACDF).


He is a published author of many security guides, books, and magazine articles, and currently operates the security-focused NigeSecurityGuy blog providing impartial practical advice and methodology on security architecture, assessments, and advanced persistent threats (APTs). Nigel also operates the ‘Solving the APT Defense Puzzle’ group on LinkedIn, a reference library of useful research and topics.

Nigel was recently selected as a finalist in the InfoSec Europe 2014 Security Bloggers awards and was invited to publish an article on Leveraging Security as a Business Enabler.

Nigel’s passion is taking blog readers Back to Basics to focus on key security principles to develop a strong architectural foundation (Security Architecture Series) and from that add advanced threat defense (APT Strategy Guide) as well as security operations optimization (NG-OPS Strategy Guide).

“Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.” – Margaret Mead

Background Summary

  • International Background — 30 years international experience gained as a security expert across England, Australia, France, and the USA.
  • Strategic Architect – Cloud-Orientated Architecture, BYOD, Mobile, Security Operations, Risk, Intelligence, Analytics, Metrics, Visualization – Situational Awareness: Detect, Contain, Investigate, Eradicate, Recover
  • Director, Security @ Disney – Established a strategic architecture team and developed a successful proactive security management program.
  • Published Author – Author and co-author of many security guides, books, and magazine articles.
  • Security Consultant — 15 years thought leadership and strategy experience consulting to Fortune 500 companies.
  • Security Engineer – Developed new security protocols and products for the U.S. Department of Defense (DoD), e.g., MACDF and WWMCCS, used in the Gulf War. Worked on multi-level security and covert channel prevention.
  • Reverse Engineer – Original assembler programmer, ethical hacker and reverse engineer who could analyze code and manipulate any technology, protocol or system.
  • Awards Plaque: AT&T April 2008: In Recognition of Unwavering Commitment, Steadfast Leadership and Outstanding Performance on the California State University ITRP program.

Next Generation Operations

Thanks for your interest!

Nige the Security Guy.

Security Strategy Retrospective

Security Strategy Retrospective

Building a strong foundation, enabling budget and support. For most organizations, justifying an ongoing investment in information security has not been easy. How do you measure the value added to the organization versus something more tangible, like implementing a new business function? What is the vision? What is the risk? What is the plan?

Security Retrospective

Back to the Future

This Security Strategy Retrospective blog takes readers back to basic principles. It is an interesting reflection on the challenges that security practitioners faced in the mid-90s – challenges that remain familiar and applicable even today. Organizations are currently concerned about advanced threats with blended multi-step attacks, yet many of the practices and solutions from the past are still valid and very helpful as a key security strategy foundation.

The blog takes you on a journey back to the future to tie today’s technology, solutions, and requirements to founding, well-proven principles. The magazine feature article below was originally published in June 1997 and is reproduced here in extract. It is quite dated, yet it discusses persistent behavior patterns and shares key, still-valid principles. The more things change, the more they stay the same. We have been here before, so let us learn from the past.

In future blogs we will take the principles and practices described below to update for a 21st century security strategy and operations framework, in the age of the APT.

Intranet Security: An Investment in the Enterprise (circa 1997)

Simply because data on an Intranet is behind the company Firewall doesn’t mean it’s safe. Here are some issues to consider when securing your Intranet applications.


Creating a Security Strategy

Organizations are changing the way they do business. The promise of reaching new markets and easily creating new inter-enterprise relationships through Internet technology is sparking unprecedented interest in going online and moving toward electronic commerce. An organization-wide security strategy becomes vital when business plans extend that same communications model to suppliers, customers, and partners via extended networks. Internal security, implemented consistently across the enterprise, is therefore fundamental to both the successful evolution of the business and its ability to compete in the global marketplace.

So how do you create a secure extended infrastructure? Security is a discipline that is becoming more complex by the day as information systems migrate from mainframes to distributed client-server architectures. As stated above, the extended network adds yet another level of complexity: the need to integrate one more set of technologies into what is already a complex, heterogeneous world.

Implemented correctly, the extended network can be the ‘holy grail’ that affords the middleware to hook thin clients to legacy systems, and provides, once and for all, that elusive single sign-on user environment. The illustration below shows a simplified security architecture model that organizations can use as a framework for developing a comprehensive program of organization-wide security.


Security Architecture Framework (circa 1997)

Steps to Success

1. Create a Living Security Policy and Standards. (circa 1997)

The information security policy is the first step in the evolution toward a coherent and cohesive security architecture. Just like a development project, a security policy explicitly defines requirements, and together with strong management, change control and quality process, it provides the framework that bounds security within the organization.

What are security policy and standards?

Closed Loop Policy

Security Policy - Procedures, Standards, Process

It’s a set of rules stating what is and is not permitted in a system’s normal operations. Written in general terms, it describes the security requirements that systems must adhere to. Organizations must address a number of policy issues, for example:

  • Who is allowed to use the resources?
  • What is the proper use of the resources?
  • Who is authorized to grant access and approve usage?
  • Who has system administration privileges?
  • What are the user’s rights and responsibilities?
  • How is sensitive information protected?

A security policy and its implementation should be as unobtrusive as possible. If the policy is too restrictive, or improperly explained, it is likely to be violated. Organizations need to be able to detect and review violations so that they can validate the policy’s effectiveness and appropriateness to address risk and fine-tune it; otherwise the policy cannot be used successfully to share requirements and audit against.
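As a rough illustration of how such written rules can be made checkable (the resource names, roles, and structure below are hypothetical), policy can be encoded as data and evaluated per request, with violations recorded so the policy itself can later be reviewed and fine-tuned:

```python
violations: list = []  # detected policy violations, kept for review

# Hypothetical rules: which roles may use which resources.
POLICY = {
    "finance-db": {"finance", "audit"},
    "wiki": {"staff", "finance", "audit"},
}

def is_permitted(user_role: str, resource: str) -> bool:
    """Check a request against policy; record violations for later review."""
    allowed = user_role in POLICY.get(resource, set())
    if not allowed:
        violations.append((user_role, resource))
    return allowed

assert is_permitted("finance", "finance-db")    # proper use of the resource
assert not is_permitted("staff", "finance-db")  # violation: detected and logged
print(violations)
```

The recorded violations answer the questions above ("who is allowed to use the resources?") empirically: a rule that is constantly violated is either too restrictive or poorly communicated.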

Opportunity Matrix Summary

Security Program Best-Practices: Opportunity Matrix

For more information, see the Security Governance Series: Security Program Best-Practices – Part 1 and onwards (circa 2013).

2. Engage in Information and Resource Asset Classification. (circa 1997)

How much money, inconvenience, and time should be spent to counter an exposure or class of exposure? Taking an inventory of all infrastructure assets facilitates the identification of all potential targets. At the rate some organizations are growing today, it’s no surprise to discover unknown active equipment on an internal network. Finding this equipment and determining ownership establishes responsibility and accountability for what occurs on the equipment.

Classification involves assessing what resources or assets an attacker might want to steal. The inventory forms a clearer picture of exactly which data is critical, and thus which applications and servers need the most protection, monitoring, and auditing. This step lets the organization focus energy – its resources and budget – on the critical and sensitive resources at the optimal level (see below), establishing a priority and minimum required level of protection so those resources can be managed correctly.

Detection Precision versus Cost

The Security Cost Function

By determining which resources need the highest level of protection, you can minimize the cost and the danger involved with putting your data on the net.
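One hedged sketch of that prioritization, using hypothetical asset names and an assumed simple scoring scheme (classification level plus exposure), assigns each inventoried asset a protection tier so budget flows to the critical resources first:

```python
# Hypothetical inventory entries: name, data classification, network exposure.
assets = [
    {"name": "customer-db", "data": "regulated", "exposed": True},
    {"name": "build-server", "data": "internal", "exposed": False},
    {"name": "public-web", "data": "public", "exposed": True},
]

# Higher classification and external exposure raise the required protection.
CRITICALITY = {"regulated": 3, "confidential": 2, "internal": 1, "public": 0}

def protection_tier(asset: dict) -> str:
    score = CRITICALITY[asset["data"]] + (1 if asset["exposed"] else 0)
    if score >= 3:
        return "maximum"   # strongest (and costliest) controls
    if score >= 1:
        return "standard"
    return "baseline"      # minimum required level of protection

for a in assets:
    print(a["name"], "->", protection_tier(a))
```

The thresholds are the knobs of the cost function: tightening them buys protection precision at higher cost, which is exactly the trade-off the classification exercise is meant to make explicit.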

3. Maintain an Evolving Security Standard. (circa 1997)

The previous two steps establish a baseline for the enterprise security plan. Together, the security policy and information / resources classification define the minimum standard of security: what to protect and how well to protect it. Because this standard can rarely be implemented within one budget cycle, organizations should develop an annual security program plan as part of the overall strategic roadmap.

Sample Roadmap

Security Program Roadmap

For more information, see Security Architecture Primer, Security Architecture Baseline, and Develop Security Architecture (circa 2013).

4. Conduct Security Audits and Risk Assessments. (circa 1997)

Regular security audits aren’t just for large companies. They’re for organizations of any size that have data to protect from becoming publicly available and a need to preserve confidentiality, integrity, and availability. Security audits should be carried out at least annually, depending upon how quickly the infrastructure evolves and expands. They enable the organization to measure its compliance with the security policy and determine the current situation. A risk assessment identifies and quantifies all potential exposures and threats.

Security Model Framework Interaction

At this stage, the consolidation of the current security status with the identification of potential threats allows the definition of security objectives. These objectives are the level of security (protection) the organization has decided on to protect both its own assets and those of organizational units and partners.

In the first year, the security program should schedule the protection of those resources identified as most critical or at risk. In subsequent years, the security plan should document other desired protection measures, as defined in the objectives, and any newly identified critical resources and threats.

In a perfect world, a three-year security plan would allow the organization to attain its defined security objectives. In reality, however, any environment continually evolves, and the requirements for protective measures to deal with potential threats need to evolve with it. Implementation of security measures should progress on a sliding scale, ideally in pace with the objectives and the threat/risk.

For more information, see Risk Assessment and Roadmap, ISO 27002 Security Benchmark, Security Health Check and, Adaptive Security Lifecycle (circa 2013).

5. Take Security Measures. (circa 1997)

Traditional security is passive and reactive. Extended network security must be aggressive, holistic and proactive. It must be focused on protecting information – the data contents of the system, not the device itself. That paradigm shift changes the rules.

A robust security solution consists of three types of measures:

  • Protection measures,
  • Monitoring measures, and
  • Validation measures.

An array of monitoring and detection solutions – such as auditing and tracking products, anti-virus software, intrusion-detection schemes, and event-logging strategies – can help security managers head off potential security threats before they evolve into serious problems.

Protective Measures

There are three categories of protective measures – prevention, detection, and recovery – which together “Lock the Door“.

A security prevention mechanism applies and enforces security controls during the system’s operation based upon policy, classification and risk to prevent a security violation from occurring. For example, this protection could include a mechanism restricting physical access to a system or the use of logical access control mechanisms based on encryption to prevent unauthorized users from accessing resources. For an update, see Adaptive Zone Defense – Part 1 and APT Threat Analytics – Part 1.

A detection mechanism detects both attempts to violate security and successful security incidents, when or after they’ve occurred. Alarms can detect unauthorized physical access, and audit trails can detect unusual system activities after they occur. For an update, see Defensible Security Posture, APT Detection Framework, and APT Detection Indicators – Part 1.

A recovery mechanism is used after a security violation has been detected, restoring the system to a pre-violation state. Examples include backup tapes and the addition of redundant hardware to a system. For an update, see APT Response Strategy – Part 1.

Validation Measures

Validation measures are proactive. “The lock is checked“. Examples are: active network probing, which involves using commercial and open-source tools to ensure that all unauthorized ports are locked down; compliance testing, using automated processes to validate the exposure to platform specific administration vulnerabilities; and audit reviews, which entail conducting internal reviews to measure compliance to security policy. For an update, see: APT Red Teams – Part 1.
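A minimal sketch of the active-probing example, assuming a TCP connect scan and a hypothetical approved-port baseline (probe only systems you own or are authorized to test):

```python
import socket

APPROVED_PORTS = {22, 443}  # hypothetical baseline of authorized services

def open_ports(host: str, ports, timeout: float = 0.3) -> set:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.add(port)
    return found

def unauthorized(found: set) -> set:
    """Ports that are open but not in the approved baseline."""
    return found - APPROVED_PORTS

# Example usage: unauthorized(open_ports("10.0.0.5", range(1, 1025)))
```

A non-empty result from `unauthorized` is exactly the "lock is checked and found open" condition: a port that policy says should be locked down but is not.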

Monitoring Measures

Monitoring measures are reactive: “The security camera detects an intruder“. Examples of these include audit trails, activity monitoring, and policy breach detection.

It’s important to have monitoring and detection solutions that do most of the monitoring work, then send alarms that alert the network security team to threats. In addition, the best buys in monitoring and detection products are those that can analyze data rather than just record it. There’s an emerging market for detect-and-alert technologies that correlate persistent behavior patterns before the damage is done.

For an update, see the upcoming APT Operational Maturity and APT Intelligent Operations blogs.
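A toy sketch of the "analyze, don't just record" idea above – correlating repeated failed logins per source within a sliding time window instead of logging each one in isolation; the window, threshold, and event shape are illustrative, not a product design:

```python
from collections import deque

WINDOW_SECONDS = 60   # correlation window (illustrative)
THRESHOLD = 5         # failures per source that warrant an alert

recent: dict = {}     # source -> deque of recent event timestamps

def observe(source: str, timestamp: float) -> bool:
    """Record a failed login; return True when the pattern warrants an alert."""
    events = recent.setdefault(source, deque())
    events.append(timestamp)
    # Drop events that have aged out of the window.
    while events and timestamp - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) >= THRESHOLD

# Ten failures from one source, one second apart: alerts begin at the fifth.
alerts = [t for t in range(10) if observe("10.0.0.7", float(t))]
print(alerts)
```

Ten raw log lines become one persistent-behavior signal, which is the difference between recording data and analyzing it.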

6. Keep Track of Security Violations. (circa 1997)

A key security function is the centralized logging of application and system accesses. All the security in the world won’t do any good if there’s no way of identifying, investigating, and pinpointing breaches.

Holistic Logging

In any organization, violations of security policy will inevitably occur. Often, the violation can be classified as caused by an individual’s negligence, an accident or mistake, ignorance of the current policy, or deliberate disregard of the policy. Security vulnerabilities can easily occur as a result of mistakes made by the system administrator introducing weakness, never mind the myriad of patches needed. Obviously, effective audit capabilities must be in place for an organization to fully investigate a violation and determine its classification, what damage has occurred, and what action to take.

But who watches the administrators? Many organizations are tailoring products to audit the administrator, as a hierarchical super-manager overseeing a large distributed base. Network administrators can unwittingly create security holes during routine maintenance and administration duties. The rule of “least privilege” with “separation of duties” is the best practice, where no one person has the keys to the kingdom in all domains, and administration functions, like access control, are shared.
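A minimal illustration of separation of duties in code, with hypothetical names: an administrative change is applied only when requester and approver differ, and every decision – permitted or not – lands in an append-only audit trail:

```python
audit_trail: list = []  # append-only record of every decision

def apply_change(change: str, requested_by: str, approved_by: str) -> bool:
    """Apply an administrative change only if requester and approver differ."""
    permitted = requested_by != approved_by  # no one person holds all the keys
    audit_trail.append({
        "change": change,
        "requested_by": requested_by,
        "approved_by": approved_by,
        "applied": permitted,
    })
    return permitted

assert apply_change("grant alice db-admin", requested_by="bob", approved_by="carol")
assert not apply_change("grant bob root", requested_by="bob", approved_by="bob")
```

Because the audit trail records denied attempts too, it also answers "who watches the administrators?": the administrators' own actions become auditable events.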

7. Be Prepared. (circa 1997)

The final layer of security is the preparedness to act immediately and correctly when an incident occurs, and to capture and process any perceived threat or incident. This requires the establishment of a reporting process, a response team, and precise well-defined procedures. The need for proper training of staff can’t be over-emphasized, and is a wise investment, saving the organization a great deal of trouble and money in the medium to long term. Tiger teams are very popular ways of honing skills and gaining expertise with security tools. They comprise internal teams of system and network administrators whose purpose it is to test security monitoring and protection measures.

For more information see APT Response Strategy and APT Red Teams (circa 2014).

APT Incident Response

Conclusion (circa 1997)

For most IT departments, justifying an ongoing investment in information security has not been easy. How do you measure the value added to the organization versus something more tangible, like implementing a new business process? Companies often view security measures as insurance, and the risks the measures address are often never quantified and, as such, accepted. Thus, security is a low priority item and the first task to suffer in a budget cut.

However, since the advent of Internet technology and extended networks, the return on investment is very clear. A secure extended network is the key to enabling technology that facilitates the ability to exchange goods, services, and information electronically in a stable, well-managed infrastructure. This provides interaction with customers, partners, and employees promoting loyalty and allowing the enterprise to gain a competitive edge.

Coming Soon

In the upcoming APT Operational Maturity and APT Intelligent Operations blogs we will build upon this dated but valid Security Strategy Retrospective foundation with a similar but updated building-block layered security model and an evolving maturity scale. We will discuss the need for a continuously evolving next-generation SIEM, risk management processes, and network behavior anomaly detection that enable organizations to take security operations and situational awareness to the next level, depending upon various factors including threat/risk profile.

Thanks for your interest!

Nige the Security Guy.

APT Strategy Series

APT Strategy Series

There is a wealth of news and noise with regard to advanced threats, also known as persistent targeted threats and marketed as the Advanced Persistent Threat (APT). The APT Strategy Series of blogs aims to cut through the hype and provide practical steps to help our readers mitigate the threat.

APT Strategy Series

There is a lot of hype, and yet there is no silver bullet. However, there is much an organization can do to extend its defense-in-depth strategy, improve its detection and containment capability, and gain the visibility needed to rapidly respond to a compromise or attempted compromise. There is also a win-win: many of the best practices, controls, and detection techniques needed for APT also help address the Insider Threat.

The APT Strategy Series covers the following:

  • APT Architecture & Strategy: Architecture and Design principles that help address APT and Insider Threats
  • APT-Focused Best-Practices: Practical steps to improve security posture and reduce threat/risk
  • APT Detection Framework: A framework to enable organization and analysis of attack and detection methods
  • APT Detection Indicators: Practical steps, methodology and tools to gain key visibility and identify a potential compromise

Strategic Defensible Security Posture

The Advanced Threat Defense series of blogs takes a top-down approach for organizations that have the opportunity to address their security architecture and design, creating a Defensible Security Posture that enables them to seamlessly Detect, Contain, Respond, Eradicate, and Recover.

Zone Defense

In this series we have published the following:

Tactical Best-Practices and Detection Techniques

The APT Defense Puzzle series of blogs takes a bottom-up approach for organizations that need to take more immediate tactical steps to address their current security posture against APT and the Insider Threat, with practical best practices and detection techniques. The blogs gather the best of the best practices from a multitude of sources and discuss their merits based on industry, organization size, threat/risk tolerance, and security profile.

APT Practical Defense

In this series we have started the process with the following:

Some fundamental but practical steps we will discuss in that regard are:

  • Maintaining a list of application systems at risk
  • Creating an APT checklist for assets at risk
  • Focusing on APT detection techniques and analysis tools
  • Focusing on incident response for APTs
  • Creating ready to use APT rapid response tactics
  • Preparing an APT forensic response plan
  • Increasing use of external threat intelligence
  • Focusing on APTs in security awareness training

And … significantly, implementing a policy requiring least privilege and authentication for all intranet services, because trust-based access is a weakness that must be eliminated.

The Importance of a Defensible Foundation

While deploying tactical improvements and countermeasures is very important when organizations do not have the luxury of addressing their architecture and design near-term, a good solid foundation is critical. A Defensible Security Posture, Strategy, and Roadmap should be developed and factored into IT planning as a future goal.

blueprint

Develop a Secure Architecture Strategy & Roadmap Blueprint

I would like to illustrate my point with an extract from What Continuous Monitoring Really Means by Dr. Ron Ross, which appears in the Summer 2012 issue of FedTech Magazine. At the end of the day it can cost more, in operations, resources, and ongoing maintenance, to keep a complex and fragmented infrastructure patched with band-aids than to migrate to a new architecture and design that flexibly supports the business. This is even more important in the age of advanced targeted and insider threats.

Security done right is a business enabler that dramatically reduces total cost of ownership (TCO), providing a tangible Return on Security Investment (ROSI).

Replacing IT complexity and fragmentation with an adaptive, modular, and flexible architecture enables agility and improves your competitive edge, so the business can refocus quickly as new opportunities emerge.

What Continuous Monitoring Really Means

Continuous monitoring is an important part of an organization’s cyber security efforts. But without establishing an effective security framework first, those efforts may be misspent.

Holistic Logging

Prototype, Iterate and Evolve towards Holistic Monitoring

Organizations that begin work on a continuous monitoring program with a narrow focus on security controls at the information system level without first doing some basic investment in strengthening their underlying IT infrastructure face significant problems.

First, they may end up wasting significant resources monitoring inherently weak information systems — in essence, throwing good money after bad. You can check a broken lock on the front door of your house once a day or every hour, but the lock is still broken. Better to fix the lock first, reinforce the doorjamb, and then with the remaining resources, check the lock on an ongoing basis.

Second, premature allocation of resources toward continuous monitoring of security controls for information systems may preclude organizations from investing the resources needed to build stronger, more penetration-resistant systems. Such investments are critical as organizations address the advanced persistent threat and cyber attacks associated with sophisticated and well-resourced adversaries. This is especially important for information systems that support key infrastructure.

Strengthening the IT infrastructure begins with establishing a sound cyber security and risk management governance process. Next, organizations must manage the complexity of their IT infrastructures by using enterprise architecture to consolidate, standardize and optimize the current inventory of IT assets as well as developing “threat aware” mission and business processes.

Sample Roadmap

Develop a Security Improvement Program to Evolve Capability Maturity

Organizations must also develop and integrate into their enterprise architecture a security architecture that guides the effective allocation of security controls to their information systems. And finally, organizations must initiate continuous monitoring of all of the above activities to ensure ongoing effectiveness of cyber security and risk management governance, mission/business processes, enterprise and security architectures, and security controls deployed within the enterprise.

Continuous monitoring, broadly applied, can provide important benefits to organizations with regard to cyber security and risk management. It can support and enhance a dedicated, mature process for building the necessary trustworthiness into the information systems. [Extract from article by Dr. Ron Ross]

Thanks for your interest!

Nige the Security Guy.

Advanced Threat Defense – Part 1

Advanced Threat Defense – Part 1

APT Strategy Series

Defensible Security Posture
APT Defense Puzzle
APT Risk Assessment
APT Incident Response
Adaptive Zone Defense

Many organizations continue to struggle to achieve network visibility into a number of advanced, targeted, and layered threats that evade detection by traditional approaches to incident management.

Advanced Threat Defense

This APT Strategy Series blog offers readers a preview of a proposed foundation and strategy for Advanced Threat Defense. While there is no APT silver bullet, there is much an organization can do with a well-designed and managed defensible posture to Protect, Detect, Contain, Eradicate, and Recover, minimizing impact and cost.

Situational Awareness

Attackers are continuously evolving and becoming smarter, and the largest looming threat to an organization is always that which it does not know or cannot detect. What is changing is that perpetrators understand that their victims can’t cover all their bases all the time. So DDoS attacks can be used as decoys to divert attention, allowing attackers to subsequently move against sectors that are more vulnerable.

Situational Awareness

But coming up with a standardized means of defending against these attacks presents a paradoxical challenge: while ever-expanding risk footprints and evolving foes call for more regulation in security, increased regulation effectively provides cybercriminals with a handbook on how to circumvent an organization’s security.

In an ideal world, every security manager would be able to assess their organization’s security ecosystem at any given time and be able to articulate the current security posture. But for that to happen, risks must be known and acted upon with speed and intelligence, incoming events must be logged and scrutinized in real time, and threats must be identified and anticipated before they become full-blown attacks.

Security Operations

Unfortunately, we do not live in an ideal world, and we cannot obtain “situational awareness” and reap its benefits without the right security architecture, technology solutions, and operational practices. Overall, it is imperative for organizations to integrate a well-designed architectural blueprint with capable security technologies, security intelligence, and resource skills/expertise in order to achieve a more comprehensive threat perspective and informed risk management.

Recent Trends Increase Stakes

Business is about managing risk. Every business decision has inherent risk. It is essential to understand and make decisions based on the cost and benefit of that risk. CISOs no longer lie awake at night just worrying about defending their organization’s perimeter and the latest malware infection. The challenges facing security leaders today are far more complex.

ROSI

The following are examples of recent trends with a significant impact on risk:

  • BYOD
  • Cloud
  • Cyber Threats
  • Mobility
  • Social Media

It is time to rethink security in a broader, holistic context and integrate across silos and functional roles so that organizations can protect what really matters: intellectual property and critical data. The challenge is to create an integrated ecosystem that is fully prepared to provide situational awareness visibility, detect any anomaly, investigate it to verify, and remove threats in a defensible posture.

Traditional technologies are still needed to catch vulnerability-based attacks, but these technologies must be layered to enable the organization to detect, monitor, and stop an attack at each stage of the kill chain. Our goals are to:

  • Evaluate and Manage Threat/Risk
  • Establish Defensible Security Posture
  • Develop Skills and Expertise
  • Defend against Advanced Threats
  • Improve Detection and Reaction time
  • Manage Operational Costs

Distributed Defense Ecosystem

The goal in a Distributed Defense Ecosystem is to align detection and mitigation technologies with earlier phases of the intrusion kill chain (see: Defensible Security Posture) to stop persistent threats. Defensive measures at each stage of the kill chain help ensure resiliency, particularly when faced with adapting threats. When one defense fails to stop or detect an attack, hopefully one of the others will succeed. This is the modern approach to defense in depth – intelligence-driven detection and mitigation aligned with the intrusion kill chain.

APT Defense Flow

  • Layer 1: Protect – Defensible Architecture Foundation
    • Identity and Role-Based Access (Who)
    • Asset Inventory Classification (What)
    • Application & System Zoning/Controls
    • Application Connectivity Management
    • Activity Logging and Monitoring
      • Instrumentation and Telemetry
      • Data Collection and Mining
  • Layer 2: Detect – Security Operations
    • Calibration & Validation
    • Threat Intelligence/Indicators
    • Situational Awareness
      • Behavior Baseline
      • Anomaly Detection
      • Attack Patterns
  • Layer 3: Contain – Incident Response
    • Anomaly Verification
    • Damage Assessment/Containment
    • Incident Response
  • Layer 4: Investigate – Lessons Learned
    • Root Cause Analysis
    • Break the Kill Chain
    • Remediate and Recovery

In Advanced Threat Defense – Part 2 we will further develop this initial foundation and strategy, discussing the various defense-in-depth, situational awareness, containment, and response layers in more detail to begin to define the framework, technology, and practices for a Defensible Security Posture.

Threat Indicators

Conclusion

Dealing with advanced attackers is not for immature security organizations. Organizations need to master security fundamentals and have good security practices in place. Our Security Architecture Series (see links below) takes readers Back to Basics to assemble and align the pieces necessary to develop, update, or validate a modular and flexible security architecture that forms the basis for Advanced Threat Defense.

As discussed in our Threat & Vulnerability Management blog, organizations need to have already hardened key devices and implemented a strong device hygiene (patch and configuration management) program. It is also necessary to properly segment the network to make it difficult for attackers to get at important data, which will be discussed more fully in the Application & System Zoning blog (aka Adaptive Zone Defense) and the Service Communication Management blog (aka Application Architecture Taxonomy), which profiles applications based on classification and risk, determines zone placement, and defines policy and controls.

While traditional endpoint protection is not ‘the’ sole solution, organizations do still need some level of protection on key devices with access to sensitive data.

To help improve security posture, penetration testing and security validation testing are key. There is a lot to be learned from these tests (also see: Vulnerability Assessment Remediation), regardless of whether they are required. The penetration testing exercise can be used to validate security operations / monitoring visibility and detection as a real-world attack scenario, as well as a training tool for the forensic team, providing lessons learned regarding overall security issues.

Security Architecture Series

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices 5

Security Program Best-Practices – Part 5

Security Program Best-Practices – Part 1
Part 2
Part 3
Part 4
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments. In this blog we will discuss the final and most critical Gap 10 – Develop Firewall Rule Lifecycle Management.

Firewall Rule Lifecycle

Gap 10: Firewall Rule Lifecycle Management

Business Problem

Firewalls are the first and continued line of defense for enterprises today, handling vast amounts of traffic across the network. On the perimeter alone firewalls filter millions of packets daily. The organizational security policy implemented in these firewalls requires the definition of hundreds and often thousands of rules and objects. Objects may include groups of servers, user machines, sub-networks in the data center, and networks in company branch offices or DMZs. The firewall rules define which type of applications and which network services are allowed to traverse between networks and which should be blocked.

Firewalls are Organic

Since business needs are dynamic, firewall policies are constantly being changed and modified. This continuous flux causes the firewall configuration to grow dramatically over time. A huge and consequently complex firewall configuration is hard to manage and may require lengthy research in order to add or change a rule. Moreover, the complexity of the configuration decreases the firewall’s performance and may lead to potential security breaches. For example, a rule created to allow a temporary service for a limited time, but which the administrator failed to delete after the task was finished, introduces real security risk.

Complex Rules

Finding unused rules that have not matched any traffic, duplicate rules, and rules that are covered by other rules is a complex manual task for the firewall administrator. It may take days of investigating just to locate such rules in huge firewall configurations, while at the same time the firewall is continuing to change daily due to user requests.

Firewall Policy Vulnerability

Gartner noted in a recent research note that …

“Through 2018, more than 95% of firewall breaches will be caused by firewall misconfigurations, not firewall flaws.”

Organizations need to develop a Firewall Rule Lifecycle Management process to clean up their firewall policies, easing the network security administrator’s job while boosting firewall performance and eliminating security holes.

Organizations need to identify and address the following:

  • Unused rules: Rules that have not matched any packet during a specified time. Cisco ACL hit counters, central syslog logging, or commercial tools can be used to analyze the firewall logs and compare actual traffic against the rules in the policy. Unused rules are ideal candidates for removal; often the application has been decommissioned or the server has been relocated to a different address.
  • Covered or duplicated rules: Rules that can never match traffic because a prior rule, or a combination of earlier rules, prevents traffic from ever hitting them. During firewall cleanup such covered rules can be deleted since they will never be used. Covered and duplicated rules waste processing time and decrease the firewall’s performance.
  • Disabled rules: Rules that are marked “disabled” and are not in operation. Disabled rules are ideal candidates for removal, unless the administrator keeps them for occasional use or for historical record.
  • Time-inactive rules: Rules that were active for a specified period in the past that has since expired. Rules that were active for a specific period can become active again at the same time next year, so retaining such rules may create security holes.
  • Rules without logging: Rules that are defined not to generate logs. Security best-practice guidelines usually dictate logging everything, but since log information consumes a large amount of disk space, administrators often configure highly used rules that control low-risk traffic not to generate logs. Listing the rules without logs helps the administrator verify that the lack of audit for these rules does not contradict policy.
  • Least used rules and most used rules: Rules that matched the smallest or largest number of packets over a predefined, configurable period of time. Rule usage statistics help the administrator improve performance during cleanup: the most used rules can be repositioned higher in the configuration and the least used rules lower. Rules with a zero hit count may be removed.
  • Rules with empty comments: Rules that are not documented, i.e., without a text explanation or a reference to the original change management request. Policy often requires an explanation for each rule, so rules without comments are a policy violation. Some companies require entering the ticket number from the help desk trouble-ticketing application in the rule comment.
  • Unattached objects: Objects that are not attached to any rule, or unattached global objects.
  • Empty objects: Objects that do not contain any IP address or address range.
  • Duplicate objects: Objects that already exist but are recreated, contributing to policy “bloat”.
  • Unused objects: Objects whose address ranges did not match any packet during a specified time, or unused global objects.
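
As an illustrative sketch, several of the categories above lend themselves to automation. The following Python fragment uses hypothetical field names (not tied to any particular firewall vendor) to flag cleanup candidates from exported rule metadata and hit counts:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Rule:
    rule_id: int
    comment: str = ""
    disabled: bool = False
    logging: bool = True
    hit_count: int = 0
    last_match: Optional[datetime] = None

def flag_cleanup_candidates(rules, now, unused_after_days=90):
    """Classify rules into the cleanup categories discussed above."""
    cutoff = now - timedelta(days=unused_after_days)
    flags = {}
    for r in rules:
        reasons = []
        if r.disabled:
            reasons.append("disabled")          # candidate for deletion
        if r.hit_count == 0 or (r.last_match and r.last_match < cutoff):
            reasons.append("unused")            # no traffic in the window
        if not r.logging:
            reasons.append("no-logging")        # verify against audit policy
        if not r.comment.strip():
            reasons.append("no-comment")        # undocumented rule
        if reasons:
            flags[r.rule_id] = reasons
    return flags
```

A report generated this way gives the administrator a defensible candidate list; the actual removal decision still belongs in the change-management process.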

By removing the unnecessary rules and objects, the complexity of the firewall policy is reduced. This improves management, performance increases, and removes potential security holes.

Cleanup Phase 1: Validation

The Validation phase involves reviewing the firewall rules and performing a static analysis, either manually or with the use of public domain or commercial tools such as AlgoSec or Tufin.

Algosec Example

Items to be reviewed in this step are:

  • Unattached Object / Unattached VPN User Group: An object that:
    • Does not appear in any rule
    • Belongs only to groups that do not appear in any rule
    • In any policy on any firewall
  • Empty Objects:
    • Do not refer to any IP address
  • Unattached VPN Users:
    • Do not appear in any user group and have no access
  • Unattached access-list (Cisco):
    • Not connected to any interface
  • Expired VPN users:
    • No longer have access
  • Disabled Rules:
    • Maybe it’s time to delete them?
  • Time-Inactive rules:
    • Timed rules are active on certain days of the month, days of the week, or times of the day…
    • … But you cannot set a year.
    • Identify the expired rules before they become active again next year.
  • Duplicate Rules:
    • Firewalls process the rules in order, “first match”
    • If “early” rules match every packet that a “late” rule could match, the “late” rule is covered (useless clutter!)
    • Easy cases: a single rule covers another rule, and the object names match exactly
  • Duplicate Objects:
    • Most firewall vendor consoles cannot answer the question “does this definition already exist under another name?”
    • Result: administrators often define the same object (Host, Subnet, or Group) multiple times
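
As a sketch of that last point, duplicate objects can be detected by grouping object names by their resolved definition. A minimal Python illustration (the data shapes are hypothetical, not tied to any vendor's export format):

```python
from collections import defaultdict

def find_duplicate_objects(objects):
    """Group firewall objects by definition; names that share an identical
    address set are duplicates contributing to policy bloat."""
    by_definition = defaultdict(list)
    for name, addresses in objects.items():
        by_definition[frozenset(addresses)].append(name)
    return [sorted(names) for names in by_definition.values() if len(names) > 1]

# Example: two names, one definition
objs = {
    "web-servers": {"10.0.1.10", "10.0.1.11"},
    "dmz-web": {"10.0.1.10", "10.0.1.11"},   # same definition, different name
    "db-servers": {"10.0.2.20"},
}
# find_duplicate_objects(objs) → [["dmz-web", "web-servers"]]
```

The same grouping idea extends to Subnets and Groups once they are resolved down to address sets.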

Cleanup Phase 2: Unused Rules

The Unused Rules phase involves usage-based analysis: focusing on what has changed recently, ensuring that the firewall rules are kept up to date, and flagging and/or removing rules that are no longer needed so that the firewall does not become unwieldy and risk conflicts or duplicates.

Rules Cleanup

This step allows us to identify key and useful data:

  • Unused Rules:
    • have not matched traffic in the last NNN days
  • Unused Objects:
    • Do not belong to any rule that matched traffic in the last NNN days
  • Most / Least used rules
  • Last date that rule was used
    • Even if it is listed as “unused” due to logging configuration settings

The following considerations and notes should be borne in mind for this step:

  • Over time:
    • Applications are discontinued
    • Servers are relocated to other IP addresses
    • Test environments move to production
    • Business partnerships change
    • Networks are re-architected
    • Routing is changed
  • Result: Firewalls still have the rules – but the traffic is gone
  • Idea: Track and flag rules and objects that have not been used “recently”
  • Firewalls can log each matched packet
  • Log includes rule number, timestamp, and more
  • Basic approach:
    • 1) Filter the logs based on rule number
    • 2) Find the missing rule numbers and delete those rules
  • Challenge #1: Logging is configured per rule
    • Some rules are not configured to produce logs
  • Solution #1: List rules that do not produce logs separately
  • Challenge #2: Rule Insertions & Deletions change the rule numbers!
    • Which rule corresponds to what used to be called rule 101 in Nov ’07?
    • Makes long-term statistics unreliable
  • Solution #2: Vendor attaches a unique “rule_id” to each rule, such that:
    • Reported to log
    • Remains with rule through any rule add/remove/modify
  • Cisco Firewalls & Routers maintain a per-rule hit-counter
  • Advantages:
    • Unrelated to logging: un-logged rules are counted too
    • Rule insertions & deletions do not affect the hit-counters
  • Challenge:
    • Hit-counters are reset to zero when device reboots
  • Solution:
    • Take periodic snapshots
    • Attach pseudo rule_uids, homogenize the snapshots
    • Make sure not to double-count …
  • Some rules only work occasionally or rarely
    • High-shopping season
    • Disaster recovery rules – tested semi-annually
    • Need usage information of many months
  • Challenge:
    • Log files can become huge – querying extended historical data can have a real impact on the production log server
    • Logs are discarded or rotated
    • Hit-counters are occasionally set to 0
  • Solution:
    • Process the raw usage information frequently (daily)
    • … But keep concise summaries available (forever)
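
The snapshot-and-summarize approach above can be sketched in a few lines of Python. The key point is the reset heuristic: a hit counter lower than its previous snapshot indicates a device reboot, so the new value is counted from zero rather than double-counted (field names are illustrative):

```python
def accumulate_hits(snapshots):
    """Fold periodic per-rule hit-counter snapshots into lifetime totals.

    Snapshots must be in chronological order. A counter lower than its
    previous snapshot is treated as a reboot (counter reset), so only the
    new count since the reset is added -- never the earlier hits twice.
    """
    totals, previous = {}, {}
    for snap in snapshots:
        for rule_id, count in snap.items():
            prev = previous.get(rule_id, 0)
            delta = count - prev if count >= prev else count  # reset detected
            totals[rule_id] = totals.get(rule_id, 0) + delta
            previous[rule_id] = count
    return totals
```

Note the heuristic undercounts any hits that occurred between the last snapshot and the reboot, which is why frequent (e.g., daily) snapshots are recommended, with only the concise summaries kept long-term.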

Cleanup Phase 3: Performance Optimization

To provide a measurable attribute of firewall performance that shows the improvement from policy optimization, there is a metric called Rules Matched Per Packet (RMPP).

Rule Optimization

RMPP is simply a calculation of the average number of rules the firewall tested until it reached the rule that matched a packet (including the matched rule). For example:

If the firewall policy consists of only one rule (allow or deny all) that matches everything – RMPP will be 1. If the firewall policy consists of 100 rules, such that rule #1 matches 20% of the packets, rule #10 matches 30% and rule #100 matches 50% of the packets:

RMPP = 1 * 20% + 10 * 30% + 100 * 50% = 0.2 + 3 + 50 = 53.2

Firewalls do in fact test the rules in sequence, one after another, until they reach the matching rule, and each tested rule contributes to the firewall’s CPU utilization. Therefore, optimizing the policy to decrease the RMPP score will decrease the firewall CPU utilization and greatly improve overall performance.

Building on the previous example, if rule #100 (that matches 50% of the packets) can be relocated to position #50 – without modifying the firewall policy decisions – the RMPP will be reduced significantly:

RMPP = 1 * 20% + 10 * 30% + 50 * 50% = 0.2 + 3 + 25 = 28.2

This simple change, which can be achieved by reordering the rules, can produce a 47% improvement in firewall performance.
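
The RMPP arithmetic above is easy to verify programmatically. A small sketch:

```python
def rmpp(rules):
    """Rules Matched Per Packet: the average number of rules tested per
    packet, given (position, traffic_share) pairs with shares summing to 1."""
    return sum(position * share for position, share in rules)

# The worked example from the text: rule #100 carries 50% of the traffic...
before = rmpp([(1, 0.20), (10, 0.30), (100, 0.50)])  # 53.2, as above
# ...and is then relocated to position #50 without changing policy decisions.
after = rmpp([(1, 0.20), (10, 0.30), (50, 0.50)])    # 28.2 after reordering
improvement = (before - after) / before              # roughly 47%
```

Running the same calculation over real usage statistics lets an administrator rank candidate reorderings before touching the policy.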

Conclusion

Firewall administrators can achieve significant and measurable performance improvements for their complex firewalls by using these cleanup, lifecycle management, and policy optimization (with rule reordering) techniques. There are many commercial tools available that help in policy cleanup by identifying rules that are unused, covered, or disabled and should ideally be removed, in addition to unattached, empty, duplicate, and unused objects. The tools help to eliminate security risks and keep the firewall policy well managed by alerting administrators.

The more veteran firewall audit vendor list includes: Tufin Software Technologies, AlgoSec, Secure Passage and Athena Security — and then RedSeal Systems and Skybox Security, which are primarily risk-mitigation tools, and so go beyond firewall audit to feature risk-assessment and risk-management capabilities.

Thanks for your interest!

Nige the Security Guy.

Security Program Best-Practices 4

Security Program Best-Practices – Part 4

Security Program Best-Practices – Part 1
Part 2
Part 3
Part 5

This blog continues our Security Governance Series with the next installment of recommended security program best-practices drawn from a broad sample of assessments.

As a refresher, in Part 1 we shared some typical gaps, deficiencies, or needed improvements, summarized in the Opportunity Matrix below. The Opportunity Matrix can be used as a capability maturity assessment and iterative planning tool to present proposed next steps to executive management for approval and funding.

Opportunity Matrix Summary

Part 1 through Part 3 of the Security Program Best-Practices series covered an overview as well as Gap 01 through Gap 07 inclusive. In this blog we will discuss Gap 08 – Integrate Central Security Logging through Gap 09 – Establish Network Security Operations, per summary below.

  • GAP 01 – Identify Requirements: Security Policy, Regulation and, Laws
  • GAP 02 – Develop Security Governance Program
  • GAP 03 – Establish Network Security Organization
  • GAP 04 – Establish Security Collaboration Working Group (WG)
  • GAP 05 – Develop and Maintain Network Security Standards
  • GAP 06 – Develop Network Security Architecture (3-5 Year Objective)
  • GAP 07 – Develop Network Security Roadmap (with Annual Plans)
  • GAP 08 – Integrate Central Security Logging
  • GAP 09 – Establish Network Security Management & Operations
  • GAP 10 – Develop Firewall Rule Lifecycle Management

Gap 08: Integrate Central Security Logging

Business Problem

To enable and deploy a defensible security posture, pervasive and mission-critical information technology and hyper-extended networks must be scrupulously monitored to detect anomalies and threats. High traffic volumes are also associated with higher threat levels, making automated network monitoring, alerting, and response indispensable. Automated monitoring improves system security, performance, and availability by allowing management by fact. Automation also frees the IT team to focus on exceptions, which in turn simplifies holistically managing large amounts of event data.

Vulnerability Types

Being able to monitor various instrumentation telemetry data sources and event logs gives an administrator a substantial advantage in identifying threats early on – rather than investigating them after the fact. A sound logging strategy is the centerpiece in any organization’s “big picture – big data” network security plan. The presence of event monitoring within its log strategy helps distinguish a proactive plan from a reactive plan.

It is well established among network security professionals that the greatest threats to network security are in fact internal – they often originate in the same building, on the same floor perhaps, and often right down the hall. The source may be a disgruntled employee, a curious staff member in the payroll department, or a bored sales representative. For several years, this threat was overlooked in favor of the sexier external threat – the hackers working in dark home offices late at night or a competitor’s agent of corporate espionage.

To a network security administrator, event logs are like a history book or the gauges of an automobile. Event logs allow administrators to look back at the recent history of a server or network device and see trends, failures, successes, and other vital information to the organization.

Botnet Army

Our richly interconnected online world has faced an ever-increasing volume of malware and worm variants – even botnets. They exploit vulnerabilities in, for example, the Windows operating system and systematically reproduce across the organization. All the while, servers, routers, and other network devices quietly log these events across LANs and WANs. For administrators, these log files give a snapshot of a window (excuse the pun) in time that shows when, where, and most of the time, how the infection or compromise entered their controlled space.

Event logs also hold potentially valuable forensic evidence. In the aftermath of a network security breach, event logs hold all of the information about the breach: how it happened, when it happened, and, in the end, the keys to preventing another breach. This data enables the ability to Detect, Contain, and Eradicate, as well as to investigate the root cause and prevent recurrence in the future.
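
As a minimal illustration of turning centralized event logs into early warning, the sketch below counts authentication failures per host, a crude brute-force indicator. The syslog-style line format and threshold are simplified assumptions for illustration, not a production parser:

```python
import re
from collections import Counter

# Simplified syslog-style line, e.g.:
# "Jan 12 03:14:07 srv1 sshd[88]: authentication failure for root"
LINE = re.compile(r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s(?P<msg>.*)$")

def failed_login_sources(lines, threshold=5):
    """Count 'authentication failure' events per host and flag hosts
    exceeding the threshold."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and "authentication failure" in m.group("msg").lower():
            counts[m.group("host")] += 1
    return {host: n for host, n in counts.items() if n >= threshold}
```

Real deployments would add time windows, source addresses, and correlation across device types, but the principle is the same: the logs already hold the story, and automation surfaces it before the after-the-fact investigation.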

Gap 09: Establish Network Security Operations

Business Problem

The problem with network security is not the lack of good security tools; it is the management of those tools and the exposure to human error. Large networks generate an overwhelming amount of logs and security events. Firewalls, intrusion detection systems, web servers, authentication devices, and many other network elements contribute more and more logs, which need to be analyzed to produce actionable information.

Holistic Logging

At first there is a lot of noise, and false positives that need to be resolved by profiling network traffic and tuning network security technologies to the organization’s business – to detect anomalies and realize the true potential and value of the technology. Too many organizations deploy the solution out-of-the-box and stop there, disappointed by all of the noise and overwhelmed by the task at hand.

However, this ongoing tuning effort can reduce the volume of alerts from thousands per day to dozens. When a correlation occurs, a simple message stating that a particular server has been attacked with a technique that is likely to succeed can be sent to system owners, operations staff, and other stakeholders. The operator then starts to realize value from the technology and its automation, focusing on the alerts and events that need action – determining whether or not they represent a breach and thus warrant further investigation.
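The correlation step described above can be sketched in a few lines: suppress an alert unless the exploit it reports can actually succeed against the targeted host. The asset inventory, host names, and CVE identifiers below are hypothetical placeholders, not real SIEM output.

```python
# Hypothetical asset inventory: which hosts are vulnerable to which exploit IDs.
VULNERABLE = {
    "web01": {"CVE-2013-0001"},
    "db01": set(),
}

# Raw detections as a sensor might report them, before correlation.
raw_alerts = [
    {"host": "web01", "exploit": "CVE-2013-0001"},  # vulnerable: actionable
    {"host": "web01", "exploit": "CVE-2099-9999"},  # not vulnerable: noise
    {"host": "db01",  "exploit": "CVE-2013-0001"},  # not vulnerable: noise
]

def correlate(alerts, inventory):
    """Keep only alerts whose exploit is likely to succeed against the target."""
    return [a for a in alerts
            if a["exploit"] in inventory.get(a["host"], set())]

actionable = correlate(raw_alerts, VULNERABLE)
# Only the web01 / CVE-2013-0001 pairing survives the correlation step.
```

In this toy run, three raw detections collapse to one actionable alert – the same thousands-to-dozens reduction, in miniature, that well-tuned correlation delivers in production.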

Attackers typically create a smoke screen, a set of decoys that obscure the actual compromise or infection so that it is lost in the noise and the security operators are so overwhelmed they do not detect the stealthy attack. This is validated by the recent spate of DDoS attacks that not only seek to deny normal service but also seek to compromise servers under cover of the attack. Many SIEM solutions generate a lot of noise out-of-the-box and need tuning to weed out false positives.

Detection and Response

Systems fail and intrusions occur. At some point compromise is inevitable. Therefore, detection and containment are imperative. The earlier an intrusion or infection is detected, the greater the ability of the organization to mitigate the risk. Intrusion detection is considered the second line of perimeter defense, after the firewall. Intrusions can lead to malicious acts such as: identity theft; compromise of confidential information; and unauthorized changes to files, systems, and device configurations.

Threat Landscape

An organization’s ability to detect and prevent intrusions adds more depth to its defensive security posture. Organizations must be aware that intrusion detection alone will not mitigate the risk of an intrusion. Mitigation can only occur with a timely and appropriate response. A prudent response program incorporates people and processes in addition to technology, and starts with the creation of a computer security incident response team (CSIRT) that will be the initial responder when an incident is identified. In addition to the CSIRT, policies must be developed to guide the organization and team in responding to an event. Types of events and the specific procedures to be followed also need to be defined. The development of an incident response program is typically mandated by regulation, international standards, or industry best practices.

The timely detection of an intrusion, coupled with being prepared to respond, is vital to minimizing financial, production, and operational losses. Specific actions and responsibilities need to be pre-assigned and the appropriate training provided. In addition, containment and restoration strategies need to be outlined that address: isolation of the compromised system; increased monitoring; collection and preservation of evidence; and notification of law enforcement, regulators, and other affected parties.

Continuous Improvement

Monitoring and updating the security program is essential to maintaining the effectiveness of the program. A static program will be ineffective over time and can leave the organization with a false sense of security. Monitoring should include both non-technical as well as technical issues.

Plan Do Check Act

Non-technical issues would include changes in business processes, policies and procedures, locations, sensitivity of data, key personnel, and organizational changes.

Technical issues include monitoring for vulnerabilities, changes in systems, service providers, configuration, users, products, and services. When changes do occur, it is imperative that they are reviewed for accuracy and legitimacy and the program is adjusted to reflect the changes and ensure continued security and operational success.

Accidental changes can be just as damaging as malicious or fraudulent change activities – resulting in increased remediation costs and potential losses or a negative effect on the organization’s top-line revenue. Best practices mandate the monitoring of all changes, intended and unintended, creating an audit trail that details when, what, and how the change occurred. The use of automated change control and audit tools will also enhance operational efficiency by increasing the effectiveness and productivity of your security personnel.
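One common building block of automated change auditing is an integrity baseline: hash the approved content of each monitored item and flag anything that no longer matches. The sketch below hashes in-memory byte strings for simplicity; a real tool would read files from disk, and the paths and contents shown are invented for the example.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of the content; any change yields a different digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical approved baseline; in practice built from files on disk.
baseline = {"/etc/firewall.conf": sha256(b"allow 443\ndeny all\n")}

def detect_changes(current, baseline):
    """Return paths whose content no longer matches the approved baseline."""
    return sorted(path for path, data in current.items()
                  if sha256(data) != baseline.get(path))

# An unapproved rule (telnet!) slipped in, accidentally or maliciously.
drifted = detect_changes(
    {"/etc/firewall.conf": b"allow 443\nallow 23\ndeny all\n"},
    baseline,
)
# drifted → ["/etc/firewall.conf"]
```

The same detection fires whether the change was accidental or malicious – which is exactly the point: the audit trail records that a change happened, and the change control process then determines whether it was legitimate.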

Each change can potentially create a vulnerability or weakness in the security program if not properly evaluated, tested, and deployed. Therefore, strong change control procedures and monitoring are critical to reduce the exposure to financial losses, reputation damage, and loss of productivity.

Validation: Trust but Verify

To assure that its security strategies are adequate, each organization must test its controls against the risk events that were identified through its formal risk assessment. The higher the probability and negative impact of a risk event, the greater the need to validate the effectiveness of the security controls. The type of test to perform and its frequency should also be based on risk.
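That risk-based prioritization is often reduced to a simple likelihood-times-impact score. The toy risk register below is hypothetical, and the 1–5 scales and example events are assumptions for illustration only.

```python
# Hypothetical risk register: likelihood and impact on a 1-5 scale.
RISKS = [
    {"event": "Internet-facing web compromise", "likelihood": 4, "impact": 5},
    {"event": "Lost backup tape",               "likelihood": 2, "impact": 4},
    {"event": "Internal print server outage",   "likelihood": 3, "impact": 1},
]

def prioritize(risks):
    """Rank risk events by likelihood x impact; the highest-scoring events
    warrant the most frequent and rigorous testing (e.g. penetration tests)."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)

ranked = prioritize(RISKS)
# ranked scores: 20 (web compromise), 8 (lost tape), 3 (print outage)
```

Under this scoring, the Internet-facing compromise (score 20) would merit an independent penetration test, while the low-scoring print server outage might only need a periodic configuration review.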

Risk Management

Prior to testing, detailed test plans need to be developed to ensure testing is appropriate and controls are established to protect data integrity and confidentiality and to ensure availability. Test results need to be measurable and traceable to provide assurance that the security strategy is meeting security objectives.

There are a variety of testing methodologies and tools available, many of which can be automated to improve efficiency and enable independence. Independent diagnostic tests include penetration tests, audits, and gap assessments performed by credible individuals who are independent of the design, installation, maintenance, and operation of the test subject area. Examples of resources that help support and streamline the testing effort include: log and audit files generated by security event management systems, change management reports, automated audit tools coupled with penetration testing, prior security gap assessment findings and recommendations, and internal IT audit findings and recommendations from prior audits.

No one control or solution can ever guarantee 100 percent security. High-performing organizations understand that business and technology risk management best practices mandate a defense-in-depth security approach that includes multiple controls and can be validated with internal and external audit resources. When properly aligned with the organization’s risk profile, all of the controls discussed above help to establish a practical and prudent risk-based security posture.

Balancing Security

When properly aligned with the organization’s business goals, audit personnel and tools can validate the appropriateness of these controls and help to ensure operational excellence and a secure infrastructure.

Coming Soon

Security Program Best-Practices – Part 5 will complete this Security Governance Series with a significant topic that warrants its own blog: Gap 10 – Firewall Rule Lifecycle Management, with discussion and helpful advice on its key components.

Thanks for your interest!

Nige the Security Guy.