APT Threat Analytics – Part 2

In today’s threat environment, rapid communication of pertinent threat information is vital to quickly detecting, responding to, and containing targeted attacks. The key to improving an organization’s reaction time is a workflow and set of tools that allow threat information to be analyzed and actionable results communicated across the organization rapidly.


In this APT Threat Analytics – Part 2 blog, we discuss the options for threat intelligence collaboration and sharing, together with a current snapshot of the available tools, standards, and developments, to help determine whether your organization can benefit from a dedicated in-house threat intelligence program.

Given the escalating threat landscape, improving threat detection is key for most organizations, specifically developing an intelligence-driven approach. This requires collecting and consolidating threat intelligence data from internal sources (e.g. systems and applications) and external sources (e.g. government or commercial threat feeds) and using analytics to spot attack indicators or anomalous behavior patterns.

The core team should have the capabilities to achieve good visibility and situational awareness across the organization.

  • People: Internal and external networking for developing good sources of intelligence, communication for developing reports and presenting intelligence briefings
  • Process: Designing an end-to-end intelligence process including obtaining and filtering data, performing analysis, communicating
    actionable results, making a risk decision, and taking action
  • Technology: Analysis (drawing connections between seemingly disconnected data), and data analytics techniques

What is Threat Intelligence?

Threat intelligence means different things to different organizations, so it’s important to first define what threat intelligence means to your organization. The ultimate goals of threat intelligence gathering and sharing should be as follows:

  • Develop actionable intelligence
  • Better intelligence translates to better protection
  • Increased protection translates to less fraud and reduced revenue loss
  • Collective intelligence is far more effective than individual silos
  • Both external and internal sources are needed to address targeted threats

In-House Threat Intelligence

With the increase in advanced, multidimensional threats, organizations can no longer depend solely on existing automated perimeter gateway tools to weed out malicious activity. More and more organizations are considering development of an in-house threat intelligence program, dedicating staff and other resources to network baselining, anomaly detection, deep inspection, and correlation of network and application data and activity.

With the advanced, blended, multidimensional, targeted cyber attacks being levied today, your organization still needs an experienced set of human eyes analyzing the data collected to protect its assets, not to mention its reputation.

APT Attack Patterns

Performing in-house threat intelligence need not be complex or expensive. Such a program can be as simple as IT staff being trained to pay closer attention to data. In other cases, threat intelligence might mean having a team of people performing deep content inspection and forensics on a full-time basis. Where an organization falls in that range depends on various factors such as critical assets/data, risk tolerance, core competency, and so on. The organization may elect a DIY in-house model or engage a co-sourced partner.

One of the biggest benefits of taking control of internal threat intelligence is that it forces organizations to:

  • Develop a deep understanding of –
    • How systems are used
    • How data is accessed
  • Recognize traffic and usage patterns
  • Pay attention to log data and correlate that data with a known baseline of how users interact with data, applications, and servers
  • Consolidate and manage log sources

With all of this data at an analyst’s fingertips, organizations can recognize the subtle anomalies that may indicate an attack—the main goal of your threat intelligence effort. For more information, see the forthcoming APT Anomaly Detection blog as well as the new NG Security Operations (SoC V2) series.

APT Behavior

What would constitute a subtle anomaly? One example is inappropriate remote access to critical servers in your environment. Many organizations don’t bother to actively audit remote desktop access to critical servers, but what if out of the blue you detect repeated Remote Desktop Protocol (RDP) sessions and failed logins to a domain controller from a new system in your environment? Your gateway tools won’t help here, and this activity would certainly warrant investigation because it could indicate the beginning (or middle) of an advanced persistent threat (APT).
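The RDP scenario above can be sketched in a few lines of Python. The Windows conventions used (event ID 4625 for a failed logon, logon type 10 for RDP/RemoteInteractive) are real, but the event records, hosts, baseline, and threshold below are invented for illustration; real data would come from your SIEM or Windows event logs.

```python
from collections import Counter

# Hypothetical, simplified logon events (4625 = failed logon,
# logon type 10 = RemoteInteractive, i.e. RDP).
events = [
    {"event_id": 4625, "logon_type": 10, "source": "10.0.5.17", "target": "DC01"},
    {"event_id": 4625, "logon_type": 10, "source": "10.0.5.17", "target": "DC01"},
    {"event_id": 4625, "logon_type": 10, "source": "10.0.5.17", "target": "DC01"},
    {"event_id": 4624, "logon_type": 10, "source": "10.0.1.2",  "target": "DC01"},
]

KNOWN_ADMIN_HOSTS = {"10.0.1.2"}   # baseline of hosts expected to use RDP
FAILED_THRESHOLD = 3

def rdp_anomalies(events, baseline, threshold):
    """Flag unknown hosts with repeated failed RDP logons to any server."""
    failures = Counter(
        (e["source"], e["target"])
        for e in events
        if e["event_id"] == 4625 and e["logon_type"] == 10
    )
    return [
        {"source": src, "target": tgt, "failed": n}
        for (src, tgt), n in failures.items()
        if src not in baseline and n >= threshold
    ]

print(rdp_anomalies(events, KNOWN_ADMIN_HOSTS, FAILED_THRESHOLD))
```

The key design point is the baseline: the same failed-logon count from a known administrative host would not be flagged, which is exactly the "correlate against how users normally interact" idea discussed earlier.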

Picking the Right Tools

There are a few major hurdles when it comes to performing comprehensive cyber threat intelligence (CTI) in-house. Organizations need a core set of security tools to provide the essential foundational elements for performing threat intelligence. They should certainly consider leveraging external sources and service providers to fill in gaps in their defenses.

Some of the frameworks, tools, standards, and working groups to consider include:

  • OpenIOC – Open Indicators of Compromise framework
  • VERIS – Vocabulary for Event Recording and Incident Sharing
  • CybOX – Cyber Observable eXpression
  • IODEF – Incident Object Description and Exchange Format
  • TAXII – Trusted Automated eXchange of Indicator Information
  • STIX – Structured Threat Information Expression
  • MILE – Managed Incident Lightweight Exchange
  • TLP – Traffic Light Protocol
  • OTX – Open Threat Exchange
  • CIF – Collective Intelligence Framework

IODEF

Incident Object Description and Exchange Format (IODEF) is a standard defined in Request for Comments (RFC) 5070. IODEF is an XML-based standard used by Computer Security Incident Response Teams (CSIRTs) to share incident information. The IODEF data model includes over 30 classes and subclasses used to define incident data.

IODEF provides a data model to accommodate the most commonly exchanged data elements and associated context for indicators and incidents.
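As a rough illustration, a minimal IODEF-style document can be assembled with Python’s standard XML tooling. This uses the RFC 5070 namespace but only a tiny, simplified subset of the data model (IncidentID, ReportTime, Description), so treat it as a sketch rather than a schema-valid example; the incident values are invented.

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:iodef-1.0"
ET.register_namespace("iodef", NS)

def make_incident(incident_id, origin, report_time, description):
    """Build a minimal IODEF-style Incident document (simplified subset)."""
    doc = ET.Element(f"{{{NS}}}IODEF-Document", {"version": "1.00"})
    incident = ET.SubElement(doc, f"{{{NS}}}Incident", {"purpose": "reporting"})
    iid = ET.SubElement(incident, f"{{{NS}}}IncidentID", {"name": origin})
    iid.text = incident_id
    rt = ET.SubElement(incident, f"{{{NS}}}ReportTime")
    rt.text = report_time
    desc = ET.SubElement(incident, f"{{{NS}}}Description")
    desc.text = description
    return ET.tostring(doc, encoding="unicode")

xml_out = make_incident("2024-0042", "csirt.example.com",
                        "2024-05-01T12:00:00Z", "Repeated RDP brute force")
print(xml_out)
```

Because the format is plain XML, any CSIRT tooling that speaks IODEF can parse what another team produces, which is the whole point of the standard.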

OpenIOC

OpenIOC was introduced by Mandiant. It is used in Mandiant products and tools such as Redline, but has also been released as an open standard. OpenIOC provides definitions for specific technical details, including over 500 indicator terms. It is an XML-based standardized format for sharing threat indicators.

  • Derived from years of “What Works” for Mandiant
  • Indicator Terms
    • Artifacts on Hosts and Networks
  • Logical Comparisons
    • Groupings, Conditions
  • Ability to Store & Communicate Context
  • Continues to be developed and improved
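The logical-comparison feature above can be sketched as a small evaluator over an AND/OR indicator tree. The term names loosely echo OpenIOC’s host artifact vocabulary but are not exact schema names, and the IOC and observed artifacts are invented for illustration.

```python
def evaluate(indicator, observed):
    """Recursively evaluate an OpenIOC-style indicator tree against
    a set of observed artifact (term, value) pairs."""
    if "operator" in indicator:  # an Indicator grouping: AND / OR
        results = (evaluate(item, observed) for item in indicator["items"])
        return all(results) if indicator["operator"] == "AND" else any(results)
    return (indicator["term"], indicator["value"]) in observed  # leaf item

# Hypothetical IOC: mutex AND (filename OR md5)
ioc = {
    "operator": "AND",
    "items": [
        {"term": "ProcessItem/Mutex", "value": "Global\\EvilMutex"},
        {"operator": "OR", "items": [
            {"term": "FileItem/FileName", "value": "svch0st.exe"},
            {"term": "FileItem/Md5sum", "value": "d41d8cd98f00b204e9800998ecf8427e"},
        ]},
    ],
}

observed = {("ProcessItem/Mutex", "Global\\EvilMutex"),
            ("FileItem/FileName", "svch0st.exe")}
print(evaluate(ioc, observed))
```

The nesting is what gives OpenIOC its expressive power: a filename alone is weak evidence, but a filename or hash combined with a known-bad mutex is a much stronger signal.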

OpenIOC Process Flow

VERIS

The Vocabulary for Event Recording and Incident Sharing (VERIS) framework was released by Verizon in March 2010. As the name implies, VERIS provides a standard way of defining and sharing incident information. VERIS is an open and free set of metrics designed to provide a common language for describing security incidents (or threats) in a structured and repeatable manner.

  • DBIR participants use the VERIS framework to collect and share data
  • Enables case data to be shared anonymously with the RISK Team for analysis
Vocabulary for Event Recording and Incident Sharing (VERIS) Overview
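To make the framework concrete, here is a toy VERIS-style record using the A4 model (actor, action, asset, attribute). The field names are simplified, and the validation check is our own sketch, not part of the VERIS schema.

```python
import json

# Illustrative, simplified VERIS-style incident record (A4 model).
incident = {
    "action": {"hacking": {"variety": ["Brute force"], "vector": ["Remote access"]}},
    "actor": {"external": {"motive": ["Financial"]}},
    "asset": {"assets": [{"variety": "S - Database"}]},
    "attribute": {"confidentiality": {"data": [{"variety": "Payment"}]}},
}

REQUIRED_A4 = {"actor", "action", "asset", "attribute"}

def is_minimally_valid(record):
    """Check that all four A4 sections are present."""
    return REQUIRED_A4.issubset(record)

print(is_minimally_valid(incident))
print(json.dumps(incident, sort_keys=True)[:40])
```

Because every participant describes incidents with the same four sections, records from different organizations can be pooled and compared, which is what makes the anonymized DBIR-style analysis possible.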

CIF

The Collective Intelligence Framework (CIF) is a client/server system for sharing threat intelligence data. CIF was developed out of the Research and Education Network Information Sharing and Analysis Center (REN-ISAC). CIF includes a server component which collects and stores threat intelligence data. Data can include IP addresses, autonomous system numbers (ASNs), email addresses, domain names, uniform resource locators (URLs), and other attributes. The data can be accessed via various client programs.

CIF Architecture

  • CIF is a cyber threat intelligence management system
  • Can combine known malicious threat information from many sources
  • Use information for action: identification (incident response), detection (IDS) and mitigation
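The CIF model can be sketched in miniature: merge indicators from several feeds, then query by type and confidence. A real CIF deployment does this through a client/server REST API; the in-memory version below, with invented feed data, only illustrates the idea.

```python
# Toy feeds of threat indicators (all values invented).
feeds = {
    "feed_a": [{"indicator": "198.51.100.7", "type": "ipv4", "confidence": 85}],
    "feed_b": [{"indicator": "evil.example.net", "type": "fqdn", "confidence": 65},
               {"indicator": "198.51.100.7", "type": "ipv4", "confidence": 95}],
}

def merge_feeds(feeds):
    """Combine all feeds, keeping the highest confidence per indicator."""
    best = {}
    for records in feeds.values():
        for rec in records:
            key = rec["indicator"]
            if key not in best or rec["confidence"] > best[key]["confidence"]:
                best[key] = rec
    return list(best.values())

def query(store, itype, min_confidence=0):
    """Fetch indicators of a given type at or above a confidence floor."""
    return [r for r in store
            if r["type"] == itype and r["confidence"] >= min_confidence]

store = merge_feeds(feeds)
print(query(store, "ipv4", min_confidence=90))
```

The confidence floor is the practical knob: identification and incident response can tolerate lower-confidence data than automated mitigation, where a false positive blocks legitimate traffic.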

CybOX


CybOX is used for defining details regarding measurable events and stateful properties. The objects that can be defined in CybOX can be used in higher-level schemas like STIX. The goal of CybOX is to enable automated sharing of security information such as CTI. It does this by providing over 70 defined objects that can be used to define measurable events or stateful properties. Examples of objects are File, HTTP Session, Mutex, Network Connection, Network Flow, and X509 Certificate.
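As a sketch, a CybOX-style File observable might be modeled like this. Real CybOX is an XML schema; the dictionary keys below mirror its spirit (a File object carrying a name, size, and hashes) rather than its exact syntax, and the file details are invented.

```python
def file_observable(file_name, size_bytes, md5):
    """Build a CybOX-flavored File observable (illustrative shape only)."""
    return {
        "object_type": "FileObjectType",
        "properties": {
            "file_name": file_name,
            "size_in_bytes": size_bytes,
            "hashes": [{"type": "MD5", "value": md5}],
        },
    }

obs = file_observable("dropper.exe", 48123, "9e107d9d372bb6826bd81d3542a419d6")
print(obs["properties"]["hashes"][0]["value"])
```

The value of a fixed object vocabulary is that a "file with this hash" observable means the same thing to every producer and consumer, so higher-level schemas like STIX can reference it without ambiguity.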

STIX


Mitre developed Structured Threat Information Expression (STIX) to define threat information, including threat details as well as the context of the threat. STIX is designed to support four cyber threat use cases:

  • Analyzing cyber threats
  • Specifying indicator patterns
  • Managing response activities
  • Sharing threat information

It uses XML to define threat-related constructs such as campaign, exploit target, incident, indicator, threat actor, and TTP.
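A hedged sketch of how these constructs relate: an indicator that references a TTP, attributed to a threat actor within a campaign. Real STIX 1.x is XML; the plain dictionaries below only show the relationships, and every identifier and title is invented.

```python
# Toy STIX-style package tying constructs together (illustrative only).
package = {
    "campaign": {"id": "campaign-1", "title": "Hypothetical Spearphish Wave"},
    "threat_actor": {"id": "actor-1", "title": "Unknown APT group"},
    "ttp": {"id": "ttp-1", "title": "Spearphishing attachment"},
    "indicator": {
        "id": "indicator-1",
        "title": "Malicious attachment hash",
        "observable": {"md5": "9e107d9d372bb6826bd81d3542a419d6"},
        "indicated_ttp": "ttp-1",
    },
}

def resolve_ttp(pkg, indicator_id):
    """Follow an indicator's reference to its TTP (toy relationship lookup)."""
    ind = pkg["indicator"]
    if ind["id"] != indicator_id:
        return None
    return pkg["ttp"]["title"] if ind["indicated_ttp"] == pkg["ttp"]["id"] else None

print(resolve_ttp(package, "indicator-1"))
```

This linking is what distinguishes STIX from a bare indicator list: the hash alone says "block this file," while the package says who is using it, how, and as part of which campaign.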

Structured Threat Information Expression (STIX) V1 Architecture

STIX provides a common mechanism for addressing structured cyber threat information across a wide range of use cases:

  • Analyzing Cyber Threats
  • Specifying Indicator Patterns for Cyber Threats
  • Managing Cyber Threat Response Activities
    • Cyber Threat Prevention
    • Cyber Threat Detection
    • Incident Response
  • Sharing Cyber Threat Information

TAXII

taxii-logo

Trusted Automated eXchange of Indicator Information (TAXII), developed by Mitre, supports sharing of threat intelligence data. The Mitre definition states that TAXII “defines a set of services and message exchanges for exchanging cyber threat information.” Its exchange models allow for push or pull transfer of CTI data and are supported by four core services:

  • Discovery
  • Feed Management
  • Inbox
  • Poll
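The split between the four services, and between push (Inbox) and pull (Poll), can be sketched as a toy dispatcher. Real TAXII defines XML message formats and HTTP bindings, so the function names, feed names, and payloads here are placeholders.

```python
# In-memory stand-in for a TAXII server's feed storage.
FEEDS = {"default": []}

def discovery(_request):
    """Advertise which services this endpoint offers."""
    return {"services": ["discovery", "feed_management", "inbox", "poll"]}

def feed_management(_request):
    """List the feeds a consumer can subscribe to."""
    return {"feeds": sorted(FEEDS)}

def inbox(request):
    """Push: a producer delivers a content block into a feed."""
    FEEDS[request["feed"]].append(request["content"])
    return {"status": "SUCCESS"}

def poll(request):
    """Pull: a consumer fetches the content blocks of a feed."""
    return {"content_blocks": list(FEEDS[request["feed"]])}

inbox({"feed": "default", "content": "<stix:Indicator .../>"})
print(poll({"feed": "default"}))
```

Note that TAXII itself is transport: the content blocks it moves are typically STIX documents, which is why the two standards are usually deployed together.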

OTX


Open Threat Exchange (OTX) is a publicly available service created by AlienVault for sharing threat data. OTX cleanses, aggregates, validates, and publishes threat data streaming in from a broad range of security devices across a community of more than 18,000 OSSIM and AlienVault deployments. OTX is a centralized system for collecting threat intelligence and interoperates with AlienVault’s Open Source Security Information and Event Management (OSSIM) system, which is free to use. OSSIM users can configure their system to upload their threat data to OTX.

MILE

The Managed Incident Lightweight Exchange (MILE) Working Group is working on standards for exchanging incident data. The group works on the data format used to define indicators and incidents, as well as on standards for exchanging that data. This group has defined a package of standards for threat intelligence which includes the Incident Object Description and Exchange Format (IODEF), IODEF for Structured Cyber Security Information (IODEF-SCI), and Real-time Inter-network Defense (RID).

Cyber Security Standards

Structured Cyber Security Standards

TLP

The Traffic Light Protocol (TLP) is a very straightforward protocol used to control what can be done with shared information. Shared information is tagged with one of four colors: white, green, amber, or red. The color designates what can be done with the shared information. Information tagged white can be distributed without restriction. Information tagged green can be shared within the sector or community, but not publicly. Information tagged amber may only be shared within the recipient’s own organization. Information tagged red may not be shared beyond those to whom it was originally disclosed. Given its simplicity, TLP can be used verbally, via email, or incorporated into an overall system.
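Because the rules are so simple, TLP handling can be encoded directly. The audience labels below are our own simplification of the four sharing levels, not part of the protocol itself.

```python
def may_share(color, audience):
    """Return True if information tagged `color` may be shared with `audience`.
    Audiences: "public", "community" (sector/community), "organization"."""
    allowed = {
        "white": {"public", "community", "organization"},  # unrestricted
        "green": {"community", "organization"},            # not public
        "amber": {"organization"},                         # own org only
        "red":   set(),                                    # no further sharing
    }
    return audience in allowed[color]

print(may_share("green", "public"))        # False
print(may_share("amber", "organization"))  # True
```

A check like this is a natural guard to place in front of any automated sharing pipeline, so a red-tagged report can never be forwarded by mistake.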

Conclusion – KISS

The simplest way to mine threat intelligence is to leverage the information already on your systems and networks. Many organizations don’t fully mine logs from their perimeter devices and public-facing web servers for threat intelligence. For instance, organizations could review access logs from their web servers and look for connections coming from particular countries or IP addresses that could indicate reconnaissance activity. Or they could set up alerts when employees with privileged access to high-value systems attract unusual amounts of traffic, which could then be correlated with other indicators of threat activity to uncover signs of impending spear-phishing attacks.
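A minimal sketch of this kind of log mining, over invented Apache-style access log lines: count requests per source IP and flag sources above a reconnaissance-like threshold. A real implementation would also join the source IPs against a geolocation or reputation feed, which is omitted here.

```python
import re
from collections import Counter

# Hypothetical, simplified web server access log lines.
LOG_LINES = [
    '203.0.113.9 - - [01/May/2024:10:00:01] "GET /admin HTTP/1.1" 404 162',
    '203.0.113.9 - - [01/May/2024:10:00:02] "GET /wp-login.php HTTP/1.1" 404 162',
    '203.0.113.9 - - [01/May/2024:10:00:03] "GET /.env HTTP/1.1" 404 162',
    '198.51.100.4 - - [01/May/2024:10:01:00] "GET /index.html HTTP/1.1" 200 513',
]

IP_RE = re.compile(r"^(\d+\.\d+\.\d+\.\d+)\s")

def suspicious_sources(lines, threshold=3):
    """Return source IPs whose request count meets the threshold."""
    hits = Counter(m.group(1) for line in lines if (m := IP_RE.match(line)))
    return sorted(ip for ip, n in hits.items() if n >= threshold)

print(suspicious_sources(LOG_LINES))
```

Even this crude count surfaces the probing host hammering admin paths, while the single legitimate request stays below the threshold; correlating the flagged IPs with other indicators is the next refinement.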

Many of the standards are a good fit for organizations with specific needs. If an organization wants to share incident data and be part of the analysis of a broad data set, then VERIS would be the best choice. If an organization wants to share indicator details in a completely public system, then OTX would be a reasonable choice. If an organization is using tools that support OpenIOC, then of course OpenIOC would be the best choice. If an organization is looking for a package of industry standards then the MILE package (IODEF, IODEF-SCI, RID) or the Mitre package (CybOX, STIX, TAXII) would be suitable. Both have the capability to represent a broad array of data and support sharing of that data.

As a recipe for success, it is important to start small, start simple, prototype, and evolve as the organization gains confidence and familiarity, growing the threat sources, communication, and collaboration so that there are valid data, sound analysis, and actionable results. Focus on and validate the process.

Next Steps

In future parts we will delve further into some practical use cases with examples and implementations, review external feed sources (pros and cons), discuss triage, building context, validation of data, and performing analysis, and discuss the development of organization threat profiles based on a risk management process.

In addition, we will be publishing the APT Anomaly Detection blog that complements this work, using network flows to baseline a network and detect anomalous activity, so that there isn’t reliance on just one methodology. We will also kick off the Next Generation Security Operations series that seeks to tie all of the APT Series threads into a holistic architecture and defensive strategy (SoC V2).

Thanks for your interest!

Nige the Security Guy.


APT Threat Analytics – Part 1

How can you predict emerging threats? Threat intelligence and analytics continues to dominate the headlines and attention of organizations seeking viable options in their escalating battle against advanced threat actors.


Threat intelligence is evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging threat to assets. It is used to inform decisions regarding response in terms of prevention enhancements with mitigating controls or detection adjustments (rules or triggers).

This APT Threat Analytics – Part 1 blog discusses best practices to identify internal and external data sources as well as help overcome many operationalization challenges to take an emerging program to the next level as a strategic component.

Value Proposition

The reality of continuously evolving, persistent, sophisticated and targeted attacks is wreaking havoc. Many enterprises only have fragmented or partial security monitoring in place, lacking complete visibility to what is happening in their environments. Detecting this stealthy activity often aimed at specific targets requires the ability to monitor and compare anomalous behavior over time, connecting the dots among multiple events.

The effective use and sharing of actionable threat intelligence has the potential to fundamentally change the way organizations prepare for and respond to attacks. Gathering threat intelligence from various sources and performing risk-based analytics enables mitigating controls to be proactively identified, applied to the APT Detection Framework to assess coverage and gaps, and put into place to provide better protection against emerging threats and vulnerabilities.

Terminology Definition

Risk management processes and frameworks are an important component in threat analytics, needed to make sense of the feeds and gain actionable, tangible benefit. The importance of these fundamentals will become clearer as we progress through this blog.

Risk is defined as the likelihood that a particular threat using a specific attack will exploit a particular vulnerability of a system that results in undesirable consequences.

APT Risk Framework

Risk Framework

Risk Management – The process concerned with the identification, measurement, control and minimization of security risks in information systems to a level commensurate with the value of the assets protected.

Threats – Threats exist simply as the potential for an event to occur. Attacks occurring to others, but not to us, are threats, while threats that have been realized as attacks on us have moved out of the threat phase and into the attack phase.

Catalyst – A catalyst serves as the trigger that causes a threat to be realized as an attack.  Business conditions, financial gain and political motivation may all serve as these triggers. A catalyst moves the flow of risk from existing as threat to occurring as an attack.

Attack – Attacks are threats that have become actualized events that may have an adverse impact. Attacks may be intentional or non-intentional and may occur as directed or non-directed.

Vulnerability – Vulnerabilities can allow an attack to: pose as another user or entity, execute commands or access data that is contrary to the specified access restrictions, reduce availability of an asset, or to hide activities.  Vulnerabilities may also be considered as a point of entry that an attack may attempt to use to gain access to the system or data.

APT Risk Framework 2

Back to Basics: Placing Threats in Context

Anything that may increase risk according to security policy, standards or guidelines may also be considered a vulnerability.  While vulnerabilities may have many causes, some of the most critical are apathy and ignorance. Lack of awareness of security issues is a common cause of vulnerabilities.

Impact – The final phase of risk is actual impact. Loss of integrity, compromise of confidentiality, or reduction of availability of any asset has an adverse effect on the business. The implications of a threat becoming an attack that succeeds in causing disclosure, destruction, or alteration of business assets range from financial loss to reputation damage.

The role of security solutions with respect to risk, once it causes impact, is to reduce the impact, eliminate additional impact, and/or institute recovery processes.

Threat Profile

A problem that many companies face is that they don’t fully understand the threats against their organizations. Creating a threat profile is a key step in understanding what threats a company faces and the potential impact if an attack were to be successful. A threat profile can also help companies prioritize resources in order to successfully defend sensitive data.

There are different ways that organizations can create threat profiles; we will cover some examples of applicable risk management frameworks and threat profiles in future blogs. The best option is to use or evolve established risk management processes or frameworks.

Risk Management Example

Risk Management Methodology Example

Threat profiles based on typical risk management methodology include scenarios about known sources of threats and typical outcomes. This valuable information enables organizations to move from being vulnerability-centric, with no real prioritization or protection of assets, to a more threat-centric posture, including strategies for defense and incident response.

Threat Intelligence Sources

The goal of threat intelligence is to reduce the danger of zero-day malware and other unknown attacks by providing early detection and near-real-time alerts.

There is certainly no shortage of threat intelligence sources and the staggering volume of data can be paralyzing. When it comes to intelligence, less is more. It is much better to have a smaller number of high-value sources than to be inundated with low-value ones.

In APT Threat Analytics – Part 2 we will discuss both the commercial and open source projects. Vendors compete head-to-head with each other on product features, functionality, and usability. The key point is that they should also compete on the overall accuracy, timeliness, and scope of their security intelligence, as well as on standard APIs for open security intelligence integration with standardized data formats.

By their very nature, open source solutions are extensible and offer the ability to act on intelligence. Ideally CISOs should become active lobbyists for an open security intelligence market. This could help avoid the coming deluge of security intelligence as the right feeds are selected that align with organizations and to use this intelligence for its intended purposes – to lower risk and accelerate incident detection and response.

Threat Data Sources – Internal

For many organizations that consume threat intelligence, a key challenge is how to consume and act on that intelligence. Organizations should start small with just a few threat sources: begin with a small internal prototype to deal with variances in data quality and relevancy, work out the process and the kinks to derive useful, actionable results, and then expand externally in iterative steps.

Once an analyst has gone through the intelligence cycle and has a qualified threat indicator, they must be able to apply it to the organization’s security controls.

Threat Data Sources - Internal

Threat Data Sources – Global

Security teams need even greater coverage of malicious activity from multiple sources to have a better understanding of what’s going on globally—as opposed to just the network under their control. They are turning to external systems that provide worldwide data correlation and analysis.

Most enterprises don’t have the staff or resources to do their own external threat intelligence gathering, so it makes sense to subscribe to a service that provides prepackaged threat intelligence data. This can be used in conjunction with managed security devices or fed into in-house-based sensors to better understand developing threats.

Threat Data Sources - Global

For those organizations that leverage APT Red Teams, the Red Team becomes another invaluable threat source and a key validation of the organization’s current security posture, enabling continuous improvement and optimization from counter-intuitive sources and helping mitigate advanced threats.

The Importance of Analytics

To make threat intelligence usable within your organization, you must rein in all the data and make it scalable. Analysts need a platform that enables the prudent integration of both internal and external intelligence.

Intelligence isn’t all about sources and indicators … it is about analysis

Threat indicators typically include data items like IP addresses, fully qualified domain names, MD5/SHA-1 hashes, SSL certificates, filenames, mutexes, and registry keys. Much of today’s threat intelligence comes in the form of simple indicators. Unfortunately, attacks aren’t simple, and lists of indicators alone don’t always offer value to the organization. Please see APT Detection Indicators for more information on developing and using Indicators of Compromise (IoCs).
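Applying such simple indicators to event records can be sketched as a lookup; all indicator values and field names below are invented for illustration, and the point of the sketch is exactly the text’s caveat: a match is a data point, not an attack narrative on its own.

```python
# Toy indicator sets keyed by type (values invented).
INDICATORS = {
    "ipv4": {"203.0.113.9"},
    "domain": {"evil.example.net"},
    "md5": {"9e107d9d372bb6826bd81d3542a419d6"},
}

# Which event fields map to which indicator type.
FIELD_TYPES = [("src_ip", "ipv4"), ("dns_query", "domain"), ("file_md5", "md5")]

def match_event(event, indicators):
    """Return the (field, value) pairs in `event` that hit an indicator."""
    hits = []
    for field, itype in FIELD_TYPES:
        value = event.get(field)
        if value and value in indicators[itype]:
            hits.append((field, value))
    return hits

event = {"src_ip": "203.0.113.9", "dns_query": "good.example.org"}
print(match_event(event, INDICATORS))
```

The matching itself is trivial; the analytical value comes from what is done with the hits downstream, which is where correlation and context come in.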

Threat Analytics Process

From Feeds through Analysis to Governance Actions

Correlation is key to turning all this data into intelligence so that defenses can be updated, closing the gaps and the window of exposure more quickly. Chaining together multiple events and alerts makes it possible to identify sequences of activity that equate to a far more severe incident. Correlation also helps validate the severity of threats by lowering the rate of false positives and noise.
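The chaining idea can be sketched as follows: individually low-value alerts from the same host within a time window are escalated into one incident. The event shapes, times, and thresholds are illustrative only.

```python
from collections import defaultdict

# Hypothetical low-severity alerts from two workstations.
alerts = [
    {"host": "ws-17", "time": 100, "type": "phish_click"},
    {"host": "ws-17", "time": 160, "type": "new_service_install"},
    {"host": "ws-17", "time": 200, "type": "outbound_beacon"},
    {"host": "ws-42", "time": 150, "type": "phish_click"},
]

def correlate(alerts, window=300, min_chain=3):
    """Group alerts per host; a chain of >= min_chain alerts inside the
    time window is escalated to a single higher-severity incident."""
    by_host = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        by_host[a["host"]].append(a)
    incidents = []
    for host, chain in by_host.items():
        if len(chain) >= min_chain and chain[-1]["time"] - chain[0]["time"] <= window:
            incidents.append({"host": host, "chain": [a["type"] for a in chain]})
    return incidents

print(correlate(alerts))
```

No single alert above would justify a response, but the phish-to-install-to-beacon sequence on one host in a short window is exactly the kind of chained pattern worth escalating; the lone phish click on the other host correctly stays below the bar.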

Analysts need robust capabilities to query and interact with raw intelligence to determine what the implications are for their organization. The ability to pivot and uncover relationships that would otherwise be obscure is key. Visualization should also be a core capability, making analysts’ jobs easier.

Feeds and Feedback

Once the organization’s threat profile has been created (from above) and the consumers of the threat feeds have been identified, the data has to get to the right places so it can be acted on. It is easiest to get this information to the security operations group, because the majority of threat intelligence data will go directly into the SIEM.

Security Model Framework Interaction

Information Security Management Ecosystem

An important, and sometimes overlooked, step in the threat intelligence consumption process is providing feedback on the relevance of the data received and the effectiveness of action taken.

Conclusion


The rise of threat intelligence services is helping enterprises gain more insight and visibility into global and industry-specific threats. But organizations have to figure out how to take that information, make it actionable and put it in the right hands so it gets implemented in a timely manner.

The key feature of leveraging threat intelligence with analytics is that it produces predictive threat warnings and mitigation advice by monitoring security events from a wide and diverse variety of sources. Using heuristics and correlation techniques to analyze millions of global events, organizations look to uncover malicious activities.


Instead of using traditional signature-based analysis at the network perimeter, threat management (or SIEM) platforms use IP, URL, and file reputation services; contextual analysis; and behavioral rule sets to uncover and block access to malicious content, with some even adjusting or changing their security strategies in real time.

The big advantage is that, done right, they consolidate threat, vulnerability, risk, fraud, spam, phishing, attacker, and network intelligence information, overcoming the problem of fragmented information stored in disparate silos.

Coming Soon

In APT Threat Analytics – Part 2 we touch upon commercial, government and open source threat intelligence sources. We will focus on interesting emerging standards, such as Structured Threat Information Expression (STIX) as well as Trusted Automated Exchange of Indicator Information (TAXII) which are beginning to be embraced by startups such as Confer.

For open source, Bro Network Security Monitor, OSSIM, Snort, and YARA are just a few examples of projects that enable you to share threat intelligence across your ecosystem. The Security Onion packages many open source projects into a Linux distribution (for more information see APT Detection Indicators).

In future parts we will also delve further into the process of reviewing external feed sources (pros and cons), triage, building context, validation of data, and performing analysis as well as discuss the development of organization threat profiles based on risk management process.

References

This APT Threat Analytics – Part 1 blog is also a part of the APT Strategy Series and Security Architecture Series. For a complete listing of all NigeSecurityGuy blogs see the Security Series Master Index.

Thanks for your interest!

Nige the Security Guy.