
Six Steps for Managing Cyber Breaches

Ale Espinosa

You’ve been breached. Now what?

Being quick to respond to a security breach is critical in minimizing the impact that malware could have on your network, as well as limiting an intruder’s access to your data. Having helped numerous clients with their cybersecurity needs, we have identified how to better prepare for and respond to cyber-attacks, which we included in our recently published white paper Incident Response: Six Steps for Managing Cyber Breaches.

With 70% of cyber-attack victims being notified of their security breaches by third parties (which you can read more about in my recent blog post Hello? You’ve Been Breached.), many security professionals at even the largest organizations and agencies in the world have been surprised to find that their enterprise was center stage in a cyber-attack, sometimes for several months, all without their knowing. That is why it is extremely important to be proactive about implementing security best practices and an incident response plan, as well as having in place tools for the detection, analysis, and remediation of cyber-attacks, such as EnCase Analytics and EnCase Cybersecurity.

...Or you could fix the software.

Josh Beckett

One of the fundamental realities of security is dealing with vulnerabilities. In the industry, we have become so jaded by the fact that software makers simply don’t want to go to the trouble and expense of churning out secure code that we have just learned to ‘abide.’ Consequently, we come up with elaborate ways to measure vulnerabilities and concoct Wile E. Coyote-style mitigation plans to bring the risk down to an acceptable level.

Occasionally, my permanently security-tainted skepticism needs a challenge to my comfortable position that there is no real security, only incident response. We continue to fight a losing war and resign ourselves to trying harder tomorrow. With nation-states throwing their hats and ample wallets into the ring, anonymously buying bugs and exploits and expecting them not to be reported to the software vendor or the public, it seems all is lost.

Chinese government behind Chinese hack-a-thon...really?

Josh Beckett

The Pentagon has come out and stated the obvious. When listening to this story this morning on NPR, the immediate thought that came to me was, "Yeah, well, what are you going to do now?"  Of course, the interviewer asked that very question and the interviewee burbled and hemmed and hawed.  No real answer.  What can you do in a war that is not fought on a physical battlefield with physical weapons, but inside of computers?

When it comes to personal information, be your own best custodian

Josh Beckett

When it comes to maintaining the security of our information, we expect that others will do a good job protecting it. However, when we ourselves don't really care and fling our info all over the internet, why would anyone act surprised when others fail to do any better? Several days ago I read Bruce Schneier's blog, where he characterized society today as a surveillance state. I think that is a bit generous and would argue that the label of police state would be slightly more appropriate.

The Road to CEIC 2013: Cybersecurity 101

Jessica Bair

The “Road to CEIC 2013” is a series of blog posts on all things CEIC, before, during, and after, from an insider’s point of view. 

Are you an EnCase® Enterprise user who'd like to learn how to automate your network-enabled incident response? Or perhaps an experienced EnCase® examiner looking for a career change or enhancement? If a more complete approach to incident response is on your task list, you should attend Cybersecurity 101 with Josh Beckett, product manager for EnCase® Cybersecurity, at the CEIC 2013 Cybersecurity and Compliance Lab. This hands-on lab will demonstrate the basics of using EnCase Cybersecurity as Josh walks through the major use cases, showing how the software will assist you in both incident response and compliance management roles and how to implement it into your organization’s processes.

RSA Conference: Actionable Intelligence is the Missing Link in Incident Response

Anthony Di Bello

Yesterday at Moscone Center I walked by the former Gartner security analyst who famously pronounced nearly 10 years ago that “IDS is dead.”

So it was fitting to attend the keynote by RSA Chairman Art Coviello and hear him say, “It’s past time for us to disenthrall ourselves from the reactive and perimeter-based security dogmas of the past and speed adoption of intelligence-driven security.” He described a fact that’s inescapable to all security professionals now, which is that alerting systems and point solutions for threat response aren’t sufficient to respond to modern threats. The time has come to change the way we perform incident response by using rapidly accessible, actionable intelligence to make the stakes higher for hackers, crackers, and thieves.

Guidance Software Customer Wins a Government Security News Homeland Security Award

Anthony Di Bello

We’re pleased to share that our customer, the U.S. Department of Energy (DOE), was selected as “Most Notable Cyber Security Program, Project or Initiative” in the 4th Annual Government Security News Homeland Security Awards competition.

The DOE won in one of the 42 categories. All of the winners were announced at a gala dinner in late 2012 that drew hundreds of government officials and industry executives to the Washington Convention Center.

EnCase software users have access to an all-in-one architecture for compliance, incident response, investigations and e-discovery. The software allows users to identify malware exploits and rapidly sweep all nodes on all networks to confirm the existence of that malware and then choose to remediate from a central console, saving hours of time on incident response and ensuring data integrity.

The winners were selected by a panel of objective judges, according to Jacob Goodwin, Editor-in-Chief of Government Security News. "We received an outstanding group of entries and have handed winners’ plaques to an exceptional group of companies and government agencies," he said.

A complete list of winners can be found at the GSN Magazine website. Congratulations to our associates at the DOE!

A Trio of 2013 Security Predictions

Anthony Di Bello

It’s that time of year again: the time when everyone predicts what they think will happen in IT security in the year ahead. Most are predicting the obvious: attacks will increase in both number and complexity, there will be more hacktivist-type attacks, and another major headline-making breach or two will occur.

No doubt most of these things will happen. But many of these predictions overlook some of the crucial technological changes underway when it comes to protecting corporate data.

Here are the three big trends we think will take hold this year.

1. Host and network security technologies will begin to converge. Signature-based malware defenses, whether running on the network or on the host, can no longer be counted on to identify, let alone block, today’s sophisticated attacks. More enterprises are now realizing that they need rapid insight into what is happening both on the network and on the host. Just looking at one or the other doesn’t provide a complete picture of the nature of attacks.

In the coming year, organizations will come to realize that they need to thoroughly understand the state of the endpoint and network at the time of attack. They’ll want to know who was authenticated to the system at the time of the breach, what services and applications were running, what data may have been accessible, and what networks and network segments the system was actively connected to, among many other potential variables.
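As a rough illustration of joining host-side state with a network-side alert into a single incident record, here is a minimal sketch; the field names, the `EndpointSnapshot` structure, and the alert shape are all hypothetical stand-ins for whatever your tooling actually provides:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EndpointSnapshot:
    """State of a host captured the moment an alert fires (hypothetical schema)."""
    host: str
    logged_in_users: list
    running_services: list
    active_connections: list  # (local, remote) address pairs
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def correlate(network_alert: dict, snapshot: EndpointSnapshot) -> dict:
    """Join a network-side alert with host-side state into one incident record."""
    return {
        "alert_id": network_alert["id"],
        "source": network_alert["src_ip"],
        "host": snapshot.host,
        "users_at_time": snapshot.logged_in_users,
        "services_at_time": snapshot.running_services,
        "connections_at_time": snapshot.active_connections,
        "captured_at": snapshot.captured_at.isoformat(),
    }
```

The design point is simply that neither source alone answers the questions above; a record keyed to both the alert and the endpoint state at capture time does.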

The rationale here is simple. As threats become more advanced, relying on data from single points on the infrastructure isn’t sufficient. That’s not good enough for detecting threats, and certainly not good enough for responding to successful attacks or understanding the extent of the risk they present. Organizations are also learning that incident detection and response should be more closely integrated.

Security information and event management (SIEM) and incident response software vendors are aware of these trends, too. They’ll continue to integrate their solutions to make it possible to near-instantaneously grab state data from an endpoint while sharing alert data with the SIEM. It’s also a trend we’ll be keeping a careful eye on here at Threat Response.

2. Organizations will increasingly focus on their data. This is a welcome trend. Organizations will finally begin implementing processes and technology to maintain a “data map” that details where all of their valuable unstructured data resides.

And just as organizations now assess their systems for vulnerabilities that must be remedied, they’ll also continuously audit for sensitive data, and look for ways to enforce their data policy - such as where sensitive data can be accessed and stored.

For years now, whenever I speak in front of groups and ask attendees whether their organizations have data retention policies, all hands go up. When I follow up by asking who can enforce any of those policies, no hands go up. In the next year, we will see more folks focus on technologies that help them understand where their valuable data actually lives.

3. Thin client, mobile virtualization and data centralization initiatives will be embraced to secure mobile devices. More and more corporate data are being accessed on mobile devices as more enterprise applications are run on iOS and Android tablets and smartphones. Part of the challenge is that employees are increasingly choosing the devices and services they want to use to get their jobs done. No one wants to be forced to work on old, dull, corporate-issued notebooks or mobile devices. They want to use the same phones and tablets at work as they do at home.

The risk here is high. It means regulated and protected information is much more likely to end up on devices that organizations don’t fully control.

So what’s likely to be the solution? I think we will increasingly see enterprises give up entirely on trying to control the BYOD trend and instead choose to work with it. The technology they choose will be a mix of mobile thin clients and mobile virtualization, along with initiatives to centralize business data and push users to central repositories to work with that data: approaches designed to segregate or centralize critical business data in such a way that securing it becomes a more reasonable, scalable task.

In the year ahead, while many will focus heavily on advances on the threat and attack side of IT security, it’s important not to forget the advances on the defense side of the ledger. You don’t have control over the actions of criminals and malicious actors, but you certainly do have control over how you manage and secure your data and the level of security insight you bring into your organization.

Recent breaches show traditional security defenses fail to deliver

Anthony Di Bello

We’ve highlighted in numerous posts that studies of security incidents and publicly disclosed breaches reveal that it’s all too common for attacks to go unnoticed for days, weeks, months, and even years. And, nearly as troubling, it’s rarely the breached organization that discovers that it’s been compromised – rather it’s usually a customer, partner, supplier, or even law enforcement that eventually notices something is awry and brings it to victims’ attention.

All of that was certainly true with the South Carolina Department of Revenue attack that we covered here. In this incident, the post-breach investigation found that the compromise occurred in mid-September and wasn't detected until mid-October. And when it was detected, it was done so by the United States Secret Service, which happened to be conducting a sting against the group that was responsible for the attack.

So what happened regarding this breach? As we learn more, it’s clear that time was working against the South Carolina Department of Revenue. To be fair, this is true for all targeted attacks. Take a look at the illustration below, from the 2012 Verizon data breach investigation report, which accurately demonstrates the scope of this challenge. The data in the figure below are the result of thousands of investigations that were conducted last year both by Verizon and a number of government agencies from multiple countries, including the United States Secret Service.

When looking at the various time spans between attack and response in all of those incident investigations, disturbing patterns emerge. Specifically, patterns appear when attack life cycles are segmented into four stages: the time between initial attack and compromise; the time between the initial compromise and data being stolen from the target; the time between that compromise and the point at which it was discovered; and finally the time between the discovery of that compromise and remediation.
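The four gaps described above can be computed mechanically once the milestone dates for an incident are known. A minimal sketch follows; the dates are hypothetical, purely for illustration, and are not taken from the Verizon data:

```python
from datetime import date

# Hypothetical milestones for a single incident, in lifecycle order.
stages = {
    "initial_attack": date(2012, 8, 27),
    "compromise": date(2012, 9, 13),
    "exfiltration": date(2012, 9, 14),
    "discovery": date(2012, 10, 10),
    "remediation": date(2012, 10, 20),
}

def stage_gaps(stages: dict) -> dict:
    """Days elapsed between each consecutive pair of lifecycle milestones."""
    names = list(stages)
    return {f"{a} -> {b}": (stages[b] - stages[a]).days
            for a, b in zip(names, names[1:])}
```

Run over many incidents, exactly this kind of per-stage accounting is what makes the disturbing pattern visible: the compromise-to-exfiltration gap is tiny, while the compromise-to-discovery gap dwarfs everything else.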

The data show that attackers can exfiltrate data, at best, in a matter of hours or days, and at worst in a span of only minutes. Once in, attackers have shown again and again that they can begin exfiltrating data as soon as they’ve compromised a system.

And this isn’t just a handful of organizations; it is thousands. This proves that the status quo provided by traditional security software simply isn’t good enough. And the reality is that after attackers have had weeks or months to rummage through a network, simply wiping servers and endpoints isn’t going to remove the infection. The attacker has had too much time to plant backdoors and create ways to burrow back in.

Identify unknown, suspicious behaviors
What’s needed are ways to identify unknown, suspicious behaviors on endpoints. This is best achieved by performing periodic assessments designed to expose unknown applications running in temporary memory and instances of known threats that morph (such as the Zeus banking Trojan), and by conducting ongoing scans for variants of such threats in order to fully understand and address the scope of a successful attack against your infrastructure.

Additionally, in order to reduce your attack surface, you also need to be able to audit endpoints for sensitive data, which, in all likelihood, are the target of the attackers’ activity. By limiting pools of sensitive and confidential data, you can significantly reduce risk.

EnCase Cybersecurity helps in many of these efforts. First, EnCase Cybersecurity conducts network-wide system integrity assessments against a known good baseline that has been established. Essentially, what you are doing is performing regularly scheduled audits for anomalies across the range of endpoints. And it works because, while you don’t know what the unknown looks like, you do know what the baseline looks like. This allows you to look at everything that doesn’t match that baseline, so you then can decide whether it's something that's good (and should be added to a trusted profile), or if you've been exposed to a malicious attack that needs to be remedied and added to known bad profiles for future integrity audit scans.
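The baseline-comparison idea can be sketched in a few lines. This is a simplified illustration, not EnCase’s implementation; the artifact names and contents are hypothetical, and real sweeps would collect digests from live endpoints:

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Digest an artifact's contents for comparison against the baseline."""
    return hashlib.sha256(data).hexdigest()

def classify_against_baseline(observed: dict, baseline: set, known_bad: set) -> dict:
    """Sort observed artifacts (name -> digest) into trusted, unknown, malicious.

    Anything absent from the known-good baseline is an anomaly worth triage;
    a hit on the known-bad set is flagged for remediation.
    """
    report = {"trusted": [], "unknown": [], "malicious": []}
    for name, digest in observed.items():
        if digest in known_bad:
            report["malicious"].append(name)
        elif digest in baseline:
            report["trusted"].append(name)
        else:
            report["unknown"].append(name)
    return report
```

The "unknown" bucket is the interesting one: after review, its entries migrate either into the trusted baseline or into the known-bad set used by future integrity scans, exactly the feedback loop described above.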

How does EnCase Cybersecurity achieve this? It does so by leveraging the concept of entropy for similar-file scans. Consider it a very fuzzy signature: the system assesses similarity rather than an exact match. It doesn’t matter what kind of files are being evaluated; EnCase Cybersecurity will expose the files and processes used by advanced attacks that are easily missed by traditional security technologies, such as intrusion detection systems and anti-malware software.
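To make the entropy idea concrete, here is a toy sketch of Shannon entropy over file bytes and a naive "fuzzy" comparison. This is only a simplified illustration of the general concept; the actual similar-file scanning in EnCase is more sophisticated than a single scalar comparison, and the tolerance value here is an arbitrary assumption:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 0 for highly repetitive data, near 8 for
    random-looking data such as packed or encrypted payloads."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_similar(a: bytes, b: bytes, tolerance: float = 0.25) -> bool:
    """Fuzzy match: treat two blobs as similar when their entropy is close."""
    return abs(shannon_entropy(a) - shannon_entropy(b)) <= tolerance
```

High entropy alone is a useful tell: packed or encrypted malware variants change their exact bytes (defeating hash signatures) while their statistical profile stays in the same neighborhood.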

We’ve recently completed a webinar on this topic, Hunt or be Hunted: Exposing Undetected Threats with EnCase Cybersecurity, that provides much more detail about how EnCase Cybersecurity helps to defend against advanced, clandestine attacks. I invite you to watch, and learn how your organization can proactively ferret out any possible breaches before it’s too late and attackers have had time to entrench themselves into your infrastructure.

# # #

South Carolina Department of Revenue Timeline Tells Common Tale

Anthony Di Bello

If any lesson is to be learned from the recent South Carolina data breach in which 387,000 credit and debit cards and 3.6 million Social Security numbers were stolen, it is that automated incident response is crucial.

Nineteen days after the South Carolina Division of Information Technology informed the state’s Department of Revenue that it had been hacked, a timeline of events has emerged that exemplifies the need for organizations to have proactive and reactive incident response capabilities in place. As the analysis within the Verizon Data Breach Investigation Report (DBIR), shows, it’s common for defenders to be so far behind the attackers that the damage is done before anyone knows what has happened. This South Carolina breach is no different.

The Verizon DBIR also reveals that 92% of data breaches are brought to the target organization’s notice via third-party sources, not by their own perimeter detection technologies—and once again, this South Carolina Department of Revenue breach is no different. The U.S. Secret Service informed the Department of Revenue of the breach almost a month after the data had been stolen.

According to what we now know publicly, on August 27, there was an attempted probe of the SC Department of Revenue systems. Another set of probes hit on September 2. Then, around mid-September, the breach occurred, and Social Security numbers, credit and debit cards were accessed. It wasn’t until early October that the Secret Service informed the Department of Revenue of a potential cyber attack. Then, on October 20, the vulnerabilities that made the attacks possible were patched. Finally, six days later, on October 26, the public was notified.

Thus the timeline looked like this, with a large gap between the breach and detection:

According to that timeline of known events, attackers were active on the target network three different times before any data were extracted. During that time, it’s a safe assumption that attackers were mapping out sensitive data locations and looking for vulnerabilities that would allow them to exfiltrate data without being noticed.

If public information is correct, it is likely that the initial probes included installation of a command and control beacon to ensure access to systems for continued reconnaissance. From there, it is very likely that there was ongoing covert channel communication and disk/memory artifacts that could have been detected before the attack was ultimately successful.

The central takeaway here is that something must be done to close the gap between when a breach occurs and when it is identified. We covered how to do that by having both advanced threat detection and incident response technologies in place, as discussed in our webinar 1-2 Punch Against Advanced Threats.

Not identifying breaches underway is a huge opportunity lost, because if the proper detection and response capabilities are in place, it’s possible to stop many attacks as they are in progress. For instance, it is very likely that technology like FireEye could have detected the illicit outbound communication, while EnCase Cybersecurity could have validated the hosts responsible for that communication as well as exposed additional artifacts with which to triage the scope of the attack underway.

At this point, FireEye would cut the outbound communication and EnCase Cybersecurity would kill the process and files that were responsible for that communication, and a scope/impact assessment investigation would commence—all before any data were stolen.

The resulting timeline would look like the graphic below:

While there’s no perfect defense that will stop all attacks, it’s pretty clear from the South Carolina breach, coupled with the data from Verizon DBIR, that with swifter, automated incident response, many more attacks could be stopped before any data are stolen.

The High Costs of Manual Incident Response

Anthony Di Bello

It’s widely accepted and understood in most circles – but especially in IT – that when something can be effectively automated, it should be. In fact, automation is one of the best ways to increase efficiency.

It’s widely understood, that is, except when it comes to incident response. For whatever reason, incident response at most organizations remains largely a set of manual processes. Hard drives are combed manually for incident data. So are servers. Oftentimes, as a result, evidence in volatile memory is lost. And, in manual ad hoc efforts, confusion often reigns as to who should respond and how. In large organizations, where there are complex lines over who owns what assets and processes, such decisions are anything but easy. Determining who should respond to each incident as it is already underway wastes valuable time. We would never take this approach in the physical world — imagine a bank with no armed guards, no security cameras and no clearly defined emergency processes in place.

Additionally, organizations that rely heavily on manual processes would have to dispatch an expert to the location where the affected systems reside just to be able to perform their analysis and prepare for the eventual response.

There are clear costs associated with all of those aspects of manual response, but there are hidden costs, too. With automated response, you can immediately validate the attack, prioritize high-risk assets, reduce the number of infected or breached systems, and contain an ongoing attack more quickly. This alone can be the difference between confidential data and intellectual property being stolen or systems being disrupted, and an incident that is stopped in time. It also can be the difference between regulated data being disclosed, triggering a mandatory breach notification, and an incident that goes no further than a single end user’s endpoint being infiltrated.

There are many ways automation can help turn this around and put time on your side. For instance, with an automated incident response capability, such as that provided by EnCase Cybersecurity, it’s possible to integrate existing security information and alerting systems to ensure response occurs as the alert is generated. This capability dramatically reduces mean time to response and provides the right individuals the time-sensitive information they need to accurately assess the source and scope of the problem, as well as the risk it presents to your data.
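The alert-to-response wiring might be sketched like this; the alert fields are hypothetical, and the responder callables stand in for real collection logic (process snapshots, memory capture, and so on):

```python
from datetime import datetime, timezone

def on_alert(alert: dict, responders: dict) -> dict:
    """Kick off response the moment an alert arrives from the SIEM.

    `responders` maps alert types to callables that capture time-sensitive
    endpoint state; unrecognized types fall through to a default collector.
    """
    action = responders.get(alert["type"], responders["default"])
    evidence = action(alert["host"])
    return {
        "alert": alert,
        "responded_at": datetime.now(timezone.utc).isoformat(),
        "evidence": evidence,
    }
```

The point of the sketch is the timing: because collection is triggered by the alert itself rather than by a human picking up a ticket, volatile state is captured before it evaporates.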

This automation also includes the ability to take snapshots of all affected endpoints and servers, so that immediate analysis of the exact state of the machine at the time of the incident can be performed. This is a great way to identify what is actually going on in the system, such as uncovering unknown or hidden processes, running dynamic link libraries, and other stealth activities. As threats grow increasingly clandestine, this speed is all the more important.

There’s also a facet of our technology that’s often not considered part of response, but actually is: endpoint data discovery. With it, you can understand where sensitive data exists across your enterprise, remove it from errant locations, and ensure data policies are being followed to reduce the risk of data exfiltration. By integrating that capability with detection systems, you can quickly understand the risk a threat presents to any potentially affected machine based on its sensitive-data profile, and prioritize other response activities accordingly.

Finally, to ensure that the process is efficient, it’s crucial to have solid workflow processes in place. This makes it much more straightforward to quickly assign incidents to the right analysts, or teams, as well as track investigations from open to close.

It’s impossible to detail the exact monetary return of effective, automated incident response, but it’s certain that such automation will save you significantly. It will reduce your exposure and the manpower required to respond, speed identification and remediation of a breach, and very likely limit breach impact, especially when a breach is caught early. With a malicious breach costing more than $200 per record, and breached records running into the tens, if not hundreds, of thousands per incident – not to mention regulatory sanctions, fines, and potential lawsuits – anything reasonable that can be done to mitigate the impact of the inevitable breach should be done.

Incident Response for the Masses

Anthony Di Bello

Being able to leverage the powerful capabilities of forensic incident response software no longer requires significant, specialized training for the security analyst.

When an attack strikes, or a suspected breach is underway, time is everything. Unfortunately, alerts sent from intrusion detection systems, security information and event managers (SIEMs), data leak prevention tools, and others aren’t always the most accurate. Yet every time-consuming false alert and lost moment is costly to the effectiveness of the IT security program.

The trouble is that, historically, initial forensic investigations have required detailed training. And that expertise isn’t always available on short notice – if you even have those skills on staff at all.

Helping to automate incident response, without the need for extensive forensic training, is one of the strongest points of EnCase Cybersecurity. When you first suspect, or know, an attack is underway, the first thing that needs to be accomplished is to validate the alerts as well as understand the nature of the attack and the depth of its impact.
  • Is the attack coming from:
    • A malicious insider? 
    • A knowledgeable and determined outside attacker?  
    • A low-risk malware infection that’s not likely to have progressed beyond a single system? 
  • How many endpoints or servers are involved? 
  • How many hours, days, weeks, or months has the threat likely been present?
These are questions that can truly be answered only after a complete examination of affected systems. EnCase Cybersecurity helps security teams do just that without deep forensics expertise, through its ability to expose and automate forensic response actions in the console they are most used to working in, such as a SIEM. This gives teams what they need not only to validate alerts, but also to gain a working understanding of how the threat is affecting any given endpoint and to identify how deep the compromise does – or doesn’t – go.

As a simple example, take an alert type that has a high false-positive rate as a result of unpatched anti-virus software on the indicated system. Normally, validating this false positive requires involvement from IT, plus the time it takes IT to obtain access to the system and report back the status of the installed anti-virus software – this could take several days. If a forensic incident response solution were integrated into the alerting system, validating this false positive would be a simple matter of automating a hash-value lookup against the hash of an up-to-date anti-virus executable or related file, with the entire process taking mere seconds. The same concept can be used to validate malware detected in motion – that is, to understand immediately whether the attack was successful.
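The lookup described above reduces to a set-membership check once digests are in hand; a minimal sketch, with hypothetical placeholder digests:

```python
def validate_av_alert(observed_digest: str, current_av_digests: set) -> str:
    """Triage an 'unpatched anti-virus' alert with a single hash lookup.

    If the endpoint's AV binary matches a known up-to-date release, the
    alert is a false positive and no site visit or manual check is needed.
    """
    if observed_digest in current_av_digests:
        return "false_positive"
    return "investigate"
```

The savings come entirely from where the check runs: a days-long manual verification loop collapses into an automated comparison at alert time.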

All of this is completed without the biases or misguided assumptions that cloud the judgment of many investigators during an investigation. The forensic-grade, disk-level visibility granted by EnCase Cybersecurity gives teams a transparent, accurate view of what’s happening and what exists on endpoints, from advanced malware to misplaced regulated data, and helps them quickly understand the nature of attacks.

The tools are out there to help simplify incident response and forensic analysis in today’s threat landscape, and it's time more organizations started using them.

Cyber-attacks: Gaining Increased Insight at the Moment of the Breach

Anthony Di Bello
Triage – the prioritizing of patients’ treatments based on how urgently they need care – saves lives when order of care decisions must be made. Medical teams base their decisions on the symptoms they can see and the conditions they can diagnose. I’m sure the process isn’t perfect, especially in the midst of a crisis, but skilled medics can prioritize care based on their knowledge and experience.

For IT incident response teams, cyber-attacks and incidents also happen quickly, but the symptoms can go unnoticed until it’s too late, particularly with targeted attacks on specific data such as medical records or cardholder data. In a previous post, I discussed the challenges security teams face regarding mean time to detection and response. Despite a team’s knowledge and experience in dealing with incidents, without the right information – right away – it’s nearly impossible to make the “triage” decisions needed to mitigate as much risk as they otherwise could.

On the other hand, suppose a number of employees have just clicked on links tucked away in a cleverly crafted phishing e-mail, and their endpoints have been infiltrated. As the attackers launch exploits aimed at applications on those endpoints, security alerts are kicked out by the endpoint security software. When there’s a mass attack, such as one that involves automated malware, hundreds or even thousands of endpoints can be infected simultaneously. It’s not hard for any organization to see when an incident like this is underway.

In either scenario, what’s one of the most crucial aspects of response? Beyond the ability to quickly identify that an attack is underway, the other – just as in triage – is to be able to identify what systems pose the greatest risk to security or contain sensitive data and require immediate response. In many cases, systems affected by a malware attack, for instance, may just need to be restored to a known safe state, while other systems – those with critical access to important systems or containing sensitive information – would need immediate attention so that risk can be properly mitigated.

Once an attack or incident is discovered, the clock begins to tick as you scope, triage, and remedy the damage. Every delay and false positive costs you time and money, and increases the risk of significant loss or widespread damage. The problem is compounded by lack of visibility into the troves of sensitive data potentially being stored in violation of policy.

One of the most effective things to do – whether looking at 500 systems that have been infected all at once or getting reports of dozens if not hundreds of unrelated incidents – is to decide which system breaches place the organization at greatest risk. 

There are a number of ways you can try to accomplish this. For instance, a notebook that is breached, which happens to be operated by a research scientist, could very likely contain more sensitive information than that of a salesperson. But what if a developer’s system gets breached? What if someone in marketing gets breached? Who knows, offhand, what risk that could entail. Perhaps the developer was running a test on an application with thousands of real-world credit card numbers. Maybe the person in marketing was carrying confidential information on a yet-to-be-released product. 

In either case, it’d be helpful to know whether sensitive data resided on the systems. And with alerts and potential breaches coming in so quickly, one might think it’s nearly impossible to make such decisions on the fly. Fortunately, it’s not. We’ve written in the past about the importance of automation when it comes to incident response, and how the marriage of security information and event management and incident response can help organizations respond to security events. Here’s another technology you might want to consider using with your incident response systems: the ability to conduct content scans, such as those that look for critical data, financial account data, personally identifiable information, and other sensitive content, either on a scheduled basis or automatically in response to an alert.
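A minimal sketch of such a content scan follows, flagging card-number candidates (validated with the Luhn checksum) and SSN-shaped strings. The patterns here are illustrative only, far simpler than what a production discovery tool would use:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
# Strings shaped like U.S. Social Security numbers.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum: weeds out digit runs that can't be real card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_text(text: str) -> dict:
    """Count likely card numbers (Luhn-validated) and SSN-shaped strings."""
    cards = [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
    ssns = SSN_RE.findall(text)
    return {"cards": len(cards), "ssns": len(ssns)}
```

Run against endpoint contents on a schedule, or triggered by an alert, counts like these are exactly the sensitive-data profile that lets you rank which breached systems to triage first.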

Consider for a moment the potential value such a capability brings most organizations. First, it provides powerful insight that helps prioritize which systems get evaluated first. If several systems are hit by a targeted attack, you'll instantly know which systems to focus your attention on for containment and remediation. Second, you may know, from the types of systems being targeted, what data or information the attackers seek. This gives you valuable time, potentially very early in the attack, to tighten your defenses accordingly. Third, because you'll have actionable information, you'll have a fighting chance at clamping down on the attack before many records are accessed, or at least mitigating the attack as quickly as modern technology and your procedures allow.

Unlike triage in health care, lives may not be at stake – but critical data and information certainly are. And anything that can be done, such as coupling incident response with intelligent content scans and immediately capturing time-sensitive endpoint data the moment an alert is generated, will increase the overall effectiveness of your security and incident response efforts, and help you understand immediately if sensitive data is at risk.

Before the Breach Part 2 - 6 Best Practices

Anthony Di Bello

Earlier this week, we talked about the importance of incident response. In this post, I'm going to touch on the six best practices presented in our webinar, and why I think each deserves your consideration.

Best Practice #1: Preparation. Whether you are running a sports team, a military unit, or an incident response effort, success requires that the team be prepared. A plan of action needs to be in place. The organization needs intelligence on the new threats and risks it faces. And anyone involved in the incident response process needs a clear understanding of what is expected of them and what they need to do when an incident is underway.

Another part of preparation is understanding the abilities and limitations of your organization. Do you have the resources necessary to respond to outbreaks manually, or would network-enabled response help your team save time and effort? Other important aspects of preparation include having clear and up-to-date knowledge of your environment, as well as understanding where sensitive and regulated data are stored. Finally, as is true with any successful team, you need to test your incident response processes with fire drills and ongoing practice.

Best Practice #2: Identify the risk. Make sure you can identify incidents as quickly as is reasonably possible. Tune your intrusion detection systems properly and integrate security and infrastructure event logs with your SIEM (if you have one). This way, you can quickly identify the critical events that need immediate attention. And note that while SIEMs can help eliminate a lot of noise from your intrusion detection systems, as well as events from throughout your infrastructure, high-risk incidents will need to be carefully vetted and handled by incident response procedures.
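To make the vetting idea concrete, here is a toy sketch of the kind of filtering a SIEM-fed triage step performs; the alert fields and the severity threshold are invented for illustration:

```python
from collections import Counter

def vet_alerts(alerts, min_severity=7):
    """Return alerts worth human attention: high severity, deduplicated.

    Collapses repeated firings of the same rule on the same host so the
    incident response queue sees one escalation, not a flood.
    """
    seen = Counter()
    escalated = []
    for alert in alerts:
        key = (alert["rule"], alert["host"])
        seen[key] += 1
        # Escalate only the first high-severity occurrence per rule/host pair.
        if alert["severity"] >= min_severity and seen[key] == 1:
            escalated.append(alert)
    return escalated
```

A real SIEM correlates across many more dimensions (time windows, asset value, threat intelligence), but the principle is the same: reduce noise before a human ever sees the queue.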

Best Practice #3: Triage. Speaking of alerts, as you have them rolling in from various security platforms, you need a system in place that lets you understand which threats need immediate attention and which can wait for later analysis. In one interesting example during our best-practices webinar, Darrell Arms, a solutions engineer at Accuvant Inc., recalled an instance when a client was on the verge of publicly disclosing what initially appeared to be a significant breach of the company's data. Fortunately, a thorough forensic investigation revealed that there hadn't been any breach at all. That experience shows how important the power and visibility provided by forensically driven capabilities can be.

Best Practice #4: Contain. To contain a threat, you need to be able to collect, preserve, and understand the evidence associated with the incident, including any malware uncovered. And this evidence can't be collected in a haphazard way; it needs to be handled as evidence, because it may well end up being treated as such. Malware and live system information must be collected and preserved in real time in order to provide accurate and timely details for a full scope assessment. This is crucial to understanding the attacker's potential targets, how deep the attacker may have infiltrated, and how far malware may have propagated. Live system details, as well as the malware binary itself, can be leveraged to quickly seek out other infections throughout the network. While the largest enterprises may have the in-house expertise to reverse engineer malware, it is a time-consuming and expensive process when more readily available data can be used to perform an accurate scope assessment immediately. Either way, have a plan in place to use when needed.

Best Practice #5: Recover & Audit. Before the incident can be considered closed, all offending malware and exploits have to be removed from affected systems, and any vulnerabilities that made the attack possible need to be closed. Systems need to be cleansed, or possibly even rebuilt. For this stage, it's important to have the ability to search throughout the network for other potentially infected or compromised systems. During this phase of your incident response plan, you'll need to conduct a sensitive data audit of any affected systems that may have contained personally identifiable information, intellectual property, data governed by regulatory controls, or anything else that could trigger a mandated breach notification.

Best Practice #6: Report & Lessons Learned. Hopefully, the breach didn’t involve regulated data. However, if it did, you’ll need to consult all relevant data breach notification regulations, and develop a plan with internal stakeholders such as IT, communications, business leaders, and legal on how you’ll move forward.

It’s important to remember that any business can be breached. In fact, most will be. And, if a company has been in business long enough, it’ll be breached more than once. That’s why it’s so important to learn from these incidents. Document, in detail, what went wrong, and suggest controls that could work better in the future. Also, document what went right and why.

Perhaps this negative incident can also be an opportunity to obtain budget for things that have been neglected but shouldn't have been. Perhaps your organization needs more people dedicated to IT security and response. Maybe what's needed is not more people but different skill sets than are currently on staff. Or maybe you're missing certain types of technology that would help block such attacks more effectively and more rapidly expose those that do manage to slither through. Take the opportunity to learn from what went wrong, so you can be stronger in the future.

DNS Changer malware highlights need for scalable forensic response

Anthony Di Bello

Given that DNS Changer, five-year-old malware designed to redirect traffic from infected users, still infects an estimated 58 of the Fortune 500 companies and at least two government agencies, it's safe to say IT and IS staff cannot entrust users with overseeing the security of their corporate- or government-issued devices. While the warnings have been loud and clear, and detection and cleanup tools are available, it's no fault of their own; most employees aren't paid to spend their day ensuring that their computer is free of malware.

Unfortunately, for threats like DNS Changer, the detection and cleanup tools require physical access to any given machine in order to address the problem, and in any enterprise spanning multiple locations, or with remote employees, this poses a challenge for the information security team. 

Fortunately, there are tools and just enough publicly available information to overcome this challenge. As mentioned above, the DNS Changer malware modifies device DNS tables to redirect the computer to fraudulent DNS servers. The FBI has been kind enough to publish the ranges of fraudulent IP addresses being injected into the DNS tables of infected computers:

-- 85.255.112.0 through 85.255.127.255

-- 67.210.0.0 through 67.210.15.255

-- 93.188.160.0 through 93.188.167.255

-- 77.67.83.0 through 77.67.83.255

-- 213.109.64.0 through 213.109.79.255

-- 64.28.176.0 through 64.28.191.255

This information, coupled with cyber response technology like EnCase Cybersecurity, allows information security teams to rapidly audit the DNS tables on devices across the enterprise, exposing any device that references a fraudulent DNS entry and providing a rapid, definitive picture of which devices are infected with the DNS Changer malware. At that point, the information security team can take the proper steps to remediate the malware.
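As a rough sketch of what such an audit checks per device, the following Python uses the standard `ipaddress` module to test a configured DNS server against the published rogue ranges. This is an illustration, not EnCase's implementation, and the ranges should be verified against the FBI's advisory before being relied upon:

```python
import ipaddress

# Rogue resolver ranges attributed to DNS Changer (as published by the FBI;
# verify against the original advisory before operational use).
ROGUE_RANGES = [
    ("85.255.112.0", "85.255.127.255"),
    ("67.210.0.0", "67.210.15.255"),
    ("93.188.160.0", "93.188.167.255"),
    ("77.67.83.0", "77.67.83.255"),
    ("213.109.64.0", "213.109.79.255"),
    ("64.28.176.0", "64.28.191.255"),
]

# Convert each start/end pair into CIDR networks for fast membership tests.
NETWORKS = [
    net
    for start, end in ROGUE_RANGES
    for net in ipaddress.summarize_address_range(
        ipaddress.ip_address(start), ipaddress.ip_address(end)
    )
]

def is_rogue_dns(server: str) -> bool:
    """Return True if a configured DNS server falls in a known rogue range."""
    addr = ipaddress.ip_address(server)
    return any(addr in net for net in NETWORKS)
```

An enterprise-wide audit would apply the same check to every DNS entry collected from every endpoint, flagging any machine whose resolver falls in a rogue range.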

A view of a device DNS table as seen by EnCase with IP addresses associated with various DNS entries called out. An audit of these tables network-wide with EnCase Cybersecurity can be used to expose the effects of DNS Changer via known fraudulent DNS table entries.

While modern threats such as DNS Changer have learned to evade traditional signature-based defenses, they still leave traces of their activity somewhere on the target device, whether on the hard disk or in memory. Forensic response technologies like EnCase Cybersecurity are designed to rapidly audit the enterprise for these artifacts, giving security teams a full and accurate understanding of the scope of any incident, as well as the information needed for complete remediation of those threats.

Value of Incident Response Data Goes Far Beyond Any Single Breach

Anthony Di Bello

When thinking about the value of incident response, most people focus on how it limits the potential damage of recent attacks, or even attacks that are currently underway on the network. This is for good reason: proper incident response can help reduce risk, limit the scope of disclosures (should the investigation show that no PII was actually accessed, for instance), reduce the costs of each incident investigation, and cut the costs of breaches significantly.

Yet what many don't consider is that the information gleaned from the investigation not only goes a long way toward explaining the source and scope of a specific incident; those findings can also provide the valuable insight needed to shore up defenses against future attacks.

Consider some of the findings of the 2012 Data Breach Investigations Report, a study conducted by the Verizon RISK Team. It found that 81% of breaches occurred through some form of hacking, and most by external attackers. Additionally, nearly 70% of attacks incorporated some type of malware, and many used stolen authentication credentials and also left a Trojan behind on the network as a way to gain re-entry.

If, for instance, you were breached in that way, you'd know to keep a close eye out for suspicious logins (odd times, geographic locations, failed attempts, and so on), as well as any files or network communication that aren't normal in the environment. Yes, you should be watching for those things anyway, but if you know you are being targeted, or have been recently, it doesn't hurt to tune the radar to look for such anomalies.

One thing about security is that system defense is often like squeezing a water balloon: when you squeeze and tighten in one place, it bulges someplace else. So as you harden certain areas of your infrastructure, it's likely that attackers will quickly target another area. That's why it's important to consistently analyze security event data, especially data from the most recent incidents and breach attempts.

Here’s a sample of ways incident data can help you thwart future incidents:

Data gleaned from incident investigations can provide a complete understanding of an incident, showing IT security exactly how an attacker made their way onto a system or network and how they operated once inside. Ideally, the collection of such data should be automated to ensure real-time response before attack-related data has a chance to disappear. Event-related data gathered this way gives analysts useful indicators they can use to quickly understand the spread of malware throughout the organization without the time-consuming task of malware analysis. This type of data includes ports tied to running processes, artifacts spawned by the malware once on the endpoint, logged-on users, network card information, and much more.

With this knowledge, you can conduct a conclusive scope assessment, maintain blacklists to protect against reinfection, and develop other specific defenses against similar attacks in the future. For example, if you see more attacks through infected USB devices, it may be necessary to block such devices. If there are a number of phishing attacks, launch an employee awareness campaign. If attacks target certain server services left on, close them when possible and put mitigating defenses in place. You get the idea: use what you learn to harden your infrastructure.

Data from the response can be used to develop signatures specific to your own intrusion detection systems, and even to tune alerts sent by your security information and event management system. That same data can be shared with anti-virus vendors so they can craft specific signatures against new threats. For instance, an organization may be the only one to experience a particular kind of attack, or the attack may be vertical-specific; a thorough incident response process may be the only way to obtain the data needed for a signature that protects one's own systems and those of the community.

The investigation may indicate the attack came through a supplier or partner, or through a path within the organization once thought to be secure. With the right information, steps can be taken to notify the breached partner, or to close security gaps you didn't know existed on your own systems.
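The indicator-driven scoping described above, sweeping endpoints for artifacts recovered from one confirmed infection, can be sketched as follows; the snapshot and indicator shapes are invented for illustration:

```python
def match_indicators(snapshot, indicators):
    """Return which indicator types from a compromise are present on a host."""
    hits = []
    if set(snapshot.get("hashes", [])) & set(indicators.get("hashes", [])):
        hits.append("hash")
    if set(snapshot.get("ports", [])) & set(indicators.get("ports", [])):
        hits.append("port")
    if set(snapshot.get("processes", [])) & set(indicators.get("processes", [])):
        hits.append("process")
    return hits

def scope_incident(snapshots, indicators):
    """Map host -> matched indicator types, for every host with any match.

    `snapshots` maps hostnames to collected live-system data; `indicators`
    holds the artifacts (file hashes, listening ports, process names)
    recovered from the initial investigation.
    """
    return {
        host: hits
        for host, snap in snapshots.items()
        if (hits := match_indicators(snap, indicators))
    }
```

This is the cheap alternative to full reverse engineering that the "Contain" discussion mentions: once one investigation yields artifacts, matching them across endpoint snapshots scopes the spread in minutes.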

It should now be clear, when considering the value of incident response, that this data shouldn't be viewed in a vacuum: the processes in place should not only contain the damage of the incident at hand, but also ensure the data gathered is used for lessons learned and incorporated to make one's infrastructure more resilient to future attacks.

Incident Response in the Cloud: Don’t Let It Be an Afterthought

Anthony Di Bello

There certainly has been plenty of discussion around the impact of cloud computing on security. But the fact remains that cloud computing can both complicate and simplify enterprise IT security. For instance, when an organization's data and applications are spread across multiple cloud service providers, security can become significantly more complex. On the other hand, by using cloud and other IT outsourcing services, small companies can outsource all of their IT, including security, and in most cases both greatly simplify their IT and increase their security.

However, when talking about how cloud affects anything, including something as complex as security and incident response, it's important to define which types of cloud services we are talking about. The impact on incident response will differ considerably depending on the model.

Essentially, there are three types of cloud: public, private, and hybrid clouds. Public cloud is what most people think of when they say “cloud” computing. A public cloud is where the underlying infrastructure is shared, and resources are dynamically provisioned. Think of Amazon Web Services for cloud infrastructure, or storage-specific services such as Dropbox.

Then we have private cloud, which is primarily the domain of large enterprises and government agencies: organizations that want a highly virtualized, self-provisioning cloud environment but need to maintain full control and transparency over the infrastructure. Finally, there are organizations that build a "hybrid" cloud infrastructure consisting of both public and private cloud resources. Less critical data and applications may run in the public cloud, while classified, regulated, or valuable intellectual property data is stored and accessed in the private cloud.

The challenge for IR teams is understanding how each of these architectures affects digital investigations. It's a topic we'll look at from time to time in the coming months on this blog.

A simple example of how a cloud architecture can affect incident response: depending on the public cloud service provider, it may be impossible to get the forensic data needed for an investigation. Public providers may not have the internal policy framework, staff resources, technologies, or even the architecture necessary to contain or recover data, and such abilities vary greatly from one provider to the next.

Also, the sharing of resources in multi-tenant environments may make it next to impossible for cloud providers to share logs, network data, and the like, because of their contractual agreements with other customers.

Another area where cloud may complicate incident response efforts is so-called rogue cloud services, where users turn to cloud providers without the knowledge or approval of the corporate IT department. This could include users storing data in public cloud storage services such as Megaupload, or using cloud applications at service providers that may not have the necessary processes in place to aid with IR investigation requests.

While cloud computing doesn’t change what makes for good incident response practices, it does add another level of complexity - and organizations need to be prepared for the change. Of course, this is nothing new to investigators and security teams who have had to deal with many technological changes over the years, from mobile device storage to the rise in intelligent portable devices, virtualization, and even the encroachment of early generation Web services onto the corporate network.

Cloud computing is simply another step in the evolution — and incident responders need to be prepared for the complexities cloud computing brings.

SEC Cybersecurity Guidelines Pose Potential Increase in Litigation for Organizations

Anthony Di Bello and Chad McManamy

On October 13, the Division of Corporation Finance at the Securities and Exchange Commission (SEC) released "CF Disclosure Guidance: Topic No. 2 - Cybersecurity," the culmination of an effort by a group of Senators, led by Senator Jay Rockefeller, to establish a set of guidelines for publicly traded companies to consider when faced with data security breach disclosures. The Senators' concern was that investors were having difficulty evaluating the risks faced by organizations that were not disclosing such information in their public filings.
According to the SEC in issuing the guidelines, "[w]e have observed an increased level of attention focused on cyber attacks that include, but are not limited to, gaining unauthorized access to digital systems for purposes of misappropriating assets or sensitive information, corrupting data, or causing operational disruption." And while the guidelines do not make it a legal requirement for organizations to disclose data breach issues, the guidelines lay the groundwork for shareholders suits based on failure to disclose such attacks.

The guidelines come on the heels of a number of recent high-profile, large-scale data security breaches, including those involving Citicorp, Sony, NBC, and others, many of which have affected organizations around the world. A catalyst for the guidance lies in part in many organizations' failure to report their breaches promptly, or at all. To curb future disclosure issues, the SEC released the guidelines urging companies to disclose their data security breaches.

As stated in the guidance notes, “[c]yber incidents may result in losses from asserted and unasserted claims, including those related to warranties, breach of contract, product recall and replacement, and indemnification of counterparty losses from their remediation efforts.”

“Cyber incidents may also result in diminished future cash flows, thereby requiring consideration of impairment of certain assets including goodwill, customer-related intangible assets, trademarks, patents, capitalized software or other long-lived assets associated with hardware or software, and inventory.”

Consistent with other SEC forms and regulations, organizations are not being advised to report every cyber incident. To the contrary, registrants should disclose only the risk of cyber incidents “if these issues are among the most significant factors that make an investment in the company speculative or risky.” If an organization determines in their evaluation that the incident is material, they should “describe the nature of the material risks and specify how each risk affects the registrant,” avoiding generic disclosures.

The SEC indicated that in evaluating the risks associated with cyber incidents and determining whether those incidents should be reported, organizations should consider:

-- prior cyber incidents and the severity and frequency of those incidents;

-- the probability of cyber incidents occurring and the quantitative and qualitative magnitude of those risks, including the potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption; and

-- the adequacy of preventative actions taken to reduce cyber security risks in the context of the industry in which they operate and risks to that security, including threatened attacks of which they are aware.

Rather than exposing new obligations for organizations, the SEC guidance highlights what company executives already knew about their obligations to report cyber incidents but may not have fully appreciated. The true linchpin for every organization will be the determination of materiality and the decision on which breaches get reported and which do not. As such, public companies will also need to weigh the real-world business risks, specific to their particular market, associated with incidents. For example, "if material intellectual property is stolen in a cyber attack, and the effects of the theft are reasonably likely to be material, the registrant should describe the property that was stolen and the effect of the attack on its results of operations, liquidity, and financial condition and whether the attack would cause reported financial information not to be indicative of future operating results or financial condition," the statement says.

Given the sophistication and success of recent attacks, forensic response has taken center stage when it comes to exposing unknown threats, assessing potential risks to sensitive data and decreasing the overall time it takes to successfully determine the source and scope of any given incident and the risk it may present.

Cybersecurity threats will continue to proliferate for companies of all sizes around the world. Failing to protect sensitive company data will pose an even greater risk going forward, as will the legal implications of failing to disclose material cyber incidents. A proactive, timely approach to preventing cyber incidents represents the best-case scenario for all organizations. Guidance Software's Professional Services team and partners can help. Our consultants can expose unknown risks in your environment, remediate those risks, and provide prevention techniques designed to give your organization an active defense and knowledge of possible attacks unique to your organization.

Chad McManamy is assistant general counsel for Guidance Software, and Anthony Di Bello is product marketing manager for Guidance Software.

Beating the Hacking Latency

Guidance Software

Journalist Kevin Townsend recently spoke with Guidance Software's Frank Coggrave about preventing data theft from hacking attacks by reducing the time from security alert to remediation, and about Guidance Software's recent announcement of EnCase® Cybersecurity 4.3, which automates incident response through integration with SIEM tools like ArcSight.

The article discusses the value that SIEM solutions provide: they scan logs in real time looking for anomalies, discover security events, and can show where things are happening on the network. But they do have a shortcoming: they lack the next step, which is response. That's where Guidance Software's EnCase® Cybersecurity comes in. EnCase® Cybersecurity can identify the root cause of the event and help IT administrators respond quickly, closing the gap between alert and response.

Kevin writes, “Today’s hacker likes to get in and hide himself. He thinks he can go undetected (and often can and does) while he infiltrates deeper into the network looking for the most valuable data. Hacking comes with its own latency – and you need to use that latency between infiltration by the hacker and exfiltration of your data in order to stop him…SIEM plus forensics has the potential to improve the SIEM and, by reducing the time to remediation, to defeat the hacking latency.”

An additional problem is that IT security is a 24x7 job. When the SIEM solution triggers an alert in the middle of the night, response can’t wait. Frank provided Kevin with an example of how EnCase® Cybersecurity can help:

“One of the filtering systems picks up that something is happening that shouldn’t. It reports it to the SIEM. Correlation with other alerts indicates that it’s potentially a serious incident. ‘But what do you do if it’s 2:00am. Or it’s just part of a whole series of other alerts happening at the same time? Well, the SIEM can now trigger EnCase® Cybersecurity Solution to automatically and immediately dive in and do an investigation. We can capture who is on the machine in question, what applications are running at the time, what processes are in memory; we can kill the applications if we want to, and we can clear up the incident before it becomes too serious.’ Going back to our earlier metaphor, SIEM+EnCase can now close the stable door before the hacking latency expires, while the hacker is still in the stable and before too much damage is done.”
Read the full article on Kevin Townsend’s website.

EnCase Automates Response to Security Incidents

Anthony Di Bello

New software and services from Guidance Software fill a critical gap in information security by helping organizations respond automatically to security attacks and breaches, giving businesses and government agencies the capacity to react to thousands of events daily and reduce the time between a breach and incident response.

Guidance Software has connected EnCase® Cybersecurity version 4.3 with security information and event management (SIEM) systems to facilitate security automation. For example, when an attack or breach event is suspected, the SIEM system can now automatically trigger an EnCase® Cybersecurity forensic response, including exposing, collecting, triaging and remediating data related to threats — essentially taking action on or gathering data about a security event that might otherwise have been missed.

By automating incident response, organizations can collect actionable information about an attack, minimize data leakage and economic damage, and reduce the time needed to eliminate the threat and return an endpoint computer to a normal state.

According to a September 2011 Cost of Cyber Crime study by The Ponemon Institute, the average time to resolve a cyber attack in 2011 was 18 days. Shortening that duration could reduce the cost and impact of an attack, which the Ponemon study placed at $416,000 on average. Results of the study also showed that malicious insider attacks can take more than 45 days to contain.

"Time is of the essence when performing incident response, but today's security teams are constrained by the volume of attacks and the time it takes to initiate a response. Any delay in response means a potential for more damage and a loss of valuable data," said Victor Limongelli, president and chief executive officer, Guidance Software. "By automating forensic response EnCase® Cybersecurity enables security teams to achieve a real-time view of what was occurring on endpoints during an attack, even if the incident occurred over a weekend or in the middle of the night."

Organizations have three ways they can automate incident response using new features in EnCase® Cybersecurity:

-- Integration with ArcSight — The integration of EnCase® Cybersecurity with HP ArcSight Enterprise Security Manager (ESM) offers four pre-programmed, automatic functions, including forensic auto-capture of system memory, scanning for Internet history and cache files, scanning for personally identifiable information, and conducting a targeted forensic data audit of a system. Security managers can run these EnCase® functions and view results from a pull-down menu inside ArcSight ESM with a few mouse clicks, or they can set them to run automatically, without manual intervention, when an incident triggers a security alert.

-- Response Automation Connector — EnCase® Cybersecurity 4.3 includes the new response automation connector, which is an application-programming interface (API) that gives organizations the ability to integrate the software with other security alerting systems. Customers using the API can integrate all of EnCase® Cybersecurity's incident response capabilities into their SIEM environment and automate those functions that are most important to their security processes.

-- Response Automation Services — Guidance Software has also launched new professional services offerings to help organizations with other security alerting tools or unique staffing needs to automate response to security incidents using EnCase® Cybersecurity.
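As a hypothetical sketch of the kind of glue code a response automation API enables, a SIEM callback might trigger an endpoint capture like this. The endpoint URL, payload shape, and `ResponseClient` class are all invented for illustration and are not the actual EnCase API:

```python
import json
import urllib.request

class ResponseClient:
    """Invented client for a hypothetical response-automation REST API."""

    def __init__(self, base_url, token):
        self.base_url = base_url
        self.token = token

    def trigger_snapshot(self, hostname):
        """Ask the response server to capture volatile data from a host."""
        req = urllib.request.Request(
            f"{self.base_url}/jobs",
            data=json.dumps({"action": "snapshot", "host": hostname}).encode(),
            headers={
                "Authorization": f"Bearer {self.token}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

def on_siem_alert(alert, client):
    """SIEM callback: auto-trigger endpoint capture for high-severity alerts."""
    if alert.get("severity", 0) >= 8:
        return client.trigger_snapshot(alert["host"])
    return None
```

The point of wiring the two systems together this way is timing: the capture fires the moment the alert does, so volatile data such as running processes and memory is preserved even if the incident occurs at 2:00 a.m.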

Learn more about automated incident response with ArcSight ESM and EnCase® Cybersecurity.
Read the news release.