Showing posts with label Incident Response. Show all posts

The Road to CEIC 2013: Cybersecurity 101

Jessica Bair The “Road to CEIC 2013” is a series of blog posts on all things CEIC, before, during, and after, from an insider’s point of view. 

Are you an EnCase® Enterprise user who'd like to learn how to automate your network-enabled incident response? Or perhaps an experienced EnCase® examiner looking for a career change or career enhancement? If a more complete approach to incident response is on your task list, you should attend Cybersecurity 101 with Josh Beckett, product manager for EnCase® Cybersecurity, at the CEIC 2013 Cybersecurity and Compliance Lab. This hands-on lab will demonstrate the basics of using EnCase Cybersecurity, as Josh walks through the major use cases, showing how the software will assist you in both incident response and compliance management roles and how to implement it in your organization's processes.

Attack Aftermath: What’s Next for South Korean Banks and Broadcasters?

Anthony Di Bello What's next for South Korean banks and broadcasters that were paralyzed by a massive cyber attack this past week? I was talking with Rodney Smith, who directs information security and field engineering here at Guidance Software and has consulted on post-attack digital investigations with hundreds of firms around the world.

His take is that a thorough digital forensic investigation is an urgent and essential next step to getting back to normal after having hard drives and associated master boot records (MBRs) wiped out. Master boot records encapsulate critical information on the organization of file systems on the drives. Affected systems were given a forced reboot command, but restarts were impossible because the MBRs and file systems had been corrupted.

RSA Conference: Actionable Intelligence is the Missing Link in Incident Response

Anthony Di Bello Yesterday at Moscone Center I walked by the former Gartner security analyst who famously pronounced nearly 10 years ago that “IDS is dead.”

So it was fitting to attend the keynote by RSA Chairman Art Coviello and hear him say, “It’s past time for us to disenthrall ourselves from the reactive and perimeter-based security dogmas of the past and speed adoption of intelligence-driven security.” He described a fact that’s inescapable to all security professionals now, which is that alerting systems and point solutions for threat response aren’t sufficient to respond to modern threats. The time has come to change the way we perform incident response by using rapidly accessible, actionable intelligence to make the stakes higher for hackers, crackers, and thieves.

Cutting Through the Cyber "Fog of War"

Anthony Di Bello Most people are familiar with the phrase Fog of War, which refers to the uncertainty present in the heat of military operations. That same “fog of war” is also present in the cyber battlefields of today. Without the right insight, it’s next to impossible to tell what constitutes an attack, let alone what attacks have successfully hit their endpoints. Today’s advanced threats are multi-dimensional, rapidly evolving and stealthy.

And they often hit endpoints quickly, sometimes through little-known zero-day vulnerabilities in browsers, operating systems, and other applications. Once inside, they'll sit clandestinely and await instructions, which may be to exfiltrate data of value, burrow deeper into the infrastructure, launch attacks on others, or wait for a more opportune time to strike.

It may be startling to many, but faith in traditional defenses to fight these attacks is often misguided: anti-virus, intrusion detection and prevention systems, firewalls, and other old-line defenses routinely fail to block, let alone identify, these attacks, and they provide little visibility into what is occurring on the network.

Guidance Software has recently partnered with FireEye, Inc. to help clear away the fog by integrating their Malware Protection System (MPS) Appliance, which analyzes and protects network traffic, with our EnCase Cybersecurity software, which secures the endpoint. Together, the two solutions provide a clear view into attempted attacks.

One of the first things customers of our partner FireEye report, as soon as they install the FireEye MPS Appliance, is that they can suddenly see things they couldn't see before, such as numerous bad outbound and inbound communications they previously had no idea were underway.

But seeing the threats is much different than being able to understand precisely what they’re doing on the endpoint. Security and IT managers need to know if malicious traffic is a threat to their networks and infrastructure, and if any of these attacks have successfully compromised an endpoint.

This is where the FireEye-Guidance relationship comes in. When the FireEye MPS Appliance identifies nefarious traffic, the integration with EnCase Cybersecurity makes it possible to automatically validate if the attacks detected over the wire had successfully penetrated into any systems attached to the network.

This integration between FireEye and EnCase Cybersecurity provides customers with everything they need to scope and remedy compromised endpoints.

To achieve this, we've built an Enterprise Service Bus (ESB), a way to communicate with other technologies. With the new integration, EnCase Cybersecurity listens for FireEye MPS to report detected events via an XML feed that is translated by the listener service. With just the IP address information and hash values related to the FireEye-detected event, EnCase Cybersecurity will first validate whether or not the attack successfully compromised the indicated endpoint(s). Once it confirms the presence of malware, additional information related to the attack will be collected and presented to the security analyst via a thin-client review capability. By capturing attack artifacts and indicators in this manner at the time of the alert, the security team can be confident that they have a complete picture of the attack, and a wealth of information with which to triage, determine risk exposure, and accelerate remediation efforts.
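The flow just described (parse the alert feed, pull out the endpoint IP address and file hash, then validate the endpoint) can be sketched roughly as follows. The XML element names and the `scan` callback are hypothetical stand-ins for illustration, not the actual FireEye MPS event schema or the EnCase interface:

```python
import xml.etree.ElementTree as ET

# Hypothetical alert payload; a real FireEye MPS event feed looks different.
SAMPLE_EVENT = """
<alert>
  <src-ip>10.0.4.17</src-ip>
  <malware md5="44d88612fea8a8f36de82e1278abb02f"/>
</alert>
"""

def parse_alert(xml_text):
    """Extract the endpoint IP address and file hash from an alert event."""
    root = ET.fromstring(xml_text)
    ip = root.findtext("src-ip")
    md5 = root.find("malware").get("md5")
    return ip, md5

def validate_endpoint(ip, md5, scan):
    """Ask the endpoint agent whether a file with this hash exists on disk.
    `scan` stands in for the real endpoint-scanning call."""
    hits = scan(ip, md5)
    return {"endpoint": ip, "hash": md5,
            "compromised": bool(hits), "artifacts": hits}

ip, md5 = parse_alert(SAMPLE_EVENT)
# A stubbed scan that "finds" the hash, simulating a compromised host.
result = validate_endpoint(ip, md5, scan=lambda ip, h: ["C:\\temp\\dropper.exe"])
print(result["compromised"])  # True when the hash is found on the endpoint
```

The key design point is that the listener only needs two small pieces of data, an IP address and a hash, to turn a wire-level alert into a confirmed (or dismissed) endpoint compromise.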

Without this network-to-endpoint view provided by the FireEye MPS Appliance and EnCase Cybersecurity, there's no realistic way to tell if exploits and attacks are harmless to an infrastructure (such as exploits targeting an OS that is non-existent on a network), or if some other countermeasure, such as a firewall rule or intrusion-prevention system, has successfully blocked an attack.

Additionally, EnCase Cybersecurity grabs all of the data about the state of the machine, including what processes are running in RAM, what services and system libraries are loaded, who is authenticated to the machine, and more. With that information, security analysts not only understand which systems are truly at risk, but also have what they need to understand the attack more deeply.

What this coupling of FireEye and EnCase technology does is clear much of the fog associated with all of the data that pounds security analysts' management console screens every day. And it makes it possible for them to make clear, well-informed decisions all the way through remediation. For more information about the Guidance Software and FireEye collaboration, check out our press release and download the datasheet.

Incident Response, E-Discovery: Crucial for Finance Industry Security and Compliance Planning

Anthony Di Bello A security breach, or a fraudster acting from the inside, is expensive for any organization to endure - but such incidents are especially expensive for the heavily regulated financial services industry.

In addition to direct monetary losses, breaches also take a significant toll on the trust an institution enjoys. That lost trust causes customers to leave and creates a loss of confidence among partners and even regulators. The resulting customer churn is expensive, and regulators who have lost confidence can respond with more aggressive audits - it's just human nature to look more closely following a security incident. And then there's the higher cost of business insurance that follows a breach.

While they're natural targets for cyber-thieves, financial services firms are also very heavily regulated. That's no secret, but it helps to highlight why, in addition to mitigating the damage of attacks, financial services firms should make sure they have solid incident response and e-discovery capabilities in place. These capabilities - properly integrated with IT, IT security, risk management, legal, HR, and business executives - should be at the ready to respond to potential cases of system abuse, fraudulent transactions, unauthorized or repeated access attempts to systems and applications, and incidents involving customer financial data.
What many people overlook is that there are quite a few regulations that require these capabilities be in place. And if they don’t require it directly, their mandates make them essential.
For instance, the Payment Card Industry Data Security Standard (PCI DSS) is often overlooked when it comes to e-discovery and incident response. However, as this Information Law Group post points out, while PCI DSS doesn't directly require an incident response capability, it effectively does through the requirements that have become commonplace among merchants and their payment processors:
In reality, however, a merchant's true obligations in a security breach situation are dictated by the merchant agreement it has with a payment processor or acquiring bank.  Most modern merchant agreements will require the merchant to comply with the operating regulations and security programs of the relevant card brands.  However, these contracts may also have additional duties relating to incident response, including different reporting requirements, audit rights and indemnification obligations. 

If you are accepting a certain volume of credit card payments, chances are you are contractually required to have adequate incident response capabilities in place.

The same is true if you are a public company: the Sarbanes-Oxley Act of 2002 requires companies to have the ability to prevent and detect fraud, as outlined in this FindLaw article:

Provide reasonable assurance regarding prevention or timely detection of unauthorized acquisition, use or disposition of the [company's] assets that could have a material effect on the financial statements.[9]
Section 302 also specifically identifies internal fraud as an event that would require disclosure by senior management. Put simply, an adequate internal control structure must include "controls related to the prevention, identification and detection of fraud."
It's not just those two, albeit rather substantial, regulations that require financial services firms and others to have effective incident response and e-discovery capabilities in place. There's also the FTC's Red Flags Rule, designed to identify and fight identity theft, as well as the Gramm-Leach-Bliley Act's notification rule.

Each of these regulations, as well as numerous others, makes incident response and e-discovery capabilities essential. In fact, there isn't a financial services firm that doesn't need to be able to quickly find and provide the documents necessary for GLBA or Red Flags Rule compliance in incidents involving privacy or potentially even fraud.

Of course, all of this is easier written in a blog post than done. Like many things in life, success requires the right combination of technology, people, and practice. We believe Guidance Software provides the right technology for both e-discovery and incident response, so all you need to do is make an incident response plan, put it in place, and test and practice - that way, when something unexpected occurs, you'll be ready.

South Carolina Department of Revenue Timeline Tells Common Tale

Anthony Di Bello

If any lesson is to be learned from the recent South Carolina data breach in which 387,000 credit and debit cards and 3.6 million Social Security numbers were stolen, it is that automated incident response is crucial.

Nineteen days after the South Carolina Division of Information Technology informed the state's Department of Revenue that it had been hacked, a timeline of events has emerged that exemplifies the need for organizations to have proactive and reactive incident response capabilities in place. As the analysis within the Verizon Data Breach Investigations Report (DBIR) shows, it's common for defenders to be so far behind the attackers that the damage is done before anyone knows what has happened. This South Carolina breach is no different.

The Verizon DBIR also reveals that 92% of data breaches are brought to the target organization’s notice via third-party sources, not by their own perimeter detection technologies—and once again, this South Carolina Department of Revenue breach is no different. The U.S. Secret Service informed the Department of Revenue of the breach almost a month after the data had been stolen.

According to what we now know publicly, on August 27, there was an attempted probe of the SC Department of Revenue systems. Another set of probes hit on September 2. Then, around mid-September, the breach occurred, and Social Security numbers, credit and debit cards were accessed. It wasn’t until early October that the Secret Service informed the Department of Revenue of a potential cyber attack. Then, on October 20, the vulnerabilities that made the attacks possible were patched. Finally, six days later, on October 26, the public was notified.

Thus the timeline looked like this, with a large gap between the breach and detection:

According to that timeline of known events, attackers were active on the target network three different times before any data were extracted. During that time, it’s a safe assumption that attackers were mapping out sensitive data locations and looking for vulnerabilities that would allow them to exfiltrate data without being noticed.

If public information is correct, it is likely that the initial probes included installation of a command and control beacon to ensure access to systems for continued reconnaissance. From there, it is very likely that there was ongoing covert channel communication and disk/memory artifacts that could have been detected before the attack was ultimately successful.

The central takeaway here is that something must be done to close the gap between when a breach occurs and when it is identified. We covered how to do that - by having both advanced threat detection and incident response technologies in place - in our webinar, 1-2 Punch Against Advanced Threats.

Not identifying breaches underway is a huge opportunity lost, because if the proper detection and response capabilities are in place, it’s possible to stop many attacks as they are in progress. For instance, it is very likely that technology like FireEye could have detected the illicit outbound communication, while EnCase Cybersecurity could have validated the hosts responsible for that communication as well as exposed additional artifacts with which to triage the scope of the attack underway.

At this point, FireEye would cut the outbound communication and EnCase Cybersecurity would kill the process and files that were responsible for that communication, and a scope/impact assessment investigation would commence—all before any data were stolen.

The resulting timeline would look like the graphic below:

While there’s no perfect defense that will stop all attacks, it’s pretty clear from the South Carolina breach, coupled with the data from Verizon DBIR, that with swifter, automated incident response, many more attacks could be stopped before any data are stolen.

The High Costs of Manual Incident Response

Anthony Di Bello It’s widely accepted and understood in most circles – but especially in IT – that when something can be effectively automated, it should be. In fact, automation is one of the best ways to increase efficiency.

It’s widely understood, that is, except when it comes to incident response. For whatever reason, incident response at most organizations remains largely a set of manual processes. Hard drives are combed manually for incident data. So are servers. Oftentimes, as a result, evidence in volatile memory is lost. And, in manual ad hoc efforts, confusion often reigns as to who should respond and how. In large organizations, where there are complex lines over who owns what assets and processes, such decisions are anything but easy. Determining who should respond to each incident as it is already underway wastes valuable time. We would never take this approach in the physical world — imagine a bank with no armed guards, no security cameras and no clearly defined emergency processes in place.

Additionally, organizations that rely heavily on manual processes must dispatch an expert to the location where the affected systems reside just to perform analysis and prepare for the eventual response.

There are clear costs associated with all of those aspects of manual response - and hidden costs, too. With automated response, by contrast, you can immediately validate the attack and prioritize high-risk assets, reduce the number of infected or breached systems, and contain an ongoing attack more quickly. This alone can be the difference between having confidential data and intellectual property stolen, or systems disrupted, and stopping the attack in time. It can also be the difference between regulated data being disclosed, triggering a mandatory breach notification, and an incident that goes no further than a single end user's endpoint being infiltrated.

There are many ways automation can help turn this around and put time on your side. For instance, with an automated incident response capability, such as that provided by EnCase Cybersecurity, it's possible to integrate with existing security information and alerting systems to ensure response occurs as the alert is generated. This capability dramatically reduces mean time to response and provides the right individuals the time-sensitive information they need to accurately assess the source and scope of the problem, as well as the risk it presents to your data.

This automation also includes the ability to take snapshots of all affected endpoints and servers, so that immediate analysis of the exact state of the machine at the time of the incident can be performed. This is a great way to identify what is actually going on in the system, such as uncovering unknown or hidden processes, running dynamic link libraries, and other stealth activities. As threats grow increasingly clandestine, this speed is all the more important.
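To illustrate the snapshot idea in miniature, the sketch below diffs a point-in-time record of a machine against a known-good baseline to surface processes that shouldn't be there. The `Snapshot` structure, field names, and host/process names are invented for illustration; they are not the EnCase data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Snapshot:
    """Point-in-time record of what is running on an endpoint."""
    host: str
    taken_at: datetime
    processes: set = field(default_factory=set)
    dlls: set = field(default_factory=set)
    logged_in: set = field(default_factory=set)

def unknown_processes(snapshot, baseline):
    """Processes present now that were absent from the known-good baseline."""
    return snapshot.processes - baseline.processes

baseline = Snapshot("ws-042", datetime.now(timezone.utc),
                    processes={"explorer.exe", "svchost.exe"})
incident = Snapshot("ws-042", datetime.now(timezone.utc),
                    processes={"explorer.exe", "svchost.exe", "rundl1.exe"},
                    logged_in={"jdoe"})
# The typo-squatting "rundl1.exe" stands out once compared to the baseline.
print(sorted(unknown_processes(incident, baseline)))  # ['rundl1.exe']
```

The value of capturing the snapshot at alert time, rather than hours later, is that volatile evidence like this process list is exactly what disappears first.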

There's also a facet of our technology that's often not considered part of response, but actually is: endpoint data discovery. With it, you can understand where sensitive data exists across your enterprise, remove it from errant locations, and ensure data policies are being followed to reduce the risk of data exfiltration. By integrating that capability with detection systems, you can quickly understand the risk a threat presents to any potentially affected machine based on its sensitive-data profile and prioritize other response activities accordingly.

Finally, to ensure that the process is efficient, it’s crucial to have solid workflow processes in place. This makes it much more straightforward to quickly assign incidents to the right analysts, or teams, as well as track investigations from open to close.

It's impossible to detail the exact monetary return of effective, automated incident response - but it's certain that such automation will save you significantly. It will reduce your exposure and the manpower required to respond, speed time to identification and remediation of a breach, and very likely limit breach impact, especially when the breach is caught early. With a malicious breach costing more than $200 per record, and breached records running into the tens, if not hundreds, of thousands per incident - not to mention regulatory sanctions, fines, and potential lawsuits - anything reasonable that can be done to mitigate the impact of the inevitable breach should be done.

Beyond Virtual Whack-a-Mole: A Look at Proactive Incident Response with SC Magazine

Anthony Di Bello Several days ago I met with SC Magazine Executive Editor Dan Kaplan to film a discussion about a new kind of incident response. You can watch the video of the interview here. We discussed how Operation Aurora and Google’s unsolicited disclosure of this attack really opened people’s eyes to the fact that data breaches are going to happen to even the most secure of organizations.
It was this disclosure that kicked off a fundamental shift in the willingness of any given organization to admit that a breach is a very real possibility and a significant problem.
While many publicly disclosed breaches come from government, financial, and university organizations, this is a cross-industry problem, and it's important that any organization with something worth stealing move beyond the game of virtual whack-a-mole being played today and institute a response plan, along with the people and technology to support that plan.
Proactive Response
So when Dan asked what I thought were the general best practices for the organization, my answer was direct and straightforward: the visionary organization needs to take a “lean forward” approach. After all, in cybersecurity as in football, the best defense is a good offense.

Where do you start with proactive incident response? One key way is by automating as much as you can of the initial response workflow, such as capturing some responsive data from the host in order to determine things like:

  • What happened?
  • What’s the scope of the potential breach?
  • How were devices communicating when the alert came across?
  • Are any hidden processes running on the machine in question?

As Dan phrased it, it’s about creating a new level of intelligence; the more information you have, the better you can respond. Which is exactly the right perspective: if your organization can capture host information in a more automated fashion as a new layer of visibility to whatever exists on the wire, the faster you can respond to any threat arriving at any time.

The net-net of our discussion was this: Cyberattacks aren’t going away. The forward-thinking organization needs to have a workflow in place and a big, red button to push the moment the first signs of an attack arrive.

For more on the best practices in incident response, check out my blog series, “Before the Breach.” 

Incident Response for the Masses

Anthony Di Bello Being able to leverage the powerful capabilities of forensic incident response software no longer requires significant, specialized training for the security analyst.

When an attack strikes, or a suspected breach is underway, time is everything. Unfortunately, alerts sent from intrusion detection systems, security information and event managers (SIEMs), data leak prevention tools, and others aren't always the most accurate. Yet every time-consuming false alert and lost moment is costly to the effectiveness of the IT security program.

The trouble is that, historically, initial forensic investigations have required detailed training - and that expertise isn't always available at a moment's notice, if you even have those skills on staff.

Helping to automate incident response, without the need for extensive forensic training, is one of the strongest points of EnCase Cybersecurity. When you first suspect, or know, an attack is underway, the first thing that needs to be accomplished is to validate the alerts as well as understand the nature of the attack and the depth of its impact.
  • Is the attack coming from:
    • A malicious insider? 
    • A knowledgeable and determined outside attacker?  
    • A low-risk malware infection that’s not likely to have progressed beyond a single system? 
  • How many endpoints or servers are involved? 
  • How many hours, days, weeks, or months has the threat likely been present?
These are questions that can truly only be answered after a complete examination of affected systems. EnCase Cybersecurity helps security teams do just that without deep forensics expertise, through its ability to expose and automate forensic response actions in the console they are most used to working in, such as a SIEM. This provides teams what they need not only to validate, but also to have a working understanding of how the threat is affecting any given endpoint, and to identify how deep the compromise does - or doesn’t - go.

As a simple example, take an alert type that generates a high false-positive rate as a result of unpatched anti-virus on the indicated system. Normally, validating this false positive requires involvement from IT, plus the time it takes for IT to obtain access to the system and report back the status of the installed anti-virus software - which could take several days. If a forensic incident response solution were integrated into the alerting system, validating this false positive would be a simple matter of automating a hash-value look-up based on the hash value representing an up-to-date anti-virus executable or related file - the entire process taking mere seconds. The same concept can be used to validate malware detected in motion - that is, to understand immediately whether the attack was successful.
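A hash-value look-up of this kind is simple to sketch. The file name, reference hash table, and `av_is_current` helper below are made-up illustrations of the concept, not part of any real product; the demo file content is the classic pangram whose MD5 is widely published:

```python
import hashlib
import tempfile

# Hypothetical table of known-good hashes for up-to-date AV binaries.
KNOWN_GOOD = {"av_engine.exe": "9e107d9d372bb6826bd81d3542a419d6"}

def file_md5(path):
    """Hash a file on the endpoint in chunks to avoid loading it whole."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def av_is_current(path, name="av_engine.exe"):
    """True if the installed binary matches the up-to-date reference hash."""
    return file_md5(path) == KNOWN_GOOD[name]

# Demo: a stand-in "binary" whose content hashes to the reference value.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"The quick brown fox jumps over the lazy dog")
print(av_is_current(f.name))  # True
```

A match means the endpoint's anti-virus is current and the alert can be closed as a false positive without ever dispatching anyone to the machine.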

All of this is completed without the biases or misguided assumptions that cloud the judgment of many investigators during an investigation. The forensic-grade, disk-level visibility granted by EnCase Cybersecurity provides teams a transparent, accurate view of what's happening and what exists on endpoints, from advanced malware to misplaced regulated data, and helps teams quickly understand the nature of attacks.

The tools are out there to help simplify incident response and forensic analysis, and in today's threat landscape, it's time more organizations started using them.

Not If, But When: My Chat with SC Magazine on Incident Response

Anthony Di Bello Last week I had the opportunity to sit down with the Executive Editor of SC Magazine, Dan Kaplan, at his office in New York City to talk about incident response. Since Google's bad breach year in 2009, we at Guidance Software have been working with companies in every industry that know they must have a response plan in place. There's no escaping the fact that data breaches will happen, and a wait-and-see approach to incident response can be very bad for business.

Watch the whole interview above for more on this proactive incident-response trend, best practices for incident response and how response varies depending on the type of attack. And for more on the best practices in incident response, check out my blog series, “Before the Breach.”

Cyber-attacks: Gaining Increased Insight at the Moment of the Breach

Anthony Di Bello
Triage – the prioritizing of patients’ treatments based on how urgently they need care – saves lives when order of care decisions must be made. Medical teams base their decisions on the symptoms they can see and the conditions they can diagnose. I’m sure the process isn’t perfect, especially in the midst of a crisis, but skilled medics can prioritize care based on their knowledge and experience.

For IT incident response teams, cyber attacks and incidents also happen quickly, but the symptoms can go unnoticed until it's too late, particularly with targeted attacks that seek specific data such as medical records or cardholder data. In a previous post, I discussed the challenges security teams have with mean time to detection and response. Despite the team's knowledge and experience when it comes to dealing with incidents, without the right information - right away - it's nearly impossible for the team to make the "triage" decisions needed to mitigate as much risk as they otherwise could.

On the other hand, suppose a number of employees have just clicked on links tucked away in a cleverly crafted phishing e-mail, and their endpoints are infiltrated. As the attackers launch exploits aimed at applications on those endpoints, security alerts are kicked out by the endpoint security software. When there's a mass attack, such as one that involves automated malware, hundreds or even thousands of endpoints can be infected simultaneously. It's not hard for any organization to see when an incident like this is underway.

In either scenario, what’s one of the most crucial aspects of response? Beyond the ability to quickly identify that an attack is underway, the other – just as in triage – is to be able to identify what systems pose the greatest risk to security or contain sensitive data and require immediate response. In many cases, systems affected by a malware attack, for instance, may just need to be restored to a known safe state, while other systems – those with critical access to important systems or containing sensitive information – would need immediate attention so that risk can be properly mitigated.

Once an attack or incident is discovered, the clock begins to tick as you scope, triage, and remedy the damage. Every delay and false positive costs you time and money, and increases the risk of significant loss or widespread damage. The problem is compounded by lack of visibility into the troves of sensitive data potentially being stored in violation of policy.

One of the most effective things to do – whether looking at 500 systems that have been infected all at once or getting reports of dozens if not hundreds of unrelated incidents – is to decide which system breaches place the organization at greatest risk. 

There are a number of ways you can try to accomplish this. For instance, a notebook that is breached, which happens to be operated by a research scientist, could very likely contain more sensitive information than that of a salesperson. But what if a developer’s system gets breached? What if someone in marketing gets breached? Who knows, offhand, what risk that could entail. Perhaps the developer was running a test on an application with thousands of real-world credit card numbers. Maybe the person in marketing was carrying confidential information on a yet-to-be-released product. 

In either case, it'd be helpful to know if sensitive data resided on the systems. And with alerts and potential breaches coming in so quickly, one would think it's nearly impossible to make such decisions on the fly. Fortunately, it's not. We've written in the past about the importance of automation when it comes to incident response, and how the marriage of security information and event management with incident response can help organizations improve how they respond to security events. Now here's another technology you might want to consider using with your incident response systems: the ability to conduct content scans - such as those that look for critical data, financial account data, personally identifiable information, and other sensitive content - either on a scheduled basis or automatically in response to an alert.
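As a rough illustration of what such a content scan might look for, here is a minimal sketch that flags candidate payment-card numbers in text and uses the Luhn checksum to weed out random digit runs. The regex and function names are our own, not any product's interface:

```python
import re

# Digit runs of 13-16, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number):
    """Luhn checksum: filters out digit runs that aren't card numbers."""
    digits = [int(d) for d in number[::-1]]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text):
    """Return candidate card numbers in text that pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

# "4111..." is the well-known Visa test number; the second run fails Luhn.
sample = "Order notes: card 4111 1111 1111 1111, ref 1234 5678 9012 3456"
print(find_card_numbers(sample))  # ['4111111111111111']
```

A real scanner would add many more patterns (Social Security numbers, account formats, keywords), but the principle is the same: a hit on a breached machine immediately raises its response priority.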

Consider for a moment the potential value such a capability brings most organizations. First, it provides powerful insight that helps prioritize which systems get evaluated first. If several systems are hit by a targeted attack, you'll instantly know which systems to focus your attention on for containment and remediation. Second, you may know, from the types of systems being targeted, what data or information the attackers seek. This will give you valuable time, potentially very early in the attack, to tighten your defenses accordingly. Third, because you'll have actionable information, you'll have a fighting chance at clamping down on the attack before many records are accessed, or at least mitigating the attack as quickly as modern technology and your procedures allow.

Unlike triage in health care, lives may not be at stake – but critical data and information certainly are. And anything that can be done, such as coupling incident response with intelligent content scans and immediately capturing time-sensitive endpoint data the moment an alert is generated, will increase the overall effectiveness of your security and incident response efforts, and help you understand immediately whether sensitive data is at risk.

Lessons from Black Hat

Anthony Di Bello One of the biggest security conferences of the year is an important reminder of just how creative your adversaries can be.
Whenever I go to the Black Hat USA security conference in Las Vegas, I don’t know whether I come away feeling more knowledgeable about the state of IT security - or more concerned. Honestly, it’s probably a little of both. This year’s show was no different.
One of the more frightening items of research this year will certainly give hotel-goers around the world something to think about. Security researcher Cody Brocious revealed in his presentation just how easy it is to pick hotel electronic locks. The researcher demonstrated how certain types of hotel locks can be bypassed to gain access to the room using little more than the open source portable programming platform known as Arduino.
Another very interesting bit of research came from two university researchers who managed to create a “replicated eye” that is capable of fooling iris biometric scanners into allowing authentication. The team printed synthetic iris image codes of actual irises stored in a database. You can read more about their research here.
Even Microsoft’s upcoming operating system didn’t get through the conference unscathed, with a researcher highlighting ways the security of the operating system can be bypassed, such as applications being able to hijack Internet access rights of other applications, and other potential vulnerabilities. While the researcher says Windows 8 has many security benefits over its predecessors, there will still be zero-day vulnerabilities just waiting to be found.
And in the days after Black Hat, at DefCon, a 10-year-old hacker was recognized at the very first DefCon Kids, an overlay of DefCon, for finding a way to exploit mobile apps by manipulating a device’s system clock.
Other interesting research included tools that make it possible to circumvent web application firewalls, demonstrations of the ease with which database permissions can be bypassed, and a growing number of known ways to hack smartphones.
All of this goes to show that the imagination (and age!) of attackers has no limits. And, inherently, no system can be trusted to be fully secure and impenetrable. For someone who has spent as much time in the IT security industry as I have, that’s a humbling reminder that no matter how much we focus on prevention - someone will always find a way through the walls we’ve put in place.
This makes it essential that organizations be able to identify any potentially nefarious changes and unknown data or processes in their environment. That means, of course, enterprises need to know what their systems look like when pristine and healthy. That’s the only way to be able to spot the unknown in the environment, and be able to clamp down on the attack as soon as is possible. And that’s an important part of the philosophy behind EnCase Cybersecurity.
It also means that a focus on incident response is as important as ever. It’s the organizations that can identify, clamp down upon, and successfully mitigate the damage of breaches that will, I believe, prove to be the most effective at information security. And effective incident response is a subject we just treated at some length.

Before the Breach Part 2: Six Best Practices

Anthony Di Bello Earlier this week, we talked about the importance of incident response. In this post, I’m going to touch on the six best practices presented in our webinar, and why I think each deserves your consideration.

Best Practice #1: Preparation. Whether you are running a sports team, a military operation, or an incident response effort, success requires that the team be prepared. A plan of action needs to be in place. The organization needs intelligence on the new threats and risks it faces. And everyone involved in the incident response process needs a clear understanding of what is expected of them and what they need to do when an incident is underway.

Another part of preparation is understanding the abilities and limitations of your organization. Do you have the resources necessary to respond to outbreaks manually, or would network-enabled response help your team save time and effort? Other important aspects of preparation include having clear and up-to-date knowledge of your environment, as well as understanding where sensitive and regulated data are stored. Finally, as is true with any successful team, you need to test your incident response processes with fire drills and ongoing practice.

Best Practice #2: Identify the risk. Make sure you can identify incidents as quickly as is reasonably possible. Tune your intrusion detection systems properly and integrate security and infrastructure event logs with your SIEM (if you have one). This way, you can quickly identify the critical events that need immediate attention. And note that while SIEMs can help eliminate a lot of the noise from your intrusion detection systems, as well as from events throughout your infrastructure, high-risk incidents will still need to be carefully vetted and handled by incident response procedures.
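To make the vetting idea concrete, here is a hypothetical sketch of how alert scoring might work once events land in a SIEM. The field names, weights, and threshold below are invented for illustration and aren’t drawn from any particular product.

```python
# Score each alert by a few illustrative risk factors and surface only
# those above a threshold, highest-risk first.
RISK_WEIGHTS = {
    "malware_detected": 50,
    "admin_account": 30,
    "sensitive_host": 40,
    "repeated_failures": 20,
}

def score(alert):
    """Sum the weights of every risk factor present on the alert."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if alert.get(factor))

def triage(alerts, threshold=60):
    """Return the alerts that warrant immediate attention, riskiest first."""
    scored = [(score(a), a) for a in alerts]
    return [a for s, a in sorted(scored, key=lambda x: -x[0]) if s >= threshold]
```

Anything below the threshold still gets logged for later analysis; anything above it goes straight to the response team.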

Best Practice #3: Triage. Speaking of alerts, as you have them rolling in from various security platforms, you need a system in place that lets you understand which threats need immediate attention and which can wait for later analysis. In one interesting example from our best-practices webinar, Darrell Arms, solutions engineer at Accuvant Inc., recalled an instance when a client was on the verge of publicly disclosing what initially appeared to be a significant breach of the company’s data. Fortunately, after a thorough forensic investigation, it turned out there hadn’t been any breach at all. That experience shows how valuable the power and visibility provided by forensically driven capabilities can be.

Best Practice #4: Contain. To contain a threat, you need to be able to collect, preserve, and understand the evidence associated with the incident, including any malware uncovered. And this evidence can’t be collected in a haphazard way; it needs to be handled as evidence, because it may well become evidence. Malware and live system information must be collected and preserved in real time in order to provide accurate and timely details for a full scope assessment. This is crucial to understanding the potential targets of the attacker, how deep the attacker may have infiltrated, and how far malware may have propagated. Live system details, as well as the malware binary itself, can be leveraged to quickly seek out other infections throughout the network. While the largest enterprises may have the in-house expertise to reverse engineer malware, it is a time-consuming and expensive process when more readily available data can be used to immediately perform an accurate scope assessment. Either way, have a plan in place to utilize when needed.
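The hashing and record-keeping side of evidence preservation can be sketched in a few lines. This is only an illustration of the principle; real forensic tools also preserve volatile state, timestamps, and access-control data, and maintain a formal chain of custody on write-once storage.

```python
import hashlib
import json
import time
from pathlib import Path

def preserve(path, collector):
    """Record a digest and basic chain-of-custody metadata for a collected file.

    A sketch only: the fields here are the bare minimum needed to show
    later that the evidence hasn't changed since collection.
    """
    data = Path(path).read_bytes()
    record = {
        "file": str(path),
        "sha256": hashlib.sha256(data).hexdigest(),
        "size": len(data),
        "collected_by": collector,
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    return json.dumps(record, sort_keys=True)
```

Anyone who later questions the evidence can re-hash the file and compare digests, which is exactly what makes forensically sound collection defensible.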

Best Practice #5: Recover & Audit. Before the incident can be considered closed, all offending malware and exploits have to be removed from affected systems, and any vulnerabilities that made the attack possible need to be closed. Systems need to be cleansed, or possibly even rebuilt. It’s important, at this stage, to have the ability to search throughout the network for other potentially infected or compromised systems. During this phase of your incident response plan, you’ll need to conduct a sensitive data audit of any affected systems that may have contained personally identifiable information, intellectual property, data governed by regulatory controls, or anything that could possibly trigger a mandated breach notification.

Best Practice #6: Report & Lessons Learned. Hopefully, the breach didn’t involve regulated data. However, if it did, you’ll need to consult all relevant data breach notification regulations, and develop a plan with internal stakeholders such as IT, communications, business leaders, and legal on how you’ll move forward.

It’s important to remember that any business can be breached. In fact, most will be. And, if a company has been in business long enough, it’ll be breached more than once. That’s why it’s so important to learn from these incidents. Document, in detail, what went wrong, and suggest controls that could work better in the future. Also, document what went right and why.

Perhaps this negative incident can also be an opportunity to obtain budget for things that have been neglected but shouldn’t have been. Perhaps your organization needs more people dedicated to IT security and response. Maybe what’s needed is not more people, but employees with different skill sets than are currently on staff. Or maybe you’re missing certain types of technology that would help block such attacks more effectively and more rapidly expose those that do manage to slither through. Take the opportunity to learn from what went wrong, so you can be stronger in the future.

Before the Breach Part 1: Prepare for the Inevitable

Anthony Di Bello
Most every organization will be breached eventually. This is the first in a series of posts during Black Hat week covering six best practices that need to be in place for an effective response.

It’s unfortunate, but history shows that it’s not a matter of IF a business will be breached, but WHEN. According to the Ponemon study cited in this ZDNet blog post, Cybersecurity by the numbers: How bad is it?, 90 percent of businesses were breached during the period of the survey last year. Additionally, the study found a staggering 40 percent of businesses didn’t know the source of the attacks against them, while 48 percent pointed to malicious software downloads as a prominent attack vector.

The news isn’t all bad. The fact is that organizations can do a lot to mitigate their risks – if they take the right security precautions and maintain a healthy focus on their ability to respond to incidents as they occur. For example, a separate Ponemon Institute survey from last year found a strong correlation between companies that have a CISO leading organizational security efforts and lower breach costs. The year-over-year cost per record declined from $214 to $194.

This SecurityWeek post, Report: Breach Costs Fall, You Can Thank Your CISO, quoted Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, as saying, “One of the most interesting findings of the 2011 report was the correlation between an organization having a CISO on its executive team and reduced costs of a data breach.”

It stands to reason that a CISO would improve the efficiency of IT security efforts. There’s an executive in the organization fully focused on security and committed to driving best practices into the organization’s processes. The data show the profound impact that all of this focus and preparation creates. It’s also important, when it comes to information security, that the focus not be so lopsided toward defense.

Let me explain. Given the hostile environment we must do business in today, it makes sense to focus on defending your environment with technologies such as firewalls, anti-virus, intrusion detection systems, and the many other defensive tools available. But just as fire prevention isn’t only about safety awareness and better building codes – it’s also about smart response, fire alarms, and a fully trained and equipped fire department at the ready – IT breach incident response is the same way.

And the key to success in incident response is the determination to make it a priority, and having the right equipment and training in place. With that in mind, we recently conducted a webinar on The Six Best Practices on Incident Response that details the key things organizations need to do so that they can mitigate risk and lower the cost and impact of the incidents that come their way.

Throughout the week on this blog we will be taking a closer look at the best practices discussed in our webinar.

Be sure to follow @EnCase on Twitter for Guidance Software announcements and polls during Black Hat.

If you are at the conference, join me and Guidance Software in booth #113 where we will be showcasing the benefits of integrating cyber response technology with perimeter detection tools and raffling off a Google Nexus 7 each day!

Universities increasingly targeted by cybercriminals

Anthony Di Bello There's certainly been plenty of news around universities whose files have been breached in recent years. In a recent incident at the University of Tampa, sensitive information on about 30,000 students and employees was exposed. Last summer, the University of Wisconsin reported finding a malware-infected server that stored the Social Security numbers and names of 75,000 faculty members and students. A cursory search of the DatalossDB shows recent breaches at the University of Virginia, Holy Family University, the University of Nebraska, and Stanford, among others.

Why are university files breached so commonly? There are many reasons. The first may have to do with the culture at most universities. Schools are typically more open with their infrastructure than enterprises, have higher network user turnover, and generally promote an environment that is more tolerant of students exploring and pushing boundaries on the network. Finally, universities are more likely than enterprises to be operating under tight IT budgets, which means that security investment is also going to be tight.

These conditions create an environment that cyber criminals are more likely to view as an easier target.

In addition to being more vulnerable, universities also are shiny targets for attack because they hold a trove of valuable data.

Think about it. Universities possess decades’ worth of data on students – financial aid and loan information, Social Security numbers, student work history, e-mails, as well as student addresses and possibly even information on their parents. Many universities also hold sensitive health-related information.

A quick look at the DatalossDB shows that university files aren’t only being breached accidentally (through lost drives, web server errors, etc.), they’re actually being targeted by attackers. Of the 10 recent breaches listed in the DatalossDB, seven are due to an attack of one form or another.

Considering the facts, it’s pretty clear that universities aren’t being targeted just because they are perceived to be less secure than enterprises, but also because they have valuable data that can be used for fraud and identity theft.

Yet universities that cut security budgets to save money are being shortsighted, and may end up costing themselves more. With nearly every state having a data breach disclosure law, the cost of disclosure is quite high once you account for notification, investigation, mitigation, and potential lawsuits. For instance, the cost of credit monitoring alone can run $15 to $20 per record – and these breaches can involve hundreds, thousands, or even tens of thousands of records apiece.
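A back-of-the-envelope calculation shows how quickly those per-record costs compound. Only the $15–$20 monitoring range comes from the figures above; the fixed overhead below is a placeholder assumption, not a benchmark.

```python
def notification_cost(records, monitoring_per_record=17.50, fixed_overhead=50_000):
    """Rough breach cost: credit monitoring for every affected record, plus a
    flat (hypothetical) figure covering notification, investigation, and
    legal work. Illustrative arithmetic only.
    """
    return records * monitoring_per_record + fixed_overhead

# A breach the size of the University of Tampa incident (~30,000 records),
# at the midpoint of the $15-$20 range:
# notification_cost(30_000) -> 575000.0
```

Against numbers like that, a modest ongoing security investment is cheap insurance.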

To cut the risk of data breaches and keep the costs of those that do occur relatively low, universities need to understand where their sensitive data lives, make sure strong incident response procedures are in place, and be able to quickly respond when something goes awry, such as malware infiltrating a server. With budgets shrinking, dependence on IT systems growing, and attackers more active than ever, it’s critical that universities have the tools to limit the scope – and thereby the costs – of the breaches they do incur.

How cloud computing changes incident response

There’s certainly been plenty, perhaps too much, talk about cloud computing. There’s infrastructure-as-a-service, software-as-a-service, platform-as-a-service - everything is now sold as-a-service. But one aspect of all of this that doesn’t get much attention is how cloud computing affects incident response. And even if you’ve yet to move to the cloud in a significant way, incident response in the cloud is something you should start considering long before you make the move.

Anthony Di Bello So how does cloud computing affect incident response? In a number of ways. First, and possibly most significantly, security and incident response in cloud computing are so new that everyone - cloud providers, security vendors, and enterprises alike - is still striving to get their hands fully around the issue. A number of worthwhile organizations can help with this, such as the European Network and Information Security Agency (ENISA), which has published material relating to cloud security and incident response. In North America there’s the Cloud Security Alliance (CSA), which recently created a team dedicated to cloud computing security incident response.

Interestingly, some of the biggest challenges around cloud computing aren’t technical at all; they’re legal. The legal vagaries surrounding the cloud make it difficult to understand how incident response can be executed in the event of a breach or attack. Who owns the data in the breach? In many cloud contracts, it turns out the cloud service provider technically owns the data. Is your service provider contractually obligated to notify you should they be breached? Are you sure about your answer? Legal experts say that clients need to make certain their contracts cover things such as breach notification, the cost of downtime, and data that has been destroyed.

Also, are you confident, in the event of a breach, that your cloud services provider can conduct an incident investigation - or provide the way for you to investigate the breach against your systems, data, or applications?

If a customer can’t do it themselves, should cloud providers be offering incident response and e-Discovery as a service? That’s a possibility, because existing incident response technology does work in the cloud; its use is more a matter of data ownership, legal authority, and accessibility to affected systems than it is about technical challenges.

As more data moves to the cloud, attackers are going to increasingly target cloud-based systems. But until the rules about incident response become more clearly defined, one of the most important things you can do now is prepare yourself: make sure your cloud provider has the appropriate incident response capabilities in place, and that you have the right contractual agreements set for when something goes wrong (and it will, eventually, at one or more of your cloud services providers).

While most will wait until there is an actual breach before asking these questions, that’s not the best time to do so. In fact, it may be the worst time. A breached services provider is not going to be in the mood to go beyond what is detailed in the contract while in the midst of an incident.

So it’s best to have decided how incident response will be handled long before that happens. To learn more about how incident response capabilities are critical to understanding the source, scope, and damages of a suspected attack, visit

Value of Incident Response Data Goes Far Beyond Any Single Breach

Anthony Di Bello

When thinking about the value of incident response, most people focus on how it limits the potential damage of recent attacks, or even attacks that are currently underway on the network. This is for good reason: proper incident response can help reduce risk, limit the scope of disclosures (should the investigation show that no PII was actually accessed, for instance), reduce the costs of each incident investigation, and cut the costs of breaches significantly.

Yet, what many don’t consider is how the information that is gleaned from the investigation can not only go a long way to understanding the source and scope of any specific incident, but that these findings can also provide the valuable insight needed to shore up defenses for future attacks.

Consider some of the findings of the 2012 Data Breach Investigations Report, a study conducted by the Verizon RISK Team. It found that 81 percent of breaches occurred through some form of hacking, most by external attackers. Additionally, nearly 70 percent of attacks incorporated some type of malware, and many used stolen authentication credentials and left a Trojan behind on the network as a way to regain entry.

If, for instance, you were breached in that way, you’d know to keep a close eye out for any suspicious logins (odd times, unusual geographic locations, failed attempts, and so on), as well as any files or network communication that aren’t normal in the environment. Yes, you should be watching for those things anyway, but if you know you are being targeted, or have been recently targeted, it doesn’t hurt to tune the radar to look for such anomalies.
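Tuning the radar for those login anomalies can be as simple, conceptually, as checking each event against a few expectations. The thresholds and field names below are hypothetical examples, not recommendations.

```python
from datetime import datetime

# Hypothetical baselines for the factors mentioned above: odd hours,
# unfamiliar locations, and repeated failed attempts.
BUSINESS_HOURS = range(7, 20)      # 07:00-19:59 local time
EXPECTED_COUNTRIES = {"US"}
MAX_FAILED_ATTEMPTS = 3

def login_flags(event):
    """Return the reasons a login event looks anomalous (empty = unremarkable)."""
    flags = []
    hour = datetime.fromisoformat(event["time"]).hour
    if hour not in BUSINESS_HOURS:
        flags.append("off-hours")
    if event["country"] not in EXPECTED_COUNTRIES:
        flags.append("unexpected-location")
    if event["failed_attempts"] > MAX_FAILED_ATTEMPTS:
        flags.append("brute-force-pattern")
    return flags
```

After an incident, the baselines themselves get updated with what the investigation revealed about how the attacker logged in.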

One thing about security is that system defense is often like squeezing a water balloon: when you squeeze and tighten in one place, it bulges someplace else. So as you harden certain areas of your infrastructure, it’s likely that attackers will quickly target another area. That’s why it’s important to consistently analyze security event data, especially data from the most recent incidents and breach attempts.

Here’s a sample of ways incident data can help you thwart future incidents:

Data gleaned from incident investigations can provide a complete understanding of an incident and tell IT security exactly how an attacker got onto a system or network, as well as how they operated once inside. Ideally, the collection of such data should be automated, to ensure real-time response before attack-related data has a chance to disappear. Event-related data gathered this way gives analysts useful indicators they can use to quickly understand the spread of malware throughout the organization without having to go through the time-consuming task of malware analysis. This type of data includes ports tied to running processes, artifacts spawned by the malware once on the endpoint, logged-on users, network card information, and much more.

With this knowledge, you gain the ability to conduct a conclusive scope assessment, maintain blacklists that protect against reinfection, and develop other specific defenses against similar attacks in the future. For example, if you see more attacks coming through infected USB devices, it may be necessary to block such devices. If there are a number of phishing attacks, launch an employee awareness campaign. If it’s an attack against certain server services left on, close them when possible and put mitigating defenses in place. You get the idea: use what you learn to harden your infrastructure.
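The blacklist idea can be sketched as a simple set of digests computed from malware recovered during response. This is an illustration of the principle, not how any particular product stores its indicators.

```python
import hashlib

class Blacklist:
    """Known-bad file digests built up from samples recovered in past incidents."""

    def __init__(self):
        self._hashes = set()

    def add_sample(self, malware_bytes):
        """Register the digest of a recovered malware sample."""
        self._hashes.add(hashlib.sha256(malware_bytes).hexdigest())

    def is_known_bad(self, candidate_bytes):
        """Check whether a file seen elsewhere on the network matches a known sample."""
        return hashlib.sha256(candidate_bytes).hexdigest() in self._hashes
```

Sweeping endpoints for matches against a list like this is how a single recovered binary turns into a network-wide scope assessment.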

Data from the response can be used to develop signatures specific to your own intrusion detection systems, and even to tune the alerts sent by your security information and event management system. That same data can be shared with anti-virus vendors so they can craft signatures against new threats. For instance, an organization may be the only one to experience a particular kind of attack, or the attack may be specific to its vertical; a thorough incident response process may be the only way to obtain the data needed for a signature that protects both its own systems and those of the community.

The investigation may indicate the attack came through a supplier or partner, or through a path within the organization once thought to be secure. With the right information, steps can be taken to notify the breached partner, or to close security gaps you didn’t know existed on your own systems.

It should now be clear, when considering the value of incident response, that it’s important not to view this data in a vacuum. The processes in place should not only contain the damage of the incident at hand, but also ensure the data gathered is used for lessons learned and incorporated to make the infrastructure more resilient to future attacks.

Ponemon Cost of a Data Breach Study and Verizon DBIR Highlight Some Good News and Some Bad News

Anthony Di Bello Two highly regarded security studies were recently released: the Ponemon Institute’s 2011 Cost of a Data Breach Study and Verizon’s annual Data Breach Investigations Report, or DBIR. Both contain interesting results.

There was good news in the Ponemon report: both the cost per breached record (whether lost or stolen) and the organizational costs associated with breaches declined, for the first time since the study began seven years ago. The cost of breaches to the organization dropped to $5.5 million from $7.2 million last year, while the cost per breached record fell to $194 from $214.

The 2011 Cost of a Data Breach Study results are based on the evaluation of 49 data breach incidents that ranged from 4,500 to 98,000 records. The study found that 41 percent of companies notified their affected customers within one month of the incident.

While timely notifications and lowered costs associated with breaches are good news, one of the more interesting findings in the report is that organizations that have chief information security officers with organizational responsibility for data protection are able to cut the costs of their breaches by up to 35 percent per compromised record.

That statistic clearly shows that organizations with more mature security programs in place tend to have better outcomes.

The study also found that customer churn rates went down last year: customers are no longer so quick to leave companies that have announced a breach. While that’s certainly good news for those companies, it also suggests that consumers are becoming desensitized to breach notifications. There have been so many notifications and hacking news stories that people are no longer paying attention.

When we look at the Verizon DBIR, we learn that the surge in hacktivism in the past year has made online activism the most prevalent motivation for attack. That’s not to say attackers aren’t still targeting data for monetary value, such as account numbers and intellectual property - they are. However, we’ve seen a wave of politically or civically motivated attacks, which means companies that find themselves in a politically charged industry, or party to a dispute, had better adjust their risk posture accordingly.

The DBIR also points to the fact that all companies had better be prepared to stop attacks quickly, and better yet, to identify when they’re underway. According to the study’s analysis, in 85 percent of incidents attackers are able to compromise their target in minutes or seconds. It turns out that, thanks to easy-to-use and automated tools, it doesn’t take long to hack a server or point-of-sale system.

What makes it even more troubling is that in more than 50 percent of cases, across all organizations, the target’s data is successfully removed within hours of the initial breach. In about 40 percent of incidents, it took the attacker a day or more to find and exfiltrate the data.

While that’s concerning enough, the truly disheartening news is that enterprises move much more slowly than their attackers. In about a quarter of incidents (27 percent), days passed between initial compromise and discovery of the attack. For another 24 percent of organizations, discovery took weeks. For the remaining 48 percent, it took months, or even years.

It goes without saying that’s just not acceptable. Against an adversary that moves in minutes, the defender needs to be able to identify and respond to attacks in near real time. This is only possible when response technology like EnCase Cybersecurity is integrated directly with alerting or event management solutions. It’s a topic we’ve covered previously here.

For a deeper look into the implications of the studies referred to in this post, join Larry Ponemon of the Ponemon Institute and Bryan Sartin, co-author of the Verizon DBIR, for the Guidance Software CISO Summit, May 21st at the Red Rock Resort in Summerlin, Nevada. Learn more at

Successful incident response requires a sound plan backed by accurate information

Anthony Di Bello In a number of our recent posts we discussed the importance of being able to quickly identify and respond to potential security breaches:

Incident Response: The First Step is Identifying the Breach

Beating the Hacking Latency

SIEM Turbocharger

In those articles we covered why it’s critical, for effective incident response, to have the ability to filter the noise from the various security technologies most organizations have in place. In our SIEM Turbocharger post, for instance, we talked about how EnCase’s SIEM integration capabilities enable instantaneous forensic data collection, and how, when the inevitable breach does occur, an assessment can be quickly conducted across endpoints to scope the breadth of the situation.

But what happens should the breach be significant and turn out to be a reportable incident, whether due to a regulatory mandate, SEC reporting expectations, the theft of a considerable amount of confidential data, or a related situation?

News headlines about companies that, once they identified an incident, failed to handle it properly are all too common.

The key to success at these times is having a well-crafted plan in place for how the incident will be handled.

Based on our discussions with customers and industry leaders, there are several things that must be in place to make successful incident response possible. While the IT security incident response team is generally a tight group of IT and security managers and analysts, having the right, and much more organizationally broad, team in place to respond to business-critical incidents is crucial. In fact, if a publicly reportable incident is going to be well managed, it will most likely require input from many parts of the business. That’s the only way to decide how notification of the general public, partners, suppliers, customers, or anyone else affected should best be handled.

Unfortunately, this is where many organizations fall short. When they approach their customers or announce a breach, it’s not always handled as well as it should be, which can cause loss of trust, loss of customers, and even increased regulatory scrutiny.

Once it’s determined that an incident is serious enough to notify business leadership, the people and the protocol for doing so need to be in place ahead of time. That includes informing members of the legal and compliance teams, the CIO’s office, corporate communications, and others. Once the legal, business, and regulatory implications of the breach are understood, it’s time to take the incident to executive management and eventually notify the affected stakeholders.

Of course, what is required throughout the entire incident response process is accurate and trusted information. Organizations need clarity on the nature and scope of the breach as soon as possible so they can start making intelligent decisions as early in the incident as possible. Fortunately, through the integration of EnCase Cybersecurity with SIEM technology, it’s possible to automate the digital forensics data capture process and therefore quickly understand the nature and true scope of an incident. This way, well-informed and more appropriate decisions can be made from the start.
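Conceptually, that SIEM-to-forensics handoff looks like an alert handler that triggers a capture the moment a qualifying event arrives, so volatile data is preserved before it disappears. The code below is a stand-in sketch under that assumption; it does not reflect the actual EnCase Cybersecurity integration API, and the capture function is a placeholder.

```python
def capture_snapshot(host):
    # Placeholder: a real response tool would collect memory, running
    # processes, open ports, and other volatile state from the endpoint.
    return {"host": host, "status": "captured"}

class AlertRouter:
    """Fire a forensic capture for every alert at or above a severity floor."""

    def __init__(self, min_severity=7):
        self.min_severity = min_severity
        self.captures = []

    def handle(self, alert):
        if alert["severity"] >= self.min_severity:
            self.captures.append(capture_snapshot(alert["host"]))
```

The point of the design is that no human sits between the alert and the capture; the analyst reviews evidence that was already collected at alert time.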

Follow me on Twitter @CyberResponder

Incident Response in the Cloud: Don’t Let It Be an Afterthought

Anthony Di Bello There certainly has been plenty of discussion around the impact of cloud computing on security. But the fact remains that cloud computing can both complicate and simplify enterprise IT security. When an organization's data and applications are spread across multiple cloud service providers, security can become significantly more complex. On the other hand, by using cloud and other IT outsourcing services, small companies can outsource all of their IT, security included, and in most cases both greatly simplify their IT and increase their security.

However, when talking about how the cloud affects anything, including something as complex as security and incident response, it’s important to define what types of cloud services we are talking about. The impact on incident response differs considerably depending on the model.

Essentially, there are three types of cloud: public, private, and hybrid. Public cloud is what most people think of when they say “cloud” computing: the underlying infrastructure is shared, and resources are dynamically provisioned. Think of Amazon Web Services for cloud infrastructure, or storage-specific services such as Dropbox.

Then we have private cloud, primarily the domain of large enterprises and government agencies: organizations that want a highly virtualized, self-provisioning cloud environment but need to maintain full control and transparency over the infrastructure. Finally, there are organizations that build a “hybrid” cloud infrastructure consisting of both public and private cloud resources. Less critical data and applications may live in the public cloud, while classified, regulated, or valuable intellectual-property data is stored and accessed in the private cloud.

The challenge for IR teams is understanding how each of these architectures affects digital investigations. It’s a topic we’ll return to from time to time in the coming months on this blog.

A simple example of how cloud architecture can affect incident response: depending on the public cloud service provider, it may be impossible to get the forensic data needed for an investigation. Public providers may not have the internal policy framework, staff resources, technologies, or even the architecture necessary to contain or recover data, and such capabilities vary greatly from one provider to the next.

Also, the sharing of resources in multi-tenant environments may make it next to impossible for cloud providers to share logs, network data, and the like because of their contractual agreements with other customers.

Another area where the cloud may complicate incident response is so-called rogue cloud services: users turning to cloud providers without the knowledge or approval of the corporate IT department. This could include users storing data in public cloud storage services such as Megaupload, or using cloud applications at service providers that may not have the necessary processes in place to support IR investigation requests.
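One practical way IR teams spot rogue cloud usage is by sweeping web proxy logs for unsanctioned cloud domains. Here is a minimal sketch of that idea; the log format, the domain blocklist, and the function name are all illustrative assumptions, not a real tool or policy.

```python
# Hypothetical blocklist of cloud services not sanctioned by IT.
# In practice this list would come from corporate policy, not code.
UNSANCTIONED = {"megaupload.com", "dropbox.com"}

def flag_rogue(log_lines):
    """Return (user, domain) pairs for requests to unsanctioned services.

    Assumes a simplified proxy log format: "user domain method path".
    Real proxy logs (Squid, Blue Coat, etc.) would need real parsing.
    """
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain.lower() in UNSANCTIONED:
            hits.append((user, domain))
    return hits

logs = [
    "alice dropbox.com PUT /upload",
    "bob intranet.corp GET /",
]
hits = flag_rogue(logs)
```

A sweep like this won’t stop rogue cloud use, but it gives responders a starting inventory of which users and services are involved before an investigation request ever reaches the provider.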

While cloud computing doesn’t change what makes for good incident response practices, it does add another level of complexity, and organizations need to be prepared for the change. Of course, this is nothing new to investigators and security teams, who have weathered many technological changes over the years, from mobile device storage to the rise of intelligent portable devices, virtualization, and even the encroachment of early-generation Web services onto the corporate network.

Cloud computing is simply another step in that evolution, and incident responders need to be prepared for the complexities it brings.