
Recent breaches show traditional security defenses fail to deliver

Anthony Di Bello

We’ve highlighted in numerous posts that studies of security incidents and publicly disclosed breaches reveal that it’s all too common for attacks to go unnoticed for days, weeks, months, and even years. And, nearly as troubling, it’s rarely the breached organization that discovers that it’s been compromised – rather it’s usually a customer, partner, supplier, or even law enforcement that eventually notices something is awry and brings it to victims’ attention.

All of that was certainly true with the South Carolina Department of Revenue attack that we covered here. In this incident, the post-breach investigation found that the compromise occurred in mid-September and wasn’t detected until mid-October. And when it was detected, it was by the United States Secret Service, which happened to be conducting a sting against the group responsible for the attack.

So what happened regarding this breach? As we learn more, it’s clear that time was working against the South Carolina Department of Revenue. To be fair, this is true for all targeted attacks. Take a look at the illustration below, from the 2012 Verizon Data Breach Investigations Report, which accurately demonstrates the scope of this challenge. The data in the figure are the result of thousands of investigations conducted last year by Verizon and a number of government agencies from multiple countries, including the United States Secret Service.

When looking at the various time spans between attack and response in all of those incident investigations, disturbing patterns emerge. Specifically, patterns appear when attack life cycles are segmented into four stages: the time between initial attack and compromise; the time between the initial compromise and data being stolen from the target; the time between that compromise and the point at which it was discovered; and finally the time between the discovery of that compromise and remediation.

The data find that attackers can exfiltrate data at best in a matter of hours or days, and at worst in a span of only minutes. Once in, attackers have shown again and again that they can begin exfiltrating data as soon as they’ve compromised a system.

And this isn’t just a handful of organizations; it is thousands. This shows that the status quo provided by traditional security software simply isn’t good enough. And the reality is that after attackers have had weeks, or months, to rummage through a network, simply wiping servers and endpoints isn’t going to rid the network of the infection. The attacker has had too much time to plant backdoors and create ways to burrow back in.

Identify unknown, suspicious behaviors
What’s needed are ways to identify unknown, suspicious behaviors on endpoints. This is best achieved by performing periodic assessments designed to expose unknown applications running in volatile memory and instances of known threats that morph (such as the Zeus banking Trojan), and by conducting ongoing scans for variants of such threats in order to fully understand and address the scope of a successful attack against your infrastructure.

Additionally, in order to reduce your attack surface, you also need to be able to audit endpoints for sensitive data, which in all likelihood is the target of the attackers’ activity. By limiting pools of sensitive and confidential data, you can significantly reduce risk.

EnCase Cybersecurity helps in many of these efforts. First, EnCase Cybersecurity conducts network-wide system integrity assessments against an established known-good baseline. Essentially, you are performing regularly scheduled audits for anomalies across the range of endpoints. And it works because, while you don’t know what the unknown looks like, you do know what the baseline looks like. This lets you examine everything that doesn’t match that baseline and decide whether it’s something benign (which should be added to a trusted profile) or evidence of a malicious attack that needs to be remedied and added to known-bad profiles for future integrity audit scans.

How does EnCase Cybersecurity achieve this? It leverages the concept of entropy for similar-file scans. Think of it as a very fuzzy signature rather than an exact match. It doesn’t matter what kind of files are being evaluated: EnCase Cybersecurity will expose the files and processes used by advanced attacks that are easily missed by traditional security technologies, such as intrusion detection systems and anti-malware software.
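To make the entropy idea concrete, here is a sketch using plain Shannon entropy (not Guidance Software’s proprietary similar-file algorithm, and the threshold is an assumption). High byte entropy is a common marker of the packed or encrypted payloads that morphing malware uses to defeat exact-match signatures:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (one repeated byte) up to 8.0 (uniformly random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Flag content whose entropy suggests packing or encryption.

    The 7.2 bits/byte cutoff is illustrative; a real scanner would tune
    it per file type and combine it with other indicators.
    """
    return shannon_entropy(data) >= threshold
```

Plain text tends to score around 3-5 bits per byte, while compressed or encrypted payloads approach the 8-bit maximum, which is why entropy works as a fuzzy rather than exact signal.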

We’ve recently completed a webinar on this topic, Hunt or be Hunted: Exposing Undetected Threats with EnCase Cybersecurity, that provides much more detail about how EnCase Cybersecurity helps to defend against advanced, clandestine attacks. I invite you to watch, and learn how your organization can proactively ferret out any possible breaches before it’s too late and attackers have had time to entrench themselves into your infrastructure.

# # #

South Carolina Department of Revenue Timeline Tells Common Tale

Anthony Di Bello

If any lesson is to be learned from the recent South Carolina data breach in which 387,000 credit and debit cards and 3.6 million Social Security numbers were stolen, it is that automated incident response is crucial.

Nineteen days after the South Carolina Division of Information Technology informed the state’s Department of Revenue that it had been hacked, a timeline of events has emerged that exemplifies the need for organizations to have proactive and reactive incident response capabilities in place. As the analysis within the Verizon Data Breach Investigations Report (DBIR) shows, it’s common for defenders to be so far behind the attackers that the damage is done before anyone knows what has happened. This South Carolina breach is no different.

The Verizon DBIR also reveals that 92% of data breaches are brought to the target organization’s notice via third-party sources, not by their own perimeter detection technologies—and once again, this South Carolina Department of Revenue breach is no different. The U.S. Secret Service informed the Department of Revenue of the breach almost a month after the data had been stolen.

According to what we now know publicly, on August 27, there was an attempted probe of the SC Department of Revenue systems. Another set of probes hit on September 2. Then, around mid-September, the breach occurred, and Social Security numbers, credit and debit cards were accessed. It wasn’t until early October that the Secret Service informed the Department of Revenue of a potential cyber attack. Then, on October 20, the vulnerabilities that made the attacks possible were patched. Finally, six days later, on October 26, the public was notified.

Thus the timeline looked like this, with a large gap between the breach and detection:

According to that timeline of known events, attackers were active on the target network three different times before any data were extracted. During that time, it’s a safe assumption that attackers were mapping out sensitive data locations and looking for vulnerabilities that would allow them to exfiltrate data without being noticed.

If public information is correct, it is likely that the initial probes included installation of a command and control beacon to ensure access to systems for continued reconnaissance. From there, it is very likely that there was ongoing covert channel communication and disk/memory artifacts that could have been detected before the attack was ultimately successful.

The central takeaway here is that something must be done to close the gap between when a breach occurs and when it is identified. We covered how to do that – by having both advanced threat detection and incident response technologies in place – in our webinar 1-2 Punch Against Advanced Threats.

Failing to identify breaches underway is a huge lost opportunity, because if the proper detection and response capabilities are in place, it’s possible to stop many attacks while they are in progress. For instance, it is very likely that technology like FireEye could have detected the illicit outbound communication, while EnCase Cybersecurity could have validated the hosts responsible for that communication as well as exposed additional artifacts with which to triage the scope of the attack underway.

At this point, FireEye would cut the outbound communication and EnCase Cybersecurity would kill the process and files that were responsible for that communication, and a scope/impact assessment investigation would commence—all before any data were stolen.

The resulting timeline would look like the graphic below:

While there’s no perfect defense that will stop all attacks, it’s pretty clear from the South Carolina breach, coupled with the data from Verizon DBIR, that with swifter, automated incident response, many more attacks could be stopped before any data are stolen.

The High Costs of Manual Incident Response

Anthony Di Bello

It’s widely accepted and understood in most circles – but especially in IT – that when something can be effectively automated, it should be. In fact, automation is one of the best ways to increase efficiency.

It’s widely understood, that is, except when it comes to incident response. For whatever reason, incident response at most organizations remains largely a set of manual processes. Hard drives are combed manually for incident data. So are servers. Oftentimes, as a result, evidence in volatile memory is lost. And, in manual ad hoc efforts, confusion often reigns as to who should respond and how. In large organizations, where there are complex lines over who owns what assets and processes, such decisions are anything but easy. Determining who should respond to each incident as it is already underway wastes valuable time. We would never take this approach in the physical world — imagine a bank with no armed guards, no security cameras and no clearly defined emergency processes in place.

Additionally, organizations that rely heavily on manual processes would have to dispatch an expert to the location where the affected systems reside just to be able to perform their analysis and prepare for the eventual response.

There are clear costs associated with all of those aspects of manual response. But there are additional, hidden costs: everything you give up by not automating. With automated response, you can immediately validate the attack and prioritize high-risk assets, reduce the number of infected or breached systems, and contain an ongoing attack far more quickly. This alone can be the difference between confidential data and intellectual property being stolen, or systems being disrupted, and an attack that is shut down early. It also can be the difference between regulated data being disclosed, triggering a mandatory breach notification, and an incident that goes no further than a single end user’s endpoint being infiltrated.

There are many ways automation can help to turn this around, and help put time on your side. For instance, with an automated incident response capability, such as that provided by EnCase Cybersecurity, it’s possible to integrate with existing security information and alerting systems to ensure response occurs the moment an alert is generated. This capability dramatically reduces mean time to response and gives the right individuals the time-sensitive information they need to accurately assess the source and scope of the problem, as well as the risk it presents to your data.
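As an illustration of alert-driven response (a generic sketch, not EnCase Cybersecurity’s actual API; the alert fields and action names are hypothetical), a handler like the one below opens a response job the moment an alert arrives, capturing volatile state first and queuing a sensitive-data scan for high-severity alerts:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Alert:
    source: str    # e.g. "SIEM" or "IDS"
    host: str
    severity: str  # "low", "high", "critical", ...

@dataclass
class ResponseJob:
    host: str
    opened_at: str = ""
    actions: list = field(default_factory=list)

def on_alert(alert: Alert) -> ResponseJob:
    """Kick off response as the alert is generated, rather than waiting
    for a human to work through the queue. Volatile data comes first,
    because it disappears fastest."""
    job = ResponseJob(host=alert.host,
                      opened_at=datetime.now(timezone.utc).isoformat())
    job.actions.append("memory_snapshot")          # capture volatile state immediately
    job.actions.append("process_inventory")        # what is running right now?
    if alert.severity in ("high", "critical"):
        job.actions.append("sensitive_data_scan")  # prioritize by data at risk
    return job
```

Tying this handler to the alerting pipeline is what collapses mean time to response from days to seconds.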

This automation also includes the ability to take snapshots of all affected endpoints and servers, so that immediate analysis of the exact state of the machine at the time of the incident can be performed. This is a great way to identify what is actually going on in the system, such as uncovering unknown or hidden processes, running dynamic link libraries, and other stealth activities. As threats grow increasingly clandestine, this speed is all the more important.

There’s also a facet of our technology that’s often not considered part of response, but actually is: endpoint data discovery. With it, you can understand where sensitive data exists across your enterprise, remove it from errant locations, and ensure data policies are being followed to reduce the risk of data exfiltration. By integrating that capability with detection systems, you can quickly understand the risk a threat presents to any potentially affected machine based on its sensitive-data profile, and prioritize other response activities accordingly.
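A toy version of such a data discovery scan might look like the following. The regular expressions are illustrative assumptions only; production scanners use validated formats (Luhn checks for card numbers, contextual keywords) to cut false positives:

```python
import re

# Illustrative patterns only -- not a production-grade detector.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # e.g. 123-45-6789
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),           # 16 digits, optional separators
}

def scan_text(text: str) -> dict:
    """Count matches per sensitive-data category in a blob of text,
    giving a rough per-file 'sensitive data profile'."""
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}
```

Run across an endpoint’s documents, counts like these are what let responders rank machines by the data actually at risk.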

Finally, to ensure that the process is efficient, it’s crucial to have solid workflow processes in place. This makes it much more straightforward to quickly assign incidents to the right analysts, or teams, as well as track investigations from open to close.

It’s impossible to detail the exact monetary return of effective, automated incident response – but it’s certain that such automation will save you significantly. It will reduce your exposure and the manpower required to respond, speed time to identification and remediation of a breach, and very likely limit breach impact, especially when the breach is caught early. With a malicious breach costing more than $200 per record, and breach record counts running into the tens, if not hundreds, of thousands per incident – not to mention regulatory sanctions, fines, and potential lawsuits – anything reasonable that can be done to mitigate the impact of the inevitable breach should be done.

Could We Finally Be Getting The National Data Breach Law We Need?

Anthony Di Bello

Readers of this blog know that I’m a proponent of a federal data breach disclosure law, and that a national data breach notification law - or possibly an international agreement - that streamlines breach disclosure mandates is long past due. As we wrote earlier this year, people do need to know when certain records have been compromised. But asking companies to contend with different data breach notification laws wherever they sell their goods or services creates too much confusion.

Fortunately, we may finally get a federal breach disclosure law with the Data Security and Breach Notification Act of 2012. According to the draft bill, the act would require organizations to take “reasonable measures” to protect personally identifiable information, which for the purpose of breach notification would include Social Security numbers, driver’s license numbers, and financial and credit or debit card numbers along with associated security codes. Fines for non-compliance could be as high as $500,000. The Federal Trade Commission would enforce the law.

Assuming this bill isn’t weakened too much as it moves through Congress, I’d be happy to see it become law. Eventually, an international breach disclosure standard would be ideal. Currently, there are too many laws that organizations have to contend with: more than 40 state data breach disclosure laws in the U.S. alone. And in the European Union this year, regulators have discussed requiring data breach notification within 24 hours of knowledge of a breach.

Supporters of this bill argue, rightly so, that the mess of several dozen state laws creates too much complexity, and that this law would simplify disclosure and some aspects of incident response planning for businesses that have customers in multiple states - and, let’s face it, that is just about everyone. And because the bill, as it’s currently written, calls for risk-based decisions, organizations will be able to take reasonable steps to identify the nature of a breach before they disclose. That will lessen the confusion of breach disclosures for consumers, partners, and law enforcement.

There’s no way to tell yet what chance this bill has of making it into law. But whether or not it moves forward, there will eventually be a national data breach law - and a good one can’t come soon enough.


Cyber-attacks: Gaining Increased Insight at the Moment of the Breach

Anthony Di Bello
Triage – the prioritizing of patients’ treatments based on how urgently they need care – saves lives when order of care decisions must be made. Medical teams base their decisions on the symptoms they can see and the conditions they can diagnose. I’m sure the process isn’t perfect, especially in the midst of a crisis, but skilled medics can prioritize care based on their knowledge and experience.

For IT incident response teams, cyber attacks and incidents also happen quickly, but the symptoms can go unnoticed until it’s too late, particularly with targeted attacks aimed at specific data such as medical records or cardholder data. In a previous post, I discussed the challenges security teams face regarding mean time to detection and response. Despite a team’s knowledge and experience in dealing with incidents, without the right information – right away – it’s nearly impossible to make the “triage” decisions needed to mitigate as much risk as they otherwise could.

Now suppose, on the other hand, that a number of employees have just clicked on links tucked away in a cleverly crafted phishing e-mail, and their endpoints have been infiltrated. As the attackers launch exploits aimed at applications on those endpoints, security alerts pour out of the endpoint security software. When there’s a mass attack, such as one that involves automated malware, hundreds or even thousands of endpoints can be infected simultaneously. It’s not hard for any organization to see when an incident like this is underway.

In either scenario, what’s crucial in response? Beyond the ability to quickly identify that an attack is underway, the other requirement – just as in triage – is being able to identify which systems pose the greatest risk to security or contain sensitive data and require immediate response. In many cases, systems affected by a malware attack may just need to be restored to a known safe state, while other systems – those with critical access to important systems or containing sensitive information – need immediate attention so that risk can be properly mitigated.

Once an attack or incident is discovered, the clock begins to tick as you scope, triage, and remedy the damage. Every delay and false positive costs you time and money, and increases the risk of significant loss or widespread damage. The problem is compounded by lack of visibility into the troves of sensitive data potentially being stored in violation of policy.

One of the most effective things to do – whether looking at 500 systems that have been infected all at once or getting reports of dozens if not hundreds of unrelated incidents – is to decide which system breaches place the organization at greatest risk. 

There are a number of ways you can try to accomplish this. For instance, a notebook that is breached, which happens to be operated by a research scientist, could very likely contain more sensitive information than that of a salesperson. But what if a developer’s system gets breached? What if someone in marketing gets breached? Who knows, offhand, what risk that could entail. Perhaps the developer was running a test on an application with thousands of real-world credit card numbers. Maybe the person in marketing was carrying confidential information on a yet-to-be-released product. 

In either case, it’d be helpful to know whether sensitive data resided on the systems. And with alerts and potential breaches coming in so quickly, one would think it’s nearly impossible to make such decisions on the fly. Fortunately, it’s not. We’ve written in the past about the importance of automation when it comes to incident response, and how the marriage of security information and event management and incident response can help organizations respond to security events. Here’s another capability you might want to consider using with your incident response systems: content scans that look for critical data, financial account data, personally identifiable information, and other sensitive content, either on a scheduled basis or automatically in response to an alert.
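The triage logic such scans enable can be sketched in a few lines. Here `data_profiles` is assumed to come from a prior (or alert-triggered) content scan, and the host names are hypothetical:

```python
def prioritize(alerted_hosts: list, data_profiles: dict) -> list:
    """Order alerted hosts by the count of sensitive records they hold,
    so responders triage the highest-risk systems first.

    `data_profiles` maps host -> sensitive-record count; hosts with no
    profile sort last rather than being dropped.
    """
    return sorted(alerted_hosts,
                  key=lambda host: data_profiles.get(host, 0),
                  reverse=True)
```

So if a research notebook, a sales laptop, and a marketing desktop all alert at once, the queue orders itself by data actually at risk instead of by arrival time.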

Consider for a moment the potential value such a capability brings most organizations. First, it provides powerful insight that helps prioritize which systems get evaluated first. If several systems are hit by a targeted attack, you’ll instantly know which systems to focus your attention on for containment and remediation. Second, you may know, from the types of systems that are being targeted what data or information the attackers seek. This will give you valuable time, potentially very early in the attack, to tighten your defenses accordingly. Third, because you’ll have actionable information, you'll have a fighting chance at clamping down on the attack before many records are accessed, or at least mitigating the attack as quickly as modern technology and your procedures allow.

Unlike in health-care triage, lives may not be at stake – but critical data and information certainly are. And anything that can be done, such as coupling incident response with intelligent content scans and immediately capturing time-sensitive endpoint data the moment an alert is generated, will increase the overall effectiveness of your security and incident response efforts, and help you understand immediately whether sensitive data is at risk.

Lessons from Black Hat

Anthony Di Bello

One of the biggest security conferences of the year is an important reminder of just how creative your adversaries can be.

Whenever I go to the Black Hat USA security conference in Las Vegas, I don’t know whether I come away feeling more knowledgeable about the state of IT security - or more concerned. Honestly, it’s probably a little bit of both. This year’s show was no different.
One of the more frightening items of research this year will certainly give hotel-goers around the world something to think about. Security researcher Cody Brocious revealed in his presentation just how easy it is to pick hotel electronic locks. The researcher demonstrated how certain types of hotel locks can be bypassed to gain access to the room using little more than the open source portable programming platform known as Arduino.
Another very interesting bit of research came from two university researchers who managed to create a “replicated eye” that is capable of fooling iris biometric scanners into allowing authentication. The team printed synthetic iris image codes of actual irises stored in a database. You can read more about their research here.
Even Microsoft’s upcoming operating system didn’t get through the conference unscathed, with a researcher highlighting ways the security of the operating system can be bypassed, such as applications being able to hijack Internet access rights of other applications, and other potential vulnerabilities. While the researcher says Windows 8 has many security benefits over its predecessors, there will still be zero-day vulnerabilities just waiting to be found.
And in the days after Black Hat, at DefCon, a 10-year-old hacker was recognized at the very first DefCon Kids, an overlay at DefCon, for finding a way to exploit mobile apps via manipulation of a device’s system clock.
Other interesting research included tools that made it possible to circumvent web application firewalls, the ease in which database permissions can be bypassed, and a growing number of known ways to hack smartphones.
All of this goes to show that the imagination (and age!) of attackers has no limits. And, inherently, no system can be trusted to be fully secure and impenetrable. For someone who has spent so much time in the IT security industry, that’s a humbling reminder that no matter how much we focus on prevention - someone will always figure out a way through the walls we’ve put in place.
This makes it essential that organizations be able to identify any potentially nefarious changes and unknown data or processes in their environment. That means, of course, enterprises need to know what their systems look like when pristine and healthy. That’s the only way to be able to spot the unknown in the environment, and be able to clamp down on the attack as soon as is possible. And that’s an important part of the philosophy behind EnCase Cybersecurity.
It also means that a focus on incident response is as important as ever. It’s the organizations that can identify, clamp down upon, and successfully mitigate the damage of breaches that will, I believe, prove to be the most effective at information security. And effective incident response is a subject we just treated at some length.

Before the Breach Part 1: Prepare for the Inevitable

Anthony Di Bello
Most every organization will be breached eventually. This is the first in a series of posts during Black Hat week covering six best practices that need to be in place for effective response.

It’s unfortunate, but history shows that it’s not a matter of IF a business will be breached, but WHEN. According to the Ponemon study cited in this ZDNet blog post, Cybersecurity by the numbers: How bad is it?, 90 percent of businesses were breached during the period of the survey last year. Additionally, the study found a staggering 40 percent of businesses didn’t know the source of the attacks against them, while 48 percent pointed to malicious software downloads as a prominent attack vector.

The news isn’t all bad. The fact is that organizations can do a lot to mitigate their risks – if they take the right security precautions and maintain a healthy focus on their ability to respond to incidents as they occur. For example, a separate Ponemon Institute survey from last year found a strong correlation between companies that have a CISO leading organizational security efforts and lower breach costs. The year-over-year cost per record declined from $214 to $194.

This SecurityWeek post, Report: Breach Costs Fall, You Can Thank Your CISO, quoted Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, as saying, “One of the most interesting findings of the 2011 report was the correlation between an organization having a CISO on its executive team and reduced costs of a data breach.”

It stands to reason that a CISO would improve the efficiency of IT security efforts: there’s an executive in the organization fully focused on security and committed to driving best practices into the organization’s processes. The data show the profound impact of that focus and preparation. It’s also important, when it comes to information security, that the focus not be too lopsided toward defense.

Let me explain. With the hostile environment we must do business in today, it makes sense to focus on defending your environment with technologies such as firewalls, anti-virus, intrusion detection systems, and the many other defensive tools available. However, just as fire prevention isn’t only about safety awareness and better building codes – it’s also about smart response, fire alarms, and a fully trained and equipped fire department at the ready – IT breach incident response is the same way.

And the key to success in incident response is the determination to make it a priority, and having the right equipment and training in place. With that in mind, we recently conducted a webinar on The Six Best Practices on Incident Response that details the key things organizations need to do so that they can mitigate risk and lower the cost and impact of the incidents that come their way.

Throughout the week on this blog we will be taking a closer look at the best practices discussed in our webinar.

Be sure to follow @EnCase on Twitter for Guidance Software announcements and polls during Black Hat.

If you are at the conference, join me and Guidance Software in booth #113 where we will be showcasing the benefits of integrating cyber response technology with perimeter detection tools and raffling off a Google Nexus 7 each day!

Universities increasingly targeted by cybercriminals

Anthony Di Bello

There’s certainly been plenty of news about universities whose files have been breached in recent years. In a recent incident at the University of Tampa, sensitive information about 30,000 students and employees was exposed. Last summer the University of Wisconsin reported finding a server, infected with malware, that stored the Social Security numbers and names of 75,000 faculty members and students. A cursory search of the DatalossDB shows that breaches at the University of Virginia, Holy Family University, the University of Nebraska, and Stanford, among others, have occurred recently.

Why are university files breached so commonly? There are many reasons. The first may have to do with the culture at most universities: schools are typically more open with their infrastructure than enterprises, have higher network user turnover, and generally promote an environment that is more tolerant of students exploring and pushing boundaries on the network. Finally, universities are more likely than enterprises to be operating under tighter IT budgets, which means security investment is also going to be tight.

These conditions create an environment that cyber criminals are more likely to view as an easier target.

In addition to being more vulnerable, universities also are shiny targets for attack because they hold a trove of valuable data.

Think about it. Universities possess decades’ worth of data on students – financial aid and loan information, Social Security numbers, student work history, e-mails, as well as student addresses and possibly even information on their parents. Many universities also hold sensitive health-related information.

A quick look at the DatalossDB shows that university files aren’t only being breached accidentally (through lost drives, web server errors, etc.), they’re actually being targeted by attackers. Of the 10 recent breaches listed in the DatalossDB, seven are due to an attack of one form or another.

Considering the facts, it’s pretty clear that universities aren’t being targeted just because they are perceived to be less secure than enterprises, but also because they have valuable data that can be used for fraud and identity theft.

Yet when universities cut security budgets to save money, they are being shortsighted and costing themselves more. With nearly every state having a data breach disclosure law, the cost of disclosure is quite high once you consider the expense of notification, investigation, mitigation, and potential lawsuits. For instance, the cost of credit monitoring alone can run $15 to $20 per record – and these breaches can involve hundreds, thousands, or even tens of thousands of records each.

To cut the risk of data breaches and keep the costs of those that do occur relatively low, universities need to understand where their sensitive data lives and be able to respond quickly when something goes awry, such as malware infiltrating a server. And they need to make sure that strong incident response procedures are in place. With budgets shrinking, dependence on IT systems growing, and attackers more active than ever, it’s more important than ever that universities have the tools in place to limit the scope, and thereby the costs, of the breaches they do incur.

Value of Incident Response Data Goes Far Beyond Any Single Breach

Anthony Di Bello

When thinking about the value of incident response, most people focus on how it limits the potential damage of recent attacks, or even attacks that are currently underway on the network. This is for good reason: proper incident response can help reduce risk, limit the scope of disclosures (should the investigation show that no PII was actually accessed, for instance), reduce the costs of each incident investigation, and cut the costs of breaches significantly.

Yet, what many don’t consider is that the information gleaned from an investigation not only goes a long way toward explaining the source and scope of a specific incident; those findings also provide the valuable insight needed to shore up defenses against future attacks.

Consider some of the findings of the 2012 Data Breach Investigations Report, a study conducted by the Verizon RISK Team. It found that 81% of breaches occurred through some form of hacking, and most by external attackers. Additionally, nearly 70% of attacks incorporated some type of malware, and many used stolen authentication credentials and also left a Trojan behind on the network as a way to gain re-entry.

If, for instance, you were breached in that way, you’d know to keep a close eye out for suspicious logins (unusual times, geographic locations, repeated failed attempts, etc.), as well as any files or network communications that aren’t normal in your environment. Yes, you should be watching for those things anyway, but if you know you are being targeted, or have been targeted recently, it doesn’t hurt to tune the radar for such anomalies.
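Tuning the radar for suspicious logins can be as simple as a few rules over authentication events. A minimal sketch, assuming login events arrive as dicts (the field names and thresholds here are hypothetical, not from any particular product):

```python
# Minimal sketch of flagging anomalous logins by time, geography, and
# preceding failed attempts. Field names and thresholds are hypothetical.
BUSINESS_HOURS = range(7, 20)        # 07:00-19:59 local time
EXPECTED_COUNTRIES = {"US"}          # where this account normally logs in
FAILED_ATTEMPT_THRESHOLD = 5

def login_anomalies(event):
    """Return a list of reasons this login event looks suspicious."""
    reasons = []
    if event["hour"] not in BUSINESS_HOURS:
        reasons.append("off-hours login")
    if event["country"] not in EXPECTED_COUNTRIES:
        reasons.append("unexpected geography")
    if event["recent_failures"] >= FAILED_ATTEMPT_THRESHOLD:
        reasons.append("preceded by repeated failures")
    return reasons

event = {"hour": 3, "country": "RO", "recent_failures": 8}
print(login_anomalies(event))
```

Rules like these won't catch everything, but after a breach they can be tightened around the exact patterns the attacker used.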

One thing about security is that system defense is often like squeezing a water balloon: when you squeeze and tighten in one place, it bulges someplace else. So as you harden certain areas of your infrastructure, it’s likely that attackers will quickly target another area. That’s why it’s important to consistently analyze security event data, especially data from the most recent incidents and breach attempts.

Here’s a sample of ways incident data can help you thwart future incidents:

Data gleaned from incident investigations can provide a complete understanding of an incident and will tell IT security exactly how an attacker made their way onto a system or network, as well as how they operated once inside. Ideally, the collection of such data should be automated, to ensure real-time response before attack-related data has a chance to disappear. Event-related data gathered this way gives analysts useful indicators they can use to quickly understand the spread of malware throughout the organization without the time-consuming task of malware analysis. This type of data includes ports tied to running processes, artifacts spawned by the malware once on the endpoint, logged-on users, network card information and much more.
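Automated capture of that kind boils down to running a set of collectors the instant an alert fires and timestamping the results. A minimal sketch of the idea, where the collectors themselves are hypothetical stubs standing in for real endpoint queries (process lists, open ports, logged-on users):

```python
# Sketch of automated, alert-triggered endpoint data capture.
# Real collectors would be supplied by the response tooling; the
# stubs below are hypothetical placeholders.
import json
import time

def capture_snapshot(alert_id, collectors):
    """Run each collector immediately and bundle the results with a
    timestamp, so volatile data is preserved before it can decay."""
    return {
        "alert_id": alert_id,
        "captured_at": time.time(),
        "data": {name: collect() for name, collect in collectors.items()},
    }

# Hypothetical stub collectors standing in for real endpoint queries.
collectors = {
    "processes":  lambda: [{"pid": 4242, "name": "svch0st.exe", "port": 4444}],
    "users":      lambda: ["jdoe"],
    "net_ifaces": lambda: [{"iface": "eth0", "mac": "00:11:22:33:44:55"}],
}

snapshot = capture_snapshot("SIEM-20121101-007", collectors)
print(json.dumps(snapshot["data"]["processes"], indent=2))
```

The important design point is that collection happens at alert time, not hours later when an analyst gets to the ticket.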

With this knowledge, you can conduct a conclusive scope assessment, maintain blacklists to protect against reinfection, and develop other specific defenses against similar attacks in the future. For example, if you see more attacks through infected USB devices, it may be necessary to block such devices. If there are a number of phishing attacks, launch an employee awareness campaign. If attacks come through server services left enabled, close them where possible and put mitigating defenses in place. You get the idea: use what you learn to harden your infrastructure.
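One of the simplest reinfection defenses is a blacklist of file hashes recovered from earlier investigations. A minimal sketch (the listed hash below is just the SHA-256 of an empty file, used here as a stand-in for a real indicator):

```python
# Sketch: a blacklist of file hashes observed in past incidents,
# used to guard against reinfection. The listed hash is illustrative
# (it is the SHA-256 of empty content, not real malware).
import hashlib

KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """Check file content against hashes gathered during response."""
    return sha256_of(data) in KNOWN_BAD_SHA256

print(is_known_bad(b""))        # True: the empty-content hash is listed
print(is_known_bad(b"benign"))  # False
```

Exact-hash matching is easily evaded by repacked malware, which is why the richer behavioral indicators described above (spawned artifacts, ports tied to processes) matter too.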

Data from the response can be used to develop signatures specific to your own intrusion detection systems and even used to tune alerts sent by your security information and event management system. That same data can be shared with anti-virus vendors so that they can craft specific signatures against new threats. For instance, an organization may be the only one to experience a particular kind of attack, or the attack may be vertical specific, but a thorough incident response process may be the only way to obtain data needed for a signature to protect one’s own systems and those of the community.

The investigation may indicate the attack came through a supplier or partner, or through a path within the organization once thought to be secure. With the right information steps can be taken to notify the breached partner, or potentially close security gaps you didn’t know existed on your own systems.

It should now be clear, when considering the value of incident response, that this data shouldn’t be viewed in a vacuum: the processes in place should not only contain the damage of the incident at hand, but also ensure the data gathered is captured as lessons learned and used to make one’s infrastructure more resilient to future attacks.

Ponemon Cost of a Data Breach Study and Verizon DBIR Highlight Some Good News and Some Bad News

Anthony Di Bello

Two highly regarded security studies were recently released: the Ponemon Institute’s 2011 Cost of a Data Breach Study and Verizon's annual Data Breach Investigations Report, or DBIR. Both have interesting results.

There was good news in the Ponemon report: both the cost per breached record (whether lost or stolen) and the overall organizational costs associated with breaches declined – the first such drop since the study began seven years ago. The cost of breaches to the organization fell to $5.5 million from $7.2 million the year before, while the cost per breached record fell to $194 from $214.

The 2011 Cost of a Data Breach Study results are based on the evaluation of 49 data breach incidents that ranged from 4,500 to 98,000 records. The study found that 41 percent of companies notified their affected customers within one month of the incident.

While timely notifications and lowered costs associated with breaches are good news, one of the more interesting findings in the report is that organizations that have chief information security officers with organizational responsibility for data protection are able to cut the costs of their breaches by up to 35 percent per compromised record.

That statistic clearly shows that organizations with more mature security programs in place tend to have better outcomes.

The study also found that customer churn rates went down last year: Customers are no longer so quick to leave companies that have announced that they’ve been breached. While that’s certainly good news for companies that have to announce that they’ve been breached, it also shows that consumers are becoming desensitized to breach notifications. There have been so many breach notifications and hacking news stories breaking that people are no longer paying attention.

When we look at the Verizon DBIR, we learn that the surge in hacktivism over the past year has made online activism the most prevalent motivation for attack. That’s not to say attackers aren’t still targeting data with monetary value, such as account numbers and intellectual property - they are. However, we’ve seen a wave of politically or civically motivated attacks, which means companies in a politically charged industry, or party to a dispute, had better adjust their risk posture accordingly.

The DBIR also points to the fact that all companies had better be prepared to stop attacks quickly - and, better still, to identify when they’re underway. According to the study’s analysis, in 85 percent of incidents attackers were able to compromise their target in minutes or seconds. It turns out that, thanks to easy-to-use, automated tools, it doesn’t take long to hack a server or point-of-sale system.

What makes it even more troubling is that in more than 50 percent of cases, for all organizations, the target’s data is successfully removed within hours of the initial breach. In about 40 percent of incidents it took about a day, or more, for the attacker to find and exfiltrate the data.

While that’s certainly concerning enough, the truly disheartening news is that enterprises move much more slowly than their attackers. In 27 percent of incidents, days passed between initial compromise and discovery of the attack. For another 24 percent of organizations, discovery took weeks. For the remaining 48 percent, it took months or even years.

It goes without saying that’s just not acceptable. Against an adversary that moves in minutes, the defender needs to be able to identify and respond to attacks in near real-time. This is only possible when response technology like EnCase Cybersecurity is integrated directly with alerting or event management solutions. It’s a topic we’ve covered previously here.

For a deeper look at the implications of the studies referred to in this post, join Larry Ponemon of the Ponemon Institute and Bryan Sartin, co-author of the Verizon DBIR, for the Guidance Software CISO Summit, May 21st at the Red Rock Resort in Summerlin, Nevada. Learn more at

Successful incident response requires a sound plan backed by accurate information

Anthony Di Bello

In a number of our recent posts we discussed the importance of being able to quickly identify and respond to potential security breaches:

Incident Response: The First Step is Identifying the Breach

Beating the Hacking Latency

SIEM Turbocharger

In those articles we covered why it’s critical, for effective incident response, to have the ability to filter the noise from the various security technologies most organizations have in place. In our SIEM Turbocharger post, for instance, we talked about how EnCase’s SIEM integration capabilities enable instantaneous forensic data collection, and how, when the inevitable breach does occur, an assessment can be quickly conducted across endpoints to scope the breadth of the situation.

But what happens when the breach is significant and turns out to be a reportable incident – because of a regulatory mandate, SEC reporting expectations, the theft of a considerable amount of confidential data, or a similar situation?

News headlines about companies that, once an incident was identified, failed to handle it properly are all too common.

The key to success at these times is having a well-crafted plan in place for how the incident will be handled.

Based on our discussions with customers and industry leaders, there are several things that must be in place to make successful incident response possible. While the IT security incident response team is generally a tight group of IT and security managers and analysts, having the right – and much more organizationally broad – team in place to respond to business-critical incidents is crucial. In fact, if a publicly reportable incident is going to be well managed, it will most likely require input from many parts of the business. That’s the only way to decide how best to handle notification of the general public, partners, suppliers, customers, and anyone else affected.

Unfortunately, this is where many organizations fall short. When they approach their customers or announce a breach, it’s not always handled as well as it should be, which can cause loss of trust, loss of customers, and even increased regulatory scrutiny.

The people and the protocol for determining that an incident is serious enough to notify business leadership need to be in place ahead of time. That includes informing members of the legal and compliance teams, the CIO’s office, corporate communications, and others. Once the legal, business, and regulatory implications of the breach are understood, it’s time to take the incident to executive management and eventually notify the affected stakeholders.

Of course, what is required throughout the entire incident response process is accurate and trusted information. Organizations need clarity on the nature and scope of the breach as soon as possible so they can start making intelligent decisions as early in the incident as possible. Fortunately, through the integration of EnCase Cybersecurity with SIEM technology, it’s possible to automate the digital forensics data capture process and therefore quickly understand the nature and true scope of an incident. This way, well-informed and more appropriate decisions can be made from the start.

Follow me on Twitter @CyberResponder

SEC Cybersecurity Guidelines Pose Potential Increase in Litigation for Organizations

Anthony Di Bello and Chad McManamy

On October 13, the Division of Corporation Finance at the Securities and Exchange Commission (SEC) released “CF Disclosure Guidance: Topic No. 2 - Cybersecurity,” the culmination of an effort by a group of Senators, led by Senator Jay Rockefeller, to establish a set of guidelines for publicly traded companies to consider when faced with data security breach disclosures. The Senators’ concern was that investors were having difficulty evaluating the risks faced by organizations that were not disclosing such information in their public filings.
According to the SEC in issuing the guidelines, "[w]e have observed an increased level of attention focused on cyber attacks that include, but are not limited to, gaining unauthorized access to digital systems for purposes of misappropriating assets or sensitive information, corrupting data, or causing operational disruption." And while the guidelines do not make it a legal requirement for organizations to disclose data breach issues, the guidelines lay the groundwork for shareholders suits based on failure to disclose such attacks.

The guidelines come on the heels of a number of recent high-profile, large-scale data security breaches, including those involving Citicorp, Sony, NBC and others – many of which have affected organizations around the world. A catalyst for the guidance was, in part, many organizations’ failure to report their breaches in a timely manner, or to report them at all. To curb future disclosure issues, the SEC released the guidelines urging companies to reveal their data security breaches.

As stated in the guidance notes, “[c]yber incidents may result in losses from asserted and unasserted claims, including those related to warranties, breach of contract, product recall and replacement, and indemnification of counterparty losses from their remediation efforts.”

“Cyber incidents may also result in diminished future cash flows, thereby requiring consideration of impairment of certain assets including goodwill, customer-related intangible assets, trademarks, patents, capitalized software or other long-lived assets associated with hardware or software, and inventory.”

Consistent with other SEC forms and regulations, organizations are not being advised to report every cyber incident. To the contrary, registrants should disclose only the risk of cyber incidents “if these issues are among the most significant factors that make an investment in the company speculative or risky.” If an organization determines in their evaluation that the incident is material, they should “describe the nature of the material risks and specify how each risk affects the registrant,” avoiding generic disclosures.

The SEC indicated that in evaluating the risks associated with cyber incidents and determining whether those incidents should be reported, organizations should consider:

-- prior cyber incidents and the severity and frequency of those incidents;

-- the probability of cyber incidents occurring and the quantitative and qualitative magnitude of those risks, including the potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption; and

-- the adequacy of preventative actions taken to reduce cyber security risks in the context of the industry in which they operate and risks to that security, including threatened attacks of which they are aware.

Rather than creating new obligations for organizations, the SEC guidance highlights what company executives already knew about their obligations to report cyber incidents but may not have fully appreciated. The true linchpin for every organization will be the determination of materiality – deciding which breaches get reported and which do not. As such, public companies will also need to weigh the real-world business risks, specific to their particular market, associated with incidents. For example, “if material intellectual property is stolen in a cyber attack, and the effects of the theft are reasonably likely to be material, the registrant should describe the property that was stolen and the effect of the attack on its results of operations, liquidity, and financial condition and whether the attack would cause reported financial information not to be indicative of future operating results or financial condition," the statement says.

Given the sophistication and success of recent attacks, forensic response has taken center stage when it comes to exposing unknown threats, assessing potential risks to sensitive data and decreasing the overall time it takes to successfully determine the source and scope of any given incident and the risk it may present.

Cybersecurity threats will continue to proliferate for companies of all sizes around the world. Failing to protect sensitive company data will pose an even greater risk going forward, as will the legal implications of failing to disclose material cyber incidents. A proactive, timely approach to preventing cyber incidents is the best-case scenario for every organization. Guidance Software’s Professional Services team and partners can help: our consultants can expose unknown risks in your environment, remediate those risks, and provide prevention techniques designed to give your organization an active defense against attacks unique to your organization.

Chad McManamy is assistant general counsel for Guidance Software, and Anthony Di Bello is product marketing manager for Guidance Software.

Beating the Hacking Latency

Guidance Software

Journalist Kevin Townsend recently spoke with Guidance Software’s Frank Coggrave about preventing data theft from hacking attacks by reducing the time from security alert to remediation, and about Guidance Software’s recent announcement of EnCase® Cybersecurity 4.3, which automates incident response through integration with SIEM tools like ArcSight.

The article discusses the value that SIEM solutions provide: they scan logs in real-time looking for anomalies, discover security events, and can show where things are happening on the network. But they do have a shortcoming: they lack the next step, which is response. That’s where Guidance Software’s EnCase® Cybersecurity comes in. EnCase® Cybersecurity is able to identify the root cause of the event and help IT administrators respond quickly, closing the gap between alert and response.

Kevin writes, “Today’s hacker likes to get in and hide himself. He thinks he can go undetected (and often can and does) while he infiltrates deeper into the network looking for the most valuable data. Hacking comes with its own latency – and you need to use that latency between infiltration by the hacker and exfiltration of your data in order to stop him…SIEM plus forensics has the potential to improve the SIEM and, by reducing the time to remediation, to defeat the hacking latency.”

An additional problem is that IT security is a 24x7 job. When the SIEM solution triggers an alert in the middle of the night, response can’t wait. Frank provided Kevin with an example of how EnCase® Cybersecurity can help:

“One of the filtering systems picks up that something is happening that shouldn’t. It reports it to the SIEM. Correlation with other alerts indicates that it’s potentially a serious incident. ‘But what do you do if it’s 2:00am. Or it’s just part of a whole series of other alerts happening at the same time? Well, the SIEM can now trigger EnCase® Cybersecurity Solution to automatically and immediately dive in and do an investigation. We can capture who is on the machine in question, what applications are running at the time, what processes are in memory; we can kill the applications if we want to, and we can clear up the incident before it becomes too serious.’ Going back to our earlier metaphor, SIEM+EnCase can now close the stable door before the hacking latency expires, while the hacker is still in the stable and before too much damage is done.”
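The 2:00am scenario Frank describes boils down to a simple automated decision: correlate the alert, and above a severity threshold, capture endpoint state and contain immediately with no human in the loop. A minimal sketch of that flow, where every function and field name is hypothetical (this is not the EnCase or ArcSight API):

```python
# Sketch of SIEM-triggered automated response: above a severity
# threshold, investigate the endpoint at once and contain if needed.
# All names here are hypothetical, not any vendor's actual API.
SEVERITY_THRESHOLD = 7

def handle_alert(alert, investigate, kill_process):
    """Triage a correlated SIEM alert; escalate serious ones instantly."""
    if alert["severity"] < SEVERITY_THRESHOLD:
        return "logged"
    findings = investigate(alert["host"])   # capture volatile state now
    if findings.get("malicious_pid") is not None:
        kill_process(alert["host"], findings["malicious_pid"])
        return "contained"
    return "investigated"

# Stubbed investigation results for illustration.
result = handle_alert(
    {"severity": 9, "host": "db-01"},
    investigate=lambda host: {"malicious_pid": 4242},
    kill_process=lambda host, pid: None,
)
print(result)  # contained
```

The point of the stable-door metaphor is exactly this branch structure: investigation and containment fire automatically within the hacking latency, whatever the hour.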
Read the full article on Kevin Townsend’s website.

Incident Response: The First Step Is Identifying the Breach

Anthony Di Bello

The objective of malware has moved from weapons of mass disruption to weapons of ultimate stealth for data theft. Today, attackers want to go unnoticed. And they’ll do anything they can to get past traditional defenses. They’ll try to compromise your users through tainted links on social networking sites, or specially crafted email attachments, and even through infected USB drives. They’ll employ any means they can, and if they’re determined, they won’t stop until they succeed.

The software tools they use today include attack exploit code, Trojans, keystroke loggers, network sniffers, bots – whatever works to infiltrate the network and then exfiltrate the desired data.

Consider this quote, from an experienced penetration tester, in the story “Customized, stealthy malware growing pervasive”:

"The advanced attack is getting more pervasive. In our engagements and my conversations with peers we are dealing with more organizations that are grappling with international infiltration. Every network we monitor, every large customer, has some kind of customized malware infiltrating data somewhere. I imagine anybody in the global 2,500 has this problem.”

Consider that quote again for a second: “Every network we monitor, every large customer, has some kind of customized malware infiltrating data somewhere.”
Obviously, the goal of the malware is to slither past anti-malware defenses, and too often the attackers are successful.

This is why the ability to quickly detect and respond to infiltrations is more crucial than ever to an effective IT security program, and it makes digital forensics software central to those efforts: by quickly determining the nature and cause of an incident, forensics software provides the network visibility needed to stop future incidents.

This is where EnCase® Cybersecurity shines. EnCase® Cybersecurity offers enterprises a way to obtain actionable endpoint data related to an event before that data has a chance to decay or disappear from the affected endpoint altogether. EnCase® Cybersecurity can easily be integrated with an alerting solution or SIEM of choice (such as ArcSight ESM) to enable real-time visibility into relevant endpoint data the moment an alert or event is generated. This ensures security teams have instant access to information such as hidden processes running at the time the alert was generated, ports that were open at the time, and more. The ability to see the entire picture of what was occurring on an endpoint – at a specific moment in time – allows for far more accurate incident impact analysis and a way to gain visibility into any given threat. Having a clear view into that moment in time leads to faster incident resolution rather than chasing cold trails.

This kind of instant response capability is simply mandatory today, considering the stealthy nature of modern malware and the significant effort that goes into masking any traces of an attack.
Anthony Di Bello is product marketing manager at Guidance Software.