DESCRIPTION

So here it is — this quarter’s issue of Ethical Hacking Quarterly, brought to you by the BT Ethical Hacking Centre of Excellence (EHCOE).

TRANSCRIPT

Page 1: Ethical Hacking Quarterly Newsletter Issue 4

SECURITY BEST PRACTICES

The Cost of ISO Compliance

For many organizations, an ISO 27001 Security Certification is the gold standard for demonstrating security commitment and is often requested by partners, regulators, or customers. This often leads to the question: ‘How much will it cost to become ISO certified?’ The question is inherently loaded, as each organization is unique; overall costs depend on the size of the organization and the scope of assets, information systems, and businesses included in the certification. Additionally, while the cost of the audit and certification process itself may be relatively flat, the real goal (and cost) is the process by which the organization reaches certification readiness, implementing the necessary security controls. ISO compliance requires a serious commitment to security across a broad variety of areas such as human resources, facilities, public relations, legal, and business resilience.

One of the best ways to estimate the real cost of becoming ISO ready is to perform a self-assessment of security controls. Such an effort helps uncover issues and allows the organization to update policies, correct technical flaws, train users, and so on before weaknesses become non-conformities in an audit. The self-assessment is also the best measure of the resources and time required to become ISO ready.

BT offers the TrustCheck as a cost-effective way to perform a self-assessment of security controls, providing a clear picture of ISO readiness with clear goals and recommendations to reach the ISO-ready milestone. The TrustCheck can also be used as a readiness or health check for other standards such as HIPAA or PCI. For more information or to schedule a TrustCheck, contact [email protected].

Business Continuity vs. Disaster Recovery

A comprehensive business resiliency strategy requires serious preparation, including development of plans, training, and testing exercises. Well-prepared organizations plan ahead for major disasters, yet the relationship between Disaster Recovery (DR), Business Continuity Planning (BCP), and Contingency Planning (CP) is often misunderstood. All three are important components of resiliency, and they may take many forms: an organization may have a discrete plan for each element or may combine them into a single resiliency plan; the plans could be stratified across multiple systems or business units, or may be supplemented by system-specific addenda. The key factors to remember in differentiating DR, BCP, and CP are as follows:

Contingency Planning addresses small-scale disruptive events which do not threaten overall mission objectives. Examples include replacing a keyboard, re-routing a network connection, or replacing a drive and restoring backups.

Business Continuity is mission-centric, focusing on restoring services or providing equivalent services rather than repairing or replacing specific assets. Examples of continuity activities include running services at an alternate site, providing equivalent services using other systems, or falling back to human-driven processes (e.g. by phone, on paper, or manually).

Disaster Recovery covers the most serious and rare condition, wherein services have experienced major disruption requiring re-constitution of major assets or services. DR often follows BCP: services are first restored to an acceptable level, and then DR is engaged to return to normal operations.


Page 2: Ethical Hacking Quarterly Newsletter Issue 4

WHITE HAT SPOTLIGHT

Konstantinos Karagiannis, Practice Tech Lead

Konstantinos is the Practice Technical Lead for Ethical Hacking in BT Advise Assure. He has extensive experience performing application and network assessments and penetration tests, and specializes in financial applications. With over 13 years of experience in the industry, he has spoken at dozens of technical conferences and customer events around the world. Konstantinos began as a Physics major before finding his way to the world of hacking, and enjoys probing how everything works, from programs to particles.

1) Any thoughts on security concerns or potential security gains in moving enterprise applications to cloud providers now that the technology is maturing?

If you’re storing sensitive data in the cloud, encryption is key. Such a simple solution goes away if you’re working with cloud-hosted applications, though; you may not have full control of how the data is passed around or handled. I think cloud providers will need to step up their security efforts to lure corporations as customers. Attackers know that it’s a ‘hack one, get many’ scenario with the cloud. Why work so hard to get just one company’s data?
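As a concrete illustration of the "encryption is key" point, here is a minimal sketch, assuming Python and the third-party cryptography package, of encrypting a record client-side before it is handed to any cloud storage API. The key handling, record contents, and upload step are placeholders rather than any specific provider's workflow.

# Minimal sketch: encrypt sensitive data client-side before handing it to any
# cloud storage API, so the provider only ever sees ciphertext.
# Assumes the third-party "cryptography" package (pip install cryptography);
# the record contents and the upload step are illustrative placeholders.
from cryptography.fernet import Fernet

def encrypt_for_cloud(plaintext: bytes, key: bytes) -> bytes:
    """Return an authenticated ciphertext token safe to store off-premises."""
    return Fernet(key).encrypt(plaintext)

def decrypt_from_cloud(token: bytes, key: bytes) -> bytes:
    """Recover the original plaintext after downloading the token."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()        # keep this key on-premises, never in the cloud
    record = b"customer card data"     # placeholder sensitive record
    token = encrypt_for_cloud(record, key)
    # ...upload `token` to the cloud provider of your choice...
    assert decrypt_from_cloud(token, key) == record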

All companies, cloud providers and standard solo corporations included, need to take a look at why some of these sophisticated attacks are working. Are they having ethical hacks performed? Are they using layered security, starting with basic defensive techniques that have amazingly fallen out of popularity? The most notable example of something companies should still be using is honeypots. If you don’t have a few unpatched servers that are internally known to be off limits, and you’re not monitoring these for obviously illicit use and learning from the attack patterns, you’re missing out on a huge piece of defensive information.
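To make the honeypot idea concrete, the following is a minimal sketch, not a production design: a plain Python TCP listener on a port that nothing legitimate should use, logging every connection attempt. The port number, banner, and log file name are illustrative assumptions.

# Minimal honeypot sketch: listen on a port nothing legitimate should touch and
# log every connection attempt for later analysis. Port, bind address, and log
# file are illustrative assumptions; a production honeypot would add protocol
# emulation and strict isolation from real assets.
import logging
import socketserver

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class HoneypotHandler(socketserver.BaseRequestHandler):
    def handle(self):
        src_ip, src_port = self.client_address
        data = self.request.recv(1024)                    # capture the first probe bytes
        logging.info("connection from %s:%s first-bytes=%r", src_ip, src_port, data[:64])
        self.request.sendall(b"220 service ready\r\n")    # generic banner to keep the attacker talking

if __name__ == "__main__":
    # 2323 is a placeholder "off limits" port; monitor the log and alert on any hit.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 2323), HoneypotHandler) as server:
        server.serve_forever()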

2) What are your thoughts on the recent spike in highly resourced DDoS Attacks? Do you think there is anything that can be done given the Net’s current architecture?

You can certainly use tools like slowhttptest to test for application-layer issues that may bring your site down. But when it comes to defending against a 50,000-node botnet’s pure network traffic, that’s not going to help. Botnets can come from multiple countries simultaneously, so they can’t be defended against by just blocking IP ranges. For now, dynamic monitoring seems to be the key. Companies need to have a way to prioritize legitimate traffic quickly. It’s not easy, but several vendors including BT are now providing that service.
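As one small illustration of what "prioritizing legitimate traffic" can build on, here is a hedged sketch of a per-source token bucket in Python. The rate and burst thresholds are arbitrary assumptions, and real mitigation of a 50,000-node botnet happens upstream at the network layer, not inside a single application process.

# Illustrative sketch only: a per-source token bucket, the kind of primitive a
# traffic-prioritization layer might build on. RATE and BURST are arbitrary
# assumed thresholds.
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second per source (assumed threshold)
BURST = 20.0  # maximum burst size per source (assumed threshold)

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(source_ip: str) -> bool:
    """Return True if this request fits the source's budget, False to shed it."""
    bucket = _buckets[source_ip]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False

# Example: shed or deprioritize requests from sources exceeding their budget.
for ip in ["198.51.100.7"] * 30:
    print(ip, "served" if allow(ip) else "shed")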

We should do more for home-user victims. When was the last time someone got an email from their ISP saying that their machine is obviously infected and being used in attacks? This really shouldn’t be that hard to do. Service providers notify you now when your kids are downloading illegally; detecting botnet attacks should be comparatively easier. We may need a modification in how we consider sources of traffic to be legitimate if we want better accountability in the future.

3) Any thoughts on threats that are being way overhyped in the current environment?

Unfortunately there’s still a major under-hype situation present in most companies when it comes to Advanced Persistent Threats and ‘hacktivist’ type attacks. While going on presales calls, I’m amazed at the number of companies that don’t have any of the following in place: secure development, patch management, or any kind of security testing. The majority of the Anonymous and LulzSec attacks that yielded sensitive data were made possible by vulnerabilities that have been around for over a decade. SQL injection, for instance, is inexcusable in production apps these days, but it’s still the reason you hear headlines about thousands of passwords or credit card numbers being grabbed.
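Since SQL injection keeps coming up, a minimal sketch of the fix follows, using Python's built-in sqlite3 module purely for illustration. The table and column names are assumptions, but the pattern of binding user input as parameters applies to any database driver.

# Minimal sketch of the fix for SQL injection: bind user input as parameters
# instead of concatenating it into the statement. sqlite3 and the table/column
# names are illustrative assumptions; the same pattern applies to any driver.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable pattern (do NOT do this): the payload rewrites the query logic.
# query = "SELECT * FROM users WHERE username = '" + user_input + "'"

# Safe pattern: the driver treats the payload as data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE username = ?", (user_input,)).fetchall()
print(rows)   # [] -- the injection string matches no real username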


Page 3: Ethical Hacking Quarterly Newsletter Issue 4

INDUSTRY METRICS

Ethical hacking tests conducted across a wide variety of leaders in the financial industry yielded interesting statistics regarding the frequency and type of vulnerabilities commonly faced.

On average, applications tested across the industry this quarter were found to have one or two high risks per application, three medium, and four low risks. This represents a large rise over last quarter, largely because the majority of this quarter’s activity was for new applications which had never been assessed; through diligent improvement and repeated testing, many of the applications tested last quarter had reached an acceptable security level for continued operations.

Of those applications tested this quarter for a second time, the remediation activities, including code changes recommended by BT penetration testers, effectively closed an average of 25% of findings on the first pass; systems which completed additional cycles closed 80% of all vulnerabilities.

The most prevalent risk in the applications tested this quarter remained cross-site scripting, though only by a slim margin due to a rise in cross-site request forgery and horizontal privilege escalation attacks.
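For readers mapping these findings to fixes, the sketch below illustrates, under assumed function names rather than any particular framework's API, the two standard counters to the leading findings: HTML-encoding untrusted output against cross-site scripting, and constant-time token comparison against cross-site request forgery.

# Illustrative sketch of two defenses these findings point to: HTML-encoding
# untrusted output (cross-site scripting) and comparing CSRF tokens in constant
# time (cross-site request forgery). Function names and the token source are
# assumptions, not a specific framework's API.
import hmac
import html
import secrets

def render_comment(untrusted: str) -> str:
    """Encode user-supplied text before placing it in an HTML page body."""
    return "<p>" + html.escape(untrusted, quote=True) + "</p>"

def csrf_token_valid(session_token: str, submitted_token: str) -> bool:
    """Reject state-changing requests whose token does not match the session's."""
    return hmac.compare_digest(session_token, submitted_token)

print(render_comment('<script>alert(1)</script>'))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>

token = secrets.token_urlsafe(32)        # issued with the form, stored in the session
print(csrf_token_valid(token, token))    # True only when the round-tripped token matches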

Since so many of the applications tested this quarter had never before been tested, a look at the vulnerabilities found on these systems gives insight into the types of design and coding errors being made today. Interestingly, a large number of simple errors are still prevalent, such as lack of authentication and encryption, or use of weak ciphers.

Coding errors appear to be on the decline, which could indicate that software development teams are integrating recommendations from BT Ethical Hacking reports into best practices, reducing the number of code flaws on the first pass.

Infrastructure flaws, such as outdated software or misconfigured web services, however, appear to account for a growing share of the vulnerabilities present in these new applications. Unpatched infrastructure software such as database engines accounted for the largest single type of vulnerability found this quarter.

Continuing a trend from the last few quarters, SQL injection vulnerabilities remain on the decline as proper attention is paid to this popular attack vector, while very little improvement has been observed in information disclosure flaws such as source code or error message disclosure. Such disclosures can be difficult to fix because they are not purely technical code errors but rather stem from a misunderstanding, often on the part of software developers, of information handling or confidentiality rules as they apply to live data in a system. BT offers training specifically oriented toward software engineers on information handling within application code, which has been shown to reduce the frequency and severity of application information disclosure issues.
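The general information-handling pattern behind fixing error-message disclosure can be sketched in a few lines. The handler shape below is an assumption for illustration, not a specific web framework's API: details are logged privately while the user receives only a generic message and an opaque reference.

# Sketch of the pattern behind fixing error-message disclosure: log the full
# details privately, return only a generic message and an opaque reference.
# The handler shape is an assumption, not a particular web framework's API.
import logging
import uuid

logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

def handle_request(do_work) -> dict:
    try:
        return {"status": 200, "body": do_work()}
    except Exception:
        incident_id = uuid.uuid4().hex                  # opaque reference for support staff
        logging.exception("incident %s", incident_id)   # stack trace stays server-side
        return {"status": 500,
                "body": f"An internal error occurred (reference {incident_id})."}

# The caller never sees the stack trace, SQL text, or file paths.
print(handle_request(lambda: 1 / 0))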


Page 4: Ethical Hacking Quarterly Newsletter Issue 4

SECURITY NEWS AND NOTABLE ATTACKS

White House on Corporate Cyber Security

In his recent State of the Union speech, President Obama highlighted the increased risk to national infrastructure from cyber-attacks; to make sure the message was well received, the U.S. leader recently summoned chief executives from major banks to the White House to discuss the ongoing threat, specifically Denial of Service attacks. The renewed emphasis on cyber security comes after a series of Denial of Service attacks flooded major banks’ infrastructure with an unprecedented volume of phony requests, rendering them inaccessible. Congress has repeatedly tried without success to pass legislation allowing more Government oversight of cyber defense for privately owned institutions, on the grounds that the infrastructure provided by banks, utilities, and similar organizations is a national security concern.

Network Spat Ignites Unprecedented Denial of Service

A wide disruption of the Internet occurred in late March when the anti-spam organization Spamhaus criticized Dutch network provider CyberBunker for failing to control spam activities by its hosted clients. CyberBunker is suspected of retaliating, allegedly in cooperation with criminal gangs, with a Distributed Denial of Service (DDoS) attack of “previously unknown magnitude”, overwhelming Spamhaus services and affecting many others worldwide by sending in excess of 300 billion bits per second of unwanted traffic. At least five national cyber response teams are currently investigating the incident. The attack highlights a weakness in the basic architecture of the Internet: the DDoS attack model is difficult to defend against and can be executed to great effect by relatively unsophisticated attackers, so long as they have sufficient resources.

U.S. Cyber Command on the Offense

NSA head General Keith Alexander reported to Congress recently on the progress made in standing up the U.S. Cyber Command team, revealing a major shift in U.S. cyber defense doctrine. Inspired by the old adage that the best defense is a good offense, the team’s primary purpose is retaliation in the event of a major cyber-attack against the United States. General Alexander told lawmakers “this defend-the-nation team, is not a defensive team ... the Defense Department would use to defend the nation if it were attacked in cyberspace.” The shift was prompted by new intelligence estimates which show sophisticated cyber-attacks from unfriendly states such as Iran now pose a more credible threat than traditional terrorist attacks from groups such as Al Qaeda. Critics are concerned that the rules of war and proportional response translate poorly to cyberspace, and that the asymmetrical nature of cyber war is poorly understood by traditional strategists, which could lead to unknown diplomatic repercussions.

Flaws Found in Mainstream Encryption Algorithm

Legendary cryptographer Ron Rivest may not have expected that his RC4 encryption algorithm would still be prominently used 25 years later, and now perhaps it shouldn’t be. Researchers have discovered new faults in SSL and TLS, the top security protocols used on the web, when they are paired with RC4 encryption. The new form of attack can completely decipher messages encrypted with the algorithm by re-encrypting the data billions of times and watching for imperfections caused by limitations in the way computers generate random numbers. While the amount of time needed to decipher actual information is reassuring, the approximately 32 hours required now could be significantly reduced if the technique is refined or as hardware becomes more powerful.
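Until RC4 is retired from a deployment, one practical mitigation is to refuse RC4 cipher suites outright. The sketch below is a hedged example in Python using an OpenSSL-style cipher string; exact behaviour depends on the local OpenSSL build (many modern builds already exclude RC4), and the host name is a placeholder.

# Illustrative sketch: build a TLS client context that refuses RC4 cipher
# suites via an OpenSSL-style cipher string. Behaviour depends on the local
# OpenSSL build; the host name is a placeholder.
import socket
import ssl

context = ssl.create_default_context()
context.set_ciphers("HIGH:!aNULL:!eNULL:!RC4")   # exclude RC4 (and null/anonymous suites)

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.cipher())       # negotiated protocol and cipher suite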

Cyber Attacks Flare on Korean Peninsula

A serious cyber-attack against South Korean TV networks, banks, and infrastructure was carried out last month, raising speculation about involvement by the North. The attack disabled bank machines and credit systems, forcing citizens of Seoul to use cash only, while simultaneously knocking out the computer systems of news broadcasting stations. While China and North Korea were initially suspected in the attack, forensic investigation eventually found that the attacks had originated from the United States and Europe and were likely not the actions of nation states.