DigiCert Announces Certificate Transparency Support

LEHI, UT–(Marketwired – September 24, 2013) – DigiCert, Inc., a leading global authentication and encryption provider, announced today that it is the first Certificate Authority (CA) to implement Certificate Transparency (CT). DigiCert has been working with Google to pilot CT for more than a year and will begin adding SSL Certificates to a public CT log by the end of October.

DigiCert welcomes CT as an important step toward enhancing online trust. For several months, DigiCert has been working with Google engineers to test Google’s code, provide feedback on proposed CT implementations, and build CT support into the company’s systems. This initiative aligns with DigiCert’s focus to improve online trust — including tight internal security controls, development and adoption of the CA/Browser Forum Baseline Requirements and Network Security Guidelines, and participation in various industry bodies that are focused on security and trust standards.

http://finance.yahoo.com/news/digicert-announces-certificate-transparency-support-180554567.html

Google’s Certificate Transparency project fixes several structural flaws in the SSL certificate system, which is the main cryptographic system that underlies all HTTPS connections. These flaws weaken the reliability and effectiveness of encrypted Internet connections and can compromise critical TLS/SSL mechanisms, including domain validation, end-to-end encryption, and the chains of trust set up by certificate authorities. If left unchecked, these flaws can facilitate a wide range of security attacks, such as website spoofing, server impersonation, and man-in-the-middle attacks.

Certificate Transparency helps eliminate these flaws by providing an open framework for monitoring and auditing SSL certificates in nearly real time. Specifically, Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates.

Because it is an open and public framework, anyone can build or access the basic components that drive Certificate Transparency. This is particularly beneficial to Internet security stakeholders, such as domain owners, certificate authorities, and browser manufacturers, who have a vested interest in maintaining the health and integrity of the SSL certificate system.
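Under the hood, a CT log is an append-only Merkle hash tree (RFC 6962): each certificate becomes a leaf, and anyone can verify that a given certificate is included in the log by checking a short inclusion proof against the published tree root. The sketch below illustrates the idea in Python; it uses the RFC 6962 leaf/node hash prefixes but a simplified tree-building rule, so it is an illustration of the mechanism rather than an interoperable implementation.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962 prefixes leaves with 0x00 and interior nodes with 0x01
    # so a leaf can never masquerade as an internal node.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def _next_level(level):
    nxt = [node_hash(level[i], level[i + 1])
           for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])        # odd node is promoted unchanged
    return nxt

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    proof, level = [], list(leaves)
    while len(level) > 1:
        sibling = index ^ 1
        if sibling < len(level):
            proof.append((level[sibling], sibling < index))
        level = _next_level(level)
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
    h = leaf
    for sibling, sibling_is_left in proof:
        h = node_hash(sibling, h) if sibling_is_left else node_hash(h, sibling)
    return h == root

certs = [f"certificate-{i}".encode() for i in range(5)]
leaves = [leaf_hash(c) for c in certs]
root = merkle_root(leaves)
assert all(verify_inclusion(leaves[i], inclusion_proof(leaves, i), root)
           for i in range(len(leaves)))
```

A monitor that holds the signed root can thus check any certificate's presence in the log without trusting the log operator.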

Familiarize Yourself With Software Threat Modeling

Threat modeling has two distinct, but related, meanings in computer security. The first is a description of the security issues the designer cares about. This is the sense of the question, “What is the threat model for DNSSec?” In the second sense, a threat model is a description of a set of security aspects; that is, when looking at a piece of software (or any computer system), one can define a threat model by defining a set of possible attacks to consider. It is often useful to define many separate threat models for one computer system. Each model defines a narrow set of possible attacks to focus on. A threat model can help to assess the probability, potential harm, and priority of attacks, and thus help to minimize or eradicate the threats. More recently, threat modeling has become an integral part of Microsoft’s SDL (Security Development Lifecycle) process.[1] The two senses derive from common military uses in the United States and the United Kingdom.

Threat modeling is based on the notion that any system or organization has assets of value worth protecting; that these assets have certain vulnerabilities; that internal or external threats exploit these vulnerabilities in order to cause damage to the assets; and that appropriate security countermeasures exist that mitigate the threats.

There are at least three general approaches to threat modeling:

Attacker-centric
Attacker-centric threat modeling starts with an attacker, and evaluates their goals and how they might achieve them. Attackers’ motivations are often considered, for example, “The NSA wants to read this email,” or “Jon wants to copy this DVD and share it with his friends.” This approach usually starts from either entry points or assets.

Software-centric
Software-centric threat modeling (also called ‘system-centric,’ ‘design-centric,’ or ‘architecture-centric’) starts from the design of the system, and attempts to step through a model of the system, looking for types of attacks against each element of the model. This approach is used in threat modeling in Microsoft’s Security Development Lifecycle.

Asset-centric
Asset-centric threat modeling involves starting from assets entrusted to a system, such as a collection of sensitive personal information.

More at http://en.wikipedia.org/wiki/Threat_model

Introduction

Threat modeling is an approach for analyzing the security of an application. It is a structured approach that enables you to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it does complement the security code review process. The inclusion of threat modeling in the SDLC can help to ensure that applications are being developed with security built in from the very beginning. This, combined with the documentation produced as part of the threat modeling process, can give the reviewer a greater understanding of the system, allowing the reviewer to see where the entry points to the application are and the threats associated with each entry point. The concept of threat modeling is not new, but there has been a clear mindset change in recent years. Modern threat modeling looks at a system from a potential attacker’s perspective, as opposed to a defender’s viewpoint. Microsoft has been a strong advocate of the process over the past number of years. It has made threat modeling a core component of its SDLC, which it claims to be one of the reasons for the increased security of its products in recent years.

When source code analysis is performed outside the SDLC, such as on existing applications, the results of threat modeling help reduce the complexity of the source code analysis by promoting a depth-first approach over a breadth-first approach. Instead of reviewing all source code with equal focus, you can prioritize the security code review of components that threat modeling has ranked as carrying high-risk threats.

The threat modeling process can be decomposed into three high-level steps:

Step 1: Decompose the Application. The first step in the threat modeling process is concerned with gaining an understanding of the application and how it interacts with external entities. This involves creating use-cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets i.e. items/areas that the attacker would be interested in, and identifying trust levels which represent the access rights that the application will grant to external entities. This information is documented in the Threat Model document and it is also used to produce data flow diagrams (DFDs) for the application. The DFDs show the different paths through the system, highlighting the privilege boundaries.
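As a rough illustration of the Step 1 artifacts, the sketch below models entry points, trust levels, and assets (all names are invented for illustration) and asks which assets an unauthenticated attacker can first touch; this is the kind of question a DFD with privilege boundaries answers visually.

```python
from dataclasses import dataclass

@dataclass
class TrustLevel:
    name: str
    rank: int   # higher rank = more privileged

@dataclass
class EntryPoint:
    name: str
    trust_required: TrustLevel

@dataclass
class Asset:
    name: str
    entry_points: list   # entry points through which the asset is reachable

# Hypothetical trust levels, entry points, and assets for a web application.
ANON  = TrustLevel("Anonymous web user", 0)
USER  = TrustLevel("Authenticated user", 1)
ADMIN = TrustLevel("Site administrator", 2)

login  = EntryPoint("Login page", ANON)
search = EntryPoint("Search form", ANON)
panel  = EntryPoint("Admin panel", ADMIN)

assets = [
    Asset("User credentials", [login]),
    Asset("Customer database", [search, panel]),
]

def low_trust_exposure(assets, threshold=1):
    """Entry points reachable below the given trust rank: the surface
    where an external attacker can first interact with each asset."""
    return {a.name: [e.name for e in a.entry_points
                     if e.trust_required.rank < threshold]
            for a in assets}
```

Raising the threshold shows the surface visible to progressively more privileged (e.g. insider) threat agents.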

Step 2: Determine and rank threats. Critical to the identification of threats is using a threat categorization methodology. A threat categorization such as STRIDE can be used, or the Application Security Frame (ASF), which defines threat categories such as Auditing & Logging, Authentication, Authorization, Configuration Management, Data Protection in Storage and Transit, Data Validation, and Exception Management. The goal of the threat categorization is to help identify threats both from the attacker’s perspective (STRIDE) and from the defensive perspective (ASF). DFDs produced in step 1 help to identify the potential threat targets from the attacker’s perspective, such as data sources, processes, data flows, and interactions with users. These threats can be used as the roots for threat trees; there is one tree for each threat goal. From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of the security controls that should counter them. Common threat lists with examples can help in the identification of such threats. Use and abuse cases can illustrate how existing protective measures could be bypassed, or where such protection is lacking. The security risk for each threat can be determined using a value-based risk model such as DREAD or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact).
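As an example of the ranking in Step 2, here is a minimal sketch of DREAD scoring (Damage, Reproducibility, Exploitability, Affected users, Discoverability), with each factor rated 1 to 10 and averaged. The threats and ratings below are hypothetical examples, not taken from the text.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    damage: int           # D: how bad would an attack be?
    reproducibility: int  # R: how easy is it to reproduce?
    exploitability: int   # E: how little work does it take to launch?
    affected_users: int   # A: how many users are impacted?
    discoverability: int  # D: how easy is the flaw to find?

    def dread_score(self) -> float:
        # Classic DREAD: average of the five 1-10 ratings.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Hypothetical threats identified in Step 2, ranked for Step 3.
threats = [
    Threat("SQL injection in login form", 8, 9, 7, 9, 8),
    Threat("Verbose error pages leak stack traces", 3, 10, 9, 2, 10),
    Threat("Session token predictable", 9, 5, 4, 9, 3),
]

for t in sorted(threats, key=lambda t: t.dread_score(), reverse=True):
    print(f"{t.dread_score():.1f}  {t.name}")
```

The sorted output is exactly the prioritized list that Step 3 consumes when assigning countermeasures.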

Step 3: Determine countermeasures and mitigation. A lack of protection against a threat might indicate a vulnerability whose risk exposure could be mitigated by implementing a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort them from the highest to the lowest risk and to prioritize the mitigation effort, applying the identified countermeasures to the highest-ranked threats first. The risk mitigation strategy might involve evaluating these threats in terms of the business impact they pose and reducing the risk. Other options include accepting the risk (assuming the business impact is acceptable because of compensating controls), informing the user of the threat, removing the risk posed by the threat completely, or, least preferable, doing nothing.

More at https://www.owasp.org/index.php/Application_Threat_Modeling

Computer insecurity is the concept that a computer system is always vulnerable to attack, and that this fact creates a constant battle between those looking to improve security and those looking to circumvent security.

“Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal.”

A state of computer “security” is the conceptual ideal, attained by the use of three processes:

1. Prevention
2. Detection
3. Response

User account access controls and cryptography can protect systems files and data, respectively.
Firewalls are by far the most common prevention systems from a network security perspective as they can (if properly configured) shield access to internal network services, and block certain kinds of attacks through packet filtering.
Intrusion Detection Systems (IDSs) are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
“Response” is necessarily defined by the assessed security requirements of an individual system and may cover the range from simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, a complete destruction of the compromised system is favored, as it may happen that not all the compromised resources are detected.
Today, computer security comprises mainly “preventive” measures, like firewalls or an Exit Procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called physical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. However, relatively few organisations maintain computer systems with effective detection systems, and fewer still have organised response mechanisms in place. As a result, as Reuters points out: “Companies for the first time report they are losing more through electronic theft of data than physical stealing of assets”.[5] The primary obstacle to effective eradication of cyber crime could be traced to excessive reliance on firewalls and other automated “detection” systems. Yet it is basic evidence gathering by using Packet Capture Appliances that puts criminals behind bars.
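The packet-filtering behaviour described above can be sketched as a first-match rule list with a deny-by-default fallback; the ruleset below is a hypothetical example (SSH only from an internal network, HTTPS from anywhere).

```python
from dataclasses import dataclass
from ipaddress import IPv4Network, ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: IPv4Network         # source network the rule matches
    dst_port: Optional[int]  # None matches any destination port

def filter_packet(rules, src_ip: str, dst_port: int,
                  default: str = "deny") -> str:
    """First-match filtering: return the action of the first rule that
    matches, or the deny-by-default fallback if none does."""
    addr = ip_address(src_ip)
    for rule in rules:
        if addr in rule.src and rule.dst_port in (None, dst_port):
            return rule.action
    return default

rules = [
    Rule("allow", ip_network("10.0.0.0/8"), 22),    # SSH from internal net
    Rule("allow", ip_network("0.0.0.0/0"), 443),    # HTTPS from anywhere
]
```

Deny-by-default matters: anything the rule author forgot to anticipate is blocked rather than silently admitted.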

Computer code is regarded by some as a form of mathematics. It is theoretically possible to prove the correctness of certain classes of computer programs, though the feasibility of actually achieving this in large-scale practical systems is regarded as small by some with practical experience in the industry — see Bruce Schneier et al.

It’s also possible to protect messages in transit (i.e., communications) by means of cryptography. One method of encryption — the one-time pad — is unbreakable when correctly used. This method was used by the Soviet Union during the Cold War, though flaws in their implementation allowed some cryptanalysis (See Venona Project). The method uses a matching pair of key-codes, securely distributed, which are used once-and-only-once to encode and decode a single message. For transmitted computer encryption this method is difficult to use properly (securely), and highly inconvenient as well. Other methods of encryption, while breakable in theory, are often virtually impossible to directly break by any means publicly known today. Breaking them requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or some other extra cryptanalytic information.
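Mechanically, the one-time pad is just XOR with a truly random, message-length, never-reused key. The sketch below also shows why reuse is fatal: XORing two ciphertexts produced under the same pad cancels the key and leaves the XOR of the two plaintexts, which is essentially the implementation flaw the Venona cryptanalysts exploited.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR with the pad.
    assert len(key) >= len(data), "pad must be at least message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # truly random, message-length key
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message   # round-trips with the same pad

# Reusing the pad leaks information: c1 XOR c2 equals p1 XOR p2,
# so the key drops out entirely.
message2 = b"RETREAT AT TEN"
ciphertext2 = otp_xor(message2, pad)
assert (bytes(a ^ b for a, b in zip(ciphertext, ciphertext2))
        == bytes(a ^ b for a, b in zip(message, message2)))
```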

Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Even in a highly disciplined environment, such as in military organizations, social engineering attacks can still be difficult to foresee and prevent.

In practice, only a small fraction of computer program code is mathematically proven, or even goes through comprehensive information technology audits or inexpensive but extremely valuable computer security audits, so it’s usually possible for a determined hacker to read, copy, alter or destroy data in well secured computers, albeit at the cost of great time and resources. Few attackers would audit applications for vulnerabilities just to attack a single specific system. It is possible to reduce an attacker’s chances by keeping systems up to date, using a security scanner and/or hiring competent people responsible for security. The effects of data loss/damage can be reduced by careful backing up and insurance.

 More at http://en.wikipedia.org/wiki/Computer_insecurity

Attack trees are conceptual diagrams showing how an asset, or target, might be attacked. Attack trees have been used in a variety of applications. In the field of information technology, they have been used to describe threats on computer systems and possible attacks to realize those threats. However, their use is not restricted to the analysis of conventional information systems. They are widely used in the fields of defense and aerospace for the analysis of threats against tamper resistant electronics systems (e.g., avionics on military aircraft).[1] Attack trees are increasingly being applied to computer control systems (especially relating to the electric power grid).[2] Attack trees have also been used to understand threats to physical systems.

Some of the earliest descriptions of attack trees are found in papers and articles by Bruce Schneier,[3] CTO of Counterpane Internet Security. Schneier was clearly involved in the development of attack tree concepts and was instrumental in publicizing them. However, the attributions in some of the early publicly available papers on attack trees[4] also suggest the involvement of the National Security Agency in the initial development.

Attack trees are very similar, if not identical, to threat trees. Threat trees were discussed in 1994 by Edward Amoroso.

Attack trees are multi-leveled diagrams consisting of one root, leaves, and children. From the bottom up, child nodes are conditions which must be satisfied to make the direct parent node true; when the root is satisfied, the attack is complete. Each node may be satisfied only by its direct child nodes.

A node may be the child of another node; in such a case, it becomes logical that multiple steps must be taken to carry out an attack. For example, consider classroom computers which are secured to the desks. To steal one, the securing cable must be cut or the lock unlocked. The lock may be unlocked by picking or by obtaining the key. The key may be obtained by threatening a keyholder, bribing a keyholder, or taking it from where it is stored (e.g. under a mousemat). Thus a four-level attack tree can be drawn, of which one path is (Bribe Keyholder, Obtain Key, Unlock Lock, Steal Computer).
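The classroom-computer example can be encoded as a small AND/OR tree and its attack paths enumerated mechanically. The sketch below adds a hypothetical “Carry Computer Out” step under the root purely to illustrate an AND node; everything else follows the example above.

```python
from itertools import product

# A node is (kind, name, children); small helpers keep the tree readable.
def OR(name, *children):  return ("OR", name, children)
def AND(name, *children): return ("AND", name, children)
def leaf(name):           return ("LEAF", name, ())

tree = AND("Steal Computer",
    OR("Defeat Restraint",
        leaf("Cut Cable"),
        OR("Unlock Lock",
            leaf("Pick Lock"),
            OR("Obtain Key",
                leaf("Threaten Keyholder"),
                leaf("Bribe Keyholder"),
                leaf("Take Key From Mousemat")))),
    leaf("Carry Computer Out"))   # hypothetical extra AND step

def attack_paths(node):
    """Enumerate every set of leaf actions that satisfies the root."""
    kind, name, children = node
    if kind == "LEAF":
        return [[name]]
    child_paths = [attack_paths(c) for c in children]
    if kind == "OR":                 # any single child suffices
        return [p for paths in child_paths for p in paths]
    # AND: one path from every child, concatenated
    return [[step for path in combo for step in path]
            for combo in product(*child_paths)]

paths = attack_paths(tree)
```

Enumerating paths like this is what makes attack trees useful for coverage checks: every path is a distinct way the defender must be prepared to lose.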

Note also that an attack described in a node may require one or more of the attacks described in its child nodes to be satisfied.

 More at http://en.wikipedia.org/wiki/Attack_tree

Factor analysis of information risk (FAIR for short) is a taxonomy of the factors that contribute to risk and how they affect each other. It is primarily concerned with establishing accurate probabilities for the frequency and magnitude of loss events. It is not, per se, a “cookbook” that describes how to perform an enterprise (or individual) risk assessment.

The unanswered challenge, however, is that without a solid understanding of what risk is, what the factors are that drive risk, and without a standard nomenclature, we can’t be consistent or truly effective in using any method. FAIR seeks to provide this foundation, as well as a framework for performing risk analyses. Much of the FAIR framework can be used to strengthen, rather than replace, existing risk analysis processes.

As a standards body, The Open Group aims to evangelize the use of FAIR within the context of these risk assessment or management frameworks. In doing so, The Open Group becomes not just a group offering yet another risk assessment framework, but a standards body which solves the difficult problem of developing consistent, defensible statements concerning risk.

FAIR underlines that risk is an uncertain event and that one should focus not on what is possible, but on how probable a given event is. This probabilistic approach is applied to every factor that is analysed. The risk is the probability of a loss tied to an asset.
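This probabilistic framing can be sketched with a toy Monte Carlo simulation: sample a loss event frequency and a loss magnitude per event, then look at the distribution of annual loss rather than a single “possible” worst case. The ranges below are invented for illustration and are not part of FAIR itself.

```python
import random

def simulate_annual_loss(freq_min, freq_max, mag_min, mag_max,
                         years=100_000, seed=42):
    """Toy FAIR-style simulation: per simulated year, draw how many loss
    events occur and how much each one costs, then summarize the
    resulting distribution of annual loss."""
    rng = random.Random(seed)
    totals = []
    for _ in range(years):
        events = rng.randint(freq_min, freq_max)
        totals.append(sum(rng.uniform(mag_min, mag_max)
                          for _ in range(events)))
    totals.sort()
    return {
        "mean": sum(totals) / years,
        "p90": totals[int(0.90 * years)],   # a 1-in-10 "bad year"
    }

# Invented scenario: 0-4 loss events per year, each costing $10k-$250k.
estimate = simulate_annual_loss(0, 4, 10_000, 250_000)
```

Reporting a mean alongside a tail percentile is what turns “this could happen” into a defensible statement about how probable and how costly it is.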

Asset
An asset’s loss potential stems from the value it represents and/or the liability it introduces to an organization.[3] For example, customer information provides value through its role in generating revenue for a commercial organization. That same information also can introduce liability to the organization if a legal duty exists to protect it, or if customers have an expectation that the information about them will be appropriately protected.

FAIR defines six kinds of loss:[3]

1. Productivity – a reduction in the organization’s ability to effectively produce goods or services and so generate value
2. Response – the resources spent reacting to an adverse event
3. Replacement – the expense of substituting/repairing an affected asset
4. Fines and judgements (F/J) – the cost of the overall legal proceedings deriving from the adverse event
5. Competitive advantage (CA) – missed opportunities due to the security incident
6. Reputation – missed opportunities or sales due to the diminished corporate image following the event

FAIR defines value/liability as:[3]

1. Criticality – the impact on the organization’s productivity
2. Cost – the bare cost of the asset; the cost of replacing a compromised asset
3. Sensitivity – the cost associated with the disclosure of the information, further divided into:

   1. Embarrassment – the disclosure reveals inappropriate behaviour by the company’s management
   2. Competitive advantage – the loss of competitive advantage tied to the disclosure
   3. Legal/regulatory – the cost associated with possible law violations
   4. General – other losses tied to the sensitivity of the data

Threat
Threat agents can be grouped by Threat Communities, subsets of the overall threat agent population that share key characteristics. It is important to define threat communities precisely in order to evaluate impact (loss magnitude) effectively.

Threat agents can act differently on an asset:[3]

Access – read the data without proper authorization
Misuse – use the asset without authorization and/or differently from the intended usage
Disclose – the agent lets other people access the data
Modify – change the asset (data or configuration modification)
Deny access – the threat agent does not let the legitimate intended users access the asset

These actions can affect various assets differently: the impact varies with the characteristics of the asset and its usage. Some assets have high criticality and low sensitivity: denying access to them has a much higher impact than disclosure. Vice versa, highly sensitive data can have low productivity impact while unavailable, but huge embarrassment and legal impact if disclosed: the availability of former patients’ health data does not affect a healthcare organization’s productivity, but its disclosure can cost millions of dollars.[4] A single event can involve different assets: a laptop theft has an impact on the availability of the laptop itself but can also lead to the disclosure of the information stored on it.

The point is that it’s the combination of the asset and type of action against the asset that determines the fundamental nature and degree of loss.

Important aspects to be considered are the agent motive and the affected asset characteristics.

 More at http://en.wikipedia.org/wiki/Factor_Analysis_of_Information_Risk

Alert: NSA Buys Zero-Day Exploits from French security firm Vupen

A contract that’s come to light with the recent release of documents from a successful Freedom of Information Act request shows that the NSA bought software exploits from a French hacking firm called Vupen, headquartered in Montpellier.

The NSA contracted with Vupen for a year-long “subscription” to zero-day exploits: previously unknown vulnerabilities in software and hardware. Knowledge of zero-day exploits allows for both defense against their use and offensive use for the purposes of surveillance and data theft.

In 2011, according to leaked documents, the U.S. launched 231 offensive cyber-operations.  Other leaks, reported last week, indicated that the country spends $4.3 billion on such operations.

Vupen CEO Chaouki Bekrar told Slate’s Ryan Gallagher that his company’s services include “highly technical documentation and private exploits written by Vupen’s team of researchers for critical vulnerabilities affecting major software and operating systems.”

The amount paid for this subscription was redacted on the document, and Bekrar did not divulge it, but the company pulled in $1.2 million in 2011—86 percent from non-French clients. 

The French investigative hacking outlet Reflets.info has had its eye on Vupen for some time, the publication’s Fabrice Epelboin told the Daily Dot. Hacker and Reflets journalist Kitetoa wrote about the group yesterday.

Among his discoveries: Vupen has close ties with the French Army and is deeply involved in the French Army cyber-command’s offensive online initiatives.

Read more at http://www.dailydot.com/politics/nsa-malware-vupen/

One of the latest reports claims that the NSA is able to access data from Apple iPhones, BlackBerry devices, and phones that use Google’s Android operating system. In addition, following document leaks which suggested the NSA was accessing email records, a number of companies offering secure email shut down, and in their place, encrypted mobile phone communication applications have risen.

A fresh report, brought on by a Freedom of Information (FOI) request by government transparency site MuckRock, shows that the NSA purchased data on zero-day vulnerabilities and the software to use them from French security company Vupen.

According to the documents, the NSA signed up to a one-year “binary analysis and exploits service” contract offered by Vupen last September.

Vupen describes itself as “the leading provider of defensive and offensive cyber security intelligence and advanced vulnerability research.” In other words, the security firm finds flaws in software and systems and then sells this data on to governments.

In addition, Vupen offers offensive security solutions, including “extremely sophisticated and government grade zero-day exploits specifically designed for critical and offensive cyber operations.”

Zero-day vulnerabilities are security flaws in systems, discovered by researchers or cyberattackers, that have not yet been found or patched by the vendor.

Read more at http://www.zdnet.com/nsa-purchased-zero-day-exploits-from-french-security-firm-vupen-7000020825/

 

PCWorld Jan 2013: Vulnerability researchers find weaknesses in industrial systems

Hackers find targets

A recently leaked FBI cyberalert document dated July 23 revealed that earlier this year hackers gained unauthorized access to the heating, ventilation and air conditioning (HVAC) system operating in the office building of a New Jersey air conditioning company by exploiting a backdoor vulnerability in the control box connected to it — a Niagara control system made by Tridium. The targeted company installed similar systems for banks and other businesses.

The breach happened after information about the vulnerability in the Niagara ICS system was shared online in January by a hacker using the moniker “@ntisec” (antisec). Operation AntiSec was a series of hacking attacks targeting law enforcement agencies and government institutions orchestrated by hackers associated with LulzSec, Anonymous and other hacktivist groups.

“On 21 and 23 January 2012, an unknown subject posted comments on a known US website, titled ‘#US #SCADA #IDIOTS’ and ‘#US #SCADA #IDIOTS part-II’,” the FBI said in the leaked document.

“It’s not a matter of whether attacks against ICS are feasible or not because they are,” Ruben Santamarta, a security researcher with security consultancy firm IOActive, who found vulnerabilities in SCADA systems in the past, said via email. “Once the motivation is strong enough, we will face big incidents. The geopolitical and social situation does not help so certainly, it is not ridiculous to assume that 2013 will be an interesting year.”

Targeted attacks are not the only concern; SCADA malware is too. Vitaly Kamluk, chief malware expert at antivirus vendor Kaspersky Lab, believes that there will definitely be more malware targeting SCADA systems in the future.

“The Stuxnet demonstration of how vulnerable ICS/SCADA are opened a completely new area for whitehat and blackhat researchers,” he said via email. “This topic will be in the top list for 2013.”

However, some security researchers believe that creating such malware is still beyond the abilities of average attackers.

The trend seems to be growing for both attacks and investments in the SCADA security field, according to Donato Ferrante. “In fact if we think that several big companies in the SCADA market are investing a lot of money on hardening these infrastructures, it means that the SCADA/ICS topic is and will remain a hot topic for the upcoming years,” Ferrante said via email.

However, securing SCADA systems is not as straightforward as securing regular IT infrastructures and computer systems. Even when security patches for SCADA products are released by vendors, the owners of vulnerable systems might take a very long time to deploy them.

There are very few automated patch deployment solutions for SCADA systems, Luigi Auriemma said via email. Most of the time, SCADA administrators need to manually apply the appropriate patches, he said.

“The situation is critically bad,” Kamluk said. The main goal of SCADA systems is continuous operation, which doesn’t normally allow for hot patching or updating — installing patches or updates without restarting the system or the program — he said.

In addition, SCADA security patches need to be thoroughly tested before being deployed in production environments because any unexpected behavior could have a significant impact on operations.

Read more at http://www.pcworld.com/article/2023249/brace-for-more-attacks-on-industrial-systems-in-2013.html

Jim Bird, SWReflections Blog: What is Important in Secure Software Design?

There are many basic architectural and design mistakes that can compromise the security of a system:

  1. Missing something important in security features like access control or auditing, privacy and compliance requirements;
  2. Technical mistakes in understanding and implementing defence-against-the-dark-arts security stuff like crypto, managing secrets and session management (you didn’t know enough to do something or to do it right);
  3. Misunderstanding architectural responsibilities and trust zones, like relying on client-side validation, or “I thought that the data was already sanitized”;
  4. Leaving the attack surface bigger than it has to be – because most developers don’t understand what a system’s attack surface is, or know that they need to watch out when they change it;
  5. Allowing access by default, so when an error happens or somebody forgets to add the right check in the right place, the doors and windows are left open and the bad guys can walk right in;
  6. Choosing an insecure development platform or technology stack or framework or API and inheriting somebody else’s design and coding mistakes;
  7. Making stupid mistakes in business workflows that allow attackers to bypass checks and limits and steal money or steal information.

 

Learning about Secure Software Design

If you want to build a secure system, you need to understand secure design.

Read more at http://swreflections.blogspot.com/2013/06/what-is-important-in-secure-software.html

Secure Software Development at the Nuts and Bolts Level

  • Input Validation – Check input from users of the system to be sure it contains no harmful content and to be sure the information entered is only the information expected. Repair improper input when possible or request re-entry of information.
  • Output Validation – Check information being sent to users of the system to be sure no harmful content is being sent. If harmful content is detected, an administrator should be notified.
  • Error checking
    • Access Failure – Be sure the program does not perform in an unexpected manner when access to the registry, any external resource, or a file fails.
    • Buffer Overflow – Code should be written so when data is put into a buffer, the buffer will not overflow. This means there should be checks to be sure more information than the buffer can hold will not be written into it.
    • Check files loaded for legitimacy – Files that are loaded should be checked to be sure they are the expected file. This prevents unexpected program performance and possible security problems.
    • Check to be sure modification to the system environment cannot cause the wrong file to load.
  • Error handling – Error handling determines what the program will do when there is an error. The error may be an operator error or an internal error. All possible errors must have an appropriate response designed and implemented within the program.
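A few of the practices above can be sketched in Python (function names and limits are illustrative, not from any particular standard): whitelist-style input validation with re-entry on failure, output encoding so harmful content is neutralized before being sent to users, and an explicit bounds check before writing into a fixed-capacity buffer.

```python
import html
import re

USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")   # whitelist, not blacklist

def validate_username(raw: str) -> str:
    """Input validation: accept only the exact shape we expect, and
    request re-entry (here: raise) on anything else."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username; please re-enter")
    return raw

def render_comment(text: str) -> str:
    """Output validation/encoding: neutralize harmful markup before the
    content is sent back to other users of the system."""
    return html.escape(text)

def copy_into_buffer(data: bytes, capacity: int) -> bytes:
    """Bounds check before writing: reject input the buffer cannot hold
    instead of letting it overflow."""
    if len(data) > capacity:
        raise ValueError("input exceeds buffer capacity")
    buffer = bytearray(capacity)
    buffer[:len(data)] = data
    return bytes(buffer)
```

Python’s slicing cannot overflow a bytearray, so the explicit length check in `copy_into_buffer` stands in for the bounds checks that memory-unsafe languages must perform by hand.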

http://www.comptechdoc.org/independent/programming/programming-standards/software-best-practices.html