Three methods for performing a risk assessment

An information security risk assessment is the process of identifying the vulnerabilities, threats, and risks associated with organizational assets, along with the controls that can mitigate those threats. Risk managers and organizational decision makers use risk assessments to determine which risks to mitigate using controls and which to accept or transfer. There are two prevailing methodologies for performing a risk assessment: the qualitative approach and the quantitative approach. A third approach, termed mixed or hybrid, combines elements of both.

Quantitative Information Security Risk Assessment

Quantitative information security risk assessments use mathematical formulas to determine the exposure factor and single loss expectancy for each threat, as well as the probability of a threat being realized, called the Annualized Rate of Occurrence (ARO). These numbers are used to estimate the amount of money that would be lost to exploited vulnerabilities each year, called the Annualized Loss Expectancy (ALE).

With these numbers, the organization can plan to control the risk where countermeasures are available and cost effective. The numbers allow for a straightforward cost-benefit analysis of each countermeasure against each threat to an asset. Countermeasures that reduce the annualized loss expectancy by more than their annualized cost should be implemented, provided sufficient resources are available to employ them.

For example, a quantitative assessment for Company X identifies $1,000,000 in assets. With an exposure factor of 1%, Company X expects to lose $10,000 annually. In other words, the ALE is $10,000. Countermeasures are available that will reduce this expectation to $2,000 per year and the countermeasures cost $7,000 per year to implement. This assessment makes it easy to see the savings of implementing the countermeasures because the organization would save $1,000. The math is as follows: $10,000 loss reduced to $2,000 is a reduction of $8,000. The countermeasures cost $7,000. $8,000 reduction in loss minus $7,000 for the cost of the countermeasures equals a savings of $1,000.

As you can see, the formulas here are all based on the asset value and exposure factor. Therefore, different quantitative risk assessments could produce very different results if the method of asset valuation differed. One assessment may use purchase cost as the asset value but another may use value to data owners, operational cost, value to competitors, or the liability associated with asset loss. Each of these values would be reasonable to use but they would produce different results.

In the example above, the decision to implement the countermeasures would change if the asset valuation turned out to be $850,000 instead of $1,000,000. Here the ALE would be $8,500. The loss, still reduced to $2,000, would yield a gross savings of $6,500, but the countermeasures cost $7,000, so the organization would lose $500 by implementing them. It is important to recognize how different methods of asset valuation impact the assessment. The methods used in asset valuation should be documented so that decision makers understand how the numbers were obtained.
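The calculations above can be sketched in a few lines of Python. The values come from the example, and an annualized rate of occurrence of 1 is assumed for simplicity:

```python
def ale(asset_value, exposure_factor, aro=1.0):
    """Annualized Loss Expectancy = asset value x exposure factor x ARO."""
    return asset_value * exposure_factor * aro

def countermeasure_savings(ale_before, ale_after, annual_cost):
    """Net annual savings from a countermeasure; negative means a net loss."""
    return (ale_before - ale_after) - annual_cost

# Company X at a $1,000,000 valuation: $10,000 ALE, $1,000 net savings.
print(countermeasure_savings(ale(1_000_000, 0.01), 2_000, 7_000))  # 1000.0

# At an $850,000 valuation: $8,500 ALE, a $500 net loss.
print(countermeasure_savings(ale(850_000, 0.01), 2_000, 7_000))  # -500.0
```

Running the same function with both valuations makes the sensitivity to asset valuation obvious: nothing changes except the input, yet the recommendation flips.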

Qualitative Information Security Risk Assessment

Qualitative information security risk assessments use experience, judgment, and intuition rather than mathematical formulas. A qualitative risk assessment may utilize surveys or questionnaires, interviews, and group sessions to determine the threat level and annualized loss expectancy. This type of risk assessment is very useful when it is too difficult to assign a dollar value to a specific risk. This can easily be the case with highly integrated systems that house numerous assets and are subject to a variety of risks.

Qualitative information security risk assessments are usually well received because they involve many people at different levels of the organization. Those involved with a qualitative risk assessment can feel a sense of ownership of the process. Qualitative risk assessments do not require a great deal of mathematical computation but the results are usually less precise than those achieved with a quantitative assessment.

Mixed Information Security Risk Assessment

It is possible to use a mixed approach to information security risk assessments. This approach combines elements of both the quantitative and qualitative assessments. Quantitative data may be used as one input among many to assess asset value and loss expectancy. This approach gives the assessment more credibility because of the hard facts presented, while still involving people within the organization to gain their individual insight. The disadvantage is that a mixed assessment may take longer to complete, but it can yield better data than either method alone.


Information security risk assessments can use a quantitative or qualitative methodology, or a combination of the two, to determine asset valuation, threat levels, and the annualized loss expectancy due to vulnerabilities. Software applications can make the quantitative calculations easier, so this approach is quite useful for those new to risk assessment. Quantitative assessments provide clear data that simplifies decision making. Qualitative assessments, however, draw on experience and may uncover risks that a purely mathematical formula would miss. They also involve more people, which can aid in the acceptance of results.

For further reading

Qualitative Risk Assessment: Excerpt from the Security Practitioner Introduction to Information Network and Internet Security

Connected Solutions: SLE and ALE calculator

The Security Risk Assessment handbook

Increased control over data flows using Data Loss Prevention

Data Loss Prevention (DLP) is one of those terms that is often mentioned but less often defined. The term can be as ambiguous as its scope, which can be both broad and narrow. So what is DLP and why does it matter?

Data Loss Prevention (DLP) is an effort to reduce the risk of sensitive data being exposed to unauthorized persons. Data is extremely valuable to organizations. Just think of trade secrets, financial information, research data, health information, personal information, source code, or credit card numbers, and you begin to understand both the value this data holds for an organization and the threat its unauthorized disclosure would pose. Data loss prevention addresses this threat by enacting controls that limit access to and distribution of data. DLP still establishes controls to restrict outsiders, but its major focus is controlling the usage of data within the organization.

Information security efforts have historically been focused on preventing attacks from outside the organization. Controls such as firewalls, network segmentation, and extensive physical controls try to keep the bad guys out but this is only part of an information security framework. Numerous studies (see further reading below) have identified the weakest information security link as human error or insider threats.

Content Filtering

One method DLP uses is content filtering. Content filtering blocks sensitive communication leaving the organization by filtering instant messages, emails, file transfers, web pages, and many other data transfer methods. DLP programs need to work with many different data types and transmission methods. For example, a user may email a sensitive Word document, store it on an unencrypted flash drive, or download it to a mobile phone. Each of these scenarios, and thousands more, need to be handled by DLP.
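To make content filtering concrete, here is a minimal sketch of pattern-based outbound scanning. The pattern names and regular expressions are illustrative assumptions; real DLP engines use far more robust detection, such as checksum validation, document fingerprinting, and OCR:

```python
import re

# Assumed, simplified patterns for two common sensitive-data formats.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def scan_outbound(message: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message)]

hits = scan_outbound("Invoice attached; cardholder 4111 1111 1111 1111.")
print(hits)  # ['credit_card']
```

A production filter would sit inline on each channel (mail gateway, web proxy, endpoint agent) and block, quarantine, or log the transfer rather than merely report matches.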

The first step is to determine what data needs to be protected. Above I mentioned trade secrets, financial information, research data, health information, personal information, source code, and credit card numbers. These are just some examples of the data an organization holds. Organizations need to decide what to protect, and to what extent, by assessing the criticality of each type of information to the business and the loss the organization would incur if the data were disclosed to unauthorized entities.

Once the organization understands what it needs to protect, data loss threats to this data can be identified along with effective controls to mitigate such threats. One way to more effectively identify threats is to consider the different states data can be in. These states are as follows:

Data at rest - data that is stored such as data in databases, file shares, backup tapes, laptops, or external storage devices. Data at rest is an important state because it is here that data spends most of its time.

Data in motion - data that is being transmitted from one location to another. As data changes state from being at rest to being in motion, it may become unencrypted or travel over an insecure network. This is why it is important to examine this state.

Data being accessed - data that is being used by a user, such as an open Word document, a report being viewed in a conference room, or statistics displayed on a cell phone widget. Data being accessed has already passed many information security controls, so it is available to the authenticated user. It may be available to others as well. Threats such as shoulder surfing, unlocked and logged-in desktops, and printouts left on a desk are all potential ways data can be exposed.

Case study

Let’s consider a case study for one type of data so that data loss prevention becomes clearer. A small business determines that financial data needs to be protected. The financial data is stored in a database that is attached to a managerial portal on the company intranet. Accountants use a custom application to input financial data into the database. Each week, managers generate reports and store them on a shared drive. The database and the shared drive are backed up nightly to tapes that are stored in a vault at the company headquarters.

This case study already identified the financial data as something that needs to be protected from disclosure. The company further specifies that financial data should be available only to managers, accounting staff, executives, the IRS, and outside auditors.

First, I will look at the data at rest. The data is stored in the database, on the file server, and on backup tapes. Data loss prevention can protect the database by limiting the accounts that can directly access it and by assigning the minimum level of access to each account. The data loss prevention effort would next establish strict access controls on the file server share and on the file server itself. Administrative access to the server must be considered because anyone who can log onto the server with administrative credentials will have access to the shares as well; administrators will need to belong to one of the groups identified as having access above. Tapes could be encrypted and stored separately from tapes holding less sensitive data.

Next, I look at data in motion. The data is in motion when it is accessed through the intranet. Granular access controls could be established for intranet access and the communication channel could be encrypted.

Lastly, data being accessed would include viewing reports through the intranet or accountants updating accounting data. Client-side caching of data would need to be restricted as part of the data loss prevention system. The accountants also interface with the data through the custom program, which would need to be evaluated for security holes, including developer access to financial data. Now, what would prevent managers from storing the financial reports on their local machines? With the information given, I do not know if this happens, but it would need to be addressed, possibly through a policy stating that the reports cannot be stored locally or by encrypting local hard drives.

This simple example addresses only a small part of data loss prevention. A true information security analysis would include much more than this, such as whether computers accessing the data contain malware or what to do if financial data is emailed or sent via instant messaging. Additionally, it is not enough to just say that data should be encrypted. A detailed design needs to be specified for the encryption if the data loss prevention controls are to be effective.

Bruce Schneier points out the importance of a well architected data loss prevention design in his June 2010 article “Data at Rest vs. Data in Motion,” where he discusses encrypting credit card information for use in a website.

If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data. 

Bruce Schneier

Those implementing data loss prevention need to have a good understanding of how to architect information security controls and to implement controls in layers so that if one control is compromised another control still prevents data loss. Remember, information security is only as effective as its weakest link.


This article introduced you to some of the complexities associated with data loss prevention. Data loss prevention is a worthy goal and an excellent information security initiative, but it requires high-level decision making from the beginning and a comprehensive analysis of threats and controls. Also imperative are an understanding of the workflow surrounding organizational data and a detailed design for each control so that it can be effective.


For further reading

Human error biggest threat to computer security 

IT security: the human threat

Security special report: The internal threat

Insiders cause most IT security breaches, study reveals

Data at rest vs. data in motion

Data retention policies reduce the risk of data breach

What if I told you that you could reduce risk and costs at the same time? Skeptical? I would be. It sounds like some cheesy marketing ploy chock-full of hidden costs, or high upfront costs with low ROI. No, I am not pitching a product or trying to sell you a solution. I am, however, trying to get your attention. I am talking about data minimization.

Companies collect millions of gigabytes of information, all of which has to be stored, maintained, and secured. There is a general fear of removing data lest it be needed some day but this practice is quickly becoming a problem that creates privacy and compliance risk. Some call it “data hoarding” and I am here to help you clean your closet of unnecessary bits and bytes.


Risk and Costs

The news is full of examples of companies losing data. These companies incur significant cost to shore up their information security and their reputations. In a study by the Ponemon Institute, the estimated cost per record for a data breach in 2009 was $204. Based on this, losing 100,000 records would cost a company over twenty million dollars. It is no wonder that companies are concerned. Those that are not in the news are spending a great deal of money to protect the information they collect.

So why are we collecting this information in the first place? Like abstinence campaigns, the best way to avoid a data breach is not to store the data in the first place. This is where data minimization steps in to reduce such risk. As part of the data minimization effort, organizations need to ask themselves three questions:


  1. Do I really need to keep this data?
  2. Would a part of the data be as useful as the whole for my purposes?
  3. Could less sensitive data be used in place of this data?


Do I really need to keep this data?

The first data minimization question to ask is: do I really need to keep this data? Some data is transient in nature. It is needed in the moment but not in the long term. Transient data should not be stored or archived; it can simply be discarded as soon as the transaction is complete. Optimally, this data should never be written to disk, but rather be kept in memory while processing the transaction and then flushed, avoiding the risk of storing it where it could later be obtained by an unauthorized entity.
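A minimal sketch of this pattern, assuming a hypothetical payment workflow: the card number lives only in a local variable for the duration of the transaction, is kept out of the logs, and is never persisted. `charge_card` and `log` are stand-ins for a real payment gateway and logging API:

```python
import uuid

def charge_card(card_number: str, amount: float) -> str:
    """Hypothetical payment gateway call; returns a receipt ID."""
    return uuid.uuid4().hex

def log(message: str) -> None:
    """Hypothetical audit log; in real systems this writes to durable storage."""
    print(message)

def process_payment(card_number: str, amount: float) -> str:
    """Use the card number in the moment; never store or log it."""
    receipt_id = charge_card(card_number, amount)
    log(f"charged {amount:.2f}, receipt {receipt_id}")  # no card number in the log
    return receipt_id  # card_number goes out of scope here; nothing hits disk
```

The design choice is that only the receipt ID survives the transaction, so there is no stored card number left to breach.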

Other information such as buying preferences or survey data is collected to be used in aggregation and reporting. The individual responses may not be needed once the data has been aggregated so it should be purged. When analyzing business workflows, it is worth considering implementing a purge process following the aggregation and reporting process.

Effort should be made to periodically remove any records that are no longer relevant. After all, information has a shelf life, an expiration date if you will. The plain fact is that information that is no longer useful to the organization should be removed. This removes the privacy, compliance, eDiscovery or other risk associated with the data and allows organizational resources to be spent elsewhere.

Another instance where you should ask whether you really need to keep data is when you have a copy of the data elsewhere. In that case, you do not need to keep the duplicate. I understand the need for redundancy, but build that into a centralized database system. That way you can protect a single area while still providing high availability. If you absolutely need distributed systems, consider segmenting the database so that each distributed system contains only the portion of the data it needs.


Would a part of the data be as useful as the whole for my purposes?

The second data minimization question to ask is: would a part of the data be as useful as the whole for my purposes? Sometimes it can be. Take a Social Security Number (SSN), for example. Storing the last four digits may be as useful as storing the entire number, and the damage associated with disclosing just those digits is minimal compared to disclosing the full SSN. Similarly, a company could store just the last few digits of a credit card number rather than the entire thing.
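Truncating an identifier to its last four digits is a one-liner; a small sketch:

```python
def last_four(number: str) -> str:
    """Keep only the last four digits of an identifier such as an SSN
    or credit card number, discarding formatting characters."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return digits[-4:]

print(last_four("123-45-6789"))          # '6789'
print(last_four("4111 1111 1111 1111"))  # '1111'
```

The truncation happens before the value is written anywhere, so the full number never reaches storage.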

This area of data minimization is extremely important when working with credit cards and PCI compliance, because every place card numbers are stored must be in full compliance with the standard. This is a risk that compliance officers are eager to mitigate.


Could less sensitive data be used in place of this data?

The third data minimization question you should ask is: could less sensitive data be used in place of this data? Instead of storing a value that is global in nature, like a driver’s license number or SSN, consider storing a customer ID that is only used by your company. This allows you to identify the customer without storing personal information, and it can greatly reduce the cost of compliance for securing data such as Protected Health Information (PHI) under HIPAA or credit card information under PCI-DSS.
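One way to sketch this surrogate-ID idea, under the assumption of a small in-memory directory: each person gets a stable, company-internal ID, and only that ID is stored and shared across systems. The class and method names are illustrative:

```python
import uuid

class CustomerDirectory:
    """Maps a sensitive identifier to a stable company-internal customer ID."""

    def __init__(self) -> None:
        self._ids: dict[str, str] = {}  # sensitive identifier -> customer ID

    def customer_id_for(self, sensitive_id: str) -> str:
        """Return the internal ID for this person, minting one if needed."""
        if sensitive_id not in self._ids:
            self._ids[sensitive_id] = uuid.uuid4().hex
        return self._ids[sensitive_id]

directory = CustomerDirectory()
cid = directory.customer_id_for("123-45-6789")
assert cid == directory.customer_id_for("123-45-6789")  # stable per person
```

Note that the mapping table itself still holds the sensitive identifier, so it must be locked down; the win is that every other system only ever sees the surrogate ID.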

Another option would be to store the answer to a security question, such as place of birth or mother’s maiden name, instead of a password. If passwords must be stored, make sure they are stored as salted hashes rather than plain text. Passwords should never be stored as plain text.
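A minimal sketch of salted password hashing using Python's standard library; the iteration count is an assumption and should be tuned to your hardware:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed work factor; raise it as hardware allows

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-user salt (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither reveals the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```

The per-user salt defeats precomputed rainbow tables, and the high iteration count slows brute-force attempts even if the stored hashes leak.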

To sum it all up, data minimization can reduce the amount of data you need to protect and store, reducing IT costs and information security costs and risk. Three questions can aid in determining what data to prune. Ask yourself (1) Do I really need to keep this data? (2) Would a part of the data be as useful as the whole for my purposes? And (3) Could less sensitive data be used in place of this data?

For further reading

Time for a Data Diet? Deciding What Customer Information to Keep — and What to Toss

Ponemon Study Shows the Cost of a Data Breach Continues to Increase

Security special report: The internal threat

Less Data, More Security