Mitigating the Threat of Corporate Espionage

Corporate espionage is not just a plot for action movies; it is a real threat to businesses large and small.  Every year, successful espionage attacks steal data from companies, and the stolen intellectual property is sold to other companies, often in other countries, or ransomed back to the victim.  This, in turn, makes it more difficult for companies to compete and to provide high-quality services.

Corporate espionage more often than not focuses on people, and those who obtain information by manipulating people are called social engineers.  Social engineers recognize that people are the weakest link in organizational security.  Targeting technology takes time: performing reconnaissance, defeating security controls, and locating the needed data.  A few well-placed phone calls or a casual meeting in a bar, by contrast, can yield much of the information they need.

So how do you reduce the threat of corporate espionage?  The first area of defense is education.  Educate employees on the threat of corporate espionage and the techniques social engineers use.  Since social networking sites are often used to make initial contact or gather information about people in the company, also educate employees on the risks and safe practices of social networking, including how to validate the identity of a social networking “friend”, the signs of information gathering, and what can and cannot be disclosed on social networking sites.

The second area of defense is physical security.  Once a person has access to your organization’s computers and facilities, it is very likely that they will be able to extract data.  Make sure that guests are escorted through the facility.  Require appointments for vendors, and document who made the appointment and the identity of the vendor before allowing them entrance.  Guests should sign in and be tracked.  Employees should lock their workstations when they are not in use, and the organization should consider a clean desk policy.  These are just a few of the many measures that can protect against corporate espionage through physical security.  For more information, contact one of our security professionals.

Leveraging Vulnerability Scoring in Prioritizing Remediation

The average organization has numerous types of equipment from different vendors. Businesses also utilize multiple software applications from various developers throughout the organization. This diversity provides many helpful opportunities, but it also creates a higher probability of vulnerability. Risk managers can stay aware of new vulnerabilities through vendor systems or services such as SANS @RISK, the National Vulnerability Database (NVD), the Open Source Vulnerability Database (OSVDB), or Bugtraq, but how do they prioritize those vulnerabilities? Risk managers need to know which high-risk vulnerabilities to resolve before lesser ones, and understanding these vulnerabilities and their impact relative to other vulnerabilities is quite a challenge.

To overcome this challenge, several scoring systems have been developed. These include the US-CERT (United States Computer Emergency Readiness Team) Vulnerability Notes Database and the Common Vulnerability Scoring System (CVSS). This article provides an overview of both systems and how risk managers can use them to prioritize remediation.

US-CERT Vulnerability Notes Database

Severe vulnerabilities are published in the US-CERT Technical Alerts. One clear problem arises, however: what determines the severity of a vulnerability? A severe vulnerability that affects a rare application may be of lower priority to most users, but those who do use the application will want information about its possible vulnerabilities. The Vulnerability Notes Database therefore publishes vulnerabilities of all severities. This open book policy exists because the severity of a vulnerability is difficult to determine. For example, the few users of the rare application can use the database to find the severe vulnerability that would not be published in the Technical Alerts.

Vendor information is available in addition to the vulnerability notes. For each vendor, this includes a summary of the vendor’s vulnerability status, rated as “Affected”, “Not Affected”, or “Unknown”. This may also include a statement from the vendor that includes solutions to the problem, such as software patches and potential permanent fixes.

The database allows for browsing and searching for vulnerabilities. The notes include the impact of the vulnerability, solutions, and ways to work around it, as well as a list of vendors affected by the vulnerability. Searches can be customized to determine vulnerabilities that impact an organization and their level of severity. Thus this database can be very helpful for risk managers.

Common Vulnerability Scoring System (CVSS)

While the US-CERT Vulnerability Notes Database publishes vulnerabilities of all severities, it is not the only way risk managers can prioritize vulnerabilities. A second system, which companies can apply to their own equipment and software, is the Common Vulnerability Scoring System, or CVSS.

CVSS ranks vulnerabilities using three categories of metrics: base, temporal, and environmental.

Base metrics define the fundamental characteristics of a vulnerability and include the following:

  • Impact to confidentiality, integrity, and availability
  • Access vector – the route through which a vulnerability is exploited: local, adjacent network, or network.
  • Access Complexity
  • Authentication

Temporal metrics are those that change over time. The three temporal metrics are exploitability, remediation level, and report confidence.

  • Exploitability measures the current state of exploit technique availability. Higher availability means there are a higher number of potential attackers.
  • Remediation levels include unavailable, workaround, temporary fix, and official fix. As a vulnerability’s remediation level increases, its severity decreases.
  • Report confidence measures the confidence of the vulnerability’s existence and its technical details. Values include confirmed, uncorroborated, and unconfirmed. Vulnerabilities that are confirmed are considered more severe.

The last category of metrics used by CVSS is environmental metrics. These relate to the environment in which the vulnerable systems exist. The metrics are as follows:

  • Collateral damage potential
  • Target distribution – the percentage of potentially affected systems
  • Confidentiality, availability, and integrity requirements

The CVSS system, unlike the US-CERT database, provides standardized metrics and measures to categorize vulnerabilities. It defines a scoring formula that quantifies each vulnerability, allowing risk managers in niche markets and specific businesses to isolate the vulnerabilities most important to them.
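To make the scoring concrete, here is a minimal Python sketch of the base-score calculation, using the coefficient values published in the CVSS version 2 specification (the function and metric names are abbreviated for illustration):

```python
# Coefficients from the CVSS v2 base equation (per the CVSS v2 specification).
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
CIA_IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss_v2_base_score(av, ac, au, conf, integ, avail):
    """Compute the CVSS v2 base score (0.0-10.0) from the six base metrics."""
    impact = 10.41 * (1 - (1 - CIA_IMPACT[conf])
                        * (1 - CIA_IMPACT[integ])
                        * (1 - CIA_IMPACT[avail]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# A worst-case vulnerability: remotely exploitable, low complexity,
# no authentication required, complete loss of confidentiality,
# integrity, and availability.
print(cvss_v2_base_score("network", "low", "none",
                         "complete", "complete", "complete"))  # 10.0
```

Temporal and environmental metrics, when supplied, adjust this base score; the base score alone already gives risk managers a consistent 0 to 10 scale for comparing vulnerabilities.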

Organizations usually run large numbers of programs, and alongside those programs is a multitude of equipment required to operate a successful business. These cogs in the corporation’s engine do not always run smoothly: vulnerabilities crop up that can harm a piece of equipment or the larger company. Risk managers must track all of these vulnerabilities to keep the business running efficiently, which can prove difficult. They keep on top of new vulnerabilities through outlets such as SANS @RISK, the National Vulnerability Database (NVD), the Open Source Vulnerability Database (OSVDB), and Bugtraq. The greater strain, however, lies in ranking vulnerabilities by severity. This job can be tough, but two resources aid in dealing with critical vulnerabilities ahead of less severe problems: the US-CERT Vulnerability Notes Database and the Common Vulnerability Scoring System (CVSS).

The US-CERT Vulnerability Notes Database takes the broader approach. It chronicles many known vulnerabilities and outlines their severity without imposing a hard ranking; the diversity of businesses makes it difficult to apply a single blanket score to all of them. The CVSS, meanwhile, utilizes standardized measurements to rank vulnerabilities. It evaluates them using three categories of metrics: base, temporal, and environmental. Within these are several subcategories, all of which sort vulnerabilities into a ranking system.

Both the US-CERT Vulnerability Notes Database and the CVSS allow for a type of ranking of vulnerability severity. By using these systems, organizations can determine which vulnerabilities are most likely to affect their applications in the most severe way. It follows that these organizations will then be able to prioritize by remediating the most critical vulnerabilities likely to affect their systems first.

 

Guidelines for Username and Password Risk Management

Hackers often bypass some of the best security technologies by exploiting one of the oldest tricks in the book: your password.  Not only will attackers quickly gain access to whatever you have access to, but audits and security monitoring will show that you, not the attacker, accessed the documents, so you will be the one to account for inappropriate use of company resources or access of data.  So what can you do to prevent this?

First, don’t share your password with anyone.  Not your co-workers, secretary, spouse, or even your dog.  Your password should be for your eyes only.  Also, avoid group or departmental accounts that are shared among several people.  Have system administrators create an individual account for each person who accesses a system.  Next, change your password often and follow these guidelines to create a secure password:

  • Use a combination of upper-case and lower-case letters, numbers, and special characters such as ! @ # $ % * ( ) - + = , < > : ; " '
  • Make your password long enough: between 8 and 20 characters is recommended.
  • To help you remember your password easily, consider basing it on a phrase or song and using its acronym as the password.
  • You can also make the entire phrase your password.  I like to choose something funny and weird that would not be easily guessed, like “Yeah, testing for my star riding license”, which would look like this as a password: “Yeah!Testing4My*RidingLicense”
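The guidelines above can be turned into a quick self-check. The short Python sketch below is illustrative only; the character class mirrors the special characters listed above:

```python
import re

def meets_guidelines(password, min_length=8):
    """Check a password against the guidelines above: mixed case,
    at least one number, at least one special character, and a
    minimum length."""
    checks = [
        len(password) >= min_length,                     # long enough
        re.search(r"[A-Z]", password),                   # upper-case letter
        re.search(r"[a-z]", password),                   # lower-case letter
        re.search(r"[0-9]", password),                   # number
        re.search(r"[!@#$%*()\-+=,<>:;\"']", password),  # special character
    ]
    return all(checks)

print(meets_guidelines("Yeah!Testing4My*RidingLicense"))  # True
print(meets_guidelines("password"))  # False: no upper case, number, or special character
```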

Criteria for Selecting an Information Security Risk Assessment Methodology: Qualitative, Quantitative, or Mixed

An information security risk assessment is the process of identifying vulnerabilities, threats, and risks associated with organizational assets and the controls that can mitigate these threats. Risk managers and organizational decision makers use risk assessments to determine which risks to mitigate using controls and which to accept or transfer. There are two prevailing methodologies for performing a risk assessment. These are the qualitative and quantitative approaches. A third method termed mixed or hybrid, combines elements of the qualitative and quantitative approaches.

Quantitative Information Security Risk Assessment

Quantitative information security risk assessments use mathematical formulas to determine the exposure factor and single loss expectancy (SLE) for each threat, as well as the probability of a threat being realized, called the Annualized Rate of Occurrence (ARO). These numbers are used to estimate the amount of money that would be lost to exploited vulnerabilities annually, called the Annualized Loss Expectancy (ALE).

With these numbers, the organization can then plan to control this risk if countermeasures are available and cost effective. These numbers allow for a very straightforward analysis of the costs and benefits for each countermeasure and threat to an asset. Countermeasures that reduce the annualized loss expectancy greater than their annualized cost should be implemented if there is sufficient resource slack available to employ the countermeasure.

For example, a quantitative assessment for Company X identifies $1,000,000 in assets. With an exposure factor of 1%, Company X expects to lose $10,000 annually; in other words, the ALE is $10,000. Countermeasures are available that will reduce this expectation to $2,000 per year, and they cost $7,000 per year to implement. The assessment makes it easy to see the benefit of implementing the countermeasures: reducing the loss from $10,000 to $2,000 is a reduction of $8,000, the countermeasures cost $7,000, and $8,000 minus $7,000 equals a savings of $1,000.

As you can see, the formulas here are all based on the asset value and exposure factor. Therefore, different quantitative risk assessments could produce very different results if the method of asset valuation differed. One assessment may use purchase cost as the asset value but another may use value to data owners, operational cost, value to competitors, or the liability associated with asset loss. Each of these values would be reasonable to use, but they would produce different results.

In the example above, the decision to implement the countermeasures would be different if the asset valuation turned out to be $850,000 instead of $1,000,000. Here the ALE would be $8,500. If the loss is still reduced to $2,000, the reduction is $6,500, but the countermeasures cost $7,000, so the organization would lose $500 by implementing them. It is important to recognize how different methods of asset valuation impact the assessment. The methods used in asset valuation should be documented so that decision makers understand how the numbers were obtained.
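The arithmetic in both scenarios can be sketched in a few lines of Python (the function and variable names here are ours, purely for illustration):

```python
def annualized_loss_expectancy(asset_value, exposure_factor, aro=1.0):
    """ALE = single loss expectancy (asset value x exposure factor)
    x annualized rate of occurrence."""
    return asset_value * exposure_factor * aro

def net_savings(ale_before, ale_after, countermeasure_cost):
    """A positive result means the countermeasure pays for itself."""
    return (ale_before - ale_after) - countermeasure_cost

# Scenario 1: $1,000,000 asset valuation, 1% exposure factor.
ale = annualized_loss_expectancy(1_000_000, 0.01)   # $10,000
print(net_savings(ale, 2_000, 7_000))               # 1000.0 -> implement

# Scenario 2: the same asset revalued at $850,000.
ale = annualized_loss_expectancy(850_000, 0.01)     # $8,500
print(net_savings(ale, 2_000, 7_000))               # -500.0 -> do not implement
```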

Qualitative Information Security Risk Assessment

Qualitative information security risk assessments use experience, judgment, and intuition rather than mathematical formulas. A qualitative risk assessment may utilize surveys or questionnaires, interviews, and group sessions to determine the threat level and annualized loss expectancy. This type of risk assessment is very useful when it is too difficult to assign a dollar value to a particular risk. This can easily be the case with highly integrated systems that house numerous assets and are subject to a variety of risks.

Qualitative information security risk assessments are usually well received because they involve many people at different levels of the organization. Those involved with a qualitative risk assessment can feel a sense of ownership of the process. Qualitative risk assessments do not require a great deal of mathematical computation, but the results are usually less precise than those achieved with a quantitative assessment.

Mixed Information Security Risk Assessment

It is possible to use a mixed approach to information security risk assessments. This approach combines some elements of both the quantitative and qualitative assessments. Sometimes quantitative data is used as one input among many to assess the value of assets and loss expectancy. This approach gives the assessment more credibility due to the hard facts presented, but it also involves people within the organization to gain their individual insight. The disadvantage of this approach is that it may take longer to complete. However, a mixed approach can result in better data than what the two methods can yield alone.

Information security risk assessments can use a quantitative or qualitative methodology, or a combination of the two, to determine asset valuation, threat levels, and the annualized loss expectancy due to vulnerabilities. Software applications exist that make the quantitative calculations easier, so this approach is quite useful for those new to risk assessment, and quantitative assessments provide clear data that makes decision making easy. Qualitative assessments, however, utilize experience and judgment and may uncover things a purely mathematical formula would miss. They also involve more people, which can aid in the acceptance of the results.

 

Understanding Data Loss Prevention (DLP)

Data Loss Prevention (DLP) is one of those terms that is often mentioned but less often defined. The term can be as ambiguous as its scope, which can be both large and small. So what is DLP and why does it matter?

Data Loss Prevention (DLP) is an effort to reduce the risk of sensitive data being exposed to unauthorized persons. Data is extremely valuable to organizations. Just think of trade secrets, financial information, research data, health information, personal information, source code or credit card numbers and you begin to understand both the value this data holds for the organization and the threat its unauthorized disclosure would have on a company. Data loss prevention focuses on this threat by enacting controls to limit access and distribution of data. DLP still establishes controls to restrict outsiders, but it has a major focus on controlling the usage of data within the organization.

Information security efforts have historically been focused on preventing attacks from outside the organization. Controls such as firewalls, network segmentation, and extensive physical controls try to keep the bad guys out, but this is only part of an information security framework. Numerous studies (see further reading below) have identified the weakest information security link as human error or insider threats.

Content Filtering

One method DLP uses is content filtering. Content filtering blocks communication leaving the organization by filtering instant messages, emails, file transfers, web pages, and many other data transfer methods. DLP programs need to work with many different data types and transmission methods. For example, a user may email a sensitive Word document, store it on an unencrypted flash drive, or download it to a mobile phone. Each of these scenarios, and thousands more, needs to be handled by DLP.
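As a simplified illustration of content filtering, a DLP engine might scan outbound messages against patterns for sensitive data. The Python sketch below is a toy example; real products use far more robust detection, such as validating candidate card numbers with a Luhn check:

```python
import re

# Illustrative patterns only: 13-16 digit card-like numbers and
# US Social Security numbers in the common dashed format.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_message(text):
    """Return the names of the sensitive patterns found in an outbound message."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

msg = "Please charge card 4111 1111 1111 1111 for the invoice."
print(scan_message(msg))  # ['credit_card']
```

A real filter would sit in line with the mail gateway, web proxy, or endpoint agent and block or quarantine a message whenever a scan like this finds a match.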

The first step is to determine what data needs to be protected. Above we mentioned trade secrets, financial information, research data, health information, personal information, source code or credit card numbers. These are just some examples of the data an organization holds. Organizations need to determine what to protect and to what extent it should be protected by determining the criticality of each type of information to the business and the loss the organization would incur if the data were to be disclosed to unauthorized entities.

Once the organization understands what it needs to protect, data loss threats to this data can be identified along with effective controls to mitigate such threats. One way to more effectively identify threats is to consider the different states data can be in. These states are as follows:

Data at rest – data that is stored such as data in databases, file shares, backup tapes, laptops, or external storage devices. Data at rest is an important state because it is here that data spends most of its time.

Data in motion – data that is being transmitted from one location to another. As data changes state from being at rest to being in motion, it may become unencrypted or travel over an insecure network. This is why it is important to look at this state.

Data being accessed – data that is being used by a user such as an open Word document, a report being viewed in a conference room, or statistics displayed on a cell phone widget. Data being accessed has already passed many information security controls, so it is available to the authenticated user. It may be available to others as well. Threats such as shoulder surfing, unlocked and logged in desktops, and printouts on a desk are all potential ways data can be exposed.

Case study

Let’s consider a case study for one type of data so that data loss prevention becomes clearer. A small business determines that financial data needs to be protected. The financial data is stored in a database that is attached to a managerial portal on the company intranet. Accountants use a custom application to input financial data into the database. Each week, managers generate reports and store them on a shared drive. The database and the shared drive are backed up nightly to tapes that are stored in a vault at the company headquarters.

This case study already identified the financial data as something that needs to be protected from disclosure. The company further specifies that financial data should be available only to managers, accounting staff, executives, the IRS, and outside auditors.

First, we will look at the data at rest. The data is stored in the database, on the file server, and on backup tapes. Data loss prevention can protect the database by limiting the accounts that can directly access it and by assigning the minimum level of access to each account. Next, strict access controls would be established for the file server share and for the file server itself. Administrative access to the server must also be considered, because anyone who can log onto the server with administrative credentials will have access to the shares as well; administrators will therefore need to be restricted to one of the groups identified above as having access. Backup tapes could be encrypted and stored in a separate area from tapes holding less sensitive data.

Next, we look at data in motion. The data is in motion when it is accessed through the intranet. Granular access controls could be established for intranet access, and the communication channel could be encrypted.

Lastly, data being accessed would include viewing reports through the intranet or updating accounting data by accountants. Client-side caching of data would need to be restricted as part of the data loss prevention system. The accountants also interface with the data through the custom program. This program would need to be evaluated for any information security holes including developer access to financial data. Now, what would prevent managers from storing the financial reports on their local machine? With the information given, we do not know if this happens, but it would need to be addressed possibly through a policy stating that the reports cannot be stored locally or by encrypting local hard drives.

This simple example addresses only a small part of data loss prevention. A true information security analysis would include much more than this, such as whether computers accessing the data contain malware or what to do if financial data is emailed or sent via instant messaging. Additionally, it is not enough to just say that data should be encrypted. A detailed design needs to be specified for the encryption if the data loss prevention controls are to be effective.

Bruce Schneier points out the importance of a well-architected data loss prevention design in his June 2010 article “Data at Rest vs. Data in Motion”, where he discusses encrypting credit card information for use on a website.

“If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.” – Bruce Schneier

Those implementing data loss prevention need to have a good understanding of how to architect information security controls and to implement controls in layers so that if one control is compromised another control still prevents data loss. Remember, information security is only as effective as its weakest link.

Data loss prevention is a worthy goal and an excellent information security initiative, but it requires high-level decision making from the beginning and a comprehensive analysis of threats and controls. Also imperative are an understanding of the workflow surrounding organizational data and a detailed design for each control, without which the controls will not be effective.

Reducing privacy and compliance risk with data minimization

Companies collect millions of gigabytes of information, all of which has to be stored, maintained, and secured. There is a general fear of removing data lest it be needed some day, but this practice is quickly becoming a problem that creates privacy and compliance risk. Some call it “data hoarding”, and I am here to help you clean your closet of unnecessary bits and bytes.

The news is full of examples of companies losing data. These companies incur significant cost to shore up their information security and their reputations. In a study by the Ponemon Institute, the estimated cost per record for a data breach in 2009 was $204. Based on this, losing 100,000 records would cost a company over twenty million dollars. It is no wonder that companies are concerned. Those that are not in the news are spending a great deal of money to protect the information they collect.

So why are we collecting this information in the first place? As with abstinence campaigns, the best way to avoid a data breach is not to store the data in the first place. This is where data minimization steps in to reduce such risk. As part of the data minimization effort, organizations need to ask themselves three questions:

  1. Do I really need to keep this data?
  2. Would a part of the data be as useful as the whole for my purposes?
  3. Could less sensitive data be used in place of this data?

Business Continuity and Backups in the Virtual World

Virtualization has really become a mainstream technology and an effective way for organizations to reduce costs. As mentioned in previous articles, it simplifies processes but also creates new information security risks to handle. This article is concerned with business continuity and how virtualization can create many new opportunities and efficiencies in your business continuity plan. This is the third article in a series on virtualization.

Specifically, three elements of business continuity can be enhanced through virtualization: hot, warm, and cold sites; snapshots; and testing. If you have not considered virtualization in your business continuity plan, I hope you will do so after reading this article. If you have questions on how to implement such a service, please contact us and we will be happy to assist you.