Security and Compliance Synergies with DLP, SIEM, and IAM

Data Loss Prevention (DLP) is a technology that keeps an inventory of data on organizational devices, tracks when that data moves, and applies rule sets to prevent data from moving to unauthorized locations such as a thumb drive, a cloud server, or an email recipient outside the company. DLP can significantly help organizations understand and control the data they use, store, and transmit, and it is seeing increasing use by internal compliance groups as they try to meet strict regulatory requirements.
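
As a rough, hypothetical illustration of the rule-set idea described above, the Python sketch below blocks a transfer when content matching a sensitive pattern is headed for an unapproved destination; the pattern list and destination names are invented for the example and are not drawn from any particular DLP product.

```python
import re

# Hypothetical, simplified DLP-style rule check: block transfers of
# content that matches sensitive patterns to unapproved destinations.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-like pattern
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # possible payment card number
]
ALLOWED_DESTINATIONS = {"corp-file-server", "approved-cloud-tenant"}

def transfer_allowed(content: str, destination: str) -> bool:
    """Return True if the transfer may proceed, False if DLP should block it."""
    contains_sensitive = any(p.search(content) for p in SENSITIVE_PATTERNS)
    if contains_sensitive and destination not in ALLOWED_DESTINATIONS:
        return False   # block and, in a real system, log and alert
    return True

# Example: copying a document with a card-like number to a USB drive is blocked.
print(transfer_allowed("Card: 4111 1111 1111 1111", "usb-drive"))   # False
print(transfer_allowed("Quarterly roadmap", "usb-drive"))           # True
```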

Another technology, Security Information and Event Management (SIEM), collects and analyzes data in real time from multiple sources, including server logs, network devices, firewalls, and intrusion detection systems. It then correlates that information to identify relevant patterns and alert on high-priority events or event sequences. SIEM systems retain the data separately from the collection source so that it is protected from tampering, deletion, or corruption. They also summarize the data in dashboards for easy reporting and analysis.

The third technology, Identity and Access Management (IAM), allows an organization to manage credentials across the enterprise, including over a diverse set of equipment and devices. IAM manages information about users, what they are authorized to access, and the actions they are permitted to perform.

The combination of SIEM, DLP, and IAM can improve the security and compliance of a corporation. Taken together, they make data flow within an organization transparent, affording the business more control over its information and less opportunity for that information to be misused.

What are SIEM, DLP, and IAM?

As stated earlier, DLP is a conscious effort to prevent the loss of data to undesirable individuals, groups, or circumstances. DLP systems determine which pieces of information are more important than others, creating a prioritized list, and as a comprehensive set of methodologies and technologies they can examine information across departments far better than localized, isolated searches. SIEM is technology that ingests and interprets information from network security devices and server logs, allowing greater visibility into the use, transmission, and storage of data. SIEM lets a company consolidate security information from many different areas so that the organization can better understand and prioritize how to protect its data, and IAM allows the activity logs from heterogeneous devices to be tied to the identity of an individual for better auditing and intelligence.

Protecting the company's data is a primary responsibility of information security. With the increased complexity and interoperability of systems, this task becomes much more challenging, especially on a localized basis. With the help of DLP, the job of protecting information becomes much clearer. Using SIEM in conjunction with DLP and IAM can further ease the information security department's task of protecting organizational data, preventing breaches, and meeting regulatory requirements by restricting data from being exfiltrated and ensuring that authorized use is monitored and audited.

Correlating real threats in real time with how and where the most sensitive pieces of information are stored and handled falls squarely within the realm of SIEM, DLP, and IAM. Furthermore, by combining SIEM, DLP, and IAM, a company can see its security posture in one place rather than several, making the process more efficient, and that efficiency translates directly into better protection of documents. SIEM can be tuned to focus on where the data is found, helping the DLP team protect the information at the source, in transit, and at its destination. In addition, SIEM can refine the way that DLP identifies sensitive information and can alert the DLP team to new resources and new threats to organizational information.

Combining these three methods of protection gives the organization more insight into where additional security controls should be placed and allows for faster incident response. This combination of insight and coordination supports a more efficient strategy against potential threats. DLP can prevent malicious or accidental misuse by allowing only authorized accounts to access certain documents and by informing the company when those documents have been retrieved. Simultaneously, SIEM sharpens controls by monitoring that retrieval, making the resulting alerts as streamlined, efficient, and quick as possible. Together, these technologies provide what information security offices need: visibility and control.
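
To make the visibility-and-control point concrete, here is a minimal, hypothetical sketch of the kind of correlation a SIEM might perform: joining a DLP alert with IAM identity data so an analyst can see who moved what, and flagging activity that happens out of hours or under a terminated account. The event fields and thresholds are assumptions for illustration only.

```python
from datetime import datetime

# Hypothetical event feeds: a DLP alert stream and an IAM identity directory.
dlp_alerts = [
    {"user_id": "u1042", "file": "customers.xlsx", "destination": "personal-email",
     "timestamp": datetime(2024, 3, 8, 23, 15)},
]
iam_directory = {
    "u1042": {"name": "J. Doe", "department": "Sales", "status": "terminated"},
}

def correlate(alerts, directory, business_hours=(8, 18)):
    """Join DLP alerts with IAM identities and flag high-priority combinations."""
    findings = []
    for alert in alerts:
        identity = directory.get(alert["user_id"], {})
        after_hours = not (business_hours[0] <= alert["timestamp"].hour < business_hours[1])
        high_priority = after_hours or identity.get("status") == "terminated"
        findings.append({**alert, **identity, "high_priority": high_priority})
    return findings

for finding in correlate(dlp_alerts, iam_directory):
    print(finding)   # a real SIEM would raise a dashboard alert instead of printing
```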

Internal Threats

Companies sometimes have information but cannot act on it because it is buried in a server log or a database. For example, the 2008 Verizon Business data breach report found that in 82% of the cases studied, evidence of the breach was available to the victim organization, yet the organization was unable to use it. SIEM, DLP, and IAM could have helped those organizations better understand and prevent these breaches.

The reality is that employees often change positions or leave. Without proper termination procedures and security controls, departing employees could transfer customer documents or steal intellectual property and other sensitive information. The use of SIEM, DLP, and IAM provides real-time information on data access and can flag inappropriate or out-of-the-norm activity.

External Threats

Take a company that regularly handles credit card information and is compliant with the Payment Card Industry Data Security Standard (PCI DSS). PCI DSS compliance can help protect the organization and mitigate a variety of attacks, but DLP and SIEM can also tell the organization where attacks might be focused. Fingerprinting and other reconnaissance activity can herald the onset of a larger attack, and SIEM, DLP, and IAM would highlight these precursors so that the organization could respond and protect itself and its data.

SIEM, DLP, and IAM in a distributed mobile world

SIEM, DLP, and IAM are particularly valuable to organizations that are increasingly mobile. More and more workers access corporate data from mobile devices, the cloud, or machines connected over a VPN, and BYOD is prevalent in many organizations. It is important to tie this activity back to a unique identity and to track patterns across devices and organizational boundaries. Protecting information was already difficult when it was limited to one network and a few select locations, but that time is well in the past, and these new facets of modern work widen the gap that information security needs to cover. With the help of DLP, threats can be prioritized according to their importance, and with SIEM the transfer and storage of data become transparent, easing the burden on the information technology and security departments as they protect a larger set of assets.

The use of SIEM, DLP, and IAM can significantly enhance the capabilities of information security departments. SIEM makes the access, transfer, and reception of data within the company more apparent and can further improve DLP initiatives in protecting and controlling data within the organization. Using SIEM, DLP, and IAM together streamlines the process of protecting vital information and makes the company more efficient.

This article is sponsored by JURINNOV, a TCDI company specializing in cybersecurity and computer forensic consulting services.

Leveraging Vulnerability Scoring in Prioritizing Remediation

The average organization has numerous types of equipment from different vendors. Along with the equipment, businesses also utilize multiple software applications from various developers throughout the organization. This diversity provides many helpful opportunities, but it also creates a higher probability of vulnerability. Risk managers can stay aware of new vulnerabilities through vendor systems or services such as SANS @RISK, the National Vulnerability Database (NVD), the Open Source Vulnerability Database (OSVDB), or Bugtraq, but how do they prioritize those vulnerabilities? Certainly risk managers need to know which vulnerabilities carry the highest risk so that they can be resolved before lesser vulnerabilities. Understanding these vulnerabilities and their impact relative to other vulnerabilities is quite a challenge.

To overcome this challenge, several scoring systems have been developed. These include the US-CERT (United States Computer Emergency Readiness Team) Vulnerability Notes Database and the Common Vulnerability Scoring System (CVSS). This article provides an overview of both systems and how risk managers can use them to prioritize remediation.

US-CERT Vulnerability Notes Database

Severe vulnerabilities are published in the US-CERT Technical Alerts. One clear problem arises, however: what determines the severity of a vulnerability? A severe vulnerability that affects a rare application may be of lower priority to most users; however, those who do use that application will want information about its possible vulnerabilities. The Vulnerability Notes Database therefore publishes vulnerabilities of all severities. This open-book policy exists because the severity of a vulnerability is difficult to determine universally. For example, the few users of the rare application can use the database to find the severe vulnerability that would not be published in the Technical Alerts.

Vendor information is available in addition to the vulnerability notes. For each vendor, this includes a summary of the vendor's vulnerability status, rated as "Affected", "Not Affected", or "Unknown". The notes may also include a statement from the vendor offering solutions to the problem, such as software patches or permanent fixes.

The database allows for browsing and searching for vulnerabilities. The notes include the impact of the vulnerability, solutions and workarounds, and a list of vendors affected by the vulnerability. Searches can be customized to determine which vulnerabilities impact an organization and their level of severity. Thus, this database can be very helpful for risk managers.

Common Vulnerability Scoring System (CVSS)

While the US-CERT Vulnerability Notes Database publishes vulnerabilities of all severities, it is not the only way risk managers can prioritize their vulnerabilities. Another system, which companies can apply to their own equipment and software, is the Common Vulnerability Scoring System (CVSS).

CVSS ranks vulnerabilities using three categories of metrics: base, temporal, and environmental.

Base metrics define the fundamental characteristics of a vulnerability and include the following:

  • Impact to confidentiality, integrity, and availability
  • Access vector – the route through which a vulnerability is exploited, such as local, adjacent network, or network
  • Access complexity – how difficult the vulnerability is to exploit once the attacker has reached the target
  • Authentication – the number of times an attacker must authenticate in order to exploit the vulnerability

Temporal metrics are those that change over time. The three temporal metrics are exploitability, remediation level, and report confidence.

  • Exploitability measures the current availability of exploit techniques; the more readily exploits are available, the larger the pool of potential attackers.
  • Remediation levels include unavailable, workaround, temporary fix, and official fix. As a vulnerability’s remediation level increases, its severity decreases.
  • Report confidence measures the confidence of the vulnerability’s existence and its technical details. Values include confirmed, uncorroborated, and unconfirmed. Vulnerabilities that are confirmed are considered more severe.

The last category of metrics used by CVSS is environmental metrics. These consist of metrics related to where the vulnerability exists. The metrics are as follows:

  • Collateral damage potential
  • Target distribution – the percentage of potentially affected systems
  • Confidentiality, availability, and integrity requirements

The CVSS, unlike the US-CERT database, provides defined metrics and measures to categorize vulnerabilities, along with a scoring formula that quantifies them. This allows risk managers in niche markets and specific businesses to isolate the particular vulnerabilities that matter most to them.
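
To show how these metrics combine into a single number, here is a small Python sketch of the CVSS version 2 base-score equation. The weightings are taken from the public FIRST CVSS v2 guide, and the dictionary keys are shorthand labels of my own, so treat this as an illustration rather than a reference implementation (newer CVSS versions use different formulas).

```python
# CVSS v2 base score, per the published FIRST CVSS v2 equations.
ACCESS_VECTOR  = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEX = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
IMPACT_WEIGHT  = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss2_base(av, ac, au, conf, integ, avail):
    impact = 10.41 * (1 - (1 - IMPACT_WEIGHT[conf])
                        * (1 - IMPACT_WEIGHT[integ])
                        * (1 - IMPACT_WEIGHT[avail]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEX[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# A remotely exploitable, low-complexity, no-authentication flaw with complete
# loss of confidentiality, integrity, and availability scores the maximum 10.0.
print(cvss2_base("network", "low", "none", "complete", "complete", "complete"))  # 10.0
```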

Organizations usually run a large number of programs and, in addition, rely on a multitude of equipment to operate a successful business. These cogs in the corporate engine do not always run smoothly; vulnerabilities crop up that can harm a piece of equipment or the larger company, so risk managers must track them to keep the business running efficiently. Following all of these vulnerabilities can prove difficult. Risk managers keep on top of new vulnerabilities through outlets such as SANS @RISK, the National Vulnerability Database (NVD), the Open Source Vulnerability Database (OSVDB), or Bugtraq, but ranking those vulnerabilities by severity adds a further strain. The job can be tough, but there are resources that help address the more critical vulnerabilities ahead of less severe problems: the US-CERT Vulnerability Notes Database and the Common Vulnerability Scoring System (CVSS).

The US-CERT Vulnerability Notes Database takes a broader approach. It chronicles many known vulnerabilities and outlines their severity without imposing a hard ranking, because a single blanket score is difficult to apply across such a diversity of businesses. The CVSS, meanwhile, uses standardized measurements to rank vulnerabilities across three categories of metrics: base, temporal, and environmental. Within these are several subcategories, all of which meticulously sort vulnerabilities into a ranking.

Both the US-CERT Vulnerability Notes Database and the CVSS provide a way to rank vulnerability severity. By using these systems, organizations can determine which vulnerabilities are most likely to affect their applications in the most severe way, and they can then prioritize remediation of the most critical vulnerabilities likely to affect their systems first.

 

Achieving High Availability with Change Management

Change management is a key information security component of maintaining high-availability systems. Change management involves requesting, approving, validating, and logging changes to systems. This process can bring significant benefits to an organization: it strengthens decision making by training personnel to think through and evaluate changes before they are made, and it provides a knowledge base of past changes and the lessons learned from them.

Information security can be divided into three areas, confidentiality, integrity, and availability, often called the CIA triad. Availability is extremely important. After all, if data is not available to authorized users when they need it, of what use is it? High availability describes a system that is accessible to users 24×7 with minimal scheduled downtime.

Often-mentioned methods for obtaining high availability include hardware redundancy such as active/passive firewalls, clustered servers, network load balancing, and round-robin DNS. Redundancy is indeed something high-availability networks must have; however, another important factor in achieving high availability is a change management policy.

Any change has the potential to create new vulnerabilities or reduce the availability of systems. Of course, the process of maintaining systems and managing business objectives requires change, so organizations must determine how to balance the need for change with the minimization of risk. The answer is change management. This starts with a change management policy, which then leads into a change management program whereby change management is implemented throughout the company.

Let's first define change management and describe what a change management system looks like. Change management is the process whereby changes are requested, approved, validated, and logged to reduce the risk that a change compromises the availability of systems or creates new vulnerabilities. Validation also takes place after a change has been made: the system is tested to determine whether the change produced the desired result. Change management approvers should thoroughly consider the impact of changes and notify users and others about the change, and it is advantageous to schedule changes during standard downtimes to minimize the potential impact on system availability.
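
As a rough sketch of the request-approve-implement-validate-log lifecycle just described (the states, field names, and rollback handling are illustrative, not taken from any specific change management product):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical, minimal change-management record with a built-in history log.
@dataclass
class ChangeRequest:
    summary: str
    requester: str
    rollback_plan: str
    approver: Optional[str] = None
    scheduled_for: Optional[datetime] = None
    status: str = "requested"        # requested -> approved -> implemented -> validated
    history: List[str] = field(default_factory=list)

    def _log(self, entry: str) -> None:
        self.history.append(f"{datetime.now().isoformat()} {entry}")

    def approve(self, approver: str, scheduled_for: datetime) -> None:
        self.approver, self.scheduled_for, self.status = approver, scheduled_for, "approved"
        self._log(f"approved by {approver}, scheduled for {scheduled_for}")

    def implement(self) -> None:
        if self.status != "approved":
            raise RuntimeError("change must be approved before implementation")
        self.status = "implemented"
        self._log("change implemented")

    def validate(self, passed: bool) -> None:
        if passed:
            self.status = "validated"
            self._log("post-change testing passed")
        else:
            self.status = "rolled back"
            self._log(f"testing failed; executing rollback plan: {self.rollback_plan}")

# Example: a firewall rule change scheduled during a standard maintenance window.
change = ChangeRequest("Open TCP 8443 to app tier", "net-admin", "Remove rule 214")
change.approve("security-manager", datetime(2024, 6, 1, 2, 0))
change.implement()
change.validate(passed=True)
print(change.status, change.history)
```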

Following the definition above, the first element of change management is approval. Change management systems require changes to be requested in a system and then approved by an authorized individual such as a supervisor, manager, or data owner, or by multiple persons. Requesting and approving a change validates the actions to be taken, since multiple people consider the decision before it is approved.

The last element is logging. Logging produces some ancillary benefits to change management. Change management logging is a positive step towards knowledge management, and it can aid personnel in reversing any damaging changes that may occur.

Change management can assist in knowledge management objectives because the rationale behind changes along with those who implemented them are stored in the system. If a similar event comes along, such as a server error or a new project, the system can be queried to determine a course of action and the persons involved can be contacted for further information or involvement in future projects or troubleshooting.

Change management also gives an organization the ability to reverse damaging changes because it keeps a log of the actions taken. Not all changes achieve the desired outcome. In such situations, it is imperative that the organization have a method of reversing the changes to bring the system back into a functioning state. Change management accomplishes this by enabling users to view the log of actions taken so that these actions can be undone.

So what kinds of actions should be managed in a change management system? The CISSP common body of knowledge asserts that change management systems should manage changes related to the entire lifecycle of a system including design, development, testing, evaluation, implementation, distribution, and ongoing maintenance.

The next question is which changes in these categories should be logged. This is an important question that has to be answered on a case-by-case basis by organizational decision makers. The greatest benefit from a change management system comes from tracking even minor changes, but this is a determination you will have to make.

Lastly, consider implementing change management metrics and integrating them with other security metrics you track so that you can ensure change management goals are met.  Change management is a process that can greatly strengthen information assurance and provide a framework for high availability in information systems. The process involves requesting, approving, validating, and logging changes to systems. This process aids in knowledge management, incident response, security management, and governance.

Fail Secure – The Correct Way to Crash

Do you think there is a right way to crash? A system crash sounds like a bad thing all around, but there are safe ways for a system to crash and dangerous ways. Systems can crash in a way that allows attackers to exploit the data on them or to install back doors and gain control over the system. In a design approach called "fail secure," systems are built so that they fail, and then start up again, without introducing new security vulnerabilities for attackers to exploit.

Let's look at three areas where systems should fail secure: communication channels, access control systems, and default configurations. In communication channels, use public key cryptography for session initialization so that when new sessions are created, key material is not exchanged in plain text for an attacker to read. Likewise, access control systems should deny requests when they fail. How many times in movies have you seen a person bash a keypad to gain entry? Attackers perform something similar, such as disconnecting the power from a device to gain entry, so these devices should be configured to stay locked even when they fail. Finally, avoid default configurations on systems and disable the ability to roll back to a default state. Some devices have a button or a menu item that resets them to factory defaults, but this can create a security hole in your network since many devices' default configurations are well documented.
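
The access-control point can be illustrated with a short, hypothetical sketch: the check defaults to denying entry whenever its backing credential service fails, rather than failing open. The service and field names are invented for the example.

```python
# Hypothetical fail-secure access check: any error results in denial,
# so a crashed or unreachable credential service never unlocks the door.
def is_access_allowed(badge_id: str, credential_service) -> bool:
    try:
        return credential_service.lookup(badge_id).is_authorized
    except Exception:
        # Fail secure: on timeout, crash, or power-cycle recovery, stay locked.
        return False

class DownService:
    """Simulates a credential backend that has crashed or lost power."""
    def lookup(self, badge_id):
        raise ConnectionError("credential database unreachable")

print(is_access_allowed("badge-001", DownService()))   # False -> door stays locked
```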

Reducing privacy and compliance risk with data minimization

Companies collect millions of gigabytes of information, all of which has to be stored, maintained, and secured. There is a general fear of removing data lest it be needed someday, but this practice is quickly becoming a problem that creates privacy and compliance risk. Some call it "data hoarding," and I am here to help you clean your closet of unnecessary bits and bytes.

The news is full of examples of companies losing data. These companies incur significant cost to shore up their information security and their reputations. In a study by the Ponemon Institute, the estimated cost per record for a data breach in 2009 was $204. Based on this, losing 100,000 records would cost a company over twenty million dollars. It is no wonder that companies are concerned. Those that are not in the news are spending a great deal of money to protect the information they collect.

So why are we collecting this information in the first place? Like abstinence campaigns, the best way to avoid a data breach is not to store the data in the first place. This is where data minimization steps in to reduce such risk. As part of the data minimization effort, organizations need to ask themselves three questions (a brief sketch illustrating them follows the list):

  1. Do I really need to keep this data?
  2. Would a part of the data be as useful as the whole for my purposes?
  3. Could less sensitive data be used in place of this data?
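
As a hypothetical illustration of questions 2 and 3, the sketch below keeps only the last four digits of a card number and substitutes a salted hash for a direct identifier; the field names and salt handling are simplified for the example.

```python
import hashlib

# Hypothetical minimization of a stored transaction record:
# retain only what later analysis actually needs.
def minimize(record: dict, salt: bytes = b"rotate-me") -> dict:
    return {
        # Question 2: the last four digits are often as useful as the full number.
        "card_last4": record["card_number"][-4:],
        # Question 3: a salted hash can stand in for the raw email address.
        "customer_ref": hashlib.sha256(salt + record["email"].encode()).hexdigest()[:16],
        "amount": record["amount"],
        "timestamp": record["timestamp"],
        # Question 1: fields with no ongoing purpose are simply not carried over.
    }

print(minimize({"card_number": "4111111111111111", "email": "pat@example.com",
                "amount": 42.50, "timestamp": "2024-06-01T12:00:00Z"}))
```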