Securing Hybrid IT the Right Way

The average company today is a hybrid collection of traditional on-premise and cloud-based IT solutions.  On-premise solutions may include identity and authorization servers, custom applications, packaged applications, and local data repositories. Cloud services fulfill a wide variety of business tasks such as document sharing, group collaboration, customer relationship management, payment processing, marketing, and communication.  This combination of on-premise and cloud services is called Hybrid IT.

On-premise applications require equipment purchases, software deployment, and user training, but cloud services can be purchased with a credit card and used almost immediately.  As a result, cloud adoption often lacks the same rigor in assessing business need, risk, and other factors.

Getting up to speed

Hybrid IT can be difficult to manage when different users, who may or may not be tech savvy, utilize cloud systems in whatever way they deem best for the situation.  Many organizations now find themselves in a hybrid IT situation that was largely unplanned.  Follow these steps to get up to speed.

  1. Identify the cloud solutions in place.
  2. Determine if it is feasible to continue using the solutions.
  3. Transfer administrative credentials to IT.
  4. Create an approved application list.
  5. Enforce restrictions through network and endpoint controls on which cloud services can be utilized for organizational data.
  6. Standardize security controls on systems including those in organizational private clouds.
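As a sketch of step 5, a simple allowlist check can flag cloud services observed in proxy or DNS logs that are not on the approved application list. All domains below are hypothetical placeholders, not real services:

```python
# Hypothetical sketch: flag cloud services observed in proxy logs that are
# not on the organization's approved-application list (step 4 above).
APPROVED_SERVICES = {
    "sharepoint.example.com",   # document sharing (illustrative entry)
    "crm.example.com",          # customer relationship management
}

def unapproved_services(observed_domains):
    """Return observed cloud domains missing from the approved list."""
    return sorted(set(observed_domains) - APPROVED_SERVICES)

# Example: two observed services, one of which is unapproved.
observed = ["crm.example.com", "filedrop.example.net"]
print(unapproved_services(observed))  # ['filedrop.example.net']
```

In practice the observed-domain feed would come from proxy, DNS, or endpoint agent logs rather than a hard-coded list.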

Identify a security solutions provider that can deploy consistent security onto your on-premise equipment, private clouds, and other assets. For example, Bitdefender delivers solutions that address the technical challenges of Advanced Persistent Threats (APTs) and zero-day exploits.  These same solutions meet increasingly stringent compliance requirements and give datacenter owners the ability to know what they don’t know and act on information from below the operating system.

Maintaining control

The most frequently cited risk in hybrid IT is the potential for a lack of organizational control over customer, employee, and business data.  Without effective endpoint and network security controls, a single user may adopt a cloud platform using a personal email address, load organizational data into it, and then leave the organization.  Their successor then tries to assume control over the system, only to find that they have no ability to do so.

Organizations need to strike a balance between agility and administration.  There needs to be a level of control over which cloud applications are used for business purposes, but the process for evaluating and approving applications needs to be able to keep pace with today’s fast-paced business. See the suggested steps below.

  1. Establish a procedure for requesting a cloud application.
  2. Create a semi-automated workflow from the procedure.
  3. Establish a cross-functional approval group that will respond to requests through the workflow.
  4. Educate employees on the process.
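The steps above could be sketched as a minimal state machine in which each request moves from submission through cross-functional review to a decision. The state names and reviewer roles are illustrative assumptions:

```python
# Hypothetical sketch of the semi-automated request workflow (steps 1-4 above).
from dataclasses import dataclass, field

# Allowed state transitions for a cloud application request.
VALID_TRANSITIONS = {
    "submitted": {"under_review"},
    "under_review": {"approved", "rejected"},
}

@dataclass
class CloudAppRequest:
    requester: str
    application: str
    state: str = "submitted"
    history: list = field(default_factory=list)  # audit trail of transitions

    def advance(self, new_state, reviewer):
        """Move the request forward, rejecting invalid transitions."""
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.history.append((self.state, new_state, reviewer))
        self.state = new_state

req = CloudAppRequest("jdoe", "filedrop.example.net")
req.advance("under_review", reviewer="it-security")
req.advance("approved", reviewer="approval-group")
print(req.state)  # approved
```

A real workflow tool would add notifications and deadlines, but the essential structure, a request, a set of allowed transitions, and an audit trail, is the same.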

Risk mitigation

Hybrid solutions are often initiated by users or departments with little or no involvement from the IT department or those responsible for security within the organization.  Cloud applications may change the organizational risk profile, but the business as a whole is often unaware of this change and therefore cannot evaluate whether actions are required to reduce the risk to an acceptable level. One good way for data center administrators to stay informed about such risks is to deploy solutions such as Hypervisor Introspection, which can evaluate security independently of the virtual machine and analyze system memory at the hypervisor level.  This ensures consistent security management and awareness even when users or administrators deploy non-standard virtual machines.

From there, a combination of endpoint and network controls such as software restrictions on agents on user machines and traffic filtering on the network can be used to restrict access to unapproved cloud services and applications.  This way, users will be required to utilize the process to request applications.

Next, using the workflow developed earlier, the information collected on the approved cloud applications and services can be compiled into a report for risk management.  The entire process of creating this document can be automated in the workflow.  The cross-functional approval team should already include someone from risk management, but this portion of the process involves a more in-depth review of the hybrid IT application portfolio against the organization’s risk tolerance threshold.  Risk management can then make recommendations to ensure that risk is kept to acceptable levels.
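As an illustration, the report compilation could be automated along these lines. The application records and the three-level risk scale are hypothetical, not a prescribed format:

```python
# Hypothetical sketch: compile the approved-application registry into a
# simple report for the risk management review described above.
approved_apps = [  # illustrative records, not real services
    {"name": "crm.example.com", "data": "customer", "risk": "high"},
    {"name": "wiki.example.com", "data": "internal", "risk": "low"},
]

def risk_report(apps, tolerance="medium"):
    """Summarize approved apps and flag those above the risk tolerance."""
    order = {"low": 0, "medium": 1, "high": 2}
    flagged = [a["name"] for a in apps if order[a["risk"]] > order[tolerance]]
    lines = [f"{a['name']}: {a['risk']} risk ({a['data']} data)" for a in apps]
    return lines, flagged

lines, flagged = risk_report(approved_apps)
print(flagged)  # ['crm.example.com']
```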

Reducing attack surface

In some cases, a cloud application is adopted by a user or department when another cloud application has already been adopted to satisfy the same need.  Redundant cloud services increase management costs as well as the attack surface because they create additional potential avenues for attackers to obtain access to organizational data or systems.

  1. Determine which cloud service offers the greatest fit for the organization.
  2. Train users of the redundant service on how to use the preferred one.
  3. Transfer data from one service to the other.
  4. Terminate the redundant service.
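For step 1, one hedged approach is a weighted scoring of each redundant service against organizational criteria. The criteria, weights, and ratings below are purely illustrative assumptions:

```python
# Hypothetical sketch: score redundant cloud services against weighted
# criteria to pick the best organizational fit (step 1 above).
CRITERIA_WEIGHTS = {"security": 0.4, "cost": 0.3, "usability": 0.3}

def fit_score(ratings):
    """Weighted fit score from per-criterion ratings on a 0-10 scale."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

candidates = {  # illustrative ratings, not real product assessments
    "service_a": {"security": 8, "cost": 6, "usability": 7},
    "service_b": {"security": 5, "cost": 9, "usability": 6},
}
best = max(candidates, key=lambda name: fit_score(candidates[name]))
print(best)  # service_a
```

The weights encode organizational priorities; giving security the largest weight reflects the attack-surface concern discussed above.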

Hybrid IT offers organizations an excellent way to augment existing on-premise IT offerings with cutting-edge cloud services.  However, it can also be a nightmare if not managed properly.  Some companies are in a precarious security position, yet the problem is not insurmountable.  With some planning, automation, discipline, and the right mix of endpoint and network security controls, organizations can deploy and manage hybrid IT so that attack surfaces, cloud costs, and management time and effort are minimized.


What to expect in 2015 in security and technology

As hard as it is to believe, 2014 is almost at a close. While some think about holiday gatherings and gifts, I ponder what the next year will bring. What will security, technology, mobile and the cloud look like in 2015?

Security is primed for change and we will see pressure both internally and externally. External pressure will come from compliance and consumers. It takes time for security practices, even those specified by governing bodies, to be widely accepted and practiced. However, we have reached the point where the expectation is for compliance and best practices rather than best effort. Customers are also exerting pressure on organizations to better protect their data and privacy. 2014 was full of significant breaches by major companies and this has shaken consumer confidence.

Internal pressures will be seen in the need for more integrated security technologies that help employees do their jobs without being overly invasive. This will result in a greater push for security architectures to closely conform to operational objectives. Adoption of identity management (IDM) and mobile device management (MDM) will increase along with “software defined” systems, placing the focus on the purpose rather than the process.

I anticipate a growth in the use of analytics and supporting systems such as databases and storage. The Internet of Things is creating more and more data that corporations can use to gain more information on customers and operations. Existing tools and many custom developed tools will be harnessed to take advantage of this data. The cloud will play a big role in allowing companies to scale and to utilize powerful online analytical capabilities of cloud providers.

Lastly, companies will integrate more business operations into mobile apps and employ technologies to create seamless experiences for users no matter where and on which device they connect. Again, the cloud and virtualization technologies will fuel this capability.

I see 2015 as an exciting time for those in security and technology but even more so for those companies and individuals who will be empowered through more effective and secure systems.


Environmentally Conscious Security: Painting Information Security Green

Historically, ecological concerns have been significant drivers for change.  Topics ranging from global warming to protecting various species carry a strong emotional appeal, thus motivating business and personal change with the ultimate goal of protecting the environment.  These environmental initiatives have been termed “green initiatives,” and they impact IT in the form of “green computing.”  The popularity of green computing initiatives stems not only from environmental concerns but also from financial ones. A primary goal of many green computing initiatives is to reduce power consumption, as this has a direct impact on the bottom line.

This article addresses three green computing initiatives and identifies information security action items associated with each initiative. Information security is a concern when programs such as these are implemented.   These initiatives are important because information security is easier to sell if it’s green.

Setting

Green computing is not necessarily new.  In the early 1990s, the Environmental Protection Agency (EPA) and the Department of Energy created the Energy Star program, which defined, among other things, efficiency requirements for computers.  Restrictions have also been placed on how computing equipment, such as monitors and uninterruptible power supplies, can be disposed of.

Recently, a large amount of government spending has been focused on green initiatives.  In 2009, the American Recovery and Reinvestment Act (ARRA) provided $70 billion toward green initiatives, including developing more efficient energy use for equipment and software and creating more efficient IT cooling solutions. $47 million of that money was allocated to datacenter energy consumption and efficiency programs.

You might be thinking, “the environment is great and all, but my company doesn’t really care about that.”  It matters little whether your company is concerned with the environment, because green computing saves money.  Power is expensive and these costs continue to rise, making green computing an easy sell.

Is it Green?

Software Efficiency and Green Computing

Software efficiency is important to green computing because efficient software lets machines spend more time in power-saving modes, so less power is required to perform the same operations.  This initiative saves fossil fuels through the conservation of energy.

Information security practitioners are also concerned with software efficiency because consolidating resources leaves attackers with fewer avenues for malicious use. Advocates of consolidation and reduction efforts can claim that these are not only information security initiatives but also green initiatives.

Virtualization and Green Computing

Virtualization, in computing, is the creation of a virtual (rather than actual) version of a device, such as a hardware platform, operating system, storage device, or network resource, which makes it possible to consolidate many machines onto fewer platforms.   This is especially advantageous when legacy systems can be combined onto newer hardware platforms.  Legacy systems often do not incorporate the latest advances in power technology and are thus less efficient to maintain.  If these systems are virtualized, fossil fuels can be saved through more efficient power management on the newer hardware.

For information security practitioners, virtualization brings an array of advantages and disadvantages.  It can be a great option for improving security, especially availability and business continuity.  However, unless information security personnel are involved in the process and proper controls are tailored to the virtual environment, it may create greater security risks than benefits.

Terminal Based Computing (Thin Computing) and Green Computing

Terminal based computing is another technology that can reduce the amount of energy consumed by workstations.  Because most of the processing power is consumed on the server side where the terminal sessions are managed, the workstations can be very basic machines that require little power to operate.

Terminal based computing provides advantages to the security architecture of a company because more control can be applied to the actions taken on the terminal based environment than in decentralized client-server models.  The disadvantage to information security is that the terminal environment can introduce a centralized point of attack and point of failure for an environment. Thus, additional controls  may be needed to ensure availability of the terminal servers and confidentiality and integrity of the information contained in such systems.

Think about how the inherent green computing advantages of software efficiency, virtualization, and terminal based computing can be emphasized, allowing you to present the additional value of these initiatives to decision makers.  These options are not just secure choices; they are green options too.

Developing a Virtualization Security Policy

Since many organizations are rapidly virtualizing servers and even desktops, there needs to be direction and guidance from top management with regard to information security. Organizations will need to develop a virtualization security policy that establishes the requirements for securely deploying, migrating, administering, and retiring virtual machines. In this way, a proper information security framework can be followed in implementing a secure environment for hosts, virtual machines, and virtual management tools. This article is part two of a series on virtualization.

As with other policies, the security policy should not specify technologies to be utilized. Rather, it should define requirements and controls. Technologies will be implemented to satisfy the requirements and controls provided by the policy. The policy should address the following areas:

  • Auditing and accountability
  • Server role classification
  • Network service
  • Configuration management
  • Host security
  • Incident response
  • Business continuity
  • Training

Auditing and accountability

The auditing and accountability portion has to do with the responsibilities of administrators, management, and users of the virtual environment. It is important to specify administrative roles such as backup operators, host administrators, virtual network administrators, server users, and self-service portal users. For smaller organizations, a few people may fill these roles, but larger organizations will specify greater separation of duties between roles. Clearly identify the server role classifications that each user role can access.

Furthermore, this section should indicate that administrative actions will be logged and audited. Logs should be redundant, backed up regularly, and applications should be available for audit log searching and review.
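As a minimal sketch, administrative actions might be captured as structured audit records like the following. The field names and roles are assumptions for illustration, not a prescribed schema:

```python
# Hypothetical sketch: structured audit records for administrative actions
# in the virtual environment, as the policy section above requires.
import datetime
import json

def audit_record(actor, role, action, target):
    """Build one auditable log entry. In practice, entries like this would
    be shipped to redundant, regularly backed-up log storage."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "role": role,      # e.g. host administrator, backup operator
        "action": action,
        "target": target,
    })

entry = audit_record("asmith", "host_administrator", "vm_migrate", "vm-042")
print(entry)
```

Emitting records as structured JSON, rather than free text, is what makes the "audit log searching and review" requirement practical to satisfy.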

Server role classification

Virtual machines, or guests, serve different roles such as a file server, domain controller, email server, remote access server, or database server. Some roles are more sensitive than others, and thus they should be treated differently. Roles can be determined by the applications or data a server hosts, as well as its criticality and value.

A series of classification levels such as standard, secure, and highly secure should be specified. The number of levels you have is determined by your organization’s business rules. For each classification, clearly state the server roles and information types that would fall into the category and the level of authentication, segmentation, encryption, and integrity verification necessary. For example, for segmentation, virtual machines classified as highly secure must be located on physically distinct hosts and separate logical networks, and backup media should be allocated solely for use on highly secure systems.
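One way to make such a policy checkable is to map each classification to its required controls and validate a proposed deployment against that map. The control names below are illustrative assumptions:

```python
# Hypothetical sketch: map each server role classification to the controls
# the policy requires, so a proposed deployment can be validated.
CLASSIFICATION_CONTROLS = {
    "standard":      {"mfa": False, "encryption": False, "dedicated_host": False},
    "secure":        {"mfa": True,  "encryption": True,  "dedicated_host": False},
    "highly_secure": {"mfa": True,  "encryption": True,  "dedicated_host": True},
}

def missing_controls(classification, deployed):
    """Return required controls the deployment does not yet satisfy."""
    required = CLASSIFICATION_CONTROLS[classification]
    return sorted(c for c, needed in required.items()
                  if needed and not deployed.get(c, False))

# A highly secure VM with MFA and encryption still lacks a dedicated host.
print(missing_controls("highly_secure", {"mfa": True, "encryption": True}))
# ['dedicated_host']
```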

Network service

The network service section details how remote access to hosts and virtual machines will be conducted or if it is allowed at all. It specifies Access Control List (ACL) requirements and how logical addresses will be allocated, distributed, and managed for virtual hosts and machines. Resource limits for hosts should be specified so that hosts are not overburdened with virtual machines causing performance degradation. Indicate the need for service accounts and least privilege configuration of service account privileges, i.e., configuring service accounts with the bare minimum privileges necessary for the service to function.

Configuration management

The configuration management section is concerned with maintaining the consistency of the virtual environment. This section should specify the types of changes that require approval and how each type is approved. Any exemptions to the approval process are listed. Some change types include virtual network creation, modification, or removal, host addition or removal, host hardware modification, or virtual machine hardware modification.

Approval stages should be specified including the roles or groups responsible for approving change requests and the types of change requests that can be approved by each role or group. List how authorization will take place and where and how change authorizations are tracked and stored.

The configuration management section should also include statements on how violations of the configuration management policy will be dealt with and how actual changes are validated against logged changes. This includes any auditing that is required for change controls.

Host security

The host security section defines where hosts will be stored, how hosts are monitored, and how physical and remote access to the hosts is controlled. The location of hosts is important because hosts need to be available and secure. The location determines the level of network connectivity such as redundant network links and internet connectivity as well as power redundancy, power availability and cooling.

The next part of host security deals with how the hosts are monitored. Specify the types of monitoring that will take place. For example, physical monitoring may use closed circuit cameras that archive footage to DVD. You might specify logging of successful and failed login attempts to the host servers and directory modification on storage devices containing virtual machine files or configuration data.

Incident response

This section should detail what should happen if the virtual environment is compromised in some way. It should explain how information security incidents in the virtual environment are evaluated and how they are reported. It then defines the persons and groups responsible for controlling the issue and what constitutes problem resolution.

Your business may have an incident response plan in place already. This plan should be consulted when constructing this section so that it is aligned with the main information security policy. This section should still be included even if an incident response plan exists because the virtual environment can differ in how incidents are resolved and in what constitutes an incident.

Business continuity

Virtual environments vary greatly in Business Continuity (BC) methodologies. Since virtual machines are stored as files, they can be easily moved around. Business continuity methodologies, therefore, take this into account in specifying how machines will be brought back into production when significant outages or disasters occur.

The business continuity section should specify what should be backed up and how it would be restored in the case of an emergency. Levels of emergency should be stipulated as well as the groups responsible for coordinating BC efforts. The section should also specify if resources such as a cold, warm, or hot site are necessary for BC.

Training

The training section should clearly define what skills a person should have to fulfill the roles specified in the auditing and accountability section and how those skills will be taught and measured. It is important for those working on the environment to be trained in how to not only perform their job duties but to perform them in a secure manner.

The training section should specify ongoing assessment of training gaps and areas of focus for team members including how often training should occur, whether this will be handled internally or outsourced, and how training budgets will be determined. If training is to occur in house, curriculum evaluation and follow-up reviews should be specified in the training portion. In this way, when technology changes, the team’s skills will be kept up to date as well.

The virtualization security policy contains many elements from other organizational security policies, but it is specifically targeted to virtual hosts, the machines they contain, and the tools that manage them. It is important that virtual environments have such a policy because existing security controls do not adequately address the risks associated with using virtual machines. If you do not have a policy in place yet you are encouraged to develop one before your virtual environment is implemented. This policy will resolve security ambiguities associated with managing the environment, and it will ensure a consistent approach to information security within your organization if those affected by the policy are properly trained and required to adhere to it.


Business Continuity and Backups in the Virtual World

Virtualization has really become a mainstream technology and an effective way for organizations to reduce costs. As mentioned in previous articles, it simplifies processes but also creates new information security risks to handle. This article is concerned with business continuity and how virtualization can create many new opportunities and efficiencies in your business continuity plan. This is the third article in a series on virtualization.

Specifically, three elements of business continuity can be enhanced through virtualization: hot, warm, and cold sites; snapshots; and testing. If you have not considered virtualization in your business continuity plan, I hope you will do so after reading this article. If you have questions on how to implement such a service, please contact us and we will be happy to assist you.

Critical security considerations for server virtualization

Virtualization is an excellent way to make better use of existing IT resources by utilizing them for multiple tasks.  It also allows hardware and software to be further abstracted so that hardware compatibility becomes less of an issue.  Virtual machines can be highly specialized since an entire physical box does not need to be allocated to each one.  This reduces the potential conflicts of running multiple applications on a single server and minimizes the impact of changes or upgrades.  However, virtualization presents a new set of risks to organizations adopting it, and it is vital to be aware of these risks and of information security risk management strategies when implementing a virtualization strategy.

Critical security considerations include:

  • Securing virtual hard disks
  • Reducing the attack surface for hosts
  • Classifying virtual machines
  • Involving information security personnel throughout the lifecycle
  • Segmenting traffic for administration and storage

Virtualization business continuity with snapshots

Snapshots are a valuable feature that virtualization offers for business continuity. Organizations can create point-in-time recovery points numerous times a day by taking snapshots. Snapshots record all changes to a virtual machine so that the machine can be restored to the state at which the snapshot was taken. This is especially important when making changes to a virtual server because changes do not always work as planned. If a change impacts a system negatively, the virtual server can quickly be rolled back to its prior state by using snapshots.

Snapshots also improve the Recovery Point Objective (RPO) by enabling the organization to recover a system to a point in time after the daily backup was taken. Many systems may only be backed up once a day, but snapshots can be taken throughout the day. If a failure occurs during the day, you can recover to the latest snapshot and lose less data than if you had to recover all the way back to the previous night’s backup. This assumes, of course, that the snapshots have not been damaged along with the system.
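The arithmetic behind this benefit can be sketched as follows, assuming a nightly backup at 02:00 and snapshots at illustrative times:

```python
# Hypothetical sketch: given a failure time, compare the data-loss window
# when recovering from intraday snapshots versus last night's backup only.
def data_loss_hours(failure_hour, snapshot_hours, backup_hour=2):
    """Hours of lost data using the most recent usable recovery point.
    All hours are on a 24-hour clock within the same day."""
    candidates = [h for h in snapshot_hours + [backup_hour] if h <= failure_hour]
    return failure_hour - max(candidates)

# Snapshots taken at 06:00, 10:00, and 14:00; failure at 16:00.
print(data_loss_hours(16, [6, 10, 14]))  # 2 hours lost
print(data_loss_hours(16, []))           # 14 hours lost (02:00 backup only)
```

The more frequently snapshots are taken, the smaller the worst-case data-loss window, which is exactly what a tighter RPO demands.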