Cloudsizing: Finding the right fit for your cloud

The maturation of the cloud is fascinating to watch. As it continues to adapt, it provides more opportunities for companies and consumers to leverage the vast computing and storage power of machines around the world. Whether those resources are housed in a corporate data center or a dedicated hosting facility as part of a private cloud, or delivered through third-party public cloud offerings, the cloud is most likely part of your everyday life. It is also one of the biggest technology growth areas, offering companies ways to save money and adapt more readily to change.

There are many options for cloud consumers, those utilizing or wishing to utilize cloud services. A large differentiator among cloud types lies in who owns and operates the cloud infrastructure, and three main types of cloud, private, public and hybrid, are used to support differing business needs.

Private cloud

Private clouds allow business units to utilize cloud services without needing direct capital investment. The organization makes the investment in the underlying technology resources and support personnel to maintain the equipment and offers cloud resources to business units as a service.

Private cloud resources are not shared with other companies, resulting in predictable performance and optimized workloads, nor are they restricted by the requirements of other clients. This allows private cloud services to be customized and tailored to the organization’s needs.

There are disadvantages to utilizing a private cloud. The main one is the large capital investment required to implement and expand it. This makes the private cloud less flexible than public cloud offerings and makes it more difficult for organizations to test the waters by deploying pilot or prototype systems. Instead, prototypes and pilots must make a business case with realistic expectations of long-term revenue to cover capital expenses. However, an organization can set up a private cloud using outside hosted resources. The difference between a hosted private cloud and a public cloud is that the private cloud resources are dedicated to you, not shared among multiple companies.

Public cloud

Public clouds, on the other hand, are what most end users think of when the word “cloud” is mentioned. These clouds are owned and operated by an outside entity, and services are provided on a subscription basis or sometimes for free. Cloud consumers can purchase only the services they need, and they can easily scale their cloud resources up or down simply by purchasing more or less. Public cloud services can also be made available very quickly because the infrastructure is already in place. This is important for companies that need to respond rapidly to demand. In some cases, public cloud services can be provisioned in hours or minutes, compared with days or weeks of procurement time in private clouds.

Many public cloud services are designed for a specific use case that may or may not fit your own organizational use case. Public cloud providers do this in order to better manage their solution and reduce complexity of upgrades and maintenance. Public cloud services can be customized but this tends to increase the cost of the service and reduce service portability or the ability of the cloud consumer to migrate from one cloud provider to another.

Since public clouds are operated by a third party, consumers of the cloud do not have the same level of visibility into the underlying technology, processes and procedures that go into providing those services. This makes it more difficult to ensure that services in the cloud meet organizational compliance requirements. This is especially crucial when a data breach occurs and the organization must investigate and notify its customers. Public cloud contracts may not specify notification and compliance requirements leading to issues such as lack of timely notification of a data breach, inability to identify breach scope or other required data, and fines and sanctions against the cloud consumer.

Hybrid cloud

Both of these cloud models are powerful methods for providing organizational technology services but not all companies neatly fit into one of these two categories. This has led to the rise of the hybrid cloud. The hybrid cloud extends the private cloud to the public cloud. This adds the flexibility private clouds lack but still allows the organization to manage the data, processes and controls in the way they do with a purely private cloud.

In a hybrid cloud, customizations can be integrated on the private segment while standardized, out-of-the-box, portions of a solution are located on the public segment. This allows the organization to tailor the solution to their needs without limiting their ability to move the standardized elements to another cloud vendor or to spread the workload and service availability risk among multiple cloud vendors.

One significant benefit of the hybrid cloud is the ability to utilize existing infrastructure and to migrate portions of a service to public segments over time. This reduces the disruption a large change would have on system availability and utilization, which can increase productivity. The front end of a system can stay the same for users while back-end components are moved around the hybrid cloud.

The piece that makes this all work is a hybrid cloud service and associated management tools such as Dell Cloud Manager. These tools centralize the administration of the hybrid cloud and interface with the public and private segments to enforce defined rule sets and establish communication and functionality between the components.

Wrapping it up

The hybrid cloud offers many of the advantages of both public and private clouds. This is not to say that the hybrid cloud is the best solution for every scenario; many services may still find that a purely private or public solution meets their needs. The key strength of the hybrid cloud is its fit for the myriad solutions that have yet to make their way to the cloud due to one objection or another, or for those that had to settle for a type that did not truly meet their needs. With hybrid in the mix, cloud services can be deployed and utilized more ubiquitously, resulting in increased agility, closer alignment to operational objectives, and a better match of technology expenses to revenues.


The missing leg – integrity in the CIA triad

Information security is often described using the CIA Triad. The CIA stands for Confidentiality, Integrity, and Availability and these are the three elements of data that information security tries to protect. If we look at the CIA triad from the attacker’s viewpoint, they would seek to compromise confidentiality by stealing data, integrity by manipulating data and availability by deleting data or taking down the systems that host the data.

By far, most attacks have focused on disrupting confidentiality or availability, so defense mechanisms and training have also been focused there. The number of data breaches has skyrocketed, and there is a flourishing market for stolen data including personal health information, credit card numbers, Social Security numbers, advertising lists, and proprietary technology. We also see many attacks on availability through denial of service.

Integrity attacks are much less commonplace, but they still represent a threat. Organizations must protect more than just confidentiality to be secure (see Overly and Howell’s Myth #3).

So what does an attack on integrity look like? Let’s look at three examples:

  1. Enticing an opponent to make a bad decision

There is a software development saying that goes, “Garbage in, garbage out,” meaning if you let junk data into your program, it will produce junk for output. Similarly, junk data used in decision making will result in bad decisions. Integrity attacks of this sort aim to sabotage competitors or opponents by poisoning information stores that their competitors use to make critical decisions.
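One defense against this kind of poisoning is to refuse to act on any critical value until independent sources roughly agree. Here is a minimal, hedged sketch of that idea; the function name, tolerance, and numbers are illustrative, not from the article:

```python
# Hedged sketch: cross-check a critical input against independent sources
# before it feeds a decision. All names and thresholds are illustrative.

def cross_validate(readings, tolerance=0.05):
    """Accept a value only if independent sources roughly agree.

    readings: numeric values for the same fact, each from a different
    source. Returns the median if the spread stays within `tolerance`
    of the median; otherwise raises to force human review.
    """
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    spread = max(readings) - min(readings)
    if median and spread / abs(median) > tolerance:
        raise ValueError("sources disagree; possible data poisoning")
    return median

# A poisoned feed stands out when honest sources agree:
print(cross_validate([102.0, 101.5, 102.3]))   # accepted: 102.0
try:
    cross_validate([102.0, 101.5, 250.0])      # one poisoned source
except ValueError as err:
    print(err)
```

The point of the sketch is the failure mode: when sources disagree, the system stops and asks a human rather than quietly consuming garbage.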

  2. Exploiting temporary data inconsistencies

Attackers modify the time on a Network Time Protocol server so that door access control systems think it is the middle of the day instead of the middle of the night. Consequently, the doors unlock or require only a pin instead of multi-factor authentication.

In another example, thieves momentarily inflate the balance of accounts before performing a wire transfer. Or stock ticker symbols are changed in a trading company database, resulting in many incorrect stock transactions and inflated or deflated stock valuations by the market.
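A system that depends on time can defend itself by refusing to trust any single clock. The sketch below illustrates the idea under stated assumptions; the function name, skew threshold, and timestamps are hypothetical, not drawn from any real access control product:

```python
# Hedged sketch: an access-control check that fails closed when
# independent time sources disagree. Names/thresholds are illustrative.
from datetime import datetime, timezone, timedelta

def trusted_time(sources, max_skew=timedelta(seconds=30)):
    """Return a timestamp only if independent clocks agree.

    sources: datetime readings from separate time servers or a local
    hardware clock. If any pair disagrees by more than max_skew,
    raise instead of guessing (fail closed).
    """
    for a in sources:
        for b in sources:
            if abs(a - b) > max_skew:
                raise RuntimeError("clock sources disagree; failing closed")
    return min(sources)

night = datetime(2015, 9, 1, 2, 0, tzinfo=timezone.utc)  # 2 a.m.
spoofed = night + timedelta(hours=12)                    # lying NTP server
try:
    trusted_time([night, night + timedelta(seconds=2), spoofed])
except RuntimeError as err:
    print(err)   # the tampered source is detected
```

Failing closed means the doors stay in their nighttime security posture until an operator resolves the discrepancy.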

  3. Online vandalism

Hacktivists or cyber activists often employ online vandalism to spread their message and others vandalize sites for fun or to hurt brand image. For example, the FBI issued a warning in April that ISIL was mass-defacing WordPress websites using known vulnerabilities.

The good news is that many of the technical controls organizations already have in place to protect the confidentiality and availability of data can also be used to protect its integrity since attackers must exploit similar vulnerabilities or access the same systems on which they perform other attacks. However, procedures and training may need to be updated so that employees are aware of such threats and how to recognize them. Furthermore, the data that goes into critical decisions should be validated through alternate sources. Consider the following:

  • Require application security assessments to address integrity as well as confidentiality and availability.
  • Conduct a risk analysis of the loss of data integrity for key information systems and use these risk calculations to ensure that controls adequately address risk levels.
  • Update security awareness training to include sections on data integrity, validation and incident reporting.
  • Ensure that security policies and procedures address integrity as well as confidentiality and availability.
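Many of the technical controls mentioned above come down to detecting unauthorized modification. A standard building block for this is a keyed hash (HMAC); the sketch below is a minimal illustration, with the key and record as placeholders:

```python
# Hedged sketch: tamper detection with a keyed hash (HMAC-SHA256).
# The key and record contents are illustrative placeholders.
import hashlib
import hmac

SECRET_KEY = b"example-key-stored-separately-from-the-data"

def seal(record: bytes) -> str:
    """Compute an integrity tag over a record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(seal(record), tag)

record = b"acct=12345;balance=100.00"
tag = seal(record)

print(verify(record, tag))                          # True: untouched
print(verify(b"acct=12345;balance=9100.00", tag))   # False: tampered
```

An attacker who can change the record but not obtain the key cannot produce a matching tag, so silent manipulation of the data is caught on the next verification.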


Regaining your anonymity online

Anonymity has been a longstanding hallmark of the Internet but you should no longer assume that your online activities are anonymous.

A vast amount of information is collected as you use the Internet. Search engines store the keywords you search for and the pages you visit, browsers store web history, which may be synced with the cloud, and websites store information about your activities on their sites. Your IP address provides information on your general location, and many applications can track your location data, obtained from your IP address or from GPS.

It takes a concerted effort to regain your anonymity. Anonymity must be protected from end-to-end starting with the operating system and then progressing to your network address, browser and search engine.

Operating System

Last month I wrote about the privacy features and flaws of Windows 10. What many don’t realize is that their operating system is collecting information on their activities which could be retrieved by malware or published to the cloud for data mining. This can be avoided by using an operating system that runs off a CD or DVD. Such systems, called “live” operating systems, run in memory, a storage component of your computer that retains data only while the computer is powered on. This data is not retained when you shut down the computer or restart it. CDs or DVDs are typically read-only, meaning that data cannot be written to them. Files that you are working on can be saved to a flash drive but operating system logs of activity are not stored with live operating systems. Similarly, spyware, malware and other junk cannot install on a live operating system. This further protects you against threats to your anonymity.

Network Address

Each device that connects to the Internet identifies itself with a unique IP address. This address can indicate your location and it can be used to correlate activity collected from multiple sources in order to build a profile on you. One method of obscuring this address is to use a proxy. A proxy requests Internet resources on your behalf and then presents them to you so that the requests appear to originate from the proxy rather than you.

However, one must be careful in using proxies because not all are intended for anonymity. Some send a “forwarded for” header that indicates where the request originated, and others send data in the clear so that it can potentially be intercepted. Choose a proxy that uses SSL encryption and does not send HTTP “forwarded for” headers. Another limitation of proxies is that attackers see them as a potential target because of the high volume of traffic traversing them. Compromised proxy servers could put your information in the hands of cyber criminals.
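The difference between a transparent and an anonymizing proxy can be shown in a few lines. This is a simplified model, not a real proxy implementation; the dict-based request and addresses are illustrative:

```python
# Hedged sketch: a "transparent" proxy appends an X-Forwarded-For header
# that reveals the true client address; an anonymizing proxy does not.
# The request model here is a simplification for illustration.

def forward(request, client_ip, anonymizing):
    """Simulate a proxy relaying a request to the destination server."""
    relayed = dict(request)  # copy; the client's original is untouched
    if not anonymizing:
        # Transparent proxies disclose the origin to the destination.
        relayed["X-Forwarded-For"] = client_ip
    return relayed

req = {"Host": "example.com", "User-Agent": "Mozilla/5.0"}
print(forward(req, "203.0.113.7", anonymizing=False))  # origin exposed
print(forward(req, "203.0.113.7", anonymizing=True))   # origin hidden
```

From the destination server's point of view, only the anonymizing case leaves the proxy's own address as the apparent source of the traffic.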

The Onion Router (TOR) extends the proxy model by bouncing connections between many computers within its network and then delivering the final request from one of many endpoints. Data within TOR is encrypted using SSL. It is still possible for a TOR server to be compromised but that server would only see a small portion of your traffic or possibly none at all depending on how your traffic was routed through the TOR network. The downside of using TOR is that connections are often slow due to the latency incurred by traversing so many computers.
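The "onion" idea itself is simple to sketch: the request is wrapped in one layer per relay, so each relay learns only the next hop. The code below is a toy model of the layering only; real TOR encrypts each layer, which plain tuples obviously do not:

```python
# Hedged sketch of onion routing's layering. Each relay peels one layer
# and sees only the next hop, never the full path. No real encryption
# is performed here; nesting stands in for it, for illustration only.

def wrap(message, relays):
    """Wrap a message in one layer per relay, innermost layer last."""
    packet = message
    for relay in reversed(relays):
        packet = (relay, packet)
    return packet

def peel(packet):
    """A relay opens its layer: it learns the next hop and nothing more."""
    next_hop, inner = packet
    return next_hop, inner

pkt = wrap("GET example.com", ["relay1", "relay2", "relay3"])
hop, pkt = peel(pkt)
print(hop)    # relay1: the entry node sees only the second relay next
hop, pkt = peel(pkt)
hop, pkt = peel(pkt)
print(pkt)    # only after the last layer is the request itself visible
```

This is why a single compromised relay sees so little: it holds one layer, not the route from you to the destination.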


Browser

The most common browsers are Internet Explorer, Mozilla Firefox and Google Chrome. Internet Explorer, or its replacement, Edge, is the default browser on Windows machines. Linux variants often come equipped with either Firefox or Chrome, depending on the distribution. Each of these browsers has its share of privacy flaws, but your choice of browser is much less important than the privacy settings you select within it. Restrict cookies and set your browser security settings to the highest level that still allows you to browse with ease. Many browsers also include a private browsing mode, which is very useful for preventing the browser from collecting information on your activities while in this mode.

Search Engine

Most search engines collect data on your browsing habits so they can target ads to you and improve their search rankings. Some search engines share or sell this information with other parties. However, DuckDuckGo is a search engine that does none of these things, and it is a valuable tool for searching the web anonymously.

These technologies and techniques can all be used to protect your anonymity, but they provide the best protection when used together. It may not be feasible for you to use all of them. For example, you may need to use an application while you browse, making a live operating system impractical, or you might want to test searches in a specific search engine. I encourage you to use as many as possible. You may additionally use a virtual private network (VPN) to connect to your workplace or other common resources so that traffic between your computer and the VPN is encrypted, and you can use wiping tools to more effectively erase data from your machine after deleting it. However, a discussion of these tools will have to wait for another article.


Protecting consumer data in the Internet of Things

The Internet community grows larger every day as more and more devices attach to it. These devices increasingly include not just computing devices but everyday things such as HVAC systems, lighting, pumps, and even animals. We are at the beginning of a new age where items in the physical world can be monitored, controlled, automated and interacted with in ways never seen before. We refer to this as the Internet of Things (IoT), and many companies are looking at ways to best utilize this technology.

IoT is already used across a wide range of industries. Sensors embedded in components can report when they fail or are about to fail. IoT is used to control the color and illumination level of lights and in other home automation scenarios. Some have already combined IoT technologies, such as the video game Chariot, which can change the lighting in your room according to what is happening in the game.

IoT allows those using augmented reality tools such as Google Glass and Microsoft HoloLens to obtain more information about the things around them. Robots can also take advantage of this data to better interact with their surroundings. This is certainly an exciting time, but we must also be cautious: as we push the boundaries of “what can be done,” we need to ask “what should be done.”

To understand the security behind IoT, we must first understand that IoT is really about collecting more data from more devices. That data may be used to perform all these wonderful things, but it can seriously diminish privacy unless the use of such data is governed.

Consumers may help drive change

The first item of concern is data ownership. Who owns the data that is collected by so many devices? Is it the person who owns the device, the person who is wearing the device, the person who designed the device, or all those interacting with the device? For example, if a rental car collects data on how fast a person drives, is that the data of the driver, rental company, software/hardware vendor, or the state? If currency is tagged with location data, is that data the property of the treasury and does the user of the money have any say in how that data is used? A serious dialogue is needed to resolve these issues.

Second, securing IoT will require more specific privacy policies and pressure from consumers for companies to use their data only for providing the services that they opt in for. Furthermore, private data should be removed once the need for it in providing these services has expired. On the flip side, companies can get ahead of this curve and generate goodwill and a positive customer experience by enacting such policies and demonstrating adherence to their customers.

Of course, this assumes that the average user cares about his privacy and data protection. The readers of this article are most likely more concerned with their privacy than the average Internet user, and this is precisely the problem, because a critical mass is needed to effect change. Companies will sell the products and services they see are in demand by the majority, while others may or may not be served as a niche market. Overall change will not happen unless enough people express their concern and influence companies by selecting products and services based on their ability to protect privacy.

Action needs to take place now while IoT is still in its infancy. As we have seen time and time again, it is much harder to protect information after it has been collected, sold and distributed, and possibly stolen by other individuals.


A breach is found. Now whom do I tell?

In 2014, the Identity Theft Resource Center (ITRC) tracked 783 data security breaches with 85,611,528 confirmed records exposed. This year appears even more dismal. The ITRC Data Breach Reports1 for July 7, 2015, captured 411 data incidents with 117,678,050 confirmed records at risk. Because data breaches are a common occurrence in today’s information security threat landscape, it’s going to become de rigueur for companies to pump up security preparedness within their incident response plan.

“The plan cannot simply be static and gather dust; it requires upkeep. The incident response plan should change as requirements and environments change.”

— Edwin Covert, Norse Dark Matters

Bev Robb and Eric Vanderburg, two information security influencers, discuss which protocols companies should consider when a breach occurs.

Bev Robb

Robb: Attempts to contact a breached company (if the breach is unknown to them) often fail because there is no direct point of contact for reporting it. For example, many times the Darkweb will discover vulnerabilities, exploit and extract the data, and share or sell the stolen data before anyone outside of the Darkweb is aware that a breach has occurred. Aside from having a good incident response plan, I believe that there should also be a point of contact for reporting a breach directly to the company, and it should be easy to find and openly accessible from the company website.

Eric Vanderburg

Vanderburg: You bring up a good point, and it is quite relevant considering that more than 69 percent of breaches are discovered by outsiders.2 I wonder how often a breach is discovered but not reported because there is no easily discernible way to contact the organization. When organizations are discovering breaches weeks or months after the fact3 rather than days, it is imperative that they make use of such a system so that breaches can be quickly reported. Of course, I don’t mean to imply that organizations should rely on others to notify them of their own breaches and forgo their own due diligence in breach detection.

So, with that said, the questions I see are who or which group should be the contact within an organization, how that person or entity can be contacted, and how reported breaches should be handled and investigated.

Robb: I think that before we look at who should be delegated as the breach contact within a company, we need to address the incident response plan. How many companies are actually proactively educating IT staff and their employees about breach response workflows? Is the incident response plan just hanging out and gathering dust, or is the company conducting regular discussions, scenarios, and incident response drills? Once a solid foundation for a comprehensive incident response plan is established, the plan should include a communications function indicating points of contact within the organization, as well as the contacts that will handle external responses.

Vanderburg: The aforementioned elements should be integrated into the incident response plan and should be part of incident planning discussions. For example, the roles and responsibilities section would name an incident reporting person, and the validation section would describe how a reported incident is validated and whether it will be classified as an incident, resulting in the activation of further elements of the plan. As you mentioned, training should include identifying the contact person. I have led awareness training sessions, and often I will ask the group members, “Whom do you contact to report an incident?” I sometimes get a variety of amusing responses, but then I point out the person or group they should contact, and we walk through indicators of an incident and each person’s responsibility in protecting against data breaches.

Robb: Should the company have its incident response plan “in-house” or retain the services of a breach resolution partner?

Vanderburg: The incident response plan is an organizational document much like other policies and procedures so it should ultimately go through review from senior management and reside within the organization. However, it can be quite helpful to bring in experts in developing the plan so that best practices are implemented. Also, an organization may not have the necessary response resources available to them in-house so it is best to identify a third party that is willing and able to perform those activities and to document this in the plan. In such cases, the incident response plan or subsections of it would also reside with the third-party incident responder.

Robb: After visiting 10 random company websites from the Alexa top 100, I found no direct point of contact for reporting a data breach at any of these companies. Where should this be implemented in the incident response plan?

Vanderburg: Typically, a generic email account is established, and messages to it are forwarded on to one or more people named in the roles and responsibilities section of the incident response plan. If multiple people receive the message, the roles and responsibilities section should specify who takes the ticket. For example, there may be a rotation where someone is responsible each week, or it could be based on which shift a person works. In other cases, there is a primary contact and a secondary or “deputy” contact for when the primary is unavailable.
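A weekly rotation with a deputy fallback is easy to make deterministic. The sketch below is one possible scheme under stated assumptions; the responder names and the use of ISO week numbers are illustrative choices, not from the discussion:

```python
# Hedged sketch: route a breach-report mailbox to the on-duty responder
# by weekly rotation, falling back to a standing deputy. All names are
# illustrative placeholders.
from datetime import date

RESPONDERS = ["alice", "bob", "carol"]   # roles-and-responsibilities list
DEPUTY = "dave"                          # standing secondary contact

def on_duty(today: date, available=None) -> str:
    """Pick this week's primary contact; fall back to the deputy.

    available: optional set of responders currently reachable. If the
    scheduled primary is not in it, the deputy takes the ticket.
    """
    week = today.isocalendar()[1]            # ISO week number drives rotation
    primary = RESPONDERS[week % len(RESPONDERS)]
    if available is not None and primary not in available:
        return DEPUTY
    return primary

print(on_duty(date(2015, 7, 7)))                  # that week's primary
print(on_duty(date(2015, 7, 7), available=set())) # nobody reachable -> deputy
```

Because the schedule is a pure function of the date, the report-intake system and the audit trail always agree on who owned a given ticket.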

Lastly, metrics should be tracked on response time for breach reports to foster continuous improvement and the actions should be audited to ensure consistency.


There is little doubt that a well-considered, up-to-date and frequently tested incident response plan is a critical part of any company’s information security program. It is also recommended that companies revisit their website navigation design and implement a data breach “point of contact” in the primary navigation area.


Breach Defense Playbook: Incident Response Readiness (Part 1)

Breach Defense Playbook: Incident Response Readiness (Part 2)

Dell SecureWorks: Incident Response Plan: 3 Most Important Elements

How To Build a Data Breach Response Plan: 5 Great Resources

Tips for Starting a Security Incident Response Program

Veracode: 5 Best Practices in Data Breach Incident Response


1 IDT911. “ITRC Data Breach Reports.” ITRC, 2015. Web. 7 July 2015.
2 Heberlein, T. (2015, May 29). What Percent of Breaches By Outsiders Are Not Detected By Organizations? [Web log post]. Retrieved July 8, 2015.
3 Drinkwater, D. (2014, April 14). Data Breach Discovery Takes Weeks or Months [Web log post]. Retrieved July 8, 2015.


Point/counterpoint: Breach response and information sharing

Some breaches require notification such as those involving patient data or customer information, but sharing is optional. Of course, notification is just one form of information sharing. For example, February’s executive order encourages private sector companies to share information on cybersecurity threats.

There are advantages and disadvantages to sharing information with others, and here to talk about them are two information security influencers, Eric Vanderburg and Bev Robb. Vanderburg will argue for information sharing, and Robb will discuss potential sharing woes that may arise from government and private sector collaboration.

Eric Vanderburg

Vanderburg: Attackers seek to maximize their return on the development or purchase of new exploits by targeting as many companies as possible. Additionally, just like crimes outside of cyberspace, cyber-criminals have established habits and proven methods that they rely upon to steal data or take over or destroy systems.

The resources of any individual company or person are limited. It takes coordination in order to combat today’s threats. It is essential to protect your company against data breaches but prevention alone does not stop attackers from trying again. The information shared can help track down and catch the bad guys.

I could argue the benefits all day but the main decision point is whether the benefits outweigh the threats so let’s look at some.

Robb: Many information-sharing initiatives proposed by the U.S. government make it easy for the private sector to share information with the government, but not vice versa. “You scratch my back and I’ll scratch yours” may not apply.

Though I am not completely against information sharing between government and companies in the private sector, some concerns are:

  • The federal government’s track record in the realm of government data breaches and its ability to safeguard data.
  • Private sector companies that report crimes to the government rarely receive timely intelligence back (regarding threat actors).

Though it does take coordination and information sharing within the information security community to combat the current threat landscape, there is still much room for improvement.

Information overload

Security professionals reading this may be feeling overwhelmed already by the information on vulnerabilities and threats they receive each day. So why should we burden them with even more information?

Vanderburg: It is actually for that very reason that they need this information. There are too many threats out there, and organizations need to know which threats are credible and which vulnerabilities are more likely to be exploited. Information sharing can provide a filter to the vast amount of information out there so that security practitioners can properly prioritize.

Robb: The government is not a knight in shining armor and is already steeped in so much data and myriad software programs that it would be difficult to analyze threat data without “commonly shared tools” to aggregate and analyze it all.

Who decides which threats are credible and which vulnerabilities are more likely to be exploited? If it is the government that makes this decision, what is the ETA before the private sector is notified? My crystal ball tells me that the private sector will get the short end of the stick again, still waiting for actionable intelligence to arrive.

Damage to reputation

Vanderburg: An organization’s reputation can also be damaged by what it withholds. We see this especially when an incident occurs that later turns out to be much larger in scope than originally thought. At this point, the damage is much greater, and public opinion is set against the company because it took so long to identify the threat and act on it. However, if information on the incident had been shared, similar incidents could have offered more insight into which systems should be analyzed and which related threats might require investigation. This could potentially reveal and resolve other threats sooner, not only minimizing the damage to the company and its customers but also preserving its reputation.

Robb: We’ve all learned over time that government often takes an exceptionally long time to identify its own security threats and to act upon them. With most government data breaches shrouded in secrecy, there is often minuscule acknowledgement of any accountability for weak security practices.

Information to attackers

Vanderburg: Sharing information publicizes the successful attack vectors used in an attack. If this information is shared before the vulnerability has been remediated, other attackers could exploit the same weakness. However, attackers already share information on successful attacks with others. It is likely other attackers will find this information not through security information sharing networks but rather through their own communities. As a general rule, security through obscurity (something is secure because it is unknown) is not a viable strategy because such things generally stay unknown for a short amount of time.

Robb: Deep web hacking communities and forums abound with information on exploits, hacking tutorials, intelligence on business websites (many vulnerable to SQL injection), and the like. Hackers are frequently applauded and esteemed when they share knowledge of data breaches they participated in (or targets they are currently pursuing). They do not need to pay attention to “breach information sharing” because most of these bad boys just want to quickly monetize their hacks. You can bet your bottom dollar that they will find the means to infiltrate their target(s) with or without any knowledge of “collaborative threat intel.”


Though there is the sharing of threat intelligence within industry-specific sectors such as the Cyber Threat Alliance, ES-ISAC (Electricity Sector Information Sharing and Analysis Center) and NERC (North American Electric Reliability Corporation) – sharing threat intelligence is still in its infancy.

When you locate a data breach, what steps do you take to report it? Who do you go to? How do you tell a company that they’ve been breached if they are unaware? Curious? Be sure to check back next month for another Vanderburg-Robb data breach conversation.


Investigating the negative SEO threat

I read a recent Power More article by Mark Schaefer that said SEO content today is about insight rather than quality, and it reminded me of a case I worked on. As many of you know, one of the many hats I wear is that of a cybersecurity private investigator. A client called me to report that his website was no longer showing up in Google search results. The cause became apparent shortly after I started investigating. Someone had built hundreds of websites that listed his key search terms frequently. Not only did these sites rank higher than my client’s site, but they also warned users that my client’s site was allegedly defrauding its customers. We were dealing with a negative SEO threat.

The world of Search Engine Optimization (SEO) is one where the rules are constantly changing. The big search providers change their criteria for ranking sites, and individuals who understand SEO modify web pages and promotion processes so their sites can appear in more search results and gain more business. Overall, the process is a positive one, resulting in better search results and more refined websites. However, some SEO tactics are not so benign.

For example, search engines used to rank sites based on how many times a site was linked to by other sites. In response, some companies created many bogus sites with links back to their own site to influence their search rank. Much of the blog spam that contains embedded links uses this SEO tactic, and the end result was an SEO bubble of sorts in which some site ranks were greatly inflated. Search users had to wade through too much garbage to find real content, so the search providers changed their strategy and began assigning a quality ranking to inbound links. Sites were penalized if many low-quality links referenced them, in an effort to discourage the tactic, but this also gave less idealistic individuals the tools to attack others by performing actions that penalize competitors or other targets.
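The shift from raw link counts to quality-weighted links can be sketched in a few lines. This is a deliberately simplified illustration, not any search engine's actual algorithm; all site names and quality scores here are made up:

```python
# Illustrative sketch of why raw inbound-link counts are gameable, and how
# weighting links by source quality changes the outcome.

def naive_rank(inbound_links):
    """Rank by raw link count: every inbound link counts equally."""
    return len(inbound_links)

def quality_weighted_rank(inbound_links, quality):
    """Rank by the summed quality score (0.0-1.0) of each linking site."""
    return sum(quality.get(src, 0.0) for src in inbound_links)

# A legitimate site with a few reputable inbound links...
legit_links = ["news-site.example", "trade-journal.example"]
# ...versus a site propped up by hundreds of bogus link-farm pages.
spam_links = [f"linkfarm-{i}.example" for i in range(300)]

# Hypothetical quality scores; unknown link-farm domains default to 0.0.
quality = {"news-site.example": 0.9, "trade-journal.example": 0.8}

print(naive_rank(spam_links) > naive_rank(legit_links))            # True: spam wins
print(quality_weighted_rank(legit_links, quality)
      > quality_weighted_rank(spam_links, quality))                # True: legit wins
```

Under the naive scheme the link farm wins outright; under quality weighting it scores zero, which is also why search providers could then flip the weight negative for known-bad links and open the door to the penalty attacks described above.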

Negative SEO techniques are being used against targets such as competitors or those with opposing viewpoints to lower their search rankings and diminish their web presence. In my client’s case, the sites that mentioned his brand also mentioned other company brands elsewhere on their pages, implying a negative SEO service for hire. Research on the sites themselves was inconclusive, as they were hosted at various locations around the world with fake contact information.

The first step in combating negative SEO attacks that use inbound links is Google’s disavow tool. However, these sites did not link back to the official company site, so a different tactic was required. The sites that listed the client also listed other companies, so I researched them to try to find a common thread. Unfortunately, no common element was found, which lent further credibility to the theory that this was done through a service. I contacted some of the companies and asked whether they had conducted an investigation and whether they would be willing to share information, but found that either no investigation had been performed or they were unwilling to share.
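For cases where malicious inbound links do exist, the disavow tool accepts a plain text file, uploaded through Google Search Console, listing individual URLs or whole domains whose links Google should ignore when assessing the site. A minimal example (the domains here are hypothetical):

```text
# Disavow file submitted via Google Search Console.
# Lines starting with # are comments.
# Disavow every link from an entire link-farm domain:
domain:linkfarm-network.example
# Disavow a single spammy page by full URL:
http://spam-blog.example/post-attacking-our-brand/
```

Disavowing a whole `domain:` entry is generally safer against link farms than listing URLs one by one, since attackers can spin up new pages on the same domain faster than you can enumerate them.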

Our next course of action was to perform more active probes. I set up accounts at various service broker sites, which I like to call “micro-outsourcing” sites, such as Fiverr, Gigbucks, Tenbux and others. I then searched for SEO related services and purchased services where the description implied that black hat techniques might be used. I created several test sites and monitored search results for the sites to see where the traffic was coming from.

I eventually saw activity from a few of the sites that were initially used on our client so I traced the key term back to the transaction and worked with the micro-outsourcing vendor to identify the user and IP addresses used to log on. I then coordinated with local law enforcement in that country to prosecute the individual and also to obtain information on the entity that contracted the work against our client.

In the end, the lesson learned was that companies need to pay attention not only to their SEO but also to potential negative SEO campaigns, and to know the tools and companies to work with if they are targeted by such an attack.

Information in this blog post was intentionally vague and some facts were changed to protect the identity of those involved.
