
NetWorks Group Blog

VENOM - Xen, KVM, and QEMU Virtualization - High Vulnerability Advisory

May 13, 2015

VENOM (Virtualized Environment Neglected Operations Manipulation)

If you are currently utilizing Xen, KVM, or QEMU virtualization products, you need to apply patches. VMware and Microsoft Hyper-V virtualization products are not affected.

This blog post was updated to reflect the now-assigned CVSS score of 7.7 (High).

A security researcher from CrowdStrike has discovered a software flaw in QEMU’s virtual floppy disk controller.  This vulnerable code is present by default on Windows, Linux, and OS X hosts running the virtualization products Xen, KVM (Kernel-based Virtual Machine, not Keyboard-Video-Mouse), and the QEMU client, whether or not virtual floppy drives are used.  The vulnerability has been present since 2004 and affects both x86 and x86-64 guest instances.  It is not remotely exploitable; in order to exploit it, an attacker must first have gained access to a guest virtual machine.

If successfully exploited, this vulnerability could allow an attacker to escape from the virtual environment and execute code on the host system.  Theoretically, a successful attack could also allow access to other systems on the host’s network. 

At the time of this advisory, there have been no reports of successful attacks and there is no publicly available exploit code.  Vendors have begun releasing patches for this.  Please see the links section below.

This vulnerability has been assigned a CVSS score of 7.7 (High).  In this case, a 7.7 score means that the impact of the vulnerability is high, while the exploitability is much lower because the flaw is not remotely exploitable.
CVSS, the Common Vulnerability Scoring System, is an industry-standard mechanism used to assess the severity of computer security vulnerabilities on a scale from 0 to 10.
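For readers curious how that number is produced, here is a small worked sketch of the CVSS v2 base-score arithmetic in Python. The vector used (AV:A/AC:L/Au:S/C:C/I:C/A:C) is the one commonly published for VENOM and is shown here as an illustrative assumption; the weights come from the CVSS v2 specification.

    # CVSS v2 base score for the vector commonly cited for VENOM:
    # adjacent network, low complexity, single authentication, and complete
    # loss of confidentiality, integrity, and availability.
    AV, AC, Au = 0.646, 0.71, 0.56      # AccessVector, AccessComplexity, Authentication
    C = I = A = 0.660                   # Complete impact on C, I, and A

    impact = 10.41 * (1 - (1 - C) * (1 - I) * (1 - A))  # ~10.0: the impact is high
    exploitability = 20 * AV * AC * Au                   # ~5.1: exploitability is lower
    f_impact = 0 if impact == 0 else 1.176

    base = round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)
    print(base)  # 7.7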

Next Steps

  • How do I know if it affects us?

    • If you run Xen, KVM, or QEMU virtualization products on any host platform, you are affected whether or not virtual floppy drives are in use; VMware and Microsoft Hyper-V are not affected.
  • How serious is this vulnerability?

    • This issue has been receiving a good deal of attention and is potentially serious. There is no known exploit code available at this time; however, the vulnerable code sections have been identified, so an exploit could be published soon.
    • Since vendors are releasing patches quickly, and because this is not remotely exploitable, NWG considers this a high-severity vulnerability as opposed to a critical vulnerability.
  • What should we do next?

    • Apply patches as they become available (see links below).
    • Check with your cloud provider (if applicable) to determine if they have applied the appropriate patches.
    • If you have any of the affected products and have questions regarding next steps please contact us using the contact information below.
    • As details regarding this vulnerability emerge, NWG will also offer Vulnerability Management customers proactive scanning to determine if they are affected by this issue.



If you have questions regarding this notice or about this vulnerability, please call us at 734-827-1400, option 3, or email us.

Cisco UCS Central Software - Critical Vulnerability Advisory

May 8, 2015

Affected Product
Cisco UCS Central Software versions 1.2 and earlier

If you are currently running Cisco UCS Central Software you should update the software immediately.

Cisco has announced a critical vulnerability in its UCS Central Software product.  The UCS Central Software is a web application framework that can be used to manage a Cisco UCS domain.  If successfully exploited, an unauthenticated remote user could execute arbitrary commands with the privileges of the root user on the vulnerable system.

This vulnerability has been given an initial CVSS score of 10, which represents the highest severity ranking.  CVSS, the Common Vulnerability Scoring System, is an industry-standard mechanism used to assess the severity of computer security vulnerabilities.

At the time of this writing, there is no known publicly available exploit code.

Next Steps

  • Update Cisco UCS Central Software to a fixed release as soon as possible.
  • If you are running an affected version and have questions regarding next steps, please contact us using the contact information below.

If you have questions regarding this notice, please call us at 734-827-1400, option 3, or email us.

PCI's Bold Move to Define Penetration Testing


In March 2015, the PCI Council released its Information Supplement for Penetration Testing Guidance.  This is a fantastic move, as previous guidelines centered on the completion of penetration tests and left the methodology up to the auditor.  With this guidance in place, we now have a clear definition of what qualifies as a penetration test in the eyes of the Council.  There isn’t a need to rehash the document for you here, and I encourage everyone to read it.  I would like to focus on a few key highlights that I’m happy to see added.

In section 2.2, scope is explained in depth.  Traditionally, the cardholder data environment (CDE) has been the primary scope for penetration testers.  Companies like NetWorks Group would educate clients on the need to expand testing to include systems that could impact the CDE.  We are now fortunate enough to have it spelled out in the PCI guidance.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out of scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

This is great news for penetration testers in the community.  If we can identify a threat to the CDE residing in adjacent networks, then we can target those networks as part of PCI penetration tests.  From the client side, this gives you a very real picture of the threat that your user networks, or other adjacent networks, pose to your cardholder data.

In section 4.2.5, they cover the topic of “post-exploitation”.  A thorough and well-defined post-exploitation process is something NetWorks Group has long incorporated into its penetration testing methodology.  It refers to the actions taken after the initial compromise of a system or device.  Often we come into environments where penetration tests have been previously conducted and stopped at the device itself, without evaluating the value of that device inside the environment or as it relates to the CDE.

The PCI Council has included VERY helpful charts in the guidance that I think businesses evaluating their environment will find most useful.  The first one, in section 2.1, helps organizations distinguish between vulnerability scans and penetration tests.  NetWorks Group has assisted numerous organizations in understanding the differences between the two.  Unfortunately, we oftentimes end up helping them after they have received what appears to be a vulnerability report when they actually paid for a penetration test.

Section 5.4 helps organizations evaluate the penetration testing reports they receive from vendors.  I personally love this about the new guidance.  As penetration testers, the “deliverable” we present to the client is the most important part of our business.  It needs to contain the information most relevant to the client to help them assess the risk an attacker poses to their environment.  The reports that NetWorks Group puts out have long been praised by auditors for meeting the criteria they look for when auditing organizations.  This new section gives folks receiving penetration testing reports from other companies a yardstick to measure the quality of the reports they are getting.

In the end, I feel this is a great step forward for PCI guidance, and NetWorks Group has been in line with these changes for some time now.  This should absolutely raise the bar for other companies and force them to deliver the quality penetration tests that merchants have needed for a very long time.

Nails in the Coffin: What put SSL in the grave?


Author: Aaron Pohl, Penetration Tester, NetWorks Group

In light of new PCI-DSS requirements stating that SSLv3 no longer meets the specification for “strong cryptography” prescribed by PCI standards, we wanted to give you a brief history of how the industry got here and why SSLv3 is no longer considered secure.

The first stop on this adventure in security history takes us back to 1995, when Netscape Navigator reigned supreme, and you would be lucky to be cruising at 28.8kbps. Netscape was the first to implement the Secure Sockets Layer (SSL) protocol inside a web browser to allow for HTTPS communications. The first public release of the SSL protocol was SSLv2 in 1995. SSLv3 was released just a year later due to security flaws already discovered within SSLv2. (1.0 had enough security flaws that it was never publicly released.) I want to take a moment here to note this: SSLv2 was the first public version of the SSL protocol; it was superseded in 1996, and I am still seeing servers that support it during penetration tests today. There’s backwards compatibility, and then there’s this.

An interesting note to inject here is that cryptologists in the US had already noted problems in Cipher-Block Chaining (CBC) ciphers before SSL ever started using them. At CRYPTO ’94, UC Davis professor Phillip Rogaway presented work which outlined theoretical attacks against CBC, which were later published in a paper in 2000. While it may be dry, this paper opened the landscape for future researchers.

In 1999, Transport-Layer Security (TLS) v1.0 was released, as an upgrade over SSLv3, with the option to downgrade to SSL. TLS version 1.1 was released in 2006, and included protections against CBC attacks which SSL versions 2 and 3 had already shown weakness to. TLSv1.2, released in 2008, removed the protocol’s ability to downgrade TLS sessions to SSLv2 because of SSL’s growing litany of sins.

Without taking the time to outline every exploit in detail, we want to explain a few that grabbed the security community’s attention:

  • May 2011 – BEAST - CVE-2011-3389 - Browser Exploit Against SSL/TLS - Paper
    • BEAST allows an attacker to retrieve sensitive data about the user’s connection, such as a cookie or other token that may be transmitted in an HTTPS request. To exploit this, an attacker must be able to inject content into the same origin as the targeted website, must be able to sniff/intercept the user’s communication with the server, and the SSL cipher being used must be a block cipher.
  • September 2012 - CRIME - CVE-2012-4929 - Compression Ratio Info-leak Made Easy - Paper
    • Compression before encryption is a common programming mistake, and in this case it allows an attacker to leak the contents of an encrypted connection by modifying the contents of the request many times, each time observing the length of the response to determine how it was compressed. Most major browsers have removed support for SSL and SPDY compression, which has effectively mitigated this vulnerability. (A small sketch of the underlying length side channel appears after this list.)
  • February 2013 - LUCKY13 – CVE-2013-0169 - Lucky Thirteen Paper
    • Lucky13 is an attack which affects both SSL3.0 and TLS1.0. It is a padding oracle attack against CBC ciphers. This Man-in-the-Middle attack is considered to be more efficient than either BEAST or CRIME.
  • August 2013 - BREACH - CVE-2013-3587 - Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext - Paper
    • BREACH is very similar to CRIME in that it deals with compression and encryption; however, instead of attacking SSL-level compression, BREACH attacks HTTP-level compression. This attack requires that the web application reflect some piece of user-controlled data and that a token (e.g. a CSRF token) also be present in the HTTP response body. It is estimated that this attack can be completed in under a minute, though this depends on the size of the secret to be guessed.
  • September 2014 - POODLE - CVE-2014-3566 - Padding Oracle On Downgraded Legacy Encryption - Paper
    • POODLE is another padding oracle attack against SSLv3 when CBC ciphers are used. This attack also requires the attacker to be man-in-the-middle on the user’s session to the webserver so that they can intercept and modify the client’s requests. This attack requires that the attacker make at most 256 requests per character of secret to be leaked.
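To make the compression side channel concrete, here is a minimal Python sketch of the length oracle that CRIME-style attacks rely on; the secret value, the guessed prefix, and the use of zlib are illustrative assumptions rather than the original exploit.

    import zlib

    SECRET = b"Cookie: session=7f3a9c0d2b18e4f6"   # hypothetical secret the attacker wants

    def observed_length(injected: bytes) -> int:
        # The attacker can only observe the length of the compressed-then-encrypted
        # request; encryption hides the content but not the length, so the
        # compression ratio leaks information about the plaintext.
        return len(zlib.compress(injected + SECRET))

    # A guess that matches the secret compresses better (the duplicate bytes become
    # a back-reference), so the correct guess tends to yield a shorter ciphertext.
    print(observed_length(b"Cookie: session=7f3a"))   # tends to be shorter: guess matches
    print(observed_length(b"Cookie: session=zzzz"))   # tends to be longer: no redundancy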

Those are just a few of the nails in SSL’s coffin. What details can we glean from them? Each of them allowed an attacker to leak details about the user’s communication with the webserver. Some of them required the page to reflect some piece of attacker-controlled data. Most of them required that the attacker be in a privileged position within the network relative to the targeted user. That last point is key: these are attacks against your users, not necessarily against the server itself, although they could be used as a means to that end. Consider the following attack scenarios:

You are an IT admin for BigCorp Inc, and you stop at Starbucks every morning for coffee. There’s an attacker waiting there, listening on the wireless network. Your phone automatically joins the wireless, because you’ve connected here before, and then authenticates to your OWA server over HTTPS to retrieve your latest emails. The attacker intercepts this connection, and starts running one of the above attacks. By the time the barista is done making your venti triple latte, the attacker has already stolen your cookie, logged into OWA as you, and is searching for passwords or other sensitive data. Perhaps instead of OWA, you need to check out something on the corporate network, so you connect into your SSL VPN. Now, the attacker is logged into the VPN with your level of access, and is able to create tunnels to protected systems. This is the real danger with attacks like these – targeted user attacks to facilitate further access; not wholesale exploitation of end-users.

Where do we go from here? Well, TLS 1.3 is currently in draft, so we will be seeing some more great changes soon, such as the complete removal of both compression and renegotiation from the protocol, in order to prevent vulnerabilities similar to those that have occurred in the past. If you’re wondering at this point whom to trust when it comes to which SSL/TLS ciphers you should be enabling on your server, we currently suggest following Mozilla’s recommendations; as a browser vendor, they are well placed to make suggestions that take both browser compatibility and security into account. We here at NetWorks Group hope this has given you a better understanding of why SSLv3.0 is dead, and why you need to be moving all of your servers to TLS 1.2 as soon as possible.
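If you want a quick way to check what a given endpoint negotiates, the short Python sketch below builds a client context that refuses anything older than TLS 1.2 and reports the result; the hostname is a placeholder, and your server-side cipher configuration should still follow guidance such as Mozilla’s.

    import socket
    import ssl

    # Build a client context that will only negotiate TLS 1.2 or newer.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3, TLS 1.0, and TLS 1.1

    host = "www.example.com"                       # placeholder hostname
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # If the server cannot speak TLS 1.2+, the handshake fails instead of
            # silently downgrading to a weaker protocol.
            print("negotiated:", tls.version(), "cipher:", tls.cipher()[0])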

As mentioned earlier in the article, new SSL/TLS vulnerabilities come out every few months, and indeed while we were getting this article ready to be published, two new vulnerabilities called SKIP-TLS and FREAK were disclosed. Check out the new research here.

As of this writing, we are anxiously awaiting YET ANOTHER SSL vulnerability (CVE-2015-0288) to be released as well...details yet forthcoming...

Red Teaming - Is it right for you?


Last week, I wrote an article for a popular online journal regarding the similarities between cyber security agility and militant warfare.  It was an exhaustive piece, geared toward high-level strategic planning (see the full article online).

I want to write a separate article here that talks about how to actually apply the concept of “red teams” in your enterprise.  First and foremost, red teaming for cyber security refers to the concept of a small team of hackers reviewing an organization to determine if they can gain access to critical assets.  This may not sound much different from a penetration test, but one crucial piece is almost non-existent in a red team exercise:  scope.  A red team will utilize web application, mobile platform, physical, social engineering, and network testers as part of a team whose goal is to profile the organization and gain access.

Let me be the first to say that I am not stating that every organization needs to hire or employ a red team.  As with any security assessment, the right amount of intelligence gathering must be performed to determine if your organization is even a potential target for a red team test.  I want to highlight how to help determine if a red team test is right for you.

The first thing that every organization should determine is who is targeting them.  This is a critical and often overlooked step.  Organizations will sometimes default to the answer of “everyone,” which is not always the case.  This is often referred to as “threat intelligence” and involves reviewing several non-technical aspects of the organization.  Threat intelligence is a beast in itself, and outside the scope of this conversation.  Some easy questions that you can ask yourself are:

  • What is valuable in my organization?
    • Do we influence financial market places?
    • Do we provide technical details on any market spaces?
    • Do we employ personnel whose knowledge can be used against us?
  • Does my organization affect other organizations?
    • Do we provide strategic, technical, or monetary advantages for larger corporations or conglomerates?
    • Do we have technical connections (VPNs or other secure connections) to vendors who may have ties with other corporations?
  • Is our organization a political ally/enemy of someone?
    • Would success or failure of the enterprise or its partners provide positive or negative influence over the regional or global landscape?

As you can see, these are not overly technical questions, but they can help you evaluate the types of threats that could face your organization.  After we have performed our own threat intelligence, we can then look to determine the “Levels of Hackers” that might be interested in our organization.  If you are unfamiliar with our levels of hackers, then I encourage you to read my previous article.  In a nutshell, Level 1 represents our least sophisticated hackers, while Level 3 represents our most sophisticated hackers.  Believe it or not, red teams can be employed by all levels of hackers in our model.  Organizations that are not as fluent in their security posture as they should be can easily find themselves victim to very unsophisticated attacks by hacker groups that do not possess the operational doctrine employed by cybercriminal or state-sponsored attackers.  However, the tactics employed by these attackers can resemble red teaming activities.

After your organization has determined its threat, and the types of attackers that could be targeting it, it is finally time to let your security team go to work.  This is the easiest part.  Your security teams should be equipped with the people and resources they need to conduct testing in the same fashion your attackers do.  As an organization, you should strive to allow your testing entity freedom of movement throughout your organization as they see fit.  If the organization attempts to limit the scope of a red team test, you run the risk of excluding the segments of your organization that pose the greatest risk.

If you consistently outsource your testing to a third party (such as NetWorks Group), then that organization has to do steps 1 and 2 above before they test your organization, and the good ones will.  After all, the success of the red team helps to push organizational security farther.

Vulnerability Management - A Call to Arms


I had a completely different article typed up; however, after catching up on my morning news and seeing a huge amount of controversy regarding Coordinated Vulnerability Disclosure (CVD) from Microsoft, I decided to reach out to the NetWorks Group community and help our customers (past, current, and prospective) understand what it means to them.


Vulnerability management is a crucial part of an organization’s security posture.  But that is a given; I want to talk about the why.  The article from Microsoft discusses Coordinated Vulnerability Disclosure (CVD), which, in short, is a way for security researchers to disclose security information in coordination with vendors to prevent exploit code from being used in the wild ahead of security patches.  For those who have not spent a lot of time in this arena, there is a constant battle between how quickly security researchers can find vulnerabilities and how quickly companies can get them fixed.  It is the essential “chicken and egg” problem.  We need researchers to find vulnerabilities and release them to the public, so that scan vendors can create scanning signatures, so that we can defend our networks with more agility.  However, the industry still struggles with what a good vulnerability management program looks like.

The basic areas for vulnerability management can easily be visualized by breaking it down into four distinct areas:  Baseline, Assess, Resolve, and Lifecycle (See figure below).

During the “Baseline” phase, organizations need to prioritize establishing where their technological debt is in the enterprise.  Believe it or not, most organizations do not know where all of their assets are on the network, what their purpose is, or the risk they pose to the integrity of the network.  Secondary to that is policy development, especially as it pertains to obtaining new systems and decommissioning retired systems.  I’ve been to organizations that will “decommission” systems only to leave the network cable attached.  I would compromise a decommissioned system as part of a penetration test, the organization would swear it was no longer on the network, and then we would find out it was never unplugged.

Phase two is the “Assess” phase.  We need this phase in the enterprise.  This is the organization’s “eyes” into its threat footprint.  We want to assess our network’s threats on a monthly basis, at a minimum.  The great thing about a vulnerability management program is that it can be incredibly affordable when compared to services like penetration tests.  At the same time, with proper evaluation of the information, it can be a great indicator of the results of a penetration test when utilized effectively.

Equally important is phase three, the “Resolve” phase.  Believe it or not, this is where your organization will struggle the hardest.  There is no other way to say it: your organization must develop a plan to remediate or mitigate anything identified in the Assess phase.  Otherwise, what is the point?  One concept that is often overlooked, and causes the biggest stumbling block, is the misconception that everything must be patched.  Poorly equipped infosec people are to blame for this.  Organizations must make a risk decision regarding remediation.  By prioritizing assets from phase one based on criteria such as business impact, and using threat assessment information from phase two, you are better equipped to make informed decisions about when and how to remediate or mitigate your assets.
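As a toy illustration of that decision, the Python sketch below combines a business-impact rating from the Baseline phase with a threat rating from the Assess phase and works the list from the top down; the assets, ratings, and the simple impact-times-threat score are all made up for the example.

    # Hypothetical asset list: impact comes from the Baseline phase,
    # threat comes from the Assess phase (both on a 1-5 scale here).
    assets = [
        {"name": "payment-gateway", "impact": 5, "threat": 4},
        {"name": "hr-database",     "impact": 4, "threat": 2},
        {"name": "intranet-wiki",   "impact": 2, "threat": 5},
    ]

    # Remediate the highest-scoring items first rather than trying to patch everything.
    for asset in sorted(assets, key=lambda a: a["impact"] * a["threat"], reverse=True):
        print(f"{asset['name']}: risk score {asset['impact'] * asset['threat']}")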

Finally, we include a phase called “Lifecycle”.  This is about building a culture of good security practices inside the organization.  We want organizations to identify how a system got into a vulnerable state to begin with.  Was the patching process flawed?  Did the system get added to the inventory outside of the normal purchasing process?  Is there a delinquent equipment refresh process that prevents systems from being upgraded in a timely manner?  We need to evaluate how we got into this situation so that we can realistically change those habits and foster a more productive security mindset in the organization.

In the end, organizations should employ a robust and cyclical vulnerability management program.  That program should provide the organization with information that helps them to assess the threat and risk of all information technology assets within the enterprise.  The vulnerability management program as a whole is key to ensuring constant information is being provided to the teams that need to make strategic and dynamic changes to the posture of the network.

Penetration Testing for the Executive


Whether you are a veteran security executive who has received hundreds of penetration testing reports, or a part-time security manager whose primary roles lie in traditional business management, it can be difficult to decipher the encrypted text held within some penetration testing reports.  The problem exists because there is no standard for penetration testing reporting in the industry.  I’ve seen literary works that range anywhere from Dr. Seuss to William Shakespeare.  I have peer-reviewed reports for associates whose bad grammar could make a first grader wince.  The goal here is to identify what makes a penetration test report good, how to interpret the results, and finally how to put them to use in your strategic planning to improve organizational security.

There are many frameworks for the penetration testing report, and this is not a discussion of which ones are best.  However, an important conversation to have is what elements make a report valuable to the people reading it.  As penetration testers, we have to remember we could potentially have every level of an organization reading our reports, from the tactical level, where technicians will fix our findings, to the leadership level, where executives need to take responsibility for the security of their organization.  First and foremost, the report you receive should tell you the impact a breach of your network would cause.  We, as penetration testers, MUST speak to the business impact a breach would have on an organization.  A well-conducted penetration test, one that simulates an attacker’s attempt to breach your network, should tell you what data and information were successfully compromised.  This information should be directly relevant to your business.  For example, if your organization stores payment card information, then the report should indicate what, if any, payment card data was compromised.  It should also include how many systems were compromised as part of the penetration test.

A second, critical section of any penetration report should be the very detailed “kill chain”: how your organization was compromised, how the data was accessed, and how that information was used to perpetuate additional compromises.  This section can be more tactical and should speak to the organization’s technicians who would be tasked with remediating the compromise.  It should be riddled with screenshots that validate the compromise.  We oftentimes joke that pictures are all executives can understand.  However, a far more practical reason for showing pictures in this section is to illustrate the compromise so that technicians at the tactical level cannot “snowball” leadership with confusing jargon.

The two elements outlined above represent what I feel are the “must haves” to a good penetration testing report.  It is important to point out that other elements may be included, and you should weigh them equally for how they will directly help your organization.  It is also important to remember that a penetration test is a “demonstration of exploitability”.  This means that if you receive a penetration test report that lists your vulnerabilities, but doesn’t have any demonstrative examples or validation you should challenge those who conducted your penetration test.

Now that you have your penetration testing report, you need to be able to execute on your organization’s strategic goals for security, using those results as direction.  Remember, a penetration test is a “demonstration of exploitability,” and you should use the results that come out of a test to show the immediate need for security changes.  Well-rounded organizations that are conducting regular vulnerability management, and patching, should consider those security measures the “good hygiene” efforts of security.  The successful results of a penetration test should immediately highlight security issues that fall outside the identification capabilities of your vulnerability scanner, or assist with identifying issues in your current vulnerability management process.  This helps you prioritize the penetration test results ahead of your normally identified vulnerabilities.

Penetration testing can be an integral part of your organization’s security strategy when the results are presented in a way that helps your organization visualize and prioritize.  Never be afraid to challenge your penetration testing vendor if you do not understand the results or feel the information is not presented in a way that helps your organization strategically.  You paid for this information, and you should be able to utilize it.

NetWorks Group is Proud to be Sponsoring BSides Detroit 2013

June 6th, 2013

IT Security is thriving in the Detroit Metro area, and we're proud to be sponsoring BSides Detroit 2013 this year!  Security BSides is an innovative un-conference style meetup that brings local security professionals together to share experiences and knowledge, and to network.

Security B-Sides Detroit 2013 comes to the Renaissance Center on June 7-8. The conference honors the tradition of Security B-Sides while continuing to build on its own unique history. We continue to showcase local speakers and stories that attendees will not find at other conferences. With two days of content and several tracks, the conference will also feature some of the best and brightest national speakers. This year's event features workshops, contests, and a capture-the-flag competition. B-Sides Detroit is setting a new standard for Security B-Sides conferences. Tickets are available to users, security professionals, and business leaders.


A play on words, Security B-Sides began as a small conference beside a major conference, featuring B-track speakers. The Security B-Sides conference began in 2009 in Las Vegas, alongside the Black Hat security conference. The idea of a community-driven event spread. By the end of 2010, Security B-Sides events had been held in San Francisco, Austin, Boston, Atlanta, and Dallas/Fort Worth, running concurrently with such conferences as RSA, SxSW, and Source.


BSides Detroit was part of a new wave of cities that followed. Detroit broke the mold in many ways. First, unlike the original events, BSides Detroit began as a standalone destination conference. A commonly told joke is that Detroit is literally beside itself, as the conference is larger and longer than the early BSides events. While BSides Detroit embraces the local speaker model, the organizers also concentrate on attracting A-list national speakers.

We'll see you tomorrow (6/7/2013) and Saturday (6/8/2013) for some great talks and workshops!

Twitter Adds Two-Factor Authentication for Users

May 24th, 2013

After a string of high-profile account compromises that included the Associated Press and Burger King, Twitter has added an additional (but optional) layer of authentication to help protect users from being the next big-name account that's compromised.

By adding a second factor of authentication (that is to say, beyond the user's password), Twitter is able to provide a higher level of integrity to the authentication process by using the user's cell phone number to send an SMS with a one-time token. In this manner, a compromised password will not yield account access unless that same attacker is able to intercept the SMS or steal the user's phone. Clearly, this is a great step in the right direction and something other companies, such as Dropbox and Facebook, have done previously.
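To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the server side of such an SMS one-time-code flow; the function names, code length, and five-minute window are illustrative and are not Twitter's implementation.

    import secrets
    import time

    # user -> (code, expiry); in practice this would live in a real datastore.
    _pending = {}

    def issue_code(user: str) -> str:
        # Called only after the password has already been verified.
        code = f"{secrets.randbelow(10**6):06d}"      # random 6-digit one-time code
        _pending[user] = (code, time.time() + 300)    # valid for five minutes
        return code                                    # hand off to an SMS gateway

    def verify_code(user: str, submitted: str) -> bool:
        code, expiry = _pending.pop(user, (None, 0.0))
        if code is None or time.time() > expiry:
            return False
        return secrets.compare_digest(code, submitted)  # constant-time comparison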

If you or your company wants to proactively protect a Twitter account, simply review the step-by-step directions posted by CNET. By enabling this extra step, the likelihood of an attacker compromising a Twitter account will generally plummet (save for some very sophisticated attackers).

As end-users, the best way to get other companies to follow suit is to use these types of features when they are made available, to show that demand exists. Through two-factor authentication, the user once again has a fighting chance against password brute-forcing and general phishing attacks.

Failing Gracefully: Using AWS for Web Site Failover

May 13th, 2013

When it comes to the Internet, keeping your organization's presence online is crucial to making resources accessible to customers, potential and existing. At NetWorks Group, we understand that despite the best of intentions and planning, downtime will likely still occur, at least a few minutes per year. Many teams set a goal of 100% uptime for their web site, but often get a dose of reality when a large storm hits their data center or other issues pop up that may be out of their direct control. To this end, we wanted a way to minimize full downtime so that our presence on the Internet would be down for as little time as possible, without going over the top on infrastructure to do so.

Amazon Web Services provides a plethora of cloud services to help teams do more for their environment with less capital expenditure overhead. By cherry-picking the services you need from AWS, you can find great cost-saving solutions to otherwise expensive — or complicated — problems. In the case of a web site, the overhead costs and management of a second (or third?) data center to avoid an hour of downtime a year may be overkill for many organizations. For NetWorks Group, our web site being down, while not desirable, is not so critical that it will impede our ability to provide amazing service to our customers. With that in mind, we wanted an approach to web site downtime that would be economical and easy to manage, but would also minimize the downtime of our Internet presence.

By utilizing the AWS services Route 53 and S3, we're able to provide a great failover solution when our primary web server is unreachable or down. In February 2013, Route 53 added features to allow for DNS Failover and S3 Website Hosting. The idea is that a simple health check — i.e., AWS verifies it can receive a 200 response code from your web server — decides whether or not to fail over your web site from its regular home to a special S3 bucket with your "downtime" page. By configuring a low DNS Time-to-Live (TTL), your DNS record can be changed to point to this failover end-point within a minute or two.  By having this S3 bucket at the ready, you can automatically fail over to a static-content site that provides critical information to customers such as contact information, expected time-to-recovery, etc.
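For teams that script their infrastructure, here is a rough Python sketch of that configuration using boto3, the AWS SDK for Python. Everything specific in it (zone ID, hostnames, bucket endpoint) is a hypothetical placeholder, and it simplifies details such as alias records and bucket naming; it is meant only to show the shape of the health check plus primary/secondary record pair.

    import boto3

    r53 = boto3.client("route53")

    # 1. Health check: Route 53 periodically requests "/" from the primary web
    #    server and marks it unhealthy after several consecutive failures.
    health_id = r53.create_health_check(
        CallerReference="www-failover-example",
        HealthCheckConfig={
            "Type": "HTTP",
            "FullyQualifiedDomainName": "origin.example.com",  # the real web server
            "Port": 80,
            "ResourcePath": "/",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )["HealthCheck"]["Id"]

    # 2. Failover record pair for www.example.com with a low TTL so clients
    #    pick up the switch quickly. PRIMARY is served while the health check
    #    passes; SECONDARY points at the static "downtime" site hosted in S3
    #    (the bucket is named to match the host name).
    r53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com.", "Type": "CNAME",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "TTL": 60, "HealthCheckId": health_id,
                "ResourceRecords": [{"Value": "origin.example.com"}],
            }},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com.", "Type": "CNAME",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "www.example.com.s3-website-us-east-1.amazonaws.com"}],
            }},
        ]},
    )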

So the next time your team is considering spending double or triple its budget to handle a few annoying minutes of downtime, think about utilizing Amazon or other cloud service providers to handle the problem gracefully and economically.

