
The stampede to the cloud is well underway, whether by implementing cloud principles in-house or by moving to an outsourced managed service provider. In either case, one concern is taking usable backups, both as part of normal operations and as part of a business continuity plan.

The first question to answer is: what exactly is a network container?

Simply put, it is a development of virtualisation: an isolated environment, similar in concept to a virtual machine or virtual server, running on a Linux host. A network container isolates its contents from the supporting host, meaning that whatever you do in that environment will not affect any other container running on the same host.

The advantages of this approach include version management: a container may carry dependencies that differ from those of the host and would otherwise cause conflicts. Many containers can run on the same host, each with its own unique environment.

On the face of it, then, containers and virtual machines seem to be the same. There are technical reasons why containers are better in some circumstances, principally in how they are administered on the host. Containers are also much less extravagant in their use of host resources than a virtual machine.

Management systems like Docker ease the administrative load of maintaining several containers on a host. The containers can be discrete or, using Docker networking, can communicate with each other to create a distributed application and data environment, which is very suitable for cloud deployments.
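
As a minimal sketch of what this looks like in practice, assuming a local Docker daemon and the Python Docker SDK (docker-py), two containers might be started on a shared network like this (the image and container names are illustrative):

    # Minimal sketch: two containers on a shared user-defined network.
    # Assumes a local Docker daemon; images and names are illustrative.
    import docker

    client = docker.from_env()

    # A user-defined bridge network lets containers resolve each other by name.
    client.networks.create("app_net", driver="bridge")

    # Each container gets its own isolated environment on the same host.
    client.containers.run("postgres:13", detach=True, name="app_db",
                          network="app_net",
                          environment={"POSTGRES_PASSWORD": "example"})
    client.containers.run("nginx:alpine", detach=True, name="app_web",
                          network="app_net")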

Backing up containers

Other advantages of containers include:

  • Easy deployment. Configurations are embedded in the container, significantly reducing implementation time and effort.
  • Fault tolerance. Putting up a redundant database or server is a simple process: start a second copy of the same container on a different physical node.

A significant disadvantage is that all containers on a host share the same Linux platform, down to the kernel. This may make it impossible to deploy applications that need another OS, or a different Linux version, to operate.

In short, in most environments, containers can be an affordable, efficient way to deploy on a Linux platform.

On the other hand, there are several things VMs can do that containers can't, including running a different OS from the host. They are ideal for cloud hosting, where customers of a hosting service want to import their own system images and be assured of isolation from other clients.

At a technical level, the types of applications developed for containers will differ from those developed for conventional environments, including VMs. Container applications are more likely than not to be built as microservices.

In conventional environments, application elements communicate via the operating system, which can be time-consuming. In a container environment, microservices communicate with each other using APIs, a quicker and less resource-intensive process.

Containers are also more aligned with networking. Each container is, in effect, its own web server. Containers communicate using API calls, and a DNS service routes the requests between them.
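
To make that concrete, here is a minimal, illustrative sketch of one microservice calling another over HTTP using only the Python standard library. The service name "inventory", the port, and the endpoint are hypothetical; on a shared Docker network, the container name resolves via the built-in DNS:

    # Sketch: one microservice calling another by container name over HTTP.
    # "inventory", the port, and the endpoint are hypothetical examples.
    import json
    import urllib.request

    def stock_level(item_id: int) -> int:
        # On a shared container network, "inventory" resolves via DNS.
        url = f"http://inventory:8080/stock/{item_id}"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)["quantity"]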

A further development is the use of orchestration tools such as Kubernetes to manage the container subnets. Docker, however, lifts subnet management above physical network management, allowing the implementation of a software-defined network across a single site or multiple sites.
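
As an illustration of what orchestration offers, here is a sketch using the official Kubernetes Python client to scale a hypothetical Deployment to three replicas; it assumes a reachable cluster and a local kubeconfig:

    # Sketch: asking the orchestrator for three replicas of a Deployment.
    # Assumes a reachable cluster; the Deployment name is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Kubernetes schedules the extra containers and wires up the networking.
    apps.patch_namespaced_deployment_scale(
        name="app-web", namespace="default",
        body={"spec": {"replicas": 3}})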

A given is that there must be a means to recreate systems and data should there be a failure and loss of service. This raises the question: is there a need for conventional backups?

As with many things, it depends on individual circumstances, but the answer must be yes.

High Availability sites

  • Container duplication. The ability to duplicate a container implies that no backup is needed. If the primary container fails for any reason, the secondary container should switch in automatically. The containers can be on different physical platforms, or even on different sites; a minimal sketch follows this list.

  • Overflow containers. Similar to, but not quite the same as, container duplication. The ability to span multiple networks means that the primary installation can be on-premises, with overflow containers located elsewhere. In a multi-site organisation, that could be a different site. In a single-site environment, a public cloud could host the overflow containers.
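
As a minimal sketch of duplication, assuming a standby Docker host reachable over TCP and illustrative image and host names, the same container could be started on two nodes with docker-py:

    # Sketch: the same container image running on two physical nodes.
    # The standby address, image, and names are illustrative assumptions.
    import docker

    primary = docker.from_env()
    standby = docker.DockerClient(base_url="tcp://standby-host:2376")

    for node in (primary, standby):
        node.containers.run("myorg/app:1.4", detach=True, name="app",
                            restart_policy={"Name": "always"})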

In summary, there may be sufficient protection against data loss with the use of multiple containers, especially if they are on different sites. It does, however, require the secondary installation to be configured as a hot-standby site, so that if the primary container fails for any reason, the secondary overflow will kick in immediately.

Clearly, having the secondary container on the same physical server, or even in the same data centre, does not meet this requirement.

The hot-standby requirement has disaster recovery planning (DRP) and business continuity implications, and there will probably be additional networking and hosting costs.

Low and medium availability (non-critical) sites

Some sites do not need to recover immediately from a loss of service, or it may not be physically possible or financially justifiable to use an outsourced public cloud or service supplier to host a duplicate or overflow container.

In that case, a conventional regular backup is needed. Even if there is an overflow or duplicate site, the connection to it could be lost, and the container would then need to be reinstated on the primary site. The prudent answer is to maintain an appropriate backup regime.
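
As one illustrative approach, a container's data volume can be archived from a throwaway helper container. This sketch uses docker-py; the volume and path names are hypothetical:

    # Sketch: archiving a named volume into a tarball on the host.
    # Volume and path names are hypothetical examples.
    import docker

    client = docker.from_env()

    # Mount the data volume read-only and write the archive to the host.
    client.containers.run(
        "alpine",
        "tar czf /backup/app_data.tar.gz -C /data .",
        volumes={
            "app_data": {"bind": "/data", "mode": "ro"},
            "/srv/backups": {"bind": "/backup", "mode": "rw"},
        },
        remove=True)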
