This post originally appeared on our cloud microsite and has been moved here following the discontinuation of that site's blog section.
Container-driven development is catching on like wildfire, and for good reason. In the age of digital transformation, time to market has become a competitive edge impossible to ignore. To speed up software development and deployment, monolithic application development will sooner or later have to go extinct.
Other factors driving the world towards containers are microservice architecture, continuous integration and delivery (CI/CD), DevOps bringing dev and ops closer together, and cloud computing with portability across infrastructures. More on those later; in this text we’ll focus on the basics of container technology.
The basics
In a traditional application world, an application requires code, runtime, system tools, system libraries and settings. In a container, the application code is encapsulated together with all those building blocks it needs to run.
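To make the idea concrete, here is a minimal sketch of how such an encapsulation can be described in a Dockerfile. The Python runtime, the requirements.txt file and app.py are hypothetical stand-ins, chosen only to show that runtime, libraries, settings and code all travel together inside the image:

```dockerfile
# Runtime: a pinned Python version, identical on every machine the image runs on
FROM python:3.11-slim

# Settings live inside the image rather than on the host
WORKDIR /app
ENV APP_ENV=production

# Libraries and application dependencies are installed at build time
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Finally, the application code itself
COPY . .
CMD ["python", "app.py"]
```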
Why would you choose to do that?
Mostly because packaging a consistent software environment as a container makes it a lot easier for the application to move from the developer’s desktop, to testing, to production deployment.
There is simply no longer any need to make sure that the libraries and settings in production correspond to the ones used in development, which dramatically reduces the time and effort required to release new code.
Containers also isolate the application from its surroundings, both reducing conflicts between different applications on the same infrastructure and confining application issues to a single container instead of the entire infrastructure.
When the application code is updated, a new container is built to replace the old one. When the new container is deployed into production, the old container is simply thrown away. This is a good reason why a container should be stateless, that is, no state or data should be stored inside it, as that data would be lost when the container is replaced.
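With Docker, that replace-rather-than-patch cycle can look roughly like this; the image name myapp and the version tag are made up for the example:

```bash
# Build a new image containing the updated code
docker build -t myapp:2.0 .

# Throw away the container running the old version...
docker stop myapp
docker rm myapp

# ...and start a fresh one from the new image; anything the old container
# stored only inside itself is now gone, which is why containers should be stateless
docker run -d --name myapp myapp:2.0
```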
Container architecture
Container technology isn’t really new; Linux containers (LXC) have been around for about ten years. Yet the major breakthrough came only once a standard way to package applications into containers was established.
There are other suppliers involved, but no one disputes that Docker has led the charge and sits at the heart of the market. Docker revolutionized container adoption by providing a container standard, thereby making it easy for developers to build and run their containers.
By fundamentally changing the way developers build applications, Docker became one of the most popular open source projects in history.
As well as defining the container standard, Docker also provides the tooling to build, start and stop containers.
Container orchestration
Unless handled with care, running containers carries a risk of ending up herding cats. To avoid this, software has been written to manage containers beyond simply starting and stopping them. The ability to automate container management is one of the prime benefits of container-based applications.
This brings us to container orchestration. Orchestration is where much of the current innovation in the container technology ecosystem lies, and where competition is heating up the most.
Tools like Docker Compose provide basic support for defining simple multi-container applications. However, full orchestration involves more complicated tasks such as scheduling how and when containers should run, continuous deployment (CD), cluster management and provisioning of extra resources, possibly across multiple hosts.
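As a rough sketch of what such a multi-container definition can look like, here is a hypothetical docker-compose.yml with a web service and a Redis cache; the service names and ports are invented for the example:

```yaml
services:
  web:
    build: .          # built from the Dockerfile in this directory
    ports:
      - "8000:8000"
    depends_on:
      - redis
  redis:
    image: redis:7    # pulled as-is from a public registry
```

Running docker compose up then starts both containers and connects them on a shared network, but scheduling, scaling and self-healing across a cluster are left to a full orchestrator.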
Kubernetes, backed by Google, is currently the most popular container orchestration tool. Other container orchestration tools include Docker Swarm and Apache Mesos.
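To give a feel for what orchestration means in practice, here is a sketch of a Kubernetes Deployment for the hypothetical myapp image used above. The key point is that you declare the desired state, three replicas in this case, and Kubernetes keeps scheduling, restarting and replacing containers to maintain it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # desired number of identical containers
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:2.0    # the image built earlier
          ports:
            - containerPort: 8000
```

Applying this with kubectl apply -f deployment.yaml leaves the how and where of running those three containers to the cluster; if one dies, Kubernetes starts a replacement.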
Container platforms: Platform as a Service (PaaS)
Container-based applications come with the ability to run on a variety of physical and virtual machines, in the cloud or not. PaaS is a general term for a cloud computing service that provides a platform on which users can easily develop, run and manage applications.
When offering PaaS, cloud providers deliver infrastructure, servers, networking, storage, databases, the operating system (OS), security, the runtime environment and infrastructure monitoring all in one. With all those lower infrastructure layers abstracted away, developers only need to bring their containers and application data.
PaaS simply enables developers to concentrate on what they do best: coding, while empowering them to manage their applications without regard to the underlying infrastructure.
Say you want to move an application from one cloud platform to another, or implement automatic scaling and restarting of applications. PaaS solutions offer flexibility and workload management advantages, and provide the ability to easily set up fault-tolerant systems.
Well-known PaaS offerings include AWS Elastic Beanstalk, Google App Engine and Red Hat OpenShift.
Further reading
For further reading on the business benefits of containers, check out my colleague Mia Ryan’s blog post.