This post originally appeared in our sysadvent series and has been moved here following the discontinuation of the sysadvent microsite.
S2I, Source-To-Image, is a toolkit for building Docker images with minimum effort. The project describes itself like this:
> Source-to-Image (S2I) is a toolkit and workflow for building reproducible Docker images from source code. S2I produces ready-to-run images by injecting source code into a Docker container and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly like you use Docker images to version your runtime environments.
This project is currently part of the OpenShift Origin organization on GitHub and is - in my opinion - a central part of the usability delivered by OpenShift.
In short …
In short, the S2I package will, given some constraints, try to download any stated package dependencies, compile the program if applicable, and generate a Docker image with the resulting artifacts. It does this by chaining in specific builder images after trying to determine the type of source code in the project.
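Outside OpenShift, the same workflow can be driven with the standalone `s2i` binary. A sketch of that flow — the builder image name here is illustrative; any S2I-enabled builder image will do:

```shell
$ s2i build . centos/python-36-centos7 myapp   # source dir, builder image, output tag
$ docker run -p 8080:8080 myapp                # run the resulting image locally
```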
An S2I example
As an example, I have a ‘hello world’ python application that I want to run as a docker instance:
```python
from flask import Flask

app = Flask( __name__ )

@app.route('/', methods=[ 'GET' ])
def show():
    return 'hello_world!\n', 200, { 'Content-Type': 'text/plain' }
```
This code resides as a package in `myapp/__init__.py`.

I first need to document any additional non-OS libraries this application needs. This is a simple list of packages stated in a `requirements.txt` file in the project root directory:
```
gunicorn
flask
```
I then need to provide a start application so S2I knows what to kick off. For a Python application, this is a file called `wsgi.py`:
```python
#!/usr/bin/env python3
import sys

from myapp import app as application

if __name__ == '__main__':
    application.run( host='0.0.0.0' )
```
You can test the code by running `python wsgi.py`. This will start Flask's built-in web server:
```shell
$ python wsgi.py
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
```
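The `application` object exported by `wsgi.py` is simply a WSGI callable, which is why both Flask's development server and gunicorn can host it. As a stdlib-only sketch of that contract — the handler below is hypothetical and stands in for the Flask app, it is not part of the project's code:

```python
# A minimal WSGI callable: the same interface the Flask `application`
# object in wsgi.py satisfies, and what gunicorn loads in the built image.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello_world!\n']

# Exercise the callable directly, without starting any server:
def fake_start_response(status, headers):
    print(status)                      # prints: 200 OK

body = b''.join(application({}, fake_start_response))
print(body.decode(), end='')           # prints: hello_world!
```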
Create the application in OpenShift
Presuming OpenShift has been given permission to access the project, and you are logged into an OpenShift instance, you can now run:
```shell
oc new-app . --name=myapp
```
This should create the application, build the Docker image and instantiate a Docker container service. We have detailed this in an earlier blog post.
Again - remember to add and link access keys before you run the `oc new-app` command.
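You can follow the build from the CLI as well; a sketch, assuming the `myapp` name used above:

```shell
$ oc logs -f bc/myapp    # follow the output of the current build
$ oc get pods            # the running container shows up once deployed
```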
Language detection
The S2I framework will, as of writing, detect the following languages based on the presence of the following files:
| Language | Files |
|---|---|
| jee | pom.xml |
| nodejs | app.json, package.json |
| perl | cpanfile, index.pl |
| php | composer.json, index.php |
| python | requirements.txt, setup.py |
| ruby | Gemfile, Rakefile, config.ru |
| scala | build.sbt |
| golang | Godeps, main.go |
The language detection process can be overridden by prepending the source repository argument with a Docker image path and a `~`. This is detailed in the documentation.
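For example, to force a specific builder image instead of relying on detection — the image name here is illustrative:

```shell
$ oc new-app centos/python-36-centos7~. --name=myapp
```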
The devil is in the details
Now, the simplicity of S2I makes it a toolkit you really want to use. But real-world challenges often plot against simple, elegant solutions. What if one of the libraries you need only exists on GitHub and not in a package index? What if you want to install this tiny other service you just really need? What if you want to use the HTTP/2 protocol, which is not supported by the Green Unicorn (gunicorn) web server?
What if I want to run unit-tests on the finished build before pushing the image into production?
You can solve these problems in several ways. This post will describe two of them.
A simple test framework
You can test a Flask application through the pytest framework. In our case, a simplified test setup starts with a file called `tests/conftest.py` with the following content:
```python
import pytest
import sys

sys.path.append( '.' )
import myapp

@pytest.fixture(scope="module")
def client():
    myapp.app.testing = True
    client = myapp.app.test_client()
    return client
```
We also need a file called `tests/test_hello.py` with the following content:
```python
def test_hello( client ):
    ret = client.get( '/' )
    assert ret.status_code == 200
    assert ret.mimetype == 'text/plain'
    assert b'hello' in ret.data
```
Finally, you need to add `pytest` to the `requirements.txt` file. You can then run the py.test framework from the project root directory:
```shell
$ py.test
================================================== test session starts ===================================================
platform linux2 -- Python 2.7.12, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /home/larso/jobb/git/s2i-test, inifile:
collected 1 items

tests/test_hello.py .

================================================ 1 passed in 0.02 seconds ================================================
```
So how do I trigger this from an OpenShift build process?
Build Hooks
OpenShift added (around version 3.2) support for build hooks through the `postCommit` field in the build configuration. As the name implies, this hook is executed after the last layer of the image has been written and before the image is pushed to a registry. The postCommit step runs in a separate container, so any changes it makes are not persisted in the image. The field can be used with either a `script` or a `command` value:
```yaml
postCommit:
  script: "py.test"
```
The `script` value will run the command in a shell context. Any return value from the script or command other than 0 will mark the build as failed.
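In context, the hook sits under `spec` in the build configuration. The alternative `command` form, which runs without a shell, looks like this — an abbreviated sketch, with only the hook-related fields shown:

```yaml
# Abbreviated BuildConfig fragment; surrounding fields omitted
spec:
  postCommit:
    # command + args form, executed without a shell:
    command: ["py.test"]
    args: ["-q"]
```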
The OpenShift CLI has an `oc set build-hook` command that can simplify setting up this step:
```shell
oc set build-hook bc/myapp --post-commit --script="py.test"
```
Now, starting a new build in OpenShift will trigger the postCommit hook, which in turn will start the py.test framework. A successful return value will tell the builder process to push the image to the Docker registry and put it into production.
S2I hooks
The designers of S2I have documented how to hook custom code into the building of new S2I builder images. We can piggyback on these features to solve the above and similar problems. This approach is more complex, but on the flip side you do not need any OpenShift-specific configuration for this variant to work.
S2I will look for scripts for five predefined steps in the build process. Only two of these steps are, in my opinion, interesting to hook into, namely the build and execute steps.
- build step - this step builds the image and pushes it to the docker registry. It is controlled by the `assemble` script.
- execute step - this step controls the processes that run in the container and is controlled by the `run` script.
For our case, running tests, the build step is the natural candidate for hooking into.
S2I will look for build (and run) scripts in the following order:
- A script given in the OpenShift application build configuration
- A script in the .s2i/bin directory
- A script given via a label (`io.openshift.s2i.scripts-url`) in the default image.
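The third option looks like this inside a builder image's Dockerfile — the path is an example, not a fixed convention:

```dockerfile
# Points S2I at the directory holding the assemble/run scripts in the image
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
```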
By far the easiest way for us is to create a new `.s2i/bin/assemble` script. Since this script in our case replaces the default one, we also have to supply any installation and build instructions, which are of course defined in the original language-specific S2I image.
In our case, an example `assemble` script would look like this:
```shell
#!/bin/bash
set -e
shopt -s dotglob

echo "---> Installing application source ..."
mv /tmp/src/* ./

echo "---> Upgrading pip to latest version ..."
pip install -U pip setuptools wheel

echo "---> Installing dependencies ..."
pip install -r requirements.txt

# set permissions for any installed artifacts
fix-permissions /opt/app-root

echo "---> Testing the code ..."
py.test
```
Most of the content here is lifted from the Python S2I assemble script, but I have dropped code I don't need (e.g. Django handling code). In addition, I have added a py.test line at the bottom. The return value of this test then becomes the return value of the assemble script. As with the postCommit build hook, any return value other than 0 will fail the build and stop the CI pipeline for this commit.
Take note that any steps performed by the test framework here will change the production image.