Decreasing dependency on DevOps and minimizing the time spent on building an efficient CI/CD process

Alexander Goida
Sep 28, 2020 · 10 min read

When your DevOps engineers are busy and developers are blocked waiting for a new or updated CI/CD pipeline, you might start looking for a way to save time and effort on automation. In this article I’ll tell you how we established the CI/CD process for more than 20 microservices, plus the infrastructure components on top of them, in about a month while developing business features in parallel. The process now allows us to build and deploy almost instantly to any of our environments. Our CI/CD process is not finished yet, but the ideas it’s built around have stabilized and allow easy enhancement: we can integrate new projects into CI/CD almost instantly, add environments, or extend the build and deployment procedure for multiple projects with little effort. Nevertheless, this article is not a “How To” manual. I’ll tell you about the practices which helped us build our CI/CD with little dependency on DevOps and little time spent on its development.

Background

Before getting to the main subject of the article I want to give you a short overview of the system we’re developing. Our project is an expert system which processes real-time data streams to raise warnings for our domain experts. You can read a bit more here: Building an automatic fraud prevention system from scratch. Roughly, it looks as follows:

Our system

There are multiple incoming streams feeding the ETL block. Each block represents a set of microservices and utility routines. Data is ingested into the system via real-time streams and transmitted internally via a messaging system. We have more than 20 microservices and their number is growing. We use Kafka, PostgreSQL, Siddhi [1] routines, .NET Core routines, jobs, and a number of 3rd party tools. Almost everything can be deployed automatically or semi-automatically, including infrastructure components such as Kafka and Jenkins. We have five Linux nodes in our cluster on which we deploy our components. The average overall throughput is about 2,000 messages per second and can triple in the current setup. There is no extensive fault tolerance in the system, but thanks to the ideas it’s built around it can be quickly restarted without much harm. We still have a lot on our roadmap though.

Now let’s move to the main topic of the article: deployment processes and procedures which we have.

Our cluster

Problems

Let’s take another look at what we were trying to solve. We have a cluster of five Linux machines and we want easy deployment (and upgrading) of infrastructure components (Jenkins, Grafana/InfluxDB/Telegraf, monitoring agents) and enterprise components (microservices, PostgreSQL, Kafka) on each of them. We want to monitor all deployed components in the cluster and be able to scale them out on demand. We also want identical environments: production, experimentation (also called beta or staging) and development.

A note about Kubernetes
We don’t need Kubernetes at the current phase of the project’s evolution. It introduces complexity in managing node workloads and enforces a more complicated deployment approach from the developer’s perspective, which would delay going into production. For those reasons we decided to avoid Kubernetes from the beginning. We still consider using it for the point when we need automatic scaling and orchestration. When we do need to migrate to Kubernetes, we’ll do it quite quickly with the Kompose [2] tool.
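
To give an idea of how cheap that migration path would be, here is a minimal sketch (file and folder names are illustrative, not taken from our repositories):

# translate an existing docker-compose definition into Kubernetes manifests
kompose convert -f docker-compose.yml -o k8s/
# and apply them to a cluster
kubectl apply -f k8s/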

In short, our practices could be described as follows, and I’ll cover them with more details further in this article:

  • We use Docker and don’t use Kubernetes from the beginning. We always deploy in containers and use them even locally.
  • Our Jenkins pipelines are agnostic to project specifics. They are based on templates which all projects follow.
  • We use the externalized configuration pattern. [3,4]
  • We organize Git repositories in a way that is convenient for automation, and all projects use the same approach.

And last but not least, we avoid developing custom solutions if the industry already offers one. This adds effort on analysis, learning and prototyping throughout our development, but we also keep the learning curve in mind: it may turn out that writing a custom solution takes significantly less time than learning and deploying a 3rd party one.

So, we have four topics. Let’s take a closer look at each of them.

1. Docker and Managing the Cluster

We follow the IaC approach [5] based on using multiple docker-compose files. All our components (Jenkins, PostgreSQL, microservices, everything else) are containerized, and a Git repository holds the setup of every machine in the cluster, described in docker-compose files. If we need to customize a 3rd party component, we create our own Docker image. For example, we use custom configurations of PostgreSQL, Jenkins and Grafana. Starting from their official images on Docker Hub, we build our own Dockerfiles which apply the necessary configuration during the image build. For PostgreSQL we apply custom pg_hba.conf and postgresql.conf files.

FROM postgres:12.4
LABEL maintainer="Me"
USER root
# set default password for postgres user
ENV POSTGRES_PASSWORD=secret_password
# copy files to be used when launching a container
COPY pg_hba.conf /etc/postgresql/pg_hba.conf
COPY postgresql.conf /etc/postgresql/postgresql.conf
# initialization script executed by the official entrypoint on first run
COPY setup_postgre.sh /docker-entrypoint-initdb.d/setup_postgre.sh
RUN chmod 0666 /docker-entrypoint-initdb.d/setup_postgre.sh
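
The article doesn’t show the content of setup_postgre.sh; as a hedged sketch, such an init script could simply move the custom configuration files into the data directory, since the official postgres entrypoint executes everything in /docker-entrypoint-initdb.d/ on the first initialization:

#!/bin/bash
# sketch of setup_postgre.sh (an assumption, not the actual script):
# apply the custom configuration files copied by the Dockerfile
set -e
cp /etc/postgresql/pg_hba.conf "$PGDATA/pg_hba.conf"
cp /etc/postgresql/postgresql.conf "$PGDATA/postgresql.conf"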

We installed plugins for Jenkins and Grafana and set up their security. Relying on the official Docker images of infrastructure components saved us a lot of time on building the infrastructure.

FROM jenkins/jenkins:2.249.1
LABEL maintainer="Me"
USER root
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
COPY security.groovy /usr/share/jenkins/ref/init.groovy.d/security.groovy
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
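
The security.groovy and plugins.txt files are not listed in this article. For illustration, a typical init script of this kind (a sketch, not our exact file) creates an admin account and disables anonymous access:

// sketch of security.groovy: executed by Jenkins from init.groovy.d on startup
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def instance = Jenkins.getInstance()

// local user database with a single admin account
def realm = new HudsonPrivateSecurityRealm(false)
realm.createAccount("admin", "secret_password")
instance.setSecurityRealm(realm)

// any logged-in user has full control, anonymous users have none
def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
instance.setAuthorizationStrategy(strategy)

instance.save()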

All such Dockerfiles and docker-compose files for infrastructure components are stored in a repository. They are ready to be built and “almost” ready to deploy. Why “almost”? Because the final docker-compose definition should reference the latest versions of the Docker images. We don’t use the “latest” tag because it’s not informative, among other reasons. [6]
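
A per-node docker-compose file then looks roughly as follows (service names and versions here are illustrative); the image tags are the “gaps” the deployment pipeline fills with concrete versions:

# sketch of a per-node docker-compose definition (illustrative)
version: "3.7"
services:
  jenkins:
    image: custom-jenkins:2.249.1
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
  postgres:
    image: custom-postgres:${POSTGRES_TAG}   # patched by the pipeline, never "latest"
    volumes:
      - pg_data:/var/lib/postgresql/data
volumes:
  jenkins_home:
  pg_data: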

Managing and Monitoring Docker nodes with Portainer

We got a significant boost in monitoring our cluster after incorporating the Portainer tool. [7] You can browse images, containers, volumes and networks, and perform almost all the operations you would otherwise do while connected to a host via SSH.

Screenshot from the internet

The tool also allows downloading images and migrating images and whole stacks from one node to another. It connects to Docker via its API and other interfaces and provides almost the same experience as using the console. The difference is that this tool plays the role of a single “door” to your cluster, so you don’t need to connect to five machines separately. Moreover, its latest release also allows you to control Kubernetes clusters.

2. Jenkins & Build pipelines

We are managing 33 components in total across all our nodes (including infrastructure components). They can be built and deployed at any time, and this is an easy operation. In Jenkins we have only two pipelines for each type of component: #1 for building and #2 for deployment. This is possible because we enforce a uniform project and repository structure, which decouples the CI/CD process from component specifics. For example, if a project has a different build process, the build is described in a build.sh file located at a standard place in the component’s repository. The Jenkins pipeline simply runs build.sh and doesn’t deal with the specifics of building the component. The build.sh script is created by the developers who know how to build their project. There is a kind of agreement between the CI/CD pipeline and the component which I personally call “a delivery contract”. As long as the contract is followed, the CI/CD process stays decoupled from project specifics.

In our case, since we use containerization, the Dockerfile plays the role of the build.sh file. It abstracts the specifics away from the rest of the process.
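
To make the “delivery contract” concrete, here is a hedged sketch of what such a build.sh could look like in our containerized setup (parameter names and paths are assumptions); Jenkins only needs to know that it exists at a standard location and produces an image:

#!/bin/bash
# sketch of build.sh at a standard location (illustrative)
set -e
SERVICE_NAME=$1
VERSION=$2
docker build -f ./src/Dockerfile -t "${SERVICE_NAME}:${VERSION}" ./src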

Build pipeline

We can build any of our components from any branch at any moment, producing a Docker image tagged with a version (SemVer). We don’t have a shared Docker registry at the moment, so images are built on the nodes where they are supposed to be deployed later. The build pipeline has three main input parameters: #1 the component name, #2 the branch name and #3 the node to build on.

Generalized Jenkins pipeline for building a component

The output of the pipeline is an image named after the component and tagged with a version based on the Git tag. Depending on the branch we build from, we apply different modifications to the final version. For example, a build from the master branch is a release version and, following SemVer, gets just numbers: MAJOR.MINOR.PATCH.BUILD. If the component is built from another branch (develop or a feature branch), we mark it with the pre-release version MAJOR.MINOR.PATCH-dev.BUILD. To always have information about the origin of an image, we set environment variables in the Dockerfile describing this information so that it’s accessible to the routines at runtime. If the image is built locally, the suffix “-local” is applied. For example, a front-end application can show the version and branch on QA deployments by reading these environment variables from code.

// in Jenkins build pipeline
docker build -f ${dockerfile} -t ${service}:${container_version} \
--build-arg BRANCH=${branch_to_clone} \
--build-arg TAG=${container_version} \
./src

# in Dockerfile
ARG BRANCH
ENV SERVICE_BUILD_BRANCH ${BRANCH:-local}
ARG TAG
ENV SERVICE_BUILD_TAG ${TAG:-local}
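
The way the version string itself is derived is not shown above; a hedged sketch of that logic in the build pipeline could look like this (variable names are assumptions):

// sketch: derive the image tag from the latest Git tag, the branch and the build number
def gitTag = sh(script: 'git describe --tags --abbrev=0', returnStdout: true).trim()  // e.g. 1.4.2
def suffix = (branch_to_clone == 'master') ? '' : '-dev'
def container_version = "${gitTag}${suffix}.${env.BUILD_NUMBER}"  // 1.4.2.17 or 1.4.2-dev.17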

Jenkins has a master node and five slave nodes. There is a trick here: one of the slave nodes is registered on the same machine as the master, and we build only on slaves. So the master node itself doesn’t participate in the CI/CD process, but the slave node which is physically the same machine does. This gives us identical nodes as part of the dynamic configuration of Jenkins pipelines: we can specify the node where an image should be built.
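
In a declarative pipeline this boils down to a parameterized agent label, roughly like this (parameter and label names are illustrative):

// sketch: the target node is a pipeline parameter, so the same pipeline runs on any slave
pipeline {
    parameters {
        choice(name: 'BUILD_NODE', choices: ['node1', 'node2', 'node3', 'node4', 'node5'],
               description: 'Slave node to build on')
    }
    agent { label params.BUILD_NODE }
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -f ./src/Dockerfile -t my-service:1.0.0.${BUILD_NUMBER} ./src'
            }
        }
    }
}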

Deployment pipeline

When Jenkins deploys a component it doesn’t deal with the component’s repository anymore.

Generalized Jenkins pipeline for deploying a component

All deployment scenarios in our system can be roughly described as applying a docker-compose manifest to Docker. The Jenkins pipeline automatically patches template docker-compose files and applies them to different nodes. Each type of component has its own docker-compose template, but the patching process is the same for all of them. We have a Git repository where we store all the templates and Jenkins pipelines. If we need to change something in a deployment scenario, we describe it in the files, push them to the repository and trigger the corresponding pipelines in Jenkins.
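
As an illustration (service and variable names are assumptions, not our actual templates), a microservice template leaves the image tag and the target environment as gaps for the pipeline to fill:

# sketch of a docker-compose template for a microservice; the pipeline provides
# the variables, e.g.: SERVICE_TAG=1.4.2.17 TARGET_ENV=prod docker-compose up -d
version: "3.7"
services:
  my-service:
    image: my-service:${SERVICE_TAG}
    env_file:
      - ./env/${TARGET_ENV}.env
    restart: unless-stopped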

This lets us create a CI/CD pipeline for a new project in a minute or less, deploy any state of our development almost instantly at any time, and see how it works in an environment identical to production.

3. Externalized Configuration

Another very helpful practice which saves a lot of time on delivery is extracting configuration from our components. This doesn’t always mean the configuration is stored outside the Docker image though. Beyond purely external configuration, it also means that an executable routine (a .NET Core microservice, a Siddhi pipeline, PostgreSQL or anything else) can be configured with a single flag. For example, we have different endpoint configurations depending on the environment. A .NET Core application has a hierarchy of appsettings files which override each other depending on the environment. Thus we have an ETL service which can be deployed in multiple configurations to transmit different streams, all from the same code base. We apply configuration via a hierarchical structure of configuration files which override each other. This is implemented at the framework level, so developers just need to follow the structure and put settings into the corresponding files. We could use external storage for configuration, but that would increase the system’s complexity: deployment would become more involved and developers would lose easy control over the configuration. Keeping that control at the same simple level would require additional processes and tools in our ecosystem. This is not necessary at the beginning, and that is the main reason to avoid it in the early stages.
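
As a hedged example (the manifest below is illustrative, not our actual one), switching a .NET Core service between environments comes down to a single variable in the docker-compose definition; the host then layers appsettings.{Environment}.json over the base appsettings.json:

# sketch: one image, different environments selected by a single flag
version: "3.7"
services:
  etl-service:
    image: etl-service:1.4.2.17
    environment:
      - DOTNET_ENVIRONMENT=Production   # ASPNETCORE_ENVIRONMENT for web applications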

In short, everything which could potentially change from environment to environment is extracted into external configuration. Some of it is stored in databases, some in config files. Our nodes have a marker which determines whether this is the production or the experimental environment, and components use only this information about the current environment to acquire the necessary configuration. [3]

4. Git Repository Organization

We enforce a certain repository structure to amplify all the aspects mentioned above. First of all, we have a single repository for every deployable component. This lets us agree that all required files and scripts are located at specific places regardless of the project type.

root
│   compose.sh
│   docker-build.sh
│   docker-compose.yml
│
└───src
        Dockerfile

Thus the Dockerfile is always under the src folder and can be used to build a ready-to-use image. The files for local build and run are located at the root level; they don’t participate in the CI/CD process but are useful for a quick local setup. The docker-compose file includes all the necessary dependencies of the component, so you just deploy it to get a functional local system and test your component.
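
For illustration, the root-level docker-compose.yml could look roughly like this (service names and versions are assumptions); docker-compose up -d --build then gives a functional local system:

# sketch of the root-level docker-compose.yml for local runs (illustrative)
version: "3.7"
services:
  my-service:
    build:
      context: ./src
      dockerfile: Dockerfile
    depends_on:
      - kafka
      - postgres
  postgres:
    image: postgres:12.4
    environment:
      - POSTGRES_PASSWORD=secret_password
  zookeeper:
    image: bitnami/zookeeper:3.6.2
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka:2.6.0
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper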

The repository with docker-compose templates has a main docker-compose template and additional YAML files which override the main one. Roughly, it looks as follows. For example, we have a Jenkins pipeline to build and deploy any microservice:

root
└───microservice
    └───compose
        compose.sh
        container-tag.env
        docker-compose.yml
        └───env
            prod.env
            beta.env
    Jenkinsfile.build.groovy
    Jenkinsfile.deploy.groovy

The Jenkins pipeline operates on these files and variables to fill the gaps which are left in them on purpose. In the end, we can easily extend this system and migrate to Kubernetes, or to the cloud, if we ever need to.
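
The actual contents of compose.sh and the .env files are not shown here; as a hedged sketch (names are assumptions), the script exports the values that fill the gaps and applies the template on the current node:

#!/bin/bash
# sketch of compose.sh from the templates repository (illustrative)
set -e
export TARGET_ENV=$1                                # prod | beta
export $(grep -v '^#' container-tag.env | xargs)    # e.g. SERVICE_TAG=1.4.2.17
docker-compose -f docker-compose.yml up -d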

References

  1. Emerging Need of Cloud Native Stream Processing
  2. Translate a Docker Compose File to Kubernetes Resources
  3. External Configuration Store pattern
  4. Build Once, Run Anywhere: Externalize Your Configuration
  5. What Is Infrastructure as Code? How It Works, Best Practices, Tutorials
  6. What’s Wrong With The Docker :latest Tag?
  7. Managing and creating containers with Portainer
