Deploying a StatsD-Grafana Bundle to Docker and Kubernetes

Real Example

First of all, let’s see in action what this article is about. In this example you’ll start two applications: one sends requests to the other, which performs a heavy CPU-bound task; both send StatsD metrics which you’ll see in Grafana.

$ git clone https://github.com/xtrmstep/DockerNetSample
$ cd .\DockerNetSample\
$ kubectl apply -f .\src\StatsDServer\k8s-deployment.yaml
$ .\build.ps1
$ .\run.ps1
$ kubectl delete svc stats-tcp
$ kubectl delete svc stats-udp
$ kubectl delete deployment stats

You have just deployed StatsD, InfluxDB and Grafana locally to Kubernetes.

My recent discovery is that DockerHub has a lot of useful, already prepared bundles, and the one I’ll describe here is one of them. You can find the GitHub repo with the image here. The image contains InfluxDB, Telegraf (StatsD) and Grafana, already configured to work together. There are two common ways to deploy images: #1 using docker-compose and #2 using Kubernetes. Both help you deploy several images at once and control network parameters such as port mapping. I’ll cover docker-compose briefly and tell more about deployment to Kubernetes. Recently it became possible to deploy docker-compose files to Kubernetes, so this would be the most beneficial approach.

Deployment with Docker-Compose

Docker-compose is distributed with Docker for Windows, but you need to check which file version you can use in your YAML. You need to know the version of your installed Docker and check the compatibility matrix on this page. I have Docker version 19.03.5 installed, so I could use file version 3.x, but I'll use 2 for compatibility reasons. All the information we need — the image name and ports — is already described on the bundle's page.

version: '2'
services:
  stats:
    image: samuelebistoletti/docker-statsd-influxdb-grafana:latest
    ports:
      - "3003:3003"
      - "3004:8888"
      - "8086:8086"
      - "8125:8125/udp"

$ docker-compose -f docker-compose.yaml up -d
$ docker-compose stop

Deployment to Kubernetes

The deployment to Kubernetes looks a bit more complicated at first glance, since you need to define a deployment, services and other parameters. I found a small hack which saved me some time on writing YAML files for Kubernetes: first I deploy everything with the minimum required configuration to the cluster using the kubectl utility, and then I extract the objects as YAML configuration and tweak it to my needs.

Deployment with kubectl

So let’s create a deployment from the image. The name stats is the name I gave to this deployment object; you can use another name.

$ kubectl run stats --image=samuelebistoletti/docker-statsd-influxdb-grafana:latest --image-pull-policy=Always
$ kubectl expose deployment stats --type=LoadBalancer --port=3003 --target-port=3003
$ kubectl get all

Extracting YAML configuration

At this moment the deployment is not what I need, but I can use it as a draft for the real one. Extract the YAML configuration:

$ kubectl get deployment,service stats -o yaml --export > exported.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stats
spec:
  replicas: 1
  selector:
    matchLabels:
      run: stats
  template:
    metadata:
      labels:
        run: stats
    spec:
      containers:
      - image: samuelebistoletti/docker-statsd-influxdb-grafana:latest
        imagePullPolicy: Always
        name: stats
---
apiVersion: v1
kind: Service
metadata:
  name: stats-tcp
spec:
  type: LoadBalancer
  ports:
  - name: grafana
    protocol: TCP
    port: 3003
    targetPort: 3003
  - name: influxdb-admin
    protocol: TCP
    port: 3004
    targetPort: 8888
  - name: influxdb
    protocol: TCP
    port: 8086
    targetPort: 8086
  selector:
    run: stats
---
apiVersion: v1
kind: Service
metadata:
  name: stats-udp
spec:
  type: LoadBalancer
  ports:
  - name: telegraf
    protocol: UDP
    port: 8125
    targetPort: 8125
  selector:
    run: stats

$ kubectl delete svc stats
$ kubectl delete deployment stats
$ kubectl apply -f k8s-deployment.yaml

StatsD protocol

The StatsD protocol is very simple, and you can even build your own client library if you really need to. Below you’ll find a summary, and here you can read more about StatsD datagrams. StatsD supports metrics such as counters, timings, gauges, etc.

counter.name:1|c
timing.name:320|ms
gauge.name:333|g
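Because a datagram is just a short UDP packet, a client library fits in a few lines. Here is a minimal sketch in Python (the class name is illustrative; it assumes the bundle's Telegraf/StatsD listener on UDP port 8125, as mapped earlier):

```python
import socket

class MiniStatsD:
    """Tiny fire-and-forget StatsD client speaking the datagram format above."""

    def __init__(self, host: str = "localhost", port: int = 8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def _send(self, datagram: str) -> None:
        # UDP send; no reply is expected and losses are silently tolerated
        self.sock.sendto(datagram.encode("ascii"), self.addr)

    def counter(self, bucket: str, value: int = 1) -> None:
        self._send(f"{bucket}:{value}|c")

    def timing(self, bucket: str, ms: int) -> None:
        self._send(f"{bucket}:{ms}|ms")

    def gauge(self, bucket: str, value: int) -> None:
        self._send(f"{bucket}:{value}|g")

client = MiniStatsD()
client.counter("counter.name")       # sends "counter.name:1|c"
client.timing("timing.name", 320)    # sends "timing.name:320|ms"
client.gauge("gauge.name", 333)      # sends "gauge.name:333|g"
```

Since UDP is connectionless, the sends succeed even when nothing is listening — which is exactly why StatsD instrumentation is safe to leave in production code.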

Metrics in .NET Core Service

You’ll need the NuGet package JustEat.StatsD. Its description on GitHub is complete and simple, so just follow it to set up your own configuration and registration in the IoC container.

In this example, the metrics answer questions such as:

  • How many requests are waiting before the ThreadPool gives them a thread?
  • How long does the operation take?
  • How quickly is the service exhausted?
public override async Task<FactorialReply> Factorial(FactorialRequest request, ServerCallContext context)
{
    // Obtain the number of available threads in the ThreadPool
    ThreadPool.GetAvailableThreads(out var availableThreads, out _);

    // The number of available threads is an example of a gauge metric;
    // send it to StatsD (using the JustEat.StatsD NuGet package)
    _stats.Gauge(availableThreads, "GaugeAvailableThreads");

    // Increment a counter metric for incoming requests
    _stats.Increment("CountRequests");

    // _stats.Time() measures how long _semaphoreSlim.WaitAsync() was waiting
    // and sends the timing metric to StatsD
    await _stats.Time("TimeWait", async f => await _semaphoreSlim.WaitAsync());
    try
    {
        // Again, measure the duration of the calculation and send it to StatsD
        var result = await _stats.Time("TimeCalculation", async t => await CalculateFactorialAsync(request.Factor));

        // Increment a counter of processed requests
        _stats.Increment("CountProcessed");

        return new FactorialReply
        {
            Result = result
        };
    }
    finally
    {
        _semaphoreSlim.Release();
    }
}
