Nedtes

My ideas, thoughts and questions, but mostly just software

Learning Nix by Example: Building FFmpeg 4.0

What's Nix?

Nix is a functional package manager that guarantees reproducible, independent, and isolated builds for each package. It's portable across operating systems. Long story short, it's a great package manager for your next software project. It started as part of a PhD thesis and grew from there into the system it is today.

However, there are a few issues that a lot of people have complained about, and the top one I have seen is the learning curve. So we will go through an example step by step. I chose compiling FFmpeg 4.0 since it covers all the basic concepts of writing a Nix package and should give you a good idea of how to package other software on your own.

Compiling FFmpeg 4.0

Let’s get started by creating our nix expressions.

mkdir nix_packages && cd nix_packages
touch ffmpeg.nix default.nix

We will be working on these files as we progress.

Importing the source files

Nix fetches the sources for your package itself. They can come from a compressed archive, a git repository, or a repo on GitHub.

ffmpeg.nix
{ stdenv, fetchurl}:

stdenv.mkDerivation rec {
  name = "ffmpeg-${version}";
  version = "4.0";
  src = fetchurl {
    url = "https://github.com/FFmpeg/FFmpeg/archive/ace829cb45cff530b8a0aed6adf18f329d7a98f6.tar.gz";
    sha256 = "1v3ji6r2mvr4h1f9c4ml95h7zpm3hna8zh73m7769na8gvakfhcg";
  };
}

You can see stdenv and fetchurl at the top. This is how Nix declares dependencies; stdenv and fetchurl are themselves derivations in the Nix world. We use stdenv.mkDerivation to create the ffmpeg derivation, and fetchurl to fetch the sources from GitHub. Here we are choosing to pull a compressed archive.

One question you might have now is how to know the sha256 beforehand. Nix has you covered: it ships with a utility called nix-prefetch-url that fetches the file and gives you its hash.
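For example, to get the hash for the archive we used above (assuming Nix is installed):

```shell
# Downloads the archive and prints its sha256 in Nix's base-32 encoding,
# ready to paste into fetchurl:
nix-prefetch-url https://github.com/FFmpeg/FFmpeg/archive/ace829cb45cff530b8a0aed6adf18f329d7a98f6.tar.gz
```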

Now, let’s say you don’t want to fetch the archive and want to fetch the git repo.

ffmpeg.nix
{ stdenv, fetchgit}:

stdenv.mkDerivation rec {
  name = "ffmpeg-${version}";
  version = "4.0";
  src = fetchgit {
    url = "https://git.ffmpeg.org/ffmpeg.git";
    rev = "ace829cb45cff530b8a0aed6adf18f329d7a98f6";
    sha256 = "1055za9dndx9rjq1zhs8ll8v3d8zf7m23frl8q11d7caki7s7yw3";
  };
}

To get the hash this time, there's also a Nix utility, but you have to install it first: nix-env -i nix-prefetch-git, then run nix-prefetch-git --rev ace829cb45cff530b8a0aed6adf18f329d7a98f6 https://git.ffmpeg.org/ffmpeg.git

Adding other ffmpeg dependencies

In order to compile ffmpeg we will of course need a lot of other libraries that ffmpeg depends on, and we can also choose to enable or disable ffmpeg's own libraries. We set which internal ffmpeg libraries to build next.

ffmpeg.nix
{ stdenv
, fetchgit
/* FFmpeg libraries to build, boolean values */
, avcodecLibrary ? true # Build avcodec library
, avdeviceLibrary ? true # Build avdevice library
, avfilterLibrary ? true # Build avfilter library
, avformatLibrary ? true # Build avformat library
, avresampleLibrary ? true # Build avresample library
, avutilLibrary ? true # Build avutil library
, postprocLibrary ? true # Build postproc library
, swresampleLibrary ? true # Build swresample library
, swscaleLibrary ? true # Build swscale library
, gplLicensing ? true # GPL licensing, needed by postproc
}:


let
  inherit (stdenv.lib) enableFeature;
in

stdenv.mkDerivation rec {
  name = "ffmpeg-${version}";
  version = "4.0";
  src = fetchgit {
    url = "https://git.ffmpeg.org/ffmpeg.git";
    rev = "ace829cb45cff530b8a0aed6adf18f329d7a98f6";
    sha256 = "1055za9dndx9rjq1zhs8ll8v3d8zf7m23frl8q11d7caki7s7yw3";
  };

  configureFlags = [
    (enableFeature avcodecLibrary "avcodec")
    (enableFeature avdeviceLibrary "avdevice")
    (enableFeature avfilterLibrary "avfilter")
    (enableFeature avformatLibrary "avformat")
    (enableFeature avresampleLibrary "avresample")
    (enableFeature avutilLibrary "avutil")
    (enableFeature (postprocLibrary && gplLicensing) "postproc")
    (enableFeature swresampleLibrary "swresample")
    (enableFeature swscaleLibrary "swscale")
  ];

}

You may notice we are using enableFeature. It's a simple function that takes a boolean value and a library name and generates the corresponding configure flag.

For the first case, it will take true (the value of avcodecLibrary) and avcodec, and generate --enable-avcodec.
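If enableFeature were written as a shell function (a hypothetical stand-in for the Nix one, just to make the mapping concrete), it would look like:

```shell
# Hypothetical shell equivalent of nixpkgs' enableFeature:
# true  -> --enable-<name>
# false -> --disable-<name>
enable_feature() {
  if [ "$1" = "true" ]; then
    echo "--enable-$2"
  else
    echo "--disable-$2"
  fi
}

enable_feature true avcodec    # prints --enable-avcodec
enable_feature false postproc  # prints --disable-postproc
```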

Now, we add some external dependencies. Note that every dependency you mention needs an already existing nix expression, and you refer to it by that expression's name. To find out whether your dependency exists, you can check all-packages.nix.
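You can also query from the command line whether nixpkgs already carries an expression (assumes Nix is installed; the attribute name here is just an example):

```shell
# -q query, -a available packages, -P print the attribute path
nix-env -qaP x264
```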

ffmpeg.nix
{ stdenv
, fetchgit
/*
 * FFmpeg libraries to build
 */
, avcodecLibrary ? true # Build avcodec library
, avdeviceLibrary ? true # Build avdevice library
, avfilterLibrary ? true # Build avfilter library
, avformatLibrary ? true # Build avformat library
, avresampleLibrary ? true # Build avresample library
, avutilLibrary ? true # Build avutil library
, postprocLibrary ? true # Build postproc library
, swresampleLibrary ? true # Build swresample library
, swscaleLibrary ? true # Build swscale library
/*
 * External libraries (need to be matching nix expressions)
 */
, alsaLib ? null # Alsa in/output support
, avisynth ? null # Support for reading AviSynth scripts
, bzip2 ? null
, celt ? null # CELT decoder
, fdkaacExtlib ? false
, fdk_aac ? null # Fraunhofer FDK AAC de/encoder
, fontconfig ? null # Needed for drawtext filter
, freetype ? null # Needed for drawtext filter
, frei0r ? null # frei0r video filtering
, fribidi ? null # Needed for drawtext filter
, game-music-emu ? null # Game Music Emulator
, gnutls ? null
, gsm ? null # GSM de/encoder
, libjack2 ? null # Jack audio (only version 2 is supported in this build)
, ladspaH ? null # LADSPA audio filtering
, lame ? null # LAME MP3 encoder
, libass ? null # (Advanced) SubStation Alpha subtitle rendering
, libbluray ? null # BluRay reading
, libbs2b ? null # bs2b DSP library
, libcaca ? null # Textual display (ASCII art)
, libdc1394 ? null, libraw1394 ? null # IIDC-1394 grabbing (ieee 1394)
, libiconv ? null
, libmfx ? null # Hardware acceleration vis libmfx
, libmodplug ? null # ModPlug support
, libogg ? null # Ogg container used by vorbis & theora
, libopus ? null # Opus de/encoder
, libsndio ? null # sndio playback/record support
, libssh ? null # SFTP protocol
, libtheora ? null # Theora encoder
, libv4l ? null # Video 4 Linux support
, libvdpau ? null # Vdpau hardware acceleration
, libvorbis ? null # Vorbis de/encoding, native encoder exists
, libvpx ? null # VP8 & VP9 de/encoding
, libwebp ? null # WebP encoder
, libX11 ? null # Xlib support
, libxcb ? null # X11 grabbing using XCB
, libxcbshmExtlib ? true # X11 grabbing shm communication
, libxcbxfixesExtlib ? true # X11 grabbing mouse rendering
, libxcbshapeExtlib ? true # X11 grabbing shape rendering
, libXv ? null # Xlib support
, lzma ? null # xz-utils
, nvenc ? false, nvidia-video-sdk ? null # NVIDIA NVENC support
, openal ? null # OpenAL 1.1 capture support
, opencore-amr ? null # AMR-NB de/encoder & AMR-WB decoder
, openglExtlib ? false
, mesa ? null # OpenGL rendering
, openjpeg_2_1 ? null # JPEG 2000 de/encoder
, opensslExtlib ? false, openssl ? null
, libpulseaudio ? null # Pulseaudio input support
, rtmpdump ? null # RTMP[E] support
, samba ? null # Samba protocol
, SDL2 ? null
, shine ? null # Fixed-point MP3 encoder
, soxr ? null # Resampling via soxr
, speex ? null # Speex de/encoder
, vid-stab ? null # Video stabilization
, wavpack ? null # Wavpack encoder
, x264 ? null # H.264/AVC encoder
, x265 ? null # H.265/HEVC encoder
, xavs ? null # AVS encoder
, xvidcore ? null # Xvid encoder, native encoder exists
, zeromq4 ? null # Message passing
, libaom ? null # AV1 support (nix expression written later in this post)
, zlib ? null
, gplLicensing ? true # GPL licensing, needed by several flags below
}:


let
  inherit (stdenv.lib) enableFeature;
in

stdenv.mkDerivation rec {
  name = "ffmpeg-${version}";
  version = "4.0";
  src = fetchgit {
    url = "https://git.ffmpeg.org/ffmpeg.git";
    rev = "ace829cb45cff530b8a0aed6adf18f329d7a98f6";
    sha256 = "1055za9dndx9rjq1zhs8ll8v3d8zf7m23frl8q11d7caki7s7yw3";
  };

  configureFlags = [
    # Licensing options (The original file has them as build options but I removed that for simplicity https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/ffmpeg-full/default.nix#L6-L8)
    (enableFeature true "gpl")
    (enableFeature true "version3")
    (enableFeature true "nonfree")
    "--enable-shared"
    "--disable-static"
    # Internal libraries
    (enableFeature avcodecLibrary "avcodec")
    (enableFeature avdeviceLibrary "avdevice")
    (enableFeature avfilterLibrary "avfilter")
    (enableFeature avformatLibrary "avformat")
    (enableFeature avresampleLibrary "avresample")
    (enableFeature avutilLibrary "avutil")
    (enableFeature (postprocLibrary && gplLicensing) "postproc")
    (enableFeature swresampleLibrary "swresample")
    (enableFeature swscaleLibrary "swscale")
    # External libraries
    (enableFeature (avisynth != null) "avisynth")
    (enableFeature (bzip2 != null) "bzlib")
    (enableFeature (celt != null) "libcelt")
    (enableFeature (fdkaacExtlib && gplLicensing) "libfdk-aac")
    (enableFeature (fontconfig != null) "fontconfig")
    (enableFeature (freetype != null) "libfreetype")
    (enableFeature (frei0r != null && gplLicensing) "frei0r")
    (enableFeature (fribidi != null) "libfribidi")
    (enableFeature (game-music-emu != null) "libgme")
    (enableFeature (gnutls != null) "gnutls")
    (enableFeature (gsm != null) "libgsm")
    (enableFeature (ladspaH != null) "ladspa")
    (enableFeature (lame != null) "libmp3lame")
    (enableFeature (libass != null) "libass")
    (enableFeature (libbluray != null) "libbluray")
    (enableFeature (libbs2b != null) "libbs2b")
    (enableFeature (libcaca != null) "libcaca")
    (enableFeature (libdc1394 != null && libraw1394 != null) "libdc1394")
    (enableFeature (libiconv != null) "iconv")
    (enableFeature (libmfx != null) "libmfx")
    (enableFeature (libmodplug != null) "libmodplug")
    (enableFeature (libopus != null) "libopus")
    (enableFeature (libssh != null) "libssh")
    (enableFeature (libtheora != null) "libtheora")
    (enableFeature (libv4l != null) "libv4l2")
    (enableFeature (libvdpau != null) "vdpau")
    (enableFeature (libvorbis != null) "libvorbis")
    (enableFeature (libvpx != null) "libvpx")
    (enableFeature (libwebp != null) "libwebp")
    (enableFeature (libX11 != null && libXv != null) "xlib")
    (enableFeature (libxcb != null) "libxcb")
    (enableFeature libxcbshmExtlib "libxcb-shm")
    (enableFeature libxcbxfixesExtlib "libxcb-xfixes")
    (enableFeature libxcbshapeExtlib "libxcb-shape")
    (enableFeature (lzma != null) "lzma")
    (enableFeature nvenc "nvenc")
    (enableFeature (openal != null) "openal")
    (enableFeature (opencore-amr != null) "libopencore-amrnb")
    (enableFeature openglExtlib "opengl")
    (enableFeature (openjpeg_2_1 != null) "libopenjpeg")
    (enableFeature (opensslExtlib && gplLicensing) "openssl")
    (enableFeature (libpulseaudio != null) "libpulse")
    (enableFeature (rtmpdump != null) "librtmp")
    (enableFeature (shine != null) "libshine")
    (enableFeature (samba != null) "libsmbclient")
    (enableFeature (SDL2 != null) "sdl2")
    (enableFeature (soxr != null) "libsoxr")
    (enableFeature (speex != null) "libspeex")
    (enableFeature (vid-stab != null) "libvidstab") # Actual min. version 2.0
    (enableFeature (wavpack != null) "libwavpack")
    (enableFeature (x264 != null) "libx264")
    (enableFeature (x265 != null) "libx265")
    (enableFeature (xavs != null) "libxavs")
    (enableFeature (xvidcore != null) "libxvid")
    (enableFeature (zeromq4 != null) "libzmq")
    (enableFeature (zlib != null) "zlib")
    (enableFeature (libaom != null) "libaom")
  ];

}

Now the configuration is all set. Let's move on to the build step. We specify two types of build inputs here:

nativeBuildInputs: dependencies that are only needed at build time, such as compilers and code generators; they run on the build machine.

buildInputs: dependencies the derivation needs at run time, such as the libraries the result links against.
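As a minimal sketch of the distinction (using names from nixpkgs), pkgconfig and texinfo only run while building, while zlib ends up linked into the result:

```nix
# Sketch only: build-time tools vs. libraries the output needs.
stdenv.mkDerivation {
  name = "example-1.0";
  src = ./.;
  nativeBuildInputs = [ pkgconfig texinfo ]; # run on the build machine
  buildInputs = [ zlib ];                    # needed by the built binaries
}
```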

ffmpeg.nix
{ stdenv
, pkgconfig
, perl
, texinfo
, yasm
...
, zlib ? null
}:


let
  inherit (stdenv.lib) enableFeature;
in

stdenv.mkDerivation rec {
  name = "ffmpeg-${version}";
  version = "4.0";
  src = fetchgit {
    url = "https://git.ffmpeg.org/ffmpeg.git";
    rev = "ace829cb45cff530b8a0aed6adf18f329d7a98f6";
    sha256 = "1055za9dndx9rjq1zhs8ll8v3d8zf7m23frl8q11d7caki7s7yw3";
  };

  configureFlags = [
   ...
  ];

  nativeBuildInputs = [ perl pkgconfig texinfo yasm ];

  # All the dependencies imported at the top of the file and enabled in configureFlags
  buildInputs = [
    bzip2 celt fontconfig freetype frei0r fribidi game-music-emu gnutls gsm
    libjack2 ladspaH lame libass libbluray libbs2b libcaca libdc1394 libmodplug
    libogg libopus libssh libtheora libvdpau libvorbis libvpx libwebp libX11
    libxcb libXv lzma openal openjpeg_2_1 libpulseaudio rtmpdump opencore-amr makeWrapper
    samba SDL2 soxr openglExtlib mesa speex vid-stab wavpack x264 x265 xavs xvidcore zeromq4 zlib libaom libv4l openssl
  ];

  # Parallel building is off by default; enable it to speed up the build
  enableParallelBuilding = true;
}

Now that should be enough for us to go. But not quite: FFmpeg 4.0 comes with initial support for AV1, which uses libaom. Unfortunately, there's no existing nix expression for libaom, so we will write it ourselves. You should be able to understand the next nix expression on your own.

libaom.nix
{ stdenv, fetchgit, yasm, perl, cmake,  pkgconfig }:

stdenv.mkDerivation rec {
  name = "libaom-0.1.0";

  src = fetchgit {
    url = "https://aomedia.googlesource.com/aom";
    rev = "105e9b195bb90c9b06edcbcb13b6232dab6db0b7";
    sha256 = "1fl2sca4df01gyn00s0xcwwirxccfnjppvjdrxdnb8f2naj721by";
  };

  buildInputs = [ perl yasm  ];
  nativeBuildInputs = [ cmake pkgconfig ];

  cmakeFlags = [
    "-DCONFIG_UNIT_TESTS=0"
  ];

}

Next we go to our default.nix file, where we explicitly pass the libaom expression into our ffmpeg expression, since it doesn't exist in nixpkgs. Now using libaom inside ffmpeg.nix won't error.

default.nix
with import <nixpkgs> {};

rec {
  libaom    = pkgs.callPackage ./libaom.nix {  };
  ffmpeg    = pkgs.callPackage ./ffmpeg.nix { inherit libaom; };
}

Finally, let's install it:

nix-env -f nix_packages -iA ffmpeg

NOTE: Credit to the ffmpeg-full expression, which I used as a base (and edited to be beginner friendly).

Got any feedback? Let me know, please.

Deploy to Kubernetes From Travis

Introduction

When you deploy applications to your Kubernetes cluster, you want it done as securely as possible. That means granting the deployment job as few permissions as possible, scoped to a specific namespace for better isolation. To do that we could use Google IAM (if you are running on GKE), or we can use the Kubernetes-native way.

Creating the deployment user

We first need to create a service account with specific permissions (based on our deployment script) in a specific namespace. For this example the account is named travis-echo in the echo namespace, and we assume our deployment updates Deployment, Service, and Ingress objects as well as ConfigMaps.

travis-sa.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: travis-echo
  namespace: echo
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: echo
  name: travis-deploy-role
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["extensions", "apps"]
  resources: ["ingresses"]
  verbs: ["get", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: travis-role-binding
  namespace: echo
subjects:
- kind: ServiceAccount
  name: travis-echo
  namespace: echo
roleRef:
  kind: Role
  name: travis-deploy-role
  apiGroup: rbac.authorization.k8s.io

We then create this ServiceAccount and apply its permissions:

kubectl apply -f travis-sa.yml

Getting user credentials

Now that the account is created, we need the credentials Kubernetes generated for it in order to use it.

A small script can extract them; it should output three variables: CA_CRT, USER_TOKEN and CLUSTER_ENDPOINT. Go to your Travis settings and set these variable names with their values.
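Under the hood, the token and certificate come back base64-encoded from the ServiceAccount's secret (for example via `kubectl -n echo get secret <secret-name> -o jsonpath='{.data.token}'`); the script's main job is to decode them. A stand-in sketch of that decode step, with a fake token:

```shell
# Stand-in for the base64-encoded value kubectl would return:
ENCODED_TOKEN=$(printf 'not-a-real-token' | base64)

# The decode step the extraction script performs:
USER_TOKEN=$(printf '%s' "$ENCODED_TOKEN" | base64 --decode)
echo "$USER_TOKEN"   # prints: not-a-real-token
```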

Deploying

Now that we have everything set up, the last thing we need is to actually deploy our manifests to Kubernetes.

The last part of our .travis.yml looks like this:

.travis.yml
deploy:
  provider: script
  script: scripts/k8s-deploy.sh
  on:
    branch: master

And our deploy script looks like this:

k8s-deploy.sh
#!/usr/bin/env bash
set -o pipefail
set -o errexit
set -o nounset
# set -o xtrace

echo "$CA_CRT" | base64 --decode > "${HOME}/ca.crt"

# Register the cleanup before doing any work, so the cert is removed even on failure
function cleanup {
    printf "Cleaning up...\n"
    rm -vf "${HOME}/ca.crt"
    printf "Cleaning done.\n"
}
trap cleanup EXIT

kubectl config set-cluster our-k8s-cluster --embed-certs=true --server="${CLUSTER_ENDPOINT}" --certificate-authority="${HOME}/ca.crt"
kubectl config set-credentials travis-echo --token="${USER_TOKEN}"
kubectl config set-context travis --cluster=our-k8s-cluster --user=travis-echo --namespace=echo
kubectl config use-context travis
kubectl config current-context


kubectl apply -f service.yml
kubectl apply -f deployment.yml
kubectl apply -f ingress.yml

And that's it! We are good to go for our amazing deployments from Travis. If you're too lazy and want this whole thing automated in one script, feel free to tweet me @kiloreux.

Docker Cheat Sheet

Running a container in interactive mode

docker run -it --name CONTAINER_NAME IMAGE_NAME /bin/bash

Running health check on container

Check port 80 on localhost every 20s; consider the container unhealthy after 5 failed checks.

docker run --name CONTAINER_NAME --health-cmd="curl http://127.0.0.1 || exit 1" --health-interval=20s --health-retries=5 IMAGE_NAME

Freeze/Unfreeze a container

docker pause CONTAINER_NAME
docker unpause CONTAINER_NAME

Limit container CPU

When limiting CPU usage with --cpu-shares, the value is a relative weight: 1024 is the default and represents a full CPU.

docker run -it --cpu-shares 512 IMAGE_NAME

Combine a relative share with a hard cap of two CPUs:

docker run -it --cpu-shares 512 --cpus 2 IMAGE_NAME

Limit container memory

Limit the container memory to 1GB

docker run -m 1073741824 IMAGE_NAME

Use another image as cache

Sometimes you are on a new machine and you don’t want your image to build from scratch. You can pull a previous image from your registry and use its cache. This will accelerate your build time and allow you to hit production as soon as possible.

docker pull IMAGE_TAG:latest
docker build -t IMAGE_TAG:COMMIT_HASH --cache-from IMAGE_TAG:latest .

Deleting all networks

docker network prune -f

Removing all exited containers

docker rm $(docker ps -f status=exited -q)

Running a Robots Cluster With ROS and Kubernetes

Kubernetes brings a lot of value to distributed systems, including self healing, secrets management, and tons of other things I am too lazy to mention. One area still in development where the Kubernetes magic would bring a lot of value is robotics.

I have used ROS for more than 2 years, including in my thesis, and handling communication between robots, especially when you have a dozen of them, can be a pain. Since I have been using Kubernetes in my day-to-day job lately, I thought of experimenting with ROS on Kubernetes. But before I dive in, here's a quick introduction to the two of them.

Kubernetes:

  • There are Nodes, those are worker machines (previously minions). This could be a VM or a physical machine
  • All nodes in a cluster are managed by a master node.
  • Pods are the smallest building blocks of Kubernetes. A Pod is a group of one or many containers.
  • A node can have one or multiple Pods running inside it.

ROS:

ROS uses a computational graph model with a pub/sub model for communicating data.

  • Nodes in ROS are processes that perform a specific computational job. For example, one node can be responsible for moving the robot's wheels and another for localization.
  • The Master is the main component of a running ROS system. It handles name resolution for nodes and maintains communication between them.
  • Messages: Nodes communicate by sending messages. ROS provides a set of default message types as well as the ability to write custom ones.
  • Topics are the transport system between Nodes. A topic carries only one message type, and it can have subscribing nodes and publishing nodes.
  • To put it short, a Node will publish a Message through a Topic.
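On a plain ROS installation (assuming a sourced ROS Kinetic environment), the talker/listener pair we containerize below would look like:

```shell
roscore &                                               # start the ROS master
rostopic pub -r 1 such_topic std_msgs/String "hello" &  # publish at 1 Hz
rostopic echo such_topic                                # subscribe and print messages
```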

Having read about the broad architecture of both systems, we can come up with an idea of how to run ROS on Kubernetes.

  • We will taint the Kubernetes master node and run the ROS master on it.
  • One Kubernetes Node will mean one robot, and we run one Pod per Node.
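The ROS master deployment below selects nodes with a `dedicated: master` label, so that label has to exist first (the node name here is hypothetical):

```shell
kubectl label nodes my-master-node dedicated=master
```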

Here’s how we run ROS Master on kubernetes:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ros-kinetic-master-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ros-master
    spec:
      nodeSelector:
        dedicated: master
      containers:
      - name: ros-kinetic
        image: ros:kinetic-perception-xenial
        args: ["roscore"]
        ports:
        - containerPort: 11311
---
apiVersion: v1
kind: Service
metadata:
  name: master-service
spec:
  clusterIP: None
  ports:
  - port: 11311
    targetPort: 11311
    protocol: TCP
  selector:
    app: ros-master
  type: ClusterIP

You can see that we expose port 11311, which is the default ROS master port. Now any ROS node can connect to the master. So far so good: we have a running ROS master that can manage ROS nodes for us.

Now we have a problem: Services in Kubernetes need to expose specific ports and have no support for random ports.

That's why we need to add a headless service. With a headless service, each pod gets its own DNS entry (for example listener.default.svc.cluster.local for the service below), which lets us address specific pods.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: listener
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: listener
    spec:
      containers:
      -  name: listener
         image: ros:kinetic-perception-xenial
         args: ["rostopic","echo","such_topic"]
         env:
          - name: ROS_HOSTNAME
            value: listener
          - name: ROS_MASTER_URI
            value: http://master-service.default.svc.cluster.local:11311
---
apiVersion: v1
kind: Service
metadata:
  name: listener
spec:
  clusterIP: None
  ports:
  - port: 11311
    targetPort: 11311
    protocol: TCP
  selector:
    app: listener
  type: ClusterIP

And for the talker we run

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: talker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: talker
    spec:
      containers:
      -  name: talker
         image: ros:kinetic-perception-xenial
         env:
          - name: ROS_HOSTNAME
            value: talker
          - name: ROS_MASTER_URI
            value: http://master-service.default.svc.cluster.local:11311
         args:
            - rostopic
            - pub
            - "-r"
            - "1"
            - such_topic
            - std_msgs/String
            - "Kubernetes is the best"
---
apiVersion: v1
kind: Service
metadata:
  name: talker
spec:
  clusterIP: None
  ports:
  - port: 11311
    targetPort: 11311
    protocol: TCP
  selector:
    app: talker
  type: ClusterIP

That's it for now. You can check the listener pod's logs to see the echoed messages.

Installing Kubernetes on Single Node OVH Server

I recently had to set up a single-node Kubernetes cluster on OVH, mostly for learning purposes. Note that running a single-node Kubernetes cluster in production is strongly discouraged. I will be using a 4GB RAM dedicated server for this tutorial; it's recommended to have enough room for the pods you intend to run.

Assuming that you have a fresh server, first of all, let’s get started by installing Docker on your server.

sudo apt-get update
# Install the kernel extras to enable docker aufs support
sudo apt-get -y install linux-image-extra-$(uname -r)

# The official docker install script
wget -qO- https://get.docker.com/ | sh

# Add user to the docker group. To allow sudoless use of Docker
sudo usermod -aG docker $USER

You need to log out and log back in. Then you can test your Docker install by running the hello-world container.

docker run hello-world

If you get the hello-world message from Docker, we are good to continue.

Now we are going to install kubernetes on the server.

sudo apt-get update && sudo apt-get install -y apt-transport-https
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main"| sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

Now that Kubernetes is installed, we need to initialize our cluster:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Next, we configure our kubeconfig, which is used to authenticate cluster operations:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For the pods to communicate with each other effectively, we need a networking layer for our cluster. We will be using flannel (developed by CoreOS) in this tutorial.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The master node doesn't allow pods to be scheduled on it by default, so we need to lift that restriction, since our single node has to run everything.

kubectl taint nodes --all node-role.kubernetes.io/master-

You can now run your pods on this single node cluster for learning and experimenting with kubernetes.
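To sanity-check the setup, you can list the node and the system pods:

```shell
kubectl get nodes                   # the node should report Ready
kubectl get pods --all-namespaces   # flannel and kube-system pods should be Running
```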

Here's the full script that you can use to set up the single-node cluster: gist