Monday, October 25, 2021

Booting the plan9 installer in libvirt

After dealing with Unix and Linux systems for so many years, I wanted to have a look at Plan9, the post-Unix operating system. I am using 9front, which is the most active Plan9 variant. Booting an iso in libvirt is as simple as with any Linux or BSD distribution, as Plan9 supports the virtio-net and virtio-scsi virtual devices.

virt-install \  
--connect qemu:///session \
--name 9front \
--ram 512 \
--vcpus 2 \
--disk path=$PWD/9front.qcow2,size=4,bus=scsi,format=qcow2 \
--controller type=scsi,model=virtio-scsi \
--cdrom=9front.iso \
--virt-type kvm \
--os-variant generic \
--boot uefi  

Once the CD boots, just press enter to accept the detected defaults, which should work fine, except for the screen detection: you have to choose vga or lcd, otherwise the installer hangs trying to detect vesa modes.

Once the iso is booted you end up in a live CD environment where you can install the OS. The environment looks at the same time similar to and very different from Unix, which is where the challenge lies!
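Once the installation is finished and the machine is shut down, the installed system can then be started from disk. A minimal sketch, reusing the 9front domain name from the virt-install call above (virt-viewer is assumed to be installed for the graphical console):

virsh --connect qemu:///session start 9front
virt-viewer --connect qemu:///session 9front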

Wednesday, May 26, 2021

Testing that a CIFS / Samba share is browsable from the command line

Since I always forget the right syntax:

smbclient --user <username> --list <servername>

smbclient is available in the appropriately named smbclient package.
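To go one step further and browse a given share interactively, something like the following should work; the share name is of course a placeholder:

smbclient --user <username> //<servername>/<sharename>

Once connected, ls, get and put behave much like a bare-bones FTP client.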

Friday, May 7, 2021

Opensource Operating Systems for 16/32 bits wonders

Debian has not provided official releases for the Atari/m68k since the 2000s, but there is still an ongoing porting effort around the debian-m68k mailing list to make Debian run on these machines (and even to port the Rust compiler to m68k, hey John Paul :)

The EmuTOS project has released version 1.0 of its GPL Atari TOS clone, providing better hard disk support and, thanks to binary compatibility, allowing you to play the myriad of games released on that platform during the 80s and 90s.

Finally there is FreeMiNT, an Atari-specific Unix kernel and OS, also under the GPL, bringing true multitasking and memory protection at the cost of lower software compatibility. It is currently at release 1.18 and still slowly developed.

As the hardware itself is getting old and overpriced, my next Atari machine will be an FPGA-based one, the MiST. Basically an FPGA is a re-programmable hardware platform: instead of having the transistors and logic gates of a chipset burned into the silicon, the circuit description is loaded at power-on, and is thus reconfigurable. The MiST can also reproduce the hardware of an Amiga and of most of the 16-bit heroes of the late 80s.

Having a MiST available will allow me to reuse my joysticks and MIDI gear, have more RAM, and run a GPL OS that I can update without having to burn EEPROMs. Retrocomputing meets open source, a good match. Note to self: those Atari-related projects have a disposition for complicated mixed-case names.

Tuesday, March 30, 2021

Manually install a single node Kubernetes cluster on Debian

Debian has work-in-progress packages for Kubernetes, which work well enough for a testing and learning environment. Bootstrapping a cluster with the kubeadm deployer using these packages is not that hard, and is similar to the upstream kubeadm documentation.

Install necessary packages in a VM

Install a throwaway VM with Vagrant.

apt install vagrant vagrant-libvirt
vagrant init debian/testing64

Bump the RAM and CPU of the VM; Kubernetes needs at least 2 GB of RAM and 2 cores.

awk  -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm|  vm.memory=2048 end"}' Vagrantfile
awk  -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm|  vm.cpus=2 end"}' Vagrantfile
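As a quick sanity check, the two inserted config.vm.provider lines should now show up in the Vagrantfile:

grep libvirt Vagrantfile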

Start the VM, login, update the package index.

vagrant up
vagrant ssh
sudo apt update

Install a container engine; here we use docker.io, but we could also use containerd (both are packaged in Debian) or cri-o.

sudo apt install --yes --no-install-recommends docker.io curl

Install the Kubernetes binaries. This will install kubelet, the system service which will manage the containers, and kubectl, the user/admin tool to manage the cluster.

sudo apt install --yes kubernetes-{node,client} containernetworking-plugins
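A quick check that the Debian packages put working binaries in place (the reported versions will depend on the state of the Debian archive):

kubelet --version
kubectl version --client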

Although it is not technically mandatory, we will use kubeadm, the most popular installer to create a Kubernetes cluster. Kubeadm is not packaged in Debian, so we have to download an upstream binary.

wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz

sha512sum kubernetes-server-linux-amd64.tar.gz
28529733bf34f5d5b72eabe30a81df98cc7f8e529590f807745cd67986a2c5c3eb86cebc7ecbcfc3df3c50416306e5d150948f2483933ea46c2aebaeb871ea8f  kubernetes-server-linux-amd64.tar.gz

sudo tar --directory=/usr/local/sbin --strip-components 3 -xaf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubeadm
sudo chmod +x /usr/local/sbin/kubeadm 
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Add a kubelet systemd unit:

RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sudo tee /etc/systemd/system/kubelet.service
sudo systemctl enable kubelet

and a default config file for kubeadm:

RELEASE_VERSION="v0.4.0"
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Finally, we need to help kubelet find the components needed for container networking:

echo 'KUBELET_EXTRA_ARGS="--cni-bin-dir=/usr/lib/cni"' | sudo tee /etc/default/kubelet
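Since we edited the unit and its drop-in after enabling the service, reloading systemd's configuration does not hurt; this is only a precaution, as kubeadm will restart kubelet anyway:

sudo systemctl daemon-reload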

Create a cluster

Initialize a cluster with kubeadm: this will download the container images for the Kubernetes control plane (= the brain of the cluster), and start the containers via the kubelet service. Yes, a good part of Kubernetes itself runs in containers.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Follow the instructions from the kubeadm output, and verify you have a single node cluster, with the status NotReady.

kubectl get nodes 
NAME      STATUS     ROLES                  AGE    VERSION
testing   NotReady   control-plane,master   9m9s   v1.20.5
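While the node is NotReady it is worth glancing at the control plane pods; in my experience the coredns pods stay Pending until a network plugin is installed:

kubectl get pods --namespace=kube-system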

At that point you should also have a bunch of containers running on the node:

sudo docker ps --format '{{.Names}}'
k8s_kube-apiserver_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_POD_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_etcd_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
k8s_POD_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
...

The kubelet service also needs an external network plugin to get the cluster in Ready state.

sudo systemctl status kubelet
...
Mar 28 09:28:43 testing kubelet[9405]: E0328 09:28:43.958059    9405 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Let’s add that network plugin. Download the flannel network plugin definition, and schedule flannel to run on all nodes of your cluster:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply --filename=kube-flannel.yml
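You can watch the flannel pods come up before re-checking the node; filtering by name is the simplest approach, since the exact namespace and labels depend on the upstream manifest:

kubectl get pods --all-namespaces | grep flannel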

After a dozen seconds or so your node should be in Ready status.

kubectl get nodes 
NAME      STATUS   ROLES                  AGE   VERSION
testing   Ready    control-plane,master   16m   v1.20.5

Deploy a test application

Our node is now in Ready status, but we cannot run applications on it, since we only have a master node, an administrative node which by default cannot run user applications.

kubectl describe node testing | grep ^Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

Let’s allow node testing to run user applications:

kubectl taint node testing node-role.kubernetes.io/master-
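Re-running the Taints query from above should now report <none>:

kubectl describe node testing | grep ^Taints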

Deploy a nginx container:

kubectl run my-nginx-pod --image=docker.io/library/nginx --port=80 --labels="app=http-content" 
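Verify that the pod reaches the Running state; the label we just set doubles as a selector:

kubectl get pods --selector=app=http-content --output=wide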

Create a Kubernetes service to access this pod externally:

cat service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-k8s-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: http-content 

kubectl create --filename service.yaml
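The NodePort will be reachable on the node's IP address; if you do not remember it, kubectl can display it in the INTERNAL-IP column:

kubectl get nodes --output=wide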

Access the service via the node's IP address:

curl 192.168.121.63:30000
...
Thank you for using nginx.

Notes

I will try to turn this blog post into a Debian Wiki article, or maybe merge it into the kubernetes-node documentation. Blog posts get outdated and disappear; wiki and project docs live longer.

Playing with cri-o, a container runtime built for Kubernetes

Kubernetes is moving away from docker to alternative container engines presenting a smaller core with just the functionality needed. The two most popular alternatives are:

  • containerd, a subset of docker, used for instance in Google Kubernetes Engine
  • cri-o, a new implementation of a container engine, used for instance in Red Hat's Kubernetes offering (OpenShift)

These alternatives are meant to be used programmatically via a unix domain socket, and therefore have a limited command line interface.

Let's play around in a VM.

Install a throwaway VM with Vagrant

apt install vagrant vagrant-libvirt
vagrant init debian/testing64

Start the VM, install dependencies

vagrant up
vagrant ssh
sudo apt update
sudo apt install --yes curl gnupg jq

Install cri-o the container engine

sudo bash
export OS=Debian_Testing VERSION=1.20

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/libcontainers.list
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
apt update
apt install cri-o cri-o-runc containernetworking-plugins conntrack

Verify it is running properly

systemctl restart cri-o
systemctl status cri-o
...
Started Container Runtime Interface for OCI (CRI-O).

Say hello to cri-o via its unix domain socket

curl --silent  --unix-socket /var/run/crio/crio.sock http://localhost/info | jq 
{
  "storage_driver": "overlay",
  "storage_root": "/var/lib/containers/storage",
  "cgroup_driver": "systemd",
  "default_id_mappings": {
    "uids": [
      {
        "container_id": 0,
        "host_id": 0,
        "size": 4294967295
      }
    ],
    "gids": [
      {
        "container_id": 0,
        "host_id": 0,
        "size": 4294967295
      }
    ]
  }
}

Install crictl, a Kubernetes debugging tool for containers

wget --directory-prefix=/tmp https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
tar -xaf /tmp/crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/sbin/
chmod +x /usr/local/sbin/crictl
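crictl probes a list of default socket paths, but it is cleaner to point it explicitly at the cri-o socket used above. A minimal /etc/crictl.yaml, assuming the default cri-o socket location:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/crio/crio.sock
EOF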

crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": true,
        "reason": "",
        "message": ""
      }
    ]
  }
}

From there on you can create a container following the examples in https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
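As a quick taste before diving into that documentation, pulling and listing an image already works at this point; the image name is just an example:

crictl pull docker.io/library/nginx:latest
crictl images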

Sunday, March 28, 2021

Switching to FAI (Fully Automatic Installer) for creating Vagrant Boxes

Have you heard of Vagrant? It is a command line tool to get ready-to-use, disposable Virtual Machines (VMs) from an online catalog. Vagrant works on Linux, FreeBSD, Windows and Mac, and you only need three commands to get a shell prompt in a VM (see the Debian wiki).
The online catalog has images for the majority of the OSes you can think of.

We've been building the Debian disk images for Vagrant (available on https://app.vagrantup.com/debian/) with a number of tools over the years:

  • then packer, which wraps qemu and the Debian installer CD with automated boot parameters and a preseed file,
  • and then fai-diskimage, again a wrapper over debootstrap, using loopback mounts.

Basically there are two categories of tools for building a disk image:

  • those using an emulator and the OS installer in an automated way
  • those using debootstrap/pacstrap/rpmstrap on a loopback-mounted filesystem

Personally I prefer the first approach, as you can run the build process as a non-root user and you benefit from all the quality work that went into the official installer.
However this requires virtualization, and nested virtualization if your build process itself runs inside a VM. Unfortunately nested virtualization is not that common: for instance my cloud provider, and the VMs used for Debian Continuous Integration, do not support it.
As the maintainer of fai-diskimage is a Debian Developer (hey MrFAI! :) and as the debian-cloud folks are using it for the Amazon, Azure and Google Cloud Debian images, it made sense to switch to fai-diskimage for now. The fai-diskimage learning curve is a bit steep, as you have to learn many internal concepts before using it, but once you get the bits connected it works quite well.

Tuesday, March 9, 2021

Displaying CSV files in a readable way on the terminal

Until this week I did not know about the column command.

$ head -5 zillow.csv
"Index", "Living Space (sq ft)", "Beds", "Baths", "Zip", "Year", "List Price ($)"
 1, 2222, 3, 3.5, 32312, 1981, 250000
 2, 1628, 3, 2,   32308, 2009, 185000
 3, 3824, 5, 4,   32312, 1954, 399000
 4, 1137, 3, 2,   32309, 1993, 150000

It turns out this file is much more readable with a good pipe (and a large screen):

$ head -5 zillow.csv | column --table --separator ,
"Index"   "Living Space (sq ft)"   "Beds"   "Baths"   "Zip"     "Year"   "List Price ($)"
 1        2222                     3        3.5       32312     1981     250000
 2        1628                     3        2           32308   2009     185000
 3        3824                     5        4           32312   1954     399000
 4        1137                     3        2           32309   1993     150000

column is part of util-linux and is thus available in essentially every Linux distribution.
Example file taken from this example list.
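When the table is wider than the terminal, piping into a pager that does not wrap lines keeps things readable; less -S is the only addition to the pipe above:

column --table --separator , zillow.csv | less -S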

Saturday, January 30, 2021

Playing Tetris over serial console

Today I played Tetris over a serial console connection, on a Vax 4000 running OpenBSD. I haven't felt that 1337 in a long time.
I am going to get rid of that Vax system though. If that's your thing, contact me privately.

asciinema in its greatness:

Sunday, January 3, 2021

How to move a single VM between cloud providers

I have been running a small Debian VM for a decade now, which I use for basic web and mail hosting. Since most of the VM setup was done manually rather than following the Infrastructure as Code pattern, it is faster to simply copy the filesystem when switching providers than to reconfigure everything.
The steps involved are:

1. create a backup of the filesystem using tar or rsync, excluding dynamic content
rsync  --archive \
    --one-file-system --numeric-ids \
    --rsh "ssh -i private_key" root@server:/ /local_dir

or
tar -cvpzf backup.tar.gz \
--numeric-owner \
--exclude=/backup.tar.gz \
--one-file-system /


Notice here the --one-file-system switch, which avoids backing up the content of mount points like /proc or /dev.
If you have extra partitions with a mounted filesystem, like /boot or /home, you need to add a separate backup for those.

2. create a new VM on the new cloud provider, verify you have a working console access, and power it off.
3. boot on the new cloud provider a rescue image
4. partition the disk image on the new provider.
5. mount the new root partition, and untar your backup on it. You could for instance push the local backup via rsync, or download the tar archive using https.
6. update network configuration and /etc/fstab
7. chroot into the target system, and reinstall grub (a rough sketch follows below)
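For the last step, the commands look roughly like this from the rescue system; the device names /dev/vda and /dev/vda1 are assumptions and will differ between providers:

mount /dev/vda1 /mnt
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt /bin/bash
grub-install /dev/vda
update-grub
exit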

This works surprisingly well, and if you made your backup locally, you can test the whole procedure by building a test VM from that backup. Just replace the debootstrap step with a command like tar -xvpzf /path/to/backup.tar.gz -C /mount_point --numeric-owner

Using this procedure, I moved from Hetzner (link in French) to Digital Ocean, from Digital Ocean to Vultr, and now back to Hetzner.