Monday, August 29, 2022

Moving this blog to https://00formicapunk00.wordpress.com/

I switched from blogger.com, the Google blogging platform, to the hosted wordpress.com service of Automattic, the main authors of the WordPress blog engine.

I thus gain:

I lose:

  • a free CNAME redirect using my own domain name
  • a bit of advertising-free space: the blog at wordpress.com has a prominent header indicating I am using the free plan, but I am OK with that so far.

The new blog address is https://00formicapunk00.wordpress.com/

Wednesday, May 25, 2022

One of the strangest bugs I have ever seen on Linux

Networking starts when you log in as root and stops when you log off!

The SELinux messages can be ignored, I guess, but we can clearly see the devices being activated (it's a Linux bridge).

If you have an explanation, I am curious.

Monday, October 25, 2021

Booting the plan9 installer in libvirt

After dealing with Unix and Linux systems for so many years, I wanted to have a look at Plan9, the post-Unix operating system. I am using 9front, the most active Plan9 variant. Booting an ISO in libvirt is as simple as with any Linux or BSD distribution, as Plan9 supports the virtio-net and virtio-scsi virtual devices.

virt-install \  
--connect qemu:///session \
--name 9front \
--ram 512 \
--vcpus 2 \
--disk path=$PWD/9front.qcow2,size=4,bus=scsi,format=qcow2 \
--controller type=scsi,model=virtio-scsi \
--cdrom=9front.iso \
--virt-type kvm \
--os-variant generic \
--boot uefi  

Once the CD boots, just press enter to accept the detected defaults, which should work fine, except for the screen detection: you have to choose vga or lcd, otherwise the installer hangs trying to detect VESA modes.

Once the ISO is booted you land in a live CD environment where you can install the OS. The environment looks at the same time similar to, and very different from, Unix, which is where the challenge is!
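
If you shut the VM down, you can start it again and reattach to its display with the usual libvirt tooling (assuming the VM name used above):

virsh --connect qemu:///session start 9front
virt-viewer --connect qemu:///session 9front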

Wednesday, May 26, 2021

Testing that a CIFS / Samba share is browsable from the command line

Since I always forget the right syntax:

smbclient --user <username> --list <servername>

smbclient is available in the appropriately named smbclient package.
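
For instance, with a hypothetical user alice and server fileserver (both placeholders), the output looks roughly like this:

smbclient --user alice --list fileserver

        Sharename       Type      Comment
        ---------       ----      -------
        public          Disk      Shared documents
        IPC$            IPC       IPC Service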

Friday, May 7, 2021

Open source operating systems for 16/32 bit wonders

Debian has not provided official releases for the Atari/m68k since the 2000s, but there is still an ongoing porting effort in the debian-m68k mailing list to make Debian run on m68k (and even an effort to port the Rust compiler to m68k, hey John Paul :)

The EmuTOS project has released version 1.0 of its GPL Atari TOS clone, providing better hard disk support and, thanks to binary compatibility, allowing you to play the myriad of games released on that platform during the 80s and 90s.

Finally there is FreeMiNT, an Atari-specific Unix kernel and OS, also under the GPL, bringing true multitasking and memory protection at the cost of lower software compatibility. It is currently at release 1.18, and still slowly developed.

As the hardware itself is getting old and overpriced, my next Atari machine will be an FPGA, the MiST. Basically an FPGA is a re-programmable hardware platform. Instead of having the transistors and logic gates of a chipset burned into the silicon, the circuit description is loaded at power-on, and is thus reconfigurable. The MiST can also reproduce the hardware of an Amiga and most of the 16 bit heroes of the late 80s.

Having a MiST available will allow me to reuse my joysticks and MIDI gear, have more RAM, and run a GPL OS that I can update without having to burn EEPROMs. Retrocomputing meets open source, a good match. Note to self: those Atari-related projects have a disposition for complicated mixed-case names.

Tuesday, March 30, 2021

Manually install a single node Kubernetes cluster on Debian

Debian has work-in-progress packages for Kubernetes, which work well enough for a testing and learning environment. Bootstrapping a cluster with the kubeadm deployer using these packages is not that hard, and is similar to the upstream kubeadm documentation.

Install necessary packages in a VM

Install a throwaway VM with Vagrant.

apt install vagrant vagrant-libvirt
vagrant init debian/testing64

Bump the RAM and CPU of the VM; Kubernetes needs at least 2 gigs and 2 cores.

awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm| vm.memory=2048; end"}' Vagrantfile
awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm| vm.cpus=2; end"}' Vagrantfile
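
The top of the Vagrantfile should now look something like this (each awk run inserts its line right below the Vagrant.configure line, so the cpus line ends up first; Vagrant merges the two provider blocks):

Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |vm| vm.cpus=2; end
  config.vm.provider :libvirt do |vm| vm.memory=2048; end
  config.vm.box = "debian/testing64"
  ...
end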

Start the VM, login, update the package index.

vagrant up
vagrant ssh
sudo apt update

Install a container engine; here we use docker.io, but we could also use containerd (both are packaged in Debian) or cri-o.

sudo apt install --yes --no-install-recommends docker.io curl
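
A quick check that the engine is up:

sudo systemctl is-active docker
active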

Install the Kubernetes binaries. This will install kubelet, the system service which will manage the containers, and kubectl, the user/admin tool to manage the cluster.

sudo apt install --yes kubernetes-{node,client} containernetworking-plugins

Although it is not technically mandatory, we will use kubeadm, the most popular installer to create a Kubernetes cluster. Kubeadm is not packaged in Debian, so we have to download an upstream binary.

wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz

sha512sum kubernetes-server-linux-amd64.tar.gz
28529733bf34f5d5b72eabe30a81df98cc7f8e529590f807745cd67986a2c5c3eb86cebc7ecbcfc3df3c50416306e5d150948f2483933ea46c2aebaeb871ea8f  kubernetes-server-linux-amd64.tar.gz

sudo tar --directory=/usr/local/sbin --strip-components 3 -xaf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubeadm
sudo chmod +x /usr/local/sbin/kubeadm 
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Add a kubelet systemd unit:

RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sudo tee /etc/systemd/system/kubelet.service
sudo systemctl enable kubelet

and a default kubeadm configuration drop-in for the kubelet service:

RELEASE_VERSION="v0.4.0"
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Finally, we need to help kubelet find the components needed for container networking:

echo 'KUBELET_EXTRA_ARGS="--cni-bin-dir=/usr/lib/cni"' | sudo tee /etc/default/kubelet
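
Since we installed the unit and its drop-in by hand, reload systemd so it picks them up:

sudo systemctl daemon-reload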

Create a cluster

Initialize a cluster with kubeadm: this will download container images for the Kubernetes control plane (= the brain of the cluster), and start the containers via the kubelet service. Yes, a good part of Kubernetes itself runs in containers.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Follow the instructions from the kubeadm output, and verify you have a single node cluster, with the status NotReady.

kubectl get nodes 
NAME      STATUS     ROLES                  AGE    VERSION
testing   NotReady   control-plane,master   9m9s   v1.20.5

At that point you should also have a bunch of containers running on the node:

sudo docker ps --format '{{.Names}}'
k8s_kube-apiserver_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_POD_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_etcd_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
k8s_POD_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
...

The kubelet service also needs an external network plugin to get the cluster in Ready state.

sudo systemctl status kubelet
...
Mar 28 09:28:43 testing kubelet[9405]: E0328 09:28:43.958059    9405 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Let’s add that network plugin. Download the flannel network plugin definition, and schedule flannel to run on all nodes of your cluster:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply --filename=kube-flannel.yml
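
You can watch the flannel pods come up; the output should look something like this (the pod name suffix will differ):

kubectl get pods --all-namespaces
NAMESPACE     NAME                    READY   STATUS    RESTARTS   AGE
kube-system   kube-flannel-ds-xxxxx   1/1     Running   0          30s
...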

After a dozen seconds your node should be in Ready status.

kubectl get nodes 
NAME      STATUS   ROLES                  AGE   VERSION
testing   Ready    control-plane,master   16m   v1.20.5

Deploy a test application

Our node is now in Ready status, but we cannot run applications on it, since we only have a master node, an administrative node which by default cannot run user applications.

kubectl describe node testing | grep ^Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

Let’s allow node testing to run user applications:

kubectl taint node testing node-role.kubernetes.io/master-
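
The command should print node/testing untainted, and the taint is gone:

kubectl describe node testing | grep ^Taints
Taints:             <none>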

Deploy a nginx container:

kubectl run my-nginx-pod --image=docker.io/library/nginx --port=80 --labels="app=http-content" 
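
After a few seconds the pod should be running (the age will differ):

kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
my-nginx-pod   1/1     Running   0          30s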

Create a Kubernetes service to access this pod externally:

cat service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-k8s-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: http-content 

kubectl create --filename service.yaml
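
Verify the service is there, and find the VM's IP address to reach it from the host (your addresses will differ; 192.168.121.0/24 is the default vagrant-libvirt network):

kubectl get service my-k8s-service
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
my-k8s-service   NodePort   10.x.x.x     <none>        80:30000/TCP   15s

ip --brief address show eth0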

Access the service via its IP address:

curl 192.168.121.63:30000
...
Thank you for using nginx.

Notes

I will try to turn this blog post into a Debian Wiki article, or maybe get it into the kubernetes-node documentation. Blog posts get deprecated and disappear; wiki and project docs live longer.

Playing with cri-o, a container runtime built for Kubernetes

Kubernetes is moving away from Docker to alternative container engines presenting a smaller core with just the functionality needed. The two most popular alternatives are:

  • containerd, a subset of docker, used for instance in Google Kubernetes Engine
  • cri-o, a new implementation of a container engine, used for instance in Red Hat's Kubernetes offering (OpenShift)

These alternatives are meant to be used programmatically via a Unix domain socket, and therefore have a limited command line interface.

Let's play around in a VM.

Install a throwaway VM with Vagrant

apt install vagrant vagrant-libvirt
vagrant init debian/testing64

Start the VM, install dependencies

vagrant up
vagrant ssh
sudo apt update
sudo apt install --yes curl gnupg jq

Install cri-o, the container engine

sudo bash
export OS=Debian_Testing VERSION=1.20

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/libcontainers.list
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
apt install cri-o cri-o-runc containernetworking-plugins conntrack

Verify it is running properly

systemctl restart cri-o
systemctl status cri-o
...
Started Container Runtime Interface for OCI (CRI-O).

Say hello to cri-o via its unix domain socket

curl --silent  --unix-socket /var/run/crio/crio.sock http://localhost/info | jq 
{
  "storage_driver": "overlay",
  "storage_root": "/var/lib/containers/storage",
  "cgroup_driver": "systemd",
  "default_id_mappings": {
    "uids": [
      {
        "container_id": 0,
        "host_id": 0,
        "size": 4294967295
      }
    ],
    "gids": [
      {
        "container_id": 0,
        "host_id": 0,
        "size": 4294967295
      }
    ]
  }
}

Install crictl, a Kubernetes debugging tool for containers

wget --directory-prefix=/tmp https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
tar -xaf /tmp/crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/sbin/
chmod +x /usr/local/sbin/crictl

crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": true,
        "reason": "",
        "message": ""
      }
    ]
  }
}

From there on you can create a container following the examples in https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
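
For a quick taste before diving into those examples, pulling and listing an image looks like this (still as root; the image name is just an example):

crictl pull docker.io/library/nginx:latest
crictl images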