Since I always forget the right syntax:
smbclient --user <username> --list <servername>
smbclient is available in the appropriately named smbclient package.
Debian has not provided official releases for Atari/m68k since the 2000s, but there is still an ongoing porting effort on the debian-m68k mailing list to make Debian run on the platform (and even to port the Rust compiler to m68k, hey John Paul :)
The EmuTOS project has released version 1.0 of its GPL Atari TOS clone, providing better hard disk support and, thanks to binary compatibility, allowing you to play the myriad of games released for that platform during the 80s and 90s.
Finally there is FreeMiNT, an Atari-specific Unix kernel and OS, also under the GPL, bringing true multitasking and memory protection at the cost of lower software compatibility. It is currently at release 1.18 and still slowly developed.
As the hardware itself is getting old and overpriced, my next Atari machine will be an FPGA, the MiST. Basically an FPGA is a re-programmable hardware platform. Instead of having the transistors and logic gates of a chipset burned into the silicon, the circuit description is loaded at power-on, and is thus reconfigurable. The MiST can also reproduce the hardware of an Amiga and most of the 16-bit heroes of the late 80s.
Having a MiST available will allow me to reuse my joysticks and MIDI gear, have more RAM, and run a GPL OS that I can update without having to burn EEPROMs. Retrocomputing meets open source, a good match. Note to self: those Atari-related projects have a disposition for complicated mixed-case names.
Debian has work-in-progress packages for Kubernetes, which work well enough for a testing and learning environment. Bootstrapping a cluster with the kubeadm deployer using these packages is not that hard, and is similar to the upstream kubeadm documentation.
Install a throwaway VM with Vagrant.
apt install vagrant vagrant-libvirt
vagrant init debian/testing64
Bump the RAM and CPU of the VM; Kubernetes needs at least 2 GB of RAM and 2 cores.
awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print " config.vm.provider :libvirt do |vm| vm.memory=2048 end"}' Vagrantfile
awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print " config.vm.provider :libvirt do |vm| vm.cpus=2 end"}' Vagrantfile
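After these two awk edits, the relevant part of the Vagrantfile should look roughly like this (a sketch; the file generated by vagrant init also contains comments, omitted here):
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |vm| vm.cpus=2 end
  config.vm.provider :libvirt do |vm| vm.memory=2048 end
  config.vm.box = "debian/testing64"
end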
Start the VM, login, update the package index.
vagrant up
vagrant ssh
sudo apt update
Install a container engine; here we use docker.io, but we could also use containerd (both are packaged in Debian) or cri-o.
sudo apt install --yes --no-install-recommends docker.io curl
Install the Kubernetes binaries. This will install kubelet, the system service which will manage the containers, and kubectl, the user/admin tool to manage the cluster.
sudo apt install --yes kubernetes-{node,client} containernetworking-plugins
Although it is not technically mandatory, we will use kubeadm, the most popular installer for creating a Kubernetes cluster. Kubeadm is not packaged in Debian, so we have to download an upstream binary.
wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz
sha512sum kubernetes-server-linux-amd64.tar.gz
28529733bf34f5d5b72eabe30a81df98cc7f8e529590f807745cd67986a2c5c3eb86cebc7ecbcfc3df3c50416306e5d150948f2483933ea46c2aebaeb871ea8f kubernetes-server-linux-amd64.tar.gz
sudo tar --directory=/usr/local/sbin --strip-components 3 -xaf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubeadm
sudo chmod +x /usr/local/sbin/kubeadm
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Add a kubelet systemd unit:
RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sudo tee /etc/systemd/system/kubelet.service
sudo systemctl enable kubelet
and a default config file for kubeadm:
RELEASE_VERSION="v0.4.0"
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Finally we need to help kubelet find the components needed for container networking:
echo 'KUBELET_EXTRA_ARGS="--cni-bin-dir=/usr/lib/cni"' | sudo tee /etc/default/kubelet
Initialize a cluster with kubeadm: this will download container images for the Kubernetes control plane (the brain of the cluster), and start the containers via the kubelet service. Yes, a good part of Kubernetes itself runs in containers.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Follow the instructions from the kubeadm output, and verify that you have a single-node cluster with the status NotReady.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
testing NotReady control-plane,master 9m9s v1.20.5
At that point you should also have a bunch of containers running on the node:
sudo docker ps --format '{{.Names}}'
k8s_kube-apiserver_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_POD_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_etcd_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
k8s_POD_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
...
The kubelet service also needs an external network plugin to get the cluster into the Ready state.
sudo systemctl status kubelet
...
Mar 28 09:28:43 testing kubelet[9405]: E0328 09:28:43.958059 9405 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Let’s add that network plugin. Download the flannel network plugin definition, and schedule flannel to run on all nodes of your cluster:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply --filename=kube-flannel.yml
After a dozen seconds or so, your node should be in Ready status.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
testing Ready control-plane,master 16m v1.20.5
Our node is now in Ready status, but we cannot run applications on it, since we only have a master node, an administrative node which by default cannot run user applications.
kubectl describe node testing | grep ^Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Let’s allow node testing to run user applications:
kubectl taint node testing node-role.kubernetes.io/master-
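To confirm the change, check the taints again with the same command as above; the Taints field should now report none:
kubectl describe node testing | grep ^Taints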
Deploy an nginx container:
kubectl run my-nginx-pod --image=docker.io/library/nginx --port=80 --labels="app=http-content"
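You can check that the pod started by querying it via the label we just set (standard kubectl label selector; output omitted here):
kubectl get pods --selector app=http-content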
Create a Kubernetes service to access this pod externally:
cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-k8s-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
  selector:
    app: http-content
kubectl create --filename service.yaml
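You can verify the service and its port mapping with (output omitted):
kubectl get service my-k8s-service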
Access the service via the VM's IP address:
curl 192.168.121.63:30000
...
Thank you for using nginx.
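Here 192.168.121.63 is the address the libvirt DHCP server assigned to the Vagrant VM; yours will likely differ. One way to look it up from inside the VM:
hostname -I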
I will try to turn this blog post into a Debian Wiki article, or maybe into the kubernetes-node documentation. Blog posts get outdated and disappear; wiki and project docs live longer.
Have you heard of Vagrant? It is a command line tool to get ready-to-use, disposable virtual machines (VMs) from an online catalog. Vagrant works on Linux, FreeBSD, Windows and Mac, and you only need three commands to get a shell prompt in a VM (see the Debian wiki).
The online catalog has images for the majority of the OSes you can think of.
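For reference, those three commands are (using the Debian testing box as an example):
vagrant init debian/testing64
vagrant up
vagrant ssh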
We've been building the Debian disk images for Vagrant (available on https://app.vagrantup.com/debian/) with a number of tools over the years. Basically there are two categories of tools for building a disk image:
- those using an emulator and the OS installer in a automated way
- those using debootstrap/pacstrap/rpmstrap on a loopback mounted filesystem (a rough sketch of this approach follows after the list)
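To illustrate the second category, here is a minimal sketch of the loopback approach using debootstrap; the suite and mirror are just examples, the mount and debootstrap steps need root, and a real image would also need partitioning and a bootloader:
truncate -s 2G disk.img
mkfs.ext4 -F disk.img
sudo mount -o loop disk.img /mnt
sudo debootstrap stable /mnt http://deb.debian.org/debian
sudo umount /mnt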
Personally I prefer the first approach, as you can run the build process as a non-root user, and you benefit from all the quality work of the official installer.
However this requires virtualization, and nested virtualization if your build process runs inside a VM. Unfortunately nested virtualization is not that common; for instance my cloud provider, and the VMs used for Debian Continuous Integration, do not support nested virtualization.
As the maintainer of fai-diskimage is a Debian Developer (hey MrFAI! :) and as the debian-cloud folks are using it for the Amazon, Azure and Google Cloud Debian images, it made sense to switch to fai-diskimage for now. The fai-diskimage learning curve is a bit steep, as you have to learn many internal concepts before using it, but once you get the bits connected it works quite well.
I have been running a small Debian VM for a decade, which I use for basic web and mail hosting. Since most of the VM setup is done manually and does not follow the Infrastructure as Code pattern, it is faster to simply copy the filesystem when switching providers instead of reconfiguring everything.
The steps involved are:
1. create a backup of the filesystem using tar or rsync, excluding dynamic content
rsync --archive \
  --one-file-system --numeric-ids \
  --rsh "ssh -i private_key" root@server:/ /local_dir
or
tar -cvpzf backup.tar.gz \
--numeric-owner \
--exclude=/backup.tar.gz \
--one-file-system /
Notice here the --one-file-system switch, which avoids backing up the content of mount points like /proc or /dev.
If you have extra partitions with a mounted filesystem, like /boot or /home, you need to add a separate backup for those.
2. create a new VM on the new cloud provider, verify you have working console access, and power it off.
3. boot a rescue image on the new cloud provider
4. partition the disk image on the new provider.
5. mount the new root partition, and untar your backup on it. You could for instance push the local backup via rsync, or download the tar archive using https.
6. update network configuration and /etc/fstab
7. chroot into the target system, and reinstall grub (a rough sketch of steps 5 to 7 follows after this list)
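Here is a rough sketch of steps 5 to 7, run from the rescue system; the device names and mount points are assumptions, adapt them to the new provider:
# 5. mount the new root partition and restore the backup
mount /dev/vda1 /mnt
tar -xvpzf backup.tar.gz -C /mnt --numeric-owner
# 6. adjust /mnt/etc/fstab and the network configuration for the new environment
# 7. chroot into the restored system and reinstall the bootloader
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt /bin/bash
grub-install /dev/vda
update-grub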
This works surprisingly well, and if you made your backup locally, you can test the whole procedure by building a test VM from your backup. Just replace the debootstrap step with a command like tar -xvpzf /path/to/backup.tar.gz -C /mount_point --numeric-owner
Using this procedure, I moved from Hetzner (link in French) to Digital Ocean, from Digital Ocean to Vultr, and now back to Hetzner.
# apt install yubikey-manager libu2f-host0
List connected devices on your USB bus:
$ lsusb
Bus 002 Device 109: ID 1050:0407 Yubico.com Yubikey 4 OTP+U2F+CCID
Get info about the device capabilities:
$ ykman info
Device type: YubiKey 4
Serial number: 1234567
Firmware version: 4.3.7
Enabled USB interfaces: OTP+FIDO+CCID
Applications
OTP Enabled
FIDO U2F Enabled
OpenPGP Enabled
PIV Enabled
OATH Enabled
FIDO2 Not available
The capability which interests us here is FIDO U2F. The YubiKey 4 supports Two-Factor Authentication via the U2F standard, and this standard is maintained by the FIDO Alliance, hence the name.
As I plan to only use the FIDO U2F capability of the key, I set ‘FIDO’ to be the single mode of the key:
ykman mode FIDO
My firefox-esr is at version 68, so that will work. For testing YubiKeys, the manufacturer has a demo website where you can test U2F.
Go to https://demo.yubico.com and follow the “Explore the Yubikey” link.
Firefox message on the Yubikey demo site. A normal site with U2F would not require the extended information, and would have a simpler popup message.
apt install gimp scrot
Take the screenshot:
# Interactively select a window or rectangle with the mouse
scrot --selection screenshot.png
Open the screenshot and annotate it with gimp:
gimp screenshot.png
Then do the annotation in gimp.
Copy the backed-up root filesystem into place for systemd-nspawn:
# /var/lib/machines is where machinectl looks for images
# see https://superuser.com/a/307542 for details on the rsync flags
rsync -axHAX --info=progress2 --numeric-ids --whole-file /home/manu/Projects/backups/ada_rootfs/ /var/lib/machines/ada/
For networking this container, I installed libvirtd, which comes with a DHCP server serving private IPs on the virbr0 bridge.
systemctl status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-06-08 23:15:33 CEST; 6 days ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 2609 (libvirtd)
    Tasks: 18 (limit: 4915)
   Memory: 53.0M
      CPU: 2.448s
   CGroup: /system.slice/libvirtd.service
           ├─2609 /usr/sbin/libvirtd
           ├─4868 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile
           └─4869 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile

juin 09 11:34:34 django dnsmasq-dhcp[4868]: not giving name ada to the DHCP lease of 192.168.122.
juin 09 12:04:24 django dnsmasq-dhcp[4868]: DHCPREQUEST(virbr0) 192.168.122.17 06:91:4f:92:3b:a8
juin 09 12:04:24 django dnsmasq-dhcp[4868]: DHCPACK(virbr0) 192.168.122.17 06:91:4f:92:3b:a8 ada

Start the container, giving it a Virtual Ethernet Link connected to virbr0:
systemd-nspawn --directory /var/lib/machines/ada --machine ada --network-bridge=virbr0 --boot --network-veth
(observe a nice booting sequence, up to a login prompt)
# snippet for /etc/network/interfaces
# the Virtual Ethernet Link appears as host0 inside the container
auto host0
iface host0 inet dhcp
# restart the network
systemctl restart networking
Find out from the host side which IP was assigned:
machinectl list
MACHINE CLASS     SERVICE        OS     VERSION ADDRESSES
ada     container systemd-nspawn debian 8       192.168.122.17...

1 machines listed.
Inspect from the host system whether the Apache virtual hosts are running:
curl --silent --header 'Host: meinoesterreich.ist-ur.org' http://192.168.122.17/titelblatt | w3m -T text/html -dump
Happy Upgrades!
Last week I was in Seattle for the first Debian cloud team sprint. The aim was to streamline the production of official, ready-to-use Debian images for various cloud providers. The three biggest cloud providers were there (Google, Amazon, Microsoft Azure), and my humble self was there because of the work I have been doing producing Debian base boxes for Vagrant.
During the three days the sprint took, we did the following:
* Reviewed the current state of cloud images and their build tools. After a demonstration of the vmdebootstrap and fai-diskimage creation tools, we agreed we should try fai-diskimage as a default tool. Since a consensus seemed to be reached here, I refrained from advocating packer or virt-install too much.
* Reviewed the current state of cloud-init in Debian and, considering its non-working state in stable, thought about an upgrade in Debian Stable.
* Agreed inside the team on enabling unattended-upgrades by default, and proposed it to debian-devel.
* Reviewed all the bugs of the debian-cloud virtual package.
As with probably every sprint, bringing all the people into one room creates a lot of synergy which couldn't happen on a mailing list. We also noticed a lot of interest in Debian from the respective cloud providers, which is interesting considering Debian is "only" a community-based distribution. Seattle is also a nice town, with lots of pine trees, bicycle lanes and a troll statue(?); it looked to me like Canada, though I didn't have the time to see that much of it.
| Goal | Debian | Drush |
|---|---|---|
| List the installed packages | dpkg -l | drush pm-list |
| Update the list of available packages | apt-get update | drush pm-refresh |
| Show the available updates | apt-get --simulate upgrade | drush --simulate pm-update |
| Install the latest updates over the network | apt-get upgrade | drush pm-update |
(NB: “Restreint” (Restricted) here only means that I am using Gnome Classic, and has nothing to do with how the card works)