
Wednesday, May 26, 2021

Testing that a CIFS / Samba share is browsable from the command line

Since I always forget the right syntax:

smbclient --user <username> --list <servername>

smbclient is available in the appropriately named smbclient package.
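For instance, with a hypothetical user alice on a hypothetical server fileserver (both names are placeholders), listing the visible shares looks like:

smbclient --user alice --list fileserver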

Friday, May 7, 2021

Opensource Operating Systems for 16/32-bit wonders

Debian has not provided official releases for the Atari/m68k since the 2000s, but there is still an ongoing porting effort to make Debian run, coordinated on the debian-m68k mailing list (there is even work to port the Rust compiler to m68k, hey John Paul :)

The EmuTOS project has released version 1.0 of its GPL clone of the Atari TOS, providing better hard disk support, and, thanks to binary compatibility, allowing you to play the myriad of games released on that platform during the 80s and 90s.

Finally there is FreeMiNT, an Atari-specific Unix kernel and OS, also under the GPL, bringing true multitasking and memory protection at the cost of lower software compatibility. It is currently at release 1.18, and still slowly developed.

As the hardware itself is getting old and overpriced, my next Atari machine will be an FPGA, the MiST. Basically an FPGA is a re-programmable hardware platform. Instead of having the transistors and logic gates of a chipset burned into the silicon, the circuit description is loaded on power-on, and is thus reconfigurable. The MiST can also reproduce the hardware of an Amiga and most of the 16-bit heroes of the late 80s.

Having a MiST available will allow me to reuse my joysticks and MIDI gear, have more RAM, and run a GPL OS that I can update without having to burn EEPROMs. Retrocomputing meets opensource, a good match. Note to self: those Atari-related projects have a disposition for complicated mixed-case names.

Tuesday, March 30, 2021

Manually install a single node Kubernetes cluster on Debian

Debian has work-in-progress packages for Kubernetes, which work well enough for a testing and learning environment. Bootstrapping a cluster with the kubeadm deployer with these packages is not that hard, and is similar to the upstream kubeadm documentation.

Install necessary packages in a VM

Install a throwaway VM with Vagrant.

apt install vagrant vagrant-libvirt
vagrant init debian/testing64

Bump the RAM and CPU of the VM; Kubernetes needs at least 2 gigs and 2 cores.

awk  -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm|  vm.memory=2048 end"}' Vagrantfile
awk  -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm|  vm.cpus=2 end"}' Vagrantfile
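Both awk edits insert a provider override right below the Vagrant.configure line, so the Vagrantfile should now contain the two lines printed above:

  config.vm.provider :libvirt do |vm|  vm.memory=2048 end
  config.vm.provider :libvirt do |vm|  vm.cpus=2 end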

Start the VM, login, update the package index.

vagrant up
vagrant ssh
sudo apt update

Install a container engine; here we use docker.io, but we could also use containerd (both are packaged in Debian) or cri-o.

sudo apt install --yes --no-install-recommends docker.io curl

Install the Kubernetes binaries. This will install kubelet, the system service which will manage the containers, and kubectl, the user/admin tool to manage the cluster.

sudo apt install --yes kubernetes-{node,client} containernetworking-plugins
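A quick sanity check that the Debian binaries landed where expected:

kubelet --version
kubectl version --client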

Although it is not technically mandatory, we will use kubeadm, the most popular installer for creating a Kubernetes cluster. Kubeadm is not packaged in Debian, so we have to download an upstream binary.

wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz

sha512sum kubernetes-server-linux-amd64.tar.gz
28529733bf34f5d5b72eabe30a81df98cc7f8e529590f807745cd67986a2c5c3eb86cebc7ecbcfc3df3c50416306e5d150948f2483933ea46c2aebaeb871ea8f  kubernetes-server-linux-amd64.tar.gz

sudo tar --directory=/usr/local/sbin --strip-components 3 -xaf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubeadm
sudo chmod +x /usr/local/sbin/kubeadm 
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Add a kubelet systemd unit:

RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sudo tee /etc/systemd/system/kubelet.service
sudo systemctl enable kubelet

and a default config file for kubeadm:

RELEASE_VERSION="v0.4.0"
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Finally, we need to help kubelet find the components needed for container networking:

echo 'KUBELET_EXTRA_ARGS="--cni-bin-dir=/usr/lib/cni"' | sudo tee /etc/default/kubelet

Create a cluster

Initialize a cluster with kubeadm: this will download container images for the Kubernetes control plane (= the brain of the cluster), and start the containers via the kubelet service. Yes, a good part of Kubernetes itself runs in containers.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Follow the instructions from the kubeadm output, and verify you have a single node cluster, with the status NotReady.

kubectl get nodes 
NAME      STATUS     ROLES                  AGE    VERSION
testing   NotReady   control-plane,master   9m9s   v1.20.5

At that point you should also have a bunch of containers running on the node:

sudo docker ps --format '{{.Names}}'
k8s_kube-apiserver_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_POD_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_etcd_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
k8s_POD_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
...

The kubelet service also needs an external network plugin to get the cluster in Ready state.

sudo systemctl status kubelet
...
Mar 28 09:28:43 testing kubelet[9405]: E0328 09:28:43.958059    9405 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Let’s add that network plugin. Download the flannel network plugin definition, and schedule flannel to run on all nodes of your cluster:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply --filename=kube-flannel.yml
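The flannel pods should show up shortly in the kube-system namespace, which you can watch with:

kubectl get pods --namespace=kube-system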

After a dozen seconds your node should be in Ready status.

kubectl get nodes 
NAME      STATUS   ROLES                  AGE   VERSION
testing   Ready    control-plane,master   16m   v1.20.5

Deploy a test application

Our node is now in Ready status, but we cannot run applications on it, since we only have a master node, an administrative node which by default cannot run user applications.

kubectl describe node testing | grep ^Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

Let’s allow node testing to run user applications:

kubectl taint node testing node-role.kubernetes.io/master-

Deploy a nginx container:

kubectl run my-nginx-pod --image=docker.io/library/nginx --port=80 --labels="app=http-content" 
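You can check that the pod reached the Running state, selecting it via the label we just set:

kubectl get pods --selector=app=http-content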

Create a Kubernetes service to access this pod externally:

cat service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-k8s-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: http-content 

kubectl create --filename service.yaml
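Verify the service exists and exposes the expected NodePort:

kubectl get service my-k8s-service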

Access the service via the VM's IP address:

curl 192.168.121.63:30000
...
Thank you for using nginx.

Notes

I will try to turn this blog post into a Debian Wiki article, or maybe into the kubernetes-node documentation. Blog posts get stale and disappear; wiki and project docs live longer.

Sunday, March 28, 2021

Switching to FAI (Fully Automatic Installer) for creating Vagrant Boxes

Have you heard of Vagrant? It is a command line tool to get ready-to-use, disposable Virtual Machines (VMs) from an online catalog. Vagrant works on Linux, FreeBSD, Windows and Mac, and you only need three commands to get a shell prompt in a VM (see the Debian wiki).
The online catalog has images for the majority of the OSes you can think of.

We've been building the Debian disk images for Vagrant (available on https://app.vagrantup.com/debian/) with a number of tools over the years:

  • packer, which wraps qemu and the Debian installer CD with automated boot parameters and a preseed file
  • and then fai-diskimage, a wrapper over debootstrap using loopback mounts

Basically there are two categories of tools for building a disk image:

  • those driving an emulator and the OS installer in an automated way
  • those using debootstrap/pacstrap/rpmstrap on a loopback-mounted filesystem

Personally I prefer the first approach, as you can run the build process as non-root, and you benefit from all the quality work of the official installer.
However this requires virtualization, and nested virtualization if your build process runs inside a VM. Unfortunately nested virtualization is not that common; for instance my cloud provider, and the VMs used for Debian Continuous Integration, do not support nested virtualization.
As the maintainer of fai-diskimage is a Debian Developer (hey MrFAI! :) and as the debian-cloud folks are using it for the Amazon, Azure and Google Cloud Debian images, it made sense to switch to fai-diskimage for now. The fai-diskimage learning curve is a bit steep, as you have to learn many internal concepts before using it, but once you get the bits connected it works quite well.

Sunday, January 3, 2021

How to move a single VM between cloud providers

For a decade now I have been running a small Debian VM that I use for basic web and mail hosting. Since most of the VM setup was done manually, not following the Infrastructure as Code pattern, it is faster to simply copy the filesystem when switching providers instead of reconfiguring everything.
The steps involved are:

1. create a backup of the filesystem using tar or rsync, excluding dynamic content
rsync --archive \
    --one-file-system --numeric-ids \
    --rsh "ssh -i private_key" root@server:/ /local_dir

or
tar -cvpzf backup.tar.gz \
--numeric-owner \
--exclude=/backup.tar.gz \
--one-file-system /


Notice here the --one-file-system switch, which avoids backing up the content of mount points like /proc or /dev.
If you have extra partitions with a mounted filesystem, like /boot or /home, you need to add a separate backup for those.

2. create a new VM on the new cloud provider, verify you have a working console access, and power it off.
3. boot on the new cloud provider a rescue image
4. partition the disk image on the new provider.
5. mount the new root partition, and untar your backup on it. You could for instance push the local backup via rsync, or download the tar archive using https.
6. update network configuration and /etc/fstab
7. chroot into the target system, and reinstall grub (see the sketch below)
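A minimal sketch of steps 5 to 7, run from the rescue system, assuming the new root partition is /dev/vda1 and the backup archive is already there (device names are placeholders):

mount /dev/vda1 /mnt
tar -xvpzf backup.tar.gz -C /mnt --numeric-owner
# bind mounts needed by grub inside the chroot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt /bin/bash
grub-install /dev/vda
update-grub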

This works surprisingly well, and if you made your backup locally, you can test the whole procedure by building a test VM with your backup. Just replace the debootstrap step with a command like tar -xvpzf /path/to/backup.tar.gz -C /mount_point --numeric-owner

Using this procedure, I moved from Hetzner (link in French language) to Digital Ocean, from Digital Ocean to Vultr, and now back at Hetzner.

Monday, March 23, 2020

Two Factor Authentication on gitlab with Yubikey

I wanted to have a working Two Factor Authentication (2FA) setup to login on salsa.debian.org, Debian's gitlab instance.
You might already know Two Factor Authentication via a One Time Password (OTP) generating app on your smartphone, like FreeOTP or Google Authenticator. But it is possible to use a physical device, and a keypress on the device is enough to authenticate (it speeds things up!). Here I am using a Yubikey 4, a popular USB device for Two Factor Authentication which is officially supported by gitlab, and whose tooling is well packaged in Debian.

Get to know the device

Install the needed packages to work with the yubikey
# apt install yubikey-manager libu2f-host0
List connected devices on your usb bus:
$ lsusb
Bus 002 Device 109: ID 1050:0407 Yubico.com Yubikey 4 OTP+U2F+CCID
Get info about the device capability
$ ykman info
Device type: YubiKey 4
Serial number: 1234567
Firmware version: 4.3.7
Enabled USB interfaces: OTP+FIDO+CCID
Applications
OTP         Enabled             
FIDO U2F    Enabled             
OpenPGP     Enabled             
PIV         Enabled             
OATH            Enabled             
FIDO2       Not available
The capability which interests us here is FIDO U2F. The Yubikey 4 supports Two Factor Authentication via the U2F standard, and this standard is maintained by the FIDO Industry Association, hence the name. As I plan to only use the FIDO U2F capability of the key, I set ‘FIDO’ to be the single mode of the key.
ykman mode FIDO

Testing web browser interaction with Yubico demo system

Now we need a browser with support for the U2F standard. Firefox has had builtin support since version 67. Debian 10 “Buster” has firefox-esr version 68, so that will work. For testing yubikeys, the manufacturer has a demo website where you can test U2F. Go to https://demo.yubico.com and follow the “Explore the Yubikey” link.
Once there you will be asked to register an account on Yubico's demo systems, to which you will add the Yubikey as an Authenticating Device. After that you can add your security key. The first step is to register the device, which requires a light touch on the Yubikey button, and acceptance of this Firefox warning window, as the demo website wants to know the model of the device.


Firefox message on the yubikey demo site. A normal site with U2F would not require the extended information, and would have a simpler popup message.
As soon as the device is registered, you can login and logout and you will be prompted again to lightly touch the Yubikey button to authenticate, in addition to the classical login / password.

Using U2F on gitlab

When you want to register your yubikey for logging on salsa, you first need to register a One Time Password device in Settings -> Account -> Manage two-factor authentication, then Register Universal Two-Factor (U2F) Device. After the usual Firefox popup, and the light touch on the key button, that's it, you have fast and reliable Two Factor Authentication!

Conclusion

Each time I have to look at anything close to cryptography / authentication, it is a terminology avalanche. Here we already had 2FA, OTP, U2F, FIDO. And now there is FIDO2 too. It is the next version of the U2F standard, but this time it was named after the standardizing organization, FIDO. The web browser part of FIDO2 is called Webauthn. Also, sometimes the whole of FIDO2 is called Webauthn too. Easy to grasp, isn't it?

Monday, January 27, 2020

Mark a Screenshot on Linux

More often than not, to explain things quickly, I like to take a screenshot of the (web) application I am talking about, and then circle the corresponding area so that everything is clear. Preferably with a rounded rectangle, as I find it the cutest variant.

This is how I do it on Linux:
Install necessary tools:
apt install gimp scrot                                                                   
Take the screenshot:
# Interactively select a window or rectangle with the mouse                              
scrot --selection screenshot.png                                                                    
Open the screenshot and annotate it with gimp:
gimp screenshot.png                                                                      
Then in gimp:
  • Tools -> Selection Tools -> Rectangle Select, and mark the area
  • Select -> Rounded Rectangle, and keep the default
  • Change the color to a nice blue shade in the toolbox
  • Edit -> Stroke selection
Maybe gimp is a bit overkill for that. But instead of learning a limited tool, I prefer to learn an advanced one like gimp step by step.

Sunday, August 4, 2019

Debian 9 -> 10 Upgrade report

I upgraded my laptop and VPS to Debian 10; as usual with Debian everything worked out of the box, and the necessary daemons restarted without problems.
I followed my usual upgrade approach, which involves upgrading a backup of the root FS of the server in a container, to test the upgrade path, followed by a config file merge.

I had one major problem, though, connecting to my php-based Dokuwiki subsole.org website, which displayed a rather unwelcoming screen after the upgrade:




I was a bit unsure at first, as I thought I would need to fight my way through the nine different config files of the dokuwiki Debian package in /etc/dokuwiki.

However the issue was not so complicated: as the apache2 php module was disabled, apache2 was outputting the source code of dokuwiki instead of executing it. As you see, I don't php that often.

A simple
a2enmod php7.3
systemctl restart apache2


fixed the issue.

I understood the problem after noticing that a simple phpinfo() would not get executed by the server.
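That test is quick to reproduce, assuming the default Debian document root (path and filename are examples):

echo '<?php phpinfo();' | sudo tee /var/www/html/info.php
curl --silent http://localhost/info.php | head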

I would have expected the upgrade to automatically enable the new php7.3 module, since the oldstable php7.0 apache module was removed as part of the upgrade, but I am not sure what the Debian policy would recommend here, or if I am missing something else.
If I can reproduce the issue in an upgrade scenario, I'll probably submit a bug to the php package maintainers.

Saturday, June 8, 2019

PowerShell on Debian

I heard some time ago that Microsoft released their interactive and scripting language PowerShell under an opensource license (MIT), but I completely missed that they also provide a repository with ready-to-use packages for your favorite distribution.

Anyway, it is an apt-get away and that's it:
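The repository setup follows Microsoft's instructions and looks something like this (the config package URL depends on your Debian version, here 9 as an example):

wget https://packages.microsoft.com/config/debian/9/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt update
sudo apt install powershell
pwsh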



New-Object net.sockets.tcpclient("libera.cc", 80) opens a TCP connection to a target host, a quick way to test if a port is open (look for Connected: True for a successful socket creation).

Sunday, March 17, 2019

Splitting a large mp3 / flac / ogg by detecting silence gaps

If you have a large audio file, coming for instance from a whole music album, the excellent mp3splt can do this for you:

mp3splt -o @n-@f -s my_long_file.mp3

will autodetect the silences, and create a list of tracks based on the large file.
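If the defaults cut too much or too little, the silence detection can be tuned; for instance a -30 dB threshold with a minimum of 3 seconds of silence (the values are examples):

mp3splt -o @n-@f -s -p th=-30,min=3 my_long_file.mp3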

mp3splt is available in the Debian / Ubuntu archive.

Thursday, June 15, 2017

Testing upgrades on a backup system

Like many Debian users, I am planning very soon to upgrade my own personal server ada to Debian Stretch.
Since I do a full rsync backup of the server to a different location, I was wondering if it was possible to use this backup root FS in a container to test the upgrade. Turns out it works very well with systemd-nspawn.

Copy the backup to a separate dir in /var/lib/machines
# /var/lib/machines is where machinectl look for images
# see https://superuser.com/a/307542 for rsync flags detail
rsync -axHAX --info=progress2 --numeric-ids --whole-file /home/manu/Projects/backups/ada_rootfs/ /var/lib/machines/ada/
For networking this container, I installed libvirtd, which comes with a DHCP server serving private IPs on the virbr0 bridge.
systemctl status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-06-08 23:15:33 CEST; 6 days ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 2609 (libvirtd)
    Tasks: 18 (limit: 4915)
   Memory: 53.0M
      CPU: 2.448s
   CGroup: /system.slice/libvirtd.service
           ├─2609 /usr/sbin/libvirtd
           ├─4868 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile
           └─4869 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile

juin 09 11:34:34 django dnsmasq-dhcp[4868]: not giving name ada to the DHCP lease of 192.168.122.
juin 09 12:04:24 django dnsmasq-dhcp[4868]: DHCPREQUEST(virbr0) 192.168.122.17 06:91:4f:92:3b:a8
juin 09 12:04:24 django dnsmasq-dhcp[4868]: DHCPACK(virbr0) 192.168.122.17 06:91:4f:92:3b:a8 ada
Start the container, giving it a Virtual Ethernet Link connected to virbr0:
systemd-nspawn --directory /var/lib/machines/ada --machine ada --network-bridge=virbr0 --boot --network-veth
(observe a nice booting sequence, up to a login prompt)
Login inside the container and configure the network
# snippet for /etc/network/interfaces
# the Virtual Ethernet Link appears as host0 inside the container
auto host0
iface host0 inet dhcp
# restart the network
systemctl restart networking
Find out from the host side which IP was assigned
machinectl list
MACHINE CLASS     SERVICE        OS     VERSION ADDRESSES
ada     container systemd-nspawn debian 8       192.168.122.17...

1 machines listed.

Check from the host system that the Apache virtual hosts are running:
curl --silent --header 'Host: meinoesterreich.ist-ur.org' http://192.168.122.17/titelblatt | w3m -T text/html -dump
Happy Upgrades!

Wednesday, November 23, 2016

Small Summary of the Debian cloud sprint in Seattle

Last week I was in Seattle for the first Debian cloud team sprint. Its aim was to streamline the production of official, ready-to-use Debian images for the various cloud providers.
The three biggest cloud providers were there (Google, Amazon, Microsoft Azure), and my humble self was there because of the work I have been doing producing Debian base boxes for Vagrant.
During the three days the sprint took, we did the following:
* Reviewed the current state of cloud images and their build tools. After a demonstration of the vmdebootstrap and fai-diskimage creation tools, we agreed we should try fai-diskimage as the default tool. Since a consensus seemed to have been reached here, I refrained from advocating packer or virt-install too much.
* Reviewed the current state of cloud-init in Debian and, considering its non-working state in stable, thought about an upgrade in Debian Stable.
* Agreed inside the team on having unattended upgrades enabled by default, and proposed it to debian-devel.
* Reviewed all the bugs of the debian-cloud virtual package.
As with probably every sprint, bringing all the people into one room created a lot of synergies which couldn't happen on a mailing list. We also noticed a lot of interest in Debian from the respective cloud providers, which is interesting considering Debian is "only" a community-based distribution.
Seattle is also a nice town, with lots of pine trees, bicycle lanes, and a troll statue(?); it looked to me like Canada, though I didn't have the time to see that much of it.

Sunday, April 12, 2015

Disabling automatic suspend remotely under Gnome3

By default my desktop computer goes to sleep after 30 minutes, a quite practical power-saving measure.
Except that from time to time I need to access it over ssh, and after 30 minutes it goes back to sleep.
The suspend timeout is configured in the dconf registry, and can be read with:
 
gsettings get org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout
1800 # 30 min x 60 sec

To disable suspend, set the timeout to 0.

dbus-launch gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 0

The dbus-launch command is necessary when connecting remotely.
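To list the other power-related settings of the same schema:

gsettings list-recursively org.gnome.settings-daemon.plugins.power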



Thursday, August 29, 2013

How to check whether hardware acceleration is enabled for your graphics card

3D acceleration:

apt-get install mesa-utils
glxinfo | grep render
 

direct rendering: Yes
OpenGL renderer string: Gallium 0.4 on AMD RV710
    GL_EXT_vertex_array_bgra, GL_NV_conditional_render

AMD RV710 being here the chipset of the graphics card.

Video acceleration (hardware scaling):

xvinfo | grep Adaptor
  Adaptor #0: "Radeon Textured Video"

3D acceleration for OpenGL ES (for embedded / ARM systems):

es2_info | grep RENDERER
GL_RENDERER: Gallium 0.4 on AMD RV710

Wednesday, June 26, 2013

Provisioning a Debian VM with libvirt, kvm, and preseed

While looking for a way to automate the creation of virtual machines, I looked into libvirt, the generic virtualization tool for controlling KVM, Xen, VMware and a few others.

By combining the virt-install installer with a preseed file, which automates the Debian installation, you can create your own virtual machine without touching the keyboard a single time!
It is quite impressive to watch the installer configure the network, partition the system, and install the system without any user intervention. The following command will install a minimal Debian system with openssh, and the accounts root/root and user/user.
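A minimal sketch of such a virt-install invocation, assuming a preseed.cfg in the current directory (name, sizes, paths and mirror URL are placeholders):

virt-install \
  --name debian-preseed \
  --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/debian-preseed.img,size=8 \
  --location http://deb.debian.org/debian/dists/stable/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0" \
  --graphics none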

The Virtual Machine created this way can then be controlled with virt-manager, provided your user is a member of the libvirt group.


Remarks, questions, suggestions? Rather than leaving a comment that might go unnoticed here, contact me at @formicapunk on Twitter!

Thursday, June 6, 2013

Ubuntu + Gnome Classic = Debian boot screen!

After an upgrade party at $JOB from Ubuntu 10.04 to 12.04, and after installing Gnome Classic, an interesting phenomenon shows up on two of the machines: the boot screen (GRUB) displays the Debian 6 theme!




A case of packaging that may have gone wrong, and a good reminder of the Debian / Ubuntu synergy: from memory, Ubuntu is 71% Debian packages recompiled without modification, and 29% new or modified packages. (NB: an older source on this same topic)


Saturday, June 1, 2013

The number 1 best-selling laptop on Amazon US runs Linux

I do not know for how long it has been leading the sales, but it is a Chromebook that currently occupies the top sales spot on Amazon US. The same laptop is in second place at Amazon France. And this time it is a real Linux machine with u-boot, kernel, glibc, xorg, and upstart as the boot system (the security architecture mentions all these components).

The official documentation explains how to switch the device to developer mode to get full access (a root shell). After smartphones and routers, once again an "appliance" trusts its users, and that is good news.

After installing a Debian or Ubuntu environment (how to do this with the delightful crouton script is explained here), this laptop can be an ideal tool for devops (7 hours of battery life, boots in 10 seconds).




Monday, November 5, 2012

Drush, an apt-get for Drupal

I just discovered Drush, a command line utility to update a Drupal site efficiently from the command line (in the case where Drupal is installed from the drupal.org sources, and not from a distribution package).

Usage is as follows:

cd my_drupal_site/

Goal                                         Debian                      Drush
List the installed packages                  dpkg -l                     drush pm-list
Refresh the list of available packages       apt-get update              drush pm-refresh
Show the available updates                   apt-get --simulate upgrade  drush --simulate pm-update
Install the latest updates over the network  apt-get upgrade             drush pm-update

A tool so powerful that you end up wishing it existed for wordpress.

Tuesday, September 18, 2012

Linux, ZFS and SATA disks: flawless disk replacement in 5 commands


The server in question runs Debian 6 with zfs-fuse as the filesystem, on a striped mirror volume, the equivalent of a RAID 10.

zpool get version tank
NAME  PROPERTY  VALUE    SOURCE
tank  version   23       default


While I was checking the available space on a ZFS volume dedicated to backups, I noticed the following error message:

zpool status | head
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

Indeed, the kernel is having trouble communicating with one of the hard disks:
dmesg | grep sd
[87538.049395] sd 1:0:1:0: [sdd] Unhandled sense code
[87538.049399] sd 1:0:1:0: [sdd]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[87538.049404] sd 1:0:1:0: [sdd]  Sense Key : Medium Error [current] [descriptor]
[87538.049430] sd 1:0:1:0: [sdd]  Add. Sense: Unrecovered read error - auto reallocate failed
[87538.049437] sd 1:0:1:0: [sdd] CDB: Read(10): 28 00 48 06 54 00 00 00 80 00
[87538.049448] end_request: I/O error, dev sdd, sector 1208374353

This is confirmed by the SMART status of the disk in question: a hard disk 1541 days old, we may as well replace it!
smartctl --all /dev/sdd | grep ^Error
Error logging capability: (0x01) Error logging supported.
Error 7453 occurred at disk power-on lifetime: 36998 hours (1541 days + 14 hours)
Error 7452 occurred at disk power-on lifetime: 36998 hours (1541 days + 14 hours)
Error 7451 occurred at disk power-on lifetime: 36998 hours (1541 days + 14 hours)
Error 7450 occurred at disk power-on lifetime: 36998 hours (1541 days + 14 hours)
Error 7449 occurred at disk power-on lifetime: 36998 hours (1541 days + 14 hours)
First, let's note down its serial number:
hdparm -I /dev/sdd | grep "Serial Number"
Serial Number:      WD-WCASJ0402738

A quick visit to the manufacturer's website shows, by the way, that the warranty has already expired:


And let's take the disk out of the zfs pool:
zpool offline tank /dev/sdd

Notre pool "tank" apparait alors en statut "dégradé" mais heureusement pas d'erreurs sur les données.
zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tank        DEGRADED     0     0     0
 mirror-0  ONLINE       0     0     0
   sda     ONLINE       0     0     0
   sdb     ONLINE       0     0     0
 mirror-1  DEGRADED     0     0     0
   sdc     ONLINE       0     0     0
   sdd     OFFLINE      0     0     0

errors: No known data errors

Once the server is shut down, it is time to replace the hard disk. Since the kernel recognized it as /dev/sdd, it is most likely on SATA cable number 3 (the numbering starts at 0).
A glance at the disk confirms the serial number:



After recabling the disk, let's add it back into the zfs pool:
zpool replace tank /dev/sdd

If you use the disks' serial numbers as identifiers:
zpool replace tank ata-WDC_WD2003FYYS-02W0B0_WD-WMAY02196811 ata-Hitachi_HDS721010KLA330_GTE002PBGTNYME

ZFS will now resynchronize the data blocks (in ZFS jargon, resilvering means copying the blocks from one disk to another to get back to the initial state):
zpool status
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h33m, 34.92% done, 1h1m to go
config:

NAME             STATE     READ WRITE CKSUM
tank             DEGRADED     0     0     0
 mirror-0       ONLINE       0     0     0
   sda          ONLINE       0     0     0
   sdb          ONLINE       0     0     0
 mirror-1       DEGRADED     0     0     0
   sdc          ONLINE       0     0     0
   replacing-1  DEGRADED     0     0     0
     sdd/old    OFFLINE      0     0     0
     sdd        ONLINE       0     0     0  67.0G resilvered


You can also follow the progress of the synchronization, which runs at about 47MB/s on this system:
iostat -d 1 -m
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.00         0.00         0.00          0          0
sdd             137.00         0.00        48.13          0         48
sdc             138.00        48.69         0.00         48          0
sdb               0.00         0.00         0.00          0          0
sde               0.00         0.00         0.00          0          0

About two hours later, the synchronization of the data blocks is finished, and the zfs pool is back in normal status:
zpool status
  pool: tank
 state: ONLINE
 scrub: resilver completed after 1h38m with 0 errors on Tue Sep 18 17:04:25 2012
config:

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sda     ONLINE       0     0     0
        sdb     ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0  192G resilvered

errors: No known data errors


Saturday, September 8, 2012

Installation on a 24'' iMac: how I became an OEM integrator without meaning to

An upgrade story

A 24'' iMac is probably a fairly common computer, judging by the success of the mother company, and yet the installation of Debian 7 (Beta) turned out to be far from a walk in the park. Admittedly, with ten years of Linux experience, and being a Debian Maintainer, I had quite a few cards in hand, and after a few hours everything worked flawlessly. And yet.

Graphics card: everybody off the bus
To install the proprietary nvidia driver, a quick
apt-get install nvidia-kernel-dkms linux-headers-amd64
is enough.

You reboot, you log in quietly, and after two minutes the system freezes completely. After two hours of fruitless attempts, I notice in /var/log/syslog:

Sep  7 21:50:58 leonard kernel: [  257.212660] NVRM: GPU at 0000:01:00.0 has fallen off the bus.

Finally I removed nvidia-kernel-dkms and linux-headers-amd64, rebooted the system, and there, miraculously, Xorg uses the nouveau driver with Gallium 3D acceleration, and no more freezes.
(NB: "Restricted" here means that I use Gnome Classic, and has nothing to do with how the card operates)

Sound card: mbp3 to play mp3s
When plugging my speakers into the output, I suddenly realized that the system was using the internal speaker instead of my luxurious Sony HiFi. After wrongly blaming PulseAudio for a good half hour, I realized that the culprit was the snd-hda-intel module.

It needs the entry
options snd-hda-intel model=mbp3

in /etc/modprobe.d/alsa-base.conf
to work correctly (source).

Keyboard: a rotten map to get us out of here
Not yet at the end of my troubles, I realized that the ^ (circumflex) and '<' '>' keys were swapped on my German mac keyboard. Apparently this is a bug with apple keyboards, which do not report the keycodes they claim to send.
The problem is therefore fixed by adding:

XKBMODEL="pc105"
XKBLAYOUT="de"
XKBVARIANT="mac"
XKBOPTIONS="lv3:rwin_switch,apple:badmap"

to /etc/default/keyboard

Conclusion: I am doing the job of Apple, Dell and Toshiba
The three bugs mentioned above would never have happened to me with a laptop bought at the Fnac, in the PC or Apple aisle.
Why? Simply because for any brand-name PC, the manufacturer takes care of preinstalling Windows with the best drivers, and uses that opportunity to hide the defects of its own products under a layer of software plaster.
By installing Linux or another non-preinstalled OS yourself, you are the one doing that work.

On a server the problem is almost nonexistent, because you only need a driver for your disk controller and your network card, most often maintained for Linux and FreeBSD directly by the manufacturer (Intel, Broadcom) in the kernel sources.
On a laptop you additionally need suspend to disk, suspend to RAM, bluetooth, wifi, 3D acceleration, the sound card, the Smart Media card reader, the Volume/Brightness function keys, and nowadays touch screens and hybrid graphics cards.

Update: ajout de liens vers les contributions  Intel/Broadcom à Linux & FreeBSD