umount -l / needs --make-slave
The other day I learned, the hard way, that umount -l can be dangerous. Using the --make-slave mount option makes it safer.

The scenario went like this: A virtual machine on our Proxmox VE cluster wouldn't boot. No biggie, I thought. Just mount the filesystem on the host and do a proper grub-install from a chroot:

  # fdisk -l /dev/zvol/zl-pve2-ssd1/vm-215-disk-3
  /dev/zvol/zl-pve2-ssd1/vm-215-disk-3p1 *      2048 124999679 124997632 59.6G 83 Linux
  /dev/zvol/zl-pve2-ssd1/vm-215-disk-3p2   124999680 125827071    827392  404M 82 Linux swap / Solaris

  # mount /dev/zvol/zl-pve2-ssd1/vm-215-disk-3p1 /mnt/root
  # cd /mnt/root
  # for x in dev proc sys; do mount --rbind /$x $x; done
  # chroot /mnt/root

There I could run the necessary commands to fix the boot procedure.
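A sketch of the safer variant, wrapped in a hypothetical helper function (the function name and DRY_RUN switch are mine, not from the original post). Marking the rbind mounts as slaves, with the recursive --make-rslave so the whole rbound tree is covered, means a later umount -l inside the chroot tree cannot propagate back and unmount /dev, /proc or /sys on the host itself:

```shell
# Bind host pseudo-filesystems into a chroot and immediately mark them
# as (recursive) slave mounts. Set DRY_RUN=1 to only print the commands.
bind_pseudo_fs() {
    local root=$1
    local x
    for x in dev proc sys; do
        ${DRY_RUN:+echo} mount --rbind "/$x" "$root/$x"
        ${DRY_RUN:+echo} mount --make-rslave "$root/$x"
    done
}
```

With slave propagation in place, unmount events inside the chroot no longer travel back to the host's mount namespace.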
a singal 17 is raised
When running the iKVM software on the BMC of SuperMicro machines, we regularly see an interesting "singal" typo.

(For the interested: we use a helper script to access the KVM console, ipmikvm. Without it, you need Java support enabled in your browser, and that has always given us trouble. The ipmikvm script logs on to the web interface, downloads the required Java bytecode and runs it locally.)

Connect to somewhere, wait for the KVM console to open, close it, and you might see something like this:
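The "singal" in question is number 17, which on Linux is SIGCHLD, the signal delivered to a parent when a child process terminates. The shell can map the number to a name:

```shell
# Map signal number 17 to its name; on Linux this is CHLD (SIGCHLD),
# raised in the parent when a child process exits.
kill -l 17
# CHLD
```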
mariabackup / selective table restore
When using mariabackup (xtrabackup/innobackupex) for your MySQL/MariaDB backups, you get a snapshot of the MySQL data directory (/var/lib/mysql). This is faster than doing an old-style mysqldump, but it is slightly more complicated to restore, especially if you just want access to data from a single table.

Assume you have a big database, and you're backing it up like this, using the mariadb-backup package:

  # ulimit -n 16384
  # mariabackup \
      --defaults-file=/etc/mysql/debian.cnf \
      --backup \
      --compress --compress-threads=2 \
      --target-dir=/var/backups/mysql \
      [--parallel=8] [--galera-info]
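One possible route to a single InnoDB table, sketched as a hypothetical helper (the function, database and table names are placeholders, and this is a sketch rather than a battle-tested procedure): decompress and prepare the backup with --export, then swap the tablespace files in on a running server via DISCARD/IMPORT TABLESPACE:

```shell
# run: execute a command, or just print it when DRY_RUN=1 is set.
run() { ${DRY_RUN:+echo} "$@"; }

# restore_table DB TABLE BACKUPDIR DATADIR
restore_table() {
    local db=$1 table=$2 backup=$3 datadir=$4
    # Decompress and prepare the backup; --export generates the
    # per-table .cfg files needed for tablespace import.
    run mariabackup --decompress --target-dir="$backup"
    run mariabackup --prepare --export --target-dir="$backup"
    # On the destination server: detach the empty table's tablespace,
    # copy the backed-up one in place, and import it.
    run mysql -e "ALTER TABLE $db.$table DISCARD TABLESPACE"
    run cp "$backup/$db/$table.ibd" "$backup/$db/$table.cfg" "$datadir/$db/"
    run chown mysql:mysql "$datadir/$db/$table.ibd" "$datadir/$db/$table.cfg"
    run mysql -e "ALTER TABLE $db.$table IMPORT TABLESPACE"
}
```

This assumes the table already exists on the destination with an identical definition (CREATE TABLE it first if needed).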
apt / downgrading back to current release
If you're running an older Debian or Ubuntu, you may sometimes want to check out a newer version of a package, to see if a particular bug has been fixed. I know, this is not supported, but this scheme Generally Works (*):

- replace the current release name in /etc/apt/sources.list with the next release, e.g. from bionic to focal;
- do an apt-get update and an apt-get install SOME-PACKAGE.

While you test the package, you can already put the original sources back.
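The release swap is a one-line sed. Illustrated here on a sample file in /tmp; on a real system you would run the same sed against /etc/apt/sources.list (bionic and focal are example release names):

```shell
# Create a sample sources.list line and swap the release name in it.
echo 'deb http://archive.ubuntu.com/ubuntu bionic main universe' \
    > /tmp/sources.list.example
sed -i 's/bionic/focal/g' /tmp/sources.list.example
cat /tmp/sources.list.example
# deb http://archive.ubuntu.com/ubuntu focal main universe
```

Afterwards, swapping focal back to bionic and running apt-get update again restores the old state; apt-get install SOME-PACKAGE=VERSION can then force the package back to the release's version.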
k8s / lightweight redirect
Spinning up pods just for parked/redirect sites? I think not. Recently, I had to HTTP(S)-redirect a handful of hostnames to elsewhere. Pointing them at our well-maintained K8S cluster was the easy thing to do. It would manage LetsEncrypt certificates automatically using cert-manager.io. From the cluster, I could spin up a service and an nginx deployment with a bunch of redirect/302 rules. However, spinning up one or more nginx instances just to have them do simple redirects sounds like overkill.
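One lightweight option, assuming the cluster runs ingress-nginx and cert-manager: let the ingress controller serve the redirect itself, with no pods dedicated to the parked site at all. A sketch (names, hosts and the issuer are placeholders, not from the original setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: parked-example-com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    # ingress-nginx answers with a 301 itself; no backend pods involved.
    nginx.ingress.kubernetes.io/permanent-redirect: https://www.example.com/
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - parked.example.com
    secretName: parked-example-com-tls
  rules:
  - host: parked.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dummy  # never receives traffic; the annotation wins
            port:
              number: 80
```

cert-manager still provisions and renews the LetsEncrypt certificate for the hostname, so HTTPS redirects keep working.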
traverse path permissions / namei
How does one traverse a long path to quickly find out where you lack permissions?

So, I wanted to test some stuff in Debian/Buster. I already had an LXC container through LXD. I just needed to get some source files to the right place.

  lxd$ sudo zfs list | grep buster
  data/containers/buster-builder   692M   117G   862M   /var/snap/lxd/common/lxd/storage-pools/data/containers/buster-builder

  lxd$ sudo zfs mount data/containers/buster-builder

Make sure there's somewhere where I can write:

  lxd$ sudo mkdir \
    /var/snap/lxd/common/lxd/storage-pools/data/containers/buster-builder/rootfs/home/osso/walter
  lxd$ sudo chown walter \
    /var/snap/lxd/common/lxd/storage-pools/data/containers/buster-builder/rootfs/home/osso/walter

Awesome.
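A long storage-pool path like that is exactly where traverse permissions tend to go wrong, and namei (from util-linux) answers the title question: it splits a path into its components and, with -l, prints owner, group and mode for each one, so the directory missing the x (traverse) bit for your user stands out immediately. Demonstrated on a short path here:

```shell
# Show per-component ownership and permissions; run the same against the
# long storage-pool path above to see where access breaks down.
namei -l /tmp
```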
migrating vm interfaces / eth0 to ens18
How about finally getting rid of eth0 and eth1 in those ancient Ubuntu VMs that you keep upgrading? Debian and Ubuntu have been doing a good job at keeping the old names during upgrades. But it's time to move past that. We expect ens18 and ens19 now. There's no need to hang on to the past. (And you have moved on to Netplan already, yes?)

Steps:

  rm /etc/udev/rules.d/80-net-setup-link.rules
  update-initramfs -u
  rm /etc/systemd/network/50-virtio-kernel-names.
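After the reboot, a quick sanity check (assuming iproute2 is installed) confirms the interfaces now carry their predictable names; remember to update any lingering eth0 references in your Netplan config as well:

```shell
# One line per interface; expect ens18/ens19 instead of eth0/eth1.
ip -br link
```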
kioxia nvme / num_err_log_entries 0xc004 / smartctl
So, these new Kioxia NVMe drives were incrementing the num_err_log_entries as soon as they were inserted into the machine. But the error said INVALID_FIELD. What gives?

In contrast to the other (mostly Intel) drives, these drives started incrementing the num_err_log_entries as soon as they were plugged in:

  # nvme smart-log /dev/nvme21n1
  Smart Log for NVME device:nvme21n1 namespace-id:ffffffff
  ...
  num_err_log_entries : 932

The relevant errors should be readable in the error-log. All 64 errors in the log looked the same:
openssl / error 42 / certificate not yet valid
In yesterday's post about not being able to connect to the SuperMicro iKVM IPMI, I wondered “why stunnel/openssl did not send error 45 (certificate_expired) for a not-yet-valid certificate.” Here's a closer examination.

Quick recap: yesterday, I got SSL alert/error 42 as a response to a client certificate that was not yet valid. The server was living in 2015 and refused to accept a client certificate that would only become valid in 2016.
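The validity window a peer compares its clock against can be printed with openssl; a throwaway self-signed certificate serves as illustration here (paths and CN are placeholders):

```shell
# Generate a throwaway cert and print its validity window. A peer whose
# clock predates the notBefore date rejects the cert as not yet valid.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 365 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -dates
```

Note that openssl's -checkend only tests the notAfter side; for a not-yet-valid certificate you have to compare notBefore against the clock yourself.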
supermicro / ikvm / sslv3 alert bad certificate
Today I was asked to look at a machine that disallowed iKVM IPMI console access. It allowed access through the “iKVM/HTML5”, but when connecting using the “Console Redirection” (Java client, see also ipmikvm) it would quit after 10 failed attempts. TL;DR: The clock of the machine had been reset to a timestamp earlier than the first validity of the supplied client certificate. After changing the BMC time from 2015 to 2021, everything worked fine again.
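The BMC clock can be read and corrected over IPMI with ipmitool's SEL time commands; a sketch wrapped in a hypothetical helper (host, user and the lanplus transport are assumptions; set DRY_RUN=1 to merely print the commands):

```shell
# Inspect and correct the BMC clock, which the client-certificate
# validity check on the BMC side is compared against.
bmc_fix_time() {
    local host=$1
    ${DRY_RUN:+echo} ipmitool -I lanplus -H "$host" -U ADMIN sel time get
    ${DRY_RUN:+echo} ipmitool -I lanplus -H "$host" -U ADMIN \
        sel time set "$(date '+%m/%d/%Y %H:%M:%S')"
}
```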