chrony is the default time service on newer OS releases (Red Hat Enterprise Linux 7.2 and later, and any recent Ubuntu release).
chrony has several advantages over ntpd:
Quicker synchronisation.
Better response to changes in clock frequency (very useful for VMs).
Periodic polling of time servers isn’t required.
It lacks some features, like broadcast, multicast, and Autokey packet authentication. When these are required, or for systems that will be running continuously, ntpd is a better choice.
A more comprehensive comparison list is available here:
chrony is installed by default on many distros. If you don’t already have it, install it.
Edit the configuration file.
# vi /etc/chrony.conf
Make the following changes.
# Edit the time sources of your choice
# iburst makes the initial sync faster
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
# Helps stabilise the initial sync across restarts
driftfile /var/lib/chrony/drift
# Allows serving time even if above sources aren't available
local stratum 8
# Opens the NTP port to respond to clients' requests
# Edit it with your clients' subnet
allow 192.168.1.0/24
# Enables support for the settime command in chronyc
manual
Check the firewall configuration in the last section.
Chrony client configuration
server [IP/HOSTNAME OF ABOVE SERVER] iburst
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
Checking chrony
[Check if the service is running]
$ systemctl status chrony
[Display the system's clock performance]
$ chronyc tracking
[Display time sources]
$ chronyc sources
If the system is going to be isolated, with no internet connection or any other time source available, you can use its internal clock.
Edit /etc/ntp.conf.
# To point ntpd to sync with its own system clock
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
This will work in a network “island”, but the time won’t be accurate. It is best to sync from other time sources (next section).
Syncing to other NTP servers
# Edit the time sources of your choice
# iburst makes the initial sync faster
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
# Insert your own subnet address
# nomodify - Disallows clients from reconfiguring the server
# notrap - Declines the mode 6 trap service (remote event logging) for clients
restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap
# Indicates where to keep logs
logfile /var/log/ntp.log
You might want to also modify the rule to limit access only to certain subnets or clients.
You can add lines to the chrony and ntpd configurations to allow IPv6 traffic. You would also need to add additional firewall rules. IPv4 is shown here for simplicity (and also because I don’t have the requirement).
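As a sketch (2001:db8:1::/64 is a documentation prefix, so treat these lines as placeholders for your own range), chrony takes an extra allow directive for an IPv6 subnet, and ntpd an equivalent restrict line:

```
# /etc/chrony.conf - allow an IPv6 client subnet
allow 2001:db8:1::/64

# /etc/ntp.conf - equivalent restriction for ntpd
restrict -6 2001:db8:1:: mask ffff:ffff:ffff:ffff:: nomodify notrap
```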
SNFS/Xsan: Quantum SNFS metadata controller and Xsan client compatibility chart
In a previous life, I designed and built many SANs based on Xsan (I believe I started with Xsan 1.3). I then migrated to looking after SANs based on SNFS, either from third-party vendors or from Quantum.
I believe that the age of Fibre Channel is long over (although SNFS also works on Infiniband if I recall correctly). The advantages of block-level access have been eclipsed by the much higher bandwidth with Ethernet, at a fraction of the cost.
The information has been collected from Apple support articles (current and obsolete ones), ADIC’s and Quantum’s StorNext documentation, and personal experience.
Every Xsan 2.0 and above client has been included. Maybe one day I will add Xsan 1.x releases for historical purposes.
[Compatibility chart; the tick/cross marks did not survive conversion, and “?” marks unverified combinations. Columns (Xsan client / macOS): Xsan 20.0 / 11.0.1; 5.0.1 / 10.13–10.15; 5 / 10.12; 4.1 / 10.11; 4 / 10.10; 3.1 / 10.9; 3 / 10.8; 2.3 / 10.7; 2.2–2.2.2 / 10.6; 2–2.1.1 / 10.5. Rows (SNFS MDC): 7.0.x; 6.4.0; 6.3.x; 6.2.x; 6.1.x; 6.0.5–6.0.6; 6.0–6.0.1.1; 5.4.x; 5.3.2.x; 5.3.1; 5.3.0; 5.2.2; 5.2.1; 5.2.0; 5.1.x; 5.0.x; 4.7.x; 4.6; 4.3; 4.2.1; 4.2.0; 4.1.1–4.1.3; 4.0–4.1; 3.5.x; 3.1.2–3.1.5.]
SNFS controller and Xsan client compatibility chart.
There are some caveats with some of the supported configurations. Some releases were originally marked by Apple as incompatible and then reverted. In the same way, some configurations that were originally marked as working were then updated as not compatible.
Double-check official documentation before any deployment.
I hope you find this table useful. There are some additional Xsan curiosities I will post in the future.
macOS : Removing caches and temporary files in macOS
The easiest way is using an application like Onyx.
You can check the amount of space taken by the files using du.
$ du -ch /path
# du -ch /path
Before using any deletion command you can check with ls what exactly is going to be deleted.
$ ls /path
# ls /path
When using rm, enable verbose mode (-v). Don’t delete the root folders themselves, just the files contained in them; a trailing /** deletes the contents while keeping the enclosing folder.
$ rm -rfv /path/**
# rm -rfv /path/**
As with everything related to deleting files: be careful.
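A quick illustration of that behaviour on a throwaway directory (the /tmp path is made up for the demo):

```shell
# Create a scratch directory with some contents
mkdir -p /tmp/cache_demo/sub
touch /tmp/cache_demo/a /tmp/cache_demo/sub/b
# The trailing /** removes the contents but keeps the folder itself
rm -rfv /tmp/cache_demo/**
ls -A /tmp/cache_demo
```

After the rm, the directory still exists but is empty.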
User & App Caches
List files.
[Logged user caches]
$ du -ch ~/Library/Caches/
$ du -ch ~/Library/Logs/
[All users caches, requires sudo]
# du -ch /Users/*/Library/Caches/
# du -ch /Users/*/Library/Logs/
Raspberry Pi: Installing, hardening and optimising Ubuntu 20.04 Server
I have been trying to document the process of configuring a Raspberry Pi as a Time Machine Capsule, but the article became far too long; it covered too much and was really hard to read.
I then decided to break the stages into more manageable steps. This has the advantage of allowing the common stages, like setting up the OS, to be shared between different projects.
Therefore, this is that first entry. Some others will follow about how to build different things from this first base image.
Selecting the OS
The 64-bit beta release of Raspberry Pi OS I tried didn’t let ZFS install easily. Ubuntu has the advantage of being a like-for-like experience regardless of the platform, so it is my preferred choice. Any experience you gain with it will be easily transferable.
The Raspberry Pi model will determine the supported versions of the OS.
Model            32-bit Ubuntu   64-bit Ubuntu
Raspberry Pi 2   Supported       Not supported
Raspberry Pi 3   Supported       Recommended
Raspberry Pi 4   Supported       Recommended
Supported Ubuntu versions.
The Raspberry Pi 3 gains little from the 64-bit image due to its limited RAM. For the same reason, it won’t support ZFS: the Pi will restart/reset when ZFS volumes are accessed because it runs out of RAM.
If you are going to use a GUI, you should choose a Raspberry Pi 4 with at least 4GB of RAM.
The image can be directly installed on a micro SD card:
It is possible to boot from a USB stick, which is preferable for several reasons. They are cheaper, easier to access from another system, and simple to replace.
First, enable USB boot on your Pi.
Model                   USB Boot Support   Notes
Raspberry Pi 1          Not supported      n/a
Raspberry Pi 2 and 3B   Supported          On Raspberry Pi OS: echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt and reboot.
Raspberry Pi 3B+        Supported          Supported out of the box.
Raspberry Pi 4          Supported          On Raspberry Pi OS: rpi-eeprom-config --edit, set BOOT_ORDER=0xf41, and reboot.
Raspberry Pi models with USB boot support.
You might have to boot from an SD card at least once to configure USB boot. Once enabled, it remains activated.
Additional information about the different boot modes for the Raspberry Pi
For the image to be bootable, you need to make some changes. I extracted the steps from this Raspberry Pi forum post. You might find it easier to apply changes if you mount it on another system.
There are two options to make the changes:
Mount the USB stick on another system, and then issue the commands on the USB device. This other system can be the Raspberry Pi itself booting from the SD card, and accessing the USB device.
Or make the changes on the SD card, and then copy the SD card image to the USB device.
Apply the following changes.
1) On the /boot of the USB device, uncompress vmlinuz.
$ cd /media/*/system-boot/
$ zcat vmlinuz > vmlinux
2) Update the config.txt file. The pi4 section is shown in this example, but it has also been tested on a Pi 3. Just enter the information for your Pi model.
$ vim config.txt
The dtoverlay line might be optional for headless systems, but if you have the time and inclination, there is some documentation regarding Raspberry Pi’s device tree parameters.
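For reference, the config.txt changes from that forum post looked roughly like the following for the Pi 4 (treat the exact option set as an assumption and check the post for your release; the essential lines are kernel= and initramfs):

```
[pi4]
max_framebuffers=2
dtoverlay=vc4-fkms-v3d
boot_delay
kernel=vmlinux
initramfs initrd.img followkernel
```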
3) Create a script in the boot partition called auto_decompress_kernel with the following content:
#!/bin/bash -e

## Set variables
BTPATH=/boot/firmware
CKPATH=$BTPATH/vmlinuz
DKPATH=$BTPATH/vmlinux

## Check if decompression needs to be done
if [ -e $BTPATH/check.md5 ]; then
  if md5sum --status --ignore-missing -c $BTPATH/check.md5; then
    echo -e "\e[32mFiles have not changed, decompression not needed\e[0m"
    exit 0
  else
    echo -e "\e[31mHash failed, kernel will be decompressed\e[0m"
  fi
fi

# Back up the old decompressed kernel
if ! mv $DKPATH $DKPATH.bak; then
  echo -e "\e[31mDECOMPRESSED KERNEL BACKUP FAILED!\e[0m"
  exit 1
else
  echo -e "\e[32mDecompressed kernel backup was successful\e[0m"
fi

# Decompress the new kernel
echo "Decompressing kernel: $CKPATH.............."
if ! zcat $CKPATH > $DKPATH; then
  echo -e "\e[31mKERNEL FAILED TO DECOMPRESS!\e[0m"
  exit 1
else
  echo -e "\e[32mKernel decompressed successfully\e[0m"
fi

# Hash the new kernel for checking
if ! md5sum $CKPATH $DKPATH > $BTPATH/check.md5; then
  echo -e "\e[31mMD5 GENERATION FAILED!\e[0m"
else
  echo -e "\e[32mMD5 generated successfully\e[0m"
fi

exit 0
Normally you would need to mark the script as executable, but unless you modify the partition from its FAT32 default, there is no executable flag to set. So leave it as it is.
If you can mount the root filesystem in the system you are using to edit the files, you can go ahead with steps 4 and 5. Otherwise, you should be able to boot now and manually do these steps after your first boot.
4) Create a script in the /etc/apt/apt.conf.d/ directory and call it 999_decompress_rpi_kernel.
# cd /media/*/writable/etc/apt/apt.conf.d/
# vi 999_decompress_rpi_kernel
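The hook itself is a one-line apt configuration that re-runs the decompression script after package operations (the path assumes the script was saved under /boot/firmware, as in the step above):

```
DPkg::Post-Invoke {"/bin/bash /boot/firmware/auto_decompress_kernel"; };
```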
You can save yourself some time and configure the network at this stage.
In my case, I have a static DHCP lease associated with the Pi MAC address, but if you don’t, you can configure the network with a static IP address by editing the network-config file in /boot.
Check the service status and confirm that the time source is correct.
# systemctl status systemd-timesyncd.service
Finally, check that the time zone is correct.
# timedatectl status
Local time: Sun 2021-08-29 23:24:49 BST
Universal time: Sun 2021-08-29 22:24:49 UTC
RTC time: n/a
Time zone: Europe/London (BST, +0100)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Customising the MOTD
You can get the MOTD from the login screen manually with the following command.
$ for i in /etc/update-motd.d/* ; do if [ "$i" != "/etc/update-motd.d/98-fsck-at-reboot" ]; then $i; fi; done
To get system information (including temperature):
$ /etc/update-motd.d/50-landscape-sysinfo
You can edit, add and reorder scripts in /etc/update-motd.d/.
Configuring SSH
SSH will be enabled by default. Test access with the newly created account.
By default, only the password is required to access the server, but we will add the requirement of an SSH key on top of the password, and also limit access to authorised IP addresses only.
If you haven’t generated a public and private key pair on your system (the one used to log into the Pi), you will need to do it (explained below).
A brief note on encryption: elliptic curve cryptography (ECC) uses smaller keys and is faster than RSA while providing an equivalent level of security; matching that security with RSA requires much bigger keys:
ECC key size   RSA equivalent
160 bits       1024 bits
224 bits       2048 bits
256 bits       3072 bits
384 bits       7680 bits
512 bits       15360 bits
ECC uses smaller keys for an equivalent level of security.
You can use either ECDSA or Ed25519 keys. Ed25519 isn’t as universally implemented yet, being quite new, so some clients might not support it, but it is the fastest and most secure option.
For ECDSA, it is recommended to use the biggest key size: 521 bits (note that 521 isn’t a typo). Ed25519 keys have a fixed length of 256 bits.
When issuing ssh-keygen, use the -o option. This forces the use of the new OpenSSH format (instead of PEM) when saving your private key. It increases resistance to a known brute-force attack. It breaks compatibility with OpenSSH versions older than 6.5, but this version of Ubuntu runs version 8.2, so this isn’t an issue.
Note that you use the -i flag with your private key, and ssh-copy-id will send the public key for storage on the remote host.
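A sketch of the key generation (the file name, empty passphrase and host below are placeholders for illustration):

```shell
# Remove any leftovers from a previous run of this demo
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
# Generate an Ed25519 key pair in the new OpenSSH format (-o) with
# 100 KDF rounds (-a); -N '' sets an empty passphrase for the demo only
ssh-keygen -o -a 100 -t ed25519 -N '' -f /tmp/demo_ed25519 -q
ls /tmp/demo_ed25519 /tmp/demo_ed25519.pub
# The public key would then be copied to the Pi with something like:
#   ssh-copy-id -i /tmp/demo_ed25519 <user>@<pi-address>
```

In real use you would keep the key under ~/.ssh and set a passphrase.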
SSH can be configured on the server side to allow only password logins, only key logins, or to require both.
# vim /etc/ssh/sshd_config
“PasswordAuthentication no” permits key-only logins, while “PasswordAuthentication yes” keeps password logins available as well. Note that to strictly require both a key and a password, you would also need to set AuthenticationMethods "publickey,password"; requiring both is the safest option.
We also disable the option to allow root to login via SSH. The root account is disabled on the image by default, but ensure SSH has been configured correctly anyway.
PermitRootLogin no
PasswordAuthentication yes
# systemctl restart sshd
SSH from another terminal with the new user account, and ensure that the access is working.
If it works, delete the old ubuntu account.
# userdel -r ubuntu
Activate and configure the firewall
Set default rules (deny all incoming, allow all outgoing).
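A minimal sketch of those defaults with ufw (limit ssh also rate-limits repeated connection attempts):

```
# ufw default deny incoming
# ufw default allow outgoing
# ufw limit ssh
# ufw enable
```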
Mosh might require some ports to be opened in the firewall.
The range of ports goes from 60001 to 60999, but if you are expecting few connections, you can make the range smaller.
# ufw allow proto udp from <SOURCE> to <SERVER> port 60001:60010
# ufw limit 60001:60010/udp
Install Cockpit
# apt install -y cockpit
# ufw allow proto tcp from <SOURCE> to <SERVER> port 9090
# ufw limit 9090/tcp
The system can now be reached with a web browser on port 9090:
https://<hostname/IP>:9090
Other customisation
Argon Fan HAT configuration
If you have an Argon fan HAT, you can configure it as follows.
$ curl https://download.argon40.com/argonfanhat.sh -o argonfanhat.sh
$ bash argonfanhat.sh
[...]
Use argonone-config to configure fan
Use argonone-uninstall to uninstall
I have configured mine with the following triggers.
30 °C -> 0%
60 °C -> 10%
65 °C -> 25%
70 °C -> 55%
75 °C -> 100%
Aliases
On Ubuntu, and most distros, there will be an entry in ~/.bashrc that will look like this:
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
This entry can be added manually if not present. This allows all of the aliases to be grouped in ~/.bash_aliases.
$ vim ~/.bash_aliases
# Show free RAM
alias showfreeram='free -m | sed -n "2 p" | awk "{print \$4}"'
# Release and free up RAM
# alias freeram='showfreeram && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && showfreeram'
# Show temperature
alias temp='cat /sys/class/thermal/thermal_zone0/temp | head -c -4 && echo " C"'
# Show ZFS datasets compress ratios
alias ratio='sudo zfs get all | grep " compressratio "'
This creates a base image with a decent level of security. I will likely add a follow-up on Fail2Ban to improve security even further.
This week my 20.04 LTS installation started to freeze randomly. I suspected several things, but through a process of elimination it ended up pointing to the Wi-Fi adapter.
I can’t rule out a hardware issue yet, but the new driver has been very stable and no freezes have happened so far. This started happening after the last Ubuntu upgrade I ran, and to be fair, the Wi-Fi adapter’s DKMS driver I was using was quite dated.
First check the hardware
Unplug and re-plug the adapter. Remember that it will only work on USB 3.0 ports, and it won’t be recognised by USB 3.1 ports. Check the output of:
$ dmesg
The following commands will also help in showing if the adapter is correctly detected.
$ lsusb
Bus 004 Device 002: ID 4791:205a G-Technology ArmorATD
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 2357:0103 TP-Link Archer T4UH wireless Realtek 8812AU
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
The Bus 003 Device 002: ID 2357:0103 entry above is the one with the USB Wi-Fi adapter on my system. You can remove the adapter and issue the command again and compare results to help you identify yours.
For non-USB adapters you can use:
$ lspci
More detailed information about the device can be obtained with the lshw command.
$ lshw -C network
WARNING: you should run this program as super-user.
*-network
description: Ethernet interface
product: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
vendor: Realtek Semiconductor Co., Ltd.
[output truncated]
*-network:2
description: Wireless interface
physical id: 4
bus info: usb@3:1
logical name: enx18d6c70fbacc
serial: 18:d6:c7:a1:22:ab
capabilities: ethernet physical wireless
configuration: broadcast=yes driver=rtl8812au ip=192.168.x.2 multicast=yes wireless=IEEE 802.11AC
WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
This last command is really useful because it identifies which driver the device uses. In this case, the chipset and driver are identified by the string driver=rtl8812au. We already knew this in any case; if yours is a different driver/adapter, this solution is unlikely to work for you.
Checking the drivers
Now check that the driver is loaded; look for a string similar to the driver string above.
$ lsmod | grep 8812
8812au 999424 0
If the module isn’t loaded, you can load it manually:
# modprobe 8812au
Installing updated drivers
If all of the above seems to work but the Wi-Fi adapter isn’t detected, you can install the drivers manually. These drivers are newer than the ones provided via apt.
Uninstall the system provided drivers
From the GUI:
Go to Software & Updates
Select Additional Drivers
Find the entry for the Wi-Fi adapter (rtl8812-au) and select Do not use the device
Or from the CLI:
[find the installed driver]
# apt list rtl8812au*
[and uninstall it]
# apt purge rtl8812au-dkms
[Make sure that in /etc/NetworkManager/NetworkManager.conf]
# vim /etc/NetworkManager/NetworkManager.conf
[The following entry is inserted]
[device]
wifi.scan-rand-mac-address=no
If the driver is recognised you can configure the wireless network as normal. Restart to make sure everything works and remains persistent.
Uninstall
If you ever need to uninstall the driver you can do it with:
# dkms remove -m rtl8812au -v 5.9.3.2 --all
If you edited /etc/modules you will need to revert the changes. In the previous tutorial for Ubuntu 18.04 the module had to be added manually. It isn’t the case for this version.
macOS: Application stealing focus
A few weeks ago I had an issue with a macOS system whose focus kept being stolen by an unknown application. All the windows were displaying as inactive, so it had to be a background process.
A StackExchange user called medmunds had adapted a script from another post, which in turn seems to have been modified from a script on an Apple forum. Isn’t the Internet great?
His script very quickly revealed the culprit. Very useful indeed.
#!/usr/bin/python
try:
    from AppKit import NSWorkspace
except ImportError:
    print("Can't import AppKit -- maybe you're running python from brew?")
    print("Try running with Apple's /usr/bin/python instead.")
    exit(1)

from datetime import datetime
from time import sleep

last_active_name = None

while True:
    active_app = NSWorkspace.sharedWorkspace().activeApplication()
    if active_app['NSApplicationName'] != last_active_name:
        last_active_name = active_app['NSApplicationName']
        print('%s: %s [%s]' % (
            datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
            active_app['NSApplicationName'],
            active_app['NSApplicationPath']
        ))
    sleep(1)
VirtualBox/KVM: Reduce VM sizes
There are two utilities that can help discard unused blocks so that VMs can be shrunk.
zerofree finds unused blocks with non-zero content in ext2, ext3 and ext4 filesystems and fills them with zeros. The volume can’t be mounted while zerofree runs, which makes the process a bit convoluted.
fstrim discards unused blocks on a mounted filesystem. It is the preferred option when working with SSDs and thinly provisioned storage: it supports more filesystems and won’t hammer your SSD with unnecessary writes.
Use fstrim where possible, and fall back to zerofree only if unavoidable.
CentOS 7/8
fstrim
# fstrim -va
zerofree (ext2, ext3, ext4)
# yum install epel-release
# yum install zerofree
[Reboot]
Press e on GRUB menu
Go to line that starts with 'linux'
Add init=/bin/bash
Ctrl-X
[Find which disk to trim]
# df
# zerofree -v /dev/mapper/centos_centos7-root
[Shutdown machine]
Zeroing free space (xfs, which zerofree doesn’t support)
# yum install epel-release
# yum install zerofree
[Reboot]
Press e on GRUB menu
Go to line that starts with 'linux'
Change ro to rw
Add init=/bin/bash
Ctrl-X
[Find the partition/filesystem to trim]
# df
[Fill the filesystem with zeros. This works with any filesystem, but it writes a lot of data to your drives.]
# dd if=/dev/zero of=/tmp/dd bs=$((1024*1024)); rm /tmp/dd
# sync
# exit
[Shutdown machine]
Debian 9/10
fstrim
[Debian 9]
# fstrim -va
[Debian 10]
# fstrim -vA
zerofree
# apt install zerofree
[Reboot]
Press e on GRUB menu
Go to line that starts with 'linux'
Add init=/bin/bash
Ctrl-X
[Find disk to trim]
# df
# zerofree -v /dev/sda1
[Shutdown machine]
Be aware that if you are using ZFS on Ubuntu (or any other distro), the above commands won’t work; in fact, they will generate a lot of extra writes on the filesystem.
Just ensure that ZFS is using compression, or avoid zeroing in the guest system.
Reducing the image size
Virtualbox
[List all disks]
$ vboxmanage list hdds
[Just the paths]
$ vboxmanage list hdds | grep 'Location.*.vdi' | awk '{$1=""}1'
[Compress one image]
$ vboxmanage modifymedium disk --compact /home/user/Virtualbox/Kali-Linux-2021.1/Kali-Linux-2020.4-vbox-amd64-disk001.vdi
[List all images path]
$ vboxmanage list hdds | grep 'Location.*.vdi' | awk '{$1=""}1' | sed 's/^ /"/;s/$/"/'
I wish I knew the syntax to automate compacting all the images in one line. I might revisit it in the future with a script.
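One possible sketch (vboxmanage is assumed to be installed; the path extraction is demonstrated on canned sample output so the parsing can be checked without VirtualBox, and paths with spaces are handled by reading whole lines):

```shell
# Extract .vdi paths from `vboxmanage list hdds`-style output
extract_vdi_paths() {
  sed -n 's/^Location: *\(.*\.vdi\)$/\1/p'
}

sample_output='UUID:           01234567-89ab-cdef-0123-456789abcdef
Location:       /home/user/VirtualBox VMs/Kali/disk 1.vdi
State:          created'

printf '%s\n' "$sample_output" | extract_vdi_paths

# The real loop would then be:
#   vboxmanage list hdds | extract_vdi_paths | while IFS= read -r disk; do
#     vboxmanage modifymedium disk --compact "$disk"
#   done
```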
#!/bin/sh
# Recompress all qcow2 images in the current directory
for file_name in *.qcow2
do
    echo
    echo ==================
    echo "Image: $file_name"
    echo "Old $(qemu-img info "$file_name" | grep 'disk size')"
    mv "$file_name" "$file_name.tmp"
    qemu-img convert -O qcow2 "$file_name.tmp" "$file_name"
    rm "$file_name.tmp"
    echo "New $(qemu-img info "$file_name" | grep 'disk size')"
    echo ==================
done
Ubuntu 20.04: VirtualBox not running after the last upgrade
When launching a VM in VirtualBox, I got an error saying that it can’t be started because a required module isn’t loaded, suggesting that the module be loaded manually.
# modprobe vboxdrv
[This outputs an error message]
modprobe: FATAL: Module vboxdrv not found in directory /lib/modules/5.8.0-34-generic
Re-installing VirtualBox also fails because the virtualbox-dkms package can’t be configured.
# apt install virtualbox virtualbox-dkms
[...]
Removing old virtualbox-6.1.10 DKMS files...
------------------------------
Deleting module version: 6.1.10
completely from the DKMS tree.
------------------------------
Done.
Loading new virtualbox-6.1.10 DKMS files...
Building for 5.8.0-34-generic 5.8.0-36-generic
Building initial module for 5.8.0-34-generic
ERROR: Cannot create report: [Errno 17] File exists: '/var/crash/virtualbox-dkms.0.crash'
Error! Bad return status for module build on kernel: 5.8.0-34-generic (x86_64)
Consult /var/lib/dkms/virtualbox/6.1.10/build/make.log for more information.
[..]
E: Sub-process /usr/bin/dpkg returned an error code (1)
The last system update upgraded the kernel from 5.4 to 5.8, and something in the new kernel breaks VirtualBox.
There are two solutions: installing VirtualBox from source, or downgrading to the previous kernel.
I have chosen the latter, as I expect this to be a temporary issue and a fix to be released soon.
The process to revert is simple.
Reboot and in the GRUB screen select Advanced Options.
Ubuntu 20.04.1 LTS
*Advanced options for Ubuntu 20.04.1 LTS
History for Ubuntu 20.04.1 LTS
UEFI Firmware Settings
Select a trusted 5.4 version to boot from, most likely the third option in the list. Your exact version numbers might differ from mine.
Ubuntu 20.04.1 LTS, with Linux 5.8.0-34-generic
Ubuntu 20.04.1 LTS, with Linux 5.8.0-34-generic (recovery mode)
*Ubuntu 20.04.1 LTS, with Linux 5.4.0-59-generic
Ubuntu 20.04.1 LTS, with Linux 5.4.0-59-generic (recovery mode)
Ubuntu 20.04.1 LTS, with Linux 5.4.0-54-generic
Ubuntu 20.04.1 LTS, with Linux 5.4.0-54-generic (recovery mode)
After the boot, check that you are running 5.4.
$ uname -r
5.4.0-59-generic
See which versions of 5.8 you have installed in your system.
$ apt list --installed | grep linux-image
Make a note of the 5.8 versions listed (or use grep again), and remove them manually.
Linux 5.8 seems to have been withdrawn for the time being, so it is safe to run updates.
Ubuntu: ZFS bpool is full and not running snapshots during apt updates
When running apt to update my system I kept seeing a message saying that bpool had less than 20% space free and that the automatic snapshotting would not run.
What I didn’t realise is that this also applies to the rpool, even if it has plenty of free space: the two pools are snapshotted together, so they have to match. Checking the snapshots, it seems they had stopped running for several months. Yikes!
You can list the current snapshots in several ways:
[List existing snapshots with their names and creation date.]
$ zsysctl show
Name: rpool/ROOT/ubuntu_dd5xf4
ZSys: true
Last Used: current
History:
- Name: rpool/ROOT/ubuntu_dd5xf4@autozsys_qfi5pz
Created on: 2021-01-12 23:35:01
- Name: rpool/ROOT/ubuntu_dd5xf4@autozsys_1osqbq
Created on: 2021-01-12 23:33:22
You can also use the zfs commands for the same purpose.
List existing snapshots with default properties information
(name, used, references, mountpoint)
$ zfs list -t snapshot
You can also list the creation date asking for the creation property.
$ zfs list -t snapshot -o name,creation
It should list them in creation order, but if not, you can use the -s option to sort them.
$ zfs list -t snapshot -o name,creation -s creation
Deciding which snapshots to delete will vary. You might want to get rid of the older ones, or maybe the ones that are consuming the most space.
My snapshots were a few months old so there wasn’t much point in keeping them. I deleted all with the following one-liner:
[-H removes headers]
[-o name displays the name of the filesystem]
[-t snapshot displays only snapshots]
# zfs list -H -o name -t snapshot | grep auto | xargs -n1 zfs destroy
I can’t stress enough how important it is that whatever zfs destroy command you issue, especially when iterating automatically, applies only to the snapshots you intend to remove.
You can delete filesystems, volumes and snapshots with the above command. Deleting snapshots isn’t an issue. Deleting the filesystem is a pretty big one.
Please, ensure that the command lists only snapshots you want to remove before running it. You have been warned.
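A safe way to preview is to put echo in front of zfs destroy, which prints the commands instead of executing them. Here is a sketch on canned data (the dataset names are made up; in real use, replace the canned list with the output of zfs list -H -o name -t snapshot):

```shell
snapshots='rpool/ROOT/ubuntu_dd5xf4@autozsys_qfi5pz
bpool/BOOT/ubuntu_dd5xf4@autozsys_1osqbq
rpool/USERDATA/home@manual_backup'

# grep auto keeps only the automatic snapshots; echo makes it a dry run
printf '%s\n' "$snapshots" | grep auto | xargs -n1 echo zfs destroy
```

Only once the printed commands look right should the echo be removed.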
Ubuntu: System freezing for a few seconds with iwlwifi microcode sw error
For a few months now my main system would momentarily freeze or stall (usually about 20-30 seconds) and then continue working. It was something that started after one system update and wasn’t fixed with any further updates.
The system would notify that one of the CPU cores timed out and for a few moments the computer would stall or freeze before resuming as if nothing had happened.
dmesg was showing timeouts related to iwlwifi:
[ 2313.312941] Timeout waiting for hardware access (CSR_GP_CNTRL 0x0c04000c)
[ 2313.312995] WARNING: CPU: 4 PID: 1424 at drivers/net/wireless/intel/iwlwifi/pcie/trans.c:2066 iwl_trans_pcie_grab_nic_access+0x1f9/0x230 [iwlwifi]
iwlwifi is the kernel driver for several Intel based wireless adapters.
It is possible to install different versions of the driver manually, but I don’t like to deviate too much from a standard installation; it can complicate maintenance and troubleshooting in the future.
The issue would happen several times throughout the day. The truth is that with some of the updates it became less frequent, but it was still happening often enough and filling the syslog with errors.
Doing some reading it seems that Intel wireless drivers have some known issues. It seems that there isn’t much that can be done from that side. It is very likely that even newer drivers and firmware would behave the same.
But ignoring recurring issues is never good practice!
There was a note on the Debian wiki about how to disable driver options for troubleshooting. On the Arch Linux forum, the user mkdy created a modprobe file to do that after experiencing similar freezes.
I tried his workaround and it also works on my Ubuntu system.
Create /etc/modprobe.d/iwl.conf and add the following content:
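As an illustration of the kind of options such a file sets: the four parameters below are real iwlwifi module options commonly disabled in workarounds like this, but the exact combination from that forum post is an assumption here.

```
options iwlwifi 11n_disable=1 swcrypto=1 bt_coex_active=0 power_save=0
```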
After rebooting the system the errors and freezes stopped. It could be that not all of the options are needed. If I have time I will experiment and try to determine if one in particular is the one responsible for my freezes.
Addendum
A few weeks after I applied the workaround, with no more entries in the log, I noticed lag when online. My log had some new entries related to iwlwifi, and to top it off, I realised that the above settings made the connection slower.
I don’t know if this was caused by the last kernel and system updates, it could be. The system is on 5.4.0-47 which is currently the latest release on Ubuntu 20.04.
I ended up testing the different options on /etc/modprobe.d/iwl.conf. This entry seems to remove the syslog iwlwifi entries, the random freezes, the lag and the slow connection.