Ubuntu: apt upgrades failing with “Cannot initiate the connection to ports.ubuntu.com”
|
While doing a distro upgrade with
# do-release-upgrade
I kept getting failures halfway through, stating:
Cannot initiate the connection to ports.ubuntu.com:80
The errors showed several IPv6 addresses that couldn’t be reached. My router supports IPv6, but my ISP doesn’t. I was expecting the router to handle the translation or DNS resolution between the two, but this wasn’t the case.
Disabling IPv6 on the router didn’t do much. I recall that some of the services I run on my Ubuntu server require IPv6 enabled or the OS breaks, so it can’t be disabled for the whole OS.
Luckily you can configure apt to only use IPv4:
# apt-get -o Acquire::ForceIPv4=true update
This will refresh the sources over IPv4, and the next time you run the upgrade it should complete. If not, your problem lies somewhere else.
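To make this persistent, you can drop the option into an apt configuration file instead (the file name below is just a convention I picked):
# echo 'Acquire::ForceIPv4 "true";' > /etc/apt/apt.conf.d/99force-ipv4
With that in place, apt and do-release-upgrade will use IPv4 without needing the flag on every invocation.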
Ubuntu: apt error message “Key is stored in legacy trusted.gpg keyring”
|
After upgrading to Ubuntu 22.04, running apt shows an error message saying “Key is stored in legacy trusted.gpg keyring”:
# apt update
[..]
All packages are up-to-date.
W: https://apt.syncthing.net/dists/syncthing/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
The key needs to be exported from the legacy keyring and then imported back to the current system.
List the keys and find the key ID of the repository that is showing the error. In this case it is Syncthing.
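As a sketch, the listing and export can still be done with the deprecated apt-key tool; the key ID below is the same one deleted at the end of this post:
# apt-key list
[...find the key ID of the repository, e.g. 00655A3E for Syncthing...]
# apt-key export 00655A3E | gpg --dearmor -o /usr/share/keyrings/syncthing.gpg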
Update the source file for the repository adding the exported key.
# vim /etc/apt/sources.list.d/syncthing.list
deb [arch=amd64 signed-by=/usr/share/keyrings/syncthing.gpg] https://apt.syncthing.net/ syncthing stable #Syncthing
Confirm that the error message is no longer showing.
# apt update
[...]
Hit:5 https://apt.syncthing.net syncthing InRelease
[...]
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up-to-date.
Finally, remove the old signature.
# apt-key del 00655A3E
Linux: Booting in single-user mode
|
Sometimes it might be necessary to start in single-user mode to do some administration work, or even reset an existing password.
Normally this can be achieved via the GRUB boot loader.
CentOS / RedHat (with root account enabled)
Switch on your system.
Press Esc until the GRUB menu shows up.
This will bring up the GNU GRUB menu. If the CentOS/RedHat logo/boot messages show up you will need to restart (Ctrl-Alt-Del) and try again.
Select the OS/boot you want to edit. Normally the first line. Press e to edit it.
CentOS Linux (3.10.0-1160.53.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.45.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.42.2.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.41.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.36.2.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-d0401f7cdedb4955a0a262b3e0054323) 7 (Core)
Use the ↑ and ↓ keys to change the selection.
Press 'e' to edit the selected item, or 'c' for command prompt.
You will need to find the entry for the kernel; it normally starts with linux16. Append single (or the number 1) at the end of that line.
Press Ctrl-X to boot, and the system will start in single-user mode.
If required, remount the root filesystem:
# mount -o remount,rw /
[If there are other filesystems you need to mount from fstab:]
# mount --all
CentOS / RedHat (without root account enabled)
It might be that your system didn’t have a root account enabled, in which case the above steps will fail. There is a workaround.
Switch on your system.
Press Esc until the GRUB menu shows up.
This will bring up the GNU GRUB menu. If the CentOS/RedHat logo/boot messages show up you will need to restart (Ctrl-Alt-Del) and try again.
Select the OS/boot you want to edit. Normally the first line. Press e to edit it.
CentOS Linux (3.10.0-1160.53.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.45.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.42.2.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.41.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.36.2.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-d0401f7cdedb4955a0a262b3e0054323) 7 (Core)
Use the ↑ and ↓ keys to change the selection.
Press 'e' to edit the selected item, or 'c' for command prompt.
You will need to find the entry for the kernel; it normally starts with linux16. Append rd.break at the end of that line.
Press Ctrl-X to boot, and the system will start in emergency mode with the root filesystem mounted read-only under /sysroot.
Remount the root filesystem as read/write:
# mount -o remount,rw /sysroot
Switch into the /sysroot chroot jail.
# chroot /sysroot
Reset the password, or do any required tasks.
If SELinux is in enforcing mode, the files changed from this environment will end up with the wrong security contexts. After you change the password, type the following to request a full relabel on the next boot:
# touch /.autorelabel
Restart.
# reboot -f
Ubuntu / Debian
Switch on your system.
Press and hold the Shift key.
In some instances pressing the Esc key several times (instead of holding it) achieves the same result. Just be aware that if you press it too many times it will bring you to the GRUB CLI. You can type normal and you will get to the menu described below.
This will bring up the GNU GRUB menu. If the Ubuntu logo/boot messages show up you will need to restart (Ctrl-Alt-Del) and try again.
Select Advanced Options on the GRUB menu.
GNU GRUB version 2.04
Ubuntu 20.04.4 LTS
*Advanced options for Ubuntu 20.04.4 LTS
History for Ubuntu 20.04.4 LTS
UEFI Firmware Settings
And select the recovery mode option. Normally the latest kernel installed on your system.
GNU GRUB version 2.04
* Ubuntu 20.04.4 LTS, with Linux 5.13.0-37-generic
** Ubuntu 20.04.4 LTS, with Linux 5.13.0-37-generic (recovery mode)
Ubuntu 20.04.4 LTS, with Linux 5.13.0-35-generic
Ubuntu 20.04.4 LTS, with Linux 5.13.0-35-generic (recovery mode)
This will boot the system and show a series of options. Select root.
Recovery Menu (filesystem state: read only)
resume Resume normal boot
clean Try to make free space
dpkg Repair broken packages
fsck Check all file systems
grub Update grub bootloader
network Enable networking
root Drop to root shell prompt
system-summary System summary
<OK>
This message will show. Press Enter.
Press Enter for maintenance
(or press Ctrl-D to continue)
If your / volume is ZFS it will already be mounted read/write. Other filesystems might start in read-only mode. If so, remount:
# mount -o remount,rw /
[If there are other filesystems you need to mount from fstab:]
# mount --all
Changing a user’s password
# passwd <username>
Adding a new user
In the rare event of not having a user, you can add one and give it sudo privileges.
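A minimal sketch (the username is just an example):
# adduser john
# usermod -aG sudo john
The new user can then log in normally and use sudo for administration.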
ZFS ‘Failed to start Mark current ZSYS boot as successful’ fix
|
On Ubuntu 20.04 after installing the NVIDIA driver 510 metapackage the system stopped booting.
It will either hang with a black screen and blinking cursor on the top left or show the following error message:
[FAILED] Failed to start Mark current ZSYS boot as successful.
See 'systemctl status zsys-commit.service' for details.
[ OK ] Stopped User Manager for UID 1000.
Attempting to revert from a snapshot ends up with the same error message. This wasn’t the case on another separate system that had the same upgrade.
The “20.04 zsys-commit.service fails” message is quite interesting, and it seems that the overall cause is a mismatch of user/kernel ZFS components.
These are the steps I followed to fix it. Many thanks to Lockszmith for his research in identifying the issue and finding a fix. He created two posts raising it, links provided here.
[In GRUB]
*Advanced options for Ubuntu 20.04.3 LTS
[Select the first recovery option in the menu]
*Ubuntu 20.04.3 LTS, with Linux 5.xx.x-xx-generic (recovery mode)
[Wait for the system to load the menu and select:]
root
[Press Enter for Maintenance to get the CLI]
Check the reason for the error.
# systemctl status zsys-commit.service
[...]
Feb 17 11:11:24 ab350 systemd[1] zsysctl[4068]: level=error msg="couldn't commit: couldn't promote dataset "rpool/ROOT/ubuntu_733qyk": couldn't promote "rpool/ROOT/ubuntu_733qyk": not a cloned filesystem"
[...]
Attempting to promote it manually fails:
# zfs promote rpool/ROOT/ubuntu_733qyk
cannot promote 'rpool/ROOT/ubuntu_733qyk': not a cloned filesystem
[boot in recovery mode]
# apt reinstall zfs-initramfs zfs-zed zfsutils-linux
# zfs promote rpool/ROOT/ubuntu_733qyk
[reboot in normal mode]
[Configure the 470 drivers]
Reverting to previous ZFS version
The system should now be back to normal, but you might want to revert from the PPA to the mainline ZFS version, despite the bug. After all, this was a hack to promote the filesystem and get it back to work.
# add-apt-repository --remove ppa:jonathonf/zfs
[Check that it has been removed]
$ apt policy
# apt update
[Pray]
# apt remove zfs-initramfs zfs-zed zfsutils-linux
# apt install zfs-initramfs zfs-zed zfsutils-linux
[Check the right version is installed]
# apt list --installed | grep zfs
# apt autoremove
[Pray harder]
# reboot
With that I managed to bring my system back to a working condition, but updating the drivers a second time made it fail again and I couldn’t fix it. A clean install of 20.04.3 doesn’t seem to exhibit this problem. I am not sure of the reason behind it, but there are a few bugs open with Canonical regarding this.
I hope that 22.04 will bring a better ZFS version.
Linux / Unix: Comparing differences between folders
|
I had to check the file changes between two Backintime snapshots recently. You can always use rsync for that, but there is a more straightforward way by using diff.
$ diff -qr directory-1/ directory-2/
-q will display only the files that differ.
-r will make the comparison recursive.
There is a GUI application called Meld that provides similar functionality, but the diff approach will work anywhere and requires memorising fewer flags than rsync.
Raspberry Pi : Configuring a Time Capsule/Backintime server
You are going to have to create users for each of the services/users that will be connecting to the server. You want to keep files and access as isolated as possible: a given user shouldn’t have any visibility or notion of other users’ backups. For Time Machine we are also creating accounts that can’t log into the system, only authenticate.
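As a sketch, a dedicated account could be created like this (the username matches the example below):
# adduser timemachine_john
adduser will prompt for a password, which is the one used to authenticate from the Mac.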
If required, the default shell can be changed with:
# usermod -s /usr/sbin/nologin timemachine_john
Setting up backup user groups
If more than one system is going to be backed up it is advisable to use different accounts for each.
It is possible to isolate users by assigning them individual datasets, but that might create storage silos.
An alternative is to create individual users that belong to the same backup group. The backup group can access the backintime dataset, but not each other’s data.
Create the group.
# addgroup backupusers
Assign main group and secondary group (the secondary group would be the shared one).
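A sketch of assigning the shared group as a secondary group while keeping each user’s own primary group (usernames are examples):
# usermod -aG backupusers timemachine_john
# usermod -aG backupusers backintime_jane
The -aG flags append the group without replacing the user’s existing ones.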
Edit the settings of the netatalk service so that the share can be seen with the name of your choice and works as a Time Capsule server.
# vim /etc/netatalk/AppleVolumes.default
Enter the following:
/backups/timecapsule "pi-capsule" options:tm
Note that you can give the capsule a name with spaces above.
Restart the service:
# systemctl restart netatalk
Check that netatalk has been installed correctly:
# afpd -V
afpd 3.1.12 - Apple Filing Protocol (AFP) daemon of Netatalk
[...]
afpd has been compiled with support for these features:
AFP versions: 2.2 3.0 3.1 3.2 3.3 3.4
CNID backends: dbd last tdb
Zeroconf support: Avahi
TCP wrappers support: Yes
Quota support: Yes
Admin group support: Yes
Valid shell checks: Yes
cracklib support: No
EA support: ad | sys
ACL support: Yes
LDAP support: Yes
D-Bus support: Yes
Spotlight support: Yes
DTrace probes: Yes
afp.conf: /etc/netatalk/afp.conf
extmap.conf: /etc/netatalk/extmap.conf
state directory: /var/lib/netatalk/
afp_signature.conf: /var/lib/netatalk/afp_signature.conf
afp_voluuid.conf: /var/lib/netatalk/afp_voluuid.conf
UAM search path: /usr/lib/netatalk//
Server messages path: /var/lib/netatalk/msg/
Configure netatalk
# vim /etc/nsswitch.conf
Change this line:
hosts: files mdns4_minimal [NOTFOUND=return] dns
to this:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns
Note that if you are running Netatalk 3.1.11 or above it is no longer necessary to create /etc/avahi/services/afpd.service; using this file will cause an error.
If you are running an older version go ahead and create it; otherwise, on Netatalk 3 the Time Machine share is configured in /etc/netatalk/afp.conf (the path reported by afpd -V above):
[Global]
; Global server settings
mimic model = TimeCapsule6,106
[pi-capsule]
path = /backups/timecapsule
time machine = yes
Check configuration and reload if needed:
# systemctl status avahi-daemon
[restart if necessary]
# systemctl restart netatalk
[Make the service automatically start]
# systemctl enable netatalk.service
If you go to your Mac’s Time Machine preferences the new volume will be available and you can start using it.
netatalk troubleshooting
Some notes on things to check from the server side (Time Capsule server):
If you have disabled passwords and are only using keys, you will need to temporarily change the security settings to allow Backintime to exchange keys.
On the remote system/Pi/server:
# vim /etc/ssh/sshd_config
PasswordAuthentication yes
# systemctl restart ssh
Backintime uses SSH, so the user accounts need to be allowed to log in. Therefore the default login shell needs to reflect this.
If not created already, assign the user a home directory. Finally, allow the user to read and write the folder containing the backups.
[Local system]
The client machine running Backintime, from which you want to back up your data.
[Remote system]
The SSH server that has the storage where your backup is going to be stored.
From the account on the local system that will run Backintime (either your user or root, depending on how you run it), SSH into the remote system. In my case, a Raspberry Pi.
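The usual key exchange, sketched with example names, would be:
$ ssh-keygen -t rsa
$ ssh-copy-id backintime_jane@pi-capsule
$ ssh backintime_jane@pi-capsule
The final login confirms the key works and records the remote host key, which Backintime needs before it can run unattended.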
Auto-remove
Older than 10 years
If free space is less than 50GiB
If free inodes is less than 2%
Smart remove
Run in background on remote Host
Keep last
14 days (7 days)
21 days (14 days)
8 weeks (6 weeks)
36 months (14 months)
Don't remove named snapshots
Options
Enable notifications
Backup replaced files on restore
Continue on errors (keep incomplete snapshots)
Log level: Changes & Errors
After the first run has completed you can check which is the best performing cipher from the CLI.
# backintime benchmark-cipher --profile-id 2
After a few rounds, aes192-ctr came out as the best performing cipher for me.
Secure SSH
If you changed the SSH configuration at the beginning, after setting everything up, remember to secure SSH again on the server/remote system.
# vim /etc/ssh/sshd_config
PasswordAuthentication no
# systemctl restart ssh
Restoring restrictions to backup users
The login account is required for Backintime to be able to run rsync. It is worth doing a bit more research on how to harden/limit these accounts.
Troubleshooting
Some example issues and troubleshooting steps you can apply.
Time Capsule can’t be reached / firewall settings
Make sure the server is allowing AFP connections from the Mac client.
# ufw allow proto tcp from CLIENT_IP to PI_CAPSULE_IP port 548
Time Capsule – Configuring Time Machine backups via the network on a macOS VM
The destination needs to be configured manually.
Mount the AFP/Time Capsule mount via the Finder.
In the CLI configure the destination:
# tmutil setdestination -a /Volumes/pi-capsule
The backups can then be started from the GUI.
You can get information about the current configured destinations via the CLI.
# tmutil destinationinfo
====================================
Name : pi-capsule
Kind : Network
Mount Point : /Volumes/pi-capsule
ID : 7B648734-9BFC-417F-B5A1-F31B8AD52F4B
Time Capsule – Checking backup status
# tmutil currentphase
# tmutil status
ZFS stalling on a Raspberry Pi
Check the recordsize property. Reduce it to the default 128 KiB.
Reduce ARC size to reduce the amount of memory consumed/reserved for ZFS.
Understanding rsync logs
The logs indicate the type of change rsync is seeing. A reference is available here:
XYcstpoguax path/to/file
|||||||||||
||||||||||╰- x: The extended attribute information changed
|||||||||╰-- a: The ACL information changed
||||||||╰--- u: The u slot is reserved for future use
|||||||╰---- g: Group is different
||||||╰----- o: Owner is different
|||||╰------ p: Permission are different
||||╰------- t: Modification time is different
|||╰-------- s: Size is different
||╰--------- c: Different checksum (for regular files), or
|| changed value (for symlinks, devices, and special files)
|╰---------- the file type:
| f: for a file,
| d: for a directory,
| L: for a symlink,
| D: for a device,
| S: for a special file (e.g. named sockets and fifos)
╰----------- the type of update being done:
<: file is being transferred to the remote host (sent)
>: file is being transferred to the local host (received)
c: local change/creation for the item, such as:
- the creation of a directory
- the changing of a symlink,
- etc.
h: the item is a hard link to another item (requires
--hard-links).
.: the item is not being updated (though it might have
attributes that are being modified)
*: means that the rest of the itemized-output area contains
a message (e.g. "deleting")
If you are new to ZFS, I would advise doing a bit of research first to understand the fundamentals. Jim Salter’s articles on storage and ZFS are highly recommended.
The examples below are to create a pool from a single disk, with separate datasets used for network backups.
In some examples, I might use device names for simplicity, but you are advised to use disk IDs or serials.
Installing ZFS
Ubuntu makes it very easy.
# apt install zfsutils-linux
ZFS Cockpit module
If Cockpit is installed, it is possible to install a module for ZFS. This module is sadly no longer in development. If you know of alternatives, please share!
By default, the configuration runs the following snapshots and retention policies:
Hourly: 24 hours
Daily: 31 days
Weekly: eight weeks
Monthly: 12 months
I configured the following snapshot retention policy:
Hourly: 48 hours
Daily: 14 days
Weekly: four weeks
Monthly: three months
Hourly
# vim /etc/cron.hourly/zfs-auto-snapshot
#!/bin/sh
# Only call zfs-auto-snapshot if it's available
which zfs-auto-snapshot > /dev/null || exit 0
exec zfs-auto-snapshot --quiet --syslog --label=hourly --keep=48 //
Daily
# vim /etc/cron.daily/zfs-auto-snapshot
#!/bin/sh
# Only call zfs-auto-snapshot if it's available
which zfs-auto-snapshot > /dev/null || exit 0
exec zfs-auto-snapshot --quiet --syslog --label=daily --keep=14 //
Weekly
# vim /etc/cron.weekly/zfs-auto-snapshot
#!/bin/sh
# Only call zfs-auto-snapshot if it's available
which zfs-auto-snapshot > /dev/null || exit 0
exec zfs-auto-snapshot --quiet --syslog --label=weekly --keep=4 //
Monthly
# vim /etc/cron.monthly/zfs-auto-snapshot
#!/bin/sh
# Only call zfs-auto-snapshot if it's available
which zfs-auto-snapshot > /dev/null || exit 0
exec zfs-auto-snapshot --quiet --syslog --label=monthly --keep=3 //
Setting up the ZFS pool
This post has several use cases and examples, and I recommend it highly if you want further details on different commands and ways to configure your pools.
In my example there is no resilience, as there is only one attached disk. For me, this is acceptable because I have an additional local backup besides this filesystem.
It is preferable to have a second backup (ideally off-site) than to rely on a single one, regardless of any added resilience you might configure.
I create a single pool with an external drive. Read below for an explanation of the different command flags.
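As a sketch, a create command combining the properties discussed below (the pool name and the disk ID from the hdparm post further down are examples; adjust to your drive):
# zpool create -o ashift=12 \
    -O compression=lz4 -O recordsize=128k \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
    -O atime=off -O normalization=formD \
    -O sync=standard -O canmount=off \
    backup_pool /dev/disk/by-id/usb-TOSHIBA_External_USB_3.0_20150612015531-0:0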
Of the above values, the most important one by far is ashift.
The ashift property sets the sector size the vdev assumes, as a power of two (ashift=12 means 4096 bytes). It can’t be changed once set, and if it isn’t correct, it will cause massive performance issues with the filesystem.
recordsize is another performance-impacting property, especially on the Raspberry Pi. Smaller sizes can improve performance for random access, while higher values provide better performance and compression when reading sequential data. The problem on the Raspberry Pi was that with a value of 1M the system load increased, eventually stalling all filesystem activity until the system was restarted.
The default value (128k) has performed without any noticeable issue.
Compression
lz4 compression is going to yield an optimal performance/compression ratio. It can even make the storage perform faster than with no compression at all.
ZFS 0.8 doesn’t give many choices regarding compression but bear in mind that you can change the algorithm on a live system.
gzip will impact performance but yields a higher compression rate. It might be worth checking the performance with different compression formats on the Pi 4. With older Raspberry Pi models, the limitation will be the USB / network in most cases.
For reference, on the same amount of data these were the compression ratios I obtained:
All in all, the performance impact and memory consumption didn’t make switching from lz4 worthwhile.
Permissions
acltype=posixacl xattr=sa
These enable POSIX ACLs and store Linux extended attributes in the inodes rather than in separate hidden files.
Access times
Disabling atime (atime=off) is recommended to reduce the number of IOPS.
relatime offers a good compromise between the atime and noatime behaviours.
Normalisation
The normalization property indicates whether a file system should perform a Unicode normalisation of file names whenever two file names are compared and which normalisation algorithm should be used.
formD is the default set by Canonical when setting up a pool. It seems to be a good choice if sharing the volume via NFS with macOS systems and avoiding files not being displayed due to names using non-ASCII characters.
Additional properties
The pool is configured with the canmount property off so that it can’t be mounted.
This is because I will be creating separate datasets, one for Time Capsule backups, and another two for Backintime, and I don’t want them to mix.
All datasets will share the same pool, but I don’t want the pool root to be mounted. Only datasets will mount.
dnodesize is set to auto, as per several recommendations when datasets are using the xattr=sa property.
sync is set as standard. There is a performance hit for writes, but disabling it comes at the expense of data consistency if there is a power cut or similar.
A brief test showed a lower system load when sync=standard than with sync=disabled. Also, with standard there were fewer spikes. It is likely that the performance is lower, but it certainly causes the system to suffer less.
Encryption
I am not too keen on encrypting physically secure volumes because, when doing data recovery, you are adding an additional layer that might hamper and slow things down.
For reference, I am writing down an example of encryption options using an external key for a volume. This might not be appropriate for your particular scenario. Research alternatives if needed.
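A sketch using a raw key file, on OpenZFS 0.8 or later (paths and dataset names are examples):
# dd if=/dev/urandom of=/root/backup_pool.key bs=32 count=1
# zfs create -o encryption=aes-256-gcm -o keyformat=raw \
    -o keylocation=file:///root/backup_pool.key backup_pool/encrypted
The key file must be present at the keylocation when importing the pool, so the key can be loaded with zfs load-key.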
Automatic trimming of the pool is essential for SSDs:
# zpool set autotrim=on backup_pool
Disabling automatic mount for the pool. (This applies only to the root of the pool, the datasets can still be set to be mountable regardless of this setting.)
# zfs set canmount=off backup_pool
Setting up the ZFS datasets
I will create three separate datasets with assigned quotas for each.
[Create datasets]
# zfs create backup_pool/backintime_tuxedo
# zfs create backup_pool/backintime_ab350
# zfs create backup_pool/timecapsule
[Set mountpoints]
# zfs set mountpoint=/backups/backintime_tuxedo backup_pool/backintime_tuxedo
# zfs set mountpoint=/backups/backintime_ab350 backup_pool/backintime_ab350
# zfs set mountpoint=/backups/timecapsule backup_pool/timecapsule
[Set quotas]
# zfs set quota=2T backup_pool/backintime_tuxedo
# zfs set quota=2T backup_pool/backintime_ab350
# zfs set quota=2T backup_pool/timecapsule
Changing compression on a dataset
The default lz4 compression is recommended. gzip consumes a lot of CPU and makes data transfers slower, impacting backup restoration.
If you still want to change the compression for a given dataset:
# zfs set compression=gzip-7 backup_pool/timecapsule
A comparison of compression and decompression using different algorithms with OpenZFS:
You can attach an additional disk/partition and make the pool redundant as a mirror. Unfortunately, an existing pool can’t be converted to RAID-Z, RAID-Z2 or RAID-Z3 this way.
# zpool attach backup_pool /dev/sda7 /dev/sdb7
Renaming disks in pools
By default, Ubuntu uses device identifiers for the disks. This should not be an issue, but in some cases, adding or connecting drives might change the device name order and degrade one or more pools.
This is why creating a pool with disk IDs or serials is recommended. You can still fix this if you created your pool using device names.
With the pool unmounted, export it, and reimport pointing to the right path:
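Assuming the pool from the earlier examples:
# zpool export backup_pool
# zpool import -d /dev/disk/by-id backup_pool
zpool status should now list the drives by ID instead of device name.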
ZFS should be running on a system with at least 4GiB of RAM. If you plan to use it on a Raspberry Pi (or any other system with limited resources), reduce the ARC size.
In this case, I am limiting it to 3GiB. It is a change that can be done live:
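3 GiB is 3221225472 bytes. The live change, plus a persistent one (the modprobe file name is the commonly used convention), would be:
# echo 3221225472 > /sys/module/zfs/parameters/zfs_arc_max
# echo 'options zfs zfs_arc_max=3221225472' > /etc/modprobe.d/zfs.conf
# update-initramfs -u
The first line applies immediately; the other two make the limit survive reboots.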
Linux / Ubuntu / hdparm: Identifying drive features and setting sleep patterns
|
Preparing the storage
Install hdparm and smartmontools
Install hdparm and the SMART monitoring tools.
# apt install hdparm smartmontools
Identify the right hard drive
Make sure you identify the correct drive, as some of the commands will destroy data. If you don’t understand the commands, then check them first. You have been warned.
Identify the block size
Knowing the block size of the device is important. It helps optimise writes and, in the case of SSDs or flash drives, avoids write amplification and unnecessary wear.
Pay attention to the physical/optimal size. This is the one that matters.
SSDs will hide the true size of the pages and blocks. Even the same drive models might be built with different components, so getting it right is tricky.
Use the drive’s sector physical size to match the ZFS ashift (block size).
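A quick way to check, using the example drive from below:
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda
PHY-SEC is the physical sector size and LOG-SEC the logical one. A 4096-byte physical sector corresponds to ashift=12 (2^12 = 4096).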
Retrieve drive IDs
When setting up ZFS pools or using disk tools it is best to avoid device names, as they can easily change order. Using the drive ID or serial ensures the correct drive is selected no matter which port it is plugged into or in which order the drives are detected.
This matters with any disk accessing utility if you have several drives, or will be inserting external drives regularly.
$ ls -l /dev/disk/by-id/
[...]
lrwxrwxrwx 1 root root 9 Mar 9 13:16 usb-TOSHIBA_External_USB_3.0_20150612015531-0:0 -> ../../sda
[...]
You can also extract model and serial numbers with hdparm.
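For example, filtering on the fields shown in the full output below:
# hdparm -I /dev/sda | grep -E 'Model Number|Serial Number'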
Even better, depending on the use of the drive, and if there is a plan to add mirror drives later, is to partition the drive manually to ensure there is enough space should a slightly smaller drive model be added later. Although I believe ZFS already does this and rounds partitions down using mebibytes.
Test for damaged sectors
An additional, optional step is to test the hard drive for damaged sectors. This kind of test tends to be destructive, so it is best done before configuring the pools.
badblocks is a useful tool to achieve this.
It is installed by default, but if not you can install it manually.
# apt install e2fsprogs
A destructive test can be done with:
# badblocks -wsv -b 4096 /dev/sda
If you want to run the test while preserving the disk data you can run it in a non-destructive way. This will take longer.
# badblocks -nsv -b 4096 /dev/sda
ZFS has built-in checks and protection so in most cases you can skip this step.
Setting hard drive sleep patterns
Above I explained that using disk IDs is always a better idea. For simplicity, I will be using device names in several examples below, but I still advise using IDs or serials.
Check if the disk supports sleep
Check if the drive supports standby.
# hdparm -y /dev/sda
If supported, the output will be:
/dev/sda:
issuing standby command
Any other output might indicate that the drive doesn’t support sleep, or that a different tool/setting might be required.
Next, check if the drive supports write cache:
# hdparm -I /dev/sda | grep -i 'Write cache'
The expected output is:
* Write cache
The * indicates that the feature is supported.
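Once confirmed, a spindown timeout can be set with -S. The value is an encoded timeout: 1 to 240 are multiples of 5 seconds, 241 to 251 are multiples of 30 minutes. For example:
# hdparm -S 120 /dev/sda
This requests standby after 10 minutes (120 × 5 s) of inactivity. Note that this setting is not always persistent across reboots; check your drive’s behaviour.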
An example of a complete hdparm output from a drive is shown below for reference. Different drives, with different features, will show different output, or even none at all.
# hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: TOSHIBA MD04ACA500
Serial Number: 55OBK0SPFPHC
Firmware Revision: FP2A
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 9767541168
Logical Sector size: 512 bytes
Physical Sector size: 4096 bytes
Logical Sector-0 offset: 0 bytes
device size with M = 1024*1024: 4769307 MBytes
device size with M = 1000*1000: 5000981 MBytes (5000 GB)
cache/buffer size = unknown
Form Factor: 3.5 inch
Nominal Media Rotation Rate: 7200
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Advanced power management level: 128
DMA: sdma0 sdma1 sdma2 mdma0 mdma1 *mdma2 udma0 udma1 udma2 udma3 udma4 udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* NOP cmd
* DOWNLOAD_MICROCODE
* Advanced Power Management feature set
SET_MAX security extension
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
unknown 119[7]
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
* Native Command Queueing (NCQ)
* Host-initiated interface power management
* Phy event counters
* Host automatic Partial to Slumber transitions
* Device automatic Partial to Slumber transitions
* READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
DMA Setup Auto-Activate optimization
Device-initiated interface power management
* Software settings preservation
* SMART Command Transport (SCT) feature set
* SCT Write Same (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
* reserved 69[3]
Security:
Master password revision code = 65534
supported
not enabled
not locked
not frozen
not expired: security count
supported: enhanced erase
more than 508min for SECURITY ERASE UNIT. more than 508min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 500003964bc01970
NAA : 5
IEEE OUI : 000039
Unique ID : 64bc01970
Checksum: correct
An example of a complete smartctl output from a drive is shown below also for reference. As mentioned earlier, different systems will generate different outputs.
# smartctl --all /dev/sda
smartctl 7.1 2019-12-30 r5022 [aarch64-linux-5.4.0-1029-raspi] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Toshiba 3.5" MD04ACA... Enterprise HDD
Device Model: TOSHIBA MD04ACA500
Serial Number: 55OBK0SPFPHC
LU WWN Device Id: 5 000039 64bc01970
Firmware Version: FP2A
User Capacity: 5,000,981,078,016 bytes [5.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Mon Mar 8 15:02:10 2021 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.
General SMART Values:
Offline data collection status: (0x80) Offline data collection activity
was never started.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 120) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 533) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 9003
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 9222
5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 6418
10 Spin_Retry_Count 0x0033 253 100 030 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 9212
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 482
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 104
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 9225
194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 37 (Min/Max 15/72)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0
220 Disk_Shift 0x0002 100 100 000 Old_age Always - 0
222 Loaded_Hours 0x0032 085 085 000 Old_age Always - 6393
223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0
224 Load_Friction 0x0022 100 100 000 Old_age Always - 0
226 Load-in_Time 0x0026 100 100 000 Old_age Always - 214
240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 5617 -
# 2 Short offline Completed without error 00% 4702 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
More information about hdparm and smartctl is available on their respective project sites and man pages.
Depending on the drive manufacturer and model you might need to query the settings with different flags. Check the man page.
[Get/set the Western Digital Green Drive's "idle3" timeout value.]
# hdparm -J /dev/sd[a-e]
/dev/sda:
wdidle3 = 300 secs (or 13.8 secs for older drives)
/dev/sdb:
wdidle3 = 8.0 secs
/dev/sdc:
wdidle3 = 300 secs (or 13.8 secs for older drives)
/dev/sdd:
wdidle3 = 300 secs (or 13.8 secs for older drives)
/dev/sde:
wdidle3 = 300 secs (or 13.8 secs for older drives)
From the man page:
A setting of 30 seconds is recommended for Linux use. Permitted values are from 8 to 12 seconds, and from 30 to 300 seconds in 30-second increments. Specify a value of zero (0) to disable the WD idle3 timer completely (NOT RECOMMENDED!).
There are flags for temperature (-H for Hitachi drives), acoustic management (-M), measuring cache performance (-T), and others. Go on, read that man page. 🙂
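For instance, measuring the drive’s cached read performance only takes one flag (shown as a sketch; run it a few times, as results vary per system and no sample output is included here):

```
# hdparm -T /dev/sda
```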
The -S flag sets the standby/spindown timeout for the drive. Basically, how long the drive will wait with no disk activity before turning off the motor.
Value        Description
0            Disables the feature.
1 to 240     Multiples of five seconds (a value of 120 means 10 minutes).
241 to 251   Intervals of thirty minutes (a value of 242 means 1 hour).
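The encoding isn’t obvious, so here is a small sketch (the function name is mine, not part of hdparm) that converts an -S value into a timeout in seconds:

```shell
#!/bin/sh
# Sketch: convert an hdparm -S value into a standby timeout in seconds.
# Only the two ranges from the table above are handled; higher values
# (e.g. 252-255) have special meanings that are ignored here.
spindown_seconds() {
    v=$1
    if [ "$v" -ge 1 ] && [ "$v" -le 240 ]; then
        echo $(( v * 5 ))            # multiples of five seconds
    elif [ "$v" -ge 241 ] && [ "$v" -le 251 ]; then
        echo $(( (v - 240) * 1800 )) # units of thirty minutes
    else
        echo 0                       # 0 disables the timeout
    fi
}

spindown_seconds 120   # prints 600  (10 minutes)
spindown_seconds 242   # prints 3600 (1 hour)
```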
Note that hdparm might wake the drive up when it is queried. smartctl can query the drive without waking it.
# APM setting (-B)
apm = 127
# APM setting while on battery (-B)
apm_battery = 127
# on/off drive's write caching feature (-W)
write_cache = on
# Standby (spindown) timeout for drive (-S)
spindown_time = 120
# Western Digital (WD) Green Drive's "idle3" timeout value. (-J)
wdidle3 = 300
hdparm.conf method
Edit the configuration file:
# vim /etc/hdparm.conf
And insert an entry for each drive. Select only settings/features/values that are supported by that drive, otherwise the rest of the options won’t be applied. Test, test, test!
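A minimal sketch of such an entry (the device path and values below are placeholders; /dev/disk/by-id/ paths are preferable because they survive device reordering, and you should keep only the options your drive supports):

```
/dev/disk/by-id/ata-TOSHIBA_MD04ACA500_55OBK0SPFPHC {
    apm = 127
    spindown_time = 120
    write_cache = on
}
```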
In my case, the method above works. I couldn’t get the udev method below to work on my system, possibly because of the OS, but I am leaving it here for reference in case it helps.
# vim /etc/udev/rules.d/69-disk.rules
Create an entry for each drive editing the serial number and hdparm parameters. Make sure that only supported flags are added or it will fail.
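As a sketch of what such a rule might look like (the serial number and hdparm options are illustrative placeholders, not a tested rule):

```
# /etc/udev/rules.d/69-disk.rules
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="55OBK0SPFPHC", \
  RUN+="/usr/sbin/hdparm -B 127 -S 120 /dev/%k"
```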
chrony is the default service on newer OS releases (Red Hat 7.2 and later, any recent Ubuntu release).
chrony has several advantages over ntpd:
Quicker synchronisation.
Better response to changes in clock frequency (very useful for VMs).
Periodic polling of time servers isn’t required.
It lacks some features like broadcast, multicast, and Autokey packet authentication. When these are required, or for systems that will be switched on continuously, ntpd is a better choice.
A more comprehensive comparison is available in the chrony project’s documentation.
chrony is installed by default on many distros. If you don’t already have it, install it.
Edit the configuration file.
# vi /etc/chrony.conf
Make the following changes.
# Edit the time sources of your choice
# iburst helps make the initial sync faster
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
# Helps stabilise the initial sync across restarts
driftfile /var/lib/chrony/drift
# Allows serving time even if above sources aren't available
local stratum 8
# Opens the NTP port to respond to client's requests
# Edit it with your client's subnet
allow 192.168.1.0/24
# Enables support for the settime command in chronyc
manual
Check the firewall configuration in the last section.
Chrony client configuration
server [IP/HOSTNAME OF ABOVE SERVER] iburst
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
Checking chrony
[Check if the service is running]
$ systemctl status chrony
[Display the system's clock performance]
$ chronyc tracking
[Display time sources]
$ chronyc sources
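If the clock is far off when the service starts, chrony can be told to step it immediately instead of slewing it gradually (requires root):

```
# chronyc makestep
```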
If the system is going to be isolated, with no internet connection, or any other time source available you can use its internal clock.
Edit /etc/ntp.conf.
# To point ntpd to sync with its own system clock
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
This will work in a network “island”, but the time won’t be accurate. It is best to sync from other time sources (next section).
Syncing to other NTP servers
# Edit the time sources of your choice
# iburst helps make the initial sync faster
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
# Insert your own subnet address
# nomodify - Disallows clients from configuring the server
# notrap - Declines the mode 6 control message trap service for these hosts
restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap
# Indicates where to keep logs
logfile /var/log/ntp.log
You might also want to modify the rule to limit access to only certain subnets or clients.
You can add lines to the chrony and ntpd configurations to allow IPv6 traffic; you would also need additional firewall rules. IPv4 is shown here for simplicity (and because I don’t have the requirement). 🙂
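If the server runs ufw, opening the NTP port to the LAN looks like this (a sketch; adjust the subnet to match the allow/restrict lines in your configuration):

```
# ufw allow proto udp from 192.168.1.0/24 to any port 123
```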
Raspberry Pi: Installing, hardening and optimising Ubuntu 20.04 Server
|
I have been trying to document the process of configuring a Raspberry Pi as a Time Machine Capsule, but the article became far too long: it covered too much ground and was hard to read.
I then decided to break the stages into more manageable steps. This has the advantage of allowing the common stages, like setting up the OS, to be shared between different projects.
Therefore, this is that first entry. Some others will follow about how to build different things from this first base image.
Selecting the OS
The 64-bit beta release of Raspberry Pi OS I tried didn’t let ZFS install easily. Ubuntu has the advantage of being a like-for-like experience regardless of the platform, so it is my preferred choice. Any experience you gain with it will be easily transferable.
The Raspberry Pi model will determine the supported versions of the OS.
Model            32-bit Ubuntu   64-bit Ubuntu
Raspberry Pi 2   Supported       Not supported
Raspberry Pi 3   Supported       Recommended
Raspberry Pi 4   Supported       Recommended

Supported Ubuntu versions.
The Raspberry Pi 3 sees limited benefit from the 64-bit image because of its limited RAM. For the same reason it won’t support ZFS: the Pi will restart/reset when ZFS volumes are accessed.
If you are going to use a GUI, you should choose a Raspberry Pi 4 with at least 4GB of RAM.
The image can be installed directly on a micro SD card.
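Writing the image from another Linux system looks roughly like this (the image filename and target device are placeholders; triple-check the device with lsblk first, because dd will overwrite it without asking):

```
$ lsblk
# xzcat ubuntu-20.04-server-arm64+raspi.img.xz | dd of=/dev/sdX bs=4M status=progress conv=fsync
```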
It is possible to boot from a USB stick, which is preferable for several reasons. They are cheaper, easier to access from another system, and simple to replace.
First, enable USB boot on your Pi.
Raspberry Pi 1: Not supported.
Raspberry Pi 2 and 3B: Supported. On Raspberry Pi OS, run echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt and reboot.
Raspberry Pi 3B+: Supported out of the box.
Raspberry Pi 4: Supported. Run rpi-eeprom-config --edit, set BOOT_ORDER=0xf41, and reboot.

Raspberry Pi models with USB boot support.
You might have to boot from an SD card at least once to configure USB boot. Once enabled, it remains activated.
Additional information about the different boot modes for the Raspberry Pi
For the image to be bootable, you need to make some changes. I extracted the steps from this Raspberry Pi forum post. You might find it easier to apply changes if you mount it on another system.
There are two options to make the changes:
Mount the USB stick on another system, and then issue the commands on the USB device. This other system can be the Raspberry Pi itself booting from the SD card, and accessing the USB device.
Or make the changes on the SD card, and then copy the SD card image to the USB device.
Apply the following changes.
1) On the /boot of the USB device, uncompress vmlinuz.
$ cd /media/*/system-boot/
$ zcat vmlinuz > vmlinux
2) Update the config.txt file. The [pi4] section is used in this example, but the same approach has been tested on a Pi 3; just use the section matching your Pi model.
$ vim config.txt
The dtoverlay line might be optional for headless systems, but if you have the time and inclination, there is some documentation regarding Raspberry Pi’s device tree parameters.
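The relevant section ends up looking roughly like this (taken as an assumption from the forum guide the steps come from; verify it against your own config.txt):

```
[pi4]
kernel=vmlinux
initramfs initrd.img followkernel
```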
3) Create a script in the boot partition called auto_decompress_kernel with the following content:
#!/bin/bash -e
## Set Variables
BTPATH=/boot/firmware
CKPATH=$BTPATH/vmlinuz
DKPATH=$BTPATH/vmlinux
## Check if decompression needs to be done.
if [ -e $BTPATH/check.md5 ]; then
if md5sum --status --ignore-missing -c $BTPATH/check.md5; then
echo -e "\e[32mFiles have not changed, Decompression not needed\e[0m"
exit 0
else
echo -e "\e[31mHash failed, kernel will be decompressed\e[0m"
fi
fi
# Backup the old decompressed kernel
mv $DKPATH $DKPATH.bak
if [ ! $? == 0 ]; then
echo -e "\e[31mDECOMPRESSED KERNEL BACKUP FAILED!\e[0m"
exit 1
else
echo -e "\e[32mDecompressed kernel backup was successful\e[0m"
fi
#Decompress the new kernel
echo "Decompressing kernel: "$CKPATH".............."
zcat $CKPATH > $DKPATH
if [ ! $? == 0 ]; then
echo -e "\e[31mKERNEL FAILED TO DECOMPRESS!\e[0m"
exit 1
else
echo -e "\e[32mKernel Decompressed Successfully\e[0m"
fi
# Hash the new kernel for checking
md5sum $CKPATH $DKPATH > $BTPATH/check.md5
if [ ! $? == 0 ]; then
echo -e "\e[31mMD5 GENERATION FAILED!\e[0m"
else
echo -e "\e[32mMD5 generated Successfully\e[0m"
fi
# Exit
exit 0
Normally you would need to mark the script as executable, but unless you modify the partition from its FAT32 default, there is no executable flag to set. So leave it as it is.
If you can mount the root filesystem in the system you are using to edit the files, you can go ahead with steps 4 and 5. Otherwise, you should be able to boot now and manually do these steps after your first boot.
4) Create a script in the /etc/apt/apt.conf.d/ directory and call it 999_decompress_rpi_kernel
# cd /media/*/writable/etc/apt/apt.conf.d/
# vi 999_decompress_rpi_kernel
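The hook only needs a single directive so apt re-runs the decompression script after package operations (a sketch based on the same forum guide; the path assumes the auto_decompress_kernel location from step 3):

```
DPkg::Post-Invoke {"/bin/bash /boot/firmware/auto_decompress_kernel"; };
```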
You can save yourself some time and configure the network at this stage.
In my case, I have a static DHCP lease associated with the Pi MAC address, but if you don’t, you can configure the network with a static IP address by editing the network-config file in /boot.
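A static configuration in network-config would look roughly like this (cloud-init/netplan syntax; all addresses below are placeholders for your own LAN):

```
version: 2
ethernets:
  eth0:
    dhcp4: false
    addresses: [192.168.1.10/24]
    gateway4: 192.168.1.1
    nameservers:
      addresses: [192.168.1.1, 1.1.1.1]
```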
Check the service status and confirm that the time source is correct.
# systemctl status systemd-timesyncd.service
Finally, check that the time zone is correct.
# timedatectl status
Local time: Sun 2021-08-29 23:24:49 BST
Universal time: Sun 2021-08-29 22:24:49 UTC
RTC time: n/a
Time zone: Europe/London (BST, +0100)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Customising the MOTD
You can print the login-screen MOTD manually with the following command.
$ for i in /etc/update-motd.d/* ; do if [ "$i" != "/etc/update-motd.d/98-fsck-at-reboot" ]; then $i; fi; done
To get system information (including temperature):
$ /etc/update-motd.d/50-landscape-sysinfo
You can edit, add and reorder scripts in /etc/update-motd.d/.
Configuring SSH
SSH will be enabled by default. Test access with the newly created account.
By default, only the password is required to access the server. We will add the requirement of an SSH key on top of the password, and also limit access to authorised IP addresses only.
If you haven’t generated a public and private key pair on your system (the one used to log into the Pi), you will need to do it (explained below).
A brief note on encryption: elliptic curve cryptography (ECC) generates smaller keys and encrypts faster than non-ECC algorithms. A small ECC key provides a level of security equivalent to a much larger RSA key:
ECC key size   RSA equivalent
160 bits       1024 bits
224 bits       2048 bits
256 bits       3072 bits
384 bits       7680 bits
512 bits       15360 bits

ECC uses smaller keys with higher equivalent security.
You can use either ECDSA or ED25519 keys. ED25519 isn’t as universally implemented yet due to being quite new, so some clients might not support it, but it is the fastest and most secure one.
For both key types, it is recommended to use the biggest key size: 521 bits for ECDSA (note that 521 isn’t a typo). ED25519 keys have a fixed length of 256 bits.
When issuing ssh-keygen, use the -o option. This forces the use of the new OpenSSH format (instead of PEM) when saving your private key. It increases resistance to a known brute-force attack. It breaks compatibility with OpenSSH versions older than 6.5, but this version of Ubuntu runs version 8.2, so this isn’t an issue.
Note that you use the -i flag with your private key, and ssh-copy-id will send the public key for storage on the remote host.
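Putting the key generation together (a sketch; the filename and comment are placeholders, and -N '' is used here only so the example runs unattended; in practice omit it and set a passphrase when prompted):

```shell
# Generate an ED25519 key pair; -a 100 increases the KDF rounds protecting
# the private key. ED25519 keys are always saved in the new OpenSSH format.
ssh-keygen -t ed25519 -a 100 -N '' -f ./id_ed25519_pi -C "pi-access"

# For a 521-bit ECDSA key in the new format instead:
#   ssh-keygen -o -t ecdsa -b 521 -f ./id_ecdsa_pi
```

The public half (id_ed25519_pi.pub) is what ssh-copy-id sends to the server; the private half stays on your machine.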
SSH can be configured on the server side to allow only password logins, only key logins, or to require both.
# vim /etc/ssh/sshd_config
“PasswordAuthentication no” accepts only the key. Note that “PasswordAuthentication yes” on its own allows logging in with either a password or a key; to require both the key and the password, also add “AuthenticationMethods publickey,password”.
We also disable the option to allow root to login via SSH. The root account is disabled on the image by default, but ensure SSH has been configured correctly anyway.
PermitRootLogin no
PasswordAuthentication yes
# systemctl restart sshd
SSH from another terminal with the new user account, and ensure that the access is working.
If it works, delete the old ubuntu account.
# userdel -r ubuntu
Activate and configure the firewall
Set default rules (deny all incoming, allow all outgoing).
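The commands for that, plus allowing SSH before enabling the firewall so you don’t lock yourself out (this assumes SSH on the default port; “limit” also rate-limits brute-force attempts):

```
# ufw default deny incoming
# ufw default allow outgoing
# ufw limit ssh
# ufw enable
```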
Mosh might require some ports to be opened in the firewall.
The range of ports goes from 60001 to 60999, but if you are expecting few connections, you can make the range smaller.
# ufw allow proto udp from <SOURCE> to <SERVER> port 60001:60010
# ufw limit 60001:60010/udp
Install Cockpit
# apt install -y cockpit
# ufw allow proto tcp from <SOURCE> to <SERVER> port 9090
# ufw limit 9090/tcp
The system can now be reached in a web browser on port 9090:
https://<hostname/IP>:9090
Other customisation
Argon Fan HAT configuration
If you have an Argon fan HAT, you can configure it as follows.
$ curl https://download.argon40.com/argonfanhat.sh -o argonfanhat.sh
$ bash argonfanhat.sh
[...]
Use argonone-config to configure fan
Use argonone-uninstall to uninstall
I have configured mine with the following triggers.
30 ºC -> 0%
60 ºC -> 10%
65 ºC -> 25%
70 ºC -> 55%
75 ºC -> 100%
Aliases
On Ubuntu, and most distros, there will be an entry in ~/.bashrc that will look like this:
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
This entry can be added manually if not present. This allows all of the aliases to be grouped in ~/.bash_aliases.
$ vim ~/.bash_aliases
# Show free RAM
alias showfreeram="free -m | sed -n '2 p' | awk '{print \$4}'"
# Release and free up RAM
# alias freeram='free -m && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free -m'
# Show temperature
alias temp='cat /sys/class/thermal/thermal_zone0/temp | head -c -4 && echo " C"'
# Show ZFS datasets compress ratios
alias ratio='sudo zfs get all | grep " compressratio "'
This gives you a base image with a decent level of security. I will likely add a follow-up on configuring Fail2Ban to improve it even further.