The latest posts in full-text for feed readers.
At work I had to block some very annoying spammers from POSTing to the contact form on the website. I ssh'ed into the Ubuntu server and blocked their IP addresses with ufw:
$ ufw insert 1 deny from 203.17.245.205
Unfortunately, this did not work. The spammers were still able to access the nginx webserver in the Docker container:
203.17.245.205 - - [17/Apr/2025:11:58:28 +0200] "POST /contact HTTP/1.1" 200 24111 "https://example.org/contact" "Mozilla/5.0 (Macintosh; Intel Mac OS X 12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 OPR/89.0.4447.51"
It turns out that Docker heavily uses iptables for its container networking: packets destined for a container are routed through the FORWARD chain, so the rules that ufw generates in the default INPUT chain of the filter table never get a chance to match them.
The iptables section in the Docker documentation tells us that rules need to be put into the DOCKER-USER chain:
$ iptables -I DOCKER-USER 1 -s 203.17.245.205 -j DROP
In the end, the chain looked like this:
$ iptables -L DOCKER-USER --numeric --line-numbers
Chain DOCKER-USER (1 references)
num target prot opt source destination
1 DROP 0 -- 103.106.241.170 0.0.0.0/0
2 DROP 0 -- 113.176.64.56 0.0.0.0/0
3 DROP 0 -- 176.102.128.140 0.0.0.0/0
4 DROP 0 -- 193.163.116.88 0.0.0.0/0
5 DROP 0 -- 103.255.9.53 0.0.0.0/0
6 DROP 0 -- 116.212.106.162 0.0.0.0/0
7 DROP 0 -- 5.254.26.39 0.0.0.0/0
8 DROP 0 -- 5.254.26.37 0.0.0.0/0
9 DROP 0 -- 203.17.245.205 0.0.0.0/0
10 DROP 0 -- 172.111.204.6 0.0.0.0/0
11 DROP 0 -- 94.43.48.194 0.0.0.0/0
12 DROP 0 -- 188.169.38.71 0.0.0.0/0
13 DROP 0 -- 181.204.9.178 0.0.0.0/0
14 DROP 0 -- 103.246.84.78 0.0.0.0/0
15 DROP 0 -- 122.175.12.83 0.0.0.0/0
16 DROP 0 -- 92.255.57.64 0.0.0.0/0
17 DROP 0 -- 185.208.8.200 0.0.0.0/0
18 RETURN 0 -- 0.0.0.0/0 0.0.0.0/0
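One caveat: iptables rules do not survive a reboot, so they have to be re-applied (e.g. via the iptables-persistent package). Since addresses keep getting added one by one, a small helper sketch like the following saves typing - the function names are made up, and it assumes the DOCKER-USER chain exists:

```shell
#!/bin/sh
# Sketch: validate an address before inserting a DROP rule into
# Docker's DOCKER-USER chain. is_ipv4/block_ip are hypothetical names.
is_ipv4() {
    printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

block_ip() {
    if ! is_ipv4 "$1"; then
        echo "not an IPv4 address: $1" >&2
        return 1
    fi
    iptables -I DOCKER-USER 1 -s "$1" -j DROP
}

# Usage: block_ip 203.17.245.205
```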
Published on 2025-04-18 in linux
Automatic printer configuration under Linux (Debian 12) does not really work for me, because - supposedly - the auto-detected IPv6 addresses change over time, and then one day printing simply stops working.
Instead, I manually configured my Canon LBP722C network laser printer as follows:
The IPv4 address is statically configured (via the printer's awful web interface) and does not change.
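For reference, a manual CUPS queue with a fixed address can also be created from the shell - this is only a sketch: the queue name and IP address are made up, and the driverless "-m everywhere" option assumes the printer supports IPP Everywhere:

```shell
# Sketch: create a CUPS queue for a printer at a static IPv4 address.
# "lbp722c" and 192.168.1.50 are placeholder values.
lpadmin -p lbp722c -E -v ipp://192.168.1.50/ipp/print -m everywhere
lpoptions -d lbp722c   # make it the default destination
```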
Published on 2025-03-18 in linux
I wanted to copy some movies to an external disk in preparation for our summer vacation, and attached an external USB3 disk to our Dreambox satellite receiver.
Nothing happened; the disk did not get automatically mounted. Manually mounting also failed. dmesg told me:
usb 10-2: new SuperSpeed USB device number 2 using xhci_hcd
scsi2 : usb-storage 10-2:1.0
scsi 2:0:0:0: Direct-Access TOSHIBA External USB 3.0 0 PQ: 0 ANSI: 6
sd 2:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 2:0:0:0: [sdb] Write Protect is off
sd 2:0:0:0: [sdb] Mode Sense: 43 00 00 00
sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1
sd 2:0:0:0: [sdb] Attached SCSI disk
EXT4-fs (sdb1): couldn't mount RDWR because of unsupported optional features (400)
I had formatted the disk on a Debian unstable system (kernel 4.19), which enabled the metadata checksum feature that the kernel of DreamOS 2.6 (3.4-4.0-dm7080) does not support. I had to disable that feature on the disk with my laptop:
$ e2fsck -f /dev/sdb1
$ tune2fs -O ^metadata_csum /dev/sdb1
Source: Couldn't mount RDWR because of unsupported optional features (400)
The next mount try also resulted in an error:
usb 10-2: new SuperSpeed USB device number 3 using xhci_hcd
scsi3 : usb-storage 10-2:1.0
scsi 3:0:0:0: Direct-Access TOSHIBA External USB 3.0 0 PQ: 0 ANSI: 6
sd 3:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00
sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1
sd 3:0:0:0: [sdb] Attached SCSI disk
JBD2: Unrecognised features on journal
EXT4-fs (sdb1): error loading journal
So kernel 4.19 also enables new journal features.
$ tune2fs -l /dev/sdb1
tune2fs 1.43-WIP (18-May-2015)
Filesystem volume name:   videos
Last mounted on:          /media/cweiske/videos
Filesystem UUID:          7e07e565-99c7-4ea9-b2a3-1eb02ba23572
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              61054976
Block count:              244190208
Reserved block count:     12209510
Free blocks:              210555405
Free inodes:              61042713
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      965
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Fri Feb 15 20:05:55 2019
Last mount time:          Sat Jul 13 10:51:53 2019
Last write time:          Sat Jul 13 11:02:12 2019
Mount count:              0
Maximum mount count:      -1
Last checked:             Sat Jul 13 11:02:12 2019
Check interval:           0 (<none>)
Lifetime writes:          121 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      83196fde-051e-43c5-967c-7aa3935c571e
Journal backup:           inode blocks
Instead of finding out which journal feature I had to disable, I disabled the whole journal:
$ tune2fs -O ^has_journal /dev/sdb1
I could now finally mount the disk on the Dreambox.
Source: Debian User Forums: Boot failing: No init found
Tom told me how to reactivate the journal:
$ tune2fs -O has_journal /dev/sdb1
Published on 2019-08-06 in dreambox, linux
My Purism Librem 13 v3 laptop, bought and set up in 2018, has a 500 GB SSD inside. During installation I had chosen a 256 MiB boot partition and a 30 GiB root partition. Both proved to be the wrong choice, given that a kernel initramfs is ~80 MiB now and the root partition was constantly filled to ~90%.
The rest of the disk was the home partition, which also began to feel crowded - I archive games of the PlayJam GameStick, have kernel sources lying around, and extract many firmware images and software packages for reverse engineering purposes.
It was time to get a larger harddisk.
I had solved the /boot size problem with the help of a forum post and configured initramfs to only include needed kernel modules:
# files were too large
# https://debianforum.de/forum/viewtopic.php?p=1367229
MODULES=dep
After running update-initramfs -u, the initrd.img-6.x.x-amd64 files are only ~26 MiB instead of the previous ~80 MiB.
I wanted to go and buy an M.2 SSD, but when searching for the answer to "M.2 with NVMe or SATA connector?", I saw that the laptop shipped with a 2.5" SATA SSD.
I wondered a bit why it wasn't an M.2 disk, but went to a shop and bought a 2 TB SATA SSD. Only when I opened the laptop case to exchange the drives did I see that the Librem 13 v3 indeed has an M.2 slot :(
I could have had nearly 10x the speed of the ~530 MiB/s SATA disk - had I only looked at the whole image in the wiki instead of just focusing on the SATA disk.
I downloaded the Debian netinstall image and copied it to a USB flash drive: sudo cp debian-12.7.0-amd64-netinst.iso /dev/sdb.
Then I put the new disk into the laptop, plugged in the flash drive, powered it on, and waited until I saw the boot menu.
Now I connected the old disk to the laptop with a new SATA-to-USB3 adapter and started the graphical rescue application.
The rescue mode worked, but required me to go back to the main menu several times to re-read the partition table, which I did not manage to do via the shell.
Copying the old disk to the new one was easily done with a standard cp command:
$ cp /dev/sdc /dev/sda
sda was the new disk, sdb the flash drive and sdc the old disk I connected via USB.
My system is encrypted, and so I had to move and resize several things.
| Number | Size    | Device    | Usage          | Task                                   |
|--------|---------|-----------|----------------|----------------------------------------|
| 1      | 256 MiB | /dev/sda1 | boot           | Increase to 2 GiB                      |
| 2      | 487 GiB | /dev/sda2 | extended       | Move by 1.75 GiB; increase to 1951 GiB |
| 5      | 487 GiB | /dev/sda5 | encrypted data | Increase to 1951 GiB                   |
To be able to increase the boot partition, I had to make space by moving the extended partition 1.75 GiB further towards the end of the disk.
Moving the second partition was possible with sfdisk (2048 - 256 = 1792):
$ echo '+1792M,' | sfdisk --move-data /dev/sda 2
It had a speed of ~200MiB/s.
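The arithmetic behind the +1792M, input can be spelled out as a tiny shell sketch: the extended partition has to move just far enough that the boot partition can grow from 256 MiB to 2048 MiB.

```shell
# boot partition grows from 256 MiB to 2048 MiB (2 GiB),
# so the extended partition must move by the difference
new_boot_mib=2048
old_boot_mib=256
move_mib=$((new_boot_mib - old_boot_mib))
echo "+${move_mib}M,"   # prints the sfdisk input: +1792M,
```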
After that I increased sizes of boot and extended partitions with parted:
$ parted
(parted) resizepart 1 2048MB
...
(parted) resizepart 2 2000GB
...
(parted) resizepart 5 2000GB
Then I had to leave the shell and use the rescue menu's "read harddisk" to reload the partition table.
The actual data on the laptop are encrypted, and so I first had to increase the /dev/mapper/crypted_sda5 partition, which I again did with parted.
The encrypted partition contains an LVM "volume group", which can be inspected with vgdisplay. I think it already had the correct size.
Inside the volume group are LVM "logical volumes", the root, swap and home partitions (lvdisplay). I simply used lvresize to change their sizes - 200GiB for root, and the rest of the 2 TB to the home partition.
For some reason the lvresize option to also resize the filesystem (-r/--resizefs) did not work, so I used resize2fs on each of the volumes afterwards, which required me to run e2fsck manually first.
After all the partitions were resized, I ran grub-install /dev/sda. Unfortunately, booting did not work; I only saw a UEFI tool called coreinfo 0.1 with CPU and RAM information.
This was solved by running update-grub, which installed the boot manager into the UEFI instead of the MBR.
Published on 2024-11-03 in linux
I had a bunch of .opus music files obtained with yt-dlp and painstakingly filled with metadata: title, artist, lyrics, cover image.
The speaker I wanted to play them on does not support the Opus codec, so I had to convert the files to .mp3:
mkdir mp3
for i in *.opus; do ffmpeg -i "$i" "mp3/$i.mp3"; done
Unfortunately, the metadata were not automatically copied to the mp3 files.
Fortunately, it is possible to instruct ffmpeg to copy all of them:
ffmpeg -i in.opus -map_metadata 0:s:a:0 out.mp3
Source: Preserve metadata when converting .opus audio with embedded covers
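Combining the loop with the metadata option gives something like the following sketch; the ${i%.opus} expansion is an addition of mine that strips the old extension, so the output is not named file.opus.mp3:

```shell
#!/bin/sh
# Convert all .opus files into mp3/, copying the audio stream metadata.
mkdir -p mp3
for i in *.opus; do
    [ -e "$i" ] || continue            # glob matched nothing
    ffmpeg -i "$i" -map_metadata 0:s:a:0 "mp3/${i%.opus}.mp3"
done
```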
Published on 2024-08-13 in linux
I got an error when installing gdb on my Linux laptop running Debian unstable:
$ LC_ALL=C apt install gdb
[...]
dpkg: error processing package libc6:amd64 (--configure):
package libc6:amd64 2.38-14 cannot be configured because libc6:i386 is at a different version (2.37-19)
Errors were encountered while processing:
libc6:amd64
Apt suggested running a command to fix this automatically, but that failed with the exact same error:
$ LC_ALL=C apt --fix-broken install -y
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
arch-test cgroupfs-mount criu debootstrap libmodule-find-perl libmodule-scandeps-perl libnet1 needrestart runc systemd-dev tini
Use 'apt autoremove' to remove them.
Upgrading:
libc-bin libc6:i386 libnss-systemd libpam-systemd libsystemd0 libsystemd0:i386 libudev-dev libudev1 libudev1:i386 systemd udev
Installing dependencies:
linux-sysctl-defaults systemd-cryptsetup
Summary:
Upgrading: 11, Installing: 2, Removing: 0, Not Upgrading: 1615
10 not fully installed or removed.
Download size: 0 B / 10.4 MB
Space needed: 1141 kB / 5175 MB available
apt-listchanges: Reading changelogs...
Preconfiguring packages ...
dpkg: error processing package libc6:amd64 (--configure):
package libc6:amd64 2.38-14 cannot be configured because libc6:i386 is at a different version (2.37-19)
Errors were encountered while processing:
libc6:amd64
Error: Timeout was reached
needrestart is being skipped since dpkg has failed
Error: Sub-process /usr/bin/dpkg returned an error code (1)
The problem is that libc6:amd64 is installed in a different version than its i386 counterpart (even though libc6:i386 is available in the same version, as I confirmed via apt show libc6:i386). Since the preconfiguration already fails, apt never reaches the stage where it could update the i386 version to match the amd64 one.
The solution was to circumvent all the configuration automatisms that apt uses and manually install the correct libc6:i386 version:
$ apt download libc6:i386
$ dpkg -i libc6_2.38-14_i386.deb
$ LC_ALL=C apt --fix-broken install -y
After that apt behaved properly again.
Published on 2024-07-14 in linux
I wanted to manually sync specific music albums from my media server to my laptop and wondered which graphical tool I could use (except the file manager). Then I used the application I already use to solve git merge conflicts: Meld.
Two things were necessary for Meld to be useful for my task:
Published on 2024-06-08 in linux
It's 2024 and I want to sign a contract and send it via e-mail to a company.
The company does not accept electronically signed PDF files (most don't), and even if they did - I don't have an electronic signature I could use with PDF files, nor do I know how to create one.
The only option I have is to sign the contract by hand: Print it out, write my signature with a pen, scan the signed paper and send the scan via e-mail to the company.
A variation of this option that takes less time and paper is to add an image of my signature to the PDF. But how can I do that? Let's look at the software on my Debian 12 laptop:
The PDF viewer shipped with the MATE desktop environment tells me that it can't open PDF files.
The default GNOME PDF viewer has not been able to add images to PDFs for 9 years (new ticket).
The original feature request has been open since 2013 - 11 years.
There seems to be a trick with stamps, but I failed because the KDE Qt interface looks totally broken under MATE:
It inserts watermarks because I have no license. Buying the license would mean giving money to a Russian company, which is something I won't do while Russia's war against Ukraine is happening.
Adding a .png or .jpg image crashes the application.
I could import the multi-page PDF, but then I failed to find out how to switch to the second page :(
The text in the imported PDF does not look as it should.
In the end I opened the PDF in Firefox, which contains a PDF editor.
It's sad that I have to use a browser for something that a native PDF tool should be able to do.
Published on 2024-06-03 in bigsuck, linux
I wanted to prevent clients from seeing my home server's list of NFS shares, so I disabled its nfs-mountd.service, because it is only needed for NFSv3:
- rpc.mountd
- This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the nfs-mountd service replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
And indeed, no share list was visible anymore:
$ showmount -e dojo
clnt_create: RPC: Program not registered
But this had consequences, although I tried to use NFSv4 only:
$ cat /etc/fstab | grep media-dojo
dojo:/data/media /mnt/media-dojo nfs noauto,user,nolock,nfsvers=4
$ mount -v /mnt/media-dojo/
mount.nfs: timeout set for Thu Dec 21 21:20:46 2023
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=fdc3:e153::3,clientaddr=fdc3:e153::dcbb:9cea:9873:1f10'
mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=192.168.3.3,clientaddr=192.168.3.5'
mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=fdc3:e153::3,clientaddr=fdc3:e153::dcbb:9cea:9873:1f10'
mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=192.168.3.3,clientaddr=192.168.3.5'
mount.nfs: mount(2): Connection refused
I could not mount the shares from my Debian experimental (trixie) laptop anymore! After re-enabling nfs-mountd on the home server I could mount again:
$ mount -v /mnt/media-dojo/
mount.nfs: timeout set for Thu Dec 21 21:20:54 2023
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=fdc3:e153::3,clientaddr=fdc3:e153::dcbb:9cea:9873:1f10'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=192.168.3.3,clientaddr=192.168.3.5'
$
A comment on serverfault.com explains it:
According to rpc.mountd(1),
"The rpc.mountd daemon implements the server side of the NFS MOUNT protocol, [...] It also responds to requests from the Linux kernel to authenticate clients and provides details of access permissions."
... so it's not needed on an NFSv4 client, but an NFSv4 server still needs it, even though there's no direct communication between clients and rpc.mountd.
Sam Morris, 2023-12-16
So I have to keep nfs-mountd running on the server, but I can deny access from the outside via /etc/hosts.deny:
mountd: ALL
Listing mounts is not possible anymore, but mounting is:
$ showmount -e dojo
rpc mount export: RPC: Authentication error; why = Failed (unspecified error)
Published on 2023-12-21 in linux, network
My new home server mounts some NFS shares from the NAS. When electricity is restored after an outage, both the NAS and the home server boot up at the same time. The new home server is much faster than my rusty NAS, so services depending on NFS-mounted data would start too early, possibly removing data from their databases (e.g. Gerbera or paperless-ngx).
To not run into that problem I added a systemd service that waits for the NAS to be available before trying to mount the NFS shares.
First, a service that waits for the NAS ("disa", short for DiskStation) to answer ping:
[Unit]
Description=Blocks until it successfully pings disa 192.168.3.96
After=network-online.target
[Service]
ExecStartPre=/usr/bin/bash -c "while ! ping -c1 192.168.3.96; do sleep 1; done"
ExecStart=/usr/bin/sh -c "echo good to go"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
This file goes to /etc/systemd/system/wait-for-disa.service and has to be enabled with:
$ systemctl daemon-reload
$ systemctl enable wait-for-disa
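One thing to keep in mind: the while loop in ExecStartPre blocks forever if the NAS never comes back. A variant with a bounded number of attempts could look like this - a sketch with a made-up wait_for helper; the probe command is a parameter, so anything besides ping can be used:

```shell
#!/bin/sh
# Sketch: retry a probe command up to N times, one second apart.
# Returns 0 as soon as the probe succeeds, 1 when the attempts run out.
wait_for() {
    probe=$1
    tries=$2
    while [ "$tries" -gt 0 ]; do
        if $probe; then
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Example: wait_for "ping -c1 192.168.3.96" 60
```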
Now the NFS mounts are configured to wait for that service to become available:
disa:/volume2/media /mnt/media-disa nfs x-systemd.after=wait-for-disa.service,timeo=50
A systemctl daemon-reload and all is set.
Published on 2023-12-21 in linux, network