Christians Tagebuch: server

The latest posts in full-text for feed readers.


Integrating the Shelly Plug S into Volkszähler

For years I have been recording the total power consumption of our house directly from the electricity meter, using an optical reading head. The lowest readings are around 150 watts; consumption never drops below that. I wanted to find out where all that power goes.

From Christmas I still had a couple of Shelly Plug S plug-in adapters lying around. These can not only be switched on and off via HTTP commands, they also measure the consumed energy and offer an HTTP-based API to read the consumption: /meter/0.
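
A quick check with curl shows where the counter comes from (output abbreviated and only an example; the exact fields may vary with the firmware version - the interesting one is "total"):

$ curl -s http://shellyplug1/meter/0
{"power":23.71, "is_valid":true, "timestamp":..., "counters":[...], "total":2342}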

For data recording and visualisation I use Volkszähler. vzlogger is started every 10 minutes via cron, reads the data from the electricity meter and writes it into the "middleware", which puts the data into the database. The data is displayed via the Volkszähler web interface in the browser:

Volkszähler web interface in the browser

Creating a channel

Before the logger can be configured, a new channel has to be created in the web interface. So: "Kanal hinzufügen" (add channel), then "Kanal erstellen" (create channel), and apply the following settings:

Typ (type)
El. Energie (Zählerstände) - electrical energy, meter readings
Auflösung (resolution)
60000 (the normal electricity meter has kWh resolution, but the Shelly Plug S measures watt-minutes; 60,000 watt-minutes are one kWh)
Stil (style)
steps (the correct setting for meters)

The rest can be chosen as you like.

Once the channel has been created you get a UUID, which you should note down or copy into a text file - you will need it for the logger configuration.

Shelly Plug S: fetching the data

vzlogger has to fetch the data from the Shelly. For that I added a new device in the "meters" section of the vzlogger.conf file.
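
A sketch of what such a meters entry can look like (the UUID and the middleware URL are placeholders, and the jq expression is my reconstruction of the command described below):

{
    "enabled": true,
    "protocol": "exec",
    "command": "curl -s http://shellyplug1/meter/0 | jq -r '\"total \" + (.total|tostring)'",
    "format": "$i $v",
    "channels": [
        {
            "api": "volkszaehler",
            "middleware": "http://localhost/middleware.php",
            "uuid": "01234567-89ab-cdef-0123-456789abcdef",
            "identifier": "total"
        }
    ]
}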

The protocol is "exec", because an external command is to be executed. One important detail: vzlogger must not run as root, because exec then (deliberately) does not work.

The command fetches the current consumption data from the Shelly with curl and reformats it with jq so that a line like "total 2342" comes out (when the meter reading is 2342).
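
Run by hand, it looks like this (the reading 2342 is just an example):

$ curl -s http://shellyplug1/meter/0 | jq -r '"total " + (.total|tostring)'
total 2342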

The format says that first comes the identifier ("total"), then a space, then the meter reading ("2342"). I had tried it without an identifier, with the format set to just "$v", but then vzlogger did not notice that any data was available:

[Mar 26 13:00:02][exec] MeterExec::read: Closing process 'curl -s http://shellyplug1/meter/0 | jq -r .total''
[Mar 26 13:00:02][mtr3] Stopped reading.
[Mar 26 13:00:02][chn4] ==> number of tuples: 0
[Mar 26 13:00:02][chn4] JSON request body is null. Nothing to send now.

With these settings the logger now dutifully writes the consumption values of the five Shelly Plug S devices into the database every 10 minutes.

Shelly PM Mini Gen3

The command for a Shelly PM Mini Gen3 is different, because it uses the newer RPC API (API docs).
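
A sketch of what it can look like, assuming the PM1.GetStatus RPC method and its aenergy.total field (which holds the counter in watt-hours); the hostname is a placeholder:

curl -s http://shellypmmini1/rpc/PM1.GetStatus?id=0 | jq -r '"total " + ((.aenergy.total + 12412868)|tostring)'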

I added the + 12412868 because I switched from a Shelly Plug S to a Shelly PM Mini Gen3 and the absolute value in the database had to be continued. Without this adjustment there would be a massive jump of -154 kW when switching from the old to the new meter.

Additionally, the channel's resolution in the frontend has to be set to 1000 (per kWh), because the PM Mini returns the total consumption in watt-hours, not in watt-minutes.

For that reason it is also better to create a new channel when switching from a Shelly Plug to a Shelly PM Mini, because the base unit has changed.

Remainder

The data was now being logged and nicely displayed in the browser. What I was still missing was a display of how much of the consumed power is not being measured.

For that I created a "Verbrauchssensor (virt.)" (virtual consumption sensor). The input fields take the UUIDs of the channels, and the "Regel" (rule) field takes the calculation:

val(in1)-val(in2)-val(in3)

Here the value of input 1 is used as the base, and the values of inputs 2 and 3 are subtracted from it. This way I can see how much "unknown" power is still being consumed in the house.

I defined two more virtual sensors, each adding up four readings (because the web interface only offers fields for four inputs). In a third virtual sensor I then finally subtracted those from the total consumption.
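
The rules for the three virtual sensors then look roughly like this (assuming the rule syntax allows addition the same way it allows subtraction; in1 to in4 are the input fields of the respective sensor):

sub-sensor 1:  val(in1)+val(in2)+val(in3)+val(in4)
sub-sensor 2:  val(in1)+val(in2)+val(in3)+val(in4)
remainder:     val(in1)-val(in2)-val(in3)    (in1 = total consumption, in2/in3 = the two sub-sensors)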

Remainder display (purple)

Published on 2022-03-30


Hibiscus with Debian 11: service "database" not found

After updating my home server from Debian 10 to 11, the Hibiscus Payment Server did not start anymore:

java.rmi.RemoteException: Der Service "database" wurde nicht gefunden

It was not a problem with Hibiscus though, but with MySQL, or rather MariaDB. After the update, the database was only listening on the loopback interface, no longer on all network interfaces.

After I commented out the bind-address line in /etc/mysql/mariadb.conf.d/50-server.cnf, Hibiscus worked again.
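
For reference, the relevant part of the Debian default configuration looks roughly like this; with the bind-address line commented out, MariaDB listens on all interfaces (restart the service afterwards):

/etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
#bind-address = 127.0.0.1

$ systemctl restart mariadb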

Published on 2021-10-16


Running Lightmeter 1.0 on Debian

I recently stumbled over Lightmeter, a tool that is supposed to help running your own mail server by monitoring delivery problems. It was published in version 1.0, so I thought I'd give it a try.

Installation

Since Debian has no package for it yet, and I did not want to install the whole Go toolchain on the server, I downloaded the Control Center v1.0.1 binary for amd64 from the releases page.

I wanted to run the Lightmeter Control Center under its own user and group, with all data in /var/lib/lightmeter:

$ mkdir /var/lib/lightmeter
$ useradd --system --home-dir /var/lib/lightmeter --groups adm --shell /usr/sbin/nologin lightmeter
$ chown lightmeter:lightmeter /var/lib/lightmeter/

Then I moved the downloaded binary into /usr/local/bin/lightmeter and made it executable. A systemd service file was also needed:

/etc/systemd/system/lightmeter.service
[Unit]
Description=Postfix mail monitoring
Documentation=https://gitlab.com/lightmeter/controlcenter
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/lightmeter -listen 127.0.0.1:10020 -watch_dir /var/log -workspace /var/lib/lightmeter
TimeoutStopSec=0
Restart=always
User=lightmeter
Group=lightmeter

WorkingDirectory=/var/lib/lightmeter
ProtectHome=yes
ReadOnlyDirectories=/
ReadWriteDirectories=-/var/lib/lightmeter

[Install]
WantedBy=multi-user.target

After writing that file, systemd needs to re-read its configuration, and then the service can be started:

$ systemctl daemon-reload
$ systemctl start lightmeter
$ systemctl status lightmeter
$ journalctl -fu lightmeter

Lightmeter is now running but only listening on localhost, so it is not accessible from the outside; Apache does the SSL termination and proxies requests to it:

<VirtualHost *:443>
    ServerName lightmeter.example.org
    DocumentRoot /var/www/dummy

    Header set Referrer-Policy same-origin

    ProxyPass / http://localhost:10020/
    ProxyPassReverse / http://localhost:10020/

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/lightmeter.example.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/lightmeter.example.org/privkey.pem

    CustomLog /var/log/apache2/system/lightmeter-access.log combined
    ErrorLog /var/log/apache2/system/lightmeter-error.log
</VirtualHost>
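
If the required Apache modules are not active yet, they need to be enabled first (Debian module names):

$ a2enmod ssl headers proxy proxy_http
$ systemctl restart apache2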

Findings

Update 2021-06: All the problems listed here have been fixed.

Lightmeter itself is in its very early stages, more an alpha version than 1.0. Here is a list of some of the most serious problems I found with it:

I was pretty disappointed by the software. Version 1.0.1 only checks whether you are on an e-mail blacklist and shows you which domains rejected e-mails. That's all.

The people planning and creating it seem to be more interested in layout, new technologies and monitoring than in... functionality. As soon as you start using it, you send tracking data ("telemetry" in newspeak) to their servers. They also decided to spend time on re-implementing their frontend instead of fixing those serious bugs and making the software as a whole useful.

At their current development speed, I think that Lightmeter will need two years of work until it becomes actually useful.

Published on 2020-12-06


Debian 10: /tmp/ is empty

I upgraded my home server to Debian 10 (buster) so that I could get the latest Gerbera UPnP server version.

After the update, my Munin monitoring website did not show the room temperatures anymore:

Munin plugin for room temperatures

Using munin-run gave proper results, but the logs said something different:

Error output from usb-wde1_humidity:
 /etc/munin/plugins/usb-wde1_humidity: 124: cannot open /tmp/usb-wde1-last: No such file
Service 'usb-wde1_humidity' exited with status 2/0.

The file in /tmp was there, had the proper permissions and contained data.

While checking the version (2.0.49-1) I saw that the buster-backports repository had an update to 2.0.66-1~bpo10+1, so I installed that. Trying munin-run again gave me different results:

$ munin-run usb-wde1_humidity
/etc/munin/plugins/usb-wde1_humidity: 124: /etc/munin/plugins/usb-wde1_humidity: cannot open /tmp/usb-wde1-last: No such file
Log line does not begin with $1
Warning: the execution of 'munin-run' via 'systemd-run' returned an error. This may either be caused by a problem with the plugin to be executed or a failure of the 'systemd-run' wrapper. Details of the latter can be found via 'journalctl'.

Hm. Could it have to do with systemd? Searching the web for "debian 10" "ls /tmp" directory breaking changes did not yield any results, but debian tmp systemd did.

It turned out that systemd has a service option PrivateTmp=true that mounts an empty directory as /tmp for the service, so that no other program can look into that application's temp files. This also means that the application itself cannot see the normal /tmp :(
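
A quick way to check whether a unit runs with a private /tmp (shown here for munin-node; I'm assuming the plugins are executed under that unit), plus a drop-in that would disable it:

$ systemctl show munin-node -p PrivateTmp
PrivateTmp=yes

/etc/systemd/system/munin-node.service.d/tmp.conf
[Service]
PrivateTmp=false

$ systemctl daemon-reload && systemctl restart munin-node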

In the end I modified my usb-wde1-tools to log into /var/spool/usb-wde1/, and let the munin plugin read from there.

Published on 2021-03-15


OUYA usage statistics 2020

At the end of 2019 my OUYA replacement server went online, after Razer (owner of the OUYA trademark) shut down the official server a couple of months earlier.

2020 saw two OUYA game jams that resulted in some very nice games being added to the catalog.

I wondered if the OUYAs are still being used, and checked the server logs as well as the logs from Devin Rich's server.

Since my server does not store the telemetry data that OUYAs send automatically, I had to fall back to "normal" URL statistics.

OUYA bootups

When an OUYA boots up, it sends its public encryption key to the server once, before doing any purchase-related API calls. This URL can thus be used to see how often OUYAs are started and the "Play" or "Discover" sections are opened.

OUYAs were booted 10,398 times in 2020!

OUYA bootups per month in 2020

Initial logins

Whenever a factory-reset OUYA is set up, the user needs to log in. After login, the console ID is associated with the user account. That URL is not called again unless someone manually logs out. I use this to count how many OUYAs have been freshly set up.

733 OUYAs were rescued in 2020! If our OUYA servers did not exist, those consoles would be electronic waste, because you cannot use them without logging in to a server first.

Initial OUYA logins per month in 2020

Also on: Twitter.

Published on 2021-01-03


SMTP error: 550 5.7.1 IP listed on RBL (SBLCSS)

My wife replied to an e-mail and got the following error back:

<user@example.org>: host smtpin.rzone.de[2a01:238:20a:202:50f0::1097]
said: 550 5.7.1 IP listed on RBL (3.0.0.127
https://www.spamhaus.org/sbl/query/SBLCSS) (in reply to RCPT TO command)

I immediately checked if my server was on blacklists. MXToolbox's blacklist check did not show any errors, and my own is-my-server-sending-too-many-mails check had not triggered.

The error message indicated that IPv6 was used, and Spamhaus's CSS FAQ says that they only list /64 subnets, and not individual IPv6 addresses - the database would simply get too large.

Now our server hoster Hosteurope only gives us a /128 block, which is a single IPv6 address. This means that another server hosted there sent spam mails, got blacklisted and now my server is punished, because we share the same /64 block :(

I contacted Hosteurope's support, but they told me that IPv6 is still in beta for them and that they could not give me my own /64 range. They also told me I should send my mail via IPv4 :(

I then configured postfix to prefer IPv4 over IPv6 (man 5 postconf):

smtp_address_preference = ipv4
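
The same can be done with postconf directly, followed by a reload:

$ postconf -e 'smtp_address_preference = ipv4'
$ systemctl reload postfix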

Published on 2020-11-21


Using Noxon iRadio with HTTPS podcasts

A couple of days ago someone contacted me by e-mail and told me about his problem with a Noxon iRadio Cube: He used it to listen to his favorite podcast, and that podcast had fully switched to HTTPS and stopped providing HTTP feeds and files. Since the iRadio does not support HTTPS, he could not listen to the podcast anymore.

He asked if my noxon gateway software could help there, and I said yes. After helping him set up the software on his Linux machine, we found a bug that prevented the radio from fetching the encryption token and from using the server at all. This was quickly fixed.

Registering a podcast on the noxon-gateway software is as easy as creating a text file with the feed URL inside. The gateway will then fetch the RSS, parse it and convert it to the menu structure that the iRadios understand.
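
A minimal example of such a registration (the directory and file name here are made up - where the text files go depends on how your noxon-gateway installation is set up):

$ echo 'https://example.org/feeds/podcast.xml' > sources/my-podcast.txt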

The last problem was HTTPS: I wrote a small proxy script that you can pass any URL to, and it will fetch and stream the response to the client. If you have noxon-gateway running, simply set the following in your data/config.php file:

$enablePodcastProxy = true;

The radio now fetches the MP3 file from the gateway's HTTP proxy URL and is happy.

My conversation partner declared success; he could now listen to his podcast again on the old iRadio Cube.

Published on 2020-01-08


Postfix: Immediate disconnect from mout.web.de

My wife expected some e-mail, but it did not arrive in her mailbox. Looking at the mail.info log file, I saw:

postfix/smtpd[14884]: connect from mout.web.de[212.227.17.11]
postfix/smtpd[14884]: disconnect from mout.web.de[212.227.17.11] ehlo=2 starttls=1 quit=1 commands=4

Nothing more. A connect and an immediate disconnect from the web.de mail server.

Sending a mail from a test account worked fine; it arrived in my inbox. Then I saw that the missing e-mail should have had an attachment, so I added a 12 MiB file to a mail in the web.de freemail interface and sent it. The log showed the same behavior: connect and disconnect.

The web.de interface only showed that the mail had been sent, nothing more.

postconf told me that the maximum e-mail size (message_size_limit) was 10240000 bytes, roughly 10 MiB. I increased message_size_limit to 100 MiB (104857600 bytes), reloaded the Postfix daemon, and the mail came through.
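
For reference, checking and raising the limit works like this:

$ postconf message_size_limit
message_size_limit = 10240000
$ postconf -e 'message_size_limit = 104857600'
$ systemctl reload postfix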

Published on 2019-05-20


showmount: clnt_create: RPC: Program not registered

I'm using a Synology DiskStation (DSM 4.2) as NFS server. When configuring NFS mounts on my new laptop, I got an error:

$ mount --verbose /mnt/media-diskstation
mount.nfs: timeout set for Mon Apr 29 17:15:25 2019
mount.nfs: trying text-based options 'nolock,vers=4.2,addr=192.168.3.96,clientaddr=192.168.3.5'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting diskstation:/volume2/media

I tried to debug this by checking which mounts are available on the NFS server:

$ showmount -e diskstation
clnt_create: RPC: Program not registered

The same command worked with a second NFS server. Let's look at the RPC information:

$ rpcinfo -p diskstation
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100021    1   udp  52435  nlockmgr
    100021    3   udp  52435  nlockmgr
    100021    4   udp  52435  nlockmgr
    100021    1   tcp  38051  nlockmgr
    100021    3   tcp  38051  nlockmgr
    100021    4   tcp  38051  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100024    1   udp  40426  status
    100024    1   tcp  36552  status

A forum post suggested that mountd was missing from that list - and indeed, the second (working) NFS server did have mountd in its rpcinfo list.

mountd is explained by The Linux Documentation Project:

The rpc.mountd daemon in some way or other keeps track of which directories have been mounted by what hosts.

This information can be displayed using the showmount program.

So under normal circumstances mountd should be there. Let's have a look at the logs on the DiskStation in /var/log/messages:

Apr 29 16:33:10 mountd[8781]: Could not bind name to socket: Address already in use

So while I was reconfiguring the access rights in the DiskStation web interface, the DiskStation had failed to restart mountd.

Solution

I rebooted my NAS, and after two minutes of booting everything was as it should be:

$ rpcinfo -p diskstation
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100021    1   udp  33885  nlockmgr
    100021    3   udp  33885  nlockmgr
    100021    4   udp  33885  nlockmgr
    100021    1   tcp  42001  nlockmgr
    100021    3   tcp  42001  nlockmgr
    100021    4   tcp  42001  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100024    1   udp  41427  status
    100024    1   tcp  43319  status

Mounting and showmount also work again:

$ mount --verbose /mnt/media-diskstation
mount.nfs: timeout set for Mon Apr 29 17:47:29 2019
mount.nfs: trying text-based options 'nolock,vers=4,addr=192.168.3.96,clientaddr=192.168.3.5'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'nolock,addr=192.168.3.96'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.3.96 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.3.96 prog 100005 vers 3 prot UDP port 892

$ showmount -e diskstation
Export list for diskstation:
/volume2/backup 192.168.3.3,192.168.3.4,192.168.3.1,192.168.3.2,192.168.3.5
/volume2/data   192.168.3.42,192.168.3.32,192.168.3.4,192.168.3.3,192.168.3.2,192.168.3.1,192.168.3.5
/volume2/media  192.168.3.32,192.168.3.42,192.168.3.4,192.168.3.3,192.168.3.2,192.168.3.1,192.168.3.5

Published on 2019-04-29


My mails got marked as spam

Since setting up and moving to our new server, we had problems sending e-mails to other people. Mails got marked as spam on the receiving side and were moved to the Spam folder, or their subject got a prefix like [SPAM] $subject.

I remembered that while setting up a mail server at work, I used mail-tester.com to check my mails for problems. I sent a mail to them and ... got 6/10 points: The DKIM signature was invalid.

I'm using rspamd as a milter in Postfix:

/etc/postfix/main.cf
smtpd_milters = inet:localhost:11332
non_smtpd_milters = inet:localhost:11332

and rspamd is configured to add DKIM signatures:

/etc/rspamd/local.d/dkim_signing.conf
path = "/var/lib/rspamd/dkim/$selector.key";
selector = "2019";

### Enable DKIM signing for alias sender addresses
allow_username_mismatch = true;

Now I compared the contents of /var/lib/rspamd/dkim/2019.txt with my DNS zone file /etc/tinydns/tinydns4/root/data-cweiske.de and found that the keys differed!

While setting up the new server, I had installed rspamd for the first time. It automatically created new DKIM keys, and I forgot that I also had to update my DKIM DNS record.

The solution for me was to change the selector to 2020, rename the files in rspamd's dkim folder from 2019 to 2020, and add the new 2020 DKIM key to djbdns' zone configuration. mail-tester now gave my mail a 10/10.
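
In terms of configuration, the switch boiled down to renaming the key files and bumping the selector; the public key from the .txt file then has to be published as a TXT record at 2020._domainkey.<your domain>:

$ mv /var/lib/rspamd/dkim/2019.key /var/lib/rspamd/dkim/2020.key
$ mv /var/lib/rspamd/dkim/2019.txt /var/lib/rspamd/dkim/2020.txt

/etc/rspamd/local.d/dkim_signing.conf
selector = "2020";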

mail-tester.com only analyzes 3 mails per day per server/IP. If you want more, you have to pay.

Published on 2019-03-20