At work I recently wanted to let the central TV speak the messages from the company's main chatroom.
I'm using Psi+ to connect to our XMPP server. Psi writes chatroom logs into ~/.local/share/psi+/profiles/default/history/*.history, with one message per line.
First I had to remove the timestamp from each message, and then filter out all non-alphanumeric characters, which would not be spoken correctly anyway:
$ tail -n 1 -f .local/share/psi+/profiles/default/history/room \
    | cut -b18- \
    | tr -C -d '[:alnum:][:cntrl:] '
It did not work: every single command after the tail worked on its own, but chaining several of them with pipes produced no output at all.
After quite a lot of searching I found out that the streams between the piped commands are buffered. When a tool's standard output is not a terminal, the C library collects the output and only flushes it in larger chunks. This is done to improve performance - the receiving tools do not have to handle each byte separately - but it also means a downstream command may wait a long time before the first chunk arrives.
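The effect is easy to reproduce without Psi (a toy example, not part of my setup): ping prints one line per second, and GNU grep - like most stdio-based tools - only line-buffers its output when writing to a terminal:

$ ping localhost | grep icmp          # grep writes to the terminal: one line per second
$ ping localhost | grep icmp | cat    # grep writes to a pipe: no output for a long time

In the second pipeline cat only sees the matching lines once grep's output buffer (typically 4 KiB) has filled up, which with short ping lines takes the better part of a minute.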
Luckily there is a GNU coreutils command, stdbuf, which lets us modify these buffering settings:
$ tail -n 1 -f .local/share/psi+/profiles/default/history/room \
    | stdbuf -i0 -oL tr -C -d '[:alnum:][:cntrl:] ' \
    | stdbuf -i0 -oL cut -b18- \
    | spd-say -l de -e
stdbuf -i0 disables input buffering, and stdbuf -oL uses line-based buffering for the output, which suits my use case very well.
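Applied to the toy ping example from above, the same fix looks like this:

$ ping localhost | stdbuf -oL grep icmp | cat    # matching lines appear immediately again

GNU grep happens to have its own --line-buffered option, but stdbuf has the advantage of working with any dynamically linked tool that uses stdio buffering, including cut and tr.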
spd-say is the command line interface to speech-dispatcher.
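This part can be tested in isolation (the German sample text is of course arbitrary):

$ echo 'Hallo Welt' | spd-say -l de -e

The -e flag puts spd-say into pipe mode, so it reads the text to speak from standard input - exactly what the tail pipeline feeds it.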
Then I started pulseaudio-dlna to get a local audio output device that streams to the Chromecast dongle attached to the TV, and used pavucontrol to route speech-dispatcher's audio output to it.
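For the record, the routing can also be done without a GUI; the sink name below is a placeholder, the real name appears once pulseaudio-dlna has announced the device:

$ pulseaudio-dlna &                          # exposes the Chromecast as an additional PulseAudio sink
$ pactl list short sinks                     # look up the actual name of the new sink
$ pactl set-default-sink <chromecast-sink>   # route new streams, including spd-say's, to the TV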
This was the first time in my life that I liked pulseaudio.