One of our customers at work uses TYPO3 CMS to manage the website's content,
and files are stored on an S3-compatible
MinIO server
running on a files. subdomain.
Uploading and using a .vtt subtitle file for a .mp4 video
did not work - the video played, but the subtitles were not available.
Firefox showed the following error (translated from German):
Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource at
https://files.example.org/video.mp4.
(Reason: CORS request did not succeed). Status code: (null).
In the network tab it showed Not same-origin.
Chromium's error message in English:
Unsafe attempt to load URL https://files.example.org/subtitles.vtt
from frame with URL https://www.example.org/.
Domains, protocols and ports must match.
CORS?
At first I thought the MinIO server did not send
CORS
headers, but it
automatically sets
them already:
Your application has a bug - we support CORS by default on all requests.
On the browser or your client you need to set the valid Origin: HTTP header
CORS headers were a red herring.
crossorigin attribute
It
turned out
that the HTML <video> tag has a
crossorigin attribute.
Setting it to anonymous fixed the issue and subtitles
were loaded:
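A minimal example of such markup - the file names and the subtitle language are
illustrative, not the customer's actual setup:
<video controls="controls" crossorigin="anonymous">
  <source src="https://files.example.org/video.mp4" type="video/mp4"/>
  <!-- without crossorigin="anonymous" on the video tag, the browser
       refuses to load the cross-origin track file -->
  <track src="https://files.example.org/subtitles.vtt"
         kind="subtitles" srclang="de" label="Deutsch"/>
</video>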
Ever since
bringing back the server
for the
OUYA gaming console,
I thought that it would be nice if I could browse the games list on my PC.
Last week I had some spare time, and wrote an HTML generator that
takes the static API and builds HTML for the discover store listings
and game detail pages.
Each took an evening, and now everyone is able to
browse the OUYA discover store
and look at the
game details
and download the .apk files that are stored at the Internet Archive.
Now that I was able to browse the games list with my PC,
I wanted the Push to my OUYA feature back.
It was available on the original ouya.tv game details pages,
and it worked like this:
Log into your account on ouya.tv
Click "Push to my OUYA" on a game page, which puts the game into the download queue.
The OUYA checks the
queue API
every 5 minutes and installs the games in it.
My static API does not have user accounts, but I figured that most people
would browse the games list at home,
and that their OUYA will also be on the same home network.
With IPv4, both PC and OUYA will have the same public IP address,
so that can be used as "user account".
With IPv6, they will share the same 64-bit network prefix, so that one can
be used as identifier as well.
So when you click "Push to my OUYA" on the website, the combination of
your IP address/prefix and the game's package name will be stored in
a tiny SQLite database.
When the OUYA fetches the download queue, it will get the game UUID
and install it.
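A sketch of how such an identifier can be derived in PHP - the function name is
made up, and stouyapi's actual code may differ:
<?php
// Use the full address for IPv4 clients and only the /64 network
// prefix for IPv6 clients as the "user account" identifier.
function pushIdentifier($remoteAddr)
{
    $binary = inet_pton($remoteAddr);
    if (strlen($binary) == 4) {
        // IPv4: the public address itself
        return $remoteAddr;
    }
    // IPv6: keep the first 64 bits, zero out the rest
    return inet_ntop(substr($binary, 0, 8) . str_repeat("\0", 8)) . '/64';
}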
To make the push feature work when stouyapi is installed in your
local network,
local IP addresses
are mapped
to a special "local" IP address.
I also added a limit of 30 games per IP, and entries older than 24 hours
are automatically deleted from the queue.
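A sketch of how the limit and the expiry could be enforced - the table and
column names are invented for illustration:
<?php
// hypothetical schema: queue(identifier TEXT, package TEXT, created_at TEXT)
$db = new PDO('sqlite:push-queue.sqlite3');

// drop queue entries older than 24 hours
$db->exec(
    "DELETE FROM queue WHERE created_at < datetime('now', '-24 hours')"
);

// refuse new entries once an IP/prefix has 30 games queued
$identifier = pushIdentifier($_SERVER['REMOTE_ADDR']); // see the sketch above
$stmt = $db->prepare('SELECT COUNT(*) FROM queue WHERE identifier = ?');
$stmt->execute([$identifier]);
if ($stmt->fetchColumn() >= 30) {
    http_response_code(429);
    exit('Push queue is full');
}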
Thanks to Devin's work,
all the .apk files are mirrored on our own server
statics.ouya.world.
While downloading the games from the Internet Archive was pretty slow
(1MiB/s),
the new server allows 10-20MiB/s, depending on your location.
Installing large games is a snap now.
While laying out an HTML page I discovered a weird issue with
the HTML <picture> tag
that created empty space below the actual image.
Consider markup roughly like the following (a reduced, illustrative example;
the image paths are placeholders):
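<!-- illustrative structure only; the real page used different images -->
<div>
  <picture>
    <source srcset="photo1-large.jpg" media="(min-width: 800px)"/>
    <img src="photo1.jpg" alt="First photo"/>
  </picture>
</div>
<div>
  <picture>
    <source srcset="photo2-large.jpg" media="(min-width: 800px)"/>
    <img src="photo2.jpg" alt="Second photo"/>
  </picture>
</div>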
There is some space between the two <div> tags:
Looking at that space with the dev inspector, we see that
the space is coming from the picture tag:
The picture tag is seemingly unrelated to the <img>
tag, and has a height of 17px (but the same width as the image).
If we remove the <picture> tag and only keep
div and img, we have the same issue
- only that this time div is providing the empty space
below the images.
h-feed is a set of rules to
add CSS classes to HTML tags so that normal HTML pages can be parsed
automatically by feed readers.
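A minimal example of what that looks like - the post data is made up, but the
class names are the standard microformats2 ones:
<div class="h-feed">
  <article class="h-entry">
    <h2 class="p-name">
      <a class="u-url" href="/2021/example-post.htm">Example post</a>
    </h2>
    <time class="dt-published" datetime="2021-04-01">2021-04-01</time>
    <div class="e-content">
      The full article text goes here.
    </div>
  </article>
</div>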
Indieweb proponents like
Tantek Çelik prefer it over
Atom feeds
and have a list of criticisms:
Criticism #1: DRY Violations
As a duplicate of information already in the HTML of a web page,
feed files are an example of the usual DRY violations.
This tells only half of the story.
Most websites are split into two parts:
Index pages that list articles with their titles and a short summary,
and article pages that contain the full article text.
In that case, the premise of information already [available] in the HTML
is not correct, and the h-feed is
more than 2 times larger
than the full-text Atom feed.
Criticism #2: Maintenance
Higher maintenance (requiring a separate URL, separate code path to
generate, separate format understanding)
This is true for the initial setup/implementation.
However, when the site gets a new layout/redesign, the Atom feed can stay
untouched and will not break,
while extra care and testing are needed to keep an h-feed working.
Criticism #3: Out of date
Feed files become out of date with the visible HTML page
(often because of broken separate code path), e.g.: [...]
aaronpk:
Whoops tantek the name on your event on your home page is a mess,
i'm guessing implied p-name? It's fine on your event permalink
Following all these indieweb feeds is making these markup
issues super obvious now.
tantek:
Even when the data is visible, consuming it and presenting it in a
different way can reveal issues!
If you're still around I think I have a fix for the p-name problem you found.
Seems to work locally
Alright, deployed
!tell aaronpk try tantek.com h-feed again, p-name issue(s) should be fixed. e-content too.
Tantek
added h-feed
because he feared that the Atom "side file" could break silently,
since it is invisible.
Now his h-feed failed silently, and it took a feed reader user to tell him
- just as it would have if his Atom feed had been broken
(except that an Atom feed can be validated automatically).
The HTML pages on my blog are served with the MIME content type
application/xhtml+xml.
This forces browsers to use an XML parser instead of a lenient
HTML parser, and they will bail out with an error message if
the XML is not well-formed.
Yesterday someone complained by e-mail that he could not read
my blog because Firefox showed an XML parsing error.
In addition to that, the archive.org version
of my blog also only showed an XML parsing error.
Internet Archive
The Internet Archive version is broken because their software injects
an additional navigation header into the content, which is not well-formed
at all:
My blog is static hand-written HTML, and I have a couple of scripts that
help me write articles:
an image gallery creator, a TOC creator, an ID attribute adder and so on.
Using an XML parser for those tools is so much easier than using
an HTML5-compliant parser.
Moving from my old lightbox gallery script to Photoswipe was only possible
because I could automatically transform
the XHTML code with XML command line tools.
At work
we're using xmllint
to syntax check TYPO3 Fluid template files.
Sometimes microdata attributes like
itemscope
are used which don't have a value - and xmllint bails out
because <div itemscope> is not well-formed:
$ xmllint --noout file.html
file.html:26: parser error : Specification mandate value for attribute itemscope
The HTML specification says about such attributes:
If the attribute is present, its value must either be the empty string
or a value that is an ASCII case-insensitive match for the attribute's
canonical name, with no leading or trailing whitespace.
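So one way to satisfy both xmllint and the quoted rule is to give the attribute
its canonical name (or an empty string) as value:
<div itemscope="itemscope" itemtype="https://schema.org/Person">...</div>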
I made a list of features that a good data pager should have,
implemented them in my search engine and made many screenshots of
pagers in other web applications.
When implementing my self-made search engine
phinde,
I needed a way to split the search results onto multiple pages.
At first I used the standard sliding pager provided by
PEAR's HTML_Pager
but found it lacking in usability.
States of a standard sliding pager
Let's look at a standard sliding pager that is configured
to show 5 page-number buttons for a total of 9 pages:
To me, a good pager must be based on the sliding pager and have the following
additional features (a code sketch of the page-numbering logic follows the list):
Prev and Next links need to be on the outside
because they are used most.
Many standard pagers have First and Last
links outside.
Prev and Next need to be big so they are easier
to hit.
Too many pagers I've seen use < and >
that are not easy to click.
No duplication.
A standard sliding pager on page 1 starts with
First Prev 1 2,
in which First and 1 do the same.
When following rule #1, they end up next to each other:
Prev First 1 2.
This means the pager "phase" that most users see is inefficient
because it contains buttons with the same functionality.
It's better to ditch the First and Last buttons
altogether, but always show buttons for page 1
and the last page.
Only show "..." when more than one page is hidden.
I've often seen the following pager state:
1 2 ... 4 5.
The 3 should be shown instead of the dots because
it takes the same space.
The position of the Next button should be stable across pages.
I want to be able to click Next on the first page,
press PageDown on page 2 and click the mouse again without moving it
to get to page 3.
That's very handy for quickly evaluating the results of many pages.
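A small PHP sketch of how such a page-number window could be computed -
illustrative only, not phinde's actual code:
<?php
// Return the pager buttons for the current page: page numbers plus
// "..." placeholders, following the rules above.
function pagerButtons($current, $total, $window = 5)
{
    // center the window of numbered buttons around the current page
    $start = max(1, min($current - intdiv($window, 2), $total - $window + 1));
    $end   = min($total, $start + $window - 1);
    $pages = range($start, $end);

    // always show page 1; use "..." only if more than one page is hidden
    if ($start > 1) {
        $prefix = [1];
        if ($start == 3) {
            $prefix[] = 2;
        } elseif ($start > 3) {
            $prefix[] = '...';
        }
        $pages = array_merge($prefix, $pages);
    }

    // the same rule applies to the last page
    if ($end < $total) {
        $suffix = [];
        if ($end == $total - 2) {
            $suffix[] = $total - 1;
        } elseif ($end < $total - 2) {
            $suffix[] = '...';
        }
        $suffix[] = $total;
        $pages = array_merge($pages, $suffix);
    }

    // Prev and Next always sit on the outside (rule #1)
    return array_merge(['Prev'], $pages, ['Next']);
}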
States of the good pager
The good pager looks like this, also with 5 of the 9 pages being shown:
Woltlab
has a nice pager implementing all the good parts.
The "..." is clickable and opens a popup.
When clicking on "...", a popup opens.
(Very old) Burning Board version 2 pager, which is really bad.
To be able to adjust some non-configurable menu items
I wanted to inject my own CSS into the
TYPO3 backend.
TYPO3 automatically combines all CSS and JavaScript files in the backend,
so in order to debug your custom CSS you have to turn that off
first:
typo3conf/LocalConfiguration.php
$TYPO3_CONF_VARS['BE']['debug'] = 1;
Now that CSS files do not get merged anymore, you can load your CSS
in the additional configuration file or your extension's
ext_tables.php:
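One way this can look - the extension key and the path are made up, and the
exact registration array depends on your TYPO3 version:
<?php
// ext_tables.php of a hypothetical extension "my_skin"
if (TYPO3_MODE === 'BE') {
    // register a backend skin whose stylesheets are loaded
    // in addition to the default backend CSS
    $GLOBALS['TBE_STYLES']['skins']['my_skin'] = [
        'name' => 'my_skin',
        'stylesheetDirectories' => [
            'css' => 'EXT:my_skin/Resources/Public/Css/',
        ],
    ];
}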
I'm using
bdrem
to get notified about current and upcoming birthdays by e-mail.
bdrem sends e-mails with both a text/plain and
a text/html MIME part.
The HTML e-mails looked nice in Thunderbird and Claws, but not
so on the stock Android mail client.
The reason for this was that I had simply prepended a <style> tag
to the HTML table markup, as is done on web pages.
This is
not supported by some mail clients,
and thus background colors and watchclock-icons were missing.
CSS in HTML
In HTML e-mail, you are supposed to inline all your styles:
<p style="color: red; padding: 5px;">..</p>
Normal <style> blocks are stripped when the e-mail is
displayed to the user.
I think the technical reason for this behavior is that the layout of
web mail clients would break when they show emails that
re-define the client's layout with their <style> tags.
We cannot use scoped CSS
in HTML yet
(CSS that only gets applied to the content of a certain tag)
- and
probably never will.
If browsers supported it, web mailers could support
<style> tags without fear.
Inlining CSS
I did not want to maintain two HTML variants in bdrem, so I looked
for a way to re-use the existing HTML and CSS by inlining the CSS into
HTML tags automatically.
For PHP you could use the
emogrifier
library, but that was a dependency I did not want to introduce.
Instead I opted to write the CSS inliner myself.
It isn't that hard; a code sketch follows this list of steps:
Parse CSS rules into an array with the rule as key and the desired style
as value
Convert each CSS rule into XPath
Load the HTML code with SimpleXML
Iterate through all elements matched by the XPaths
and add the style attribute
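A reduced sketch of those steps - it only handles plain tag and class selectors
and assumes markup without a default XML namespace; bdrem's real code is more
complete:
<?php
// step 1: CSS rules, selector => style to inline
$css = [
    'td'       => 'padding: 5px;',
    '.current' => 'background-color: #fea;',
];

// step 3: load the HTML code (here a tiny hard-coded example)
$html = '<table><tr><td class="current">Today</td><td>Tomorrow</td></tr></table>';
$xml  = simplexml_load_string($html);

foreach ($css as $selector => $style) {
    // step 2: convert the CSS selector into an XPath expression
    if ($selector[0] == '.') {
        $class = substr($selector, 1);
        $xpath = '//*[contains(concat(" ", normalize-space(@class), " "),'
            . ' " ' . $class . ' ")]';
    } else {
        $xpath = '//' . $selector;
    }

    // step 4: add the style attribute to every matched element
    foreach ($xml->xpath($xpath) as $element) {
        if (isset($element['style'])) {
            $element['style'] = $element['style'] . ' ' . $style;
        } else {
            $element->addAttribute('style', $style);
        }
    }
}

echo $xml->asXML();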
Shell command of the day: Image size as XML attributes
for file in `grep -l 'rel="shadowbox' raw/*.htm`; do echo $file; for imgsrc in `xmlstarlet sel -q -t -v '//_:a[@rel and not(@data-size)]/@href' "$file"`; do size=`exiftool -T -Imagesize raw/$imgsrc`; echo $imgsrc $size; xmlstarlet ed --inplace -P --append "//_:a[@href='$imgsrc' and not(@data-size)]" --type attr -n data-size --value "$size" "$file"; done; done
For this blog I wanted to have an image gallery that works on mobile devices.
I found the open source PhotoSwipe
library, and after some days I had it integrated in my blog.
PhotoSwipe requires you to specify the full image size when initializing;
it does not auto-detect it.
I had 29 blog posts with image galleries, and over a hundred images in them
- adding the image sizes manually was not an option.
I opted for an HTML5 data attribute on the link to the large image:
<a href="image.jpg" data-size="1200x800">..
What I had to do:
Find all files with galleries
$ grep -l 'rel="shadowbox' raw/*.htm
Extract image paths from the HTML files
$ xmlstarlet sel -q -t -v '//_:a[@rel and not(@data-size)]/@href' "$file"
Extract the image size
$ exiftool -T -Imagesize "raw/$imgsrc"
Add the data-size attribute to the link tags which
link to the image:
$ xmlstarlet ed --inplace -P --append "//_:a[@href='$imgsrc' and not(@data-size)]" --type attr -n data-size --value "$size" "$file"
And all of this in one nice shell script:
for file in `grep -l 'rel="shadowbox' raw/*.htm`
do
echo $file
for imgsrc in `xmlstarlet sel -q -t -v '//_:a[@rel and not(@data-size)]/@href' "$file"`
do
size=`exiftool -T -Imagesize raw/$imgsrc`
echo $imgsrc $size
xmlstarlet ed --inplace -P --append "//_:a[@href='$imgsrc' and not(@data-size)]" --type attr -n data-size --value "$size" "$file"
done
done
This all only worked because my blog posts are XHTML.