The latest posts in full-text for feed readers.
Ever since bringing back the server for the OUYA gaming console, I thought that it would be nice if I could browse the games list on my PC.
Last week I had some spare time and wrote an HTML generator that takes the static API and builds HTML for the discover store listings and the game detail pages. Each took an evening. Now everyone can browse the OUYA discover store, look at the game details, and download the .apk files that are stored at the Internet Archive.
DISCOVER main menu, OUYA vs. web:
"cweiske's picks" on the discover page, OUYA vs. web:
Antichromatic game details page, OUYA vs. web:
Also on: Twitter.
Now that I was able to browse the games with my PC, I wanted the Push to my OUYA feature back. It was available on the original ouya.tv game detail pages, where it was tied to your user account.
My static API does not have user accounts, but I figured that most people would browse the games list at home, and that their OUYA will also be on the same home network. With IPv4, both PC and OUYA will have the same public IP address, so that can be used as "user account". With IPv6, they will share the same 64-bit network prefix, so that one can be used as identifier as well.
So when you click "Push to my OUYA" on the website, the combination of your IP address/prefix and the game's package name will be stored in a tiny SQLite database. When the OUYA fetches the download queue, it will get the game UUID and install it.
To make the push feature work when stouyapi is installed in your local network, local IP addresses are mapped to a special "local" IP address.
I also added a limitation to 30 games per IP, and automatically delete entries older than 24 hours from the queue.
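The IP-as-identity trick, the 30-games limit and the 24-hour expiry can be sketched with a tiny SQLite schema. This is my own illustration in Python, not stouyapi's actual PHP code; the table name, column names and package names are invented:

```python
import ipaddress
import sqlite3
import time

MAX_PER_IP = 30      # at most 30 queued games per IP
MAX_AGE = 24 * 3600  # drop queue entries older than 24 hours

def client_id(addr):
    """Use the IP address as the 'user account': IPv4 addresses as-is,
    IPv6 addresses reduced to their 64-bit network prefix."""
    if ipaddress.ip_address(addr).version == 6:
        return str(ipaddress.ip_network(addr + "/64", strict=False).network_address)
    return addr

def open_queue(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS queue"
        " (ip TEXT, package TEXT, created INTEGER, UNIQUE(ip, package))"
    )
    return db

def push(db, addr, package, now=None):
    """Store a 'Push to my OUYA' request, enforcing the per-IP limit."""
    ip = client_id(addr)
    now = int(time.time()) if now is None else now
    # expire old entries before counting
    db.execute("DELETE FROM queue WHERE created < ?", (now - MAX_AGE,))
    count = db.execute(
        "SELECT COUNT(*) FROM queue WHERE ip = ?", (ip,)
    ).fetchone()[0]
    if count >= MAX_PER_IP:
        return False
    db.execute(
        "INSERT OR IGNORE INTO queue (ip, package, created) VALUES (?, ?, ?)",
        (ip, package, now),
    )
    return True

def fetch(db, addr):
    """What the OUYA polls: the packages queued for its IP/prefix."""
    return [row[0] for row in db.execute(
        "SELECT package FROM queue WHERE ip = ?", (client_id(addr),))]
```

The local-network case could be handled inside client_id() as well, by mapping private addresses to one fixed "local" identifier.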
Also on: Twitter.
Thanks to Devin's work, all the .apk files are mirrored on our own server statics.ouya.world.
While downloading the games from the Internet Archive was pretty slow (1 MiB/s), the new server allows 10-20 MiB/s, depending on your location. Installing large games is a snap now.
Published on 2020-05-17 in html, ouya
While laying out an HTML page, I discovered a weird issue with the HTML <picture> tag that created empty space below the actual image.
Consider a <div> that contains a <picture> element wrapping an <img>, followed by a second <div>.
There is some space between the two <div> tags:
Looking at that space with the dev inspector, we see that the space is coming from the picture tag:
The picture tag is seemingly unrelated to the <img> tag, and has a height of 17px (but the same width as the image).
If we remove the <picture> tag and keep only div and img, we have the same issue - only this time the div provides the empty space below the image.
HTML 5.2: 4.7.3. The picture element says:
[...] the picture element itself does not display anything; it merely provides a context for its contained img element [...]
It is also defined to be a part of the Phrasing content category, which means it is like a letter in a text.
In CSS2 terms, letters are 10.6.1 Inline, non-replaced elements, for which the rule is:
The height of the content area should be based on the font [...]
Since the height of picture is based on the font size, we can fix this by changing the line height of the surrounding div container:
div {
line-height: 0;
}
And now, voila:
Published on 2018-03-21 in html, web
h-feed is a set of rules to add CSS classes to HTML tags so that normal HTML pages can be parsed automatically by feed readers. Indieweb proponents like Tantek Çelik prefer it over Atom feeds and have a list of criticisms:
As a duplicate of information already in the HTML of a web page, feed files are an example of the usual DRY violations.
This tells only half of the story. Most websites are split into two parts: index pages that list articles with their titles and a short summary, and article pages that contain the full article text.
In that case, the premise of information already [available] in the HTML is not correct, and the h-feed is more than twice as large as the full-text Atom feed.
Higher maintenance (requiring a separate URL, separate code path to generate, separate format understanding)
This is true for the initial setup/implementation.
However, when the site gets a new layout/redesign, the Atom feed can stay untouched and will not break, while extra care and testing is needed to keep an h-feed working.
Feed files become out of date with the visible HTML page (often because of broken separate code path), e.g.: [...]
While reading the indieweb chat logs, I saw the following and had a very good laugh:
aaronpk: Whoops tantek the name on your event on your home page is a mess, i'm guessing implied p-name? It's fine on your event permalink
Following all these indieweb feeds is making these markup issues super obvious now.
tantek: Even when the data is visible, consuming it and presenting it in a different way can reveal issues!
If you're still around I think I have a fix for the p-name problem you found.
Seems to work locally
Alright, deployed
!tell aaronpk try tantek.com h-feed again, p-name issue(s) should be fixed. e-content too.
Tantek added h-feed because he feared that the Atom "side file" could break silently, since it is invisible.
Now his h-feed failed silently, and it needed a feed reader user to tell him - just like it would have been if his Atom feed had been broken (except that an Atom feed can be validated automatically).
Published on 2018-03-12 in html, indieweb, web
The HTML pages on my blog are served with the MIME content type application/xhtml+xml. This forces browsers to use an XML parser instead of a lenient HTML parser, and they will bail out with an error message if the XML is not well-formed.
Yesterday someone complained by e-mail that he could not read my blog because Firefox showed an XML parsing error. In addition to that, the archive.org version of my blog also showed only an XML parsing error.
The Internet Archive version is broken because their software injects an additional navigation header into the content, which is not well-formed at all:
Example: Goodbye, CAcert.org @2017-06-06.
I opened a bug report for the issue: internetarchive/wayback #156: xhtml pages broken
But my contact person's own browser also showed an XML parsing error:
XML Parsing Error: not well-formed
Location: http://cweiske.de/tagebuch/
Line Number 42, Column 328: function cleanCSS2277284469133491(d) { if (typeof d != 'string') return d; var fc = fontCache2277284469133491; var p = /font(\-family)?([\s]*:[\s]*)(((["'][\w\d\s\.\,\-@]*["'])|([\w\d\s\.\,\-@]))+)/gi; function r(m, pa, p0, p1, o, s) { var p1o = p1; p1 = p1.replace(/(^\s+)|(\s+$)/gi, '').replace(/\s+/gi, ' '); if (p1.length < 2) { p1o = ''; } else if (fc.indexOf(p1) == -1) { if (fc.length < fontCacheMax2277284469133491) { fc.push(p1); } else { p1o = fc[0]; } } return 'font' + pa + p0 + p1o; } fontCache2277284469133491 = fc; return d.replace(p, r); }
It turned out that he had the firegloves extension installed, which likewise injects non-well-formed HTML tags: #2: Breaks XHTML pages delivered as application/xhtml+xml.
My blog is static hand-written HTML, and I have a couple of scripts that help me write articles: an image gallery creator, a TOC creator, an ID attribute adder and so on. Using an XML parser for those tools is much easier than using an HTML5-compliant parser.
Moving from my old lightbox gallery script to Photoswipe was only possible because I could automatically transform the XHTML code with XML command line tools.
Published on 2017-09-19 in html, web, xml
At work we're using xmllint to syntax check TYPO3 Fluid template files. Sometimes microdata attributes like itemscope are used which don't have a value - and xmllint bails out because <div itemscope> is not well-formed:
$ xmllint --noout file.html
file.html:26: parser error : Specification mandate value for attribute itemscope
What now?
The microdata specification's itemscope section says:
The itemscope attribute is a boolean attribute.
A boolean attribute may actually have values:
If the attribute is present, its value must either be the empty string or a value that is an ASCII case-insensitive match for the attribute's canonical name, with no leading or trailing whitespace.
So both the following variants are correct:
<div itemscope="">...</div>
<div itemscope="itemscope">...</div>
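Any strict XML parser applies the same well-formedness rule as xmllint here; a quick check with Python's minidom, for example:

```python
from xml.dom import minidom
from xml.parsers.expat import ExpatError

def well_formed(markup):
    """Return True when the snippet parses as well-formed XML."""
    try:
        minidom.parseString(markup)
        return True
    except ExpatError:
        return False

# A value-less attribute is rejected; an empty or canonical value is fine.
print(well_formed('<div itemscope>x</div>'))             # False
print(well_formed('<div itemscope="">x</div>'))          # True
print(well_formed('<div itemscope="itemscope">x</div>')) # True
```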
Published on 2017-01-18 in html, xml
I made a list of features that a good data pager should have, implemented them in my search engine and made many screenshots of pagers in other web applications.
When implementing my self-made search engine phinde, I needed a way to split the search results onto multiple pages.
At first I used the standard sliding pager provided by PEAR's HTML_Pager, but found its usability lacking.
Let's look at a standard sliding pager that is configured to show data on 5 page numbers of a total of 9:
FirstPrev123...NextLast
FirstPrev1234...NextLast
FirstPrev12345...NextLast
FirstPrev...23456...NextLast
FirstPrev...34567...NextLast
FirstPrev...45678...NextLast
FirstPrev...56789NextLast
FirstPrev...6789NextLast
FirstPrev...789NextLast
This is not ideal.
To me, a good pager must be based on the sliding pager and have the following additional features:
Prev and Next links need to be on the outside because they are used most.
Many standard pagers have First and Last links outside.
Prev and Next need to be big so they are easier to hit.
Too many pagers I've seen use < and > that are not easy to click.
No duplication.
A standard sliding pager on page 1 starts with FirstPrev12, in which First and 1 do the same.
When following rule #1, they are next to each other: PrevFirst12. This means the pager "phase" that most users see is inefficient because it contains buttons with same functionality.
It's better to ditch the First and Last buttons altogether, but always show buttons for page numbers 1 and the last number.
Only show "..." when more than one page is hidden.
I've often seen the following pager state: 12...45. The 3 should be shown instead of the dots because it takes the same space.
The position of the Next button should be stable across pages.
I want to be able to click on the first page on Next, use PageDown on page 2 and click the mouse again without moving to get to page 3. That's very handy for quickly evaluating the results of many pages.
The good pager looks like this, also with 5 pages being shown of 9:
Prev123...9Next
Prev1234...9Next
Prev12345...9Next
Prev123456...9Next
Prev123456789Next
Prev1...456789Next
Prev1...56789Next
Prev1...6789Next
Prev1...789Next
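The states above follow from a single rule: always show page 1, the last page, and a window around the current page, and collapse a gap to "..." only when it hides more than one page. A Python sketch of that logic (my own reconstruction, not phinde's actual code):

```python
def pager(current, total, window=2):
    """Compute the button labels for the 'good pager' described above."""
    # pages that are always visible: 1, the last page, and the window
    shown = sorted({1, total} | {
        p for p in range(current - window, current + window + 1)
        if 1 <= p <= total})
    items = []
    prev = 0
    for p in shown:
        gap = p - prev - 1
        if gap == 1:
            items.append(str(p - 1))  # a single hidden page: show it instead
        elif gap > 1:
            items.append("...")       # more than one page hidden: ellipsis
        items.append(str(p))
        prev = p
    return ["Prev"] + items + ["Next"]

print(" ".join(pager(6, 9)))  # Prev 1 ... 4 5 6 7 8 9 Next
```

Because Prev and Next frame the number list on both ends, their position stays stable across pages, which is exactly what rule #5 asks for.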
This is implemented in phinde. Have a look at search.cweiske.de/?q=php&page=7.
I'm showing the first and last two pages there, but I'm not sure yet if that's better than showing only one.
A collection of pagers in other web applications.
Published on 2016-12-29 in html, php, web
To be able to adjust some non-configurable menu items I wanted to inject my own CSS into the TYPO3 backend.
TYPO3 combines all CSS and JavaScript files in the backend automatically, so in order to be able to debug your custom CSS you have to turn that off at first:
$TYPO3_CONF_VARS['BE']['debug'] = 1;
Now that CSS files do not get merged anymore, you can load your CSS in the additional configuration file or your extension's ext_tables.php:
$GLOBALS['TBE_STYLES']['stylesheet'] = '/path/to/style.css';
Published on 2016-09-09 in html, mogic, php, typo3
I'm using bdrem to get notified about current and upcoming birthdays by e-mail.
bdrem sends e-mails with both a text/plain and a text/html MIME part. The HTML e-mails looked nice in Thunderbird and Claws, but not so on the stock Android mail client.
The reason for this was that I simply prepended a <style> tag to the HTML table markup, as is done on web pages. This is not supported by some mail clients, and thus background colors and clock icons were missing.
In HTML e-mail, you are supposed to inline all your styles:
<p style="color: red; padding: 5px;">..</p>
Normal <style> blocks are stripped when the e-mail is displayed to the user.
I think the technical reason for this behavior is that the layout of web mail clients would break when they show emails that re-define the client's layout with their <style> tags.
We cannot use scoped CSS in HTML yet (CSS that only gets applied to the content of a certain tag) - and probably never will. If browsers supported it, web mailers could support <style> tags without fear.
I did not want to maintain two HTML variants in bdrem, so I looked for a way to re-use the existing HTML and CSS by inlining the CSS into HTML tags automatically.
For PHP you could use the emogrifier library, but that was a dependency I did not want to introduce.
Instead I opted to write the CSS inliner myself. It isn't that hard:
The CSS rules I use for bdrem are not complex, and selecting elements by class in XPath is doable.
In the end the method has 70 lines of code and does the job nicely.
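bdrem itself is PHP, but the idea can be sketched in a few lines of Python. Only the two selector forms such a template needs are handled - plain tag names and single class names - and the rule set in the example is made up:

```python
import xml.etree.ElementTree as ET

def inline_css(xhtml, rules):
    """Copy CSS declarations into style attributes of matching tags.

    rules: list of (selector, declarations) tuples, where a selector is
    either a tag name ('td') or a single class ('.birthday').
    """
    root = ET.fromstring(xhtml)
    for selector, declarations in rules:
        for el in root.iter():
            if selector.startswith('.'):
                match = selector[1:] in el.get('class', '').split()
            else:
                match = el.tag == selector
            if match:
                style = el.get('style', '')
                el.set('style', style + ('; ' if style else '') + declarations)
    return ET.tostring(root, encoding='unicode')

out = inline_css(
    '<table><tr class="birthday"><td>Ann</td></tr></table>',
    [('td', 'padding: 5px'), ('.birthday', 'background-color: #fc0')],
)
```

Each matched element gets the declarations appended to its style attribute, so a later rule can pile onto an earlier one - the same behavior that makes the inlined mail so much larger.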
Inlining CSS rules into tags greatly increases the size of the HTML code, since rules are repeated again and again for every matching tag.
Here are the email sizes for a birthday reminder e-mail containing a list of 6 anniversaries:
| Format                    | Size in bytes |
|---------------------------|---------------|
| Plain text                | 846           |
| HTML, CSS in <style> tag  | 4171          |
| HTML, CSS inlined         | 8931          |
While the <style>-based HTML version is roughly five times the size of the plain-text e-mail, inlining the CSS doubles the size of the HTML e-mail again.
My Android phone - and probably also web mailers - displays the birthday reminder mails in a nice way now:
Published on 2016-06-22 in html, mail, tools
For this blog I wanted to have an image gallery that works on mobile devices. I found the open source PhotoSwipe library, and after some days I had it integrated in my blog.
PhotoSwipe requires you to specify the full image size when initializing; it does not auto-detect it. I had 29 blog posts with image galleries, and over a hundred images in them - adding the image sizes manually was not an option.
I opted for a HTML5 data attribute on the link to the large image:
<a href="image.jpg" data-size="1200x800">..
What I had to do:
Find all files with galleries
$ grep -l 'rel="shadowbox' raw/*.htm
Extract image paths from the HTML files
$ xmlstarlet sel -q -t -v '//_:a[@rel and not(@data-size)]/@href' "$file"
Extract the image size
$ exiftool -T -Imagesize "raw/$imgsrc"
Add the data-size attribute to the link tags which link to the image:
$ xmlstarlet ed --inplace -P --append "//_:a[@href='$imgsrc' and not(@data-size)]" --type attr -n data-size --value "$size" "$file"
And this all into one nice shell script:
for file in `grep -l 'rel="shadowbox' raw/*.htm`
do
    echo "$file"
    for imgsrc in `xmlstarlet sel -q -t -v '//_:a[@rel and not(@data-size)]/@href' "$file"`
    do
        size=`exiftool -T -Imagesize "raw/$imgsrc"`
        echo "$imgsrc $size"
        xmlstarlet ed --inplace -P --append "//_:a[@href='$imgsrc' and not(@data-size)]" --type attr -n data-size --value "$size" "$file"
    done
done
All of this only worked because my blog posts are XHTML.
You can see the new galleries in e.g. Kinderzimmerlampe im Eigenbau and Playing Tomb Raider 1 on OUYA.
Published on 2016-05-20 in html, shell, xml
At the end of 2013, one of our customers was convinced by the marketing team of the German company Sevenval that their FitML server was the best solution to make the customer's website mobile-ready.
FitML's only remnants are blog posts by Sevenval about it, many wasted work hours, and unhappy customers.
Sevenval's FitML marketing promise was that your web server would output the Fit markup language, and their Fit server would automagically do the rest to deliver it in the correct format to mobile devices, with auto-scaled images and everything. It was also said that the server would work around device bugs and missing features; if an iPhone did not support HTML select tags correctly, it would generate working replacements.
None of this was true; it was all a big mess. None of the ~10 developers working on the project had any joy with it.
Here is what FitML did wrong:
When a new iOS version came out during that project and the FitML templates provided by SevenVal did not look good on it, their answer was: "iOS 6 is not in the browser matrix".
There is no magic going on in FitML. You will have per-device (class/version) templates. You can do that yourself already without FitML.
There is no official DTD or XML schema to validate your output.
Sevenval will tell you that it's "Standard HTML with a bunch of <div> tags", but it's not.
When your FitML is even slightly broken, you will get a hard error from the FIT server, without any hint about what went wrong. var_dump() cannot be used because of this.
To get more information, you have to add /dd=1/ to your URL and repeat the request. Good luck with POST requests!
Also, HTTP headers used for debugging (e.g. Firebug) are stripped away, leaving you without information.
Server-side debug helpers like TYPO3's admin tool, Symfony's console or Smarty's variable debug view are unusable since they are not FitML, and converting them is next to impossible.
Form element attributes checked and disabled have to have the value true, while normal HTML allows checked="checked":
<input type="checkbox" checked="true" disabled="true"/>
This means your standard HTML form library cannot be used.
Every class attribute is seen as a special FitML instruction.
If you want to generate a class attribute in the HTML output, you have to use the style attribute. Yes.
You cannot use a standard CMS unless you customize *all* the output, even of standard content elements like text. This means you are forced to develop and maintain two versions of your web site/web app.
Their solution for redirecting mobile clients from the normal to the FIT server is an apache rewrite rule that's several years old.
Their documentation is not wrong, it just does not describe this aspect of the behavior. We really got that reply from their support staff.
Until 2 days before launch, we had reproducible crashes of the entire FIT server software.
Some iOS versions don't support the <label for> attribute, and the FitML server does nothing to compensate for that automatically (e.g. by adding some JavaScript).
The FitML/Sevenval people will tell you that everything automatically works, while in reality things are really nasty - up to the point that their support staff says "no, that can't be done".
Their website also states:
FITML is a markup syntax developed in-house by Sevenval, oriented towards XHTML and Microformats.
This is wrong; there are no microformats at all.
FitML has something called "ContentUpdate", but that only gives 2% of standard AJAX functionality.
Throw away all libraries you use for your desktop site. jQuery.ajax()? Forget it. You have to re-develop all the existing functionality to fit into FitML.
Published on 2013-11-19 in html