Christians Tagebuch: http

The latest posts in full-text for feed readers.


Google broke my local domains

My laptop's name is boo, and all the web projects I'm developing on it have a domain name $project.boo. Today I wanted to open http://archive.boo/ in Firefox and got an error message:

Did Not Connect: Potential Security Issue

Firefox detected a potential security threat and did not continue to archive.boo because this website requires a secure connection.

archive.boo has a security policy called "HTTP Strict Transport Security (HSTS)", which means that Firefox may only connect to it over a secured connection. Because of this, no exception can be added for this site.

My local Apache web server does not send any HSTS headers, and tracing the traffic in Wireshark shows me that no HTTP request is made at all - only an HTTPS request.

It turns out that the .boo top-level domain (TLD) is owned by Google and that it is on the HSTS preload list for both Firefox and Chromium.
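You can check whether a domain or TLD is preloaded without a browser; hstspreload.org offers a status endpoint (assuming its v2 API is still available):

$ curl 'https://hstspreload.org/api/v2/status?domain=boo'

The JSON answer contains the preload status for the given name.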

Game over. I could try to disable HSTS in the browser (which I won't do), or move to a new development TLD (one that is specific to my machine, so I can share links in my network).

Published on 2022-02-17


OUYA server replacement is online

I wrote a replacement API for the OUYA gaming console because the original one had been shut down, making useless bricks out of the OUYAs in your living rooms.

If you just want to get the discover store running on your OUYA, have a look at the instructions.

OUYA?

At the beginning of 2014 I bought the Android-based OUYA gaming console to have a device in the living room that I could use to play old Super Nintendo games. The console did not use Google Play to install games; instead it had its own "Discover" store that was hosted by OUYA itself.

In 2015, Razer bought OUYA to fill its own Cortex store with the 1200+ games that were available for the OUYA already. At that time they said:

If you already own the hardware, we're going to be keeping the lights on for at least a year.

In May 2019, Razer sent an e-mail to all registered users and developers:

From: Razer <no.reply@razer.com>
Subject: OUYA & Razer Forge TV Services Shutdown
Date: Wed, 22 May 2019 04:07:17 +0000
Reply-To: no.reply@razer.com

Dear Developer,

We would like to inform you that the Forge TV Games store and OUYA service will cease operations on 6-25-2019

This email serves as notice of termination of the Marketplace Agreement, in accordance with section 9 of the agreement, with effect on 5-25-2019.

Thank you for the support which you have extended us these past years. It has been a privilege to have worked with you and to have had you as part of our community.

- Razer

FAQ: https://support.razer.com/console/razer-forge-tv/

Server shutdown

With the OUYA server gone, the console cannot really be used anymore:

  • If you buy a used OUYA, or did a factory reset, you're stuck at the startup screen, which requires you to log in or register with the server.
  • You cannot install any of the games, because the "Discover" store is gone.
  • Games you have already bought revert to demo mode because the server does not confirm that you really bought them.

A group of people determined to preserve what OUYA had formed and coordinated in the OUYA Saviors Discord chat server. I decided to let them do their work and went on with my life.

In November I went back to see what they had managed to do and found... not much. No working replacement server software, only the "work in progress" BrewyaOnOuya store that let your OUYA log in.

With a heavy heart I stepped in and did it myself.

Games!

To build a game store, I needed a list of games and their metadata: title, description, categorization/genres, images and download links. There was no such game data repository :/

Game data schema

My plan was to have a git repository with one file for each game that contains all data about it, including an .apk download link. This game data repository could then be used to fill the store.

At first I analyzed the API: The list of games ("discover") contained title, genres and an image, but not the description. The detail page API had the description and .apk file information, but not the website. The app data API had the website, the "like count" and the video URL that were missing in the other API responses.

At the end of the first day I had a big HTML table that listed every data point a game could have.

On day two I built the schema for the data files, and manually added the first game - Bloo Kid 2.

On day 3 I found that "apps" and "details" were not the same, and that my API backup did not have "apps" data. I got an "apps" data backup from Devin Rich (@szeraax) and integrated its fields into my schema.

The next two days were filled with refining the schema, and at the end of day 5 my store API generation script was good enough that I could use the OUYA to register, browse the store, look at game details, and download and install a game!

First store API demo: downloading Bloo Kid 2

All the games

Now I had the proof that the game data schema was usable for an actual store, and I began to build a script that created the game data files from the API backups I had, and from the games that Jason Scott put into the Internet Archive.

During that work I found that the app uuid was not a UUID for the application itself but for the release. Also, details responses allowed videos and images in custom order, which I had not seen before. That and some other observations required schema adjustments.

Another big issue was the images and videos. The API backup files contained image URLs pointing to cloudfront.net, filepicker.io and amazonaws.com. The Amazon bucket was still available, but the other two were already down. Some games in the Internet Archive had copies of those images, with the same file names as on cloudfront. I had backed up some others, and got another large backup from @blackcharcz in chat.

In the end, only 419 of the 20159 images could not be recovered. The images are currently hosted on ouya.cweiske.de/game-images/, and I asked Jason Scott to import them into the Internet Archive. The whole import process took a bit under two weeks of work.

You can find the OUYA game meta data in a git repository: ouya-saviors/ouya-game-data.

Server

I had already laid the groundwork for the server in 2013, when I built an image store that converted the discover section into an image browser, and in 2015, when I documented the OUYA server API.

A normal API server would need to manage users and allow registration, support uploads of new games and updates to existing games. It would need to track user downloads, purchases and maybe even the Push to OUYA feature.

To have all of that, a programming language and a database would be required, which means constant maintenance and adjustments when the Python or PHP version gets updated. Another big issue is security - if someone finds a bug in the code, the libraries, the framework or the interpreter, they might be able to break in and do bad things with the server.

I did not want to have any of those issues and thus had decided very early that I would build a script that creates a couple of static files, which would be served by a web server without any dynamic parts.

Limitations of static

My server's user registration API responds with "OK", but does not even look at the data the user sends to it. The "login" API route returns static data: A hard-coded session ID, and a static name "stouyapi". Every user is the same.

The "agreements" API does not track which marketplace terms you have already read and confirmed, it only says that you already did :) If the OUYA sends a crash report, the server says "ok" but does not look at the data.

Game rating submissions are ignored as well; the number of votes and average rating is taken from the static game data files and will never change. The server also does not track which user messages you have read or not.

PUT

It turned out that it is impossible to let Apache 2.4 return static responses to PUT requests - it needs a scripting language for that.

Apache's own content handler returns "405 Method not allowed" if it gets an HTTP PUT request, even if there is a rewrite rule that says "return 200 OK". So I had to resort to providing an empty-json.php PHP script that has to be registered as the PUT handler in the Apache configuration:

Script PUT /empty-json.php

Without it, the OUYA will forever try to submit the user's public key and the marketplace agreement status.
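For reference, such a PUT handler only needs to emit an empty JSON object. A minimal sketch of what empty-json.php could look like (the real script in stouyapi may differ):

<?php
// Accept any PUT request and answer with an empty JSON object.
// The request body is deliberately ignored - there is nothing to store.
header('Content-Type: application/json');
echo '{}';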

Categorization

The "hard" part was creating the "Discover" menu because I could not remember how the original thing looked like. In the meantime @szeraax had implemented game data import and a first discover section in his BrewyaOnOuya store, and I took some ideas from there:

  • A row of "last updated" games
  • A row of "best rated" games
  • My own favorites, "cweiske's picks"
  • Categories with games for "2 players", "3 players" and "4 players"
  • Games grouped by content rating: Everyone, 9+, 12+ and 17+
  • Categories for all the original genres
  • A category for each letter

Each category begins with a "last updated" and a "best rated" row, and then lists 4 games per row. That way you scroll vertically instead of horizontally as in the original OUYA store, which gives you the chance to really see all of the games.
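To illustrate how such rows can be generated from the static game data files, here is a rough sketch; the field names rating and updatedAt are invented for this example and do not necessarily match the real ouya-game-data schema:

<?php
// Sketch: build "best rated" and "last updated" rows from game data files.
$games = [];
foreach (glob(__DIR__ . '/game-data/*.json') as $file) {
    $games[] = json_decode(file_get_contents($file), true);
}

// sort by (invented) average rating, descending
usort($games, function ($a, $b) {
    return $b['rating'] <=> $a['rating'];
});
$bestRated = array_slice($games, 0, 10);

// sort by (invented) update timestamp, newest first
usort($games, function ($a, $b) {
    return strcmp($b['updatedAt'], $a['updatedAt']);
});
$lastUpdated = array_slice($games, 0, 10);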

Using the server

When the OUYA saviors chat server was launched, a former OUYA employee joined and gave us some information that helped us very much.

One of those important bits of information was that there was an ini-style config file that the OUYA developers used during development: ouya_config.properties. Just connect to the OUYA via USB, and create a plain text file with that name in the auto-mounted root folder.

Setting the options OUYA_SERVER_URL and OUYA_STATUS_SERVER_URL will immediately change the server that the OUYA uses, and lets us point our OUYAs to the new server in the easiest way imaginable.
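The file is a plain list of key=value lines. Pointed at my server, it would look roughly like this (check the stouyapi instructions for the exact URLs):

OUYA_SERVER_URL=http://ouya.cweiske.de
OUYA_STATUS_SERVER_URL=http://ouya.cweiske.de/api/v1/status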

Success

My stouyapi store went live on November 22, three weeks after I began working on the project. The first gamers are already using it.

stouyapi's source code is on my git server (GitHub mirror). I use the ouya-game-data repository to build it.

The next big task is getting all the now unpurchasable games into a fully playable state...

Screenshots

OUYA's main menu, DISCOVER store main page, cweiske's picks, alphabetical game categories, game details, "3 players" category, scrolling in a category

Also on: Twitter, ouya.world.

Published on 2019-11-25


HTTP headers for debugging

In my PHP web applications I sometimes use custom HTTP headers to aid debugging when things go wrong.

The Laravel framework redirects unauthenticated users to the login page when they access a URL that requires an authenticated user. Especially with API clients this is not helpful, and so my user guard implementation sends a "redirect reason" header with a specific explanation:

X-Redirect-Reason: User token is invalid

This has helped me a couple of times already, and saves me from digging into the authorization code.


Another header is used in error handlers to provide more information than just "404 Not Found":

X-404-Cause: CRUD ModelNotFound

Other reasons could be that the user is not allowed to access the resource for auth reasons, or that the model exists in the database but has been deleted.
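In both cases this boils down to setting a header before the response goes out. A plain-PHP sketch, outside of any framework (userTokenIsValid() is made up for this example):

<?php
// In the authentication guard: explain the redirect
if (!userTokenIsValid()) {  // hypothetical check
    header('X-Redirect-Reason: User token is invalid');
    header('Location: /login');
    exit;
}

// In the error handler: explain the 404
header('X-404-Cause: CRUD ModelNotFound');
http_response_code(404);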

Others using custom headers

Content Delivery Networks (CDNs) often aid debugging with custom HTTP headers. Fastly uses X-Cache, Akamai uses multiple headers like X-Check-Cacheable, X-Akamai-Request-ID, X-Cache and X-Cache-Remote.

Apache Traffic Server has an XDebug plugin that sends out X-Cache and other headers.

The PHP FOSHttpCache library aids debugging by configuring Varnish to send out an X-Cache header indicating cache hits and misses.

The PHP HTTP client library Guzzle can track the redirect history of a single HTTP request in the X-Guzzle-Redirect-History header.

TYPO3's realurl extension sends out X-TYPO3-RealURL-Info headers indicating the reason for a redirect:

X-TYPO3-RealURL-Info: redirecting expired URL to a fresh one
X-TYPO3-RealURL-Info: redirect for missing slash
X-TYPO3-RealURL-Info: redirect for expired page path
X-TYPO3-RealURL-Info: postVarSet_failureMode redirect for ...

Published on 2019-09-19


wget -O empties file on error

wget can be used to fetch a file via HTTP, and it supports -O filename.html to force the downloaded file to have that exact name. If the file exists, it is simply overwritten with the new file content.

If the request fails, the local file ends up empty - and losing the previous content is often not desired.

The reason for this has been explained in 2006 already:

-O, as currently implemented, is simply a way to specify redirection. You can think of it as analogous to "command > file" in the shell.

Hrvoje Niksic, wget@sunsite.dk mailing list

The solution to this problem is to not use wget but curl:

$ curl -f http://nonexistent/file.jpg -o localfile.jpg

It keeps the current file contents if the request fails, and overwrites them if the request succeeds.
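If you have to stick with wget, a common workaround is to download into a temporary file and only rename it on success - a sketch:

$ wget -O localfile.jpg.tmp http://nonexistent/file.jpg \
    && mv localfile.jpg.tmp localfile.jpg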

Published on 2019-01-10


Geo coordinates of a land parcel

I wanted to have the geo coordinates of my land parcels (Flurstücke) so that I can display them on my phone with OsmAnd.

Saxony has a geo portal with a map that contains all kinds of information.

If you enable Karteninhalt -> 1/14 Verwaltung -> Flurstück, you see all parcel boundaries and their numbers. The boundaries and numbers are only a rendered image, though, from which you cannot extract coordinates.

If you click on a parcel to get more information about it, you see its number and size - but nothing more.

If you search for a parcel, however, it is displayed with a red outline - and that outline is not an image. Now I only had to find out how to get at this data.

Instructions

  1. On sachsen.de -> Sachsenatlas -> Karte, enter the place, "Flurstück" and the number into the search field:

    Streuben, Flurstück 23

    Press Enter, and the search results are shown.

    If you now click "Karte" on one of the results, the parcel is shown on the map with a red outline.

  2. Activate the network inspector in the browser: right-click the map, choose "Inspect Element", then click "Network".

    The list of network requests should be empty. If it is not, clear it.

  3. Click "Search" (the magnifying glass) again. The network inspector now shows a request of type "json" with "proxy" in the file path.

  4. Click the request in the list, then on "Response". You now see the search result data as a data tree.

    In Firefox I could double-click the URL in GEOM, and a new tab with the parcel coordinates was opened.

    I could then simply save them to disk with Ctrl+S.

Search in the Sachsenatlas geo portal, highlighting of a found parcel, network inspection in Firefox, search results as JSON, JSON of a single parcel

Coordinate systems

After the initial joy about the coordinates came the disillusionment: they did not match anything I was used to.

[350597.7, 5688842],
[350642.5, 5688850.2],
...

Further down it also says:

spatialReference: {
    wkid: 25833
}

Via $searchengine I found the following:

In Germany, the UTM projection of ETRS89 was chosen.

Katastermodernisierung in Nordrhein-Westfalen: Wie wird das ETRS89 in die Ebene abgebildet? ("How is ETRS89 projected onto the plane?")

On epsg.io/25833 you learn that 25833 means ETRS89 / UTM zone 33N - the 33rd northern zone.

After a long search I found php-coord, a library for converting coordinates from one coordinate system into another - and one of the few that support UTM. With its help I could convert the first UTM coordinates into WGS 84 coordinates:

<?php
// first: $ composer require php-coord/php-coord
require_once __DIR__ . '/vendor/autoload.php';

// easting, northing, height, latitude band, UTM zone number
$UTMRef = new \PHPCoord\UTMRef(350531.8, 5689109.2, 0, 'N', 33);
$LatLng = $UTMRef->toLatLng();

echo (string) $UTMRef . "\n";
echo (string) $LatLng . "\n";
?>
33N 350531.8 5689109.2
(51.3336, 12.85437)

With this knowledge I built a script that converts the ArcGIS JSON files into GeoJSON: arcgis-to-geojson.php.
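The core of that conversion looks roughly like the following sketch; it assumes the ArcGIS file carries the polygon in geometry.rings and that php-coord's LatLng object exposes getLat()/getLng():

<?php
// Sketch: convert an ArcGIS JSON polygon (EPSG:25833) to a GeoJSON feature.
require_once __DIR__ . '/vendor/autoload.php';

$arcgis = json_decode(file_get_contents($argv[1]), true);

$rings = [];
foreach ($arcgis['geometry']['rings'] as $ring) {
    $points = [];
    foreach ($ring as $point) {
        $utm    = new \PHPCoord\UTMRef($point[0], $point[1], 0, 'N', 33);
        $latLng = $utm->toLatLng();
        // GeoJSON wants [longitude, latitude]
        $points[] = [$latLng->getLng(), $latLng->getLat()];
    }
    $rings[] = $points;
}

echo json_encode([
    'type'       => 'Feature',
    'geometry'   => [
        'type'        => 'Polygon',
        'coordinates' => $rings,
    ],
    'properties' => [],
]);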

Finally, another script combines multiple GeoJSON files with one polygon feature each into a single file with a FeatureCollection: combine-geojson.php.

The result can be displayed in Google Earth on the desktop, or on geojson.io:

Polygons on geojson.io

OsmAnd on the phone only supports .gpx files, which is why I still had to convert my GeoJSON files with gpsbabel.
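Assuming a gpsbabel version that knows the geojson format, the conversion is a one-liner:

$ gpsbabel -i geojson -f parcels.geojson -o gpx -F parcels.gpx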

Published on 2018-03-14


Firefox: No connection to localhost domains when offline

After upgrading from Ubuntu 14.04 to Ubuntu 16.04, I was not able to connect to domains pointing to 127.0.0.1 when I had no network connection - in Firefox. In Chrome, it worked correctly.

As a web developer, I have a bunch of projects running on my laptop with domain names defined in /etc/hosts that point to the local machine, 127.0.0.1. On the train I have no internet connection but still want to work on my projects, so it's crucial to be able to access them.

Symptom

When accessing http://www.bogo/, which has an entry in /etc/hosts, Firefox showed an error page:

Seite wurde nicht gefunden

Die Verbindung mit dem Server www.bogo.com schlug fehl.
Falls die Adresse korrekt ist, können Sie noch Folgendes versuchen:

  • Die Seite später noch einmal aufrufen.
  • Die Netzwerkverbindung überprüfen.
  • Falls der Computer sich hinter einer Firewall befindet, überprüfen Sie bitte, ob Firefox der Internetzugriff erlaubt wurde.

In English:

Hmm. We’re having trouble finding that site.

We can’t connect to the server at www.bogo.com.
If that address is correct, here are three other things you can try:

  • Try again later.
  • Check your network connection.
  • If you are connected but behind a firewall, check that Firefox has permission to access the Web.

Solution

I found a question that had the correct solution for me:

  1. Open about:config
  2. Search for network.dns.disableIPv6
  3. Set it to false

After that it worked for me.

Firefox bug 1267257 seems to be describing this issue.

Published on 2018-03-06


Unsupported: 406 Not Acceptable

While implementing the crawler for my own search engine phinde, I tried to minimize the amount of data transferred between web servers and the crawler.

The crawler can only extract links from HTML, XHTML and Atom feeds, so it sends an HTTP Accept header stating that:

Accept: application/atom+xml, application/xhtml+xml, text/html

Unfortunately, my Apache still sends out the content of large .bz2 files, which my crawler then has to throw away.

Specification

The HTTP/1.1 RFC 2616 states in section 10.4.7:

Note: HTTP/1.1 servers are allowed to return responses which are not acceptable according to the accept headers sent in the request.

I think this was noted to make it easier to implement HTTP/1.1.

Server support

Unfortunately, none of the 3 big web servers makes it possible to send out a 406 status code when the Accept condition cannot be fulfilled. I've opened a feature request for Apache: Option to send "406 Not Acceptable" when mime type in "Accept" header cannot be fulfilled

The standard configurations don't support it at all:

Apache

$ curl -IH 'Accept: image/png' http://httpd.apache.org/
HTTP/1.1 200 OK
[...]
Server: Apache/2.4.7 (Ubuntu)
[...]
Content-Type: text/html

Lighttpd

$ curl -IH 'Accept: image/png' http://www.lighttpd.net/
HTTP/1.1 200 OK
[...]
Content-Type: text/html
[...]
Server: lighttpd/2.0.0

nginx

$ curl -IH 'Accept: image/png' http://nginx.org/
HTTP/1.1 200 OK
Server: nginx/1.9.8
Date: Wed, 10 Feb 2016 20:11:30 GMT
Content-Type: text/html; charset=utf-8

Published on 2016-02-10


SSL certificate chains

SSL certificate verification

Your browser knows a list of Certificate Authorities (CAs) that it trusts; this list is called the "trust store". On Firefox 41 you can find it at Preferences / Advanced / Certificates / View Certificates / Authorities.

When connecting to a web site via HTTPS, the web server sends the public part of the web site's SSL certificate to your browser. The browser then checks if the certificate is signed by one of the CAs in its trust store.

 Browser -----[asks]----> Trust store
           +----------------------+          +-------------+
           |   SSL certificate    |          |Do we trust  |
           |valid for: example.org|          | 0x123456789 |
           |signed by: 0x123456789|          |           ? |
           +----------------------+          +-------------+

If it finds the signing certificate in the trust store, all is fine and the green lock icon is shown. If it does not find a trusted CA certificate, a warning is shown:

This Connection is Untrusted

You have asked Firefox to connect securely to example.org, but we can't confirm that your connection is secure.

Normally, when you try to connect securely, sites will present trusted identification to prove that you are going to the right place. However, this site's identity can't be verified.

What Should I Do?

If you usually connect to this site without problems, this error could mean that someone is trying to impersonate the site, and you shouldn't continue.

example.org uses an invalid security certificate.

The certificate is not trusted because the issuer certificate is unknown. The server might not be sending the appropriate intermediate certificates. An additional root certificate may need to be imported.

(Error code: sec_error_unknown_issuer)

What is a SSL certificate chain?

In reality, the web site's certificate will not be signed by the CA's certificate directly.

Web site certificate signing at CAs is done automatically these days. Imagine what happens when the CA's certificate gets stolen - the certificate's new owners would be able to issue certificates for all domain names, and the CA could do nothing against it - except telling browser vendors to remove its certificate from the trust store, which would mean that the CA could close its doors.

Instead, the certificate authority's main certificate (root certificate) is locked away on some offline storage medium in a safe. It was only used to sign some intermediate certificates, which have a limited life span, and which can be distrusted and revoked in case they get compromised.

+----------------------+    +----------------------+    +----------------------+
|intermediate cert     |    |root certificate      |    |     0x223456789      |
|valid for: signing    |--->|valid for: signing    |    |in browser trust store|
|signed by: 0x112345678|    |signed by: 0x223456789|    +----------------------+
+----------------------+    +----------------------+

Missing links

These intermediate certificates are used to sign web site certificates - but the browsers do not know about them. If your web server only sends out the domain's SSL certificate, the browser will show the "untrusted connection" warning: it only sees that the certificate is signed by the intermediate certificate - about which it knows nothing - and cannot make the connection to the CA's root certificate.

The solution to this problem is to send a certificate chain that contains both the web site's certificate and the intermediate certificate. With that information, the browser can follow the trust chain from the web site's certificate via the intermediate certificate up to the root certificate, which it has in its trust store.

The Mozilla Certificate Authority FAQ writes about this:

Why does SSL handshake fail due to missing intermediate certificate?

This type of error indicates that the web server is incorrectly configured. The web server itself has to send the intermediate certificate along with their own SSL cert to complete the certificate chain. Only root certificates or trust anchors are included in the Mozilla root store.

Web server configuration

Apache

Since Apache 2.4.8 you can use the SSLCertificateFile directive to point to a file that contains both the web site's certificate and the intermediate certificate. Versions lower than 2.4.8 have to use the SSLCertificateChainFile directive.

Simply concatenate the web site certificate first, then the intermediate certificate, into one file:

$ cat /path/to/example.org.pem > /path/to/example.org-chain.pem
$ cat /path/to/intermediate.pem >> /path/to/example.org-chain.pem

Then add the following code to your virtual host configuration:

SSLCertificateFile    "/path/to/example.org-chain.pem"
SSLCertificateKeyFile "/path/to/example.org.key"
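To check which certificates your server actually sends, you can (for example) inspect the handshake with openssl; the output lists every certificate in the chain:

$ openssl s_client -connect example.org:443 -servername example.org < /dev/null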

Published on 2015-10-16


HTTPS: SSL client certificate unknown error

My SSL client certificate expired a few days ago, and I renewed it (created a new one) at cacert.org. Visiting my feed reader instance and confirming login with the client certificate, I got an error:

ssl_error_certificate_unknown_alert

Firefox 38.0

and

Certificate-based authentication failed
ERR_BAD_SSL_CLIENT_AUTH_CERT

Chromium 41.0.2272.76

The Apache error log did not show anything, and the access log didn't even show the requests the browsers made.

Wireshark

As always, Wireshark helped me understand what was going on.

The data I got from Wireshark during the SSL handshake was:

TLSv1.2 Certificate, Client Key Exchange, Certificate Verify
TLSv1.2 Alert (Level: Fatal, Description: Certificate Unknown) (Code 46)

This alone does not say much; the corresponding RFC says about Code 46:

certificate_unknown
  Some other (unspecified) issue arose in processing the
  certificate, rendering it unacceptable.

Looking deeper into Wireshark's network log showed that the client certificate was issued by the CAcert class 3 certificate. It is not the root CA certificate, but an intermediate certificate which itself is signed by the CAcert class 1 root certificate.

The trust chain thus was the following:

CAcert class 1 root >> CAcert class 3 >> my client certificate

As I described in my SSL client cert server configuration article, you have to tell Apache how deep the trust chain may be with the SSLVerifyDepth setting.

My server had a setting of 1, while my new client certificate requires 2. After changing that and restarting Apache, it worked again in all browsers.
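For reference, the setting lives next to the other client certificate directives in the virtual host configuration (whether you use optional or require depends on your setup):

SSLVerifyClient require
SSLVerifyDepth  2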

Published on 2015-05-22


"Push to my OUYA" is a pull

Push to my OUYA button

Each game on the OUYA games website has a "Push to my OUYA" button on the top right. I wondered if it's really a push, or more a pull in technical terms.

Clicking it gives you - when logged in - the following message:

VLC MEDIA PLAYER FOR OUYA will start downloading to your OUYA within the next minutes

Push confirmation

The text "within the next minutes" strongly suggest a pull. But let's have a look at the actual implementation.

Implementation

Registration

Clicking the push button on the website calls the following URL: https://www.ouya.tv/integration/remoteAccount.php?URI=org.videolan.vlc.betav7neon

This click event is tracked by Google analytics and optimizely.com, as OUYA tracks everything you do.

Push

To get access to the OUYA console's actual network traffic, I set up a man-in-the-middle proxy like before.

Once mitmproxy was running, I could see the HTTP requests. Here is the "push" request from the console to the server:

GET https://devs.ouya.tv/api/v1/queued_downloads?auth_token=...

{
    "queue": [
        {
            "source": "gamer", 
            "uuid": "org.videolan.vlc.betav7neon"
        }
    ]
}

This URL is fetched by the console every 5 minutes, which explains the "within the next minutes" text. The OUYA-claimed push is actually a pull.

I would have been really surprised if it had been a push in every respect.

Queue cleanup

After the OUYA finds a game in the download queue, it fetches its metadata via a GET request to https://devs.ouya.tv/api/v1/apps/org.videolan.vlc.betav7neon?auth_token=...

If that works, it adds the game to its internal queue and removes it from the queue on the server:

DELETE https://devs.ouya.tv/api/v1/queued_downloads/org.videolan.vlc.betav7neon?auth_token=...

Finally a popup is shown on the TV screen:

Popup on your OUYA, indicating download

Published on 2014-05-12