The Defold game engine is able to generate applications that run on Desktop, smartphones and HTML5 web browsers. Just for fun I built an application that extracts textures and Lua scripts from the generated binary data files.
A game would not run on my computer, and I thought it would be worthwhile to at least extract its image assets. It had a 1.5 GiB file called game.arcd that probably contained all the data, and I just had to extract it from there.
I had no idea what tool was used to create the game.
Searching for the .arcd file type did not give me any results (looking for game.arcd would have helped, but I only learned that later).
Instead I ran strings on the executable file to see what messages are inside it:
$ strings -n 10 executable |wc -l
5799
$ strings -n 10 executable |grep -iE 'error|fail'
...
ERROR:DLIB: dmLog already initialized
...
$ strings -n 10 executable
...
dmGraphics::ValidateAsyncJobProcessing
dmGraphics::OpenGLClear
dmGraphics::OpenGLFlip
...
I wanted at least 10 characters because otherwise it would just be too much output - 5800 lines were enough already. Then I looked for non-generic error messages and non-standard API strings, in the hope that they would tell me which engine was in use.
Searching for both dmLog and dmGraphics told me that the Defold game engine was used here.
I looked for existing tools that could unpack the game.arcd file and found Unfold, an unpacker written in Defold itself. Unfortunately it did not work.
The Defold documentation describes the game archive format, which did not seem to be very complicated.
When inspecting the Defold source code I saw that there were .proto files, which are specifications for "Protocol Buffers", a binary serialization format. These .proto files can be compiled into Java, C, C++, PHP and other languages. That's enough to read all the files.
I also found an ArchiveReader.java file that already implemented a file extractor! So I downloaded IntelliJ IDEA, set up a new Java project and wrote a tool that utilized the reader.
Working on a 1.5 GiB file would be slow, so I checked out the Defold games showcase and downloaded a game that looked small. First I used the browser's network inspector to obtain archive_files.json, and then downloaded all files linked from it:
$ jq -r .content[].pieces[].name < archive_files.json | xargs -L1 -I{} wget http://example.org/archive_files_dir/{}
The files extracted by my tool were of no use. After some tinkering I found out that files can be compressed and encrypted, and neither feature was implemented in the ArchiveReader class.
An hour later I had a BetterArchiveReader with support for both compression and decryption, and it was able to extract .luac and .texturec files.
The .luac files contain the Lua source code, but also some binary data before and after the actual script code. This means the files were again packed protobufs and needed to be unpacked.
.texturec files may contain multiple texture formats, and each of them has metadata about the actual image format. The files I was interested in had Compression: basis UASTC, which turned out to be a format suitable for loading directly onto the graphics card.
Luckily the basis_universal basisu command line tool already contains an unpacker that is able to convert .basis texture files back into .png!
The tool itself worked, but had hard-coded paths and I had to comment out code to switch between extraction and listing.
I decided to add a nice command line interface so others would not have to jump through the same hoops as I did. After evaluating Java command line parsing libraries, I decided on JCommander.
After the initial work of about 5 hours, I spent 1.5 days building the CLI. It even has a pretty help screen:
[files to extract]
Options:
--arci
.arci index file path
--dmanifest
.dmanifest file path
--extract-lua
Extract Lua scripts from .luac files
Default: false
--extract-textures
Extract textures from .texturec files
Default: false
-f, --filter
File extension filter
-l, --list
List archive contents
Default: false
--outdir
Directory to extract files to
-v, --verbose
Show names of extracted files
Default: false
lua Inspect or extract a .luac file
Usage: lua [options] <.luac file path>
Options:
-v, --verbose
Show names of extracted files
Default: false
-i
Show .luac file information. Do not extract.
Default: false
texture Inspect or extract a texture file
Usage: texture [options] <.texturec file path>
Options:
-v, --verbose
Show names of extracted files
Default: false
-i
Show texture information. Do not extract.
Default: false
The tool is called arcdEx, is open source and can be downloaded from codeberg.org/cweiske/arcdEx.
Published on 2022-08-20 in games, programming
While developing a Drupal 7 module at work, I got the following error message:
The handler for this item is broken or missing and cannot be used. If a module provided the handler and was disabled, re-enabling the module may restore it. Otherwise, you should probably delete this item.
The actual problem was that the hook method I used was declared in a file that was not listed in the module's .info file. After adding it there the error was gone:
files[] = mymodule_handler_file.inc
Published on 2020-11-21 in drupal, programming
Devin Rich is building his own dynamic OUYA API server, which will let you edit existing game data and add new entries.
While building it, some changes to the 1200+ OUYA game data files were necessary, and I got those changes as one big commit. It added missing UUIDs for all developers, but also trimmed some text fields here and there.
I love sensible, clean commits that do one thing, so I wanted one commit that fixed the UUIDs and one for the rest. git add -p is the normal way to go when splitting changes into several manageable commits, but that was not feasible with over 900 changed files.
After some searching I found grepdiff, which is part of patchutils.
It lets you find all changes in a patch file that match a given regex. By default it shows the files that contained a match - but it can also create a new patch containing only the matching hunks! The magic parameter for this is --output-matching=hunk.
A simple file:
$ cat gamelist
best=Bloo Kid 2
second=Babylonian Twins
third=SNES9x
Now we change second and third place:
$ git diff
diff --git gamelist gamelist
index b0aeab0..548adad 100644
--- gamelist
+++ gamelist
@@ -1,3 +1,3 @@
-best=Bloo Kid 2
+best=Hidden in plain sight
second=Babylonian Twins
-third=SNES9x
+third=Bomb Squad
My goal is now to only add the change to the "best" line to the git staging area. We do not want the default three lines of context as shown above, because that merges both changes into one hunk. Instead I use 0 lines of context:
$ git diff -U0
diff --git gamelist gamelist
index b0aeab0..548adad 100644
--- gamelist
+++ gamelist
@@ -1 +1 @@
-best=Bloo Kid 2
+best=Hidden in plain sight
@@ -3 +3 @@ second=Babylonian Twins
-third=SNES9x
+third=Bomb Squad
Now grepdiff can be used to get only those hunks that contain best:
$ git diff -U0 | grepdiff best --output-matching=hunk
diff --git gamelist gamelist
index b0aeab0..548adad 100644
--- gamelist
+++ gamelist
@@ -1 +1 @@
-best=Bloo Kid 2
+best=Hidden in plain sight
This patch file can now be piped into git apply to stage it:
$ git diff -U0 \
  | grepdiff best --output-matching=hunk \
  | git apply --cached --unidiff-zero -p0
$ git diff --cached
diff --git gamelist gamelist
index b0aeab0..02e36aa 100644
--- gamelist
+++ gamelist
@@ -1,3 +1,3 @@
-best=Bloo Kid 2
+best=Hidden in plain sight
second=Babylonian Twins
third=SNES9x
$ git diff
diff --git gamelist gamelist
index 02e36aa..548adad 100644
--- gamelist
+++ gamelist
@@ -1,3 +1,3 @@
best=Hidden in plain sight
second=Babylonian Twins
-third=SNES9x
+third=Bomb Squad
-p0 is necessary because I use the diff.noprefix config option that omits the leading a/ and b/ in the filenames in diffs.
--unidiff-zero is needed because we used zero lines of context.
Published on 2019-12-04 in git, programming
While doing some fixes for my Tomboy note sync server grauphel, I needed a way to test its authentication code on my local server - but I did not want to lose the Tomboy configuration for my live server.
tomboy has a --note-path parameter which allows you to work with a different set of notes. This still keeps the original sync server configuration.
~/.config/tomboy/ does not contain any sync server information; in fact I found that it didn't really contain any usable configuration information that I had expected there.
Using gconf-editor I found that all configuration is stored in /apps/tomboy; the synchronization server settings are in /apps/tomboy/sync/tomboyweb/.
With gconftool it is possible to back up and restore parts of the GNOME configuration tree, and I could use it to get a kind of "tomboy configuration profiles":
$ mkdir ~/tmp/tomboy
$ cd ~/tmp/tomboy
$ gconftool --dump /apps/tomboy > tomboy-config-live.xml
#clear all settings
$ gconftool --unload tomboy-config-live.xml
$ tomboy --note-path .
#.. test things
#.. restore & start normally
$ gconftool --load tomboy-config-live.xml
$ tomboy
Published on 2016-03-18 in linux, programming
At work we want to add an e-ink display to each conference room door, and let it show the next appointments happening in there.
To easily access the Google calendars, a colleague decided to write a Google Apps Script that fetches the calendar data and converts it into JSON that the mini computer can use.
When running a script as web application, you do not get any output in the case of errors, which makes it hard to debug.
So instead of doing that, set a breakpoint on the last line of your calculation function (click on its line number). Then select doGet in the toolbar, and press the bug button to start debugging.
The code will run, and you will be able to inspect all the variables (except objects, which is pretty lame) at the breakpoint. If an error occurs, it will also jump to the erroneous line and show you the error message.
Published on 2018-09-23 in programming
You don't call the framework, the framework calls you.
Other people can better explain why composing libraries is better than using a framework:
If you don't want to read much, then I can still persuade you that working with (and thus debugging inside) a framework is no joy:
Call Stack:
1. {main}()
2. Illuminate\Foundation\Http\Kernel->handle()
3. Illuminate\Foundation\Http\Kernel->sendRequestThroughRouter()
4. Illuminate\Pipeline\Pipeline->then()
5. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}()
6. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}()
7. Dingo\Api\Http\Middleware\Request->handle()
8. Dingo\Api\Http\Middleware\Request->sendRequestThroughRouter()
9. Illuminate\Pipeline\Pipeline->then()
10. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}()
11. Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode->handle()
12. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}()
13. Immogic\Http\Middleware\LogAfterRequest->handle()
14. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}()
15. Dingo\Api\Http\Middleware\Request->Dingo\Api\Http\Middleware\{closure}()
16. Dingo\Api\Routing\Router->dispatch()
17. Dingo\Api\Routing\Adapter\Laravel->dispatch()
18. Illuminate\Routing\Router->dispatch()
19. Illuminate\Routing\Router->dispatchToRoute()
20. Illuminate\Routing\Router->runRouteWithinStack()
21. Illuminate\Pipeline\Pipeline->then()
22. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}()
23. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}()
24. Dingo\Api\Http\Middleware\PrepareController->handle()
25. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}()
26. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}()
27. Immogic\Http\Middleware\Authenticate->handle()
28. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}()
29. Illuminate\Routing\Router->Illuminate\Routing\{closure}()
30. Illuminate\Routing\Route->run()
31. Illuminate\Routing\Route->runController()
32. Illuminate\Routing\ControllerDispatcher->dispatch()
33. Illuminate\Routing\Controller->callAction()
34. call_user_func_array:{/var/www/vendor/laravel/framework/src/Illuminate/Routing/Controller.php:55}()
35. Immogic\Http\Controllers\Api\V1\BrokerController->index()
36. Dingo\Api\Http\Response\Factory->paginator()
37. Dingo\Api\Http\Response->__construct()
38. Symfony\Component\HttpFoundation\Response->__construct()
39. Dingo\Api\Http\Response->setContent()
40. Illuminate\Http\Response->setContent()
41. Illuminate\Http\Response->morphToJson()
42. Illuminate\Pagination\LengthAwarePaginator->toJson()
43. Illuminate\Pagination\LengthAwarePaginator->jsonSerialize()
44. Illuminate\Pagination\LengthAwarePaginator->toArray()
45. Illuminate\Support\Collection->toArray()
46. array_map()
47. Illuminate\Support\Collection->Illuminate\Support\{closure}()
48. Immogic\User->toArray()
49. Immogic\User->getFileSizeField()
50. Immogic\User->getFileUrl()
51. app()
52. Illuminate\Foundation\Application->make()
53. Illuminate\Container\Container->make()
54. Illuminate\Container\Container->build()
55. Dingo\Api\Provider\RoutingServiceProvider->Dingo\Api\Provider\{closure}()
56. Dingo\Api\Routing\Router->getRoutes()
57. Dingo\Api\Routing\Router->createRoute()
58. Dingo\Api\Routing\Route->__construct()
59. Dingo\Api\Routing\Route->setupRouteProperties()
60. Dingo\Api\Routing\Route->mergeControllerProperties()
61. Dingo\Api\Routing\Route->makeControllerInstance()
62. Illuminate\Foundation\Application->make()
63. Illuminate\Container\Container->make()
64. Illuminate\Container\Container->build()
65. ReflectionClass->newInstanceArgs()
66. Immogic\Http\Controllers\Api\ProducerDb\ProducersController->__construct()
67. Dingo\Api\Transformer\Factory->register()
68. xdebug_print_function_stack()
I've shortened the output for brevity. I'm sure you spotted the error.
Published on 2016-12-09 in php, programming
I spent two weeks with my family on vacation in Austria. When I came back, I had a long thread of emails in my inbox about a feature request for my JsonMapper library.
The worst thing in there was the following comment:
But anyways, I see that this repository or it's dead or the author doesn't care with the PR's.
Claudio Santoro, student with much time
14 days. Fourteen. No answer within 14 days, and you do not care. THIS SUCKS.
This falls into the same line as What it feels like to be an open-source maintainer, Why I took October off from OSS volunteering and After 10 years, I'm stopping my work on sabre/dav.
People expect everything and more from open source software and its maintainers, without thinking even a little bit. Those people suck.
Published on 2017-06-28 in bigsuck, leben, php, programming
Last week someone thought that it's a good idea to invent a new standard for feeds: JSON feed.
So in addition to the four incompatible-with-each-other and underspecified RSS formats (RSS 0.90, RSS 0.91, RSS 1.0, and RSS 2.0), the correctly spec'ed Atom format and the HTML-based h-feed we have a seventh one that future feed readers will also have to support.
One of the reasons for inventing this new format is:
For most developers, JSON is far easier to read and write than XML.
One of the problems with the XML-based feed formats is that software spits out non-wellformed XML, which cannot be read with XML libraries.
The reason for this is that people think "that looks like HTML, let's write an HTML template for the XML feed" - which breaks at the first character that needs to be escaped. This could have been prevented if those people had simply used an XML library to generate the feed XML. And yes, every programming language has had an XML library for 15 years.
So now the JSON feed people come, see this as a problem and say: Hey, JSON is so easy to generate with libraries - let's ditch XML and use JSON.
Now guess what happens? People use the HTML templating engine to generate JSON that breaks at the first character that needs to be escaped.
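The failure mode is easy to reproduce in a few lines of shell (a minimal sketch; the sed escaping below handles only backslashes and double quotes, real JSON escaping covers more characters):

```shell
title='He said "hi"'

# Naive templating: splice the value straight into the JSON "template".
# The embedded double quotes produce invalid JSON.
printf '{"title": "%s"}\n' "$title"

# Escape the value first, then splice it in - now the output is valid JSON:
# {"title": "He said \"hi\""}
escaped=$(printf '%s' "$title" | sed 's/\\/\\\\/g; s/"/\\"/g')
printf '{"title": "%s"}\n' "$escaped"
```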
Dear Brent Simmons and Manton Reece: You tried to fight human nature with a new standard, and failed.
The JSON feed spec v1 states:
JSON Feed files must be served using the same MIME type - application/json - that's used whenever JSON is served.
Congratulations, my tools now cannot differentiate between plain JSON files, JF2 feeds and JSON feeds when trying to discover feeds on an HTML page.
A proper solution would have been to use the type/subtype+format scheme that is already used by Atom (which has application/atom+xml): application/jsonfeed+json.
[JSON feed] reflects the lessons learned from our years of work reading and publishing feeds.
HTTP responses, HTML pages and Atom feeds have the ability to link to other resources. This is all nicely specified in RFC 5988: Web Linking.
New technologies like the realtime change-notification system WebSub rely on the ability of feeds to link to their hub. And the JSON feed people did not even think to add support for links, because in the years of publishing feeds they never wanted to notify subscribers in realtime about updates.
Published on 2017-06-01 in indieweb, php, programming, web
When I started a new project at work, I configured our Jenkins server to automatically deploy to production and testing servers when either git master or develop branches get pushed to - but only if all the tests pass.
In our case, it's only syntax checks for HTML, PHP, SCSS, SQL and XML files as well as coding style checks for those files.
But when things get time-critical, nobody can use the excuse that "it has to be quick now" - because your code simply will not go live unless you follow the rules. And not only that: everybody else's code will not go live either, because of you.
This really helps keep developers playing by the rules :)
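The gate itself is simple to sketch (the checker commands named in the comments are assumptions, not the post's confirmed toolchain): with set -e the script aborts at the first failing check, so the deploy step at the end is only reached when everything passed.

```shell
#!/bin/sh
set -e

# Syntax checks would go here, e.g.:
#   php -l src/index.php        # PHP
#   xmllint --noout config.xml  # XML
run_checks() {
    true   # placeholder so the sketch runs anywhere
}

run_checks
echo "all checks passed - deploying"
```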
In case you were wondering which tools we use for syntax and style checking:
Published on 2015-12-15 in html, php, programming, tools, xml
I wanted to store geocoordinates in an SQL database table and asked myself which resolution / how many decimal places I should configure for the DECIMAL values.
OpenStreetMap uses at most 5 decimal places in its URLs. Google Maps always uses 7 decimal places to embed coordinates in the URL. Bing, in turn, uses 6 decimal places :)
A circle has 360°, and geocoordinates run from -180° to +180°. So three digits before the decimal point are already needed.
For the number of decimal places, we need to know how many kilometers are covered by one degree, a tenth of a degree, a hundredth of a degree, and so on.
The Earth is approximately a sphere, so what we actually want is the length of an arc with a given angle.
The length of an arc b is b = π × r × α / 180°.
The Earth's radius is 6378.16 kilometers, so we get the following table:
Decimal places | Degrees | Arc length |
---|---|---|
0 | 1 | 111.31 km |
1 | 0.1 | 11.13 km |
2 | 0.01 | 1.11 km |
3 | 0.001 | 111.3 m |
4 | 0.0001 | 11.1 m |
5 | 0.00001 | 1.11 m |
6 | 0.000001 | 0.11 m |
7 | 0.0000001 | 11.1 mm |
8 | 0.00000001 | 1.1 mm |
9 | 0.000000001 | 0.11 mm |
If you want a resolution of one meter, you need at least 6 decimal places.
In MySQL that is DECIMAL(9,6) (9 digits in total, 6 of them after the decimal point).
Published on 2015-12-13 in programming