It is possible to pack an entire PHP web application into one single file and run it without unpacking it. These files usually have a .phar extension, an acronym for PHp ARchive, loosely modeled on jar (Java ARchive).
The PEAR installer has been distributed as a single .phar file for ages, thanks to the PHP_Archive package.
Since PHP 5.3.0, the Phar extension is an official part of PHP. Shipping your applications as Phar is thus safe, now that PHP 5.2 has reached its end of life.
Distributing an application as a Phar is not all sunshine; some things need to be considered:
For me, Phar archives are a nice way to try out new software with minimal setup issues.
Until the Linux distributions have strong Phar support, you should not rely on Phar exclusively to distribute your web application.
While .phar files can be saved as .zip or .tar and opened with a normal compression utility, adding or extracting the metadata and the index file stub is impossible without special tools.
PHP's source distribution ships with a phar executable that provides a comprehensive interface to Phar files:
$ phar help-list
add compress delete extract help help-list info list
meta-del meta-get meta-set pack sign stub-get stub-set
tree version
With its command line interface, you can create new Phar files, extract files from existing ones or repack, compress, sign and change their meta data and index stub.
Debian has shipped it since PHP 5.6, but Ubuntu is still missing it.
Krzysztof Kotowicz's phar-util tool has been written for
building, signing and verifying Phar archives with OpenSSL public/private keys
Either clone the git repository or install it from its PEAR channel:
$ pear channel-discover pear.kotowicz.net
$ pear install kotowicz/PharUtil-beta
Phing, my favorite build tool, is able to create Phar archives natively:
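Such a build target might look roughly like this (a sketch; the target name, paths and compression settings are illustrative - check the Phing documentation for the exact attributes of the pharpackage task):

```xml
<!-- Sketch of a Phing phar build target; names and paths are examples -->
<target name="phar">
  <pharpackage destfile="dist/myapp.phar"
               basedir="src"
               stub="src/stub.php"
               compression="gzip">
    <fileset dir="src">
      <include name="**/*.php"/>
    </fileset>
  </pharpackage>
</target>
```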
I'm using it to generate the SemanticScuttle Phar release file on deployment automatically.
Since everything is in one big file, accessing the README and INSTALL files inside the Phar archive is hard for most users.
Making it available through the .phar in the browser is not the best option, because that requires the user to have a web server running and the application already set up. Also, exposing the README with the version number gives potential attackers important version information.
Alternatively you can offer CLI commands to extract the whole documentation or parts of it. This requires the user to know that it's possible.
SemanticScuttle's .phar offers several CLI commands:
[options] [args]

Options:
  -h, --help     show this help message and exit
  -v, --version  show the program version and exit

Commands:
  list (alias: l)
  extract (alias: x)
  run (alias: r)
The user can get a list of certain files - only the ones he'll need, like documentation, default configuration file template and database schema files - and extract them.
There is also a way to execute tool scripts inside the phar, e.g. the avahi export or upgrade scripts.
I used PEAR's awesome Console_CommandLine package to handle input arguments and options. It also generates the help screen automatically.
Your application probably needs to be configured by the user. Normal web apps have a config distribution file the user copies and makes the necessary changes in - easy. Your application also knows where it is and can load it without problems.
With a Phar, things are different. First, the user needs to get the config file template from somewhere, preferably from the phar itself. As seen above, listing the files and extracting it is possible via the CLI interface.
The user probably does not know how to do this at the beginning, so the application should detect that the file is missing and give the user instructions on how to extract it and where to save it. Don't expect the directory to be writable.
As for the configuration file location: I chose $nameOfThePhar.config.php for SemanticScuttle, because this way it is possible to have several installations beside each other. It also makes clear that the phar and the config file belong together.
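Such a missing-config check can be sketched in a few lines; the function names and the instruction text here are illustrative, not SemanticScuttle's actual code:

```php
<?php
// Sketch: derive the config file name from the phar's own file name
// ($nameOfThePhar.config.php) and tell the user what to do when it is
// missing. Function names and the instruction text are illustrative.
function getConfigFileName($pharPath)
{
    // /var/www/semanticscuttle.phar -> semanticscuttle.phar.config.php
    return basename($pharPath) . '.config.php';
}

function checkConfig($pharPath)
{
    $config = dirname($pharPath) . '/' . getConfigFileName($pharPath);
    if (!file_exists($config)) {
        return "Configuration file missing.\n"
            . "Extract the template with:\n"
            . "  php " . basename($pharPath) . " extract config.php\n"
            . "and save it as " . $config . "\n";
    }
    return null;
}
```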
To get some hard data to talk about, I did some benchmarks comparing delivery of normal files vs. files inside the .phar.
I used ApacheBench, Version 2.3 $Revision: 655654 $, Apache/2.2.17, mod_php 5.3.5-1ubuntu7.2 and apc 3.1.3p1-2.
All URLs have been fetched 1000 times, with a concurrency of 20.
Greg, Phar's father, benchmarked phpMyAdmin in 2008 and measured nearly identical performance.
Most applications have static files - CSS, images, Javascript.
| | Total time | Requests/second | Transfer rate |
|---|---|---|---|
| Direct | 0.275 s | 3636.72 | 39709.15 Kb/s |
| readfile() with APC | 0.464 s | 2156.37 | 23357.88 Kb/s |
| readfile() without APC | 0.386 s | 2589.13 | 28045.55 Kb/s |
| Phar with APC | 6.298 s | 158.78 | 1723.30 Kb/s |
| Phar without APC | 6.259 s | 159.78 | 1734.13 Kb/s |
Unsurprisingly, static files are delivered really, really fast when Apache serves them directly without asking PHP. Delivery times of static files from the Phar do not differ whether the bytecode cache is on or off.
Here are the numbers for the SemanticScuttle index page - some SQL is executed, application caching is disabled.
| | Total time | Requests/second | Transfer rate |
|---|---|---|---|
| Direct without APC | 35.534 s | 28.14 | 349.06 Kb/s |
| Phar without APC | 30.614 s | 32.67 | 466.57 Kb/s |
| Direct with APC | 22.377 s | 44.69 | 554.20 Kb/s |
| Phar with APC | 21.613 s | 46.27 | 660.74 Kb/s |
| Direct with APC, apc.stat=0 | 22.731 s | 43.99 | 545.86 Kb/s |
| Phar with APC, apc.stat=0 | 32.931 s | 30.37 | 433.84 Kb/s |
This was a bit of a surprise for me: the pages are delivered fastest when the Phar is used. The reason is probably the saved file modification time lookups that APC normally does to check whether its bytecode cache is stale.
To rule out the cache check overhead, I set apc.stat=0 and ran the tests again. Now what? The application was slower! The Phar was even slower than without APC! I guess apc.stat=0 combined with relative includes (which SemanticScuttle uses everywhere) makes a really bad combination.
Published on 2011-08-25 in pear, php, semanticscuttle, tools
Imagine you visit a web site and are instantly and automatically logged in.
Without filling in username and password in a login form.
Without filling the OpenID field and clicking 3 times.
Without clicking the button of your browser's autologin extension.
Without a single cookie sent from your browser to the server.
Yet, you authenticate yourself to, and get authorized by, the web server.
Yes, this is possible - with SSL client certificates. I use them daily to access my self-hosted online bookmark manager and feed reader.
During the last weeks I spent quite some time implementing SSL Client Certificate support in SemanticScuttle, and want to share my experiences here.
Client certificates are, as the name indicates, installed on the client - that is the web browser - and transferred to the server when the server requests them and the user agrees to send it.
The certificates are issued by a Certificate Authority (CA), that is a commercial issuer, a free one like CAcert.org, your company or just you yourself, thanks to the power of the openssl command line tool (or a web frontend like OpenCA).
The CA is responsible for giving you a client certificate and a matching private key for it. The client certificate itself is sent to the server, while the private key is used to sign the request. This signature is verified on the server side, so the server knows that you are really the one that the certificate belongs to.
Note that client certificates can only be used when accessing the server with HTTPS.
The certificates also have an expiration date, after which they are not valid anymore and need to be renewed. When implementing access control, we need to take this into account.
Your web server must be configured for HTTPS, which means you need a SSL server certificate. Get that working first, before tackling the client certificates.
Until some years ago, there was a rule "one port, one certificate". You could only run one single HTTPS website on a single port on the server, except in certain circumstances. The HTTPS port is 443, and using another port for the next SSL-secured domain means problems with firewalls and much harder linking - plain "https://example.org/" does not work anymore; you need a port number in the URL which you - as a visitor - don't know in advance.
The "special circumstances" were wildcard domains and the assumption that you only wanted to secure subdomains: app1.example.org, app2.example.org etc.
The problem lies in the foundations of SSL: the certificate exchange happens before any HTTP protocol data is transmitted, and since the certificate contains the domain name, the server cannot deliver the correct certificate when several SSL hosts share the same port.
Fast-forward to now. We have SNI which solves the problem and gives nobody an excuse anymore to not have SSL secured domains.
With SNI, the browser does send (indicate) the host name it wants to contact during certificate exchange, which causes the server to return the correct certificate.
All current browsers on current operating systems support that. Older systems with Windows XP or OpenSSL < 0.9.8f do not support it and will get the certificate of the first SSL host.
I assume you're going to get the certificate from CAcert.
First, generate a Certificate Signing Request with the CSR generator. Store the key file under
/etc/ssl/private/bookmarks.cweiske.de.key
Use the .csr file and the CAcert web interface to generate a signed certificate. Store it as
/etc/ssl/private/bookmarks.cweiske.de-cacert.pem
Now fetch both official CAcert certificates (root and class 3) and put both together into
/etc/ssl/private/cacert-1and3.crt
A basic virtual host configuration with SSL looks like this:
<VirtualHost *:443>
    ServerName bookmarks.cweiske.de

    LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
    CustomLog /var/log/apache2/access_log vcommon

    VirtualDocumentRoot /home/cweiske/Dev/html/hosts/bookmarks.cweiske.de
    <Directory /home/cweiske/Dev/html/hosts/bookmarks.cweiske.de>
        AllowOverride all
    </Directory>

    SSLEngine On
    SSLCertificateFile    /etc/ssl/private/bookmarks.cweiske.de-cacert.pem
    SSLCertificateKeyFile /etc/ssl/private/bookmarks.cweiske.de.key
    SSLCACertificateFile  /etc/ssl/private/cacert-1and3.crt
</VirtualHost>
Apart from that, you might need to enable the SSL module in your webserver, i.e. by executing
$ a2enmod ssl
Restart your HTTP server. You should be able to request the pages via HTTPS now.
A web server does not require any kind of client certificate by default; this is something that needs to be activated.
The client certs may be required or optional, which leaves you the comfortable option of letting users log in normally via username/password or with SSL certificates - just as they wish.
Modify your virtual host as follows:
SSLVerifyClient optional
SSLVerifyDepth 1
SSLOptions +StdEnvVars
There are several options you need to set:
You may choose optional or require here. optional asks the browser for a client certificate, but accepts it if the browser (the user) chooses not to send any certificate. This is the best option if you want to be able to log in both with and without a certificate.
The setting require makes the web server terminate the connection when no client certificate is sent by the browser. This option may be used when all users have their client certificates set up.
If you want to allow self-signed certificates that are not signed by one of the official CAs, use SSLVerifyClient optional_no_ca.
Your client certificate is signed by a certificate authority (CA), and your web server trusts the CA specified in SSLCACertificateFile. CA certificates themselves may be signed by another authority, e.g.
CAcert >> your own CA >> your client certificate
In this case, you have a higher depth. For most cases, 1 is enough.
This makes your web server pass the SSL environment variables to PHP, so that your application can detect that a client certificate is available and read its data.
In case you need the complete client certificate, you have to add +ExportCertData to the line.
This multiplies the size of the data exchanged between the web server process and PHP, which is why it's deactivated most of the time.
If you restart your web server now, it will request a client certificate from you.
It may happen that the browser does not pop up the cert selection dialog. The web server may send a list of CAs that it considers valid to the browser. If the browser does not have a certificate from one of those CAs, it does not display the popup.
You can fix this issue by setting SSLCADNRequestFile or SSLCADNRequestPath.
Thanks to Gerard Caulfield for bringing this to my attention.
With Apache, you may use SSL client certificate details in your log files: create a new log format and use the SSL client environment variables:
%{SSL_CLIENT_S_DN_Email}e %{SSL_CLIENT_M_SERIAL}e
Thanks to Hans Schou for this idea.
Let's collect the requirements for SSL client cert support in a typical PHP application:
When a client certificate is available, the $_SERVER array contains a bunch of SSL_CLIENT_* variables.
$_SERVER['SSL_CLIENT_VERIFY'] is an important one. Don't use the certificate if it does not equal SUCCESS. When no certificate is passed, it is NONE.
Another important variable is SSL_CLIENT_M_SERIAL with the serial that uniquely identifies a certificate from a certain Certificate Authority.
All SSL_CLIENT_I_* variables are about the issuer, that is, the CA. The SSL_CLIENT_S_* variables are about the "subject" - the user who sent the client certificate.
SSL_CLIENT_S_DN_CN for example contains the user's name - "Christian Weiske" in my case - which can be used during registration together with SSL_CLIENT_S_DN_Email to give a smooth user experience.
Always remember that you need to configure your web server to pass the variables to your PHP process.
Associating a user account with a client certificate is crudely possible by just enabling +ExportCertData and storing the whole client certificate in your user database.
This is bad for two reasons:
According to the PostgreSQL manual,
The combination of certificate serial number and certificate issuer is guaranteed to uniquely identify a certificate (but not its owner — the owner ought to regularly change his keys, and get new certificates from the issuer).
So we can use SSL_CLIENT_M_SERIAL together with SSL_CLIENT_I_DN to uniquely identify a certificate, without storing it as a whole.
A "renewed" certificate is in reality a new certificate with a new serial number. Thus the serial changes after renewal.
The combination of issuer and SSL_CLIENT_S_DN_Email should be more stable, letting the user log in even after he renewed his certificate - but only if he didn't change his email address.
SemanticScuttle uses the following code to check if a certificate is valid:
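In essence, the check only trusts the certificate when mod_ssl reports successful verification and both identifying fields are present. A standalone sketch (not the actual SemanticScuttle code; the function name is illustrative):

```php
<?php
// Sketch: decide whether the SSL client certificate data in $server
// (normally $_SERVER) may be used for authentication.
function isValidClientCert(array $server)
{
    // mod_ssl sets SSL_CLIENT_VERIFY to SUCCESS, NONE or an error string
    if (!isset($server['SSL_CLIENT_VERIFY'])
        || $server['SSL_CLIENT_VERIFY'] !== 'SUCCESS'
    ) {
        return false;
    }
    // serial + issuer DN uniquely identify the certificate
    return isset($server['SSL_CLIENT_M_SERIAL'])
        && isset($server['SSL_CLIENT_I_DN'])
        && $server['SSL_CLIENT_M_SERIAL'] !== ''
        && $server['SSL_CLIENT_I_DN'] !== '';
}
```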
While you can calculate a hash over serial and issuer DN, it's better to store each of the values separately in the database. This has the advantage that you may display them in the user's certificate list, and allow him to identify the cert later on.
Here is the database table structure of SemanticScuttle:
Apart from the two required columns, we also store the subject's name and email to again allow better certificate identification:
Certificate list in SemanticScuttle
The user needs to be able to manually register his current certificate with the application. Automatic registration without confirmation may not be desired, so the best is to just drop that idea.
SemanticScuttle offers a "register current certificate" button on the user profile page:
No certificates and a registration button in SemanticScuttle
Another option that improves usability is to associate the client certificate upon user registration. There should be a checkbox which alerts the user of that detail and allows him to disable it.
The following code registers the current certificate with the user's account in SemanticScuttle:
$query = 'INSERT INTO ' . $this->getTableName()
    . ' ' . $this->db->sql_build_array(
        'INSERT', array(
            'uId'               => $userId,
            'sslSerial'         => $_SERVER['SSL_CLIENT_M_SERIAL'],
            'sslClientIssuerDn' => $_SERVER['SSL_CLIENT_I_DN'],
            'sslName'           => $_SERVER['SSL_CLIENT_S_DN_CN'],
            'sslEmail'          => $_SERVER['SSL_CLIENT_S_DN_Email']
        )
    );
You might want to store the certificate's expiration date and automatically remove certificates once it is reached.
The Online Certificate Status Protocol (OCSP) allows you to check whether a certificate has been revoked. Revocation is useful if your certificate has been compromised somehow; it could have been stolen, or its password could have become public.
A certificate contains information about the CA's OCSP server, you can view it by running
$ openssl x509 -text -in /path/to/cert.pem
...
Authority Information Access:
OCSP - URI:http://ocsp.cacert.org
...
This information is not available in the standard SSL environment variables but needs to be extracted from the certificate data, which means that +ExportCertData needs to be enabled.
PHP's OpenSSL extension does not have a method to generate, send or evaluate OCSP requests, so checking the certificate is only possible with the openssl ocsp commandline tool or by implementing it yourself.
The command line tool is easy to use once you have stored the client certificate on disk. First we need the URL:
$ openssl x509 -in /tmp/client-cert.pem -noout -text | grep OCSP
OCSP - URI:http://ocsp.cacert.org
Now that we have the URL, it can be queried:
$ openssl ocsp -CAfile /etc/ssl/private/cacert-1and3.crt\
-issuer /etc/ssl/private/cacert-1and3.crt\
-cert /tmp/client-cert.pem\
-url http://ocsp.cacert.org
Example output of a revoked certificate:
At the time of writing, there sadly does not seem to be any PHP library that eases verifying SSL client certificates.
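Until such a library exists, shelling out to openssl is a workable substitute. A sketch that assembles the command shown above (paths and URL are examples; run the result with shell_exec() and look for "good" or "revoked" in the output):

```php
<?php
// Sketch: build the "openssl ocsp" command line for a client cert check.
// $caFile doubles as -CAfile and -issuer, as in the CAcert example above.
function buildOcspCommand($caFile, $certFile, $url)
{
    return 'openssl ocsp'
        . ' -CAfile ' . escapeshellarg($caFile)
        . ' -issuer ' . escapeshellarg($caFile)
        . ' -cert ' . escapeshellarg($certFile)
        . ' -url ' . escapeshellarg($url);
}
```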
Apache since version 2.3 is able to do the OCSP checks itself if you activate the SSLOCSPEnable option.
Published on 2011-06-03 in apache, http, php, semanticscuttle, server, ssl, web
I'm implementing OpenID for SemanticScuttle, your self-hosted social bookmark manager. To log in with OpenID, you need to know your OpenID URL, which many people do not know, and don't want to know. Most know their email address, and thanks to WebFinger, this is all you have to know!
WebFinger enables applications to discover information about people by just their e-mail address - for example their OpenID URL!
I didn't find a single standalone WebFinger library for PHP, so I asked on StackOverflow, but did not get any responses. Having failed to stand on the shoulders of giants, I went the hard way and implemented it all myself: Net_WebFinger, based on XML_XRD.
WebFinger weaves RFC 6415: Web Host Metadata with LRDD which both use XRD files.
Thus the first step was to build a clean XRD library for PHP, with an intuitive API and 100% unit test coverage. I proposed the XML_XRD package on 2012-02-01 and called for votes 8 days later. It was accepted with 11 votes. Extensive documentation also exists now.
After the foundation was laid, I proposed the Net_WebFinger package. It was accepted as a new PEAR package this night, and just a few minutes ago it got its first official release and a lot of documentation.
So, discovery is easy now! First, install the PEAR package:
$ pear install net_webfinger-alpha
Now the PHP code:
<?php
require_once 'Net/WebFinger.php';
$wf = new Net_WebFinger();
$react = $wf->finger('user@example.org');
if ($react->openid !== null) {
echo 'OpenID provider found: ' . $react->openid . "\n";
}
//list all other links:
foreach ($react as $link) {
echo 'Link: ' . $link->rel . ' to ' . $link->href . "\n";
}
?>
Net_WebFinger ships with a command line client that you can use to try it out. Find it with
$ pear list-files net_webfinger|grep cli
doc /usr/share/php/docs/Net_WebFinger/examples/webfinger-cli.php
Yahoo and Google already support WebFinger. Distributed social networks like status.net (that powers identi.ca) and Diaspora use WebFinger to distribute public encryption keys, OStatus and Salmon URLs. You can try one of those user addresses, too.
$ php /usr/share/php/docs/Net_WebFinger/examples/webfinger-cli.php klimpong@gmail.com
Discovering klimpong@gmail.com
Information secure? false
OpenID provider: http://www.google.com/profiles/klimpong
Link: http://portablecontacts.net/spec/1.0: http://www-opensocial.googleusercontent.com/api/people/
Link: http://portablecontacts.net/spec/1.0#me: http://www-opensocial.googleusercontent.com/api/people/102024993121974049099/
Link: http://webfinger.net/rel/profile-page: http://www.google.com/profiles/klimpong
Link: http://microformats.org/profile/hcard: http://www.google.com/profiles/klimpong
Link: http://gmpg.org/xfn/11: http://www.google.com/profiles/klimpong
Link: http://specs.openid.net/auth/2.0/provider: http://www.google.com/profiles/klimpong
Link: describedby: http://www.google.com/profiles/klimpong
Link: describedby: http://www.google.com/s2/webfinger/?q=acct%3Aklimpong%40gmail.com&fmt=foaf
Link: http://schemas.google.com/g/2010#updates-from: https://www.googleapis.com/buzz/v1/activities/102024993121974049099/@public
$ php /usr/share/php/docs/Net_WebFinger/examples/webfinger-cli.php singpolyma@identi.ca
Discovering singpolyma@identi.ca
Information secure? false
OpenID provider: http://identi.ca/singpolyma
Link: http://webfinger.net/rel/profile-page: http://identi.ca/singpolyma
Link: http://gmpg.org/xfn/11: http://identi.ca/singpolyma
Link: describedby: http://identi.ca/singpolyma/foaf
Link: http://apinamespace.org/atom: http://identi.ca/api/statusnet/app/service/singpolyma.xml
Link: http://apinamespace.org/twitter: https://identi.ca/api/
Link: http://schemas.google.com/g/2010#updates-from: http://identi.ca/api/statuses/user_timeline/15779.atom
Link: salmon: http://identi.ca/main/salmon/user/15779
Link: http://salmon-protocol.org/ns/salmon-replies: http://identi.ca/main/salmon/user/15779
Link: http://salmon-protocol.org/ns/salmon-mention: http://identi.ca/main/salmon/user/15779
Link: magic-public-key: data:application/magic-public-key,RSA.jylO6IUdOFhUadS0bkvq4Vkx_fh...
Link: http://ostatus.org/schema/1.0/subscribe: http://identi.ca/main/ostatussub?profile={uri}
Link: http://specs.openid.net/auth/2.0/provider: http://identi.ca/singpolyma
Published on 2012-02-24 in pear, peardoc, php, semanticscuttle, web
SemanticScuttle will be distributed as .phar file with the next version. The command line interface uses PEAR's awesome Console_CommandLine package, which needs to be packaged up in the Phar file to make it work out of the box.
I let Phing handle all of the deployment of new versions, and generating the Phar file is handled by Phing, too.
In general, adding files to a .phar is easy, even a bunch of PEAR installed files: Put their absolute locations in a <fileset> tag and you're set.
Unfortunately, this solution has several problems:
I chose to go the correct way and solve all of the problems by writing a Fileset that takes care of everything. All I have to do is specify the package I want to package up:
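The fileset declaration might look roughly like this (a sketch; the element and attribute names are assumptions based on the PearPackageFileSet described here - check the Phing documentation for the exact usage):

```xml
<!-- Sketch: pull all PHP files of a PEAR-installed package into the phar.
     Element and attribute names are assumptions; see the Phing docs. -->
<pharpackage destfile="dist/myapp.phar" basedir=".">
  <pearPackageFileset channel="pear.php.net" package="Console_CommandLine"/>
</pharpackage>
```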
This bit of Phing build file XML packages up all Console_CommandLine php files into myapp.phar.
I've opened a feature request on the Phing website to get it included in the official distribution and started a discussion to make it easier to integrate custom types in Phing in the future (see the ticket).
The README is available on GitHub, the code also on Gitorious. The task has been part of the official Phing distribution since version 2.4.8.
Side note: The rSTTask I wrote about three months ago is part of Phing since 2.4.7.
Published on 2011-09-12 in php, semanticscuttle, tools
A while ago I began to write documentation for SemanticScuttle in rST (reStructuredText) format. When deploying new versions, the .rst files need to be converted to HTML and will get packaged up with the release.
SemanticScuttle uses Phing for deployment, which is a great time saver and packaging bug preventer.
Since Phing itself didn't contain a task to render rST files, I had to use the foreach and exec task:
Yes, totally ugly and unreadable. Since I am on vacation, I took the time to write a separate rST task that takes all the work from you:
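A build file snippet using the task might look roughly like this (a sketch; attribute names are assumptions based on the task's README - check it for the exact usage):

```xml
<!-- Sketch: render all .rst files under doc/ to HTML with the rST task.
     Attribute names are assumptions; see the task's README. -->
<target name="docs">
  <rST format="html">
    <fileset dir="doc">
      <include name="**/*.rst"/>
    </fileset>
  </rST>
</target>
```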
It automatically renders HTML but supports all other output formats of Python's docutils, supports file name mappers and multiple filesets, has filter chain support, and renders files only if the sources are newer (and you want that). Have a look at the README.
The code is available on GitHub and Gitorious, and I'm trying to get it into the official Phing distribution. The task has been part of Phing since version 2.4.7.
@Mario B.: Do that with Ant.
Published on 2011-06-17 in php, semanticscuttle, tools
After 5 months of development, I released SemanticScuttle version 0.98.0 yesterday evening. Your own self-hosted online bookmark manager got a bunch of new features added, as well as some nagging bugs fixed.
It's possible now to get protected and private bookmarks in an RSS feed by using a private key. As soon as you enable it in your profile, all pages with a feed will have a second - private - feed linked.
While private feeds were possible before with HTTP basic auth (user:pass@host/...), private keys are not related to your password and can be changed with the click of a button.
Thanks to Mark Pemberton for implementing it!
In previous versions, bookmarks were public by default unless you changed the setting manually when adding a new bookmark. When importing bookmarks, all were made public automatically.
Now it's possible to change the default privacy setting in your configuration file. All users will get the new privacy setting by default (but still can change it on a case-by-case basis when adding bookmarks).
Brett Dee deserves thanks for adding this feature!
SemanticScuttle is skinnable now. You can change the look of your installation by installing new themes.
Since this is a new feature, nobody has designed any yet, but since it's documented, it shouldn't take long for the first ones to appear.
To prevent XSS, SemanticScuttle does not allow javascript: URLs to be added anymore. config.default.php contains an array of whitelisted protocols $allowedProtocols which you may adapt to your special needs.
Sharing the code of a SemanticScuttle installation for multiple - differently configured - hosts was not possible without manual changes to the sources.
To make it easy to share the code, you may use per-host configuration files now.
The admin just installs the PEAR package, and all users on the server can have their own differently configured SemanticScuttle installation.
Your bytecode cache will love it, and your admin, too - because only one package needs to be updated, and all installations benefit.
SemanticScuttle uses jQuery and jQueryUi now, and we got rid of the old Dojo library. The benefit is that SemanticScuttle can be used with all features offline now, which helps development. Also, jQueryUi is themeable which eases changing the look of your installation.
HTTPS connections are fully supported now, even when no $root is configured.
Database changes from one version to the next are in single .sql files now, which should make it easier to apply the changes when upgrading. We also store the current schema version in the database - which lays the foundations for automatic database updates in the future.
Last but not least, the website has documentation now.
See the full ChangeLog if you want all the details.
Published on 2011-07-22 in pear, php, semanticscuttle
The world has a million PHP frameworks, and now that I need one, there is not a single one that fulfils my requirements.
SemanticScuttle, my pet project, is a good ol' PHP project with several dozen files in the document root and which do a single task: login.php, register.php, admin.php, profile.php, edit.php - you name it.
While it works and Rasmus likes such code, it makes unit testing slow because I have to do real HTTP requests, and I cannot really reuse the action code in other situations. The proper solution is to use a framework/library for MVC and take the C from it. Alternatively, I could write it myself, debug it myself, maintain it myself and have all the hassle. But as I wrote, there are a million frameworks out there to use.
This is a thing that's very important for me. I want to unit test everything in the application, and I expect that a framework does as much work for me as possible.
Let's look at the tasks when testing a web application:
Those testing requirements are the same that most other web apps also have, thus the framework/controller library needs to assist me writing such standard tests.
I want predefined assertions for common uses, e.g. assertHttpResponseCode(200). Using assertEquals($res->getCode(), 200) is of course possible, but it gives no nice error message - which I need when debugging the code half a year later - and it's just too much code to write again and again.
And no, I don't want to write those assertions myself. That's the framework's task!
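What such an assertion boils down to is a comparison plus a readable failure message. A framework-independent sketch (names are illustrative):

```php
<?php
// Sketch: a self-describing HTTP response code check, independent of any
// test framework. Returns null on success and an error message on
// failure - a real assertion method would fail the test instead.
function checkHttpResponseCode($expected, $actual)
{
    if ($expected === $actual) {
        return null;
    }
    return sprintf(
        'Failed asserting that HTTP response code %d matches expected %d',
        $actual, $expected
    );
}
```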
I've spent quite some time analyzing the existing frameworks: Zend Framework, Lithium, Yii, Kohana, CodeIgniter, Symfony2.
Zend has a nice ControllerTestCase class with dozens of the assertions I long for. Unfortunately, it's a big fat pack from which I can't pick only the controller component and keep it automatically up to date. Downloading the 6.x MiB minimal zip file after each release and copying the required files does not count.
None of the others have advanced unit testing support. Most don't have a PEAR channel (Symfony2 is the notable exception here) and the devs don't feel that it's necessary. It looks like they never worked professionally on an app that's supposed to run for several years and needs security and feature updates without hassle.
It seems to me that everyone wrote their own framework because they weren't satisfied with the others, but then failed to make it awesome and finish it 100%.
Do you have an answer or suggestion? Then please comment on the stackoverflow question or send a mail.
Published on 2011-06-17 in bigsuck, pear, php, programming, semanticscuttle
SemanticScuttle, your self-hosted social bookmark manager, uses SourceForge as project hosting space. We're utilizing its Git hosting facilities to coordinate development between several developers. Apart from that, we host file releases there, use the bug tracker and have the demo installation on the SourceForge servers.
GitHub on the other hand is a verrry popular Git repository hosting service that offers some unique features with strong social components.
To attract more developers and make our development progress more visible, I wanted to mirror the SemanticScuttle main Git repository on GitHub. Other people had the same idea but unfortunately failed because the SourceForge Git servers don't allow outgoing connections.
A bit of searching led me to a small bash script that - instead of a post-receive hook - uses a cronjob on a third server (server-in-the-middle) to do the mirroring. Since I have my own server anyway, this solution was chosen.
At first, you need to prepare the server-in-the-middle and equip it with a passwordless SSH key. This key needs to be added to your GitHub account. Using deployment keys does not work since they allow pulling only and are meant for private repos. The alternative is to create a second account on GitHub and add it as collaborator in your project.
The next step is mirroring the SourceForge Git repository on your middle server:
$ git clone --bare --mirror git://semanticscuttle.git.sourceforge.net/gitroot/semanticscuttle/sc SemanticScuttle.git
$ cd SemanticScuttle.git
$ git remote add github git@github.com:cweiske/SemanticScuttle.git
$ git config remote.github.mirror true
After everything is set up, we're going to write our mirror script:
#!/bin/sh
cd /home/cweiske/git/SemanticScuttle.git
git fetch --quiet origin
git push --quiet github
Make it executable, add it to your personal crontab (e.g. every 30 minutes) and you're set. Run it manually once - you need to accept GitHub's SSH host key.
Instead of using a shell script, one can use a git alias to keep the configuration in one place:
$ git config alias.update-mirror '!git fetch -q origin && git push -q github'
The ! before the commands tells git that the command to follow is a shell command. Now the cronjob:
10,40 * * * * cd /home/cweiske/git/SemanticScuttle.git && git update-mirror
Published on 2011-05-24 in git, network, php, semanticscuttle, server
This post is just a heads up that development in SemanticScuttle is still going on.
On 2010-06-09, your own bookmark manager was released with a number of bug fixes and some new features:
On 2010-09-28 I got a private security-related bug report about a permission problem in the "delete bookmark" API method, and probably also in other API methods. I verified the bug, confirmed that the other methods did not suffer from the same problem, and a day later, on 2010-09-29, the security-fixed version 0.97.1 was released.
The issue was that, although the user's authentication had been verified, SemanticScuttle did not actually make sure that the bookmark to be deleted belonged to that user. You could delete any bookmark just by having a valid user account.
I'm still spending quite a lot of time hacking on SemanticScuttle, with some interesting enhancements to come:
Published on 2010-10-15 in pear, php, semanticscuttle
FrOSCon is still running, even though it's after midnight now. After the party I was not really tired so I continued to hack on the PEAR package for SemanticScuttle. The hard task was to get the phing build script to generate it correctly. The pearpkg2 task that is shipped with Phing is totally unusable, so I had to drop that and use Phing_d51PearPkg2Task as we do at work.
Only two hours later, package generation via phing is fully working. I can run the pear-installed SemanticScuttle without any problems, and even running the tests via pear run-tests -pu __uri/semanticscuttle is working flawlessly! The build.xml changes are in SVN now.
The package of the current version 0.97.0 can be found on my server. Please note that the channel is currently __uri since I did not yet setup a channel server. That will change soonish.
Published on 2010-08-22 in pear, php, semanticscuttle