I’ve always liked to keep things relatively simple and concise. That said, I also realize that certain objectives have to be met, and the overall goal is to make a product the best, most secure, and fastest it can be.
A recent client was building their product on PHP. PHP is quite robust, it’s fast, and it has many native features, with new external modules being written every day. I like PHP; my entire website is managed by it.
PHP is an interpreted language. This means every script has to be read in, parsed, and compiled into opcodes each time it is executed. This makes it slower than, say, CGIs written in C, or, at the extreme, code embedded within the webserver itself.
Their problem was that whilst their product was sound, it was not fast enough. They had added servers to the farm and put dedicated failover systems in place, but load remained stubbornly high.
I looked into their servers, and found that each was running X processes of the common Apache server, in a mostly stock configuration, with PHP built as a module.
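For context, a stock Apache 1.3 build with mod_php of that era would carry something like the following in httpd.conf (the paths here are illustrative, not the client’s):

    LoadModule php4_module        libexec/libphp4.so
    AddModule mod_php4.c
    AddType application/x-httpd-php .php

Every request, even one for a static file, is handled by a heavyweight process carrying the whole PHP engine – which matters later in this story.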
The first project I undertook was testing various solutions, starting with a caching product for PHP itself. Being still in their infancy ‘dotcom’ stage, the client had little chance of purchasing Zend’s flagship product, and Zend Optimizer is closed to any configuration, which limited my testing options. So, I set out to test Turck MMCache, ionCube, and APC.
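I won’t reproduce the whole test harness here, but the shape of it is simple enough: hit the same representative script under each accelerator with ApacheBench, warming the cache first so the initial compile isn’t counted (the host and script below are hypothetical):

    # warm the opcode cache with a single request
    ab -n 1 http://testbox/app/index.php
    # then measure: 1000 requests, 10 concurrent
    ab -n 1000 -c 10 http://testbox/app/index.php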
ionCube is also a closed product, and although I have a rather good relationship with its author (we once spoke about working together to port his product to Mac OS X), opinions were voiced about ‘not getting into a closed-source system’, with which I agree – but I still wished to have some numbers!
I found that Zend’s free Optimizer product placed first in a few tests, but overall the winner was Turck MMCache, with its ability to span both SHM and disk. ionCube came next, followed by APC.
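As a rough sketch of what the winning setup looks like in php.ini – the directives come from the MMCache documentation, the values are illustrative rather than the client’s – note shm_only, which is what lets the cache spill from shared memory onto disk:

    zend_extension="/usr/lib/php4/mmcache.so"
    mmcache.enable="1"
    mmcache.optimizer="1"
    mmcache.shm_size="16"              ; MB of shared memory for compiled scripts
    mmcache.cache_dir="/tmp/mmcache"   ; disk cache used when SHM fills
    mmcache.shm_only="0"               ; 0 = allow spanning from SHM to disk
    mmcache.check_mtime="1"            ; recompile when a script changes on disk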
This significantly eased the burden upon the servers – they were able to connect and send off data quite quickly once the initial scripts were cached. The client was quite pleased.
I was not yet done.
After analyzing the flow of their software, I noted how image-heavy it was, and that they had no dedicated systems for serving those images. Rather than attempt to entirely change their existing system – and with a nearly nil budget – I set up a secondary webserver, a modified Boa, to serve the static images, with a customized Squid cache in front acting as an HTTP accelerator.
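The arrangement, roughly: Boa listens on a high local port and serves the image tree, while Squid sits on port 80 in accelerator mode in front of it, answering repeat requests straight from its cache. Under Squid 2.x of that era, the relevant squid.conf lines would be something along these lines (host and port are illustrative):

    http_port 80
    httpd_accel_host 127.0.0.1       # Boa, bound to localhost
    httpd_accel_port 8080
    httpd_accel_with_proxy off       # pure accelerator, not a general proxy
    httpd_accel_uses_host_header off
    cache_mem 64 MB                  # hot images served straight from RAM

The point of the split is that a fat Apache+PHP process never again wastes its time shovelling static bytes.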
The end result: pages that had been taking over 40 seconds to process were now rendering in under two seconds, with less load, on the exact same hardware.
The client was ecstatic. All in a day’s work. :)