
Hacked – it really happens!

For the first time in my experience, I’ve had my web site hacked. I’ve heard other people say it happened to them, but I imagined it was just an excuse for something they might have done wrong. Well, it does happen.

In my case, the index.php file in the root of this site (a WordPress blog) – which is usually only a few hundred bytes of PHP code – suddenly clocked in at 125KB. Someone, somewhere had hacked in and inserted a HUGE block of PHP comments regarding some xantex product. Some sort of quack ED pill, I guess. Though why they would do something like this is beyond me.

The symptom was easy enough to spot – instead of seeing my regular site, I got a terse PHP error message: somewhere around line three thousand (!!) there was a syntax error.
Anyway, the cure was simple enough. I just downloaded the affected index.php file and, using a text editor, snipped out the HUGE block of crap content. Copying the fixed file back up restored everything – still, weird stuff.

I don’t understand the motivation for such things. I know the marginal cost of sending spam emails or of hacking into ordinary web sites is approaching zero, but even then, doing this to promote viagra or some such – I can’t imagine it has any sort of return on investment. Who on earth would want to order something from an ad filled with misspellings and other errors anyway?

Posted in web hosting.

Super-fast screen updates – thousands per second – really.

More on my Open Heartbeat project (my open source, cross-platform real-time market data feed system): I am in the process of adding a couple of simple GUI clients to show it all off. At last – something we can actually SEE. I say a couple, since it needs to work on both Windows and everything else. I know there are cross-platform GUI frameworks out there, but I want to keep it simple and, above all – fast.

The basic structure of any GUI environment is more or less the same. One thread (the GUI thread) creates a window structure, then sits in a “message loop” – waiting for user input events and calling pre-defined handlers on the window object to do whatever is needed. For a read-only GUI such as the one I’m writing, there are only a few events that we need to worry about – chiefly to redraw the entire screen when another window moves out of the way etc., and to shut down when the window is closed. Everything else – resizing, printing, menus etc. – is just gravy.
One of the rules of the message loop is that – like the Energizer Bunny – it just keeps going. No interruptions. Doing any serious work in the GUI thread can back up the message loop and make the window unresponsive or jerky. We’ve all seen plenty of applications that violate this rule, but rules are rules for a reason.
So getting data in – in this case, listening and responding to market data updates – has to be handled in a secondary thread. Using the OpenHeartbeat library (which handles everything we need), this means several threads. But no matter. Only the GUI thread can update the screen though, so some inter-thread communication process is needed.
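As a rough sketch of the kind of hand-off I mean (the names and types here are mine, not the actual Open Heartbeat API): the feed thread pushes updates into a mutex-protected queue, and the GUI thread drains it from an idle or timer handler so the message loop never blocks.

#include <mutex>
#include <queue>
#include <string>

// Hypothetical update record handed from the market-data thread to the GUI thread.
struct QuoteUpdate {
    std::string symbol;
    double bid;
    double ask;
};

// Minimal thread-safe queue: the feed thread calls push(), the GUI thread
// calls try_pop() from an idle/timer handler and repaints only what changed.
class UpdateQueue {
public:
    void push(QuoteUpdate u) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(u));
    }
    bool try_pop(QuoteUpdate& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }
private:
    std::mutex mutex_;
    std::queue<QuoteUpdate> queue_;
};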
As anyone following the story so far will know, the simulator can generate an impressive stream of quotes. I have mine set to generate a thousand bid/ask quotes a second, or about 2 a second on each of the 500 demo instruments. I thought it would be nice to show them all on the GUI, but in earlier work this has always been the bottleneck. There is clearly some limit to the speed at which the eye can take in all these changes – still, it seems a shame to be limited by computer technology. Let’s see what we can do.

1st try – a standard Linux client using GTK+
The first attempt at making a client used GTK+ on Linux. Actually, I used the gtkmm libraries, which are a thin C++ wrapper over the base GTK+ library. Interestingly, this is pretty close in many ways to the Windows Presentation Foundation (WPF), which is becoming very popular on financial Windows workstations – despite the long, hard learning curve. I gather the big plus for WPF is the automatic use of DirectX display acceleration – something I’ve only read about, but knowing a bit about how Windows handles screen invalidation/updating etc., I can see how this might be very important for exactly this kind of problem. Linux, of course, doesn’t have anything like DirectX. The layout mechanism under GTK+, however, is surprisingly similar. The basic operation is to define a series of “widgets” along with the rules about how they should be placed on the screen. In WPF, this is all done in an XML file (XAML actually – pronounced “zammel”) using the Blend interface designer. In GTK+ you have the choice of an XML file (and the Glade interface designer). Happily for me, you can bypass the designer steps altogether and just use raw code. All this is very OO and all that. The widget designer only has to know how to respond to the various events etc. and the framework takes care of all the rest.
As I said, this was my first approach, and while I was enjoying the aforementioned benefits, I also suffered the consequences. It was so damned slow. Using my simulator as a data source, with a simple scheme for inter-thread communication, I was only able to get 20 or 30 updates a second. And the whole computer was brought to its knees.
I’d seen this kind of thing before and I’m pretty sure the problem lies with the basic screen-repainting mechanism built into GNOME and Windows. Widgets aren’t supposed to know when or where they appear; the window manager knows where everything is. When a widget wants to change its appearance (say after a value change), it advises the window manager that its area is “invalid”. When the window manager has nothing better to do, it then calls on the widgets occupying those parts of the screen that have been marked invalid to repaint themselves. The real problem comes when two widely separated parts of the screen need to be repainted. In the interest of efficiency, window managers construct a super-rectangle encompassing all the mini-rectangles that have been marked as invalid since the previous paint message. When a lot of widgets are being updated in a short amount of time, this ends up being most of the screen – so the window manager continually asks for near-whole-screen repaints, which take so long that by the time one repaint has finished, another slew of updated rectangles triggers another near-whole-screen repaint, and on, and on. All the time, the updates are queuing up and the computer is bogging down. Ugh!
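To see why that coalescing hurts, here is a toy illustration of the geometry – nothing GTK+-specific, just my own sketch: the bounding rectangle of two small dirty cells in opposite corners covers nearly the whole window.

#include <algorithm>
#include <iostream>

struct Rect { int x, y, w, h; };

// The window manager keeps one "dirty" super-rectangle: the smallest
// rectangle enclosing everything invalidated since the last repaint.
Rect enclose(const Rect& a, const Rect& b) {
    int x1 = std::min(a.x, b.x);
    int y1 = std::min(a.y, b.y);
    int x2 = std::max(a.x + a.w, b.x + b.w);
    int y2 = std::max(a.y + a.h, b.y + b.h);
    return { x1, y1, x2 - x1, y2 - y1 };
}

int main() {
    Rect topLeft     = {  10,  10, 80, 20 };   // one updated price cell
    Rect bottomRight = { 900, 700, 80, 20 };   // another, far away
    Rect dirty = enclose(topLeft, bottomRight);
    std::cout << dirty.w << " x " << dirty.h << "\n";   // 970 x 710 - most of a 1024x768 window
    return 0;
}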

Back to the drawing board.
I thought about trying this under WPF, with its ability to leverage DirectX and hardware graphics accelerators, but somehow it seemed like the wrong solution. I also started work on a smarter updating queue for the inter-thread communication piece (to avoid having multiple updates to the same symbol waiting in the queue). Again, this seemed like the wrong solution. What I really wanted was a way to update the screen directly, as fast as I could receive updates. And after going back to the drawing board, so to speak, that’s what I did.

Going back to raw basics, GTK+ and friends are built on the much lower-level X11 windowing system (more or less like raw Win32 GDI calls under Windows). This is as close to the raw hardware as you can get with any degree of portability, so I decided to write my GUI clients at that level.
The basic structure of a direct window client is very similar in both systems. On start-up, there is some initialization code to run (registering window classes and call-back functions etc.), then a call to create a window structure before entering a message loop. In Win32, all windows are sub-classed from one or another predefined window class which handles any details we don’t want to handle differently. X11 is almost the same, minus the super-classing: instead, we pre-select the events we care about, and the system handles everything we don’t want to deal with ourselves. The differences are really very small, with Windows being a bit more elegant. X11, on the other hand, was designed with a very different problem in mind – where the program producing the updates may be on a different computer altogether from the screen itself.
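For the curious, the Xlib skeleton looks roughly like this – stripped to the bare minimum, with the real quote drawing replaced by a fixed string:

// Minimal Xlib client: create a window, then sit in the event loop,
// redrawing on Expose and quitting when the window is closed.
// Build with something like: g++ xclient.cpp -lX11
#include <X11/Xlib.h>
#include <cstring>

int main() {
    Display* dpy = XOpenDisplay(nullptr);            // connect to the X server
    if (!dpy) return 1;

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     0, 0, 640, 480, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));

    XSelectInput(dpy, win, ExposureMask);            // only ask for repaint events

    // Arrange to be told when the window manager close button is pressed.
    Atom wmDelete = XInternAtom(dpy, "WM_DELETE_WINDOW", False);
    XSetWMProtocols(dpy, win, &wmDelete, 1);

    XMapWindow(dpy, win);

    const char* msg = "quotes go here";
    bool running = true;
    while (running) {                                // the "message loop"
        XEvent ev;
        XNextEvent(dpy, &ev);
        switch (ev.type) {
        case Expose:                                 // repaint request
            XDrawString(dpy, win, DefaultGC(dpy, screen),
                        20, 20, msg, (int)std::strlen(msg));
            break;
        case ClientMessage:                          // close button
            if ((Atom)ev.xclient.data.l[0] == wmDelete) running = false;
            break;
        default:
            break;
        }
    }
    XCloseDisplay(dpy);
    return 0;
}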

Posted in Adventures in Open Source.

Open Heartbeat is coming along…

Well, a couple of weeks have passed and my first “from the ground up” open source project – Open Heartbeat – is alive and well. Coming along quite nicely, I think.

Recapping, Open Heartbeat is a simple daemon process for Linux or Windows that collects and redistributes stock quotes (actually, any keyed, ASCII-based collection of name/value pairs – like stock quotes). The idea is that all the computers working in an automated trading environment would run the daemon – then special feed handlers would receive real-time price quotes and pass them to one or more of these daemons, which in turn pass them on to all the others – in real time.
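To make that concrete, a record in this scheme is nothing more exotic than a symbol plus a map of string fields – something like the following toy illustration (my own names, not the actual Open Heartbeat types):

#include <iostream>
#include <map>
#include <string>

// Illustrative only: a quote is just a key (the symbol) plus an arbitrary
// set of ASCII name/value fields.
using Fields = std::map<std::string, std::string>;

int main() {
    std::map<std::string, Fields> cache;   // the daemon's cache, keyed by symbol

    // A feed handler publishes an update...
    cache["IBM"] = { {"BID", "129.45"}, {"ASK", "129.47"}, {"BID_SIZE", "300"} };

    // ...and every daemon that receives it can hand the fields to any client.
    for (const auto& field : cache["IBM"])
        std::cout << field.first << "=" << field.second << "\n";
    return 0;
}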

In early tests, I have been processing 500 or so stock symbols with several thousand updates per second, and still using only a small fraction of the CPU power in an average desktop computer. Some time in the future I plan to add a full publish-and-subscribe interface, but for now, having every daemon receive everything seems to be working out quite well.

Making a single set of source code compile under both Windows and Linux (POSIX) has been an adventure. I started out writing in the Eclipse IDE under Linux, which works surprisingly well. Considering it’s all written in Java, I expected it to be slow and awkward, but apart from some small irritating issues it works well. One problem I ran into from time to time was adding a new class in the root rather than in the src subdirectory – it should be pretty easy to fix, but I kept on getting makefile errors and I could never figure out how the makefile is generated. I ended up scrapping and rebuilding the project by hand each time – quite a pain.

After getting the code to work properly under Linux, I ported the same source files to Windows under Visual Studio 10. Open Heartbeat uses sockets and threading very heavily, and the API calls for these features are quite different under the two OSs. I say quite different, but for the most part the structures were identical – only the names of the functions were different. As far as possible, I used a series of defines to handle these differences so the code itself can stay with the POSIX function names. The only significant difference is in IPv6 socket handling, which is not supported for XP (the version I targeted). As far as I can tell, IPv6 is handled natively under Vista and Windows 7 and uses the exact same set of function names as in the POSIX standard (apart from differently named header files etc.).
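The flavour of those defines is roughly this – a trimmed-down illustration of the approach, not the actual Open Heartbeat header:

// Hypothetical portability header: keep the code written against the POSIX
// names and let a few defines absorb the Winsock differences.
#ifdef _WIN32
  #include <winsock2.h>
  #include <ws2tcpip.h>
  typedef SOCKET socket_t;
  // Winsock closes sockets with closesocket(), not close().
  #define CLOSE_SOCKET(s)   closesocket(s)
  #define SOCKET_ERRNO()    WSAGetLastError()
#else
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <errno.h>
  typedef int socket_t;
  #define CLOSE_SOCKET(s)   close(s)
  #define SOCKET_ERRNO()    errno
  #define INVALID_SOCKET    (-1)
#endif

// Typical usage: the same call sequence compiles on both platforms
// (on Windows, WSAStartup() must of course be called once first).
inline void close_quietly(socket_t s) {
    if (s != INVALID_SOCKET) CLOSE_SOCKET(s);
}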

On the other hand, the debugger in VS10 is a thing of beauty – it just works so well, it’s almost a pleasure to find and correct bugs. Add to that recompilation without stopping the debugger – and I’m in heaven. I’m almost tempted to do the next big development piece in Visual Studio first and then port to Linux – but that would mean spending more time in Windows, which is simply unthinkable.

I’ll talk about some of the development mechanics in detail in upcoming posts.

Posted in Adventures in Open Source, Uncategorized.

Birth of a new Open Source Project

Well – here goes.

After a long period of research and contemplation, I’m ready to launch a brand spanking new Open Source application.  This posting and the ones following will record some of the nitty-gritty details of getting it up and running and (with luck) accepted by the larger Open Source community.

They say you should always write about what you know – so my first contribution is a Ticker Plant: a daemon process that distributes price quotes for stocks or whatever in real time.  Not much of a mass market for something like this, I know, but commercial applications that do this kind of thing generally involve annual licence fees of thousands of dollars per instance.  That can add up to hundreds of thousands of dollars for any reasonably sized trading group.  I think I can use my brainpower to create something super efficient, more highly available and redundant, and overall more flexible than any of the commercial applications.  I can’t imagine being able to market the system though, so I’m giving it away – maybe rewards will come in the form of consulting or support – who knows.

It will be fun and instructive to push the boundaries and test some of the existing competitive technologies too.  So let’s begin.

What will my application do?

Good question. Continued…

Posted in Adventures in Open Source.

Ubuntu 10.04, LAMP and the ImageCreateTrueColor problem

Brave sort that I am, I can’t help upgrading to the latest and greatest of whatever.  I know it is asking for trouble, but somehow I like living on my wits.  So, after a fresh install of Ubuntu 10.04 everything seems lovely – all in all a very clean and professional distro.

Until I got working on GigCalendar again, that is – I ran across an obscure failure in the image upload process (GigCalendar is a Joomla component I have been working on to present and manage publicity for bands and/or performance venues).   After uploading images, the returned page was blank – yet refreshing showed the upload to have occurred, but not the automatic resizing.  Very weird.  It turns out, after scanning the Apache error.log, that I’d made a call to ImageCreateTrueColor – a GD library function that seems to be missing in the default LAMP stack installed with 10.04.

After a bit of hunting around, I found several references to this – apparently whoever puts the distro together has switched out the standard suite of PHP add-ons and somehow nixed the GD library where this and many other important image processing functions are located.

Google to the rescue: I found this link explaining the step-by-step details for reselecting the libraries and recompiling PHP from source.  It’s a more or less painless process, but I’m happy to crib off someone else’s work.

Oh – one thing.  Don’t forget to update the version numbers to reflect the current state of the art.  The instructions refer to PHP 5.2.3 while the current version (at the time of this writing) is 5.3.2.

Yet another last thing – as my good friend Mubashir points out, the version of PHP available separately from the 10.04 repo does indeed have the ImageCreateTrueColor function built in.  It’s just the version bundled with the new LAMP stack that does not.  So a simple apt-get install php5-gd should do the trick and save an hour-long compilation process.  Oh well, it’s good to know how to do these things anyway.  Thanks Mubashir.

Posted in Ubuntu.

Solar Hot Water – Southern California’s low hanging fruit

Ask most people about “going solar” and they automatically think of huge, expensive silicon panels with all the inverters and all.  I’ve been there and done that too.  My 2KW roof installation happily generates electrical power whenever the sun shines – and this being Los Angeles, it shines just about all the time.  Great.  But probably not a good investment.  Even with all the rebates and tax incentives, I doubt it will ever really pay for itself – not until we’re facing 20 or 30 cents per kWh.  Still, I put the system in to “do my bit” and fuel the industry etc.  Installation prices have fallen considerably since I put my system in.  I’m still not a big fan though.

More recently though, facing an excess of free time, I decided to get creative and install my own solar domestic hot water system.  So far, the results are brilliant – I get all the hot water I can use for the cost of a penny or two per day – that being for the small electric pump which drives the water through the collectors.  For the life of me, I can’t see why the building regulations don’t make such systems compulsory in all new construction.    For one or two thousand dollars (depending on your creativity) you too can live off the fat o’ the land.  Here’s my story…

My quest started by finding a couple of previously-loved collector panels on Craigslist.  Back when Jimmy Carter was president, tax incentives created a large industry making these things – and they were built to last.  Even the White House installed a system.  I’m told that the first thing Reagan did on taking office was to repeal the incentives and then remove the panels.  Without the tax incentives, the burgeoning solar industry died in childhood, and early adopters found no one available to maintain the systems in place.   This was bad news for the country, but happily for me it means there are a lot of components available for next to nothing.  I picked up a couple of 4′ x 8′ solar collectors for little more than their scrap metal value.

The next step was to research on the internet how best to use these panels.  It turns out there are three main systems, each suitable for a different climate.  The simplest is a thermo-siphon design, where a large storage tank is installed above the collector panels.  When the sun shines, hot water from the panels rises into the tank without pumps or anything.  This design only works where there is zero chance of frost though.  Even here in Sunny California, we get one or two nights a year below freezing, so this design is out.

The most popular system pumps antifreeze in a closed circuit, transferring heat to the storage tank via a heat exchanger.  This system is more complex and hence more costly to install.  One problem faced by this design is in dealing with overheating – once the storage tank reaches the upper set point, something needs to be done to prevent the antifreeze mixture from overheating (and in the process becoming corrosive and losing its freeze protection).  Many improperly maintained installations have failed with corrosion and burst pipes and/or leaks caused by not changing the antifreeze early enough.  For much of the country, though, this is the only workable system.

In temperate climates, though, such as here in Southern California, there is an ingenious design referred to as a drain-back system which is almost maintenance free and more or less fool-proof.  In a drain-back system, plain water is used in the closed loop, avoiding breakdown of the antifreeze due to overheating.  A simple electronic pump controller made just for this purpose senses the temperature of the water in the collectors and in the storage tank, and whenever there is a useful differential, turns on a small circulation pump driving water through the collectors and heat exchanger etc. – just like in the antifreeze system.  (Using plain water as the heat-exchange medium turns out to be more efficient too, in both heat capacity and lower viscosity for the pump – a double win.)  When the controller senses no need for heat (either because the storage tank’s upper temperature limit has been reached or because the sun no longer shines enough), the pump turns off, saving electricity.  Rather than keeping the collector panels full of water, though, which could freeze and burst, this design incorporates a small reservoir tank mounted inside the building’s warmed space.  When the pump turns off, all the water in the collectors flows back into the reservoir – the drain-back tank – and is protected from freezing.  Care must be taken during installation to ensure that all the pipes slope continuously back to the drain-back tank for this purpose.  Installations tend to look a bit odd as a result – a small price to pay, I think.

Here is my simplistic diagram of the arrangement showing the closed circuit heating loop.  The drain back tank is shown here with an open top while in practice the solar loop is sealed to prevent evaporation.

When there is no useful temperature difference between the collectors and the tank – or when the hot water tank reaches the top set point, the controller (not shown) shuts off the circulating pump and water from the collectors quickly drains into the drainback tank avoiding any possibility of freezing or overheating.

The most expensive part of this – not counting my labor – was the brand new 80 gallon solar hot water tank.  My old gas-powered tank was rusting through and I was unable to find anything better to replace it.  The model I chose incorporates a heat exchanger in the bottom part of the tank and an electrical heater element in the upper part.  Once or twice a year we go three or four days without significant sunshine.  I thought I might need the electrical backup for those days, but so far we have been able to brave it out with shorter and shorter showers etc., so the electrical elements remain unconnected.

Next in expense was the dedicated drainback tank.  I’m not too happy about this as the drain back tank is really low tech.  I think I spent something like $400 in the end which seems criminal.  I tried a number of alternatives first, but finally accepted the inevitable.

The differential temperature controller came next at something like $120 or so.  I picked a model with regular household plug sockets, making it easy to troubleshoot – I can plug the pump directly into wall power if needed.

The last part was the water pump – there was a bewildering set of choices here and a wide range of prices.  Nearly everything I read online said that I needed a bronze or stainless steel pump and fittings, costing two or three times as much as a regular cast iron pump, the idea being that using water as the solar fluid would cause an iron pump to rust.  One link pointed out, though, that this is not a problem in a drain-back system so long as the solar circuit is sealed: rust is caused by dissolved oxygen in the water, and in a sealed loop that oxygen is quickly depleted.  I shopped around until I found a three-speed cast iron pump on special – I think the whole thing cost no more than $50 and it works like a charm.   The drain-back tank includes a parallel sight glass which I look at from time to time to confirm that the water level remains constant (no leaks) and rust free.

During the design, I was concerned about the energy consumed by the electric pump.  In the end, I run my pump on the lowest possible speed which, according to my Kill A Watt meter, uses 50 watts.  Note that the pump only runs while the water is being actively heated, so most of the time it is off.  In practice, I find the pump runs for about 3 hours a day.  3 hours at 50 watts makes 0.15 kWh per day, which at our utility rate of $0.12 per kWh comes to about 2 cents per day.  Not bad for some simple plumbing.

One last thing before wrapping up – many of the things we take for granted with domestic hot water are no longer true once we go solar.  Advice like switching to a washing powder that works in cold water is no longer necessary.  Similarly, lowering the hot water temperature setting does nothing to save energy.  Taking long hot baths still uses water – but that’s another thing.  In practice, I set the tank cut-off temperature well above the normal 45°C (115°F) – more like 70°C (160°F).  To prevent scalding though, I installed a tempering valve which constantly adds cold water to the mix.  The effect is like having a tank much larger than the actual 80 gallons.
So that’s how it works.  I highly recommend something similar to anyone living in a high solar region.  There is nothing like the feeling of smugness that comes from a piping hot shower of liquid sunshine.

Posted in Energy Conservation, Going green.

Curious problem with Ubuntu auto-update – (solved)

I ran into a small problem today, somewhat connected to my earlier auto-update post some months ago.

Recapping the earlier post: Ubuntu has a built-in update process which does a brilliant job of keeping everything up to date.  The downside is that users have to go through a more-or-less daily chore of accepting the update suggestions.  My post showed how to configure the updater to do its work silently.

So far, so good.  A few days ago, though, I noticed my CPU usage constantly pegged at 100%, making the system sluggish, though not unusable (more or less like Microsoft Windows on a good day).  I didn’t know it at the time, but the problem seems to have been an update dependency which required access to the original installation CD.  Running updates in the background, the system had no way of indicating that it needed the CD.  Oops.

I tried running the update process manually, but it failed to obtain the update process lock – this was another clue that the system was stuck in the automatic update process.

The next step was to kill the automatic update process.  Open a terminal, run top to see the offending process name (apt-get), run ps -e|grep apt-get to find the process ID, then sudo kill xxxx to kill it.  The CPU was still pegged at 100%, so I ran top again, this time finding the auto-update process hogging the system, and used ps and kill again to nix that one too.  CPU usage now back to normal – whew!

Returning to the GUI, I ran the update process manually again, which then downloaded some 64 out of 66 updates before showing a dialog asking for the original installation CD – ah-ha!  Luckily for me, I still had one; so popping it in the drive allowed the process to finish without further incident.

I’m guessing the auto-update process running in the background was unable to show the dialog asking for the CD.  I tried looking through the log files (see System Log Viewer) and found what might have been a clue in bootstrap.log – apparently, my system was reporting a host of pre-dependency problems – things like bash depending on dash, which was not installed, etc.  Maybe this was Ubuntu’s way of letting me know it was in trouble.  If so, it could have been clearer…

I don’t know if everyone else running auto-updates is also having this problem; if so, maybe my experience will save them some time and frustration.  Good luck!


Posted in Ubuntu.

And now for something completely different – saving capitalism for/from the capitalists

Well, so much for my personal goal of one blog posting per day, come rain or shine.  I still like the idea, but the flesh is so unwilling.  I’m prompted today, though, by a recent fun post sent to me by a friend from the fakestevejobs blog.

First – take a minute to read this gem.

The problem we have all seen is that western businesses are so focused on short-term financial results that we fail miserably at longer-term planning and investment.   One place I worked made assembly robots – big, expensive durable goods.  Before the end of any financial year, everything was focused on making sales.  All the test/development units, all the spare parts, all the demo units – everything had to be cleared out in a giant fire sale to boost the end-of-year figures and make us look good.  Great – never mind that for the first two months of the coming financial year there is nothing to sell (all the parts kits having been raided for the fire sale).  Now the only hope of recovery is another fire sale at the end of the next financial year, and so the cycle continues.  Robbing Peter to pay Paul – or rather, robbing Paul’s child to pay Paul.  Sounds familiar – no?

Wherever you look, people bemoan the short-sightedness of other people’s decisions – yet feel compelled to do the same thing themselves.  Anyone brave enough to buck the system and take the bad news on the chin would be killed by the marketplace.  They wouldn’t live long enough to show the wisdom of long-term planning and execution.  This sucks bottom – yo. (I’ve been watching “The Wire” – highly recommended.)

The miracle of electronic stock trading only encourages this.  When regular folks can execute stock trades of any size for $10 or so in an instant, no wonder companies are deathly afraid of any bad news.  And not just day traders (assuming there are any of those left standing) – large institutional investors too act second by second, looking to squeeze some advantage from whatever hits the news wire.   You can’t blame them – it’s the system.

My solution (drum roll, please).
Limit stock trading to an enrolment period open one week every five years.

I haven’t worked out the mechanics of this – what to do with all the stockbrokers etc. for the other 250-odd weeks – maybe stagger the open weeks by industry or alphabetically or something, although that creates difficulties in selling one investment before buying another if they are not in the same enrolment window.  Hmmmm.

Oh well – better minds are needed to figure out the mechanics of it all.  I think I’ve done enough for my Nobel prize.  I can always share it with someone else – as the “dude” says in The Big Lebowski, this might put me in a whole new tax bracket…

Posted in Uncategorized, Very Interesting....

Ubuntu unattended auto-update

Anyone brave enough to have kicked the Microsoft shackles and embraced the wonderful world of Linux – and here I’m talking about Ubuntu specifically, though this might apply to other distros too – will have enjoyed the delight of easy, frequent system updates and a vastly simplified application installation process.  Three cheers for Linux.

As I write this, the next regular release of Ubuntu is about a month away and there seems to be a pattern around this stage of the cycle of super-frequent system updates – I guess this is all the fixes being readied for the next release, percolating down into the regular upgrade cycle.  It should get quieter soon – just before the real release, but right now, there seems to be a list of 10 to 15 individual updates every day.  I like the ability to review updates and to choose to delay them until after (say) some critical demo or presentation – but I just don’t have the time or attention span to review each and every update package so applying the updates becomes something of an automatic chore after each reboot of my workstation.

No more – I finally got around to figuring out how to completely automate this, and like many things with Ubuntu, it is as simple as pi (3.1415926 – give or take a metric smidgen).

Edit the upgrade configuration file /etc/apt/apt.conf.d/50unattended-upgrades and uncomment the updates line for your release (karmic-updates in the snippet below):

// Automatically upgrade packages from these (origin, archive) pairs
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu karmic-security";
        "Ubuntu karmic-updates";
};

(Don’t forget sudo – this is a system file)…


Posted in Ubuntu.

Adventures in Open Source – CMS Systems

It’s been a long time since my initial flurry of posts and a lot of new things have come along, uncommented upon.  No more.

One of the big new things for me is Joomla. In case you don’t know already, Joomla is an open source web content management system (CMS) written in PHP and using a MySQL database (also open source).  Its natural home is on a Linux server running a standard LAMP stack (Linux, Apache web server, MySQL and PHP), though it’s pretty straightforward to host it on a Windows server too if you had to.

Content Management Systems (CMS)
In the old days, anyone wanting to publish to the web had to learn HTML, then figure out how to create content with the right mark-up to fit in with whatever web site style conventions were in place, add the necessary navigational links etc.  This was great for the few people who could do it well, but created an insurmountable barrier to entry for non-geeks – writers and others generally more skilled at creating something worth reading.  In the simplest sense, this is the role of the CMS – let writers write and let designers/programmers design and program – oh frabjous joy!  Being a geek myself, another feature that really floats my boat is that in publishing content, the author gets to specify how long it should remain.  No more publicity for past events.

There are a number of competing CMS’s out there.  Choosing any one is a little like getting married – you will make a huge investment in learning time and content development for better or worse.  In general, there is no easy way to move content from one system to another so you’d better get it right first time.   In Windows land, the big hitters are all super-expensive commercial systems – there are some open source and/or shareware systems but the user base is so very, very tiny that it would be risky to bet the farm on any of them.  In the Linux tent though, there are some very well established competing systems – each with certain advantages and disadvantages.  And a dedicated set of advocates for each.  Ignoring the little guys, the three big boys are Joomla (formerly Mambo), Drupal (I love these names) and WordPress.

WordPress.  Clearly the king in terms of number of installations, WordPress was originally focused on blog creation, where it excels.  (This blog is managed by a WordPress installation on a shared Linux host – I spent no more than 15 minutes getting everything set up from scratch before posting real content – brilliant.)  Apparently, WordPress can do much more than simple blogs, but it is so well entrenched in this area of excellence that few people use it for anything significantly beyond the core blog area.

Drupal.  This is the new kid on the block, and seeing as how young the block itself is, that means very new.  (Correction – I imagined that the low adoption rate of Drupal was because it is so new, but it turns out Drupal has been around longer than any of the others – it’s just been growing steadily while the others have taken off like a rocket.)  From what I’ve read, Drupal is faster and cleaner than the existing versions of Joomla, but has a much steeper learning curve for administrators and designers.   This seems to agree with my experience: having gained some experience with the older Joomla, I found the install process for Drupal equally simple, but from there on I had no clear way of adapting it to my needs.  There were relatively few add-on components and few resources on the web for the questions any newbie (such as myself) would have.  By contrast, Joomla has an enormous following, matched by the thousands of open source components and templates written to work with it.  Almost any problem can be solved with a simple Google search, as there are literally thousands and thousands of people who have run into whatever situation you might find yourself in.  It’s nice not to be alone.

Joomla. I have to confess a bias in all this – Joomla was the first and only real full-function CMS that I have had to work with, and it works so well that I haven’t given the others as much attention.  Sorry for the bias – blame history.

Joomla was developed as a split-off from the simpler Mambo CMS, which I’m told is still being developed and has its band of dedicated followers.  The first version of Joomla was so similar to Mambo that all the extension components for Mambo worked on Joomla too – this was a great advantage, helping Joomla hit the road with a much enhanced set of capabilities beyond the core functions that come with Joomla itself.  After a short while though, shortcomings in the API became a problem, so a near-complete re-write was produced, and it remains (at least for the time being) the standard today.  This new version (Joomla 1.5) came with an optional system::legacy plugin, providing an API bridge so that components written for the original version of Joomla would still work under Joomla 1.5.  For the majority of components this bridge worked well enough; still, most components have since been re-written to use the native Joomla 1.5 API.

Nevertheless, there are many thousands of open source components available for Joomla, something no other CMS can boast.  And the number of developers, designers and writers practiced on Joomla so far outstrips the other CMSs as to make Joomla THE dominant CMS with an ongoing level of investment unmatchable by any other system commercial or open source.

I’ve been writing and maintaining components for Joomla 1.0 and, more recently, for Joomla 1.5, which, once the initial learning curve has been climbed, is simple and efficient to adapt to just about any purpose.

I did a Google Trends search (this is the subject of another post, but if you don’t know about Google Trends – check it out now).


Posted in web hosting.