Fixing WordPress SEO Sitemap Problems

I decided to switch over to WordPress SEO (Yoast) yesterday and ran into a slew of problems with their sitemap generator: a 404 error, a blank screen, and sitemap.xml not being redirected to the new sitemap_index.xml. The first problem led me to this Yoast knowledge base article, "My sitemap is giving a 404 error, what should I do?" I fixed it by adding their code to my .htaccess file. To fix the other two problems I added the RewriteRules for the xsl stylesheet (line 8) and for sitemap.xml (line 5). Now both sitemap.xml and sitemap_index.xml are properly redirected and formatted. My Google Webmaster Tools is happy!

Note: The code below is for a WordPress blog in a sub-directory called wordpress.

# WordPress SEO - XML Sitemap Rewrite Fix
RewriteEngine On
RewriteBase /wordpress/
RewriteRule ^sitemap_index\.xml$ /wordpress/index.php?sitemap=1 [L]
RewriteRule ^sitemap\.xml$ /wordpress/index.php?sitemap=1 [L]
RewriteRule ^([^/]+?)-sitemap([0-9]+)?\.xml$ /wordpress/index.php?sitemap=$1&sitemap_n=$2 [L]
# This rewrite ensures that the styles are available for styling the generated sitemap.
RewriteRule ^([a-z]+)?-?sitemap\.xsl$ /wordpress/index.php?xsl=$1 [L]
# END WordPress SEO - XML Sitemap Rewrite Fix
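Incidentally, the post-type rule can be sanity-checked outside Apache: mod_rewrite patterns are PCRE, so GNU grep -P can exercise the same expression (the URLs below are made-up examples):

```shell
# Same pattern as the post-type RewriteRule (Apache rewrites use PCRE).
pat='^([^/]+?)-sitemap([0-9]+)?\.xml$'
echo "post-sitemap1.xml" | grep -qP "$pat" && echo "rewritten to index.php?sitemap=post&sitemap_n=1"
echo "sitemap_index.xml" | grep -qP "$pat" || echo "not matched here; an earlier rule handles it"
```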

My Natty Narwhal problems

Last week I decided to upgrade my VMware copy of the Ubuntu Desktop to the latest release, and it did not go well. During the upgrade I got a disk space error message with non-displayable characters. It was a curious message, since I thought I had enough disk space, and it did not stop the process. It was when I rebooted that I encountered the major problem. I would like to tell you more about the error message, but the screen clears and I am left with a GRUB prompt. That’s not nice!

So I did some research and booted using the instructions from this page, Express Boot to the Most Recent Kernel. Everything came up nicely except that my windows were missing their title bars when I came in via the NX client. To fix the booting problem, I reinstalled from the LiveCD using the simplest method. To fix the missing title bar I installed the latest version of the NX client from NoMachine NX. I also got the missing title bar when I used the VMware console window to log in; in that case I just specified the Classic interface when logging in.

Nimble Method: Garbage Collection is Why Ruby on Rails is Slow: Patches to Improve Performance 5x; Memory Profiling


  • The News: Ruby on Rails performance is dominated by garbage collection. We present a set of patches to greatly improve Rails performance and show how to profile memory usage to get further performance gains.

  • What’s at Stake: Rails is slow for many uses and does not lend itself well to optimization. Significant performance gains could previously only be achieved at the application level at large development cost.

  • The Upside:

    • 5x potential performance gains;
    • easy way to identify whether GC is a bottleneck;
    • deterministic process to fix memory bottlenecks;
    • set of canned patches to solve the biggest problems;
    • you can help

Sat, 02 Feb 2008 05:30:00 GMT

Okay, a couple of weeks ago I installed Ruby so that I could run Metasploit. Installing Ruby was a challenge since I needed to install several dependencies before I could install RubyGems. Fortunately Simon had the answer. When I cranked up the GUI version of Metasploit, the GUI seemed slow and the console messages showed Ruby to be busier than I thought it should be. I hadn’t asked it to do anything yet! Maybe this will help. Then again, maybe upgrading to the latest version of Metasploit (3.1) will help.

KeePassX – The Official KeePassX Homepage

KeePassX saves many different types of information, e.g. user names, passwords, URLs, attachments and comments, in one single database.

Yesterday I got around to installing KeePassX on my CentOS server. The RPM version worked fine but I had to manually create a menu item. For fun I downloaded the new versions of the KeePassX icon. For a very brief time I thought about compiling KeePassX from source code, but it looks like that would be a lot of work. It uses the Qt library and qmake. I would prefer to set it up in Eclipse, but that looks complicated.

KeePassX is a port of KeePass and it read the KeePass database on my USB stick without a problem. It maintains the same look and feel as the original program so that is a big advantage on the learning curve for me. KeePassX has everything I use except for the global auto-type hot key and the plugins.

Linux Tip: Replacing GKSUDO for CENTOS users

One of the annoying things about maintaining CentOS installations is performing system maintenance as the super user from the command line. Don’t get me wrong; I was programming before graphical interfaces (BGI). The command line is a good and trusty way to perform maintenance, and as long as everything works you can get by with a minimum of memorization. But since most of us live in an after graphical interfaces (AGI) world and do not practice our Linux command line knowledge on a daily basis, we quickly get rusty on the tricks of the trade and yearn for an easier way, something with a fast learning curve. This is precisely why we have graphical interfaces.

For reasons I did not understand until today, CentOS does not make it easy to run graphical programs such as nautilus and gedit as the super user. Ubuntu offers a fairly simple way to create menu items that start graphical programs as the super user: gksudo. CentOS does not offer this utility in either version 4 or 5. A similar utility, kdesu, was offered in CentOS 4 but is not offered in CentOS 5. Opening a terminal window and running sudo is a pretty clumsy option, so I was pretty sure there was a better way! I wanted a menu item, like the other system maintenance menu items, that would authenticate me before running an application as the super user.

Today I found the answer. Matt Hansen wrote a tip, “How to run a program from GNOME menu with root privileges,” back in 2004. The tip uses a utility called consolehelper. You have to create a couple of configuration files, but the whole process can be completed in about five minutes. Interestingly, today was the first time I found a reference claiming that consolehelper is the “proper” way to solve the “missing” gksudo problem.
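For the record, the setup looks roughly like this, using gedit as the example application. The file contents below are my assumptions based on a typical CentOS layout, so check the tip for the authoritative version; everything here is done as root:

```shell
# Sketch only -- the PAM lines and file names are assumptions, not copied from the tip.
# 1. A PAM service file so consolehelper can authenticate you:
cat > /etc/pam.d/gedit <<'EOF'
auth       include      system-auth
account    include      system-auth
session    include      system-auth
EOF

# 2. A console.apps file (same name) telling consolehelper what to run, and as whom:
cat > /etc/security/console.apps/gedit <<'EOF'
USER=root
PROGRAM=/usr/bin/gedit
SESSION=true
EOF

# 3. The menu item then points at a symlink to consolehelper; launching it
#    prompts for the root password before starting the real program:
ln -s /usr/bin/consolehelper /usr/local/bin/gedit
```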

Notes on Setting up the Eclipse C++ IDE on Linux

Since I had recently set up my laptop with the C++ version of Visual Studio 8 Express, I was curious about setting up a similar IDE environment on Linux. I initially tried to set up Anjuta DevStudio and failed miserably. I am running CentOS 5.1 and there does not appear to be a recent RPM of Anjuta. I stumbled badly when I tried to manually install the dependencies and quickly became inspired to look for an IDE solution that would set up as easily and quickly as Visual Studio Express. Eclipse was the obvious answer.

So I went to the Eclipse site and downloaded the Linux version of the Eclipse IDE for C/C++ Developers. After I had uncompressed the file I tried running Eclipse, and it did not work. It complained that my version of Java needed to be at least 1.5. Although I had installed a newer version of the Java JRE, Eclipse was finding the 1.4 version. To get Eclipse to work I had to modify the PATH statement so that it would find the version in “/usr/java/jdk1.6.0_03/bin” first. The best way I found to fix this problem was by modifying the .bash_profile file and adding the following statement:

export JAVA_HOME=/usr/java/jdk1.6.0_03

and modifying the path statement to read:

export PATH=$JAVA_HOME/bin:$PATH

After I logged out and logged back in, I could start Eclipse. To test my Eclipse setup I decided to use the Hello World program for CPPUnit. This is the traditional Hello World program with a little extra, a C++ unit testing framework. The steps I performed to build this program are:

  1. Created a new C++ Project. In my case I called it HelloWorldCPPUnit.
  2. Next I created a “Source Folder” that I called “src” and a “Source File” in that directory that I called “HelloWorldCPPUnit.cpp”. I copied all of the sample source code into the file and saved it.
  3. Before you compile this program you need to download and install cppunit. The instructions for installing it are straightforward but you will need to do a few more things to get it to work with Eclipse.
    1. You will need to modify the project settings for the GCC C++ Compiler-Directories in Eclipse to add the path to the include files, “/usr/local/include/cppunit”. This adds a “-I” parameter for the compile.
    2. You should run the command, “./cppunit-config --libs”, to see the library linking information. In my case it showed “-L/usr/local/lib -lcppunit -ldl”. I modified the project settings for the GCC C++ Linker-Libraries in Eclipse to add these libraries, cppunit and dl, and the library search path, “/usr/local/lib”.
  4. The final setup step was to tell CentOS where to find the cppunit shared library. At this point the program will build but will not run because CentOS cannot find the run-time library for cppunit. The cppunit installation creates a shared library and puts it in the “/usr/local/lib” directory. To tell CentOS where to find it I had to do the following steps.
    1. As the root user, I created a file called “local.conf” with one statement, /usr/local/lib, in it. I saved this file in the “/etc/” directory.
    2. Then I ran the command, “/sbin/ldconfig”. This tells CentOS to update the links to the shared libraries.
  5. If everything is set up properly the program will build and run the simple unit test.
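For reference, the Eclipse settings above boil down to something like the following compile and link commands; this is a sketch of what CDT generates behind the scenes, not a literal copy of its build output:

```shell
# -I comes from the compiler Directories setting, -L/-l from the linker Libraries setting.
g++ -I/usr/local/include/cppunit -c src/HelloWorldCPPUnit.cpp -o HelloWorldCPPUnit.o
g++ HelloWorldCPPUnit.o -L/usr/local/lib -lcppunit -ldl -o HelloWorldCPPUnit
./HelloWorldCPPUnit   # runs the simple unit test
```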

Overall, Eclipse with CDT is slightly more difficult to set up than Visual Studio Express. Most of my difficulties occurred when I tried to go a little beyond the default configuration, and recently I tried to go slightly beyond the default configuration for Visual Studio Express, too. Since I had minor difficulties setting up both packages, my gut feeling is that it was slightly easier to find answers to setup problems on the Internet for Visual Studio because there is a larger developer community specializing in it. Of course, your mileage will vary! 😉

Update on GroundWork Open Source Installation

I do have a problem putting things down. Yesterday I wrote a post about updating to the latest version of GroundWork Monitor Open Source and the problem I had with resolving three service checks, local_mysql_database_nopw, local_process_gw_feeders, and local_process_snmptt. Today I fixed them and here’s what I did:

  1. To resolve the local_mysql_database_nopw alert I went to the Nagios resource macro, USER6, and made its value null. This service check uses the value of USER6 as the MySQL password. The MySQL password is not set in the VMware appliance version, so the correct value is null.
  2. To resolve the local_process_gw_feeders alert I fixed the feeder script so that it would find its included files, then ran it in the background. My final fix was to modify the run script in the feeder-nagios-status folder to start the feeder when the service is started; I think this is the right place. I also changed the service check parameters to allow 1 to 3 processes to be running. The eventlog process is a Pro feature.
  3. To resolve the local_process_snmptt alert I installed net-snmp and snmptt. Then I modified the parameters of this service check for this host so that it was happy with 2 to 3 services running.

The GroundWork server has been running for a couple of hours without alerts. Yea!

GroundWork Monitor Open Source


GroundWork Monitor Open Source 5.1

A complete availability monitoring solution that ensures IT infrastructure uptime while identifying issues before they become real problems. Unifies best-of-breed open source tools – Nagios, Nmap, SNMP TT, PHP, Apache, MySQL and more — through PHP/AJAX-based components and an integrated user interface to deliver the extensible functionality you require.


I finally got around to migrating my old version of GroundWork to the newest version, 5.1. GroundWork is a nice repackaging of Nagios, and the free 5.1 version includes some basic graphing via RRD. The paid support version has more sophisticated graphing and reporting and does a better job of interfacing with SNMP. I use the VM appliance since I am using this package to monitor a few web sites. It sends me an email when it sees a problem.

I was planning to write this post after I fixed three alerts, local_mysql_database_nopw, local_process_gw_feeders, and local_process_snmptt, on the local Linux server, but I am going to turn these alerts off instead. I found the problems (the feeders were missing a Perl library, and snmptt was not installed) but my fixes did not seem to hold. The system is running fine.

Configuring Subversion to use Apache SSL

My plan was to create a subversion repository on a Linux box (CentOS) to support the configuration files I use with a virtual machine running GroundWork Open Source. This took much longer than I expected. The procedure was more complicated than usual because the latest version of CentOS requires you to create a self-signed certificate the old way; genkey and crypto-utils are no longer available.

The first step is to install subversion and configure Apache. I installed subversion and the Apache server module, mod_dav_svn, using the package manager. 

  1. I wanted one repository.
  2. I wanted to see the projects in it by typing

My initial stumbling block was figuring out where to put the repository. After fumbling around looking for a recommendation I settled on /usr/local/svn as a logical choice. So I opened a terminal window as root and created the repository, repos, with the following commands:

svnadmin create /usr/local/svn/repos
chown -R apache.apache /usr/local/svn/repos

Next I imported a template directory structure with subdirectories for branches, tags, and trunk that I use for all projects.

svn import project1 file:///usr/local/svn/repos/project1 -m "Initial import"
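The template is just the standard Subversion trunk/branches/tags layout; creating it from scratch looks like this:

```shell
# Standard skeleton for each new project before the initial import.
mkdir -p project1/trunk project1/branches project1/tags
ls project1
```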

To configure Apache to support subversion you need to edit the /etc/httpd/conf.d/subversion.conf file. The biggest problem I had with the example in the subversion manual was figuring out that I needed to use the SVNPath statement rather than the SVNParentPath statement. These are the changes I made in this file.

  1. Changed the Location to /repos.
  2. Added the statement SVNPath /usr/local/svn/repos.
  3. Followed the directions in the subversion manual to set up basic HTTP authentication.
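After those changes my subversion.conf looked roughly like the following. The AuthName and AuthUserFile values are my own illustrative choices, so treat this as a sketch rather than a copy of the real file:

```apache
<Location /repos>
   DAV svn
   SVNPath /usr/local/svn/repos
   # Basic HTTP authentication per the subversion manual:
   AuthType Basic
   AuthName "Subversion repository"
   AuthUserFile /etc/svn-auth-file
   Require valid-user
</Location>
```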

After restarting the httpd service you should be able to browse the repository using your web browser. The final step was to set up the web server to support SSL using a self-signed certificate. I found several tutorials out on the web. They all follow the same general procedure.

  1. Generate your private key
  2. Generate your Certificate Signing Request
  3. Generate a new key from your private key without a passphrase. You need this to start the Apache web server without prompting.
  4. Move the certificate and the insecure key over to the /etc/httpd/conf directory and change the permissions on the files so that root is the only one who can read them (i.e., chmod 400).
  5. Edit the /etc/httpd/conf.d/ssl.conf file and tell it to use the new certificate and key file.
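Condensed into commands, steps 1 through 3 look something like this. The file names, passphrase, and the -subj value are illustrative, and -subj simply avoids the interactive prompts:

```shell
# 1. Private key with a passphrase (pass: arguments keep the sketch non-interactive).
openssl genrsa -des3 -passout pass:changeit -out server.key 1024
# 2. Certificate Signing Request (note the "-new" parameter the tutorial was missing).
openssl req -new -key server.key -passin pass:changeit \
        -subj "/CN=svn.example.com" -out server.csr
# 3. The same key without the passphrase, so httpd can start unattended.
openssl rsa -in server.key -passin pass:changeit -out server.key.insecure
# Self-sign the CSR to produce the certificate used in steps 4 and 5.
openssl x509 -req -days 365 -in server.csr -signkey server.key \
        -passin pass:changeit -out server.crt
```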

The tutorial I used was at  The only change I made to this procedure was to add the “-new” parameter when I was creating a CSR. After restarting the httpd you should be able to browse your repository using

Obay Home » Blog Archive » Installing VMWare Player on Ubuntu

If you get the following error message:

/usr/lib/vmware-player/bin/vmplayer: /usr/lib/vmware-player/lib/ no version information available (required by /usr/lib/

Apply the following fix

mv /usr/lib/vmware/lib/ /usr/lib/vmware/lib/
ln -sf /usr/lib/ /usr/lib/vmware/lib/
mv /usr/lib/vmware/lib/ /usr/lib/vmware/lib/
ln -sf /lib/ /usr/lib/vmware/lib/


Yup, this works for CentOS 5. I upgraded to Server 1.04 today and had to go back and fix things again. This is a different fix from the one I used previously, where I renamed the file and let VMware find a suitable library. I am guessing that both solutions probably end up using the correct library, but this one looks like a more direct approach.

What is good for Ubuntu Feisty is also good for CentOS 5!

The Joe writes:

The install goes great, but when I run the server I get the following error:
/usr/lib/vmware/bin/vmware: /usr/lib/vmware/lib/ no version information available (required by /usr/lib/

As far as I can tell, the that gets installed to the /usr/lib/vmware/lib/ directory is the wrong version; your system should have a current version installed. To fix this just delete or rename from /usr/lib/vmware/lib/

» Ubuntu Feisty Vmware Server and

I was getting the same error message under CentOS 5. Thinking that a solution for Ubuntu Feisty might work for CentOS 5, I gave it a try and was pleasantly surprised. I renamed the file and restarted the VMware server. The error message no longer appears and the server appears to be working fine. It also fixed the bigger problem I was having with accessing the virtual machine from PCs on the same sub-network other than the host.

Bill’s Grand Adventure

I finally got motivated to resuscitate my Ghettobox2006. It took a little debugging but I finally got it to recognize my SATA drive. My plan was to use this box as a general-purpose Linux box running several VMware guests. The problem was that I could not get the box to boot, and it was a strange problem: I had the motherboard working in another case with an IDE drive and a SATA drive, but when I moved the motherboard and the SATA drive to a new case it would not boot. The SATA drive was not recognized by the BIOS and acted like it was not getting power. I wasted a fair amount of time trying to figure out the source of the problem but eventually had to go work on higher-priority tasks.

Last weekend I got an idea on how to fix the problem and went back to working on the box. My idea did not work but I did find the problem. The BIOS had the SATA controller turned off. How did that happen? Well, it boots now!

So I was off to the races. A while back I decided to use CentOS for the host, and I had already downloaded a version 5 DVD. I did not have a big reason for selecting CentOS other than being slightly more familiar with CentOS/Fedora/Red Hat than with Ubuntu and SUSE. My installation was a little unusual since I had three partitions on the disk I wanted to keep: a W2K partition, a partition with several existing virtual machines, and an empty partition for a future operating system. I had about 60 GB of free disk space left for CentOS. I chose to install the standard CentOS Desktop. The installation went smoothly. I was pleased to find out that I could still boot to the W2K partition from GRUB if I wanted to. Dual booting Linux and Microsoft used to be so funky.

Along the way I found a solution for an interesting Java problem. After I finished installing the operating system, I cranked up the web browser and Firefox told me that I needed the Java plugin. Reluctantly I downloaded the plugin and installed it. The Java plugin installation is about as dorky as it comes. Been there… done that… you mean I have to do this again? This is one area where Windows really shines over Linux. Surprise… surprise, the Java plugin did not work. To complicate the matter, there was no error message either. I was a little annoyed, so I tried to open the Java control panel. It did give me an error message: it could not find the library it needed. A quick search of the Internet found two potential solutions: I could either install a symbolic link to the library or install compat-libstdc++-33. I installed the compat library since it may fix other problems I do not know about yet. I just want the standard stuff to work without a lot of fiddling; sometimes that can be quite a challenge. Now when I validate the plugin at the Java site, it works as expected.

I will talk about my adventures with VMware in another post. I still have some kinks with the networking to work out. I was pleased to find out that all of my virtual machines worked. Even the W2K virtual machine I created using VMconverter worked.

Hard Disk MTBF: Flap or Farce?


Data sheets for hard drives have always included a specification for reliability expressed in hours: commonly known as MTBF (mean time between failures), or sometimes the mean time to failure. Same difference: One way assumes that a drive will be fixed, and the other, replaced. Nowadays, this number is around a million hours for an “enterprise” hard drive. Some drives are rated at 1.5 million hours.

Now, that’s a good stretch of time. After all, a year is only 8,760 hours. One million hours comes to a bit more than 114 years. Some may be scratching their heads, since the hard drive itself has only been around for 50 years (IBM’s giant 350 Disk Storage Unit for its RAMAC computer). This can be confusing.

Instead, the MTBF is a statistical measure based on a calculation extrapolated from less-lengthy readings. It all means that drives are very reliable, with a failure rate well under 1 percent per year. Go Team Storage!

However, several papers covering large-scale storage presented at FAST ’07, the USENIX conference on File and Storage Technologies, held recently in San Jose, Calif., are kicking up a stir online about MTBF.

The Best Paper award was handed to “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?” by Bianca Schroeder and Garth Gibson of Carnegie Mellon University in Pittsburgh.

Their study tracked a whopping set of drives used at large-scale storage sites, including high-performance computing and Web servers. The data suggests that a number of common wisdoms surrounding disk reliability are wrong.

For example, they found that annual disk replacements rates were more in the range of 2 to 4 percent and were as high as 13 percent for some sites. Yikes.

Source: Hard Disk MTBF: Flap or Farce?

I found this fascinating article about MTBF and disk failures yesterday. I have known for some time that you must take MTBF figures with a grain of salt; disk drives appear to fail more often than the MTBF figures would lead you to believe. The differences between “enterprise” disk drives and “retail” disk drives appear to be indistinguishable in the real world. Yet as IT professionals we will always recommend the component with the higher perceived quality even though we have misgivings about the statistics. For most businesses the cost of downtime due to a disk failure is much higher than the additional cost for quality. Although we hate to admit it, there is a significant subjective component to our recommendations.
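The arithmetic behind the article’s two headline numbers is worth making explicit:

```shell
# 1,000,000 hours expressed in years, and the annual failure rate an MTBF
# of 1,000,000 hours implies (8,760 hours per year).
awk 'BEGIN { printf "MTBF in years: %.1f\n", 1000000/8760 }'            # → 114.2
awk 'BEGIN { printf "implied annual failure rate: %.2f%%\n", 100*8760/1000000 }'  # → 0.88%
```

Which is why the observed 2 to 4 percent replacement rates in the paper are so striking: they are several times the rate the data sheets imply.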

Accessing Windows Or Samba Shares Using AutoFS


You already installed Linux on your networked desktop PC and now you want to work with files stored on some other PCs in your network. This is where autofs comes into play. This tutorial shows how to configure autofs to use CIFS to access Windows or Samba shares from Linux Desktop PCs. It also includes a tailored configuration file.

Link to Accessing Windows Or Samba Shares Using AutoFS

Set Up Ubuntu-Server 6.10 As A Firewall/Gateway For Your Small Business Environment


This tutorial shows how to set up a Ubuntu 6.10 server (“Edgy Eft”) as a firewall and gateway for small/medium networks. The article covers the installation/configuration of services such as Shorewall, NAT, caching nameserver, DHCP server, VPN server, Webmin, munin, Apache, Squirrelmail, Postfix, Courier IMAP and POP3, SpamAssassin, ClamAV, and many more.

Link to Set Up Ubuntu-Server 6.10 As A Firewall/Gateway For Your Small Business Environment

I am almost curious enough to try this. Throw in a little Samba and you have a pretty good SBS competitor, although it might be a tossup whether to use an inexpensive NAS box for the file sharing instead. The turn-off was the 11 pages of cut-and-paste instructions. Of course, the entire installation is done via the geek’s old friend, the command line. I guess my age is showing; I am spoiled by the ease of using wizards to install and maintain computer systems.

VMware Delivers Free VMware Server


I have become a fan of VMware. I have used VirtualPC in the past, but became interested in VMware’s products when they offered VMware Player for free. When they offered free usage of the server product and encouraged the VMTN appliance community, I switched.

My use has generally been in two areas:

  1. Testing new slipstreamed installations of Win XP.
  2. Playing with pre-built appliances.

The first appliance I started playing with was Asterisk@Home, now known as Trixbox. I have downloaded several versions over the last couple of months using BitTorrent. There is a bit of a learning curve for this product and I did not want to waste time setting up a test box. There is a market for supporting this product but I do not have a customer right now.

The second appliance I have started playing with is a couple of Nagios/GroundWork variants. Nagios is an open source network monitoring program and GroundWork Open Source is a free version of a commercial variant of Nagios. Due to some recent discussions with my son in which he maintained that our internet access sucked, I decided to investigate the matter further. I originally downloaded a prebuilt GroundWork Open Source system by Tony Su of Su Network Consulting. The good news is that he had built it. The bad news is that he released it as a virtual disk drive rather than a virtual appliance, so it was a little harder to set up than Trixbox. To compound the problems, the network adapter needed to be configured before it would do anything. Trixbox configures the network adapter during startup, so this was new territory for me since this was a SUSE box.

Along the way I found a posting about baywatchos. It was a GroundWork Open Source system built upon CentOS, which is the same operating system used by Trixbox. My familiarity with CentOS and the fact that it had Webmin already installed were pluses for me. The author even provided a nice Getting Started document in English. After a brief configuration I had it working. Gianluca, you did a fine job!

My next project will be to move these virtual appliances to my ghetto box and see how well they run. This should be amusing; GroundWork has some pretty stiff hardware requirements.

Helix – Incident Response and Computer Forensics Live CD by e-fense™, Inc.


I was researching the Linux command dd and GParted because I wanted to migrate some data on old disk drives to my new disk drive, and to see if I could copy a drive and debug a hardware/software problem on a PC I am working on. There are existing Windows solutions, but I was curious about the state of the art on Linux.

I originally tried Ubuntu but GParted did not copy the partition for me?! I then went to the GParted Live CD and it worked for the NTFS partition I was playing with. The Linux partition was a bit more complicated: it is the LVM partition I used for my Fedora Core 4 installation, and GParted will not copy LVM partitions. Hmm… bummer!

I briefly tried the LVM commands to add a new LVM physical drive to the volume group and move the data from the existing LVM physical drive to the new drive. It did not work for me, though with some more work I am pretty sure I could make it work, since that is one of the things LVM should be able to do. However, my interest in cloning the drive was very similar to copying a drive for forensic work, so I decided to see what the pros use for creating copies of disk drives. That led me to Helix.

I had previously downloaded and played with Helix 1.5 and 1.6. Helix 1.6 (Knoppix-based) had problems with correctly recognizing my CD-ROM, so I downloaded the newest version to see if it did a better job with the CD-ROM and whether they had a frontend tool for dd/dcfldd. The CD-ROM worked and I found a frontend acquisition tool called Adepto. Adepto is an improved version of AIR – Automated Image and Restore – which is also on the disk. So I cloned the old hard drive.

Mounting the cloned drive was a little hard under Helix. I had to run:

sudo vgscan
sudo vgchange -a y

before I could:

sudo mount /dev/VolGroup00/LogVol00 /media/sda3

Mounting the partition under Ubuntu was much easier. Now to go clone a copy of the PC’s disk drive I want to troubleshoot.
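Underneath Adepto and AIR the actual cloning is plain dd (dcfldd adds hashing and logging on top). Here is a file-backed illustration of the same idea; real imaging would use device paths like /dev/sda instead of files, so double-check your arguments before running anything like this against hardware:

```shell
# Make a small fake "drive", image it, and verify the copy is bit-identical.
dd if=/dev/zero of=source.img bs=1K count=16 2>/dev/null
dd if=source.img of=clone.img bs=4K conv=noerror,sync 2>/dev/null
cmp -s source.img clone.img && echo "images are identical"
```

conv=noerror,sync is the forensic-minded part: keep going past read errors and pad the bad blocks so offsets in the image still line up with the original.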

RE: Linux vs. SBS: Switch!

Excellent point brought up in the comments section today by Josh:

For example, Microsoft wants to argue about stability vs. Linux. In nearly all Linux servers we manage that comparison is laughable. Now, compare RPC-over-HTTP functionality with Linux? You can’t, no such thing on Linux! Where is that among the facts?

This is something that I’ve tried to make very painfully clear in my Linux presentations for SBSers in Florida groups. Here is the thing about winning in small business: you have to know your customers. You also have to know your Microsoft and understand certain “facts”. So here is a little competitive how-to on Linux vs. SBS.

Watch Where You Get Your Facts

First and most important thing to understand about Microsoft’s Get The Facts site is that those reports have been paid for by Microsoft and are to a large extent questionable at best and outright false in many respects. Second thing to remember is that those reports are not written or targeted for the SMB market at all – they are written to discourage enterprise and high-end markets from moving their commodity-line servers to Linux and discourage Unix-shops from going to Linux instead of Microsoft. If you’re an SBSer, you will not find your facts there.

Know Your SWOT

Know your strengths, know your weaknesses… but more importantly, know what is not your weakness.


When bidding against Linux you are really competing against this: “Joe Consultant told us that Linux is free.” They are correct, many Linux distributions are free. So in most cases it will be $599 vs. $0, for the purchase price at least. So on the face of things, Linux wins because it’s free.

When you dig a little deeper you find out that the “free” is the acquisition cost. If you are losing a client over $599, this is likely a client that you do not want for your business to begin with. If the server costs $1,800 and your labor to set them up and train them for a week will cost them another $4,000, that up-front licensing cost of $599 is going to be less than 10% of the total solution. This is generally what Microsoft talks about when they mention TCO, total cost of ownership.

But we know our small business owners, don’t we? The same folks that will sign up for a plan with a “free cell phone” (MSRP $99) but agree to a two-year contract that costs $20 a month more. If you really want to compete against Linux, give them a 10% discount on your labor, which will outright displace the licensing costs. Show them that they will be paying the Microsoft penalty anyhow, as it’s very hard to impossible to buy a PC without a Microsoft OS to begin with.

Upgrades and Migrations

When you bid against Linux you bid against free upgrades, forever, and easy migrations. That’s at least what gets put on paper and what the Linux guy will say. The truth is much different. Here are a few facts that you might want to consider about some of the most popular Linux distributions out there:

Fedora – Fedora is a free version of Red Hat Linux. Red Hat Enterprise Linux is a fully tested and supported distribution of Linux that retails between $350 and $3,000 per server. So what’s the difference? Red Hat uses Fedora as their bleeding-edge distribution; they use it to roll out experimental packages and see what breaks. The software itself is solid, but it is not elegant by a long shot. For example, consider that there is no migration path from version 3 to 4 to 5 – if you Google for “upgrade from FC3 to FC4” you will find a number of hacks that show you how to fool the dependency checks and hack your way up. Not that it won’t work, but what happens if it fails? Remember, unsupported. There is literally nobody you can call.

Debian – Debian used to be the most popular distribution but has recently been displaced by its Ubuntu cousin. The trick with Debian is that they are so fanatical about being free that they eliminate any commercial, restricted, or non-GNU software from the base distribution. The base distribution is so severely outdated (by years, in some cases) that nearly everyone seriously running Debian is doing so with the untested or experimental branches of the code. Even if you’re not a Linux person you can imagine what that’s like. Again, virtually unsupported except for the MVP-like effort.

Gentoo – The concept here is that this is the most optimized version of Linux you can get, because virtually everything from the kernel on up is built by running an emerge command. What emerge actually does is pretty cool: it downloads the source code along with a build spec and compiles it against your hardware, so on a fairly loaded box you are constantly affecting performance by compiling your own code. Do you trust that your security patches are deployed as full recompiles of the source code? I don’t even trust most binary patches.

Ubuntu – The darling of the Linux world at the moment. It is built on the Debian core with pretty, integrated interfaces, and its claim to fame is the ability to roll out LAMP (Linux, Apache, MySQL and PHP) in 15 minutes. Pretty, but unsupported.

Those are the basic Linux distributions you will likely come up against. Every now and then someone will also propose an Enterprise Linux version, a free community recompile of the popular Redhat Enterprise Linux, such as CentOS or WhiteBox Enterprise Linux. They are free, but again, unsupported as well.

So here is a real world scenario for you. The upgrade for the above is free– in all cases. They will download an ISO, burn it, stick it in a Linux server and after the reboot the system will be upgraded. All free! Yay.

As far as the technical discussion is concerned, they are right. Here is the dirty secret behind this, though, that nobody talks about: for most scenarios Linux doesn’t migrate, Linux overwrites. Now let’s say your consultant tweaked the /etc/rc.d/rc.local file to automatically delete specific files on the server. A Linux distro upgrade would generally put the new file in its place and rename the original to rc.local.bak. Let’s say you wanted something special done with your web server: your /etc/httpd/conf/httpd.conf file faces the same options. It gets overwritten, or the upgrade saves a copy as httpd.conf.orig, or it gets tweaked in some other way.
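To make that overwrite risk concrete, here is a minimal pre-upgrade snapshot sketch. It is not part of any distro’s tooling; the file list and backup location are illustrative defaults you would adjust for your own server.

```shell
#!/bin/sh
# Sketch: copy hand-tweaked config files aside before a distro upgrade,
# so a post-upgrade diff shows exactly what got overwritten.
# CONF_FILES and BACKUP_DIR are illustrative; override them as needed.
CONF_FILES="${CONF_FILES:-/etc/rc.d/rc.local /etc/httpd/conf/httpd.conf}"
BACKUP_DIR="${BACKUP_DIR:-${HOME:-/tmp}/pre-upgrade-$(date +%Y%m%d)}"

mkdir -p "$BACKUP_DIR"
for f in $CONF_FILES; do
    # Skip files this particular box does not have
    [ -f "$f" ] || continue
    # --parents (GNU cp) recreates the /etc/... hierarchy under $BACKUP_DIR
    cp --parents "$f" "$BACKUP_DIR"
done
# After the upgrade, compare: diff -r "$BACKUP_DIR/etc" /etc
```

Run it before the ISO goes in; afterwards a single recursive diff tells you which tweaks the upgrade silently replaced.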

So yes, the upgrade is free. But the time to get this done is not. More importantly, because these migrations are generally done on a per-site basis (OK, these guys have Redhat, these are on Fedora, these are on Gentoo) the migration checklist is all but nonexistent.

The truth about Linux deployments is that they are very much done on a per-case, as-needed basis. The beauty of the system (unlimited flexibility) is also its dagger, because with endless tweaking of the system the documentation part of the setup goes out the window. And when the migration goes bad with the freebies above, you will likely have only newsgroups and mailing lists to turn to.

Finally, migrations nearly always include more than the base OS. The reason you deploy a Linux system is to get a flexible, fast and cost-effective server. Well, Linux developers don’t think the same way business owners do. Linux developers try to adopt new technology, provide the newest features, and create a system that is easiest and fastest to develop for. So when that new distribution comes with MySQL 5.0 and PHP 5.0, will your PHP 4 script designed against MySQL 3.1 still work? Maybe, maybe not. Who do you contact to find out? The webmaster that took the script from some random site? Nope. The commercial software developer? Unlikely; they only support official distributions like Redhat Enterprise Linux and SuSE. Who do you turn to? Good question to ask while providing a competitive bid.

How do you do application migration compatibility tests on Linux? You install the new version and try to hack it into working. If you’re lucky, it will just work. If you’re not lucky, what’s the alternative? Another question for the stack. This is not just the U (uncertainty) part of FUD; there genuinely is no good, reliable, documented process for this in Linux. For years Linux distributions have fought amongst themselves trying to develop a unified way that Linux is deployed, with the same file system layout, dependency checks and package management. Today you’re still more likely to find multiple package management systems (yum, up2date, apt).
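That fragmentation is easy to demonstrate. The sketch below (package names are illustrative) asks which of the package tools mentioned above is present on a host and prints that system’s version of one logical task, installing Apache; the fact that both the tool and the package name change per distribution family is exactly the checklist problem.

```shell
#!/bin/sh
# Sketch: one logical task ("install Apache") maps to a different tool
# and a different package name on each distribution family.
pkg_install_cmd() {
    if command -v yum >/dev/null 2>&1; then
        echo "yum install httpd"           # Redhat / Fedora family
    elif command -v apt-get >/dev/null 2>&1; then
        echo "apt-get install apache2"     # Debian / Ubuntu family
    elif command -v emerge >/dev/null 2>&1; then
        echo "emerge www-servers/apache"   # Gentoo
    else
        echo "unknown package system"      # none of the three found
    fi
}

pkg_install_cmd
```

A migration checklist has to carry a row like this for every package on the box, per distribution family.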


For the most part this is your biggest strength. Small business owners, and business people in general, have habits that are hard to change. Going from a Windows world to a Linux world is a big transition in anything more complex than a P2P environment. It’s easy to replace a POP3 server with an onsite Dovecot deployment, but when you’re selling a new server you are selling new functionality. Here are things that you will not find in Linux.

Exchange – Your biggest advantage. There is no decent Exchange-class mail and collaboration server for Linux; the best one to date is Scalix, and it costs about as much as Exchange does. It does not provide RPC-over-HTTP, it does not provide cached mode, and it does not provide advanced connectivity to mobile devices.

ISA – For the most part, almost all Linux firewalls are connection-based firewalls; nothing provides application-level security. So yes, if you just want to block people from going to certain sites, Linux will cut it. Try to set those restrictions in place per employee, per hour (i.e., no ESPN updates for Joe between 9 AM and noon) and you’ll be SOL.

WSUS – An equivalent exists on commercial Linux distributions as a Satellite server, but almost all setups are desktop-triggered up2date updates via cron: no ability to see which software is running on which system, and no ability to restrict what goes on which workstation without manually adjusting workstations on a per-case basis. No grouping. No reporting on which patches failed, and no reporting on what may be out of compliance. These could be hacked together, but do you really want to hack your security solutions together? Do you think your customers would?
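For a sense of what that cron-triggered updating looks like in practice, here is an illustrative root crontab entry; the schedule and log path are assumptions, not a recommended policy. Note that every machine needs its own copy of this line, which is exactly the management gap described above.

```
# Illustrative root crontab line: pull all updates nightly at 3:30 AM.
# m  h  dom mon dow  command
30   3  *   *   *    /usr/sbin/up2date -u >> /var/log/up2date.log 2>&1
```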

IIS – The biggest reason to deploy LAMP is to get PHP and a free SQL server. Both of those run quite reliably on Windows as well, and you can install WAMP on Windows. My personal dev environment for Linux work is based on the Vertrigo server, which rolls out as a single install. So if that’s all you need to deploy the new forum, blog, or survey package your customer saw somewhere, this is the way to do it. And it’s free too. But there is a feature advantage here: you have a choice. ASP or PHP? On Linux you have no ASP option (they use Chilisoft, Sun’s poor hack of ASP), nor any .NET compatibility without hacking in Mono. And skip back to migrations and upgrades: what’s the guarantee that your app will run on a hacked server? Now compare that with IIS; if you’re really familiar with IIS, this is almost impossible to do. The cost of a second IIS server is not that great to begin with; Windows 2003 Server Web Edition retails for less than $300, which is likely less than two hours of any consultant’s time. You’d end up charging them more just to download an ISO and read the intro parts of the Apache documentation.

Bus Features

When I worked at Dial ISDN I used to write “If Vlad Gets Hit By A Bus” documentation for everything I did. Why? Because all of our Linux servers were so heavily tweaked that in case something happened there was no way on earth someone would be able to figure out how I had implemented my patch management, version control, monitoring, account creation and race conditions.

How much documentation will the Linux deployment come with? How long will it take someone else to replicate the setup on a new system? What commercial contacts do you have that will validate what you say about Linux? How many “user-geared” books are there on Linux that can get me going with this server immediately? SMB owners are DIY-centric: how much of this can I do through a GUI?

Final question: Give me a place to find other professional Linux consultants.

Where you have hundreds of Windows guys in every area, there are only a few Linux solution shops. Most of the “Linux guys” will be people with careers and full-time jobs who do consulting on the side and are saving you money out of the goodness of their hearts. These are also the types you turn to for support. Do you want to run your business on the goodness of strangers, or do you want a contract? If you want a contract, the savings go out the window.


Linux provides a cost-effective, flexible and powerful server operating system, and Microsoft’s FUD about it is largely a collection of paid distortions, some quite well documented as outright lies. Microsoft will not offer competitive sales support to SMB solutions that are under $10,000 in licensing, so you’re on your own. They will also not discuss any of the above because of the irrational fear that if you experience a competitive solution you might like it enough to leave Microsoft.

On the other end of the fence you have, by comparison, a relatively innovative but young solution that lacks standardization, unity and certainty in many of its supposed solutions. While the core of it is solid, the biggest gaps for small businesses are in the areas of available expertise and support systems to fall back on when there are problems. In the area of affordable business intelligence, Linux is far enough behind to make it unattractive beyond file servers, basic POP3/IMAP mail servers and popular web applications.

In the end, both sides will lie, cheat and FUD to get their points across. Your advantage is in knowing your customer, knowing their needs, and showing them the solution that will not only solve their problems but be ready for the problems they will encounter as they grow. For what it’s worth, I have been a Linux system administrator for three years longer than I have been a Windows guy, and I work on both platforms daily.

[Via Vlad Mazek – Vladville Blog]

RE: Distribution Release: Ubuntu 6.06 LTS

Right on schedule, Ubuntu 6.06, a distribution with long term support features, has been released: “Ubuntu, which has become one of the world’s most popular Linux distributions in recent years, launched its latest version on June 1 following months of intense testing. The new release is titled Ubuntu….

[Via News]

Yup! I downloaded this puppy. I have been pretty happy with Ubuntu 5.10, so I was curious what 6.06 would bring. Actually, I have not found anything significant to me; in fact it seemed a little slower. I used Azureus and left it seeding for an additional four hours until I had given as much as I had received.