Fixing WordPress SEO Sitemap Problems

I decided to switch over to WordPress SEO (Yoast) yesterday and ran into a slew of problems with its sitemap generator: a 404 error, a blank screen, and sitemap.xml not being properly redirected to the new sitemap_index.xml. The first problem led me to this Yoast knowledge base article, My sitemap is giving a 404 error, what should I do? I fixed it by adding the code below to my .htaccess file. To fix the other two problems I added the RewriteRules for the xsl stylesheet (line 8 of the snippet) and for sitemap.xml (line 5). Now both sitemap.xml and sitemap_index.xml are properly redirected and formatted. My Google Webmaster Tools is happy!

Note: The code below is for a WordPress blog in a subdirectory called wordpress.

# WordPress SEO - XML Sitemap Rewrite Fix
RewriteEngine On
RewriteBase /wordpress/
RewriteRule ^sitemap_index\.xml$ /wordpress/index.php?sitemap=1 [L]
RewriteRule ^sitemap\.xml$ /wordpress/index.php?sitemap=1 [L]
RewriteRule ^([^/]+?)-sitemap([0-9]+)?\.xml$ /wordpress/index.php?sitemap=$1&sitemap_n=$2 [L]
# This rewrite ensures that the styles are available for styling the generated sitemap.
RewriteRule ^([a-z]+)?-?sitemap\.xsl$ /wordpress/index.php?xsl=$1 [L]
# END WordPress SEO - XML Sitemap Rewrite Fix
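
A quick way to confirm the rewrites took effect is to request both URLs and check that each returns the generated XML instead of a 404; a simple sketch (the hostname is a placeholder):

# Both should return 200 with the generated sitemap rather than a 404
curl -I http://example.com/wordpress/sitemap.xml
curl -I http://example.com/wordpress/sitemap_index.xml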

My Natty Narwhal problems

Last week I decided to update to the latest release of my VMware version of the Ubuntu Desktop and it did not go well. During the upgrade I got a disk space error message with non-displayable characters. It was a curious message since I thought I had enough disk space, and it did not stop the process. It was when I rebooted that I encountered the major problem. I would like to tell you more about the error message, but the screen clears and I am left with a GRUB prompt. That's not nice!

So I did some research and booted using the instructions from this page, Express Boot to the Most Recent Kernel. Everything came up nicely except that my windows were missing their title bars when I came in via the NX client. To fix the booting problem, I reinstalled from the LiveCD using the simplest method. To fix the missing title bar I installed the latest version of the NX client from NoMachine NX. I also got the missing title bar when I used the VMware console window to log in. In that case I just specified that I wanted to use the Classic interface when I logged in.
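
For anyone stranded at the same prompt, a manual boot from the GRUB 2 prompt looks roughly like this; the partition, kernel path, and root device below are assumptions, and tab completion will reveal yours:

grub> ls                                # list disks and partitions to find the root filesystem
grub> set root=(hd0,1)                  # assumed partition
grub> linux /vmlinuz root=/dev/sda1 ro  # assumed kernel path and root device
grub> initrd /initrd.img
grub> boot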

Adventures with iRedMail – Part III

Recently I installed iRedMail at work so that we could include DKIM signatures in our newsletters. Every week we send a newsletter to 96,000 former customers; it takes about 13 hours to send. Yahoo is probably our most important email domain, and they want us to implement DKIM. A couple of weeks ago we started seeing Yahoo limit our sending rate, so obviously they had a problem with something in our newsletter. We re-analyzed the error codes we were getting during the newsletter mailing and implemented DKIM. The problem is fixed. Here is how I implemented this version of iRedMail.

I implemented a VMware version of iRedMail to sign newsletter emails with DKIM. I used the Ubuntu 9 server version (optimized for VMware) to build the appliance.

  1. The server works as a mail proxy in front of the SMTP server we use exclusively for the newsletter. It signs and relays the email to the existing SMTP server (see the relay sketch after this list). I kept the existing SMTP server so that I could continue to use my existing procedures for parsing the log files to identify old or obsolete mailboxes.
  2. I created iRedMail users in LDAP to relay local users to mailboxes on Exchange.
  3. My primary bottleneck is still the mail transmission speed to the Internet, about 2 emails per second. I can create newsletter emails at about 8 per second.
  4. On an old ProLiant DL350 G4, iRedMail consumes about 40% of the dual-CPU machine for four hours.
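
The relay sketch mentioned in item 1: on the gateway, the Postfix side reduces to a couple of settings. This is a minimal sketch; the internal address and networks are assumptions:

# Point the signing gateway at the existing newsletter SMTP server (address assumed)
postconf -e 'relayhost = [192.168.1.25]:25'
# Allow the newsletter generator to relay through this box (networks assumed)
postconf -e 'mynetworks = 127.0.0.0/8 192.168.1.0/24'
postfix reload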

Since I had experience installing iRedMail, it went quickly. The biggest bug I had to fix was the AWStats permissions problem on the mail.log file.

Importing Self-signed CA Certificate into Windows 7

Yesterday I opted to create self-signed certificates for my local servers. Most of my local servers already had self-signed certificates with default names, so it looked like a simple task. I found this document, Creating Certificate Authorities and self-signed SSL certificates, and in a few minutes I had created a new Certificate Authority and replaced my existing server certificate. I checked the site via my web browser and it complained about needing the Certificate Authority certificate. So I copied the CA certificate to my PC and imported it into the Trusted Root Certification Authorities using IE8. Despite a message saying it succeeded, it really didn't import the certificate. I restarted the browser and restarted the computer but the certificate refused to show up. I finally opted to log in as the Administrator and install the certificate into the Trusted Root Certification Authorities of the computer account. I suspect that the key requirement is to import into the computer account. For those unfamiliar with the process:

  1. Open a command window and run "mmc".
  2. Click File > Add/Remove Snap-in and add the Certificates snap-in.
  3. When you add the snap-in it will prompt you to select which account you want to manage. Select the computer account, the local system, and add the snap-in.
  4. Navigate to Trusted Root Certification Authorities and import the CA certificate.
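
For context, the CA and server certificate creation from the linked document boils down to a handful of openssl commands. A minimal sketch with assumed file names and lifetimes:

# Create the Certificate Authority (this ca.crt is what gets imported into Windows)
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
# Create and sign the server certificate
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt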

Add a partition to Openfiler

I keep suffering from memory loss when it comes to using Openfiler. I use it so infrequently that I keep forgetting how to add a partition. The user interface is not very intuitive, so I keep having to retrace my steps. I am posting this procedure as a reminder.

  1. Click on the link, Volumes (https://filer:446/admin/volumes.html), in the navigation menu at the top of the page.
  2. Click on the link, Block Devices (https://filer:446/admin/volumes_physical.html), in the navigation menu on the right side of the page.
  3. To add a partition to the device /dev/sda, click on the link, /dev/sda, under the Edit Disk column.
  4. At the bottom of the next page, Volumes : Block Devices : Edit Partitions, enter the data for the partition and click on the Create button.

How To Set Up A Terminal Server In Linux Using Ubuntu 9.10 And FreeNX

This article was timely. I had just installed a virtual version of Ubuntu on my ESXi server and set up VNC so I could access it. It was okay, but FreeNX is a more elegant solution. The combination of FreeNX and FireHOL to set up the firewall makes it a winner in my book.


How To Set Up A Terminal Server In Linux Using Ubuntu 9.10 And FreeNX

FreeNX is an open source implementation of NoMachine's NX Server. It is a bit more akin to Microsoft's RDP protocol than the usual VNC, so while keeping bandwidth to a minimum, it maintains good visual quality and responsiveness.

How To Set Up A Terminal Server In Linux Using Ubuntu 9.10 And FreeNX
(author unknown)
Mon, 25 Jan 2010 16:42:09 GMT

Notes on Installing the Network Monitoring Appliance

A couple of weeks ago I installed the Network Monitoring Appliance using the tutorial on HowToForge.com. Prior to installing it I was planning to give the latest community version of GroundWork Monitor, http://www.groundworkopensource.com/products/community-edition/index.html, another trial. My objectives were to have the appliance notify me of problems on a remote web server and on my local network. Although these objectives can be accomplished with a ping or an "HTTP ping", I wanted to see some network throughput graphs, and I expected to need slightly more sophisticated database monitoring in the near future. Nagios was at the core of the best solution for me since it accomplished most of my needs and I was already familiar with it from a previous trial of GroundWork Monitor. The primary attraction of the Network Monitoring Appliance over GroundWork was its much smaller resource requirements; in my environment it would be sharing a VMware ESXi server. I was also pleased to see that the appliance used JeOS. For those unfamiliar with JeOS, it is:

Ubuntu Server Edition JeOS (pronounced "Juice") is an efficient variant of our server operating system, configured specifically for virtual appliances.

Users deploying virtual appliances built on top of JeOS will benefit from:

  • better performance on the same hardware compared to a full non-optimized OS
  • smaller footprint of the virtual appliance on their valuable disk space
  • fewer updates and therefore less maintenance than a full server installation

For my installation I decided to use VMware's 32-bit Ubuntu template to create the virtual machine. The only modification to the template was to adjust the disk drive size down from 8 GB to 1 GB. As described in the HowToForge tutorial, I installed the following programs.

  1. Ubuntu 8.04.3 JeOS as OS
  2. Nagios 2.11 for monitoring and alarming
  3. Smokeping 2.3 to observe latencies and packet loss
  4. MRTG 2.14.7 to observe network traffic’s tendencies
  5. RRDTool 1.2.19 as the Round-Robin Database for storing all measurement data
  6. Lighttpd 1.4.19 as a fast, lightweight web server frontend
  7. Weathermap4rrd for illustrating the network weather
  8. sSMTP as extremely lightweight MTA for mail delivery

The installation was quick. Almost all of my challenges were in configuring the programs. Fortunately I had previous experience with the two most difficult programs to configure, Nagios and MRTG. It helps if you have a basic knowledge of Perl since most of the programs use it. Here are my installation notes.

  1. One of the first things I needed to install to make this installation go more smoothly was an editor other than vim, so that I could cut-and-paste from the tutorial into my SSH session. In my case I installed nano.
  2. The first application I configured was Smokeping. The configuration file is pretty easy to figure out and can be found at /etc/smokeping/config. If everything works you will see a nice graph of the ping statistics at http://yourip/cgi-bin/smokeping.cgi.
  3. Configuring Nagios is a bit more complicated. Since this is version 2 of Nagios, the configuration files are located at /etc/nagios2/conf.d. The main Nagios web page is at http://yourip/nagios2/. The Nagios QuickStart Document, http://nagios.sourceforge.net/docs/3_0/quickstart.html, is a good primer for the folks not familiar with Nagios.
  4. The Debian logo did not appear in Nagios next to the localhost. It showed a missing image. After a little research I figured out that I needed to install nagios-images using apt-get install nagios-images.
  5. For some reason I did not seem to have cron installed and running. This is easily solved by apt-get install cron.
  6. MRTG is useful if you have an SNMP router to poll. I used my pfSense firewall as the SNMP source. MRTG provides some nice graphs of network traffic, and its page is located at http://yourip/cgi-bin/mrtg-rrd.cgi/
  7. Configuring Weathermap4rrd is a little challenging since the documentation is sparse. It provides a clever network status graph once you figure out how to configure it, using the same data as MRTG. The network status page for weathermap4rrd is located at http://yourip/weathermap4rrd/weathermap.png
  8. I installed apticron to nag me via email about installing security updates and Logwatch to find any problems posted in the log file by the installed programs.
  9. If you plan on getting emails from Nagios when a host is down, you should test it. Duh! The easiest way to test it is to deliberately mistype the host name. If you do not get the email, you should check your Nagios configuration, sSMTP configuration, and the SMTP log file.
  10. sSMTP is easy to configure and use (a minimal example follows this list). In the simplest configuration you point it at the SMTP server you are sending your emails to. If you are sending emails to more than one domain, you need to connect to an SMTP server that will relay emails for you.
  11. I installed PHP version 5 to see how hard it would be to install under Lighttpd. I followed the instructions on the Lighttpd wiki and PHP appears to be running without problems. Most of these network monitoring programs have newer versions in PHP. Some day in the future I plan to migrate to the PHP versions of Nagios and weathermap but it is not necessary for this small network.
  12. I created a simple navigational menu on the main page with links to the various network management status pages. It is much easier to use this menu than to remember the addresses of the different status pages.
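
Here is the sSMTP configuration mentioned in item 10; a minimal sketch of /etc/ssmtp/ssmtp.conf in which every value is a placeholder:

# /etc/ssmtp/ssmtp.conf - a minimal sketch; all values are placeholders
# Who gets mail addressed to system userids
root=admin@example.com
# The mail server that relays for this appliance
mailhub=smtp.example.com
# Present outgoing mail as coming from this domain
rewriteDomain=example.com
# This host's name
hostname=monitor.example.com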

Adventures with iRedMail – Part II

In the first installment of Adventures with iRedMail I got it to send emails but I left the MS Exchange integration for another day. Since then I have updated my DNS zone with the DKIM information, set up local DNS information, decided on naming standards, and reconfigured Postfix several times before I got it right.

Updating the DNS with DKIM information

This task was relatively easy. I copied the DKIM information in the iRedMail.tips into a trouble ticket with my web provider. About 24 hours later it was ready to test. I sent emails to my Yahoo account, sa-test@sendmail.net, and autorespond+dkim@dk.elandsys.com. Although the email from dk.elandsys.com was the first to respond, it said the signing did not work. When I checked my Yahoo account, the headers said the email was signed correctly with DKIM. Ironically, the return email from sendmail.net ended up in my Junk Mail folder; it said that everything worked correctly. For one more test I created a Gmail account and sent an email to it, too. It also said the email was signed correctly.
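
You can also check that the public key actually made it into DNS; a quick sketch, assuming the selector is named dkim (verify yours in iRedMail.tips):

# The TXT record should contain the v=DKIM1; ... p=<public key> data
dig +short TXT dkim._domainkey.mybusiness.com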

Local DNS, naming standards, and more Postfix problems

The next challenge was to configure Postfix to accept both local email addresses and email addresses for the Exchange server under the same domain. I used Postfix Admin to create aliases that pointed to the Exchange server emails (e.g. myemail@mybusiness.com points to myemail@mybusiness.local). Postfix complained about the DNS records for my Exchange server, so I added mybusiness.local as a relay domain and set up a pseudo DNS so that Postfix could find the IP address for my Exchange server. In my case I decided to let my pfSense firewall act as a local DNS server to serve up the local IP addresses. At this point I can email everyone from a local iRedMail account, but I cannot get replies until I set up iRedMail as the SMTP gateway and the Exchange server as a relay domain.
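
The relay-domain piece reduces to a transport map entry; a minimal sketch, with the Exchange server's address assumed:

# Relay mybusiness.local to the Exchange server (IP address assumed)
postconf -e 'relay_domains = mybusiness.local'
postconf -e 'transport_maps = hash:/etc/postfix/transport'
echo 'mybusiness.local smtp:[192.168.1.10]' >> /etc/postfix/transport
postmap /etc/postfix/transport
postfix reload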

Postfix domain checks get me again!

It took me a long time to figure this out. When I changed the firewall to redirect SMTP traffic to the Postfix gateway, I could not get any mail. I thought I had messed up the firewall settings, so I kept trying different settings. I was pretty limited in my testing tools: if I could telnet into port 25 I could see what was happening, but I could not make the connection work as long as I was on this side of the firewall. Fortunately I found a solution on the Internet. The dnsqueries.com site provides a page, http://www.dnsqueries.com/en/smtp_test_check.php, that allowed me to check my SMTP connection from their server. Within minutes I figured out that my email server did not like my sender's domain. In fact it did not like anyone's domain. This was the same type of problem I had with the Postfix recipient domain check, so I removed the sender domain check and the emails started flowing.
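
If you do have a shell account outside your firewall, the manual equivalent of that test is just an SMTP conversation by hand; a sketch with placeholder addresses:

# Talk SMTP by hand from outside the firewall; a sender-domain rejection
# shows up as a 4xx "domain not found" response after MAIL FROM
telnet mail.mybusiness.com 25
# then type:
#   EHLO test.example.com
#   MAIL FROM:<someone@example.com>
#   RCPT TO:<me@mybusiness.com>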

What have I achieved?

  • I have a gateway that checks all incoming mail for spam and viruses. Postini offers a similar service for about $1 per user per month. We use MXLogic at work.
  • I have an alternate email server that allows me to send email that passes the SPF and DKIM checks. One of the reasons I investigated iRedMail was to use it for sending out a newsletter at work. Like many Internet retailers we get a chunk of our business as a result of our biweekly newsletter. In our case DKIM is another piece of the puzzle to improve our sender reputation. Since both Yahoo and Gmail require DKIM signing in order to set up feedback loops, DKIM is probably essential if you have ambitions of having a pristine email list. For those folks looking at ways to cut the umbilical cord to Microsoft this is one of several low cost, low maintenance migration alternatives to a local Exchange server.

Adventures with iRedMail

I read this article on HowtoForge and decided to give it a try. I was not as successful as the author.

iRedMail: Full-Featured Mail Server With LDAP, Postfix, RoundCube, Dovecot, ClamAV, DKIM, SPF On CentOS 5.x Debian (Lenny) 5.0.1

iRedMail is a shell script that lets you quickly deploy a full-featured mail solution in less than 2 minutes on CentOS 5.x and Debian (Lenny) 5.0.1 (it supports both i386 and x86_64).

iRedMail: Build A Full-Featured Mail Server With LDAP, Postfix, RoundCube, Dovecot, ClamAV,SpamAssassin, DKIM, SPF On CentOS 5.x | HowtoForge – Linux Howtos and Tutorials

My first try was to use the script to update a CentOS 5.3 workstation installation. It went smoothly until I tried to look at the keys used by DKIM. I ran into trouble with the LDAP option: OpenLDAP would not install due to a missing file. So I took the MySQL option. That was when I found a series of problems. Most of the problems were minor; my initial mail userid was in Chinese. Since I was particularly interested in DKIM, I was disappointed to find out that amavisd was running at a version that did not support DKIM. I quickly realized that this was taking too much time and a better solution was to install a virtual machine using the iRedOS. This is a CentOS 5 installation with all of the prerequisites already installed.

Creating a virtual machine mail server went pretty smoothly. The only problem I found with the installation was that I was unable to send mail. I quickly realized that I needed to install Webmin so I could perform normal system maintenance and troubleshoot. After I installed Webmin I found my problem: Postfix thought Yahoo was an unknown domain. Although I am not familiar with the intricacies of Postfix, I found that if I removed the configuration parameter "reject_unknown_recipient_domain" I could send emails successfully. This is not a fix, but it will work for me until I figure out the problem between the DNS and Postfix.
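
For anyone chasing the same symptom, a sketch of the workaround; the restriction typically lives in smtpd_recipient_restrictions:

# Show the current recipient restrictions, then remove the DNS-based check
postconf smtpd_recipient_restrictions
# ...delete reject_unknown_recipient_domain from that list in /etc/postfix/main.cf, then:
postfix reload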

My next trick is to set up the mail server as a mail relay to my Exchange server. Technically this could be a first step in migrating off of Exchange to a non-Microsoft cloud computing environment. There are a lot of good things to be said about Exchange but there are even more good things to say about cloud-based email. Making the transition to a low cost, highly dependable, feature rich email environment with the least amount of pain is the challenge for both the Microsoft and open source communities.

Nimble Method: Garbage Collection is Why Ruby on Rails is Slow: Patches to Improve Performance 5x; Memory Profiling

 

  • The News: Ruby on Rails performance is dominated by garbage collection. We present a set of patches to greatly improve Rails performance and show how to profile memory usage to get further performance gains.

  • What’s at Stake: Rails is slow for many uses and did not lend itself well to optimization. Significant performance gains could only be achieved at application level at large development cost.

  • The Upside:

    • 5x potential performance gains;
    • easy way to identify whether GC is a bottleneck;
    • deterministic process to fix memory bottlenecks;
    • set of canned patches to solve the biggest problems;
    • you can help

Nimble Method: Garbage Collection is Why Ruby on Rails is Slow: Patches to Improve Performance 5x; Memory Profiling
arunthampi
Sat, 02 Feb 2008 05:30:00 GMT

Okay, a couple of weeks ago I installed Ruby so that I could run Metasploit. Installing Ruby was a challenge since I needed to install several dependencies before I could install RubyGems. Fortunately Simon had the answer. When I cranked up the GUI version of Metasploit, the GUI seemed slow and the console messages showed Ruby to be busier than I thought it should be; I hadn't asked it to do anything yet. Maybe this will help! Then again, maybe upgrading to the latest version of Metasploit (3.1) will help.
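
For the curious, the railsbench-style GC patch the article discusses is tuned through environment variables; a sketch with illustrative values, which only take effect on a patched Ruby:

# GC tuning knobs from the railsbench patch - values are illustrative, not recommendations
export RUBY_HEAP_MIN_SLOTS=500000    # start with a larger heap
export RUBY_GC_MALLOC_LIMIT=50000000 # allow more C allocation between collections
export RUBY_HEAP_FREE_MIN=4096       # free slots required after GC before growing the heap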

KeePassX – The Official KeePassX Homepage

KeePassX – The Official KeePassX Homepage

KeePassX saves many different types of information, e.g. user names, passwords, URLs, attachments and comments, in one single database.

Yesterday I got around to installing KeePassX on my CentOS server. The RPM version worked fine, but I had to manually create a menu item. For fun I downloaded the new versions of the KeePassX icon. For a very brief time I thought about compiling KeePassX from source code, but it looks like it would be a lot of work. It uses the Qt library and qmake. I would prefer to set it up in Eclipse, but that looks complicated.

KeePassX is a port of KeePass, and it read the KeePass database on my USB stick without a problem. It maintains the same look and feel as the original program, which is a big advantage on the learning curve for me. KeePassX has everything I use except for the global auto-type hot key and the plugins.

Linux Tip: Replacing gksudo for CentOS Users

One of the annoying things about maintaining CentOS installations is performing system maintenance as the super user from the command line. Don't get me wrong, I was programming before graphical interfaces (BGI). The command line is a good and trusty way to perform maintenance, and as long as everything works you can get by with a minimum of memorization. But since most of us live in an after graphical interfaces (AGI) world and do not practice our Linux command line knowledge on a daily basis, we quickly get rusty on the tricks of the trade and yearn for an easier way, something with a fast learning curve. This is precisely why we have graphical interfaces.

For reasons I did not understand until today, CentOS does not make it easy to run graphical programs as the super user, such as nautilus and gedit. Ubuntu offers a fairly simple way to create menu items that start graphical programs as a super user: gksudo. CentOS does not offer this utility in either version 4 or 5. A similar utility, kdesu, was offered in CentOS 4 but is not offered in CentOS 5. Opening a terminal window and running sudo is a pretty clumsy option, so I was pretty sure there was a better way! I wanted a menu item like the other system maintenance menu items that would authenticate me before running an application as a super user.

Today I found the answer. Matt Hansen wrote a tip, "How to run a program from GNOME menu with root privileges", back in 2004. The tip uses a utility called consolehelper. You have to create a couple of configuration files, but the whole process can be completed in about five minutes. It is interesting that today was the first time I found a reference that claims consolehelper is the "proper" way to solve the "missing" gksudo problem.
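
For the record, here is the shape of the setup; a hedged sketch for launching gedit as root, where the wrapper name and PAM stack contents are my assumptions (the tip has the exact files):

# consolehelper runs the program named in /etc/security/console.apps/<name> as root,
# authenticating via /etc/pam.d/<name>; the launcher is a symlink to consolehelper
ln -s /usr/bin/consolehelper /usr/local/bin/gedit-root
cat > /etc/security/console.apps/gedit-root <<'EOF'
USER=root
PROGRAM=/usr/bin/gedit
EOF
cat > /etc/pam.d/gedit-root <<'EOF'
auth    include config-util
account include config-util
session include config-util
EOF

A GNOME menu item pointing at the gedit-root wrapper then prompts for the root password before launching the editor.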

Notes on Setting up the Eclipse C++ IDE on Linux

Since I had recently set up my laptop with the C++ version of Visual Studio 8 Express, I was curious about setting up a similar IDE environment on Linux. I initially tried to set up Anjuta DevStudio and failed miserably. I am running CentOS 5.1, and there does not appear to be a recent RPM of Anjuta. I stumbled badly when I tried to manually install the dependencies and quickly became inspired to look for an IDE solution that would set up as easily and quickly as Visual Studio Express. Eclipse was the obvious answer.

So I went to the Eclipse site and downloaded the Linux version of the Eclipse IDE for C/C++ Developers. After I had uncompressed the file I tried running Eclipse and it did not work. It was complaining that my version of Java needed to be at least 1.5. Although I had installed a newer version of the Java JRE, Eclipse was finding the 1.4 version. To get Eclipse to work I had to modify the PATH statement so that it would find the version in "/usr/java/jdk1.6.0_03/bin" first. The best way I found to fix this problem was by modifying the .bash_profile file and adding the following statement:

export JAVA_HOME=/usr/java/jdk1.6.0_03

and modifying the path statement to read:

PATH=$JAVA_HOME/bin:$PATH:$HOME/bin

After I logged out and logged back in, I could start Eclipse. To test my Eclipse setup I decided to use the Hello World program for CPPUnit. This is the traditional Hello World program with a little extra, a C++ unit testing framework. The steps I performed to build this program are:

  1. Created a new C++ Project. In my case I called it HelloWorldCPPUnit.
  2. Next I created a “Source Folder” that I called “src” and a “Source File” in that directory that I called “HelloWorldCPPUnit.cpp”. I copied all of the source code from http://pantras.free.fr/articles/helloworld.html into the file and saved it.
  3. Before you compile this program you need to download and install cppunit. The instructions for installing it are straightforward but you will need to do a few more things to get it to work with Eclipse.
    1. You will need to modify the project settings for the GCC C++ Compiler-Directories in Eclipse to add the path to the include files, “/usr/local/include/cppunit”. This adds a “-I” parameter for the compile.
    2. You should run the command "./cppunit-config --libs" to see the library linking information. In my case it showed "-L/usr/local/lib -lcppunit -ldl". I modified the project settings for the GCC C++ Linker-Libraries in Eclipse to add these libraries, cppunit and dl, and the library search path, "/usr/local/lib".
  4. The final setup step was to tell CentOS where to find the cppunit shared library. At this point the program will build but will not run, because CentOS cannot find the run-time library for cppunit. The cppunit installation creates a shared library and puts it in the "/usr/local/lib" directory. To tell CentOS where to find it I had to do the following steps.
    1. As the root user, I created a file that I called "local.conf" with one line, /usr/local/lib, in it. I saved this file in the "/etc/ld.so.conf.d" directory.
    2. Then I ran the command, “/sbin/ldconfig”. This tells CentOS to update the links to the shared libraries.
  5. If everything is set up properly the program will build and run the simple unit test.
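
The Eclipse project settings in items 3 and 4 map onto a single command line; a sketch of the equivalent manual build and library registration (run the last two commands as root):

# Compile and link HelloWorldCPPUnit outside Eclipse (mirrors the -I and -L/-l settings)
g++ -I/usr/local/include/cppunit -o HelloWorldCPPUnit src/HelloWorldCPPUnit.cpp \
    -L/usr/local/lib -lcppunit -ldl
# Register /usr/local/lib with the dynamic linker (items 4.1 and 4.2)
echo /usr/local/lib > /etc/ld.so.conf.d/local.conf
/sbin/ldconfig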

Overall, Eclipse with CDT is slightly more difficult to set up than Visual Studio Express. Most of my difficulties occurred when I tried to go a little beyond the default configuration. Recently I tried to go slightly beyond the default configuration for Visual Studio Express, too. Since I had minor difficulties setting up both packages, my gut feeling is that it was slightly easier to find answers to setup problems on the Internet for Visual Studio because there is a larger developer community specializing in Visual Studio. Of course, your mileage will vary! 😉

GroundWork Monitor Open Source

 

GroundWork Monitor Open Source 5.1

A complete availability monitoring solution that ensures IT infrastructure uptime while identifying issues before they become real problems. Unifies best-of-breed open source tools – Nagios, Nmap, SNMP TT, PHP, Apache, MySQL and more — through PHP/AJAX-based components and an integrated user interface to deliver the extensible functionality you require.

GroundWork Monitor Open Source

I finally got around to migrating my old version of GroundWork to the newest version, 5.1. GroundWork is a nice repackaging of Nagios, and the 5.1 free version includes some basic graphing via RRD. The paid support version has more sophisticated graphing and reporting and does a better job of interfacing with SNMP. I use the VM appliance since I am using this package to monitor a few web sites. It sends me an email when it sees a problem.

I was planning to write this post after I fixed three alerts on the local Linux server, local_mysql_database_nopw, local_process_gw_feeders, and local_process_snmptt, but I am going to turn these alerts off instead. I found the problems with the feeders (a missing Perl library) and snmptt (not installed), but my fixes did not seem to hold. The system is running fine.

Obay Home » Blog Archive » Installing VMWare Player on Ubuntu

If you get the following error message:

/usr/lib/vmware-player/bin/vmplayer: /usr/lib/vmware-player/lib/libpng12.so.0/libpng12.so.0: no version information available (required by /usr/lib/libcairo.so.2)

Apply the following fix

mv /usr/lib/vmware/lib/libpng12.so.0/libpng12.so.0 /usr/lib/vmware/lib/libpng12.so.0/libpng12.so.0.disabled
ln -sf /usr/lib/libpng12.so.0 /usr/lib/vmware/lib/libpng12.so.0/libpng12.so.0
mv /usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1 /usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1.disabled
ln -sf /lib/libgcc_s.so.1 /usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1

Obay Home » Blog Archive » Installing VMWare Player on Ubuntu

Yup, this works for CentOS 5. I upgraded to Server 1.0.4 today and had to go back and fix things again. This is a different fix from the one I used previously, where I renamed the file and let VMware find a suitable library. I am guessing that both solutions probably end up using the correct library, but this one looks like a more direct approach.

Variations on Updating WordPress

The WordPress folks have updated WordPress again, and I have been evaluating different methods of upgrading. The standard method works, but I have been wanting to streamline the process for remote hosts.

For my locally hosted blog I used the Updating WordPress with Subversion method. This is pretty slick! I had previously checked out a copy of WordPress using Subversion and integrated wp-content and a couple of other files into the working copy. All I had to do this morning was crank up TortoiseSVN on the checked-out directory, switch its tag to 2.3, and let Subversion do the rest. When I logged in as Admin, it updated the database. I did get some database errors about duplicate entries in wp_terms and wp_term_taxonomy, but I do not think these errors are critical since this is the blog I use to test changes. It's pretty funky!

Yesterday I got carried away again and did a little research on using Subversion on remote hosts. I found that some host providers support it but most do not. My host provider, bluehost.com, does not provide Subversion support directly, but I found a post on a forum that described a method I could use to install it. I kind of followed their instructions. It is working as a client, and here are my instructions.

  1. Log in using SSH. I used PuTTy.
  2. Create a bin directory.
  3. Edit the .bashrc file to add the path statement to the bin directory.
  4. Create a source directory and then change to this directory.
  5. Use wget to download the tar version of both the Subversion package and the dependencies package.
  6. Untar both packages.
  7. Run configure, make, and then make install. You should have several executables in the bin directory. 
  8. Make sure that Subversion works by typing "svn --version".

Here is the command line version:

mkdir ~/bin
# Use your favorite editor to edit the .bashrc file and add the path statement to the bin directory
mkdir ~/src
# get the subversion and dependencies tarballs
cd ~/src
wget http://subversion.tigris.org/downloads/subversion-1.4.5.tar.gz
wget http://subversion.tigris.org/downloads/subversion-deps-1.4.5.tar.gz
tar -xzf subversion-1.4.5.tar.gz 
tar -xzf subversion-deps-1.4.5.tar.gz 

# Build it
cd subversion-1.4.5
./configure --prefix=$HOME --without-berkeley-db --with-zlib --with-ssl
make
make install

# check it works!  
svn --version

I am now able to check out a copy of WordPress and update it on my bluehost.com website. I am not sure this is much better than the WPAU plugin I used recently, so I will probably continue to play with both methods. I am still working on setting up a repository on bluehost. I do not mind using the Subversion client to update the WordPress files, but I would like my bluehost account to be a server for the wp-content files since I would like version control on my theme files. It would be nice if the folks at bluehost decided to directly support Subversion, too.

What is good for Ubuntu Feisty is also good for CentOS 5!

The Joe writes:

The install goes great, but when I run the server I get the following error:
/usr/lib/vmware/bin/vmware: /usr/lib/vmware/lib/libpng12.so.0/libpng12.so.0: no version information available (required by /usr/lib/libcairo.so.2)

As far as I can tell, the libpng12.so.0 that gets installed to the /usr/lib/vmware/lib/libpng12.so.0 directory is the wrong version, your system should have a current version installed. To fix this just delete or rename libpng12.so.0 from /usr/lib/vmware/lib/libpng12.so.0

www.TheJoe.com » Ubuntu Feisty Vmware Server and libpng12.so.0

I was getting the same error message under CentOS 5. Thinking that a solution for Ubuntu Feisty might work for CentOS 5, I gave it a try and was pleasantly surprised. I renamed the file and restarted the VMware server. The error message no longer appears and the server appears to be working fine. It also fixed the bigger problem I was having with accessing the virtual machine from PCs on the same sub-network other than the host.

Just when you thought it was safe to go out alone in the Linux world …

NIC Broadcom AC 131 not properly detected – Ubuntu Forums

we are also having problems with the Shuttle SS31T; we can boot from a live CD and install the OS to an IDE hard drive, but the NIC and SATA are not recognised.
Do you have any tips on how to get the NIC and SATA devices working?

NIC Broadcom AC 131 not properly detected – Ubuntu Forums

I bought a Shuttle SS31T this week for a non-profit. I am turning it into a cheapo M$ terminal server. The non-profit has lots of dumb PCs (aka Win98) but not a real PC (i.e. something made in the last 3 years) to be found.

The SS31T is a cute box, and the price with a 940 D, disk drive, and a gig of memory is attractive. To make sure that everything was connected properly I booted Ubuntu's LiveCD. Everything looked fine except for the minor details that it was not talking to the LAN and it did not see the SATA drive. The Ethernet link light was on at the back of the PC, but no traffic made it through. Next I booted FreeDOS; at least it could see the SATA drive. In a panic I installed a trial version of XP since my 2K3 is in transit. Whew, it worked! All of the problems in the Device Manager cleared up when I installed the Shuttle drivers.

Bill’s Grand Adventure

I finally got motivated to resuscitate my Ghettobox2006. It took a little debugging, but I finally got it to recognize my SATA drive. My plan was to use this box as a general purpose Linux box running several VMware guests. The problem was that I could not get the box to boot, which was strange: I had the motherboard working in another case with an IDE drive and a SATA drive. I moved the motherboard and the SATA drive to a new case and it would not boot. The SATA drive was not recognized by the BIOS and acted like it was not getting power. I wasted a fair amount of time trying to figure out the source of the problem but eventually had to go work on higher priority tasks.

Last weekend I got an idea on how to fix the problem and went back to working on the box. My idea did not work but I did find the problem. The BIOS had the SATA controller turned off. How did that happen? Well, it boots now!

So I was off to the races. A while back I decided to use CentOS for the host, and I had already downloaded a version 5 DVD. I did not have a big reason for selecting CentOS besides being slightly more familiar with CentOS/Fedora/Red Hat than with Ubuntu and SUSE. My installation was a little unusual since I had three partitions on the disk I wanted to keep: a W2K partition, a partition with several existing virtual machines, and an empty partition for a future operating system. I had about 60 GB of free disk space left for CentOS. I chose to install the standard CentOS Desktop. The installation went smoothly, and I was pleased to find out that I could still boot to the W2K partition from GRUB if I wanted to. Dual booting Linux and Microsoft used to be so funky.

Along the way I found a solution for an interesting Java problem. After I finished installing the operating system, I cranked up the web browser and Firefox told me that I needed the Java plugin. Reluctantly I downloaded the plugin and installed it. The Java plugin installation is about as dorky as it comes. Been there…done that…you mean I have to do this again? This is one area where Windows really shines over Linux. Surprise…surprise, the Java plugin did not work. To complicate the matter, there was no error message either. I was a little annoyed, so I tried to open the Java control panel. It did give me an error message: it could not find libstdc++.so.5. A quick search of the Internet found two potential solutions: I could either install a symbolic link to libstdc++.so.6 or install compat-libstdc++-33. I installed the compat library since it may fix other problems I do not know about yet. I just want the standard stuff to work without a lot of fiddling; sometimes that can be quite a challenge. Now when I validate the plugin at the Java site, it works as expected.
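
For reference, the fix I chose is a one-liner on CentOS 5, since compat-libstdc++-33 supplies the missing libstdc++.so.5:

# Install the GCC 3.3 compatibility C++ runtime that the Java control panel wants
yum install compat-libstdc++-33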

I will talk about my adventures with VMware in another post. I still have some kinks with the networking to work out. I was pleased to find out that all of my virtual machines worked. Even the W2K virtual machine I created using VMconverter worked.

Hard Disk MTBF: Flap or Farce?

 

Data sheets for hard drives have always included a specification for reliability expressed in hours: commonly known as MTBF (mean time between failures), or sometimes the mean time to failure. Same difference: One way assumes that a drive will be fixed, and the other, replaced. Nowadays, this number is around a million hours for an “enterprise” hard drive. Some drives are rated at 1.5 million hours.

Now, that’s a good stretch of time. After all, a year is only 8,760 hours. One million hours comes to a bit more than 114 years. Some may be scratching their heads, since the hard drive itself has only been around for 50 years (IBM’s giant 350 Disk Storage Unit for its RAMAC computer). This can be confusing.

Instead, the MTBF is a statistical measure based on a calculation extrapolated from less-lengthy readings. It all means that drives are very reliable, with a failure rate well under 1 percent per year. Go Team Storage!

However, several papers covering large-scale storage presented at FAST ’07, the USENIX conference on File and Storage Technologies, held recently in San Jose, Calif., are kicking up a stir online about MTBF.

The Best Paper award was handed to “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?” by Bianca Schroeder and Garth Gibson of Carnegie Mellon University in Pittsburgh.

Their study tracked a whopping set of drives used at large-scale storage sites, including high-performance computing and Web servers. The data suggests that a number of common wisdoms surrounding disk reliability are wrong.

For example, they found that annual disk replacements rates were more in the range of 2 to 4 percent and were as high as 13 percent for some sites. Yikes.

Source: Hard Disk MTBF: Flap or Farce?

I found this fascinating article about MTBF and disk failures yesterday. I have known for some time that you must take MTBF figures with a grain of salt; disk drives appear to fail more often than the MTBF figures would lead you to believe. The differences between “enterprise” disk drives and “retail” disk drives appear to be indistinguishable in the real world. Yet as IT professionals we will always recommend the component with the higher perceived quality even though we have misgivings about the statistics. For most businesses the cost of downtime due to a disk failure is much higher than the additional cost for quality. Although we hate to admit it, there is a significant subjective component to our component recommendations.
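
The arithmetic behind the quoted numbers is worth seeing once: a 1,000,000-hour MTBF implies a nominal annual failure rate well under 1 percent, which is exactly what the field data contradicts.

# Nominal AFR = hours per year / MTBF = 8760 / 1,000,000, or about 0.88% per year
echo 'scale=4; 8760 / 1000000' | bc   # prints .0087
# The CMU study instead saw 2-4% (up to 13%) annual replacement rates in the field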