Monday, January 24, 2011

Dell PowerEdge, Latitude and OptiPlex BIOS updates HOWTO for Fedora, CentOS and RHEL

BIOS updates have never been very easy for us Linux types. It was often difficult enough back when there were boot floppies to deal with. Now it seems that many vendors even require an actual running Windows install in order to update. Luckily, Dell has not been one of those companies when it comes to its business lines of servers, laptops and desktops.

The whole process is very easy thanks to Dell's commitment to Linux:
  1.  wget -q -O - http://linux.dell.com/repo/community/bootstrap.cgi | bash
  2.  wget -q -O - http://linux.dell.com/repo/firmware/bootstrap.cgi | bash
  3.  yum -y install libsmbios-bin firmware-tools firmware-addon-dell
  4.  yum -y install $(bootstrap_firmware)
  5.  update_firmware --yes
Make sure "plugins=1" is in your /etc/yum.conf
Don't take my word for it look at Dell's firmware repository information.
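
If you want to confirm that an update actually took, you can check the BIOS revision before and after. A minimal sketch using dmidecode, which reads the same SMBIOS data the Dell tools use:

# record the BIOS revision before updating
dmidecode -s bios-version
# apply any staged updates (step 5 above) and reboot when prompted
update_firmware --yes
# after the reboot, check again and compare
dmidecode -s bios-version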

I have had success using these steps on the PowerEdge 2850, 2950 and R710, the OptiPlex GX280, GX520 and GX620, and the Latitude D620 and D630, just to name a few. The operating systems have included CentOS 5.x and Fedora 11, 12 and 13.

Thursday, January 13, 2011

RHEL or CentOS 5.x RPM sanity and not source installs

I am a firm believer in package managers. What's not to like? Dependency chains, package verification before installation, package validation after installation, versioning and reproducibility are among my favorite reasons for liking the Red Hat and Fedora default "RPM Package Manager" (rpm) and, by extension, yum.

The focus of this short article is RHEL or CentOS 5.5 as the base OS. These are often considered "server" OSes and tend to focus on stability and security, not on the newest versions. However, on many production systems, needs or demands get imposed on the systems administrator, and it is our job to safeguard the installed base OS, and even extended-repository installs, against harm. Many new, creative and useful Perl modules are not available as RPMs at all, or the versions that can be found are older than required (by the boss, the project, etc.).

Every effort should be made to avoid source installs. As an added deterrent: "DO NOT attempt to install software packages which are part of CentOS as a source package, because you think you absolutely need the newest version. THIS WILL OFTEN BREAK THINGS."

Luckily, there are many tools that help ease this burden by attempting to create an RPM package for you. Some work better in certain situations than others, and some are designed to handle a specific type of source (like a Perl module):
  1. checkinstall is for general source installs that do not offer .spec files
  2. rpmbuild is easy if there is a usable .spec file in the source tree or if there is a .src.rpm available for the needed install (see the sketch after this list)
  3. cpan2rpm will help with many pesky perl modules
  4. perl2rpm will help with many pesky perl modules
  5. cpanflute2 will help with many pesky perl modules
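
When you already have a .src.rpm (option 2), rpmbuild can also rebuild the binary package in one step instead of installing the source rpm first. A sketch; on CentOS 5 the resulting binary rpm usually lands under /usr/src/redhat/RPMS:

# rebuild a binary rpm directly from a source rpm
rpmbuild --rebuild /path/to/package.src.rpm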
Here is an example from a recent requirement to have the newest tesseract-ocr 3.0 installed. Make sure you read over Setting up an RPM Build Environment!

I found a suitable tesseract-3.00-1.fc15.src.rpm from Fedora 15. The original source tarball did not have a spec file usable by the RHEL/CentOS 5.5 version of rpmbuild, and it will not build without some help. Also, "--nomd5" is needed because of an rpm version difference between newer Fedora releases and RHEL/CentOS 5.x.

rpm -ihv --nomd5 tesseract-3.00-1.fc15.src.rpm
rpmbuild -bb /path/to/tesseract.spec
yum update /path/to/tesseract-3.00-1.i386.rpm

Darn. It can't be installed without a much newer version of the "optional" leptonlib (greater than 1.60), and there is no updated src.rpm version to be found anywhere. Going to need to use the source...

wget http://www.leptonica.org/source/leptonlib-1.67.tar.gz
tar -xzvf leptonlib-1.67.tar.gz
cd leptonlib-1.67
./configure
checkinstall -R --review-spec --install=no

Check through the generated spec ("--review-spec"); sometimes you also need "--fstrans=no" if checkinstall pulls in too many extra files.
Then finally:

yum update /path/to/leptonlib-1.67-1.i386.rpm /path/to/tesseract-3.00-1.i386.rpm
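
A quick sanity check that both packages actually landed (package names taken from the rpms built above):

rpm -q leptonlib tesseract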

Checkinstall is one of my general go-to options for source installs when I come up empty looking for prebuilt rpm packages from known repositories.

But the fun was not over... the boss wanted the newest Perl Image-OCR-Tesseract module...

wget http://search.cpan.org/CPAN/authors/id/L/LE/LEOCHARRE/Image-OCR-Tesseract-1.24.tar.gz
#no spec file to be found in the tar file
perl2rpm Image-OCR-Tesseract-1.24.tar.gz
yum install /path/to/perl-Image-OCR-Tesseract-1.24-1.noarch.rpm

Honestly, any one of perl2rpm, cpanflute2 or cpan2rpm would have worked.
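
Whichever tool you use, a quick one-liner confirms the module is installed and loadable:

perl -MImage::OCR::Tesseract -e 'print "module loads OK\n"'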

Now boss is happy and sysadmin is happy and the CentOS forum won't hassle you as much about source installs ;)

Sunday, January 9, 2011

Archiving with tar & compressing with pbzip2 -- a great combination

Creating a tar archive is not anything new or fancy. Heck, it's not even glamorous or exciting. But if you need a ubiquitous, all-around good tool to create an archive of files, tar should be toward the top of your list.

First, let me say that this quick intro will generalize about tar and a couple of compression options. No LVM or disk snapshots, rsync, librsync or other sometimes-similar choices will be compared here, although there are many situations where you will want to incorporate those into a bigger strategy. This information sheds light on a couple of considerations, direct or indirect, that may need thought when using tar, depending on your situation.

Generally, when you just need a quick archive, the standard tar command is just:
tar cf /path/to/archive.tar /path/to/source

That's good and it will work, but many will compress the archive with either the gzip (z) or bzip2 (j) option:
tar czf /path/to/archive.tar.gz /path/to/source
tar cjf /path/to/archive.tar.bz2 /path/to/source

The space required to create the archive is normally reduced, provided the files are not already compressed or of an inherently compressed type like AVIs or MP3s. That's often better, but there are issues when dealing with larger files or directories:
  1. tar compression extends the time the file(s) must be held open
  2. tar compression extends the time required to complete the archive
  3. tar compression tends to create files slightly larger than post-compressed files
  4. tar compression is limited to a single processor
All of these issues are overcome by compressing after the fact with "pbzip2". pbzip2 is "a parallel implementation of the bzip2... and achieves near-linear speedup on SMP machines". With pbzip2, all of the system's processors can (optionally) be put to use at the same time. An archive requiring 2 hours to bzip2 can take as little as 30 minutes on an idle single quad-core system.

Naturally, the trade-off will be an increase in the free disk space required to complete the process. As a general minimum, you will need at least 1.5 times the size of the files being archived.

This is a small-scale example with a modest 1.5GB directory. The directory holds database SQL unload files. The system has two older quad-core CPUs and a fairly fast disk subsystem, along with 16GB RAM.

testdir = 1603076072 bytes or 1.5GB

time tar cf test.tar testdir
real    0m12.039s
size 1603164160 bytes or about 1.5GB

time tar cjf test.tar.bz2 testdir
real    9m37.415s
size 216820944 bytes = 207M

time tar czf test.tar.gz testdir
real    2m44.550s
size 282025065 bytes = 269M

time pbzip2 test.tar
real    2m17.014s
size 217235869 bytes = 208M

time bzip2 test.tar
real    9m13.197s
size 216820491 bytes = 207M

Combining a normal tar file with pbzip2 provided about 22% greater compression than gzip, in less time, for this test. For some situations, tar + pbzip2 is a great combination. I just wanted to share ;)

Wednesday, January 5, 2011

Linux CentOS 5.5 + Xen 3.4.x and virt-manager 0.8.5

The Xen Hypervisor is a very powerful tool for any sysadmin. Loads of virtualization capability, and free (GPL "free"). What's not to like... Well, I don't like the CentOS/RHEL xen-3.0.3 version from 2007, for starters, and the equally ancient virt-manager. The solution, with minimal effort, follows:

This pair of updates will be done in 2 distinct parts, one easy and one a little harder. Please make sure you start with a CPU that has vmx or vt (virtualization, a.k.a. Vanderpool, technology) enabled, and that you are running CentOS 5.5 64-bit!
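
You can confirm the CPU capability from userspace before going any further (vmx is the Intel flag; svm is the AMD equivalent):

egrep '(vmx|svm)' /proc/cpuinfo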

Easy part Xen 3.4.x:
  1. cd /etc/yum.repos.d/
  2. wget http://www.gitco.de/repo/GITCO-XEN3.4.3_x86_64.repo
  3. yum install xen kernel-xen (or "yum update" if you have an older xen installed)
You could run/try the xen 4.0.x version if you want. Take a look at the information at http://www.gitco.de/repo/.

Note (03/25/2011): the newer virt-manager 0.8.6 and 0.8.7 releases have some hefty dependencies on new and/or updated packages. I have yet to successfully build either on CentOS 5.5.
Harder part virt-manager:
  1. sanely set up a build environment on another box (if possible/practical)
  2. install a usable virt-manager-0.8.5-1.fc14.src.rpm:
    rpm --nomd5 -ihv virt-manager-0.8.5-1.fc14.src.rpm
    rpmbuild -bb virt-manager.spec
  3. note the location of the created virt-manager-0.8.5-1.noarch.rpm
  4. install a usable python-virtinst*src.rpm:
    rpm --nomd5 -ihv python-virtinst-0.500.4-1.fc14.src.rpm
    rpmbuild -bb python-virtinst.spec
  5. note the location of the created python-virtinst-0.500.4-1.noarch.rpm
  6. install both of the packages created with yum:
    yum install /path/to/virt-manager-0.8.5-1.noarch.rpm /path/to/python-virtinst-0.500.4-1.noarch.rpm

Post install notes:
Some kernel updates create an issue with the kernel line entry in /etc/grub.conf. For the Xen 3.4.3 installed system, the line should currently read:
kernel /xen.gz-3.4.3 or
kernel /boot/xen.gz-3.4.3
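
For reference, a complete grub stanza for booting the Xen hypervisor plus a dom0 kernel looks something like the sketch below; the exact dom0 kernel version and root device here are illustrative assumptions:

title CentOS (2.6.18-194.el5xen) with Xen 3.4.3
        root (hd0,0)
        kernel /xen.gz-3.4.3
        module /vmlinuz-2.6.18-194.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-194.el5xen.img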

For example, if you run "xm list" and get "Error: Unable to connect to xend: No such file or directory. Is xend running?", you may need to check your /etc/grub.conf!

You could also get the Xen Hypervisor 3.4.3 source yourself from the Xen folks and try to build a sane package. Make sure to get the "Linux 2.6.18 with Xen 3.4.x support source tarball".

Virt Manager official web site is HERE

Sunday, January 2, 2011

Linux CentOS 5.x on PowerEdge 2950 with PERC 5/e and Dell MD3000 followup

Here is an update to a previous post that now includes a graph along with some additional quick tests. Even though the MD3000 line is getting a bit old, I got a killer deal on a unit and had some free time for testing.

Quick info: Dell PowerEdge 2950 with a PERC 5/e SAS card dual-connected to a Dell MD3000 disk array with fifteen 73GB 15K drives, running CentOS 5.5, all with kernel 2.6.18-194.26.1.el5. The MD3000 has one global hot spare set up and is performing the RAID 1, 5 or 6 portion of the test, while Linux md handles the RAID 0 part. In all of the tests I have run, the MD3000 really only performs well when both controllers are active. In addition, mirrorCacheEnabled=FALSE is set on the MD3000 controllers.

A picture does speak a thousand words. Higher is better.

[iozone results graph]
Very nice for ext4! I did test with the "nobarrier" option just to make sure the known performance regression was not present in this kernel. It does not appear to be, since the iozone numbers were right on.

As a note, this is a mostly "stock" setup, changing only the RAID types. You *can* get even better performance numbers by playing with simple items like "blockdev --setra xxxx". For example, a change to "blockdev --setra 8192" produced about a 20% increase under ext4 for both read and re-read in iozone ("--setra 32768" got 30%)! Also, a scheduler change to "deadline" may help, depending on your workload.
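
Both tweaks are one-liners. A sketch, with /dev/md0 and sdb standing in for your md device and one of its member disks:

# check, then raise, the readahead on the md device (in 512-byte sectors)
blockdev --getra /dev/md0
blockdev --setra 8192 /dev/md0

# switch a member disk's elevator to deadline (does not persist across reboots)
echo deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler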

Tuesday, December 28, 2010

Linux CentOS 5.x on PowerEdge 2950 with PERC 5/e and Dell MD3000

This is my second Dell MD3000 SAS pre-production test setup. For background and comparison, I resurrected the testing info from a very similar setup with CentOS 5.2 back in late 2008 / early 2009. Here is the (basic/limited) information from that time frame:

Dell PowerEdge 2950 II, BIOS version 2.5.0, 2x dual-core Intel(R) Xeon(R) CPUs @ 3.00GHz, 16GB RAM, Dell MD3000 (firmware 07.35.22.60, mirrorCacheEnabled=FALSE) with PERC 5/E (non-cache) model UCS-50, md RAID 0 using ext3 (-b 4096 -E stride=128), SAS drive setup of 2 sets of 5x 15k 450GB (Dell hardware) RAID 5, running kernel 2.6.18-92.1.22.el5 under CentOS 5.2.

Obviously, based on some of the details above, I was testing the effects of several very different changes (not at the same time): the MD3000 was originally tested with the old 6.x firmware, then came MD3000 controller changes, ext3 stride changes, md chunk size comparisons, etc. I will just say that there are LOTS of things that affect performance. But the fastest setup (for system disk performance) in that time frame used both controllers (with md RAID 0) and any combination of MD3000 drive setup: that is, 2 LUNs, each assigned to a different MD3000 controller. I chose 2 MD3000 RAID 5 volumes plus md RAID 0, effectively RAID 50. Please don't hassle me about RAID 5 not being good for X database or Y program; suffice it to say that the disk space vs. performance vs. cost question led me to this choice. And these are the average numbers of several runs:

(iozone test; throughput values in KB/sec)
sync ; iozone -s 20480m -r 64 -i 0 -i 1 -t 1 -b kernel-2.6.18-92.1.22test##.xls
  Initial write   224179.00
  Rewrite         205363.05
  Read            350508.81
  Re-read         355806.31

(generic hdparm test)
sync ; hdparm -Tt /dev/md0
Timing cached reads:   2976 MB in  2.00 seconds = 1487.69 MB/sec
Timing buffered disk reads:  892 MB in  3.00 seconds = 296.90 MB/sec

(generic raw read speed test)
sync ; time dd if=/dev/md0 of=/dev/null bs=1M count=20480
21474836480 bytes (21 GB) copied, 66.9784 seconds, 321 MB/s
real    1m7.042s

Sadly, I do not have a spare R710 at the moment or I would test with it...
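
For reference, striping the two MD3000 RAID 5 LUNs together as described above looks roughly like this. A sketch, with /dev/sdb and /dev/sdc standing in for the LUNs presented by the two controllers:

# md RAID 0 across the two hardware RAID 5 LUNs = effectively RAID 50
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# ext3 with the same options used in these tests
mkfs.ext3 -b 4096 -E stride=128 /dev/md0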

Current test setup is:
PowerEdge 2950, BIOS version 2.6.1, 2x dual-core Intel(R) Xeon(R) CPUs @ 3.00GHz, 16GB RAM, Dell MD3000 (firmware 07.35.22.60, mirrorCacheEnabled=FALSE) with PERC 5/E (non-cache) model UCS-50, md RAID 0 using ext3 (-b 4096 -E stride=128), latest drive firmware, SAS drive setup of 2 sets of 5x 15k 450GB (Dell hardware) RAID 5, running kernel-2.6.18-194.26.1.el5 x86_64 under CentOS 5.5 with Dell's linuxrdac-09.03.0C06.0234 and mptlinux-4.00.38.02-3:

sync ; iozone -s 20480m -r 64 -i 0 -i 1 -t 1 -b kernel-2.6.18-194.26.1test##.xls
  Initial write   211298.00
  Rewrite         277691.25
  Read            453337.34
  Re-read         480531.75

sync ; hdparm -Tt /dev/md0

 Timing cached reads:   2900 MB in  2.00 seconds = 1449.77 MB/sec
 Timing buffered disk reads:  926 MB in  3.00 seconds = 308.26 MB/sec

sync ; time dd if=/dev/md0 of=/dev/null bs=1M count=20480
21474836480 bytes (21 GB) copied, 68.8348 seconds, 312 MB/s
real    1m8.906s

One year and 3 RHEL/CentOS 5 updates (5.3, 5.4, 5.5) later, and it's mostly positive with the stock kernel.

Now I'm testing with the newest kernel, 2.6.36-2.el5.elrepo (without the Dell kernel module changes), from the mainline kernel tracker by the GREAT folks at ElRepo. Nice: the mainline kernel offers modest improvements over the stock CentOS 5.5 kernel:

sync ; iozone -s 20480m -r 64 -i 0 -i 1 -t 1 -b kernel-2.6.36-2.el5.elrepotest##.xls
  Initial write   140675.53
  Rewrite         274615.28
  Read            467823.69
  Re-read         498164.59

sync ; hdparm -Tt /dev/md0
Timing cached reads:   2952 MB in  2.00 seconds = 1476.07 MB/sec
Timing buffered disk reads:  1008 MB in  3.00 seconds = 335.54 MB/sec

sync ; time dd if=/dev/md0 of=/dev/null bs=1M count=20480
21474836480 bytes (21 GB) copied, 57.3881 seconds, 374 MB/s
real    0m57.419s

If I figure out how to do bar charts for the results, I will add them.

There you go. A bit of performance history and some signs of things to come. Naturally, I will test ext4 and other changes, but all of the above info comes from a generally stock and/or default setup. Since the MD3000 I get to play with now is fully populated with drives, I plan to compare the dual 5-drive RAID 5 setup with dual 7-drive RAID 5 and maybe RAID 1. Then there is the whole RHEL/CentOS 6 question to answer for testing...

Sunday, December 26, 2010

Linux RHEL/CentOS 5.5 PHP 5.3 upgrade made easy

Please see the RHEL 5.6 update note HERE 

Just to be honest, this PHP 5.3 upgrade/update has been made painless by the GREAT folks at the IUS Community Repo! The real magic is their repository setup and the rather helpful "yum-plugin-replace" yum plugin. I have not seen any problem with this update path at all, but my needs may be different than yours, so YMMV. This post is adapted from a previous post that focused on another program that just needed PHP > 5.2 installed.

You must be the root user to execute all the stated commands.


  • install the IUS and EPEL repositories:
    wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/i386/ius-release-1.0-8.ius.el5.noarch.rpm
    wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/i386/epel-release-5-4.noarch.rpm
    rpm -Uhv ius-release-1.0-8.ius.el5.noarch.rpm epel-release-5-4.noarch.rpm
  • understand the possible issues with 3rd party repositories and how to use priorities or protectBase
  • install the IUS yum plugin (if you missed it in the IUS instructions):
    yum install yum-plugin-replace
  • replace the old stock php:
    yum replace php --replace-with php53
  • get an overview of what other php53 packages are available:
    yum search php53
  • if you had the old php and a web server running, restart the web server:
    service httpd restart
  • if you had a mysql database server running, restart mysqld:
    service mysqld restart
  • the same goes if you had postgresql running:
    service postgresql restart
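
Once everything is replaced and the services are restarted, a quick sanity check:

# confirm the new interpreter version and see which php53 packages landed
php -v
rpm -qa | grep php53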