Like it or not, companies like certifications. Certs help HR departments figure out baseline skills, and they make for an easy box to check on a hiring request. It is far more difficult to interview and discover abilities than it is to look for some acronym of ability.
Need another reason? Learning is fun, or at least it should be, especially if you're a full-time geek. I am amazed at how many new things I learn just looking at some of the prep information. Besides, some of our favorite commands evolve and morph all the time.
That being said, one very good organization is the Linux Professional Institute, or just LPI. I'm not saying it's a Cisco cert or a RHEL cert, but it's a good all-around way to get a little more on the old resume. Besides, it's a bunch cheaper than a lot of the others.
What's even more interesting is that IBM has a very large amount of helpful training information geared directly toward passing the LPI group of tests. And best of all, the price is right... free.
The main starting point for IBM is HERE:
LPIC-1 certification prep.
LPIC-2 certification prep.
LPIC-3 certification prep.
Thursday, January 27, 2011
Monday, January 24, 2011
Dell PowerEdge, Latitude and OptiPlex BIOS update HOWTO for Fedora, CentOS and RHEL
BIOS updates for us Linux types have not been very easy. It was often difficult enough when there were boot floppies to deal with. Now it seems that many vendors even require actual Windows to be running in order to update. Luckily, Dell has not been one of those companies when it comes to their business lines of servers, laptops and desktops.
The whole process is very easy thanks to Dell's commitment to Linux:
- wget -q -O - http://linux.dell.com/repo/community/bootstrap.cgi | bash
- wget -q -O - http://linux.dell.com/repo/firmware/bootstrap.cgi | bash
- yum -y install libsmbios-bin firmware-tools firmware-addon-dell
- yum -y install $(bootstrap_firmware)
- update_firmware --yes
Don't take my word for it; look at Dell's firmware repository information.
I have had success using these steps on the PowerEdge 2850, 2950 and R710, the OptiPlex GX280, GX520 and GX620, and the Latitude D620 and D630, just to name a few. The operating systems have been CentOS 5.x and Fedora 11, 12 and 13.
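As a quick sanity check (not part of Dell's steps, just a habit of mine), dmidecode can report the running BIOS revision before the update and again after the post-update reboot:
dmidecode -s bios-version
# or, for a little more detail:
dmidecode -t bios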
Thursday, January 13, 2011
RHEL or CentOS 5.x: RPM sanity, not source installs
I am a firm believer in package managers. What's there not to like? Dependency chains, package verification before installation, package validation after installation, versioning and reproducibility are among my favorite reasons for liking the Red Hat and Fedora default "RPM Package Manager" rpm and, by extension, yum.
The focus of this short article is RHEL or CentOS 5.5 as the base OS. These are often considered "server" OSes and tend to focus on stability and security rather than on the newest versions. With many production systems, however, there are needs or demands imposed on the systems administrator, and it is our job to safeguard the installed base OS, and even the extended repository installs, against harm. Many new, creative and useful perl modules are not even available as RPMs, or the versions that can be found are older than what is required (by the boss, the project, etc.).
Every effort should be made to avoid source installs. As an added deterrent: "DO NOT attempt to install software packages which are part of CentOS as a source package, because you think you absolutely need the newest version. THIS WILL OFTEN BREAK THINGS."
Luckily, there are many tools to help ease this burden by attempting to create an rpm package for you. Some will work better in certain situations than others. Some are designed to handle a specific type of source (like a perl module):
- checkinstall is for general source installs that do not offer .spec files
- rpmbuild is easy if there is a usable .spec file in the source install or if there is a .src.rpm file for the needed install (see the quick sketch after this list)
- cpan2rpm will help with many pesky perl modules
- perl2rpm will help with many pesky perl modules
- cpanflute2 will help with many pesky perl modules
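For the simplest case, where a ready-made .src.rpm rebuilds cleanly on CentOS 5.x, rpmbuild can do the whole job in one shot. A minimal sketch (the package name is just a placeholder; if the .src.rpm comes from a much newer Fedora, the two-step rpm -ihv --nomd5 route used below may be needed instead):
rpmbuild --rebuild /path/to/somepackage-1.0-1.src.rpm
# the binary rpm lands under the RPMS directory
# (typically /usr/src/redhat/RPMS/<arch>/ on CentOS 5.x)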
I found a needed tesseract-3.00-1.fc15.src.rpm from Fedora 15. The original source tar file did not have a spec file usable by the RHEL/CentOS 5.5 version of rpmbuild, so it will not build without some help. Also, the "--nomd5" option is needed because of an rpm version difference between newer Fedora and RHEL/CentOS 5.x.
rpm -ihv --nomd5 tesseract-3.00-1.fc15.src.rpm
rpmbuild -bb /path/to/tesseract.spec
yum update /path/to/tesseract-3.00-1.i386.rpm
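Before the final yum step, it does not hurt to peek at what the freshly built package will actually deliver. The path below assumes rpmbuild's default output location on CentOS 5.x; adjust if your build drops it elsewhere:
rpm -qip /usr/src/redhat/RPMS/i386/tesseract-3.00-1.i386.rpm   # summary/header
rpm -qlp /usr/src/redhat/RPMS/i386/tesseract-3.00-1.i386.rpm   # file list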
Darn. It can't install without a much newer version of the "optional" leptonlib (greater than 1.60), and there is not an updated src.rpm version to be found anywhere. Going to need to use the source...
wget http://www.leptonica.org/source/leptonlib-1.67.tar.gz
tar -xzvf leptonlib-1.67.tar.gz
cd leptonlib-1.67
./configure
checkinstall -R --review-spec --install=no
Check through the generated spec (that is what "--review-spec" is for); sometimes "--fstrans=no" is needed if checkinstall pulls in too many other files.
Then finally:
yum update /path/to/leptonlib-1.67-1.i386.rpm /path/to/tesseract-3.00-1.i386.rpm
Checkinstall is one of my general go-to options for source installs when I come up empty looking for prebuilt rpm packages from known repositories.
But the fun was not over... boss wanted the newest perl Image-OCR-Tesseract module...
wget http://search.cpan.org/CPAN/authors/id/L/LE/LEOCHARRE/Image-OCR-Tesseract-1.24.tar.gz
# no spec file to be found in the tar file
perl2rpm Image-OCR-Tesseract-1.24.tar.gz
yum install /path/to/perl-Image-OCR-Tesseract-1.24-1.noarch.rpm
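A quick sanity check that the new module actually landed somewhere perl can find it (just printing the version, nothing fancy):
perl -MImage::OCR::Tesseract -e 'print "$Image::OCR::Tesseract::VERSION\n"'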
Honestly, any one of perl2rpm, cpanflute2 or cpan2rpm would have worked.
Now the boss is happy, the sysadmin is happy, and the CentOS forum won't hassle you as much about source installs ;)
Sunday, January 9, 2011
Archiving with tar & compressing with pbzip2 -- a great combination
Creating a tar archive is not anything new or fancy. Heck, it's not even glamorous or exciting. But if you need a ubiquitous, all-around good tool to create an archive of files, tar should be toward the top of your list.
First, let me say that this quick intro will generalize on tar and a couple of compression options. No LVM or disk snapshots, rsync, librsync or other sometimes-similar choices will be compared here, though there are many situations where you will want to incorporate them into a bigger strategy. This information should shed some light on a couple of considerations, direct or indirect, to keep in mind when using tar.
Generally, when you just need a quick archive, the standard tar command is:
tar cf /path/to/archive.tar /path/to/source
That's good and it will work, but many will compress the archive with either the gzip (z) or bzip2 (j) option:
tar czf /path/to/archive.tar /path/to/source
tar cjf /path/to/archive.tar /path/to/source
The space required to create the archive is normally reduced if the files are not already compressed or of an already-compressed type like AVIs or MP3s. That's often better, but there are issues when dealing with larger files or directories:
- tar compression extends the time required to hold open file(s)
- tar compression extends the time required to complete the archive
- tar compression tends to create files slightly larger than compressing the archive afterwards
- tar compression is limited to single processor utilization
machines". With pbzip2 (optionally) all of the systems' processors can be put to use at the same time. The archive requiring 2 hours to bzip2 can take as little as 30 minutes on a idle single quad core system.
Naturally, the trade-off will be an increase in the free disk space required to complete the process: as a general minimum, you will need at least 1.5 times the size of the files being archived.
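In the timings below, tar and pbzip2 run as two separate steps, but the same idea works as a single pipeline if you prefer (pbzip2's -p flag sets the processor count; adjust to taste):
tar cf - testdir | pbzip2 -c -p8 > test.tar.bz2
# and to unpack later:
pbzip2 -dc test.tar.bz2 | tar xf -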
This is a small-scale example with a modest 1.5GB directory containing database SQL unload files. The system has two older quad-core CPUs and a fairly fast disk subsystem, along with 16GB RAM.
testdir = 1603076072 bytes (about 1.5GB)

Command                         Real time    Resulting size
tar cf test.tar testdir         0m12.039s    1603164160 bytes (about 1.5GB)
tar cjf test.tar.bz2 testdir    9m37.415s    216820944 bytes (207M)
tar czf test.tar.gz testdir     2m44.550s    282025065 bytes (269M)
pbzip2 test.tar                 2m17.014s    217235869 bytes (208M)
bzip2 test.tar                  9m13.197s    216820491 bytes (207M)
Combining a normal tar file with pbzip2 provides about 22% greater compression than gzip in less time for this test. For some situations, tar + pbzip2 is a great combination. I just wanted to share ;)
Wednesday, January 5, 2011
Linux CentOS 5.5 + Xen 3.4.x and virt-manager 0.8.5
The Xen Hypervisor is a very powerful tool for any sysadmin: loads of virtualization capability, and free (GPL "free"). What's not to like? Well, for starters I don't like the CentOS/RHEL xen-3.0.3 version from 2007, or the equally ancient virt-manager. The solution, with minimal effort, follows:
This pair of updates is done in two distinct parts: one easy, one a little harder. Please make sure you start with a CPU with vmx or vt (virtualization or Vanderpool technology) enabled and running CentOS 5.5 64-bit!
Easy part Xen 3.4.x:
- cd /etc/yum.repos.d/
- wget http://www.gitco.de/repo/GITCO-XEN3.4.3_x86_64.repo
- yum install xen kernel-xen (or "yum update" if you have an older xen installed)
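After rebooting into the new kernel-xen, a quick way to confirm you actually landed on the 3.4.x hypervisor (assuming xend came up cleanly):
xm info | grep -i xen_
# xen_major / xen_minor / xen_extra should add up to 3.4.3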
Note (03/25/2011): newer virt-manager versions 0.8.6 and 0.8.7 have some hefty dependencies on new and/or updated packages. I have yet to successfully build either on CentOS 5.5.
Harder part virt-manager:
- sanely setup a build environment on another box (if possible/practical)
- install a usable virt-manager-0.8.5-1.fc14.src.rpm :
rpm --nomd5 -ihv virt-manager-0.8.5-1.fc14.src.rpm
rpmbuild -bb virt-manager.spec   # note the location of the created virt-manager-0.8.5-1.noarch.rpm
- install a usable python-virtinst*src.rpm:
rpm --nomd5 -ihv python-virtinst-0.500.4-1.fc14.src.rpm
rpmbuild -bb python-virtinst.spec   # note the location of the created python-virtinst-0.500.4-1.noarch.rpm
- install both of the packages created with yum:
yum install /path/to/virt-manager-0.8.5-1.noarch.rpm /path/to/python-virtinst-0.500.4-1.noarch.rpm
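A quick confirmation that both packages made it in, after which the GUI can simply be launched from a terminal:
rpm -q virt-manager python-virtinst
virt-manager &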
Post install notes:
Some kernel updates create an issue for the kernel line entry in /etc/grub.conf. For the Xen 3.4.3 installed system, the line should read one of:
kernel /xen.gz-3.4.3
kernel /boot/xen.gz-3.4.3
(depending on whether /boot is a separate partition)
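For reference, a typical /etc/grub.conf stanza for this kind of setup looks roughly like the following. The kernel and initrd versions and the root device are only examples from a stock CentOS 5.5 install and will differ on your system:
title CentOS (2.6.18-194.el5xen)
        root (hd0,0)
        kernel /xen.gz-3.4.3
        module /vmlinuz-2.6.18-194.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-194.el5xen.img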
For example, if you run "xm list" and get "Error: Unable to connect to xend: No such file or directory. Is xend running?", you may need to check your /etc/grub.conf!
You could also grab the Xen Hypervisor 3.4.3 source yourself and try to build a sane package from the Xen project. Make sure to get the "Linux 2.6.18 with Xen 3.4.x support source tarball".
The official Virt Manager web site is HERE.
Sunday, January 2, 2011
Linux CentOS 5.x on PowerEdge 2950 with PERC 5/e and Dell MD3000 followup
Here is an update to a previous post, now including a graph along with some additional quick tests. Even though the MD3000 line is getting a bit old, I got a killer deal on a unit and had some free time for testing.
Quick info: a Dell PowerEdge 2950 with a PERC 5/e SAS card dual-connected to a Dell MD3000 disk array with fifteen 73GB 15K drives, running CentOS 5.5 with kernel 2.6.18-194.26.1.el5. The MD3000 has one global hot spare set up and performs the RAID 1, 5 or 6 portion of the test, while Linux md handles the RAID 0 part. The MD3000 really only performs well when both controllers are active in all of the tests that I have run. In addition, mirrorCacheEnabled=FALSE is set on the MD3000 controllers.
A picture does speak a thousand words. Higher is better.
Very nice for ext4! I did test with the "nobarrier" option just to make sure the known performance regression was not present in this kernel. The issue does not appear here, since the iozone numbers were right on.
As a note, this is a mostly "stock" setup, changing only the RAID types. You *can* get even better performance numbers by playing with simple items like "blockdev --setra xxxx". For example, "blockdev --setra 8192" produced about a 20% increase under ext4 for both read and re-read in iozone ("--setra 32768" got about 30%)! Also, a scheduler change to "deadline" may help depending on your workload.
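For the curious, those tweaks are one-liners. The device names below are only examples (the md RAID 0 device and one of the underlying disks on this box), so substitute your own:
blockdev --setra 8192 /dev/md0
cat /sys/block/sdb/queue/scheduler             # show the current choices
echo deadline > /sys/block/sdb/queue/scheduler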