Tuesday, December 28, 2010

Linux CentOS 5.x on PowerEdge 2950 with PERC 5/E and Dell MD3000

This is my second pre-production test of a Dell MD3000 SAS setup. For background and comparison, I resurrected the testing info from a very similar setup with CentOS 5.2 back in late 2008/early 2009. Here is the (basic/limited) information from that time frame:

Dell PowerEdge 2950 II, BIOS version 2.5.0, 2x dual-core Intel(R) Xeon(R) CPUs @ 3.00GHz, 16GB RAM, Dell MD3000 (Firmware 07.35.22.60, mirrorCacheEnabled=FALSE) with PERC 5/E (non-cache) model UCS-50, md RAID 0 using EXT3 (-b 4096 -E stride=128), SAS drive setup was 2 sets of 5x 450GB 15k drives (Dell hardware) in RAID 5, running kernel 2.6.18-92.1.22.el5 under CentOS 5.2.

Obviously, based on some of the details above, I was testing the effects of several very different changes (not all at the same time). For starters, the MD3000 was originally tested with the old 6.x firmware, plus MD3000 controller changes, ext3 stride changes, md chunk size performance, etc. I will just say that there are LOTS of things that will affect performance. But the fastest setup (in terms of system disk performance) for that time frame was using both controllers (with md RAID 0) and any combination of MD3000 drive setup. That is, 2 LUNs, each assigned to a different MD3000 controller. I chose 2 MD3000 RAID 5 volumes striped together with md RAID 0: effectively RAID 50. Please don't hassle me about RAID 5 not being good for X database or Y program. Suffice it to say that the disk space vs. performance vs. cost question led me to this choice. The numbers below are the averages of several runs.
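For illustration, here is roughly how two LUNs like that get striped together and formatted with the ext3 options above. The device names and the md chunk size are only placeholders; the real device names depend on how the rdac/mpp driver presents the LUNs:

# /dev/sdb and /dev/sdc are hypothetical names for the two MD3000 RAID 5 LUNs,
# one owned by each MD3000 controller.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 /dev/sdb /dev/sdc
# 4 KiB blocks with stride=128 (128 x 4 KiB = 512 KiB, matching the chunk above).
mkfs.ext3 -b 4096 -E stride=128 /dev/md0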

(iozone test)
sync ; iozone -s 20480m -r 64 -i 0 -i 1 -t 1 -b kernel-2.6.18-92.1.22test##.xls
  Initial write   224179.00 KB/sec
        Rewrite   205363.05 KB/sec
           Read   350508.81 KB/sec
        Re-read   355806.31 KB/sec

(generic hdparm test)
sync ; hdparm -Tt /dev/md0
Timing cached reads:   2976 MB in  2.00 seconds = 1487.69 MB/sec
Timing buffered disk reads:  892 MB in  3.00 seconds = 296.90 MB/sec

(generic raw read speed test)
sync ; time dd if=/dev/md0 of=/dev/null bs=1M count=20480
21474836480 bytes (21 GB) copied, 66.9784 seconds, 321 MB/s
real    1m7.042s

Sadly, I do not have a spare R710 at the moment or I would test with it...

Current test setup is:
PowerEdge 2950, BIOS Version 2.6.1, 2x dual-core Intel(R) Xeon(R) CPUs @ 3.00GHz, 16GB RAM, Dell MD3000 (Firmware 07.35.22.60, mirrorCacheEnabled=FALSE) with PERC 5/E (non-cache) model UCS-50, md RAID 0 using EXT3 (-b 4096 -E stride=128), latest drive firmware, SAS drive setup was 2 sets of 5x 450GB 15k drives (Dell hardware) in RAID 5, running kernel-2.6.18-194.26.1.el5 x86_64 under CentOS 5.5 with Dell's linuxrdac-09.03.0C06.0234 and mptlinux-4.00.38.02-3.
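Before the numbers, a rough sanity check of the driver and md stack; the mptsas module name is an assumption based on the mptlinux package above, so adjust for whatever driver is actually in use:

# Confirm the HBA driver version and that the striped md volume is assembled.
modinfo mptsas | grep -i version
cat /proc/mdstat
# List the LUNs presented through the rdac/mpp layer.
cat /proc/scsi/scsi

And the same tests as before: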

sync ; iozone -s 20480m -r 64 -i 0 -i 1 -t 1 -b kernel-2.6.18-194.26.1test##.xls
  Initial write   211298.00 KB/sec
        Rewrite   277691.25 KB/sec
           Read   453337.34 KB/sec
        Re-read   480531.75 KB/sec

sync ; hdparm -Tt /dev/md0
Timing cached reads:   2900 MB in  2.00 seconds = 1449.77 MB/sec
Timing buffered disk reads:  926 MB in  3.00 seconds = 308.26 MB/sec

sync ; time dd if=/dev/md0 of=/dev/null bs=1M count=20480
21474836480 bytes (21 GB) copied, 68.8348 seconds, 312 MB/s
real    1m8.906s

Roughly two years and three RHEL/CentOS 5 updates (5.3, 5.4, 5.5) later, and the results are mostly positive with the stock kernel.

Now I'm testing with the newest mainline kernel tracker kernel, 2.6.36-2.el5.elrepo (without the Dell kernel module changes), from the GREAT folks at ELRepo; a quick install note follows the numbers. Nice: the mainline kernel offers modest improvements over the stock CentOS 5.5 kernel, aside from a slower initial write:

sync ; iozone -s 20480m -r 64 -i 0 -i 1 -t 1 -b kernel-2.6.36-2.el5.elrepotest##.xls
  Initial write   140675.53 KB/sec
        Rewrite   274615.28 KB/sec
           Read   467823.69 KB/sec
        Re-read   498164.59 KB/sec

sync ; hdparm -Tt /dev/md0
Timing cached reads:   2952 MB in  2.00 seconds = 1476.07 MB/sec
Timing buffered disk reads:  1008 MB in  3.00 seconds = 335.54 MB/sec

sync ; time dd if=/dev/md0 of=/dev/null bs=1M count=20480
21474836480 bytes (21 GB) copied, 57.3881 seconds, 374 MB/s
real    0m57.419s
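
For reference, pulling in that mainline kernel is roughly the command below. This assumes the elrepo-release package for EL5 is already installed; the mainline kernels live in the elrepo-kernel repository, which is disabled by default:

# Installs the latest kernel-ml (mainline) package from ELRepo.
yum --enablerepo=elrepo-kernel install kernel-ml

After that, check that the grub default entry points at the new kernel and reboot.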

If I figure out how to do bar charts for the results, I will add them.
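If it happens, one option would be gnuplot with a small two-column summary file (test name and KB/sec) pulled out of the iozone output; the file name and layout here are made up:

# iozone_summary.dat is hypothetical: one line per test, e.g. "Initial_write 211298".
gnuplot -e "set terminal png; set output 'iozone.png'; set style data histogram; set style fill solid; plot 'iozone_summary.dat' using 2:xtic(1) title 'KB/sec'"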

There you go. A bit of performance history and some signs of things to come. Naturally, I will test with ext4 and other changes, but all of the above info has come from a generally stock and/or default setup. Since the MD3000 I get to play with now is fully populated with drives, I plan to compare the dual 5-drive RAID 5 setup with dual 7-drive RAID 5 and maybe RAID 1. Then there is the whole RHEL/CentOS 6 question to answer for testing...
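For the ext4 pass, the rough equivalent of the ext3 options above would be something like the line below (on CentOS 5 the ext4 tools come from the e4fsprogs package); the stripe-width value is just a guess for a two-device md RAID 0 and would need tuning to the final layout:

# stride=128 matches the ext3 setup above; stripe-width=256 assumes 2 data devices
# in the md RAID 0 (128 x 2). Placeholder values only.
mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0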
