Benchmark

Disk performance

File server

The following numbers were measured on imagediskserver3.


Speed of the system disk (/dev/hda):
Read access: Numbers are averages over 5 runs.

imagediskserver3 ~ # hdparm -T -t /dev/hda

/dev/hda:
 Timing cached reads:   4915.2 MB in  2.00 seconds =  2457.6 MB/sec
 Timing buffered disk reads:  232 MB in  3.016 seconds =  76.95 MB/sec
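
The averages were presumably collected by repeating the command; a minimal shell sketch for automating this (our own illustration, not the procedure actually used; it assumes awk and hdparm's standard output format) could be:

 # Run hdparm 5 times and average the buffered disk read rate.
 for i in 1 2 3 4 5; do
     hdparm -t /dev/hda
 done | awk '/Timing buffered disk reads/ { sum += $(NF-1); n++ }
             END { printf "average: %.2f MB/sec\n", sum / n }'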


Write access: Numbers are averages over 6 runs.

imagediskserver3 ~ # time dd if=/dev/zero of=/tmp/testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 0.777690 s, 344.6 MB/s

real    0m0.776s
user    0m0.004s
sys     0m0.748s
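
Note that 344.6 MB/s is well above what the same disk sustains for buffered reads (76.95 MB/s), so this run mostly times the page cache rather than the disk itself. GNU dd's conv=fdatasync flushes the file to disk before the rate is reported and would give a more disk-bound figure:

 # Same write test, but flush the file to disk before dd reports its rate.
 dd if=/dev/zero of=/tmp/testfile bs=16k count=16384 conv=fdatasync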


Speed of the RAID array (data disk, /dev/md1):
Read access: Numbers are averages over 5 runs.

imagediskserver3 ~ # hdparm -T -t /dev/md1

/dev/md1:
 Timing cached reads:   4973.2 MB in  2.00 seconds = 2489.22 MB/sec
 Timing buffered disk reads:  1086.4 MB in  3.00 seconds = 361.83 MB/sec


Write access: Numbers are averages over 6 runs.
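
The write figures for the RAID array were not recorded here. Presumably the same dd test was used with the output file on the array; assuming it is mounted at /image/data3 on the disk server, the command would be:

 time dd if=/dev/zero of=/image/data3/testfile bs=16k count=16384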

IBM blade servers

The following numbers were measured on imageserver3.


Speed of the system disk (/dev/sdb):
Read access: Numbers are averages over 5 runs.
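
No figures were recorded here; presumably the same hdparm test was run against the blade's disk:

 hdparm -T -t /dev/sdb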


NFS performance

This test measures the performance of the network file system (NFS): we run on imageserver3 and access data exported by imagediskserver3 (/image/data3).

Read access: Average of 5 runs (the filesystem is unmounted and remounted between runs to flush the client-side cache).

imageserver3 ~ # umount /image/data3
imageserver3 ~ # mount imagediskserver3:/image/data3 /image/data3/
imageserver3 ~ # time dd if=/image/data3/testfile of=/dev/null bs=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.24471 s, 120 MB/s

real    0m2.246s
user    0m0.004s
sys     0m0.132s
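
A shell sketch for running all five read measurements end to end (our own illustration; it parses the transfer-rate line that GNU dd prints on stderr):

 # Repeat the NFS read test 5 times, remounting to flush the client cache.
 for i in 1 2 3 4 5; do
     umount /image/data3
     mount imagediskserver3:/image/data3 /image/data3/
     dd if=/image/data3/testfile of=/dev/null bs=16k 2>&1
 done | awk '/copied/ { sum += $(NF-1); n++ }
             END { printf "average: %.1f MB/s\n", sum / n }'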


Write access: Average of 5 runs.

imageserver3 ~ # time dd if=/dev/zero of=/image/data3/testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.906524 s, 92.5 MB/s

real    0m2.752s
user    0m0.000s
sys     0m0.356s


It looks like the 1 Gbit network is the bottleneck for the read transfer rate: every run reports 120 MB/s. The raw bandwidth of a 1 Gbit link is 125 MB/s, and protocol overhead plausibly accounts for the missing 5 MB/s: with a 1500-byte MTU, each Ethernet frame carries at most 1472 bytes of UDP payload out of roughly 1538 bytes on the wire (Ethernet framing plus IP and UDP headers), which caps the payload rate at about 125 × 1472/1538 ≈ 120 MB/s.
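
As a quick check of that ceiling (UDP payload bytes per frame divided by total bytes on the wire):

 # 125 MB/s raw; 1472 B UDP payload per ~1538 B Ethernet frame (1500 B MTU).
 awk 'BEGIN { printf "%.1f MB/s\n", 125 * 1472 / 1538 }'    # prints 119.6 MB/s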

Comparison with NFS on mosix (old ITU cluster)

The NFS read transfer rate is 41.1 MB/s and the write transfer rate is 25.9 MB/s, roughly a factor of three slower than the new setup. The measurements were made on mosix15 using the /shared drive, following the same procedure as above.

Matlab performance

Performance was measured with Matlab's built-in bench function. bench(10) runs the benchmark suite 10 times and returns one row of execution times (in seconds) per run, so summing over the rows and dividing by 10 gives the average time per test. The Matlab command used is:

T=sum(bench(10))/10

Output of bench for the different machines, collected in a table (execution times in seconds; lower is better):

Machine                       LU        FFT       ODE       SPARSE
imageserver1                  0.1653    0.2206    0.1625    0.3394
imageserver2                  0.1658    0.2228    0.1631    0.3405
imageserver3                  0.1642    0.2186    0.1591    0.3411
bach-4.diku.dk                0.2491    0.2998    0.4130    0.5463
  (Dual P4 3 GHz, 3.5 GB RAM)
kand-1.diku.dk                0.2571    0.3078    0.4074    0.5490
  (Dual P4 3 GHz, 2 GB RAM)
mosix15                       0.2603    0.3363    0.4429    0.6573
  (P4 2.8 GHz, 2.5 GB RAM)
mosix3                        0.3970    0.5885    0.6015    0.9442
  (Dual Xeon 2.4 GHz, 2 GB RAM)

The measurements were made on machines with no other active user processes.
