New image cluster setup

From ImageWiki

Revision as of 09:07, 10 June 2014 by Kimstp (Talk | contribs)


File servers

Storage is split between two storage servers (nfs1 and nfs2), each with two RAID 5 arrays of 12 x 1 TB disks. RAID 5 spends one disk's worth of capacity on parity, so each array provides (12 - 1) x 1 TB = 11 TB of usable space, for four arrays of 11 TB in total.
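The usable size of each array can be checked with the standard RAID 5 capacity formula, (n - 1) disks times the disk size:

```shell
# RAID 5 usable capacity: one disk's worth of space goes to parity.
disks=12
disk_size_tb=1
usable_tb=$(( (disks - 1) * disk_size_tb ))
echo "$usable_tb"   # → 11 (TB per array)
```

This matches the 11 TB sizes reported for the mounted file systems below.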

They are currently mounted on each server as

/home (11 TB on nfs2 /dev/mapper/vg0-user_1)
/image/data1 (11 TB on nfs2 /dev/mapper/vg1-user_2)
/image/data2 (11 TB on nfs1)
/image/data3 (11 TB on nfs1)
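On the other machines these file systems are presumably cross-mounted over NFS. A hypothetical /etc/fstab fragment for a compute node could look as follows; only the server names come from the list above, while the export paths and mount options are assumptions:

```
# Hypothetical /etc/fstab entries on a compute node. Server names match
# the text; export paths and mount options are illustrative guesses.
nfs2:/home         /home         nfs  defaults,hard,intr  0 0
nfs2:/image/data1  /image/data1  nfs  defaults,hard,intr  0 0
nfs1:/image/data2  /image/data2  nfs  defaults,hard,intr  0 0
nfs1:/image/data3  /image/data3  nfs  defaults,hard,intr  0 0
```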

Compute servers


The DNS configuration currently consists of seven A records and seven CNAME aliases; the host names and addresses are omitted here.
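As a sketch of what such a zone looks like, the following writes a small BIND-style zone fragment with hypothetical names and addresses (none of them the cluster's real ones) and counts the record types, as one might when auditing the real zone file:

```shell
# Toy BIND-style zone fragment; every name and address below is a
# placeholder, not the cluster's real data.
cat > /tmp/zone.example <<'EOF'
nfs1    IN A     192.0.2.1
nfs2    IN A     192.0.2.2
data1   IN CNAME nfs2
EOF

# Count the A and CNAME records in the fragment.
grep -c ' IN A ' /tmp/zone.example       # → 2
grep -c ' IN CNAME ' /tmp/zone.example   # → 1
```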

Access to the enclosure

To get access to the enclosure's OA/iLO management software, first make an SSH tunnel:

ssh SCIENCE\\<ku-login>

Then connect via a browser to


If you have access rights, log in with your KU login.
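For convenience the tunnel can also be kept in ~/.ssh/config. The entry below is purely illustrative: the gateway name, the OA host name, and both ports are invented placeholders, not the real addresses.

```
# Hypothetical ~/.ssh/config entry -- gateway.example.org, the OA host
# name, and both ports are placeholders, not the real addresses.
Host image-oa
    HostName     gateway.example.org
    User         SCIENCE\kulogin
    LocalForward 8443 oa.example.internal:443
```

With such an entry in place, running ssh image-oa opens the tunnel and the management web UI would be reachable at https://localhost:8443/ on the local machine.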

Installing software using the package manager

openSUSE and SLES use the zypper package manager.

Here are a couple of useful commands:

 sudo zypper refresh                   # refresh the repository metadata
 sudo zypper search <package name>     # look for a package
 sudo zypper install <package name>    # install a package

For more information, see the zypper manual page (man zypper).

Installation notes
