Building up & testing a new 9TB SATA RAID10 NFSv4 NAS, part I

Plus, a glamour shot of the server itself!

Over the past few weeks I’ve been building up a new data center for my employer, xtendx AG. One of the core tasks has been to design, assemble and install a new storage system. To that end, I’ve put together the below NFSv4 Network Attached Storage (NAS) system.

  • 3U 16-bay Chassis: SuperMicro SuperChassis 836A-R1200B
  • LGA1156 Mother Board: Intel DP55KG w/ P55 Express chip set
  • 2.67 GHz CPU: Intel Core i5 750
  • 8GB DDR3 1333MHz RAM
  • 3x 250 GB Hard Drives for OS: WD RE3 250GB WD2502ABYS
  • 11x 2TB Hard Drives for Data: WD RE4 2TB WD2003FYYS
  • RAID controller: LSI 3ware 9650SE-ML16
  • Quad Port server NIC: Intel PRO/1000 PT Quad Port Server Adapter
  • Operating System: Ubuntu 10.04 LTS

Both the data and system volumes are in a RAID 10 configuration, with a hot spare for the OS already present. I’ll be adding one or two hot spare 2TB drives in a few weeks.
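Adding those spares later will be done through 3ware's tw_cli utility, which ships with the 9650SE. A rough sketch of the procedure follows; the controller ID (/c0) and port number (p15) are assumptions here, so check the output of tw_cli show against your actual layout first.

```shell
tw_cli show                         # list controllers found in the system
tw_cli /c0 show                     # show units, ports, and drive status on controller 0
tw_cli /c0 add type=spare disk=15   # designate the drive on port 15 as a hot spare
```

Once a hot spare is defined, the controller will rebuild onto it automatically if a drive in any redundant unit fails.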

Ubuntu is installed and mostly configured. Over this long Ascension Day weekend, I’ve got IOzone running some benchmarks. Once those are complete I’ll graph and post the data. Below is a sample graph from my first test IOzone run with iozone -Ra -g 64G -n 8B -z -b out.wks.
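For reference, here is that invocation with the flags annotated from the IOzone documentation (the -n value is reproduced exactly as used in the run above):

```shell
# IOzone flag breakdown:
#   -R        generate an Excel-compatible report
#   -a        auto mode: sweep through file sizes and record sizes
#   -g 64G    maximum file size -- set well past the 8GB of RAM so the
#             page cache can't satisfy the large-file tests
#   -n 8B     minimum file size for the sweep
#   -z        with -a, also test all record sizes on large files
#   -b FILE   write spreadsheet-format results to FILE
iozone -Ra -g 64G -n 8B -z -b out.wks
```

The -g value matters most here: if the test files fit in RAM, you end up benchmarking the page cache rather than the RAID array.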

The main reason for these benchmarks is not the pretty graphs, much as I love them. What we’re after is a comparison against our existing server infrastructure, to ensure the new system will hold up under the load of our on-demand streaming servers.

Once the NAS box is up in the server room, we’ll perform additional configuration, testing and tuning: running these benchmarks over the network from the client machines, running them again once NFS clustering with DRBD is set up with a second NAS box, and then going into production!
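Serving the data volume over NFSv4 on Ubuntu 10.04 looks roughly like this in /etc/exports. This is only a sketch: the mount points and client subnet below are placeholders, not our actual values.

```
# /etc/exports -- NFSv4 sketch; paths and subnet are assumptions
/export        10.0.0.0/24(rw,sync,fsid=0,no_subtree_check)
/export/data   10.0.0.0/24(rw,sync,no_subtree_check)
```

With NFSv4, the fsid=0 export defines the pseudo-filesystem root, so clients mount paths relative to it, e.g. mount -t nfs4 server:/data /mnt/data. After editing the file, exportfs -ra reloads the export table without restarting the server.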

There is still much work to be done. Once all the raw data is collected and analyzed, I’ll be posting here again for your reading pleasure.