Nexenta & ZFS basics

Today I was invited to the OpenStorage Summit EMEA in Amsterdam. They could have just called it the Nexenta Summit, but hey, who cares? It was a really nice summit. Not too big, but everyone you could possibly imagine was there. It was great to meet, for example, CEO Evan Powell and VP of Marketing Bill Roth, amongst a lot of local Nexenta sales people as well as global techies (Theron Conrey and Andy Bennett).


So what the heck is Nexenta doing? Nothing more, nothing less than ZFS on steroids:
“…The core storage platform, NexentaStor™, is a software-only storage operating system based on Linux and a storage-optimized file system based on the OpenSolaris / Open Storage ZFS file system. With this unique combination of open source codebases, NexentaStor delivers the ease-of-use and developer-friendly aspects of Linux in tandem with the power and scalability of the revolutionary ZFS file system…”

Yes, software-only! This means you can literally choose whatever commodity hardware underneath that you want. To be eligible for support, your config will of course have to be validated. In this first blog post I'll quickly go over some architectural basics, and in a second post I'll elaborate more on the scaling/HA part of Nexenta.
This is what the basics of ZFS look like:


The Flash: DRAM storage. There's no dispute whatsoever that this is the fastest storage of all. It's also the most expensive. Well, not always, but I'll come back to that later. So what happens with RAM? Nexenta eats it ALL to hold its data tables (all but 1GB, which is left for the OS). This is good! In fact, adding more RAM is the smartest thing you can do in a Nexenta box.
SuperMan: SSD storage. This is used for two things. The first is level-2 read caching (L2ARC), which obviously holds the most frequently used data. And now a pretty sexy part: the ZIL. The ZIL holds incoming (synchronous) application writes on the SSD for a few seconds so they can be flushed to disk sequentially. This recovers a lot of bandwidth that would otherwise be lost to a random write pattern.
The Juggernaut: the spinning disks. Power and volume, no more, no less (a concrete pool layout follows below).
* note: the references to the comic characters are mine, not Nexenta's!
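
To make the tiering concrete, here's a minimal sketch of how the three layers map onto a single ZFS pool. The pool name (tank) and the device names are hypothetical, and note that RAM needs no configuration at all: ZFS automatically claims it for its read cache (the ARC).

  # Capacity tier ("The Juggernaut"): spinning disks in a RAID-Z2 vdev
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # SSD tier ("SuperMan"), part one: an L2ARC read cache device
  zpool add tank cache c2t0d0

  # SSD tier, part two: a mirrored log device that takes over the ZIL
  zpool add tank log mirror c2t1d0 c2t2d0

  # Verify the layout: cache and log vdevs are listed separately
  zpool status tank

Mirroring the log device is the usual precaution: if you lose the ZIL, you can lose the last few seconds of acknowledged writes.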

Commodity vs. certified hardware:
Commodity hardware means you can pick the hardware off the shelf. Is this really a smart idea? For a home/testing lab, yes. For production? Maybe not. Therefore Nexenta has been working with multiple partners (e.g. SuperMicro, DELL, …) to create reference architectures. This is the brand new one for DELL:

Server: DELL PowerEdge R720 12G (around €17.5k)
  • (2) Intel® Xeon® E5-2650, 2.00GHz, 20M Cache, 8.0GT/s QPI, Turbo, 8C, 95W
  • (8) 16GB RDIMM, 1600MHz, Standard Volt, Dual Rank, x4
  • Risers with up to 6 x8 PCIe slots + 1 x16 PCIe slot
  • Intel Ethernet X540 10Gb BT DP + I350 1Gb BT DP Network Daughter Card
  • Intel X520 DP 10Gb DA/SFP+ Server Adapter
  • iDRAC7 Enterprise with VFlash, 8GB SD Card
  • PERC H710 Integrated RAID Controller, 512MB NV Cache
  • (2) 146GB SAS 6Gbps 2.5-in 15K RPM Hard Drive (Hot Plug)
  • (4) 200GB SSD SAS 6Gbps 2.5-in Hard Drive (Hot Plug)*
  • (2) PERC H810 RAID Adapter for External JBOD, 1GB NV Cache
JBOD: PowerVault MD1200 (around €8.5k/node)
  • (2) PowerVault MD1200 Base
  • (2) 2 x 0.6M SAS Connector External Cable
  • (24) 600GB SAS 15k 3.5″ HD
*: I took the SSDs from the online configuration tool, but these should be STEC ZeusRAM disks.


Pricing:
If you add this all up, you get an enterprise-class tiered scale-out NAS (10TB RAID-6 or 7.5TB RAID 1+0) for less than €35k. Now imagine that the EqualLogic hybrid model (PS6110XS, 7x SSD, 17x 10k) costs around €60k (10TB RAID-50)… Important note: you're now running on the Community license, which is free up to 18TB of storage (with no support). I'll elaborate on the licensing in the second post. And I promised before that I would show you when RAM gets cheaper: enable dedupe! Nexenta puts the dedupe tables in RAM (so it eats even more RAM), but this means you can do inline dedupe and thus eliminate disks in the back end.
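
Enabling it is literally a one-liner. A hedged sketch, again assuming a hypothetical pool named tank (the ratio you get back depends entirely on your data, of course):

  # Turn on inline deduplication for the pool (the dedup table lives in RAM)
  zfs set dedup=on tank

  # See what it buys you: dedupratio is the space multiplier you gained
  zpool list -o name,size,allocated,dedupratio tank

The trade-off is the one described above: the dedup table competes with the ARC for memory, so size your RAM accordingly.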

What’s up next: 
I’ll show you how storage is presented to the hosts and how the different failover mechanisms work. EDIT: here it is finally!

In the meantime, maybe you want to read this blog post from a colleague of mine. It seems they are building freaky all-in-one boxes for the home lab, based on ZFS.
