janhoedt

asked on

ZFS Nexenta on hp NL 40, from hardware to vm

Hi,

I have this great ZFS Nexenta box on an HP NL40 MicroServer with 16 GB RAM.
A while back hanccocka (Andrew Hancock now, apparently :-)) helped me out greatly to fully configure the system (one post from multiple: https://www.experts-exchange.com/questions/27962816/ZFS-on-ESXI-howto-configure-network-Esxi-with-3-nics.html).


Now it turns out my config (3 HP NL40s: 1 for ZFS, 2 for ESXi) was using way too much power, so I'm looking for ways to lower power consumption.

I thought of getting rid of the 2 ESXi hosts and running Nexenta as a VM. Mostly only a DC is running anyhow (I'm barely using my environment these days). That would mean: ZFS + VMs on one box, 1 ESXi host in standby, 1 to be sold.

However, this would mean I'd have to boot my ZFS NAS (HP MicroServer) from its USB stick again, then make a VM with Nexenta and reconfigure it to use the disks internally (for ZFS).

No idea how to do this; please advise.

Just curious, therefore one more thing: can I upgrade the HP MicroServer NL40 in any way to add more RAM/CPU (probably not, just asking: new motherboard maybe...)? Or maybe build it into another chassis...?

J.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

I've always been Andrew Hancock!

The biggest issue with the N40L is its processor, which is quite low-powered in terms of performance.

You can use Nexenta as a VM and present the disks to the VM as RAW to achieve maximum performance.

My article here shows you how:

HOW TO: Add Local Storage (e.g. a SATA disk) as a Raw Disk Mapping (RDM) or Mapped RAW LUN to a virtual machine hosted on ESXi

So, create a basic VM, add all the disks as Local RDM, download the Nexenta CDROM/ISO, BOOT it and Install.
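The RDM step can be sketched as follows. The device names and the datastore path here are placeholders (on a real host, list the actual identifiers with `ls /vmfs/devices/disks/`), and the commands are only printed rather than executed, so you can review them first:

```shell
# Sketch: print the vmkfstools commands that map each local SATA disk as a
# physical-mode raw device mapping (RDM) for the Nexenta VM.
# Device names are placeholders; find the real t10.* identifiers with:
#   ls /vmfs/devices/disks/
make_rdm_cmds() {
  vmdir="/vmfs/volumes/datastore1/nexenta"   # assumed VM directory
  i=1
  for disk in "$@"; do
    # -z = physical-mode RDM: the guest sees the raw disk directly
    echo "vmkfstools -z /vmfs/devices/disks/${disk} ${vmdir}/rdm-disk${i}.vmdk"
    i=$((i + 1))
  done
}
make_rdm_cmds t10.ATA_DISK1 t10.ATA_DISK2 t10.ATA_DISK3
```

Run the printed commands on the ESXi host (via SSH), then add each resulting `rdm-disk*.vmdk` to the VM as an existing disk.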

Just curious, therefore one more thing: can I upgrade the HP MicroServer NL40 in any way to add more RAM/CPU (probably not, just asking: new motherboard maybe...)? Or maybe build it into another chassis...?

Unfortunately not; you are limited by the hardware to 16GB RAM (16GB works but is not supported).

Here you go, the All-In-One!

Follow this great PDF article on how to do it.
janhoedt (ASKER)

Thanks for your fast feedback, Andrew!
As I want to do some pre-flight checks, I checked "enable pass-through"... but my ESXi doesn't seem to support it ("host does not support passthrough configuration"; I googled a screenshot, and this is what I also got).

Probably a showstopper …?

J.
In that article, for "super performance", they are adding the SCSI controller as a passthrough device to the VM.

You can instead just pass the disks through to the VM, as per my EE article above, with a reduction in performance, but still obtain good figures.

I know you are trying to use what you have... but the MicroServer Gen8 is now very good...

http://www.virten.net/2013/11/vsphere-5-homelab-esxi-on-hp-microserver-gen8/
Hi Andrew,

Thanks. I've got version 5.1.0 build 799733; could it help to upgrade to the latest version, or is it really hardware related?
About the new HP MicroServer: is it also low power, can I go beyond 16GB RAM... and most important, can I build in the config of my current ZFS box (Sharkoon, LSI...) without too much hassle? Then I might consider it indeed (and sell my HP MicroServers).

J.
It's hardware related, very few Servers support VM Direct Path I/O.

It still has a maximum supported memory of 16GB. As for low power, it's similar to the low power current MicroServers.

Power Supply: 150 Watt

You will need to change the processor for VM Direct Path I/O support on the Gen8.
Thanks, ok. My last questions, then I'll close this ticket. Will create a new one for other questions:

* The NL40 also has 8 GB maximum supported but can do 16 GB. Won't 32 GB or even 64 GB work? (It would be a shame to spend the upgrade effort and end up with the same amount of RAM.)

* Any idea if I can order the Gen8 modified with different specs (no disks, different processor...)?

* What I also ask myself: the BIOS tweak I had to do on my NL40 (to see all the disks in ...), is it needed on the Gen8 also?

* Can I build in the config of my current ZFS box (Sharkoon, LSI...) without too much hassle?
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
This solution is only available to members of Experts Exchange.
Thanks. So the HP Gen8 is not an option for me (yet).
I could configure my current NL40 with ESXi and ZFS, but then I would throw away my LSI storage controller (which cost me about 200 euro), so that's not an option either.
Too bad.
What would be the next-level machine... HP NL60...?

In other words: which machine could I buy that I can make into a dedicated ESXi-with-ZFS machine, which can hold more RAM and still has acceptable power usage...?
If you want to use VM Direct Path I/O, the cheapest low-power model is the MicroServer Gen8, which users are experimenting with.

Servers with VM Direct Path I/O are few, and those that do have it will require larger power supplies.

What is acceptable power usage?
Thanks, I'm just looking for optimal performance with as little electricity usage as possible in my lab.
Current config: ZFS on one NL40, 1 ESXi running, 1 standby. I was hoping to build 1 ESXi host with a ZFS VM, which has about 32 or 64 GB RAM and a fast CPU. Since all storage is local, speed should be really fast. Power usage: 250-300 watt?
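To put those wattage figures in perspective, a box that runs 24/7 consumes watts × 8760 hours per year; a quick sketch (the wattages are the rough figures from this thread, not measurements):

```shell
# Convert a constant power draw (watts, 24/7) to energy per year in kWh.
# 24 h * 365 days = 8760 hours/year; integer arithmetic is precise enough here.
watts_to_kwh_per_year() {
  echo $(( $1 * 8760 / 1000 ))
}
watts_to_kwh_per_year 300   # a 300 W host: 2628 kWh/year
watts_to_kwh_per_year 50    # e.g. two ~25 W NUC-class boxes: 438 kWh/year
```

Multiply by your local electricity price per kWh to compare running costs.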
That's the issue:

performance versus power versus function.

The MicroServer Gen8 ticks all those boxes. If you select a larger server: more electricity, more performance, and more cost to run.

Is noise an issue ?

These are tower servers, capable of 32GB, with a lot more noise and around 300 watts of power:

HP ProLiant ML310e Gen8
DELL PowerEdge T110 II
IBM x3100 M4

None of the above support VM Direct Path I/O either; you've got to purchase an expensive enterprise server for that option, unless you build your own whitebox out of bits,

e.g. processor and motherboard etc., so you can pick your own supported motherboard, processor, case, memory etc.

Here's a home whitebox build with VM Direct Path I/O:

http://thehomeserverblog.com/esxi/esxi-5-0-amd-whitebox-server-for-500-with-passthrough-iommu-build-2/

http://wahlnetwork.com/2013/12/02/new-haswell-fueled-esxi-5-5-home-lab-build/
Again, in this article, what he built does not support VM Direct Path I/O, so he had to pass the disks through as RDM and lose some performance for his NAS.

The NUCs are low power (we have them here), but they are limited to 32GB, and you cannot add any devices to them.
32 GB on 25 watt?! That would mean I could sell my 2 NL40s and run 1 NUC on 25 watt, instead of 1 ESXi now powered on and 1 standby... Or even 2 NUCs for HA on 50 W... Sounds really good to me.
Actually my main concerns are power usage and performance, 2 opposites, I know...
What if I would configure the LSI RAID on my Sharkoon? I'd have 6 SSDs of 120 GB in RAID 5 as a LUN, then no need for ZFS, and about 16 GB for my VMs. Then again, my 3 SATA disks of 2 TB would be lost in space (can't do anything with them). It would make my ESXi really fast though. I could forget about vCenter too and just connect to 1 ESXi host. Good enough for me.
Have you measured the power consumption of an N40L with zero disks and a USB install? Because the power consumption is similar.

But it's no surprise, because the NUCs are low power, designed as desktop PCs, and have no expansion!

Again a compromise.

Your final solution would work.
Thanks, my config could work indeed. Just wondering if I could have some kind of RAID on my SATA disks too (extra RAID controller, software RAID...)? Then I could put all my offline VMs and data on that LUN.
Software RAID will not work with ESXi.

You will need a supported RAID controller for ESXi RAID.
Thanks. Not sure I understand: I configure my LSI RAID controller to RAID 5 on the Sharkoon SSDs. I boot ESXi from USB, then create a LUN on this RAID 5 SSD volume. ESXi just sees the one SSD RAID 5 volume, right?
The 3 SATA disks will be seen as 3 separate volumes. No soft RAID, OK, so no RAID possible on the SATAs then, since I think there's no possibility to add an extra RAID controller to the NL40(?)
Thanks. Not sure I understand: I configure my LSI RAID controller to RAID 5 on the Sharkoon SSDs. I boot ESXi from USB, then create a LUN on this RAID 5 SSD volume. ESXi just sees the one SSD RAID 5 volume, right?

Correct.

The SATA controller in the N40L only supports software RAID; this is not supported in ESXi as RAID.

If you want three SATA disks as 3 separate datastores, that will work okay.
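For sizing the RAID 5 options being weighed here, usable capacity is (number of disks - 1) × disk size, since one disk's worth of space goes to parity; a quick check with the disk counts from this thread:

```shell
# RAID 5 usable capacity in GB: one disk's worth of space is lost to parity.
# Usage: raid5_usable_gb NUM_DISKS DISK_SIZE_GB
raid5_usable_gb() {
  echo $(( ($1 - 1) * $2 ))
}
raid5_usable_gb 6 120   # six 120 GB SSDs  -> 600 GB usable
raid5_usable_gb 3 120   # three 120 GB SSDs -> 240 GB usable
```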
Thanks!
I'm considering 3 options now:
1. Install ESXi on a local RAID 5 SSD volume, removing the 3 local SATA disks (2 TB).
2. Install Windows 2012 Core on a RAID 1 SSD of 50 GB, with a LUN of 4x 120 GB.
3. Buy this Intel NUC with 32 GB; no idea if that is a good option though(?)
FYI: option 1 = a LUN on 4 SSDs of 120 GB.
In all cases I'd sell 2 of my 3 NL40s.
A thought: with ZFS as a VM, can't I offer my 3 SATA disks as 1 disk in hardware RAID 5? This way I can still use my LSI RAID and maybe have a performance gain?
A thought: with ZFS as a VM, can't I offer my 3 SATA disks as 1 disk in hardware RAID 5? This way I can still use my LSI RAID and maybe have a performance gain?

OR make a RAID 5 from 3 SSDs for running the Nexenta ZFS VM and maybe another VM, then 2 SSDs for cache and 1 for log...? (I have 4 SSDs of 60 GB, 4 of 120 GB, and 2 laptop disks.)
You can present your three SATA disks to a VM using RAW mode,

and then use RAID-Z to create a ZFS RAID of 3 disks; this is the preferred way of using ZFS, because it's faster than "hardware RAID".
And maybe overclock the CPU a bit...
Thanks, I know, but I run my ESXi from USB. Then I have 6 SSDs in the Sharkoon and 3 SATA disks. The idea: present a RAID 5 of 3 SSDs as 1 disk to ESXi, then the Nexenta VM on this disk. Present the 3 other SSDs raw for cache and log. The 3 SATA disks indeed as raw in RAID-Z then.
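That layout could be expressed in ZFS roughly as below. The pool name and device names are placeholders for how the raw disks would appear inside the Nexenta VM, and the commands are only printed here, not executed:

```shell
# Sketch of the pool layout described above; device names are placeholders.
# Commands are echoed for review; drop the quotes/echo inside the Nexenta VM.
zpool_layout_cmds() {
  echo "zpool create tank raidz c1t0d0 c1t1d0 c1t2d0"   # 3 SATA disks as RAID-Z
  echo "zpool add tank cache c1t3d0 c1t4d0"             # 2 SSDs as L2ARC read cache
  echo "zpool add tank log c1t5d0"                      # 1 SSD as ZIL/SLOG (sync write log)
}
zpool_layout_cmds
```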
Thanks, great! Only question left: which SSD size to use where: 3 SSDs of 120 GB for the Nexenta VM AND other VMs, or 3 SSDs of 60 GB for only the Nexenta VM?
Thanks, great! Only question left: which SSD size to use where: 3 SSDs of 120 GB for the Nexenta VM AND other VMs, or 3 SSDs of 60 GB for only the Nexenta VM?

I don't understand? what's the question?
The question is whether the RAID 5 I will create for running my VMs should only be used for my Nexenta (then I would use the SSDs of 60 GB). If I can add more VMs on this RAID (performance-wise), I'd use the SSDs of 120 GB. It's a performance question I ask myself.