VMware White Box


I’m super happy to be off of a couple of small 4GB desktops and onto a real server-class computer. CPU has not been the bottleneck for me, but memory has been very limiting. Most of my test machines sit idle most of the time, but having to pick and choose which ones to start and stop has been a pain. No longer. I now have the following hardware up and running ESXi 4.1.

Motherboard – SuperMicro X8DTi-LN4F

This motherboard supports two CPUs and up to 192GB of memory using 16GB chips. Although chips that size are a bit out of my price range, I was able to find 8GB chips at a reasonable price. I’ve started with just three 8GB chips and one CPU, but I’ll be able to expand that up to 48GB before having to add a second processor. That still leaves me the option of adding a second CPU and up to another 48GB of memory.

Case – NZXT TEMPEST EVO

I read many reports of people having difficulty fitting their SuperMicro boards into cases, or having to mod their cases. You can see on SuperMicro’s website that some of their board sizes are listed as proprietary. This one is listed as Extended ATX, so fitting it into my second NZXT case was easy. I only had to remove the two drive cages to allow the motherboard to slide into place. After the motherboard was installed, I was able to put one of the hard drive cages back in, but the board does not allow the second cage to fit. Four hard drive slots should be enough, especially since I’m not using any at the moment: I’ve installed ESXi onto a USB thumb drive and use iSCSI for VM storage. One note, though: this motherboard is designed for the power supply to be at the top instead of the bottom, so my power cables have to run across the motherboard.
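
For anyone who wants to script that storage setup rather than click through the vSphere Client, here is a rough Python sketch using pyVmomi (VMware’s Python SDK, which arrived well after ESXi 4.1, so treat it as illustrative rather than what I actually ran). The host name, credentials, and target address are placeholders; it enables the software iSCSI initiator and points it at an iSCSI server.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to the host; names and credentials here are placeholders
    ctx = ssl._create_unverified_context()  # lab box with a self-signed cert
    si = SmartConnect(host="esxi.lab.local", user="root",
                      pwd="secret", sslContext=ctx)
    try:
        dc = si.content.rootFolder.childEntity[0]    # first datacenter
        host = dc.hostFolder.childEntity[0].host[0]  # first (only) host
        storage = host.configManager.storageSystem

        # Enable the software iSCSI initiator
        storage.UpdateSoftwareInternetScsiEnabled(True)

        # Point the software iSCSI HBA at the storage server
        for hba in storage.storageDeviceInfo.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba):
                target = vim.host.InternetScsiHba.SendTarget(
                    address="192.168.1.50", port=3260)  # placeholder target
                storage.AddInternetScsiSendTargets(
                    iScsiHbaDevice=hba.device, targets=[target])

        # Rescan so any new LUNs show up as candidate datastores
        storage.RescanAllHba()
    finally:
        Disconnect(si)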

Power Supply – Rosewill BRONZE Series RBR1000-M 1000W

This power supply is overkill for me at the moment, but it has the two 8-pin connectors needed to run two CPUs, which leaves me plenty of room to grow without having to worry about it. Not to mention it was on sale for the same price as lower-wattage PSUs that don’t have two 8-pin connectors.

CPU – Intel Xeon E5606 Westmere-EP 2.13GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366 80W Quad-Core Server Processor

Like I said, CPU has not been my bottleneck, so I did not go for a high-end processor. Four cores at this speed should satisfy me for a while.

CPU Cooler – Intel BXSTS100A Active heat sink with fixed fan

Luckily I read other reviews that mentioned you need a cooler with threaded screws instead of the snap-in clips that come on most coolers.

Memory – Kingston ValueRAM 24GB DDR3 SDRAM Memory Module – KVR1333D3D4R9SK3/24G – 24GB (3 x 8GB) – 1333MHz ECC – DDR3 SDRAM – 240-pin DIMM

Non-ECC memory would not give me the future expansion capability I want, and I found the supported memory listed on SuperMicro’s website to be very expensive. Kingston’s website has a list of motherboards along with memory tested and guaranteed to work with them. I found a 3-pack part number for my 8GB chips and got it at buy.com for a good price.

All of this, purchased from NewEgg.com (except the memory, from buy.com), set me back less than $1400. I built my first desktop PC in 1993 for over $2000; more recently, the last couple I built still cost me around $500–$600. I’ll be able to easily run 10 VMs on this setup, probably a few more. I see my upgrade path being $400 for another 24GB of memory and 10 more VMs, then a second CPU and 24GB more memory for $600 and 10 more VMs, and lastly another 24GB of memory for $400 and 10 more VMs. If I need that many more VMs in the next couple of years, I’m in trouble. All said and done, that would be 40 VMs on one server costing $2800 total. My setup may be a little cheaper than others’ since I’m not having to purchase any controllers or hard drives, but I already spent that money on a beefy iSCSI server.
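
The upgrade-path math works out like this (a trivial Python check of the numbers above; each step adds roughly 10 VMs of headroom):

    # Sanity check of the upgrade-path figures quoted above
    steps = [("initial build: 1 CPU, 24GB", 1400, 10),
             ("add 24GB memory",             400, 10),
             ("add second CPU + 24GB",       600, 10),
             ("add final 24GB memory",       400, 10)]
    total_cost = sum(cost for _, cost, _ in steps)
    total_vms = sum(vms for _, _, vms in steps)
    print(total_cost, total_vms)  # -> 2800 40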



13 thoughts on “VMware White Box”

  • Pascal Kuenzli

    Nice config – I also want to buy this board (and the same CPU, but two of them). My open question is whether this board supports passthrough configuration (VMDirectPath). It would be nice if you could confirm that for me.

    At the moment I’m using an ASUS P5Q board, which does not support passthrough.

    Thanks!

  • getSurreal

    @Vishal Patel,

    All four NICs are available, but I have not added more than one to a vSwitch yet.
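
    If I do team them later, this is roughly what it looks like scripted in Python with pyVmomi (a newer SDK than anything ESXi 4.1 shipped with, so consider it a sketch; the vSwitch and vmnic names are placeholders):

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        # Placeholder host name and credentials
        ctx = ssl._create_unverified_context()
        si = SmartConnect(host="esxi.lab.local", user="root",
                          pwd="secret", sslContext=ctx)
        try:
            dc = si.content.rootFolder.childEntity[0]
            host = dc.hostFolder.childEntity[0].host[0]
            net = host.configManager.networkSystem

            # Reuse the existing vSwitch spec and widen its uplink bond
            vsw = next(v for v in net.networkInfo.vswitch
                       if v.name == "vSwitch0")
            spec = vsw.spec
            spec.bridge = vim.host.VirtualSwitch.BondBridge(
                nicDevice=["vmnic0", "vmnic1"])  # both uplinks active
            net.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
        finally:
            Disconnect(si)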

  • Ben

    Thanks for describing your build.

    Somewhat off-topic, but I have a newbie question. How are you using 10, and in the future, 20 VMs? I’ve only begun and use them for testing, working around physical limitations, etc … but I’m anxious to see how to expand the use of what has turned out to be a great tool.

  • getSurreal

    VMs just start adding up. Testing different OSes and new version releases such as CentOS, Ubuntu, Fedora, Server 2003, Server 2008, XP, Win7, etc. Then you have VMs with different roles: domain controllers, web servers, database servers, a video surveillance server, a file server, and VMs just for some application I want to test without having to uninstall it if I don’t like it.

  • Simon

    Great post. The links to Newegg really help. Do you know if the motherboard is certified to run ESXi? Can you do high availability with your setup?

  • getSurreal

    @Simon – Glad the links were helpful. I don’t think VMware is going to spend much time certifying various motherboards; that’s the primary reason for posting a “White Box” that is reported to work. This setup supports both HA and DRS.
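
    For the curious, HA and DRS are cluster-level features managed through vCenter, not on the standalone host. A hedged pyVmomi sketch of flipping them on (the vCenter address, credentials, and the assumption that the first cluster found is the right one are all placeholders):

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        # Talks to vCenter, not the ESXi host; names are placeholders
        ctx = ssl._create_unverified_context()
        si = SmartConnect(host="vcenter.lab.local", user="administrator",
                          pwd="secret", sslContext=ctx)
        try:
            dc = si.content.rootFolder.childEntity[0]
            cluster = next(c for c in dc.hostFolder.childEntity
                           if isinstance(c, vim.ClusterComputeResource))
            spec = vim.cluster.ConfigSpecEx(
                dasConfig=vim.cluster.DasConfigInfo(enabled=True),  # HA
                drsConfig=vim.cluster.DrsConfigInfo(enabled=True))  # DRS
            cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
        finally:
            Disconnect(si)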

  • Fellow

    Hi James,

    Thanks for the writeup. I’m also working toward a home ESXi server. I’m interested in its noise and power draw. Do you have any numbers on that? I like the idea of having a CPU and RAM upgrade path. I’ve been thinking about a ThinkMate for the silent components. Please reply by email as well.

    Thanks,
    fellow

  • getSurreal

    @ Fellow

    Noise is low on this setup; having no internal hard drives helps. But my iSCSI server sitting right next to it is pretty loud due to the number of fans and hard drives I have in it. Sorry, I don’t have power consumption numbers for you.