Home Lab Tidbits #2

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab?

This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

  • It looks like ServeTheHome has its hands on one of the new Xeon D SoC motherboards from Supermicro and has loaded up ESXi 6.
  • Intel have released some pretty cheap Celeron SoCs, which might be useful for those with heat / power constraints (4W-6W).
  • Benchmarks for the Xeon D-1540 are starting to emerge, and it’s looking pretty good for the price.

Storage

  • The storage wars are heating up again, with Western Digital and Toshiba talking about 10TB SSDs.
  • With huge-capacity SSDs we'll also need faster small ones, like Intel's new SSD 750 PCIe drive.
  • Buffalo have launched a new NAS series based on the Atom, but this NAS from Thecus wins the cool vote with a built-in UPS.
  • The cheapest NASes I've seen are cropping up. Last HLT it was the Synology DS115j; now QNAP have the TS-112P at the ~$100 price point.
  • Samsung have released their popular 850 EVO line in the mSATA form factor.
  • OCZ are back, under Toshiba's control, with the Vector 180.

Network

  • No news this time.

Software

  • Can’t afford to build your own lab? You could always run up a nested environment on vCloud Air for ~$15 a month. Or there’s always AutoLab on your laptop :)

Other

  • While there are plenty of racks available, the StarTech 42U is certainly easier to move than most.

Home Lab Build – ESXi 6 w/ VSAN

As part of documenting my home lab (re)build, today I'm going to build an ESXi 6 server and then bootstrap VSAN using a single host's local disks. If you're following along with my Home Lab Re-Build series, we're building the first ESXi host in the diagram.

[Lab overview diagram]

So why ESXi 6? Well, we want to host some VMs, we want to use just local storage, but we want it to be stable and have the ability to run nested ESXi VMs on top. Using VMware Virtual SAN on a single host provides no data redundancy, so you'll want to keep that in mind if you decide to go this route. It's an unsupported configuration, but (in my opinion) a really useful one in a home lab environment.

First off we'll wipe the local disks, then we'll install ESXi 6, set a root password and set up the management network. Once it's on the network we'll install the vSphere Desktop Client and configure NTP and SSH. Finally we'll configure VSAN to use the local disks of this single host. So, let's get into it.

We're going to mount the GNOME Partition Editor (GParted) ISO to give us the ability to wipe the local disks of any existing partition information. This is required when configuring VSAN, as it expects blank disks.

Once GParted is loaded we can select each disk and ensure no existing partitions exist. In the video below I initially forgot that we need to create a blank partition table before rebooting the host. Create a new partition table by selecting Device -> Create Partition Table, leave the table type as "msdos" and click Apply. You'll need to repeat this for each disk to be used by VSAN.
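
As an aside, if ESXi is already on the box and you just need to blank the capacity disks, the bundled partedUtil can write the same empty msdos label from the ESXi shell. A minimal sketch, using an illustrative device ID (substitute your own from esxcli storage core device list):

# Writes an empty msdos partition table, destroying any existing partition info
partedUtil mklabel /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX msdos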

Once the disks have a blank partition table you can install ESXi 6 as normal; I won't document that here as it's a fairly basic process and it's included in the video below. Once ESXi is installed, set up the management network and install the new version of the vSphere Desktop Client (browse to the IP of your ESXi host for details). We need SSH / CLI access to be able to bootstrap VSAN, so enable SSH in the vSphere Desktop Client by going to Configuration -> Security Profile -> Services Properties -> SSH -> Options -> Start.
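
If you'd rather not click through the UI for this step, SSH can also be switched on from the ESXi Shell via the DCUI. A quick sketch with vim-cmd, which should have the same effect as the client's Start button:

vim-cmd hostsvc/enable_ssh   # set the SSH service policy so it's enabled
vim-cmd hostsvc/start_ssh    # start the SSH service now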

I first heard about enabling VSAN on a single host from William Lam's post. He's using it to get vCenter up and running without the need for shared storage, so we're using it slightly differently, but the concept is the same. He's also got a post on using USB disks in VSAN.

Once logged into the CLI via SSH or the DCUI, run the following commands to set a default VSAN policy to work with only 1 host:

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
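
There is also a vmswap policy class; I haven't confirmed it's needed for this build (treat it as an assumption on my part), but if your VMs' swap objects will land on the VSAN datastore, the same single-host policy can be applied to that class too:

esxcli vsan policy setdefault -c vmswap -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"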

Now that the default policy will work with a single host, build a new VSAN “cluster”:

esxcli vsan cluster new
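
To confirm the one-node cluster actually formed, you can query the local node's membership state; with only one host, it should report itself as the master:

esxcli vsan cluster get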

Finally, add your SSD and magnetic/SSD capacity disks to the new cluster. You can get the SSD-DISK-ID and HDD-DISK-ID from either the UI (Configuration -> Storage -> Devices -> Right Click -> Copy identifier to clipboard) or the CLI (esxcli storage core device list):

esxcli vsan storage add -s SSD-DISK-ID -d HDD-DISK-ID
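
If you're hunting for those IDs from the shell, a rough filter over the device list helps match each identifier to its SSD flag. A sketch; the pattern assumes naa/t10/mpx-style identifiers, so adjust it for your hardware:

esxcli storage core device list | grep -iE '^(naa|t10|mpx)|Is SSD'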

You’ll now have a new VSAN datastore mounted on the local ESXi host. Remember, this datastore is not redundant so use caution.
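
Before moving on, you can double-check the result from the CLI; the claimed disks and the new datastore should both show up (exact naming can vary by build):

esxcli vsan storage list        # disks claimed by VSAN
esxcli storage filesystem list  # the VSAN datastore should be mounted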

Next in the series we'll go through building an AD controller to provide DNS, NTP and directory services for the lab.

Home Lab Build

Today I want to introduce a series that I've been wanting to do for a while: a step-by-step, video-based home lab build. This will be the first in a series where I take you through the new home lab build-out so you can follow along if you like. Let's start with the gear.

I have 2 primary systems in my home lab that are identical and based on the Intel Xeon E5-1620. While they have plenty of power for what I need (quad-core, 3.6 GHz), they do use a considerable amount of power, being rated at 130 watts. The CPUs are coupled with 64GB of RAM per system, which is probably the biggest limit in my lab today. The RAM is a little older and non-ECC; while it was OK when I first got these systems a couple of years ago, it needs replacing. I use the Supermicro X9SRH-7TF motherboard, which supports up to 512GB if you get the right type. For me, this board provided 2 great things: first, lots of memory support; second, onboard 10GbE ports. I hook both systems together with the cheapest 10GbE switch I could find, the Netgear XS708E. It's not fancy, but it pushes packets over copper fast. The systems are housed in the super quiet and minimalist Fractal R4 case. Let's move on to the layout of the lab I'm going to (re)build.

[Lab overview diagram]

I've quickly drawn up how my home network is set up today and how I'm going to connect it through to my home lab, probably using an NSX or vShield Edge. You can see I have 4 ESXi hosts: along with the 2 Supermicro-based systems, I also have 2 small HP N36L MicroServers. I don't have a use for them at this stage, but I'm sure I'll find something along the way. Storage is both local, in the form of VSAN on the ESXi systems, and network-based, on a Synology NAS. In the lower portion of the diagram you can see the 4 VMs I'm going to build first: an AD box, a database server, and then vCenter with an external PSC. As we go along I'll add to this diagram anything I decide to include.

And if you have any comments or questions please reach out.

What's New in vSphere 6 vBrownBag

I presented a What's New in vSphere 6 session on last week's US vBrownBag podcast.
Below is a copy of the recording and deck for those interested.

Home Lab Tidbits #1

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this emerging market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab? This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

  • ASRock have a new motherboard, the EPC612D8A-TB, which may well rival the home lab favorites from Supermicro.
  • Intel have announced their SoC for the enterprise, the Xeon D. Let's hope some inexpensive home lab components are the result.
  • Wanting to build a new system to host lots and lots of storage? The Haswell-E-based ASRock X99 has 18 onboard SATA ports.

Storage

  • In the market for a new NAS? SmallNetBuilder has a guide that will help you decide which is best for you.
  • Both Asustor and Western Digital have a couple of new NAS units with decent performance.
  • ServeTheHome benchmarked the inexpensive 240GB Intel DC S3500 SSD and the considerably more expensive 1.6TB Toshiba PX03SNB160.
  • If you want some SLC enterprise-class storage, Hitachi have a 100GB SSD with great sequential read and write performance.
  • Synology keep their entry bar low with the DS115j.

Network

  • Netgear have a competitor in the cheap 10GbE switch segment. ZyXel have a new 12-port 10GbE switch, the XS1920-12.

Software

  • NAS4Free is pretty easy to get up and running in 5 minutes if you want to roll your own shared storage.
  • A little product called vSphere 6 went GA this week.

Upgrading vCenter 5.5 -> 6

Earlier this year I recorded a video for the VMUG 2015 Virtual Event. As is often the case with online webinar platforms, the quality of the recording wasn't as good as we've come to expect with the prevalence of online video these days. So, I posted the video (embedded below) to my YouTube channel, just like I did last year. Since that recording, thanks to a couple of my colleagues, I've learned a few things that I want to point out.

Firstly, I mentioned that VSAN is using the VMware-acquired Virsto file system. This is incorrect; while VSAN in vSphere 6 does have improved sparseness and caching capabilities, it's not using Virsto. There is also mention of the 256-datastore limitation being removed by VVOLs; this is also incorrect.

Secondly, and much more exciting, VMware have announced the long-awaited Windows vCenter to Linux vCenter Virtual Appliance Fling. My buddy William Lam (of www.virtuallyghetto.com fame) is pretty excited about this one! I thought it particularly relevant for those watching this video, as I had a number of questions at the Virtual Event on this very topic. So head over and grab the Fling. I might just do another video of what it looks like to migrate the AutoLab vCenter to a vCSA!

AutoLab Version 2.0 Released

It has been a while, but the day has arrived: AutoLab version 2.0 is available for download. This version doesn't add support for a new vSphere release, since VMware hasn't shipped one; AutoLab 2.0 is more of a maintenance and usability release.

The biggest feature is added support for Windows Server 2012 R2 as the platform for the domain controller and vCenter VMs. Naturally, you should make sure the version of vSphere you deploy is supported on top of the version of Windows Server you use.

I have also removed the tagged VLANs, which makes it easier to run multiple AutoLab instances on one ESXi server or to extend one AutoLab across a couple of physical servers if you only have smaller machines.

I’ve also added the ability to customize the password for the administrator accounts, which helps lock down an AutoLab environment.

Go ahead and download the new build from the usual download page and get stuck in. If you haven’t used AutoLab before make sure to read the deployment guide.