Author Archives: Nick Marshall

Home Lab Tidbits #3

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab?

This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

Xeon
  • It looks like the new Intel Xeon lineup is getting a fair amount of attention from both AnandTech and ServeTheHome, here and here.
  • The new Broadwell-DE based systems really are awesome: 128GB of RAM and 10GbE is great in such a small package.
  • And if you want 128GB of DDR4 RAM, Corsair have a bunch of new kits.

Storage

  • Will we ever see the SandForce SF3500 based SSDs?
  • Not to be confused with Intel’s new SSD, the S3510, which is now available.
  • Getting some more NVMe details from Supermicro.
  • QNAP have a couple of AMD based NASs on the market here and here.
  • Synology have DSM 5.2 out. I’m looking forward to seeing what kind of improvement they’ve made to their (previously poor) SSD caching. Check out the details on SmallNetBuilder and ServeTheHome.
  • Speaking of Synology, they have some cheap(ish) large NASs that have just been released.

Network

  • One thing that continues to amaze me is the price of PCIe 10GbE NICs. You can buy a whole new motherboard with Intel NICs for a LOT less than this new card from D-Link.
  • Speaking of those Intel 10GbE NICs, the new one is the X552/X557-AT.

Software

  • EMC have some pretty nifty Software Defined Storage solutions that are now open source and available to the community.

Other

  • My friend Frank Denneman and I certainly agree on specs for home lab servers.
  • Converging standards is a good thing, but it’ll be interesting to see how Thunderbolt networking gets implemented.
  • If you hadn’t noticed I’m a big fan of the Intel Xeon D-1540. Supermicro have a great little box that consumes minimal power.
  • More traditional small systems (LGA2011-3) still have new options too.
  • While not strictly “home lab”, one can dream, right?

Home Lab Build – Active Directory

In this part of the Home Lab Build series, we’ll step through the creation of a Windows 2012 R2 Domain Controller. While it’s one of the more basic installs, it carries some fairly important tasks within a lab environment. You can find the Visio file for the diagram here.

[AD build diagram]

If you want a basic set up with some kind of identity source, name resolution and a time sync source all in one, building a Windows AD box is going to be on your short list. Also, if you plan on studying for a Microsoft or VMware certification, having a grasp on Active Directory is a must. Like it or loathe it, Windows, and in turn Active Directory, dominates many corporate networks today. So let’s get to it.

At a high level we want to accomplish a few things:

  1. Install Windows 2012 R2 on a new VM
  2. Set an Administrator password
  3. Install VMware Tools
  4. Set a static IP
  5. Set a nameserver
  6. Set a hostname
  7. Disable the local firewall
  8. Enable Remote Desktop Access
  9. Add the Active Directory and DNS roles
  10. Set a Domain Name for the new Domain
  11. Set a Restore Mode password

First up, using the vSphere Desktop Client, create a VM with a Guest OS of Windows Server 2012 (64-bit). Change the NIC from E1000E to VMXNET3 and leave all other “Create New Virtual Machine” wizard settings at their defaults. Using Thin provisioning is a good idea in a lab environment, especially if you’re disk space constrained. If you have more than 2 physical cores on your ESXi host, change the vCPU count of your VM to 2, but don’t do this if your lab only has 2 physical cores. Mount the Windows 2012 R2 ISO to this VM and then power it on.
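If you have PowerCLI handy, the same VM can be sketched out from the command line instead of the wizard. This is just an illustrative alternative; the VM name, host name and network below are my placeholder assumptions, so adjust them to suit your lab:

# Create the VM shell: thin provisioned disk, Windows Server 2012 (64-bit) guest type ("dc01" and "esxi01.lab.local" are placeholders)
New-VM -Name "dc01" -VMHost "esxi01.lab.local" -GuestId windows8Server64Guest -NumCpu 2 -MemoryGB 4 -DiskGB 40 -DiskStorageFormat Thin -NetworkName "VM Network"
# Swap the default NIC for VMXNET3
Get-VM "dc01" | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false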

Once the Windows installer has booted, select the appropriate language and click the “install now” button. Setup will give you a choice of OS version; in this case we want the standard GUI installation. On the following screen you’ll be asked whether you want “upgrade” or “custom”, which actually means “install Windows only”. Select “custom” and then use the whole disk, without creating any partitions, by just clicking “next”. The installation of the OS will now commence and will take a few minutes (depending on your hardware).

After the install is complete and the server reboots you will be asked to set an Administrator password. Once logged in to the server, VMware Tools is the first thing that should be installed. This will provide the drivers and utilities needed to get the most out of this VM. Specifically, without VMware Tools, the VMXNET3 network card we chose to use does not have default drivers in Windows. Reboot the server once the VMware Tools installation is complete.

The server can now have its network identity created. We’ll set a static IP, a subnet mask, a gateway and a name (DNS) server. We’re actually going to set the DNS server to the localhost IP because this server will be running the DNS service itself. Finally we’ll set a hostname, turn off the local firewall and then reboot once again.

IP: 192.168.20.20
SNM: 255.255.255.0
GW: 192.168.20.1
DNS: 127.0.0.1
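If you’d rather script these settings than click through the GUI, here’s a PowerShell sketch run from an elevated prompt inside the VM. The adapter alias “Ethernet” (the 2012 R2 default) and the hostname “dc01” are my assumptions, so adjust them to match your build:

# Static IP, subnet mask and gateway (assumes the adapter is named "Ethernet")
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.20.20 -PrefixLength 24 -DefaultGateway 192.168.20.1
# Point DNS at localhost, since this box will run the DNS role itself
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 127.0.0.1
# Disable the local firewall on all profiles
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False
# Allow Remote Desktop connections
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections -Value 0
# Rename the host ("dc01" is just an example) and reboot
Rename-Computer -NewName "dc01" -Restart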

After the server is on the network with the correct details, we will enable the ability to remotely manage it with a Remote Desktop Client and then add the “Active Directory Domain Services” and “DNS Server” roles. As we step through this wizard we will create a new forest with the domain name of “labguides.local” and configure a Directory Services Restore Mode password.
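For those who prefer the console, the role installation and forest creation have a scriptable equivalent too. A hedged sketch using the built-in cmdlets, with the same domain name as the wizard (Install-ADDSForest prompts for the Directory Services Restore Mode password and reboots the server when it finishes):

# Add the AD DS and DNS roles along with their management tools
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
# Promote the server to a new forest named labguides.local
Install-ADDSForest -DomainName "labguides.local" -InstallDns -SafeModeAdministratorPassword (Read-Host "DSRM password" -AsSecureString)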

Finally, once the wizard is over and the server has rebooted, you can log in to the domain with the original Administrator password that was created upon first boot. If you’d like to set your domain up exactly the same as mine, you can grab the script export from my build here.

If you need more information, watch the video for a detailed guide on how to accomplish these tasks.

Home Lab Tidbits #2

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab?

This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

  • It looks like ServeTheHome has their hands on one of the new Xeon D SoC motherboards from Supermicro and loaded up ESXi 6.
  • Intel have released some pretty cheap Celeron SoCs, which might be useful for those with heat / power constraints (4W-6W).
  • Benchmarks for the Xeon D-1540 are starting to emerge, and it’s looking pretty good for the price.

Storage

  • The storage wars are heating up again, with Western Digital and Toshiba talking about 10TB SSDs.
  • With huge capacity SSDs, we’ll also need faster small ones, like Intel’s new SSD 750 PCIe drive.
  • Buffalo have launched a new NAS series based on the Atom, but this NAS from Thecus wins the cool vote with a built-in UPS.
  • Some seriously cheap NASs are cropping up. Last HLT it was the Synology DS115j; now QNAP have the TS-112P at the ~$100 price point.
  • Samsung have released their popular 850 EVO line in the mSATA form factor.
  • OCZ are back, now under Toshiba’s control, with the Vector 180.

Network

  • No news this time.

Software

  • Can’t afford to build your own lab? You could always run up a nested environment on vCloud Air for ~$15 a month. Or there’s always AutoLab on your laptop :)

Other

  • While there are plenty of racks available, the StarTech 42U is certainly easier to move than most.

Home Lab Build – ESXi 6 w/ VSAN

As part of documenting my home lab (re)build, today I’m going to build an ESXi 6 server and then bootstrap VSAN using a single host’s local disks. If you’re following along with my Home Lab Build series, we’re building the first ESXi host in the diagram.

[Lab overview diagram]

So why ESXi 6? Well, we want to host some VMs, we want to use just local storage, but we want it to be stable and have the ability to run nested ESXi VMs on top. Using VMware Virtual SAN on a single host provides no data redundancy, so you’ll want to keep that in mind if you decide to go this route. It’s an unsupported configuration, but (in my opinion) really useful in a home lab environment.

First off we’ll wipe the local disks, then we’ll install ESXi 6, set a root password and set up the management network. Once it’s on the network we’ll install the vSphere Desktop Client and configure NTP and SSH. Finally we’ll configure VSAN to use the local disks of this single host. So, let’s get into it.

We’re going to mount the Gnome Partition Editor ISO to give us the ability to wipe the local disks of any existing partition information. This is required when configuring VSAN as it expects blank disks.

Once GParted is loaded we can select each disk and ensure no existing partitions remain. In the video below I initially forgot that we need to create a blank partition table before rebooting the host. Create a new partition table by selecting Device -> Create Partition Table, leave the table type as “msdos” and click Apply. You’ll need to repeat this task for each disk to be used by VSAN.
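If you prefer a terminal, the GParted live environment also ships with parted, so the same blank label can be written from a shell. A sketch, assuming the disk is /dev/sdb (check with lsblk first, and repeat per disk):

# Write an empty msdos partition table to the disk (this destroys any existing data)
sudo parted --script /dev/sdb mklabel msdos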

Once the disks have a blank partition table you can install ESXi 6 as normal. I won’t document that here, as it’s a fairly basic process and it’s included in the video below. Once ESXi is installed, set up the management network and install the new version of the vSphere Desktop Client (browse to the IP of your ESXi host for details). We need SSH / CLI access to be able to bootstrap VSAN, so enable SSH in the vSphere Desktop Client by going to Configuration -> Security Profile -> Services Properties -> SSH -> Options -> Start.
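If you have console access, the same thing can be done without the client. A small sketch using the ESXi Shell (available from the DCUI):

# Start the SSH service now, and keep it enabled across reboots
vim-cmd hostsvc/start_ssh
vim-cmd hostsvc/enable_ssh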

I first heard about enabling VSAN on a single host from William Lam’s post. He’s using it to get vCenter up and running without the need for shared storage, so we’re using it slightly differently, but the concept is the same. He’s also got a post on using USB disks in VSAN.

Once logged into the CLI via SSH or the DCUI, run the following commands to set a default VSAN policy to work with only 1 host:

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

Now that the default policy will work with a single host, build a new VSAN “cluster”:

esxcli vsan cluster new

Finally, add your SSD and magnetic/SSD capacity disks to the new cluster. You can get the SSD-DISK-ID and HDD-DISK-ID either from the UI (Configuration -> Storage -> Devices -> Right Click -> Copy Identifier to Clipboard) or from the CLI (esxcli storage core device list):

esxcli vsan storage add -s SSD-DISK-ID -d HDD-DISK-ID

You’ll now have a new VSAN datastore mounted on the local ESXi host. Remember, this datastore is not redundant so use caution.
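Before moving on, it’s worth sanity-checking the bootstrap from the same SSH session. A couple of read-only commands to confirm everything took:

# Confirm this host formed the single-node cluster
esxcli vsan cluster get
# List the SSD and capacity disks that were just claimed
esxcli vsan storage list
# The new vsanDatastore should now appear among the mounted filesystems
df -h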

Next in the series we’ll go through building an AD controller to be used for the lab’s DNS, NTP and Directory Services.

Home Lab Build

Today I want to introduce something I’ve been wanting to do for a while: a step-by-step, video based home lab build. This will be the first in a series where I’ll take you through this new home lab build out so you can follow along if you like. Let’s start out with the gear.

I have 2 primary systems in my home lab that are identical and based on the Intel Xeon E5-1620. While they have plenty of power for what I need (they’re quad core, 3.6 GHz CPUs), they do use a considerable amount of power, being rated at 130 watts. The CPUs are coupled with 64GB of RAM per system, which is probably the biggest limit in my lab today. The RAM is a little older and non-ECC; while it was fine when I first got these systems a couple of years ago, it needs replacing.

I use the Supermicro X9SRH-7TF motherboard, which supports up to 512GB if you get the right type. For me, this board provided 2 great things: first, lots of memory support; secondly, onboard 10GbE ports. I hook both of these systems together with the cheapest 10GbE switch I could find, the Netgear XS708E. It’s not fancy, but it pushes packets over copper fast. The systems are housed in the super quiet and minimalist Fractal R4 case. Let’s move on to the layout of the lab I’m going to (re)build.

[Lab overview diagram]

I’ve quickly drawn up how my home network is set up today and how I’m going to connect it through to my home lab, probably using an NSX or vShield Edge. You can see I have 4 ESXi hosts: along with the 2 Supermicro based systems, I also have 2 small HP N36L MicroServers. I don’t have a use for them at this stage, but I’m sure I can find something along the way. Storage is both local, in the form of VSAN on the ESXi systems, and network based, on a Synology NAS. In the lower portion of the diagram you can see the 4 VMs that I’m going to build first: an AD box, a database server and then vCenter with an external PSC. As we go along I’ll add to this diagram anything I decide to include.

And if you have any comments or questions please reach out.

What’s New vSphere 6 vBrownBag

I presented a What’s New in vSphere 6 session on last week’s US vBrownBag podcast.
Below is a copy of the recording and deck for those interested.

Home Lab Tidbits #1

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this emerging market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab? This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

  • ASRock have a new motherboard, the EPC612D8A-TB, which may well rival the home lab favorites from Supermicro.
  • Intel have announced their SoC for the enterprise, the Xeon D. Let’s hope some inexpensive home lab components are the result.
  • Wanting to build a new system to host lots and lots of storage? The Haswell-E based ASRock X99 has 18 on-board SATA ports.

Storage

  • In the market for a new NAS? SmallNetBuilder has a guide that will help you decide which is best for you.
  • Both Asustor and Western Digital have a couple of new NAS units with decent performance.
  • ServeTheHome benchmarked the inexpensive 240GB Intel DC S3500 SSD and the considerably more expensive 1.6TB Toshiba PX03SNB160.
  • If you want some SLC enterprise-class storage, Hitachi have a 100GB SSD with great sequential read and write performance.
  • Synology keeps their entry bar low with the DS115j.

Network

  • Netgear have a competitor in the cheap 10GbE switch segment: ZyXEL have a new 12-port 10GbE switch, the XS1920-12.

Software

  • NAS4Free is pretty easy to get up and running in 5 minutes if you want to roll your own shared storage.
  • A little product called vSphere 6 went GA this week.

Upgrading vCenter 5.5 -> 6

Earlier this year I recorded a video for the VMUG 2015 Virtual Event. As is often the case with online webinar platforms, the quality of the recording wasn’t as good as we’ve come to expect with the prevalence of online video these days. So, I posted the video (embedded below) to my YouTube channel, just like I did last year. Since that recording I’ve learned a few things, thanks to a couple of my colleagues, that I want to point out.

Firstly, I mentioned that VSAN is using the VMware-acquired Virsto filesystem. This is incorrect. While VSAN in vSphere 6 does have improved sparseness and caching capabilities, it’s not using Virsto. There was also mention of the 256 datastore limitation being removed by VVOLs; this is also incorrect.

Secondly, and much more exciting, VMware have announced the long-awaited Windows vCenter to Linux vCenter Virtual Appliance Fling. My buddy William Lam (of www.virtuallyghetto.com fame) is pretty excited about this one! I thought it particularly relevant for those watching this video, as I had a number of questions at the Virtual Event around this very topic. So head over and grab the fling. I might just do another video of what it looks like to migrate the AutoLab vCenter to a vCSA!

Synology DSM Virtual Machine

Introduction

Synology have made a name for themselves over the past few years as one of the preferred home lab NAS solutions. In particular, they were one of the first consumer vendors to support VAAI in VMware environments and also one of the first to support SSD caching. If you ever wanted to check out their DSM 5.0 software without purchasing any hardware, the following outlines how you can spin it up in a VM for testing purposes. Obviously this is not going to give you an exact comparison to hardware, but it’s a great way to test it in your lab. If you want to skip the “build” section and just spin one of these up right away, simply download and import this OVF file to your virtual environment and move on to “Installing the DSM software”.

How to Build the DSM Virtual Machine

There are a number of quick steps that you’ll need to perform to be able to spin up your own DSM VM. First of all, you will want a couple of bits of software: WinImage (to edit the boot image), the StarWind V2V Converter (to turn the IMG into a VMDK), the Nanoboot boot image and the Synology DSM installation file.

Once you have all the tools, you need to modify the Nanoboot image’s boot loader.

  1. Using WinImage, open the Nanoboot file and find syslinux.cfg
  2. Extract and edit the syslinux.cfg file; find the lines that start with “kernel /ZImage” and add the following to the end of each line: rmmod ata_piix
  3. Save the cfg file and inject it back into the Nanoboot image, overwriting the existing file.
  4. Next, use StarWind to convert the Nanoboot IMG file to an IDE pre-allocated VMDK.
  5. Create a new VM and attach this VMDK as an “existing hard disk” on IDE 0:0.
  6. Set the disk to “independent non-persistent”, then continue with the “Installing the DSM software” steps below.

Installing the DSM software

After you have a working bootable VM that emulates Synology hardware, it’s time to install the DSM software itself. You can add additional hardware to the VM at any time after this point (SCSI disks, NICs, vCPU or memory).

    1. Add SCSI based virtual hard disks to the VM, sized to however much space you would like available in the virtual NAS
    2. Attach the network card in the VM to the correct network. DHCP must be enabled on the network.
    3. Power on the VM
    4. Select the 3rd option in the boot menu labeled “Upgrade / Downgrade”
    5. Once an IP is shown on the console, browse to that address in a web browser
    6. Follow the onscreen instructions to complete the installation wizard with the following options:
      • Install the DSM version from disk (DS214, DSM 5.0-4482)
      • Do not create a Synology Hybrid volume
    7. After some time the VM will reboot, and then power off.
    8. Power the VM back on and you will have a working Synology DSM Virtual Machine.

The guys at xpenology.com have a whole site dedicated to this stuff.

Always Invest in Power

I had a bit of a problem over the last 8-9 months with 2 Supermicro X9SRH-7TF systems and the cheap and cheerful 10GbE Netgear XS708E switch. Somewhat sporadically (weeks would go by), the onboard X540 based 10GbE NICs would disconnect, and the motherboard would require a full power cut (PSU switched off and on) before they would re-link. If you follow me on Twitter you will have heard numerous rants.

To begin with, I thought it was just one of the boards, so it was sent back to Supermicro (no fault found), but then the other one started doing it too. Fast forward a few months and the servers were moved from 220V power to 110V (an Australia -> USA move)… suddenly the problem disappeared. I thought it must have been some dodgy voltage step-down issue, but I didn’t think much more of it. I was simply happy the issue had been resolved and that HA wasn’t working overtime restarting failed VMs.

A couple of weeks ago, the servers were each upgraded with an Icy Dock MB994SP-4S to enable VSAN. Some older HP 10K 146GB SAS drives (alongside Samsung EVO SSDs) were used with the onboard LSI 2308. As soon as the servers were powered up, the NICs once again started disconnecting. At once I knew it must be a power issue, so I upgraded the PSUs in both servers. On paper the originals (430W) should have been enough for the total power draw, but I now have plenty more in reserve (750W).

The moral of the story? Power supplies are a relatively cheap component of your home lab system, so get one with a little more headroom than your current needs.