What’s New vSphere 6 vBrownBag

I presented a What’s New in vSphere 6 presentation on last week’s US vBrownBag podcast.
Below is a copy of the recording and deck for those interested.

Home Lab Tidbits #1

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this emerging market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab? This is an experimental aggregation post that may or may not continue, depending on feedback.


  • ASRock have a new motherboard, the EPC612D8A-TB, which may well rival the home lab favorites from Supermicro.
  • Intel have announced their SoC for the enterprise, the Xeon D. Let's hope some inexpensive home lab components are the result.
  • Want to build a new system to host lots and lots of storage? The Haswell-E-based ASRock X99 has 18 onboard SATA ports.


  • In the market for a new NAS? SmallNetBuilder has a guide that will help you decide which is best for you.
  • Both Asustor and Western Digital have a couple of new NAS units with decent performance.
  • ServeTheHome benchmarked the inexpensive 240GB Intel DC S3500 SSD and the considerably more expensive 1.6TB Toshiba PX03SNB160.
  • If you want some SLC enterprise class storage, Hitachi have a 100GB SSD that has great read and write sequential performance.
  • Synology keeps their entry bar low with the DS115j.


  • Netgear have a competitor in the cheap 10GbE switch segment: ZyXel have released a new 12-port 10GbE switch, the XS1920-12.


  • NAS4Free is pretty easy to get up and running in 5 minutes if you want to roll your own shared storage.
  • A little product called vSphere 6 went GA this week.

Upgrading vCenter 5.5 -> 6

Earlier this year I recorded a video for the VMUG 2015 Virtual Event. As is often the case with online webinar platforms, the quality of the recording wasn’t as good as we’ve come to expect with the prevalence of online video these days. So, I posted the video (embedded below) to my YouTube channel, just like I did last year. Since that recording I learned a few things thanks to a couple of my colleagues that I want to point out.

Firstly, I mentioned that VSAN is using the VMware acquired Virsto file-system. This is incorrect. While VSAN in vSphere 6 does have improved sparseness and caching capabilities it’s not using Virsto. There is also mention of the 256 datastore limitation being removed by VVOLs, this is also incorrect.

Secondly, and much more exciting, VMware have announced the long-awaited Windows vCenter to Linux vCenter Virtual Appliance Fling. My buddy William Lam (of www.virtuallyghetto.com fame) is pretty excited about this one! I thought it particularly relevant for those watching this video, as I had a number of questions at the Virtual Event on this very topic. So head over and grab the Fling. I might just do another video of what it looks like to migrate the AutoLab vCenter to a vCSA!

AutoLab Version 2.0 Released

It has been a while, but the day has arrived. AutoLab version 2.0 is available for download. This version doesn’t support a new vSphere release since VMware hasn’t shipped one. AutoLab 2.0 is more of a maintenance and usability release.

The biggest feature is added support for Windows Server 2012 R2 as the platform for the domain controller and vCenter VMs. Naturally, you should make sure the version of vSphere you deploy is supported on top of the version of Windows Server you use.

I have also removed the tagged VLANs which makes it easier to run multiple AutoLab instances on one ESXi server or to extend one AutoLab across a couple of physical servers if you only have smaller machines.

I’ve also added the ability to customize the password for the administrator accounts, which helps lock down an AutoLab environment.

Go ahead and download the new build from the usual download page and get stuck in. If you haven’t used AutoLab before make sure to read the deployment guide.

Synology DSM Virtual Machine


Synology have made a name for themselves over the past few years as one of the preferred home lab NAS solutions. In particular, they were one of the first consumer vendors to support VAAI in VMware environments and also one of the first to support SSD caching. If you ever wanted to check out their DSM 5.0 software without purchasing any hardware, the following outlines how you can spin up the DSM software in a VM for testing purposes. Obviously this is not going to give you an exact comparison to the hardware, but it’s a great way to test it in your lab. If you want to skip the “build” section and just spin one of these up right away, simply download and import this OVF file into your virtual environment and move on to “Installing the DSM software”.

How to Build the DSM Virtual Machine

There are a number of quick steps that you’ll need to perform to be able to spin up your own DSM VM. First of all, you will want a couple of bits of software:

  • WinImage, to edit the nanoboot boot image
  • StarWind V2V Converter, to convert the IMG file to a VMDK
  • A copy of the nanoboot image and the DSM installation file

Once you have all the tools, you need to modify the nanoboot image’s boot loader.

  1. Using WinImage, open the nanoboot image and find syslinux.cfg.
  2. Extract and edit the syslinux.cfg file: find the lines that start with “kernel /ZImage” and add the following to the end of each line: rmmod ata_piix
  3. Save the cfg file and inject it back into the nanoboot image, overwriting the existing file.
  4. Next, use StarWind to convert the nanoboot IMG file to an IDE pre-allocated VMDK.
  5. Create a new VM and attach this VMDK as an “existing hard disk” on IDE 0:0.
  6. Set the disk to “independent non-persistent”, then continue with “Installing the DSM software” below.
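For reference, the disk settings from steps 5 and 6 end up as entries like these in the VM’s .vmx file. This is a sketch; the VMDK file name here is a placeholder for whatever StarWind produced:

```
ide0:0.present = "TRUE"
ide0:0.fileName = "nanoboot.vmdk"
ide0:0.mode = "independent-nonpersistent"
```

The independent non-persistent mode means the boot image is returned to a pristine state on every power cycle, so a failed DSM install attempt never corrupts your boot loader.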

Installing the DSM software

After you have a working bootable VM that emulates Synology hardware, it’s time to install the DSM software itself. You can add additional hardware to the VM at any time after this point (SCSI disks, NICs, vCPUs, or memory).

    1. Add SCSI-based virtual hard disks to the VM, sized for how much space you would like available on the virtual NAS.
    2. Attach the network card in the VM to the correct network. DHCP must be enabled on that network.
    3. Power on the VM.
    4. Select the third option in the boot menu, labeled “Upgrade / Downgrade”.
    5. Once the IP is shown, browse to the IP address listed on the console.
    6. Follow the onscreen instructions to complete the installation wizard with the following options:
      • Install the DSM version from disk (SD214 DSM 5.0 4482)
      • Do not create a Synology Hybrid volume
    7. After some time the VM will reboot, and then power off.
    8. Power the VM back on and you will have a working Synology DSM Virtual Machine.

The guys at xpenology.com have a whole site dedicated to this stuff.

Always Invest in Power

I had a bit of a problem over the last 8-9 months with two Supermicro X9SRH-7TF systems and the cheap and cheerful 10GbE Netgear XS708E switch. Somewhat sporadically (weeks would go by), the onboard X540-based 10GbE NICs would disconnect, and the motherboard would require a full power cut (PSU switched off and on) before they would re-link. If you follow me on Twitter you would have seen numerous rants.

To begin with, I thought it was just one of the boards, so it was sent back to Supermicro (no fault found), but then the other one started doing it too. Fast forward a few months and the servers were moved from 220V power to 110V (an Australia to USA move)… suddenly the problem disappeared. I thought it must have been some dodgy voltage step-down issue, but I didn’t think much more of it. I was simply happy the issue had been resolved and HA wasn’t working overtime restarting failed VMs.

A couple of weeks ago, the servers were each upgraded with an Icy Dock MB994SP-4S to enable VSAN. Some older HP 10K 146GB SAS drives (alongside Samsung EVO SSDs) were used with the onboard LSI 2308. As soon as the servers were powered up, the NICs once again started disconnecting. At once I knew it must be a power issue, and I upgraded the PSUs in both servers. On paper, the total power draw said the originals (430W) should have been enough, but I now have plenty more in reserve (750W).

The moral of the story? Power Supplies are a relatively cheap component within your home lab system, so get one a little better than your current needs.
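As a back-of-the-envelope check, you can tally estimated component draws against a PSU’s rating before buying. The wattage figures below are illustrative assumptions for a build like the one described above, not measured values:

```python
# Rough PSU headroom check. All component draws are assumed,
# illustrative peak figures in watts, not measurements.
draws = {
    "CPU (peak)": 130,
    "motherboard + onboard 10GbE": 60,
    "RAM": 24,
    "4x 10K SAS drives": 40,
    "SSDs and fans": 20,
}

total = sum(draws.values())  # estimated peak load in watts

def headroom(psu_watts):
    """Spare capacity as a percentage of the PSU rating."""
    return (psu_watts - total) / psu_watts * 100

for psu in (430, 750):
    print(f"{psu}W PSU: {total}W load, {headroom(psu):.0f}% headroom")
```

On these numbers even a 430W unit looks comfortable, which is exactly the trap: nameplate arithmetic doesn’t capture transient spikes, ageing capacitors, or sagging rails under burst load, so buy more headroom than the spreadsheet says you need.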

Lab Hardware Guides – Q3 2014

A number of you have asked for recommendations on hardware for use in a home lab, so we created a page that will get updated on a regular basis.
This is by no means the only hardware that can be used; please use the comments below to share what hardware you run in your home lab.

Here is the “LabGuides Recommended Home Lab Hardware” for Q3 2014!

$1,000 Price Point

Component | Model | Price | Notes
CPU | Intel i5-4460 | $189.99 | Quad-core, 3.2GHz
Motherboard | Supermicro MBD-C7Z87-O | $198.99 | Dual Intel onboard NICs
Memory | G.Skill F3-1600C7Q-32GTX | $329.99 | CAS latency 7, lifetime warranty
Storage | Intel 530 Series 240GB | $159.99 | THE standard
PSU | Corsair CX600M | $69.99 | Solid PSU
Case | Fractal Design R4 | $99.99 | Quiet, plenty of expansion
TOTAL | | $1,048.94 |

$2,000 Price Point

Component | Model | Price | Notes
CPU | Intel E5-1620 v2 | $309.99 | Quad-core, 3.7GHz
Motherboard | Supermicro X9SRH-7TF | $528.00 | Dual 10GbE
Memory | Kingston KVR16R11D4K4/64I | $649.99 | 64GB ECC
Storage | 2x Intel 530 240GB | $319.98 | THE standard
PSU | Corsair CX750M | $79.99 | Good 12V rail performance
Case | Fractal Design R4 | $99.99 | Quiet, plenty of expansion
TOTAL | | $1,987.94 |
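A quick way to keep parts lists honest is to let code do the adding. This sketch totals the component prices listed for the two builds:

```python
# Sum the listed component prices for each build.
build_1k = {
    "CPU": 189.99, "Motherboard": 198.99, "Memory": 329.99,
    "Storage": 159.99, "PSU": 69.99, "Case": 99.99,
}
build_2k = {
    "CPU": 309.99, "Motherboard": 528.00, "Memory": 649.99,
    "Storage": 319.98, "PSU": 79.99, "Case": 99.99,
}

for name, parts in (("$1,000 build", build_1k), ("$2,000 build", build_2k)):
    print(f"{name}: ${sum(parts.values()):,.2f}")
```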

AutoLab Videos

The last few weeks I’ve been busy making videos. There are now videos covering the setup of all the supported outer virtualization platforms, as well as a video that looks at populating the build share. Here are links to the videos:

VMware Workstation setup

VMware Player setup

VMware ESXi setup

VMware Fusion Setup

VMware Fusion, Building VMs

Populating the Build share

Another great set of AutoLab videos is made by Hersey Cartwright; you can find the first one here, with all the other videos linked.

In addition I’ve been making videos for my new site, Notes4Engineers. This site has brief videos about designing, building and operating IT Infrastructure. And while we’re talking I will mention that I’m now delivering my own workshops, there’s more detail about this change on my blog.

Installing vCenter Server in a Linked Mode Group

The following is an excerpt from my book Mastering VMware vSphere 5.5, more of which you can read on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

What is a vCenter linked mode group, and why might you want to install multiple instances of vCenter Server into such a group? If you need more ESXi hosts or more VMs than a single vCenter Server instance can handle, or if you need more than one instance of vCenter Server, you can install multiple instances of vCenter Server to scale outward or sideways and have those instances share licensing and permission information. These multiple instances of vCenter Server that share information among them are referred to as a linked mode group. In a linked mode environment, there are multiple vCenter Server instances, and each of the instances has its own set of hosts, clusters, and VMs.

vCenter Server linked mode uses Microsoft ADAM to replicate the following information between the instances:

  • Connection information (IP addresses and ports)
  • Certificates and thumbprints
  • Licensing information
  • User roles and permissions

There are a few different reasons why you might need multiple vCenter Server instances running in a linked mode group. With vCenter Server 4.0, one common reason was the size of the environment. With the dramatic increases in capacity incorporated into vCenter Server 4.1 and above, the need for multiple vCenter Server instances due to size will likely decrease. However, you might still use multiple vCenter Server instances. You might prefer to deploy multiple vCenter Server instances in a linked mode group to accommodate organizational or geographic constraints, for example.

Item | vCenter Server 4.0 | vCenter Server 4.1 | vCenter Server 5.0 to 5.5
ESXi hosts per vCenter Server instance | 200 | 1000 | 1000
VMs per vCenter Server instance | 3000 | 10000 | 10000

Before you install additional vCenter Server instances, you must verify the following prerequisites:

  • All computers that will run vCenter Server in a linked mode group must be members of a domain. The servers can exist in different domains only if a two-way trust relationship exists between the domains.
  • DNS must be operational. Also, the DNS name of the servers must match the server name.
  • The servers that will run vCenter Server cannot be domain controllers or terminal servers.
  • You cannot combine vCenter Server 5 instances in a linked mode group with earlier versions of vCenter Server.
  • vCenter Server instances in linked mode must be connected to a single SSO server, a two-node SSO cluster, or two nodes in multisite mode.
  • Windows vCenter is required. Linked mode is not supported with the Linux-based vCenter virtual appliance.

Each vCenter Server instance must have its own backend database, and each database must be configured as outlined earlier with the correct permissions. The databases can all reside on the same database server, or each database can reside on its own database server.
After you have met the prerequisites, installing vCenter Server in a linked mode group is straightforward. You follow the steps outlined previously in “Installing vCenter Server” until you get to step 10. In the previous instructions, you installed vCenter Server as a stand-alone instance in step 10. This sets up a master ADAM instance that vCenter Server uses to store its configuration information.
This time, however, at step 10 you simply select the option Join A VMware vCenter Server Group Using Linked Mode To Share Information. When you select to install into a linked mode group, the next screen also prompts for the name and port number of a remote vCenter Server instance. The new vCenter Server instance uses this information to replicate data from the existing server’s ADAM repository. After you’ve provided the information to connect to a remote vCenter Server instance, the rest of the installation follows the same steps.

You can also change the linked mode configuration after the installation of vCenter Server. For example, if you install an instance of vCenter Server and then realize you need to create a linked mode group, you can use the vCenter Server Linked Mode Configuration icon on the Start menu to change the configuration.
Perform the following steps to join an existing vCenter Server installation to a linked mode group:


  1. Log into the vCenter Server computer as an administrative user, and run vCenter Server Linked Mode Configuration from the Start menu.
  2. Click Next at the Welcome To The Installation Wizard For VMware vCenter Server screen.
  3. Select Modify Linked Mode Configuration, and click Next.
  4. To join an existing linked mode group, select “Join a VMware vCenter Server group using
    Linked Mode to share information,” and click Next. This is shown in Figure 3.10.
  5. A warning appears reminding you that you cannot join vCenter Server 5.5 with older versions of vCenter Server. Click OK.
  6. Supply the name of the server and the LDAP port. Specify the server name as a fully qualified domain name. It’s generally not necessary to modify the LDAP port unless you know that the other vCenter Server instance is running on a port other than the standard port. Click Next to continue.
  7. Click Continue to proceed.
  8. Click Finish.

Using this same process, you can also remove an existing vCenter Server installation from a linked mode group.

After the additional vCenter Server is up and running in the linked mode group, logging in via the vSphere Client displays all the linked vCenter Server instances in the inventory view, as you can see in Figure 3.11.

One quick note about linked mode: While the licensing and permissions are shared among all the linked mode group members, each vCenter Server instance is managed separately, and each vCenter Server instance represents a vMotion domain by virtue of each vCenter Server having unique datacenter objects that ultimately represent a vMotion boundary. This means that you can’t perform a vMotion migration between vCenter Server instances in a linked mode group. We’ll discuss vMotion in detail in Chapter 12.

Installing vCenter Server onto a Windows Server–based computer, though, is only one of the options available for getting vCenter Server running in your environment. For those environments that don’t need linked mode support, or environments for which you want a full-featured virtual appliance with all the necessary network services, the vCenter Server virtual appliance is a good option. We’ll discuss the vCenter Server virtual appliance in the next section.

Reconfiguring the ESXi Management Network

The following is an excerpt from my book Mastering VMware vSphere 5.5, more of which you can read on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

During the installation of ESXi, the installer creates a virtual switch—also known as a vSwitch—bound to a physical NIC. The tricky part, depending on your server hardware, is that the installer might select a different physical NIC than the one you need for correct network connectivity. Consider the scenario depicted in Figure 2.12. If, for whatever reason, the ESXi installer doesn’t link the correct physical NIC to the vSwitch it creates, then you won’t have network connectivity to that host. We’ll talk more about why ESXi’s network connectivity must be configured with the correct NIC in Chapter 5, but for now just understand that this is a requirement for connectivity. Since you need network connectivity to manage the host from the vSphere Client, how do you fix this?

Change ESXi Management Network

The simplest fix for this problem is to unplug the network cable from the current Ethernet port in the back of the server and keep trying the remaining ports until the host is accessible, but that’s not always possible or desirable. The better way is to use the DCUI to reconfigure the management network so that it is configured the way you need it to be.

Perform the following steps to fix the management NIC in ESXi using the DCUI:

  1. Access the console of the ESXi host, either physically or via a remote console solution such as an IP-based KVM.
  2. On the ESXi home screen, shown in Figure 2.13, press F2 for Customize System/View Logs. If a root password has been set, enter that root password.
  3. From the System Customization menu, select Configure Management Network, and press Enter.
  4. From the Configure Management Network menu, select Network Adapters, and press Enter.
  5. Use the spacebar to toggle which network adapter or adapters will be used for the system’s management network, as shown in Figure 2.14. Press Enter when finished.
  6. Press Esc to exit the Configure Management Network menu. When prompted to apply changes and restart the management network, press Y. After the correct NIC has been assigned to the ESXi management network, the System Customization menu provides a Test Management Network option to verify network connectivity.
  7. Press Esc to log out of the System Customization menu and return to the ESXi home screen.

The other options within the DCUI for troubleshooting management network issues are covered in detail within Chapter 5.

At this point, you should have management network connectivity to the ESXi host, and from here forward you can use the vSphere Client to perform other configuration tasks, such as configuring time synchronization and name resolution.