I gave a What’s New in vSphere 6 presentation on last week’s US vBrownBag podcast.
Below is a copy of the recording and deck for those interested.
Earlier this year I recorded a video for the VMUG 2015 Virtual Event. As is often the case with online webinar platforms, the quality of the recording wasn’t as good as we’ve come to expect given the prevalence of online video these days. So, I posted the video (embedded below) to my YouTube channel, just like I did last year. Since that recording, a couple of my colleagues have pointed out a few things I want to correct.
Firstly, I mentioned that VSAN uses the VMware-acquired Virsto file system. This is incorrect. While VSAN in vSphere 6 does have improved sparseness and caching capabilities, it’s not using Virsto. I also mentioned that VVOLs removes the 256-datastore limitation; this is incorrect too.
Secondly, and much more exciting, VMware have announced the long-awaited Windows vCenter to Linux vCenter Virtual Appliance Fling. My buddy William Lam (of www.virtuallyghetto.com fame) is pretty excited about this one! I thought it particularly relevant for those watching this video, as I had a number of questions at the Virtual Event on this very topic. So head over and grab the Fling. I might just do another video of what it looks like to migrate the AutoLab vCenter to a vCSA!
It has been a while, but the day has arrived. AutoLab version 2.0 is available for download. This version doesn’t support a new vSphere release since VMware hasn’t shipped one. AutoLab 2.0 is more of a maintenance and usability release.
The biggest feature is support for Windows Server 2012 R2 as the platform for the domain controller and vCenter VMs. Naturally, you should make sure the version of vSphere you deploy is supported on top of the version of Windows Server you use.
I have also removed the tagged VLANs, which makes it easier to run multiple AutoLab instances on one ESXi server or to extend one AutoLab across a couple of physical servers if you only have smaller machines.
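If you want to script the extra networking for a second instance, here’s a minimal sketch using pyVmomi that adds an untagged port group to an existing vSwitch; the host name, credentials, port group name and vSwitch name are all placeholders for your own environment.

```python
# Minimal sketch (pyVmomi): add an untagged port group for a second
# AutoLab instance to an existing vSwitch. Names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab host, self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    dc = si.content.rootFolder.childEntity[0]     # first datacenter
    host = dc.hostFolder.childEntity[0].host[0]   # first host in it
    spec = vim.host.PortGroup.Specification(
        name="AutoLab2",        # port group for the second instance
        vlanId=0,               # 0 = untagged; no VLAN tagging needed any more
        vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy())
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)
finally:
    Disconnect(si)
```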
I’ve also added the ability to customize the password for the administrator accounts, which helps lock down an AutoLab environment.
Go ahead and download the new build from the usual download page and get stuck in. If you haven’t used AutoLab before make sure to read the deployment guide.
Synology have made a name for themselves over the past few years as one of the preferred home lab NAS solutions. In particular, they were one of the first consumer vendors to support VAAI in VMware environments, and also one of the first to support SSD caching. If you ever wanted to check out their DSM 5.0 software without purchasing any hardware, the following outlines how you can spin up the DSM software in a VM for testing purposes. Obviously this is not going to give you an exact comparison to the hardware, but it’s a great way to test it in your lab. If you want to skip the “build” section and spin one up right away, simply download and import this OVF file into your virtual environment and move on to “Installing the DSM software”.
How to Build the DSM Virtual Machine
There are a number of quick steps you’ll need to perform to spin up your own DSM VM. First of all, you will want a couple of bits of software:
- The nanoboot boot image (IMG file)
- WinImage, to edit the boot image
- StarWind V2V Converter, to convert the IMG file to a VMDK
- The Synology DSM 5.0 installation file
Once you have all the tools, you need to modify the nanoboot image boot loader.
- Using WinImage, open the nanoboot file and find syslinux.cfg
- Extract and edit the syslinux.cfg file; find the lines that start with “kernel /zImage” and add the following to the end of each line:
- Save the cfg file and inject it back into the nanoboot image, overwriting the existing file.
- Next, use StarWind V2V Converter to convert the nanoboot IMG file to an IDE pre-allocated VMDK.
- Create a new VM and attach the converted VMDK as an “existing hard disk” on IDE 0:0.
- Set the disk to “independent non-persistent” (see the .vmx sketch below), then continue with “Installing the DSM software” below.
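For reference, here’s roughly what those last two steps end up looking like in the VM’s .vmx file; the VMDK file name is a placeholder for whatever you called the converted image:

```
ide0:0.present = "TRUE"
ide0:0.fileName = "nanoboot.vmdk"
ide0:0.mode = "independent-nonpersistent"
```

Independent non-persistent means any changes to the boot image are thrown away at power off, so the boot loader stays pristine every time the VM restarts.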
Installing the DSM software
After you have a working bootable VM that emulates Synology hardware, it’s time to install the DSM software itself. You can add additional hardware to the VM at any time after this point (SCSI disks, NICs, vCPUs or memory).
- Add SCSI-based virtual hard disks to the VM, sized for the amount of space you would like available in the virtual NAS (see the scripted example after this list)
- Attach the network card in the VM to the correct network. DHCP must be enabled on the network.
- Power on the VM
- Select the 3rd option in the boot menu labeled “Upgrade / Downgrade”
- Once an IP is shown, browse to the IP address listed on the console
- Follow the onscreen instructions to complete the installation wizard with the following options:
- Install the DSM version from disk (DS214 DSM 5.0-4482)
- Do not create a Synology Hybrid volume
- After some time the VM will reboot, and then power off.
- Power the VM back on and you will have a working Synology DSM Virtual Machine.
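If you’d rather script the data disks from the first step above than click through the UI, here’s a minimal pyVmomi sketch; it assumes you’ve already retrieved the VM object, that the VM has a SCSI controller, and that it currently has fewer than seven disks (unit 7 is reserved for the controller).

```python
# Minimal sketch (pyVmomi): add a thin-provisioned SCSI disk to the DSM VM.
from pyVmomi import vim

def add_scsi_disk(vm, size_gb):
    # Find the VM's existing SCSI controller and the next free unit number.
    controller = next(dev for dev in vm.config.hardware.device
                      if isinstance(dev, vim.vm.device.VirtualSCSIController))
    unit = len([dev for dev in vm.config.hardware.device
                if isinstance(dev, vim.vm.device.VirtualDisk)])

    disk = vim.vm.device.VirtualDisk(
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=True),
        controllerKey=controller.key,
        unitNumber=unit,
        capacityInKB=size_gb * 1024 * 1024)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    # Returns a task; DSM sees the new drive once it completes.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```

Call it as add_scsi_disk(vm, 50) for a 50GB disk, then build your volume in DSM as usual.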
The guys at xpenology.com have a whole site dedicated to this stuff.
I had a bit of a problem over the last 8-9 months with two Supermicro X9SRH-7TF systems and the cheap and cheerful 10G Netgear XS708E switch. Somewhat sporadically (weeks would go by), the onboard X540-based 10G NICs would disconnect, and the motherboard would require a full power cut (PSU switched off and on) before they would link up again. If you follow me on Twitter you will have heard numerous rants.
To begin with, I thought it was just one of the boards, so I sent it back to Supermicro (no fault found), but then the other one started doing it too. Fast forward a few months and the servers were moved from 220V to 110V power (an Australia -> USA move)… suddenly the problem disappeared. I assumed it had been some dodgy voltage step-down issue, but I didn’t think much more of it. I was simply happy the issue had been resolved and HA wasn’t working overtime restarting failed VMs.
A couple of weeks ago, the servers were each upgraded with an Icy Dock MB994SP-4S to enable VSAN, with some older HP 10K 146GB SAS drives (and Samsung EVO SSDs) hanging off the onboard LSI 2308. As soon as the servers were powered up, the NICs once again started disconnecting. This time I knew it must be a power issue, so I upgraded the PSUs in both servers. When I calculated the total power draw, the originals (430W) should have been enough, but I now have plenty more in reserve (750W).
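For the curious, that back-of-the-envelope calculation looks something like the sketch below; every wattage figure is an illustrative guess for components like mine, not a datasheet or measured value.

```python
# Rough PSU sizing sketch: sum estimated peak draw per component, add headroom.
# All wattages below are illustrative guesses, not datasheet values.
components = {
    "CPU (Xeon E5)": 130,
    "Motherboard + onboard 10G NICs": 60,
    "RAM": 40,
    "LSI 2308 HBA": 20,
    "4x 10K SAS drives": 60,   # spin-up draw is the worst case
    "2x SATA SSDs": 10,
    "Fans / misc": 30,
}

total = sum(components.values())
headroom = 1.3   # keep the PSU well below its rated maximum
print(f"Estimated peak draw: {total} W")
print(f"Suggested PSU rating: {total * headroom:.0f} W")
```

On numbers like these a 430W unit looks fine, which is presumably why the headline wattage alone wasn’t the whole story here; the quality of the 12V rail under load seems to matter just as much.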
The moral of the story? Power supplies are a relatively cheap component of your home lab, so get one a little better than your current needs.
A number of you have asked for recommendations on hardware for use in a home lab, so we created a page that will get updated on a regular basis.
This is by no means the only hardware that can be used; please use the comments below to share what hardware you run in your home lab.
Here is the “LabGuides Recommended Home Lab Hardware” for Q3 2014!
$1,000 Price Point
- Motherboard: dual Intel onboard NICs
- Memory: CAS latency 7, lifetime warranty
- SSD: 530 Series 240GB
- Case: quiet, with plenty of expansion
$2,000 Price Point
- SSDs: 2x 530 Series 240GB
- Power supply: good 12V rail performance
- Case: quiet, with plenty of expansion
The last few weeks I’ve been busy making videos. There are now setup videos covering all the supported outer virtualization platforms, as well as a video that looks at populating the build share. Here are links to the videos:
VMware Workstation setup
VMware Player setup
VMware ESXi setup
VMware Fusion setup
VMware Fusion, Building VMs
Populating the Build share
Another great set of AutoLab videos is made by Hersey Cartwright; you can find the first one here, with all the other videos linked.
In addition, I’ve been making videos for my new site, Notes4Engineers, which has brief videos about designing, building and operating IT infrastructure. And while we’re talking, I’ll mention that I’m now delivering my own workshops; there’s more detail about this change on my blog.