AutoLab YouTube Channel

We now have a YouTube Channel for AutoLab videos. You can find the channel here.

To start off, I’ve posted a video on setting up the networking under Fusion Pro 5 for the AutoLab, and another with the initial setup and populating of the build share on the same Mac, right up to building the domain controller VM.

I plan to put up more videos around the initial setup and builds, on Workstation, ESXi and VMware Player.

AutoLab V1.5 with vSphere 5.5 support

It is finally here, the long awaited AutoLab build with support for vSphere 5.5 is available for your downloading pleasure at the AutoLab home page.

The addition of v5.5 support is the headline item here, along with a fair bit of cleanup. The big change is that ESXi 5.5 wants 4GB of RAM before it will install. That’s not a problem if you have 16GB of RAM in your physical machine, but those still using 8GB will need to reduce each ESXi VM’s RAM to 2GB after it builds.
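If you want to make that change outside the GUI, the memsize line in each ESXi VM’s .vmx file can be edited while the VM is powered off. A minimal sketch; the /tmp path and the VMX_PATH variable are stand-ins, not part of AutoLab, so point it at your actual VM folder:

```shell
# Drop an ESXi VM's RAM to 2GB by editing memsize in its .vmx (VM powered off).
# /tmp/ESXi1.vmx is a stand-in path; set VMX_PATH to your real ESXi VM's .vmx file.
vmx="${VMX_PATH:-/tmp/ESXi1.vmx}"
[ -f "$vmx" ] || echo 'memsize = "4096"' > "$vmx"   # demo file so the sketch runs anywhere
sed -i 's/^memsize = .*/memsize = "2048"/' "$vmx"
grep '^memsize' "$vmx"
```

Repeat for each ESXi VM, then power them back on.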

vSphere 5.5 brings VSAN, so there are now three ESXi VMs in the lab. Each is configured with three disks: one for boot, one tagged as SSD, and the final one as a normal disk. To get VSAN configured you will need all three ESXi servers built, with their RAM increased to 6GB each (24GB physical RAM required); after that, VSAN deploys normally.
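For reference, the SSD tagging is done with a flag in each ESXi VM’s .vmx file; this is a documented Workstation 9 / Fusion 5 option. The sketch below assumes the SSD-tagged disk is scsi0:1, which may differ in your build:

```
# Mark the second SCSI disk as a virtual SSD so the ESXi guest detects it as SSD
# (supported in VMware Workstation 9 / Fusion 5 and later)
scsi0:1.virtualSSD = "1"
```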

I’ve also added support for View 5.2 &amp; 5.3, both of which shipped after I started the new build. SRM support is a work in progress; you will find signs of the work in this build, but it is still a little way off. The deployment guide has been reworked a little so its flow is clearer, and formatted to be prettier than the eBook format I experimented with.

I have a laundry list of fixes and enhancements that I will add to AutoLab in the coming months. Big thanks to Infinio for their support, which will enable me to take some time off to do this work. If you would like to contribute to AutoLab or have suggestions for making it easier, get in touch via feedback@labguides.com or contact me on Twitter @DemitasseNZ.

Have fun.

Installing ESXi – Disk-less, CD-less, Net-less

When installing some new lab hosts the other day, I had a bit of a situation. My hosts were in an isolated (new) network environment, they didn’t have CD/DVD drives, and they were diskless. So, how do I install ESXi on these new hosts?

By far the simplest way to get ESXi up and running is to burn the ISO to a CD and boot from the local CD drive, but that wasn’t an option. Another option may have been to use a remote management capability (iLO, DRAC, RSA, etc.), but these hosts didn’t have that capability either. I prefer to build my hosts via a network boot option, but as I stated, these were on a network without any existing infrastructure (no PXE boot service, DHCP services, name resolution, etc.).
However, there’s a really simple way to get ESXi installed in an environment like this…

Did you know that ESXi is (for the most part) hardware agnostic? The hypervisor comes prebuilt to run on most common hardware and doesn’t tie itself to a particular platform post-installation. I find the easiest way to get around the constraints listed above is to deploy onto USB storage from another machine and simply plug this USB device into the new server to boot. Usually, this “other machine” is a VM running under VMware Fusion or Workstation! Here are the five easy steps:

  1. Build a new “blank” VM and use the ESXi ISO file as the OS source so Fusion or Workstation can identify it as an ESXi build.
  2. As the VM boots up, make sure you connect the USB device to the individual VM.
  3. Install ESXi as normal, choosing the USB storage as the install location.
  4. Shut down the VM after the installer initiates the final reboot.
  5. Dismount / unplug the USB device from the “build” machine and plug it into the host you wish to boot.
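The “blank” build VM needs very little hardware, and no virtual disk at all, since the install target is the USB device. A sketch of the relevant .vmx entries, assuming ESXi 5.x as the guest; the key names are standard Workstation/Fusion options, and the ISO filename is an example:

```
guestOS = "vmkernel5"                    # tells Workstation/Fusion the guest is ESXi 5.x
memsize = "4096"                         # ESXi 5.5 wants 4GB to run the installer
usb.present = "TRUE"                     # USB controller so the target device can be attached
ehci.present = "TRUE"                    # USB 2.0 support
ide1:0.present = "TRUE"                  # CD drive backed by the ESXi installer ISO
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "ESXi-installer.iso"   # example name; point at your actual ISO
```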

Big news for AutoLab

It has been a bit quiet on the public side of AutoLab for a while, mostly because my day job has been getting in the way of developing the new version of AutoLab with vSphere 5.5 support. The new release is almost ready, some final testing and documentation updates so the next couple of weeks should see AutoLab 1.5 released.

[Infinio logo]

The big news for this week is that AutoLab has a new sponsor: my friends at Infinio are helping me spend more time developing AutoLab and generally making the project better. If you haven’t heard about Infinio, I suggest you take a look at their site. The basic idea of the product is that it can alleviate NFS storage performance issues on ESXi servers. One potential customer I saw last year was spending a lot of money on additional disk shelves for their aging NFS array just to get better performance; Infinio’s caching could have given them a much more graceful solution to the problem for less money.

Infinio have their own big news today: the release of Infinio Accelerator version 1.2, which adds support for ESXi 5.5. Of special interest is that they are giving away a limited number of licenses for home lab use. If your home lab is based around a low-end NAS with SATA disks as shared storage, you already know that this is a bottleneck; adding the Infinio accelerator will improve storage performance. You can enter the giveaway by filling in this form on Infinio’s site.

One of the things that makes me happy about the Infinio support is that we finally have a logo for AutoLab which you may see on shirts and buttons at future events.

[AutoLab logo]

Stay tuned for the AutoLab v1.5 release post.

Al.

Virtual VMUG – Upgrading and Mastering vSphere 5.5

Today I presented a session at the Virtual VMUG conference titled “Upgrading and Mastering VMware vSphere 5.5”. It was a session based around the upgrade process to vSphere 5.5 and a few little tricks for playing with vFRC and VSAN inside AutoLab. Thanks to all of you who attended; I was inundated with questions during and after the presentation. I’ll be sure to get your contact details and answer those I didn’t get to within the allotted timeframe.

As promised, below is the (higher quality) recording from the demo section of the presentation. The slide deck was pretty light on, but if people want it just comment below and I’ll upload that too.

OpenFiler Installation

VCP5 Blueprint Section

Objective 3.1 – Plan and Configure vSphere Storage

Activities

Boot and Install from the OpenFiler ISO
Configure and present a volume using iSCSI and NFS

Estimated Length

45 mins

Things to Watch

Make sure the partition table for OpenFiler is set up correctly
Pay close attention to the OpenFiler configuration in Part 2

Extra Credit

Identify the components of the Target IQN

Additional Reading

Part 1 – Installation

Create a new VM within VMware Workstation with the following settings:

  • VM Version 8.0
  • Disc Image: OpenFiler ISO
  • Guest OS: Other Linux 64-bit
  • VM Name: OpenFiler
  • 1x vCPU
  • 256MB RAM
  • Bridged Networking – VMnet3 (or appropriate for your lab)
  • I/O Controller: LSI Logic
  • Disk: SCSI 60GB

Edit the VM settings once it’s created and remove the following components:

  • Floppy
  • USB Controller
  • Sound Card
  • Printer

Power on the VM and hit Enter to start loading the OpenFiler installer.

Hit Next, select the appropriate language, and accept that “ALL DATA” will be removed from this drive.

Create a custom layout partition table with the following attributes:

  • /boot – 100MB primary partition
  • Swap – 2048MB primary partition
  • / – 2048MB primary partition
  • * remainder should be left as free space

The installer will complain about not having enough RAM and ask if it can turn on swap space. Hit Yes in this dialogue box.

Accept the default EXTLINUX boot loader and configure the networking as per the Lab Setup Convention.

Select the appropriate time zone for your location and on the next screen set the root password.

Once the password is entered, all configuration items are complete and installation will commence. Go make a coffee :)

After installation is complete the installer will ask to reboot. Allow this to happen and eventually a logon prompt will appear.

Part 2 of this Lab Guide will configure this newly built VM ready to be used as shared storage by your ESXi host VMs.

Part 2 – Configuration

Point your browser of choice at the OpenFiler VM IP address on port 446. You will be prompted to accept an unsigned certificate.

Once at the prompt login with the following details:

  • Username: openfiler
  • Password: password

You will be presented with the Status / System Information page.

Click on the blue “System” tab at the top and scroll down to the Network Access Configuration section. To allow the OpenFiler NAS to talk to the rest of our lab we need to configure an ACL. Add the following and then click the Update button below:

  • Name: lab.local
  • Network/Host: 192.168.199.0 (or appropriate for your lab)
  • Netmask: 255.255.255.0 (or appropriate for your lab)
  • Type: Share

Since the first type of storage we will be using within our lab environment will be iSCSI we need to enable this within OpenFiler.
Click on the blue “Services” tab, then enable and start the CIFS, NFS and iSCSI related services.

Click the next blue tab “Volumes”. In here we will be configuring the OpenFiler virtual disk to be shared to other hosts.
Since no physical volumes exist at this point, let’s create one now by clicking “create new physical volumes” and then on “/dev/sda” in the Edit Disk column.

This view shows the partition layout that was created in Part 1 – Installation: 4.1GB / 7% used space for the OpenFiler installation and around 56GB / 93% free.

Towards the bottom of this page, we are given the option to create new partitions. Usually in a production environment there would be RAID protection and hardware-based NAS or SAN equipment, but for the purposes of learning we have no need for expensive equipment; OpenFiler can present a single physical (or virtual) disk that will suit our needs.
For unknown reasons OpenFiler does not like partitions directly adjacent to the OS partitions, so create a new partition with the following settings:

  • Mode: Primary
  • Partition Type: Physical Volume
  • Starting cylinder: 600
  • Ending cylinder: 7750

Once the new partition is created, the new layout is presented showing the additional /dev/sda4 partition.

We now have a new Physical Volume that we can continue configuring for presentation to ESXi. The next step takes us back to the blue “Volumes” tab. Now that we have an accessible physical volume, OpenFiler gives us the option to add it to a “Volume Group Name”.
Create a Volume Group Name of: LABVG with “/dev/sda4” selected.

Now that the LAB Volume Group is created, we can create an actual volume that is allocated directly to presented storage (NFS, iSCSI etc).

In the right hand menu click the “Add Volume” link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABISCSIVOL
  • Volume Description: LAB.Local iSCSI Volume
  • Required Space (MB): 40960 (40GB)
  • Filesystem / Volume type: block (iSCSI,FC,etc)

The following screen is presented, which indicates the storage volume created is now allocated to Block Storage.

The next step to present the iSCSI block storage to ESXi is to set up a Target and LUN.
Still under the blue “Volumes” tab, click “iSCSI Targets” in the right hand menu and you will be presented with the “Add new iSCSI Target” option. Leave the Target IQN at its default and click the “Add” button.
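As an aside, this touches the “extra credit” item from the blueprint section above: the default OpenFiler IQN follows the iqn.&lt;date&gt;.&lt;reversed-domain&gt;:&lt;unique-name&gt; convention from the iSCSI standard. A sketch that splits an example IQN into its components; the tsn value here is made up, and yours will differ:

```shell
# Break an example OpenFiler target IQN into its components
iqn="iqn.2006-01.com.openfiler:tsn.45a03f6c1bf0"   # example IQN; the tsn value is made up
prefix=${iqn%%.*}                         # "iqn" - the iSCSI Qualified Name type
rest=${iqn#iqn.}
date=${rest%%.*}                          # "2006-01" - year-month the authority's domain was registered
naming=${rest#*.}; naming=${naming%%:*}   # "com.openfiler" - reversed domain of the naming authority
unique=${iqn#*:}                          # "tsn.45a03f6c1bf0" - target serial number, unique per target
echo "$prefix $date $naming $unique"
```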

Once the new iSCSI Target is created we need to allocate an actual LUN to this target. Click the grey “LUN Mapping” tab. All details for this volume can be left as-is; it is just a matter of clicking the “Map” button.

The very last iSCSI step is to allow our ESXi hosts network access to our newly presented storage. Click on the grey “Network ACL” tab and change the lab.local ACL to “allow” and click Update.

At this stage leave the CHAP Authentication settings alone, as we will discuss them in a later lab.
One last thing before closing out this lengthy lab… NFS. Along with iSCSI we also want to experiment with NFS, and we also need a place to locate our ISO repository. In production environments this is usually placed on cheaper storage, typically presented via a NAS as NFS or SMB. Let’s create an NFS partition and mount point while we’re here.
Within the blue Volumes tab, in the right hand menu click the “Add Volume” link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABNFSVOL
  • Volume Description: LAB.Local NFS Volume
  • Required Space (MB): 12512 (Max)
  • Filesystem / Volume type: Ext4

Now head to the blue “Shares” tab and click on “LAB.Local NFS Volume”. This prompts you for a Sub-folder name; call it Build and then click “Create Sub-folder”.

Now we want to click on this new Sub-folder and click the “Make Share” button.
The next page has three settings we need to change, and then we’re all done.
Add “Build” to the “Override SMB/Rsync share name” field. Change the “Share Access Control Mode” to “Public guest access”, then click the “Update” button.

Then change the “Host access configuration” to SMB = RW, NFS = RW, HTTP(S) = NO, FTP = NO, Rsync = NO, and tick the “Restart services” checkbox. Finally, click “Edit”, change “UID/GID Mapping” to “no_root_squash” and, for the last time, click the “Update” button.

Congratulations, you now have a functioning NAS virtual machine that presents virtual hard disks like any “real” NAS or SAN.
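Once OpenFiler is serving, the iSCSI target and NFS share can be consumed from an ESXi 5.x host’s command line as an alternative to the vSphere Client. A sketch only: the OpenFiler IP (192.168.199.7), the software iSCSI adapter name (vmhba33) and the Build export path are all assumptions to adjust for your lab, and the guard keeps the sketch harmless if run off an ESXi host:

```shell
# Attach the OpenFiler storage from an ESXi 5.x host's shell.
# Guard: esxcli only exists on an ESXi host, so do nothing elsewhere.
if command -v esxcli >/dev/null 2>&1; then
  # Point the software iSCSI adapter at the OpenFiler target (dynamic discovery)
  esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.199.7:3260
  esxcli storage core adapter rescan -A vmhba33
  # Mount the NFS Build share as a datastore; the export path is an assumption
  esxcli storage nfs add -H 192.168.199.7 -s /mnt/labvg/labnfsvol/Build -v Build
else
  echo "run this on an ESXi host"
fi
```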

Mastering vSphere 5.5

[Mastering vSphere 5.5 cover]

Just a quick note for those wanting a companion text for their AutoLab deployment…

I’ve been working on the latest revision of the bestselling Mastering VMware vSphere book for the last 9 months. While AutoLab isn’t updated for vSphere 5.5 yet, rest assured that Alastair and I will start work on it as soon as it goes GA next month.

Anyway, if you’re studying for that VCP5-DCV certification or just want a comprehensive practical guide for all the vSphere features, pick up a copy from your local or electronic store of choice!

Mastering VMware vSphere 5.5 is available on Amazon for pre-order now.