An Introduction to vSphere 5.5

The following is an excerpt from my book Mastering VMware vSphere 5.5; more excerpts will appear on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this software, download AutoLab.

Introducing VMware vSphere 5.5

Now in its fifth generation, VMware vSphere 5.5 builds on previous generations of VMware’s enterprise-grade virtualization products. vSphere 5.5 extends fine-grained resource allocation controls to more types of resources, enabling VMware administrators to have even greater control over how resources are allocated to and used by virtual workloads. With dynamic resource controls, high availability, unprecedented fault-tolerance features, distributed resource management, and backup tools included as part of the suite, IT administrators have all the tools they need to run an enterprise environment ranging from a few servers to thousands of servers.
Exploring VMware vSphere 5.5

The VMware vSphere product suite is a comprehensive collection of products and features that together provide a full array of enterprise virtualization functionality. The vSphere product suite includes the following products and features:

VMware ESXi

The core of the vSphere product suite is the hypervisor, which is the virtualization layer that serves as the foundation for the rest of the product line. In vSphere 5 and later, including vSphere 5.5, the hypervisor comes in the form of VMware ESXi.

VMware vCenter Server

Stop for a moment to think about your current network. Does it include Active Directory? There is a good chance it does. Now imagine your network without Active Directory, without the ease of a centralized management database, without the single sign-on capabilities, and without the simplicity of groups. That is what managing VMware ESXi hosts would be like without using VMware vCenter Server. Not a very pleasant thought, is it? Now calm yourself down, take a deep breath, and know that vCenter Server, like Active Directory, is meant to provide a centralized management platform and framework for all ESXi hosts and their respective VMs. vCenter Server allows IT administrators to deploy, manage, monitor, automate, and secure a virtual infrastructure in a centralized fashion. To help provide scalability, vCenter Server leverages a backend database (Microsoft SQL Server and Oracle are both supported, among others) that stores all the data about the hosts and VMs.
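
vCenter Server also exposes all of this through the vSphere API, which is what most automation tools build on. Purely as an illustration (and not something from the book), the sketch below uses the pyVmomi Python bindings to connect to a hypothetical vCenter Server and list the VMs it manages; the hostname and credentials are placeholders for your own lab, and a reasonably recent Python/pyVmomi is assumed.

  # Minimal pyVmomi sketch: connect to vCenter Server and list the VMs it manages.
  # "vcenter.lab.local" and the credentials are placeholders for your own environment.
  import ssl

  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()   # lab only: skip certificate validation
  si = SmartConnect(host="vcenter.lab.local",
                    user="administrator@vsphere.local",
                    pwd="VMware1!",
                    sslContext=ctx)
  try:
      content = si.RetrieveContent()
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.VirtualMachine], True)
      for vm in view.view:
          print(vm.name, vm.runtime.powerState)
  finally:
      Disconnect(si)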

vSphere Update Manager

vSphere Update Manager is an add-on package for vCenter Server that helps users keep their ESXi hosts and select VMs patched with the latest updates.

VMware vSphere Web Client and vSphere Client

vCenter Server provides a centralized management framework for VMware ESXi hosts, but it’s the vSphere Web Client (and its predecessor, the Windows-based vSphere Client) where vSphere administrators will spend most of their time.

VMware vCenter Orchestrator

VMware vCenter Orchestrator is a workflow automation engine that is automatically installed with every instance of vCenter Server. Using vCenter Orchestrator, vSphere administrators can build automated workflows for a wide variety of tasks available within vCenter Server.

vSphere Virtual Symmetric Multi-Processing

The vSphere Virtual Symmetric Multi-Processing (vSMP or Virtual SMP) product allows virtual infrastructure administrators to construct VMs with multiple virtual processors. vSphere Virtual SMP is not the licensing product that allows ESXi to be installed on servers with multiple processors; it is the technology that allows the use of multiple processors inside a VM.
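
In practical terms this simply means a VM can be configured with more than one vCPU. As a quick, hedged illustration (assuming “vm” is a vim.VirtualMachine object retrieved as in the earlier pyVmomi sketch, and that the VM is powered off or has CPU hot-add enabled):

  # Sketch only: reconfigure an existing VM with two virtual CPUs.
  from pyVmomi import vim

  spec = vim.vm.ConfigSpec(numCPUs=2, numCoresPerSocket=1)
  task = vm.ReconfigVM_Task(spec=spec)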

vSphere vMotion and vSphere Storage vMotion

If you have read anything about VMware, you have most likely read about the extremely useful feature called vMotion. vSphere vMotion, also known as live migration, is a feature of ESXi and vCenter Server that allows an administrator to move a running VM from one physical host to another physical host without having to power off the VM. This migration between two physical hosts occurs with no downtime and with no loss of network connectivity to the VM. The ability to manually move a running VM between physical hosts on an as-needed basis is a powerful feature that has a number of use cases in today’s datacenters.
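
Under the covers, a vMotion is just a method call against the vSphere API. Purely as an illustration, and assuming the connection “si” from the earlier vCenter sketch plus a VM named “vm01” and a destination host “esxi02.lab.local” that exist only in this example, it might look like this with pyVmomi:

  # Illustrative sketch of a live migration; the object names are placeholders.
  from pyVmomi import vim

  content = si.RetrieveContent()

  def find_by_name(vimtype, name):
      # Walk the inventory and return the first object of the given type with that name.
      view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
      return next(obj for obj in view.view if obj.name == name)

  vm = find_by_name(vim.VirtualMachine, "vm01")
  dest_host = find_by_name(vim.HostSystem, "esxi02.lab.local")

  # The VM stays powered on for the entire migration.
  task = vm.MigrateVM_Task(host=dest_host,
                           priority=vim.VirtualMachine.MovePriority.defaultPriority)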

vSphere Distributed Resource Scheduler

vMotion is a manual operation, meaning that an administrator must initiate the vMotion operation. What if VMware vSphere could perform vMotion operations automatically? That is the basic idea behind vSphere Distributed Resource Scheduler (DRS). If you think that vMotion sounds exciting, your anticipation will only grow after learning about DRS. DRS, simply put, leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.
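
DRS itself is configured in the vSphere Web Client, but for reference, here is a rough pyVmomi sketch of switching it on for an existing cluster; “LabCluster” is a placeholder name and find_by_name is the small helper from the vMotion sketch above.

  # Rough sketch: enable fully automated DRS on an existing cluster.
  from pyVmomi import vim

  cluster = find_by_name(vim.ClusterComputeResource, "LabCluster")

  spec = vim.cluster.ConfigSpecEx(
      drsConfig=vim.cluster.DrsConfigInfo(
          enabled=True,
          defaultVmBehavior="fullyAutomated"))  # or "manual" / "partiallyAutomated"

  # modify=True merges this change into the existing cluster configuration.
  task = cluster.ReconfigureComputeResource_Task(spec, modify=True)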

vSphere Storage DRS

vSphere Storage DRS takes the idea of vSphere DRS and applies it to storage. Just as vSphere DRS helps to balance CPU and memory utilization across a cluster of ESXi hosts, Storage DRS helps balance storage capacity and storage performance across a cluster of datastores using mechanisms that echo those used by vSphere DRS.

Storage I/O Control and Network I/O Control

VMware vSphere has always had extensive controls for modifying or controlling the allocation of CPU and memory resources to VMs. What vSphere didn’t have prior to the release of vSphere 4.1 was a way to apply the same sort of extensive controls to storage I/O and network I/O. Storage I/O Control and Network I/O Control address that shortcoming.

Profile-Driven Storage

With profile-driven storage, vSphere administrators can use storage capabilities and VM storage profiles to ensure that VMs reside on storage that is able to provide the necessary levels of capacity, performance, availability, and redundancy.

vSphere High Availability

In many cases, high availability (or the lack of high availability) is the key argument used against virtualization. The most common form of this argument more or less sounds like this: “Before virtualization, the failure of a physical server affected only one application or workload. After virtualization, the failure of a physical server will affect many more applications or workloads running on that server at the same time. We can’t put all our eggs in one basket!”
VMware addresses this concern with another feature present in ESXi clusters called vSphere High Availability (HA). Once again, by nature of the naming conventions (clusters, high availability), many traditional Windows administrators will have preconceived notions about this feature. Those notions, however, are incorrect: vSphere HA does not function like a high-availability configuration in Windows. The vSphere HA feature provides an automated process for restarting VMs that were running on an ESXi host at the time of a server failure.
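
Like DRS, HA is ultimately just part of the cluster configuration. A minimal sketch, reusing the “cluster” object from the DRS example, might look like this:

  # Minimal sketch: enable vSphere HA on the same cluster object as before.
  from pyVmomi import vim

  spec = vim.cluster.ConfigSpecEx(
      dasConfig=vim.cluster.DasConfigInfo(enabled=True))

  task = cluster.ReconfigureComputeResource_Task(spec, modify=True)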

vSphere Fault Tolerance

While vSphere HA provides a certain level of availability for VMs in the event of physical host failure, this might not be good enough for some workloads. vSphere Fault Tolerance (FT) might help in these situations.
As we described in the previous section, vSphere HA protects against unplanned physical server failure by providing a way to automatically restart VMs upon physical host failure. This need to restart a VM in the event of a physical host failure means that some downtime (generally less than 3 minutes) is incurred. vSphere FT goes even further and eliminates any downtime in the event of a physical host failure.

vSphere Storage APIs for Data Protection and VMware Data Protection

One of the most critical aspects to any network, not just a virtualized infrastructure, is a solid backup strategy as defined by a company’s disaster recovery and business continuity plan. To help address organizational backup needs, VMware vSphere 5.5 has two key components: the vSphere Storage APIs for Data Protection (VADP) and VMware Data Protection (VDP).

Virtual SAN (VSAN)

VSAN is a major new feature included with vSphere 5.5 and the evolution of work that VMware has been doing for a few years now. Building on top of the work VMware did with the vSphere Storage Appliance (VSA), VSAN lets organizations leverage the storage found in all their individual compute nodes and turn it into... well, a virtual SAN.

vSphere Replication

vSphere Replication brings data replication, a feature typically found in hardware storage platforms, into vSphere itself. It’s been around since vSphere 5.0, when it was only enabled for use in conjunction with VMware Site Recovery Manager (SRM) 5.0. In vSphere 5.1, vSphere Replication was decoupled from SRM and enabled for use even without VMware SRM.

vFlash Read Cache

Flash Read Cache brings full support for using solid-state storage as a caching mechanism into vSphere. Using Flash Read Cache, administrators can assign solid-state caching space to VMs much in the same manner as VMs are assigned CPU cores, RAM, or network connectivity. vSphere manages how the solid-state caching capacity is allocated and assigned and how it is used by the VMs.

Installing ESXi – Disk-less, CD-less, Net-less

When installing some new lab hosts the other day, I had a bit of a situation. My hosts were in an isolated (new) network environment, they didn’t have CD/DVD drives, and they were diskless. So, how do I install ESXi on these new hosts?

By far the simplest way to get ESXi up and running is simply burning the ISO to a CD and booting it from the local CD drive, but that wasn’t an option. Another option may have been to use a remote management capability (iLO, DRAC, RSA, etc.), but these hosts didn’t have that capability either. I prefer to build my hosts via a network boot option, but as I stated, these were on a network without any existing infrastructure (no PXE boot service, DHCP services, name resolution, etc.).
However, there’s a really simple way to get ESXi installed in an environment like this…

Did you know that ESXi is (for the most part) hardware agnostic? The hypervisor comes prebuilt to run on most common hardware and doesn’t tie itself to a particular platform post-installation. I find the easiest way to get around the constraints listed above is to deploy onto USB storage from another machine and simply plug this USB device into the new server to boot. Usually, this “other machine” is a VM running under VMware Fusion or Workstation! Here are the five easy steps:

  1. Build a new “blank” VM and use the ESXi ISO file as the OS so Fusion or Workstation can identify it as an ESXi build
  2. As the VM boots up, make sure you connect the USB device to the individual VM
  3. Install ESXi as normal, choosing the USB storage as the install location
  4. Shut down the VM after the installer initiates the final reboot.
  5. Dismount / Unplug the USB device from the “build” machine and plug it into the host you wish to boot.

Virtual VMUG – Upgrading and Mastering vSphere 5.5

Today I presented a session at the Virtual VMUG conference titled “Upgrading and Mastering VMware vSphere 5.5”. It was a session based around the upgrade process to vSphere 5.5 and a few little tricks for playing with vFRC and VSAN inside AutoLab. Thanks to all of you who attended; I was inundated with questions during and after the presentation. I’ll be sure to get your contact details and answer those I didn’t get to within the allotted timeframe.

As promised, below is the (higher quality) recording from the demo section of the presentation. The slide deck was pretty light on content, but if people want it, just comment below and I’ll upload that too.

OpenFiler Installation

VCP5 Blueprint Section

Objective 3.1 – Plan and Configure vSphere Storage

Activities

Boot and Install from the OpenFiler ISO
Configure and present a volume using iSCSI and NFS

Estimated Length

45 mins

Things to Watch

Make sure the partition table for OpenFiler is set up correctly
Pay close attention to the OpenFiler configuration in Part 2

Extra Credit

Identify the components of the Target IQN

Additional Reading

Part 1 – Installation

Create a new VM within VMware Workstation with the following settings:

  • VM Version 8.0
  • Disc Image: OpenFiler ISO
  • Guest OS: Other Linux 64-bit
  • VM Name: OpenFiler
  • 1x vCPU
  • 256MB RAM
  • Bridged Networking – VMnet3 (or appropriate for your lab)
  • I/O Controller: LSI Logic
  • Disk: SCSI 60GB

Edit the VM settings once it’s created and remove the following components:

  • Floppy
  • USB Controller
  • Sound Card
  • Printer

Power on the VM and hit Enter to start loading the OpenFiler installer.

Hit Next, select the appropriate language, and accept that “ALL DATA” will be removed from this drive.

Create a custom layout partition table with the following attributes:

  • /boot – 100MB primary partition
  • Swap – 2048MB primary partition
  • / – 2048MB primary partition
  • Remainder should be left as free space

The installer will complain about not having enough RAM and ask if it can turn on swap space. Hit Yes to this dialogue box.

Accept the default EXTLINUX boot loader and configure the networking as per the Lab Setup Convention.

Select the appropriate time zone for your location and on the next screen set the root password.

Once the password is entered, all configuration items are complete and installation will commence. Go make a coffee :)

After installation is complete the installer will ask to reboot. Allow this to happen and eventually a logon prompt will appear.

Part 2 of this Lab Guide will configure this newly built VM so it is ready to be used as shared storage by your ESXi host VMs.

Part 2 – Configuration

Point your browser of choice at the OpenFiler VM IP address on port 446. You will be prompted to accept an unsigned certificate.

Once at the prompt, log in with the following details:

  • Username: openfiler
  • Password: password

You will be presented with the Status / System Information page.

Click on the blue “System” tab at the top. Scroll down to the Network Access Configuration section. To allow the OpenFiler NAS to talk to the rest of our lab, we need to configure an ACL. Add the following and then click the Update button below:

  • Name: lab.local
  • Network/Host: 192.168.199.0 (or appropriate for your lab)
  • Netmask: 255.255.255.0 (or appropriate for your lab)
  • Type: Share

Since the first type of storage we will be using within our lab environment is iSCSI, we need to enable it within OpenFiler.
Click on the blue “Services” tab, then enable and start the CIFS, NFS and iSCSI related services.

Click the next blue tab, “Volumes”. Here we will configure the OpenFiler virtual disk to be shared with other hosts.
Since no physical volumes exist at this point, let’s create one now by clicking “create new physical volumes” and then on “/dev/sda” in the Edit Disk column.

This view shows the partition layout that was created in Part 1 – Installation: about 4.1GB / 7% of space used by the OpenFiler installation and around 56GB / 93% free.

Towards the bottom of this page, we are given the option to create new partitions. Usually in a Production environment there would be RAID protection and hardware based NAS or SAN equipment but for the purposes of learning we have no need for expensive equipment. OpenFiler can present a single Physical (or Virtual) disk that will suit our needs.
For unknown reasons OpenFiler does not like directly adjacent partitions to the OS therefore create a new partition with the following settings

  • Mode: Primary
  • Partition Type: Physical Volume
  • Starting cylinder: 600
  • Ending cylinder: 7750

Once the new partition is created, the new layout is presented showing the additional /dev/sda4 partition.

We now have a new physical volume that we can continue configuring for presentation to ESXi. The next step takes us back to the blue “Volumes” tab. Now that we have an accessible physical volume, OpenFiler gives us the option to add it to a Volume Group.
Create a Volume Group Name of: LABVG with “/dev/sda4” selected.

Now that the LAB Volume Group is created, we can create an actual volume that is allocated directly to presented storage (NFS, iSCSI, etc.).

In the right-hand menu, click the “Add Volume” link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABISCSIVOL
  • Volume Description: LAB.Local iSCSI Volume
  • Required Space (MB): 40960 (40GB)
  • Filesystem / Volume type: block (iSCSI,FC,etc)

A confirmation screen is then presented, indicating that the newly created storage volume is allocated to Block Storage.

The next step to present the iSCSI block storage to ESXi is to set up a Target and LUN.
Still under the blue “Volumes” tab, click “iSCSI Targets” in the right-hand menu and you will be presented with the “Add new iSCSI Target” option. Leave the Target IQN as it is and click the “Add” button.
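
If you’re chasing the extra credit: an iSCSI qualified name has a fixed structure defined by RFC 3720. The snippet below pulls apart a made-up OpenFiler-style IQN purely to show the pieces; the tsn suffix on your target will be different.

  # A made-up OpenFiler-style IQN, used only to illustrate the naming format (RFC 3720).
  iqn = "iqn.2006-01.com.openfiler:tsn.9d4f1c2ab3e0"

  prefix, date, rest = iqn.split(".", 2)
  authority, target_name = rest.split(":", 1)

  print(prefix)       # 'iqn'              -> iSCSI qualified name type identifier
  print(date)         # '2006-01'          -> year-month the naming authority registered its domain
  print(authority)    # 'com.openfiler'    -> reversed domain name of the naming authority
  print(target_name)  # 'tsn.9d4f1c2ab3e0' -> unique string the authority assigns to this target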

Once the new iSCSI Target is created we need to allocate this target an actual LUN. Click the grey “LUN Mapping” tab. All details for this volume can be left as is; it is just a matter of clicking the “Map” button.

The very last iSCSI step is to allow our ESXi hosts network access to our newly presented storage. Click on the grey “Network ACL” tab and change the lab.local ACL to “allow” and click Update.
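
A later lab will add this target to the ESXi hosts through the client, but for reference, the equivalent API step looks roughly like the pyVmomi sketch below; “esxi_host” is assumed to be a vim.HostSystem object already retrieved via the pyVmomi bindings, and “vmhba33” / “192.168.199.7” are placeholders for your software iSCSI adapter and OpenFiler IP.

  # Illustrative only: point the ESXi software iSCSI adapter at the OpenFiler target.
  from pyVmomi import vim

  storage = esxi_host.configManager.storageSystem

  send_target = vim.host.InternetScsiHba.SendTarget(address="192.168.199.7", port=3260)
  storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba33", targets=[send_target])

  # Rescan so the newly mapped LUN shows up under the host's storage adapters.
  storage.RescanAllHba()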

At this stage leave the CHAP Authentication settings alone, as we will discuss them in a later lab.
One last thing before closing out this lengthy lab… NFS. Along with iSCSI we also want to experiment with NFS, and we also need a place to locate our ISO repository. In production environments this is usually placed on cheaper storage, typically presented by a NAS as NFS or SMB. Let’s create an NFS partition and mount point while we’re here.
Within the blue Volumes tab, in the right-hand menu click the “Add Volume” link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABNFSVOL
  • Volume Description: LAB.Local NFS Volume
  • Required Space (MB): 12512 (Max)
  • Filesystem / Volume type: Ext4

Now head to the blue “Shares” tab and click on “LAB.Local NFS Volume”. This prompts you for a Sub-folder name; call it Build and then click “Create Sub-folder”.

Now we want to click on this new Sub-folder and click the “Make Share” button.
The next page has three settings we need to change, and then we’re all done.
Add “Build” to the “Override SMB/Rsync share name” field. Change the “Share Access Control Mode” to “Public guest access”, then click the “Update” button.

Then change the “Host access configuration” to SMB = RW, NFS = RW, HTTP(S) = NO, FTP = NO, Rsync = NO. Tick the “Restart services” checkbox, then click “Edit” and change the “UID/GID Mapping” to “no_root_squash”. Finally, click the “Update” button one last time.
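
Once the share is up, each ESXi host can mount it as an NFS datastore. The lab will do this through the client later, but as a rough pyVmomi sketch (the IP, export path, and datastore name are placeholders, and “esxi_host” is a vim.HostSystem object as in the earlier sketch):

  # Rough sketch: mount the OpenFiler NFS export as a datastore on one ESXi host.
  from pyVmomi import vim

  ds_system = esxi_host.configManager.datastoreSystem

  nfs_spec = vim.host.NasVolume.Specification(
      remoteHost="192.168.199.7",               # OpenFiler VM address (placeholder)
      remotePath="/mnt/labvg/labnfsvol/build",  # hypothetical export path for the Build share
      localPath="NFS-Build",                    # name the datastore will get on the host
      accessMode="readWrite")

  datastore = ds_system.CreateNasDatastore(nfs_spec)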

Congratulations, you now have a functioning NAS virtual machine that presents virtual hard disks like any “real” NAS or SAN.

Mastering vSphere 5.5

Just a quick note for those wanting a companion text for their AutoLab deployment…

I’ve been working on the latest revision of the bestselling Mastering VMware vSphere book for the last 9 months. While AutoLab isn’t updated for vSphere 5.5 yet, rest assured that Alastair and I will start work on it as soon as vSphere 5.5 goes GA next month.

Anyway, if you’re studying for that VCP5-DCV certification or just want a comprehensive practical guide for all the vSphere features, pick up a copy from your local or electronic store of choice!

Mastering VMware vSphere 5.5 is available on Amazon for pre-order now.

VMworld 2012 – Vote for the AutoLab Sessions!

With VMworld just around the corner, session voting has been opened up to the public.

Alastair and I have submitted a number of sessions and would really appreciate any votes cast our way. All you need is a free VMworld account (you don’t have to be attending VMworld); then apply a filter for either “Alastair Cooke” or “Nick Marshall” and tick the thumbs up next to our sessions. Vote here.

  • 1496 vSphere AutoLab. Build Your Personal Training and Test Lab Using the PC You Already Have Without All the Hardwork
  • 1497 Secrets of the vSphere AutoLab: How and When to Build Automation Into Your vSphere Deployment
  • 1498 Certification Preparation with the Community Lab Guide to vSphere 5

There’s also a session submitted for the vBrownBag crew:

  • 2356 The vBrownBag Panel: Certification Preparation By the Community for the Community

Thanks for your vote!

Introduction

Welcome to LabGuides.com!

This site was created as the online companion to the upcoming VMware Certified Professional 5 LabGuides ebook written by Alastair Cooke and Nick Marshall. It also hosts the AutoLab, developed by Alastair, which can be used to follow the ebook.

Time permitting, the site will be expanded to serve as the companion for other ebooks, and possibly to host guides directly on the site.

For any feedback, bug reports or just to contact us, email feedback@labguides.com