Category Archives: VMware

AutoLab Version 2.0 Released

It has been a while, but the day has arrived. AutoLab version 2.0 is available for download. This version doesn’t support a new vSphere release since VMware hasn’t shipped one. AutoLab 2.0 is more of a maintenance and usability release.

The biggest feature is adding support for Windows Server 2012 R2 as the platform for the domain controller and vCenter VMs. Naturally, you should make sure the version of vSphere you deploy is supported on top of the version of Windows Server you use.

I have also removed the tagged VLANs, which makes it easier to run multiple AutoLab instances on one ESXi server or to extend one AutoLab across a couple of physical servers if you only have smaller machines.

I’ve also added the ability to customize the password for the administrator accounts, which helps lock down an AutoLab environment.

Go ahead and download the new build from the usual download page and get stuck in. If you haven’t used AutoLab before make sure to read the deployment guide.

AutoLab Videos

The last few weeks I’ve been busy making videos. There are now videos covering the setup of all the supported outer virtualization platforms, as well as a video that looks at populating the build share. Here are links to the videos:

VMware Workstation setup

VMware Player setup

VMware ESXi setup

VMware Fusion Setup

VMware Fusion, Building VMs

Populating the Build share

Another great set of AutoLab videos is made by Hersey Cartwright; you can find the first one here, with all the other videos linked.

In addition, I’ve been making videos for my new site, Notes4Engineers. This site has brief videos about designing, building, and operating IT infrastructure. And while we’re talking, I will mention that I’m now delivering my own workshops; there’s more detail about this change on my blog.

Installing vCenter Server in a Linked Mode Group

The following is an excerpt from my book Mastering VMware vSphere 5.5, more of which you can read on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

What is a vCenter linked mode group, and why might you want to install multiple instances of vCenter Server into such a group? If you need more ESXi hosts or more VMs than a single vCenter Server instance can handle, or if you need more than one instance of vCenter Server, you can install multiple instances of vCenter Server to scale outward or sideways and have those instances share licensing and permission information. These multiple instances of vCenter Server that share information among them are referred to as a linked mode group. In a linked mode environment, there are multiple vCenter Server instances, and each of the instances has its own set of hosts, clusters, and VMs.

vCenter Server linked mode uses Microsoft ADAM (Active Directory Application Mode) to replicate the following information between the instances:

  • Connection information (IP addresses and ports)
  • Certificates and thumbprints
  • Licensing information
  • User roles and permissions

There are a few different reasons why you might need multiple vCenter Server instances running in a linked mode group. With vCenter Server 4.0, one common reason was the size of the environment. With the dramatic increases in capacity incorporated into vCenter Server 4.1 and above, the need for multiple vCenter Server instances due to size will likely decrease. However, you might still use multiple vCenter Server instances. You might prefer to deploy multiple vCenter Server instances in a linked mode group to accommodate organizational or geographic constraints, for example.

Item                                      vCenter Server 4.0   vCenter Server 4.1   vCenter Server 5.0-5.5
ESXi hosts per vCenter Server instance    200                  1000                 1000
VMs per vCenter Server instance           3000                 10000                10000

Before you install additional vCenter Server instances, you must verify the following prerequisites:

  • All computers that will run vCenter Server in a linked mode group must be members of a domain. The servers can exist in different domains only if a two-way trust relationship exists between the domains.
  • DNS must be operational. Also, the DNS name of the servers must match the server name (a quick check is sketched after this list).
  • The servers that will run vCenter Server cannot be domain controllers or terminal servers.
  • You cannot combine vCenter Server 5 instances in a linked mode group with earlier versions of vCenter Server.
  • vCenter Server instances in linked mode must be connected to a single SSO server, a two-node SSO cluster, or two nodes in multisite mode.
  • Windows vCenter is required. Linked mode is not supported with the Linux-based vCenter virtual appliance.
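
Before running the installer, it is worth confirming the DNS prerequisite from each server. A quick check from a Windows command prompt might look like the following sketch; the hostname and IP address are hypothetical examples, not values from the book:

rem Forward lookup: should return this server's IP address (name is an example)
nslookup vcenter2.lab.local
rem Reverse lookup: should return the matching fully qualified domain name
nslookup 192.168.1.32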

Each vCenter Server instance must have its own backend database, and each database must be configured as outlined earlier with the correct permissions. The databases can all reside on the same database server, or each database can reside on its own database server.

After you have met the prerequisites, installing vCenter Server in a linked mode group is straightforward. You follow the steps outlined previously in “Installing vCenter Server” until you get to step 10. In the previous instructions, you installed vCenter Server as a stand-alone instance in step 10. This sets up a master ADAM instance that vCenter Server uses to store its configuration information.

This time, however, at step 10 you simply select the option Join A VMware vCenter Server Group Using Linked Mode To Share Information. When you select to install into a linked mode group, the next screen also prompts for the name and port number of a remote vCenter Server instance. The new vCenter Server instance uses this information to replicate data from the existing server’s ADAM repository. After you’ve provided the information to connect to a remote vCenter Server instance, the rest of the installation follows the same steps.

You can also change the linked mode configuration after the installation of vCenter Server. For example, if you install an instance of vCenter Server and then realize you need to create a linked mode group, you can use the vCenter Server Linked Mode Configuration icon on the Start menu to change the configuration.

Perform the following steps to join an existing vCenter Server installation to a linked mode group:


  1. Log into the vCenter Server computer as an administrative user, and run vCenter Server Linked Mode Configuration from the Start menu.
  2. Click Next at the Welcome To The Installation Wizard For VMware vCenter Server screen.
  3. Select Modify Linked Mode Configuration, and click Next.
  4. To join an existing linked mode group, select “Join a VMware vCenter Server group using Linked Mode to share information,” and click Next. This is shown in Figure 3.10.
  5. A warning appears reminding you that you cannot join vCenter Server 5.5 with older versions of vCenter Server. Click OK.
  6. Supply the name of the server and the LDAP port, specifying the server name as a fully qualified domain name. It’s generally not necessary to modify the LDAP port unless you know that the other vCenter Server instance is running on a port other than the standard port. Click Next to continue.
  7. Click Continue to proceed.
  8. Click Finish.

Using this same process, you can also remove an existing vCenter Server installation from a linked mode group.

After the additional vCenter Server is up and running in the linked mode group, logging in via the vSphere Client displays all the linked vCenter Server instances in the inventory view, as you can see in Figure 3.11.

One quick note about linked mode: While the licensing and permissions are shared among all the linked mode group members, each vCenter Server instance is managed separately, and each vCenter Server instance represents a vMotion domain by virtue of each vCenter Server having unique datacenter objects that ultimately represent a vMotion boundary. This means that you can’t perform a vMotion migration between vCenter Server instances in a linked mode group. We’ll discuss vMotion in detail in Chapter 12.

Installing vCenter Server onto a Windows Server-based computer, though, is only one of the options available for getting vCenter Server running in your environment. For those environments that don’t need linked mode support, or environments for which you want a full-featured virtual appliance with all the necessary network services, the vCenter Server virtual appliance is a good option. We’ll discuss the vCenter Server virtual appliance in the next section.

Reconfiguring the ESXi Management Network

The following is an excerpt from my book Mastering VMware vSphere 5.5, more of which you can read on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

During the installation of ESXi, the installer creates a virtual switch (also known as a vSwitch) bound to a physical NIC. The tricky part, depending on your server hardware, is that the installer might select a different physical NIC than the one you need for correct network connectivity. Consider the scenario depicted in Figure 2.12. If, for whatever reason, the ESXi installer doesn’t link the correct physical NIC to the vSwitch it creates, then you won’t have network connectivity to that host. We’ll talk more about why ESXi’s network connectivity must be configured with the correct NIC in Chapter 5, but for now just understand that this is a requirement for connectivity. Since you need network connectivity to manage the host from the vSphere Client, how do you fix this?

Change ESXi Management Network

The simplest fix for this problem is to unplug the network cable from the current Ethernet port in the back of the server and continue trying the remaining ports until the host is accessible, but that’s not always possible or desirable. The better way is to use the DCUI to reconfigure the management network so that it is configured the way you need it to be.

Perform the following steps to fix the management NIC in ESXi using the DCUI:

  1. Access the console of the ESXi host, either physically or via a remote console solution such as an IP-based KVM.
  2. On the ESXi home screen, shown in Figure 2.13, press F2 for Customize System/View Logs. If a root password has been set, enter that root password.
  3. From the System Customization menu, select Configure Management Network, and press Enter.
  4. From the Configure Management Network menu, select Network Adapters, and press Enter.
  5. Use the spacebar to toggle which network adapter or adapters will be used for the system’s management network, as shown in Figure 2.14. Press Enter when finished.
  6. Press Esc to exit the Configure Management Network menu. When prompted to apply changes and restart the management network, press Y. After the correct NIC has been assigned to the ESXi management network, the System Customization menu provides a Test Management Network option to verify network connectivity.
  7. Press Esc to log out of the System Customization menu and return to the ESXi home screen.

The other options within the DCUI for troubleshooting management network issues are covered in detail within Chapter 5.

At this point, you should have management network connectivity to the ESXi host, and from here forward you can use the vSphere Client to perform other configuration tasks, such as configuring time synchronization and name resolution.
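
If you can reach the host’s command line (via the local ESXi Shell or SSH), the same uplink change can also be made without the menus. The following is a minimal sketch, assuming vSwitch0 carries the management network and vmnic1 is the NIC you actually want; verify the names on your own host first:

# List the physical NICs and their link status to find the connected one
esxcli network nic list
# Show each vSwitch and its current uplinks
esxcfg-vswitch -l
# Unlink the wrong uplink and attach the correct one (vmnic names are assumptions)
esxcfg-vswitch -U vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

Of course, if the management network is down you’ll need console access to run these, which is exactly what the DCUI procedure above provides.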

Performing an Unattended Installation of ESXi

The following is an excerpt from my book Mastering VMware vSphere 5.5, more of which you can read on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

ESXi Installation Scripts

ESXi supports the use of an installation script (often referred to as a kickstart, or KS, script) that automates the installation routine. By using an installation script, users can create unattended installation routines that make it easy to quickly deploy multiple instances of ESXi.

ESXi comes with a default installation script on the installation media. Listing 2.1 shows the default installation script.


Listing 2.1: ESXi provides a default installation script

#
# Sample scripted installation file
#
# Accept the VMware End User License Agreement
vmaccepteula
# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword
# Install on the first local disk available on machine
install --firstdisk --overwritevmfs
# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )

If you want to use this default install script to install ESXi, you can specify it when booting the VMware ESXi installer by adding the ks=file://etc/vmware/weasel/ks.cfg boot option. We’ll show you how to specify that boot option shortly.

Of course, the default installation script is useful only if the settings work for your environment. Otherwise, you’ll need to create a custom installation script. The installation script commands are much the same as those supported in previous versions of vSphere. Here’s a breakdown of some of the commands supported in the ESXi installation script:

accepteula or vmaccepteula
These commands accept the ESXi license agreement.

install
The install command specifies that this is a fresh installation of ESXi, not an upgrade. You must also specify the following parameters:

--firstdisk Specifies the disk on which ESXi should be installed. By default, the ESXi installer chooses local disks first, then remote disks, and then USB disks. You can change the order by appending a comma-separated list to the --firstdisk command, like this: --firstdisk=remote,local. This would install to the first available remote disk and then to the first available local disk. Be careful here; you don’t want to inadvertently overwrite something (see the next set of commands).
--overwritevmfs or --preservevmfs These commands specify how the installer will handle existing VMFS datastores. The commands are pretty self-explanatory.

keyboard
This command specifies the keyboard type. It’s an optional component in the installation script.

network
This command provides the network configuration for the ESXi host being installed. It is optional but generally recommended. Depending on your configuration, some of the additional parameters are required:

--bootproto This parameter is set to dhcp for assigning a network address via DHCP or to static for manual assignment of an IP address.
--ip This sets the IP address and is required with --bootproto=static. The IP address should be specified in standard dotted-decimal format.
--gateway This command specifies the IP address of the default gateway in standard dotted-decimal format. It’s required if you specified --bootproto=static.
--netmask The network mask, in standard dotted-decimal format, is specified with this command. If you specify --bootproto=static, you must include this value.
--hostname Specifies the hostname for the installed system.
--vlanid If you need the system to use a VLAN ID, specify it with this command. Without a VLAN ID specified, the system will respond only to untagged traffic.
--addvmportgroup This parameter is set to either 0 or 1 and controls whether a default VM Network port group is created. 0 does not create the port group; 1 does create the port group.

reboot
This command is optional and, if specified, will automatically reboot the system at the end of installation. If you add the --noeject parameter, the CD is not ejected.

rootpw
This is a required parameter and sets the root password for the system. If you don’t want the root password displayed in the clear, generate an encrypted password and use the --iscrypted parameter.

upgrade
This specifies an upgrade to ESXi 5.5. The upgrade command uses many of the same parameters as install and also supports a parameter for deleting the ESX Service Console VMDK for upgrades from ESX to ESXi: the --deletecosvmdk parameter.

This is by no means a comprehensive list of all the commands available in the ESXi installation script, but it does cover the majority of the commands you’ll see in use.
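
Putting a few of these commands together, here is a sketch of what a custom installation script with static addressing might look like. All of the addresses, the hostname, and the password are placeholder values for illustration, not settings from the book:

# Custom scripted installation - all values below are examples
vmaccepteula
# Set the root password (use --iscrypted with a hash to keep it out of the clear)
rootpw VMware1!
# Install to the first local disk, preserving any existing VMFS datastores
install --firstdisk --preservevmfs
# Static addressing on the first network adapter
network --bootproto=static --device=vmnic0 --ip=192.168.1.101 --netmask=255.255.255.0 --gateway=192.168.1.254 --hostname=esxi01.lab.local --addvmportgroup=1
# Reboot when the installation completes, without ejecting the CD
reboot --noeject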

Looking back at Listing 2.1, you’ll see that the default installation script incorporates a %post section, where additional scripting can be added using either the Python interpreter or the BusyBox interpreter. What you don’t see in Listing 2.1 is the %firstboot section, which also allows you to add Python or BusyBox commands for customizing the ESXi installation. This section comes after the installation script commands but before the %post section. Any command supported in the ESXi shell can be executed in the %firstboot section, so commands such as vim-cmd, esxcfg-vswitch, esxcfg-vmknic, and others can be combined in the %firstboot section of the installation script.
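
As a simple illustration, a %firstboot section that enables the ESXi Shell and SSH (handy in a lab, not a recommendation for production) might look like this sketch, placed after the main installation commands:

%firstboot --interpreter=busybox
# Enable and start the local ESXi Shell and SSH (example only)
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh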

A number of commands that were supported in previous versions of vSphere (by ESX or ESXi) are no longer supported in installation scripts for ESXi 5.5, such as these:

  • autopart (replaced by install, upgrade, or installorupgrade)
  • auth or authconfig
  • bootloader
  • esxlocation
  • firewall
  • firewallport
  • serialnum or vmserialnum
  • timezone
  • virtualdisk
  • zerombr
  • The --level option of %firstboot

Once you have created the installation script you will use, you need to specify that script as part of the installation routine.

Specifying the location of the installation script as a boot option is not only how you would tell the installer to use the default script but also how you tell the installer to use a custom installation script that you’ve created. This installation script can be located on a USB flash drive or in a network location accessible via NFS, HTTP, HTTPS, or FTP. Table 2.1 summarizes some of the supported boot options for use with an unattended installation of ESXi.


Table 2.1: Boot options for an unattended ESXi installation

Boot Option Brief Description
ks=cdrom:/path Uses the installation script found at path on the CD-ROM. The installer checks all CD-ROM drives until the file matching the specified path is found.
ks=usb Uses the installation script named ks.cfg found in the root directory of an attached USB device. All USB devices are searched as long as they have a FAT16 or FAT32 file system.
ks=usb:/path Uses the installation script at the specified path on an attached USB device. This allows you to use a different filename or location for the installation script.
ks=protocol:/serverpath Uses the installation script found at the specified network location. The protocol can be NFS, HTTP, HTTPS, or FTP.
ip=XX.XX.XX.XX Specifies a static IP address for downloading the installation script and the installation media.
nameserver=XX.XX.XX.XX Provides the IP address of a Domain Name System (DNS) server to use for name resolution when downloading the installation script or the installation media.
gateway=XX.XX.XX.XX Provides the network gateway to be used as the default gateway for downloading the installation script and the installation media.
netmask=XX.XX.XX.XX Specifies the network mask for the network interface used to download the installation script or the installation media.
vlanid=XX Configures the network interface to be on the specified VLAN when downloading the installation script or the installation media.

Not a Comprehensive List of Boot Options

The list found in Table 2.1 includes only some of the more commonly used boot options for performing a scripted installation of ESXi. For the complete list of supported boot options, refer to the vSphere Installation and Setup Guide, available from www.vmware.com/go/support-pubs-vsphere.

To use one or more of these boot options during the installation, you’ll need to specify them at the boot screen for the ESXi installer. The bottom of the installer boot screen states that you can press Shift+O to edit the boot options.

The following code line is an example that could be used to retrieve the installation script from an HTTP URL; this would be entered at the prompt at the bottom of the installer boot screen:

<ENTER: Apply options and boot> <ESC: Cancel>
> runweasel ks=http://192.168.1.1/scripts/ks.cfg ip=192.168.1.200 netmask=255.255.255.0 gateway=192.168.1.254

Using an installation script to install ESXi not only speeds up the installation process but also helps to ensure the consistent configuration of all your ESXi hosts.

An Introduction to vSphere 5.5

The following is an excerpt from my book Mastering VMware vSphere 5.5, more of which you can read on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this software, download AutoLab.

Introducing VMware vSphere 5.5

Now in its fifth generation, VMware vSphere 5.5 builds on previous generations of VMware’s enterprise-grade virtualization products. vSphere 5.5 extends fine-grained resource allocation controls to more types of resources, enabling VMware administrators to have even greater control over how resources are allocated to and used by virtual workloads. With dynamic resource controls, high availability, unprecedented fault-tolerance features, distributed resource management, and backup tools included as part of the suite, IT administrators have all the tools they need to run an enterprise environment ranging from a few servers to thousands of servers.

Exploring VMware vSphere 5.5

The VMware vSphere product suite is a comprehensive collection of products and features that together provide a full array of enterprise virtualization functionality. The vSphere product suite includes the following products and features:

VMware ESXi

The core of the vSphere product suite is the hypervisor, which is the virtualization layer that serves as the foundation for the rest of the product line. In vSphere 5 and later, including vSphere 5.5, the hypervisor comes in the form of VMware ESXi.

VMware vCenter Server

Stop for a moment to think about your current network. Does it include Active Directory? There is a good chance it does. Now imagine your network without Active Directory, without the ease of a centralized management database, without the single sign-on capabilities, and without the simplicity of groups. That is what managing VMware ESXi hosts would be like without using VMware vCenter Server. Not a very pleasant thought, is it? Now calm yourself down, take a deep breath, and know that vCenter Server, like Active Directory, is meant to provide a centralized management platform and framework for all ESXi hosts and their respective VMs. vCenter Server allows IT administrators to deploy, manage, monitor, automate, and secure a virtual infrastructure in a centralized fashion. To help provide scalability, vCenter Server leverages a backend database (Microsoft SQL Server and Oracle are both supported, among others) that stores all the data about the hosts and VMs.

vSphere Update Manager

vSphere Update Manager is an add-on package for vCenter Server that helps users keep their ESXi hosts and select VMs patched with the latest updates.

VMware vSphere Web Client and vSphere Client

vCenter Server provides a centralized management framework for VMware ESXi hosts, but it’s the vSphere Web Client (and its predecessor, the Windows-based vSphere Client) where vSphere administrators will spend most of their time.

VMware vCenter Orchestrator

VMware vCenter Orchestrator is a workflow automation engine that is automatically installed with every instance of vCenter Server. Using vCenter Orchestrator, vSphere administrators can build automated workflows for a wide variety of tasks available within vCenter Server.

vSphere Virtual Symmetric Multi-Processing

The vSphere Virtual Symmetric Multi-Processing (vSMP or Virtual SMP) product allows virtual infrastructure administrators to construct VMs with multiple virtual processors. vSphere Virtual SMP is not the licensing product that allows ESXi to be installed on servers with multiple processors; it is the technology that allows the use of multiple processors inside a VM.

vSphere vMotion and vSphere Storage vMotion

If you have read anything about VMware, you have most likely read about the extremely useful feature called vMotion. vSphere vMotion, also known as live migration, is a feature of ESXi and vCenter Server that allows an administrator to move a running VM from one physical host to another physical host without having to power off the VM. This migration between two physical hosts occurs with no downtime and with no loss of network connectivity to the VM. The ability to manually move a running VM between physical hosts on an as-needed basis is a powerful feature that has a number of use cases in today’s datacenters.

vSphere Distributed Resource Scheduler

vMotion is a manual operation, meaning that an administrator must initiate the vMotion operation. What if VMware vSphere could perform vMotion operations automatically? That is the basic idea behind vSphere Distributed Resource Scheduler (DRS). If you think that vMotion sounds exciting, your anticipation will only grow after learning about DRS. DRS, simply put, leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.

vSphere Storage DRS

vSphere Storage DRS takes the idea of vSphere DRS and applies it to storage. Just as vSphere DRS helps to balance CPU and memory utilization across a cluster of ESXi hosts, Storage DRS helps balance storage capacity and storage performance across a cluster of datastores using mechanisms that echo those used by vSphere DRS.

Storage I/O Control and Network I/O Control

VMware vSphere has always had extensive controls for modifying or controlling the allocation of CPU and memory resources to VMs. What vSphere didn’t have prior to the release of vSphere 4.1 was a way to apply the same sort of extensive controls to storage I/O and network I/O. Storage I/O Control and Network I/O Control address that shortcoming.

Profile-Driven Storage

With profile-driven storage, vSphere administrators are able to use storage capabilities and VM storage profiles to ensure that VMs are residing on storage that is able to provide the necessary levels of capacity, performance, availability, and redundancy.

vSphere High Availability

In many cases, high availability (or the lack of high availability) is the key argument used against virtualization. The most common form of this argument more or less sounds like this: “Before virtualization, the failure of a physical server affected only one application or workload. After virtualization, the failure of a physical server will affect many more applications or workloads running on that server at the same time. We can’t put all our eggs in one basket!”

VMware addresses this concern with another feature present in ESXi clusters called vSphere High Availability (HA). Once again, by nature of the naming conventions (clusters, high availability), many traditional Windows administrators will have preconceived notions about this feature. Those notions, however, are incorrect in that vSphere HA does not function like a high availability configuration in Windows. The vSphere HA feature provides an automated process for restarting VMs that were running on an ESXi host at the time of server failure.

vSphere Fault Tolerance

While vSphere HA provides a certain level of availability for VMs in the event of physical host failure, this might not be good enough for some workloads. vSphere Fault Tolerance (FT) might help in these situations.
As we described in the previous section, vSphere HA protects against unplanned physical server failure by providing a way to automatically restart VMs upon physical host failure. This need to restart a VM in the event of a physical host failure means that some downtime (generally less than 3 minutes) is incurred. vSphere FT goes even further and eliminates any downtime in the event of a physical host failure.

vSphere Storage APIs for Data Protection and VMware Data Protection

One of the most critical aspects to any network, not just a virtualized infrastructure, is a solid backup strategy as defined by a company’s disaster recovery and business continuity plan. To help address organizational backup needs, VMware vSphere 5.5 has two key components: the vSphere Storage APIs for Data Protection (VADP) and VMware Data Protection (VDP).

Virtual SAN (VSAN)

VSAN is a major new feature included with vSphere 5.5 and the evolution of work that VMware has been doing for a few years now. Building on top of the work VMware did with the vSphere Storage Appliance (VSA), VSAN lets organizations leverage the storage found in all their individual compute nodes and turn it into... well, a virtual SAN.

vSphere Replication

vSphere Replication brings data replication, a feature typically found in hardware storage platforms, into vSphere itself. It’s been around since vSphere 5.0, when it was only enabled for use in conjunction with VMware Site Recovery Manager (SRM) 5.0. In vSphere 5.1, vSphere Replication was decoupled from SRM and enabled for use even without VMware SRM.

vFlash Read Cache

Flash Read Cache brings full support for using solid-state storage as a caching mechanism into vSphere. Using Flash Read Cache, administrators can assign solid-state caching space to VMs in much the same manner as VMs are assigned CPU cores, RAM, or network connectivity. vSphere manages how the solid-state caching capacity is allocated and assigned and how it is used by the VMs.

Installing ESXi – Disk-less, CD-less, Net-less

When installing some new lab hosts the other day, I had a bit of a situation. My hosts were in an isolated (new) network environment, they didn’t have CD/DVD drives, and they were diskless. So, how do I install ESXi to these new hosts?

By far the simplest way to get ESXi up and running is simply burning the ISO to a CD and booting it from the local CD drive, but that wasn’t an option. Another option might have been to use a remote management capability (iLO, DRAC, RSA, etc.), but these hosts didn’t have that capability either. I prefer to build my hosts via a network boot option, but as I stated, these were on a network without any existing infrastructure (no PXE boot service, DHCP services, name resolution, etc.).
However, there’s a really simple way to get ESXi installed in an environment like this…

Did you know that ESXi is (for the most part) hardware-agnostic? The hypervisor comes prebuilt to run on most common hardware and doesn’t tie itself to a particular platform post-installation. I find the easiest way to get around the constraints listed above is to deploy onto USB storage from another machine and simply plug this USB device into the new server to boot. Usually, this “other machine” is a VM running under VMware Fusion or Workstation! Here are the five easy steps:

  1. Build a new “blank” VM and use the ESXi ISO file as the OS so Fusion or Workstation can identify it as an ESXi build.
  2. As the VM boots up, make sure you connect the USB device to the individual VM.
  3. Install ESXi as normal, choosing the USB storage as the install location.
  4. Shut down the VM after the installer initiates the final reboot.
  5. Dismount/unplug the USB device from the “build” machine and plug it into the host you wish to boot.