Category Archives: Guides

Reconfiguring the ESXi Management Network

The following is an excerpt from my book Mastering VMware vSphere 5.5; you can read more from it on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

During the installation of ESXi, the installer creates a virtual switch—also known as a vSwitch—bound to a physical NIC. The tricky part, depending on your server hardware, is that the installer might select a different physical NIC than the one you need for correct network connectivity. Consider the scenario depicted in Figure 2.12. If, for whatever reason, the ESXi installer doesn’t link the correct physical NIC to the vSwitch it creates, then you won’t have network connectivity to that host. We’ll talk more about why ESXi’s network connectivity must be configured with the correct NIC in Chapter 5, but for now just understand that this is a requirement for connectivity. Since you need network connectivity to manage the host from the vSphere Client, how do you fix this?

Change ESXi Management Network

The simplest fix for this problem is to unplug the network cable from the current Ethernet port in the back of the server and continue trying the remaining ports until the host is accessible, but that’s not always possible or desirable. The better way is to use the DCUI to reconfigure the management network so that it is configured the way you need it to be.

Perform the following steps to fix the management NIC in ESXi using the DCUI:

  1. Access the console of the ESXi host, either physically or via a remote console solution such as an IP-based KVM.
  2. On the ESXi home screen, shown in Figure 2.13, press F2 for Customize System/View Logs. If a root password has been set, enter that root password.
  3. From the System Customization menu, select Configure Management Network, and press Enter.
  4. From the Configure Management Network menu, select Network Adapters, and press Enter.
  5. Use the spacebar to toggle which network adapter or adapters will be used for the system’s management network, as shown in Figure 2.14. Press Enter when finished.
  6. Press Esc to exit the Configure Management Network menu. When prompted to apply changes and restart the management network, press Y. After the correct NIC has been assigned to the ESXi management network, the System Customization menu provides a Test Management Network option to verify network connectivity.
  7. Press Esc to log out of the System Customization menu and return to the ESXi home screen.

The other options within the DCUI for troubleshooting management network issues are covered in detail within Chapter 5.

At this point, you should have management network connectivity to the ESXi host, and from here forward you can use the vSphere Client to perform other configuration tasks, such as configuring time synchronization and name resolution.
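
If you happen to have access to the ESXi Shell on the host (for example, via the DCUI or a remote console), the same change can also be made from the command line. The following is a minimal sketch only; it assumes the management network lives on vSwitch0, that vmnic0 is the incorrectly selected uplink, and that vmnic1 is the NIC with the correct connectivity, so substitute the names that apply to your hardware.

# List the standard vSwitches and their currently assigned uplinks
esxcli network vswitch standard list
# List the physical NICs and their link status to identify the correct one
esxcli network nic list
# Attach the correct uplink to vSwitch0, then remove the incorrect one
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitch0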

Performing an Unattended Installation of ESXi

The following is an excerpt from my book Mastering VMware vSphere 5.5; you can read more from it on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this procedure, download AutoLab.

ESXi Installation Scripts

ESXi supports the use of an installation script (often referred to as a kickstart, or KS, script) that automates the installation routine. By using an installation script, users can create unattended installation routines that make it easy to quickly deploy multiple instances of ESXi.
ESXi comes with a default installation script on the installation media. Listing 2.1 shows the default installation script.


Listing 2.1: ESXi provides a default installation script

#
# Sample scripted installation file
#
# Accept the VMware End User License Agreement
vmaccepteula
# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword
# Install on the first local disk available on machine
install --firstdisk --overwritevmfs
# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )

If you want to use this default install script to install ESXi, you can specify it when booting the VMware ESXi installer by adding the ks=file://etc/vmware/weasel/ks.cfg boot option. We’ll show you how to specify that boot option shortly.
Of course, the default installation script is useful only if the settings work for your environment. Otherwise, you’ll need to create a custom installation script. The installation script commands are much the same as those supported in previous versions of vSphere. Here’s a breakdown of some of the commands supported in the ESXi installation script:

accepteula or vmaccepteula
These commands accept the ESXi license agreement.

Install
The install command specifies that this is a fresh installation of ESXi, not an upgrade. You must also specify the following parameters:

--firstdisk Specifies the disk on which ESXi should be installed. By default, the ESXi installer chooses local disks first, then remote disks, and then USB disks. You can change the order by appending a comma-separated list to the --firstdisk command, like this:
--firstdisk=remote,local This would install to the first available remote disk and then to the first available local disk. Be careful here—you don’t want to inadvertently overwrite something (see the next set of commands).
--overwritevmfs or --preservevmfs These commands specify how the installer will handle existing VMFS datastores. The commands are pretty self-explanatory.

Keyboard
This command specifies the keyboard type. It’s an optional component in the installation script.

Network
This command provides the network configuration for the ESXi host being installed. It is optional but generally recommended. Depending on your configuration, some of the additional parameters are required:

--bootproto This parameter is set to dhcp for assigning a network address via DHCP or to static for manual assignment of an IP address.
--ip This sets the IP address and is required with --bootproto=static. The IP address should be specified in standard dotted-decimal format.
--gateway This command specifies the IP address of the default gateway in standard dotted-decimal format. It’s required if you specified --bootproto=static.
--netmask The network mask, in standard dotted-decimal format, is specified with this command. If you specify --bootproto=static, you must include this value.
--hostname Specifies the hostname for the installed system.
--vlanid If you need the system to use a VLAN ID, specify it with this command. Without a VLAN ID specified, the system will respond only to untagged traffic.
--addvmportgroup This parameter is set to either 0 or 1 and controls whether a default VM Network port group is created. 0 does not create the port group; 1 does create the port group.

Reboot
This command is optional and, if specified, will automatically reboot the system at the end of installation. If you add the --noeject parameter, the CD is not ejected.

Rootpw
This is a required parameter and sets the root password for the system. If you don’t want the root password displayed in the clear, generate an encrypted password and use the --iscrypted parameter.

Upgrade
This specifies an upgrade to ESXi 5.5. The upgrade command uses many of the same parameters as install and also supports a parameter for deleting the ESX Service Console VMDK for upgrades from ESX to ESXi. This parameter is the --deletecosvmdk parameter.

This is by no means a comprehensive list of all the commands available in the ESXi installation script, but it does cover the majority of the commands you’ll see in use.
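
To put those commands together, here is a short example of what a custom installation script using static addressing might look like. Every value shown (the password, IP addressing, and hostname) is a placeholder for illustration and would need to be replaced with settings appropriate to your environment.

# Accept the VMware End User License Agreement
vmaccepteula
# Set the root password (use --iscrypted if you don't want it stored in the clear)
rootpw MyP@ssw0rd
# Install to the first available local disk, overwriting any existing VMFS datastore
install --firstdisk --overwritevmfs
# Static network configuration for the management interface
network --bootproto=static --ip=192.168.1.101 --netmask=255.255.255.0 --gateway=192.168.1.254 --hostname=esxi-01.lab.local --addvmportgroup=1
# Reboot automatically once the installation completes
reboot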

Looking back at Listing 2.1, you’ll see that the default installation script incorporates a %post section, where additional scripting can be added using either the Python interpreter or the BusyBox interpreter. What you don’t see in Listing 2.1 is the %firstboot section, which also allows you to add Python or BusyBox commands for customizing the ESXi installation. This section comes after the installation script commands but before the %post section. Any command supported in the ESXi shell can be executed in the %firstboot section, so commands such as vim-cmd, esxcfg-vswitch, esxcfg-vmknic, and others can be combined in the %firstboot section of the installation script.
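
As an illustration, a %firstboot section along the following lines could be appended to the sample script to enable SSH and add an extra port group on the host's first boot. Treat this as a sketch rather than a definitive recipe; the port group name is just an example.

# Commands run on the first boot of the newly installed host
%firstboot --interpreter=busybox
# Enable and start the SSH service
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
# Add an additional port group to the default standard vSwitch
esxcli network vswitch standard portgroup add --portgroup-name="Lab VMs" --vswitch-name=vSwitch0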

A number of commands that were supported in previous versions of vSphere (by ESX or ESXi) are no longer supported in installation scripts for ESXi 5.5, such as these:

  • autopart (replaced by install, upgrade, or installorupgrade)
  • auth or authconfig
  • bootloader
  • esxlocation
  • firewall
  • firewallport
  • serialnum or vmserialnum
  • timezone
  • virtualdisk
  • zerombr
  • The --level option of %firstboot

Once you have created the installation script you will use, you need to specify that script as part of the installation routine.
Specifying the location of the installation script as a boot option is not only how you would tell the installer to use the default script but also how you tell the installer to use a custom installation script that you’ve created. This installation script can be located on a USB flash drive or in a network location accessible via NFS, HTTP, HTTPS, or FTP. Table 2.1 summarizes some of the supported boot options for use with an unattended installation of ESXi.


Table 2.1: Boot options for an unattended ESXi installation

Boot Option – Brief Description
ks=cdrom:/path – Uses the installation script found at path on the CD-ROM. The installer checks all CD-ROM drives until the file matching the specified path is found.
ks=usb – Uses the installation script named ks.cfg found in the root directory of an attached USB device. All USB devices are searched as long as they have a FAT16 or FAT32 file system.
ks=usb:/path – Uses the installation script at the specified path on an attached USB device. This allows you to use a different filename or location for the installation script.
ks=protocol:/serverpath – Uses the installation script found at the specified network location. The protocol can be NFS, HTTP, HTTPS, or FTP.
ip=XX.XX.XX.XX – Specifies a static IP address for downloading the installation script and the installation media.
nameserver=XX.XX.XX.XX – Provides the IP address of a Domain Name System (DNS) server to use for name resolution when downloading the installation script or the installation media.
gateway=XX.XX.XX.XX – Provides the network gateway to be used as the default gateway for downloading the installation script and the installation media.
netmask=XX.XX.XX.XX – Specifies the network mask for the network interface used to download the installation script or the installation media.
vlanid=XX – Configures the network interface to be on the specified VLAN when downloading the installation script or the installation media.

Not a Comprehensive List of Boot Options

The list found in Table 2.1 includes only some of the more commonly used boot options for performing a scripted installation of ESXi. For the complete list of supported boot options, refer to the vSphere Installation and Setup Guide, available from www.vmware.com/go/support-pubs-vsphere.

To use one or more of these boot options during the installation, you’ll need to specify them at the boot screen for the ESXi installer. The bottom of the installer boot screen states that you can press Shift+O to edit the boot options.
The following code line is an example that could be used to retrieve the installation script from an HTTP URL; this would be entered at the prompt at the bottom of the installer boot screen:

<ENTER: Apply options and boot> <ESC: Cancel>
> runweasel ks=http://192.168.1.1/scripts/ks.cfg ip=192.168.1.200 netmask=255.255.255.0 gateway=192.168.1.254

Using an installation script to install ESXi not only speeds up the installation process but also helps to ensure the consistent configuration of all your ESXi hosts.

An Introduction to vSphere 5.5

The following is an excerpt from my book Mastering VMware vSphere 5.5; you can read more from it on this blog over the coming weeks. To read the full text, you can get a copy here. To test out this software, download AutoLab.

Introducing VMware vSphere 5.5

Now in its fifth generation, VMware vSphere 5.5 builds on previous generations of VMware’s enterprise-grade virtualization products. vSphere 5.5 extends fine-grained resource allocation controls to more types of resources, enabling VMware administrators to have even greater control over how resources are allocated to and used by virtual workloads. With dynamic resource controls, high availability, unprecedented fault-tolerance features, distributed resource management, and backup tools included as part of the suite, IT administrators have all the tools they need to run an enterprise environment ranging from a few servers to thousands of servers.
Exploring VMware vSphere 5.5

The VMware vSphere product suite is a comprehensive collection of products and features that together provide a full array of enterprise virtualization functionality. The vSphere product suite includes the following products and features:

VMware ESXi

The core of the vSphere product suite is the hypervisor, which is the virtualization layer that serves as the foundation for the rest of the product line. In vSphere 5 and later, including vSphere 5.5, the hypervisor comes in the form of VMware ESXi.

VMware vCenter Server

Stop for a moment to think about your current network. Does it include Active Directory? There is a good chance it does. Now imagine your network without Active Directory, without the ease of a centralized management database, without the single sign-on capabilities, and without the simplicity of groups. That is what managing VMware ESXi hosts would be like without using VMware vCenter Server. Not a very pleasant thought, is it? Now calm yourself down, take a deep breath, and know that vCenter Server, like Active Directory, is meant to provide a centralized management platform and framework for all ESXi hosts and their respective VMs. vCenter Server allows IT administrators to deploy, manage, monitor, automate, and secure a virtual infrastructure in a centralized fashion. To help provide scalability, vCenter Server leverages a backend database (Microsoft SQL Server and Oracle are both supported, among others) that stores all the data about the hosts and VMs.

vSphere Update Manager

vSphere Update Manager is an add-on package for vCenter Server that helps users keep their ESXi hosts and select VMs patched with the latest updates.

VMware vSphere Web Client and vSphere Client

vCenter Server provides a centralized management framework for VMware ESXi hosts, but it’s the vSphere Web Client (and its predecessor, the Windows-based vSphere Client) where vSphere administrators will spend most of their time.

VMware vCenter Orchestrator

VMware vCenter Orchestrator is a workflow automation engine that is automatically installed with every instance of vCenter Server. Using vCenter Orchestrator, vSphere administrators can build automated workflows for a wide variety of tasks available within vCenter Server.

vSphere Virtual Symmetric Multi-Processing

The vSphere Virtual Symmetric Multi-Processing (vSMP or Virtual SMP) product allows virtual infrastructure administrators to construct VMs with multiple virtual processors. vSphere Virtual SMP is not the licensing product that allows ESXi to be installed on servers with multiple processors; it is the technology that allows the use of multiple processors inside a VM.

vSphere vMotion and vSphere Storage vMotion

If you have read anything about VMware, you have most likely read about the extremely useful feature called vMotion. vSphere vMotion, also known as live migration, is a feature of ESXi and vCenter Server that allows an administrator to move a running VM from one physical host to another physical host without having to power off the VM. This migration between two physical hosts occurs with no downtime and with no loss of network connectivity to the VM. The ability to manually move a running VM between physical hosts on an as-needed basis is a powerful feature that has a number of use cases in today’s datacenters.

vSphere Distributed Resource Scheduler

vMotion is a manual operation, meaning that an administrator must initiate the vMotion operation. What if VMware vSphere could perform vMotion operations automatically? That is the basic idea behind vSphere Distributed Resource Scheduler (DRS). If you think that vMotion sounds exciting, your anticipation will only grow after learning about DRS. DRS, simply put, leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.

vSphere Storage DRS

vSphere Storage DRS takes the idea of vSphere DRS and applies it to storage. Just as vSphere DRS helps to balance CPU and memory utilization across a cluster of ESXi hosts, Storage DRS helps balance storage capacity and storage performance across a cluster of datastores using mechanisms that echo those used by vSphere DRS.

Storage I/O Control and Network I/O Control

VMware vSphere has always had extensive controls for modifying or controlling the allocation of CPU and memory resources to VMs. What vSphere didn’t have prior to the release of vSphere 4.1 was a way to apply the same sort of extensive controls to storage I/O and network I/O. Storage I/O Control and Network I/O Control address that shortcoming.

Profile-Driven Storage

With profile-driven storage, vSphere administrators are able to use storage capabilities and VM storage profiles to ensure that VMs are residing on storage that is able to provide the necessary levels of capacity, performance, availability, and redundancy.

vSphere High Availability

In many cases, high availability (or the lack of high availability) is the key argument used against virtualization. The most common form of this argument more or less sounds like this: “Before virtualization, the failure of a physical server affected only one application or workload. After virtualization, the failure of a physical server will affect many more applications or workloads running on that server at the same time. We can’t put all our eggs in one basket!”
VMware addresses this concern with another feature present in ESXi clusters called vSphere High Availability (HA). Once again, by nature of the naming conventions (clusters, high availability), many traditional Windows administrators will have preconceived notions about this feature. Those notions, however, are incorrect in that vSphere HA does not function like a high availability configuration in Windows. The vSphere HA feature provides an automated process for restarting VMs that were running on an ESXi host at the time of a server failure.

vSphere Fault Tolerance

While vSphere HA provides a certain level of availability for VMs in the event of physical host failure, this might not be good enough for some workloads. vSphere Fault Tolerance (FT) might help in these situations.
As we described in the previous section, vSphere HA protects against unplanned physical server failure by providing a way to automatically restart VMs upon physical host failure. This need to restart a VM in the event of a physical host failure means that some downtime (generally less than 3 minutes) is incurred. vSphere FT goes even further and eliminates any downtime in the event of a physical host failure.

vSphere Storage APIs for Data Protection and VMware Data Protection

One of the most critical aspects to any network, not just a virtualized infrastructure, is a solid backup strategy as defined by a company’s disaster recovery and business continuity plan. To help address organizational backup needs, VMware vSphere 5.5 has two key components: the vSphere Storage APIs for Data Protection (VADP) and VMware Data Protection (VDP).

Virtual SAN (VSAN)

VSAN is a major new feature included with vSphere 5.5 and the evolution of work that VMware has been doing for a few years now. Building on top of the work VMware did with the vSphere Storage Appliance (VSA), VSAN lets organizations leverage the storage found in all their individual compute nodes and turn it into... well, a virtual SAN.

vSphere Replication

vSphere Replication brings data replication, a feature typically found in hardware storage platforms, into vSphere itself. It’s been around since vSphere 5.0, when it was only enabled for use in conjunction with VMware Site Recovery Manager (SRM) 5.0. In vSphere 5.1, vSphere Replication was decoupled from SRM and enabled for use even without VMware SRM.

vFlash Read Cache

Flash Read Cache brings full support for using solid-state storage as a caching mechanism into vSphere. Using Flash Read Cache, administrators can assign solid-state caching space to VMs much in the same manner as VMs are assigned CPU cores, RAM, or network connectivity. vSphere manages how the solid-state caching capacity is allocated and assigned and how it is used by the VMs.

Installing ESXi – Disk-less, CD-less, Net-less

When installing some new lab hosts the other day, I had a bit of a situation. My hosts were in an isolated (new) network environment, they didn’t have CD/DVD drives and they were diskless. So, how do I install ESXi on these new hosts?

By far the simplest way to get ESXi up and running is simply burning the ISO to a CD and booting it from the local CD drive, but that wasn’t an option. Another option may have been to use a remote management capability (iLO, DRAC, RSA, etc.), but these hosts didn’t have that capability either. I prefer to build my hosts via a network boot option, but as I stated, these were on a network without any existing infrastructure (no PXE boot service, DHCP services, name resolution, etc.).
However, there’s a really simple way to get ESXi installed in an environment like this…

Did you know that ESXi is (for the most part) hardware agnostic? The hypervisor comes prebuilt to run on most common hardware and doesn’t tie itself to a particular platform post installation. I find the easiest way to get around the constraints listed above is to deploy onto USB storage from another machine and simply plug this USB device into the new server to boot. Usually, this “other machine” is a VM running under VMware Fusion or Workstation! Here are the five easy steps:

  1. Build a new “blank” VM and use the ESXi ISO file as the OS so Fusion or Workstation can identify it as an ESXi build.
  2. As the VM boots up, make sure you connect the USB device to the individual VM.
  3. Install ESXi as normal, choosing the USB storage as the install location.
  4. Shut down the VM after the installer initiates the final reboot.
  5. Dismount / Unplug the USB device from the “build” machine and plug it into the host you wish to boot.

OpenFiler Installation

VCP5 Blueprint Section

Objective 3.1 – Plan and Configure vSphere Storage

Activities

Boot and Install from the OpenFiler ISO
Configure and present a volume using iSCSI and NFS

Estimated Length

45 mins

Things to Watch

Make sure the partition table for OpenFiler is set up correctly
Pay close attention to the OpenFiler configuration in Part 2

Extra Credit

Identify the components of the Target IQN

Additional Reading

Part 1 – Installation

Create a new VM within VMware Workstation with the following settings:

  • VM Version 8.0
  • Disc Image: OpenFiler ISO
  • Guest OS: Other Linux 64-bit
  • VM Name: OpenFiler
  • 1x vCPU
  • 256MB RAM
  • Bridged Networking – VMnet3 (or appropriate for your lab)
  • I/O Controller: LSI Logic
  • Disk: SCSI 60GB

Edit the VM settings once it’s created and remove the following components:

  • Floppy
  • USB Controller
  • Sound Card
  • Printer

Power on the VM and hit enter to start loading the OpenFiler installer.

Hit next, select the appropriate language and accept that “ALL DATA” will be removed from this drive.

Create a custom layout partition table with the following attributes:

  • /boot – 100MB primary partition
  • Swap – 2048MB primary partition
  • / – 2048MB primary partition
  • Remainder should be left as free space

The installer will complain about not having enough RAM and ask if it can turn on swap space. Hit Yes in this dialogue box.

Accept the default EXTLINUX boot loader and configure the networking as per the Lab Setup Convention.

Select the appropriate time zone for your location and on the next screen set the root password.

Once the password is entered all configuration items are complete and installation will commence. Go make a coffee :)

After installation is complete the installer will ask to reboot. Allow this to happen and eventually a logon prompt will appear.

Part 2 of this Lab Guide will configure this newly built VM so it is ready to be used as shared storage by your ESXi host VMs.

Part 2 – Configuration

Point your browser of choice at the OpenFiler VM IP address on port 446. You will be prompted to accept an unsigned certificate.

Once at the prompt, log in with the following details:

  • Username: openfiler
  • Password: password

You will be presented with the Status / System Information page.

Click on the blue “System” tab at the top. Scroll down to the Network Access Configuration section. To allow the OpenFiler NAS to talk to the rest of our Lab, we need to configure an ACL. Add the following and then click the Update button below:

  • Name: lab.local
  • Network/Host: 192.168.199.0 (or appropriate for your lab)
  • Netmask: 255.255.255.0 (or appropriate for your lab)
  • Type: Share

Since the first type of storage we will be using within our lab environment will be iSCSI, we need to enable this within OpenFiler.
Click on the blue “Services” tab, then enable and start the CIFS, NFS and iSCSI related services.

Click the next blue tab “Volumes”. In here we will be configuring the OpenFiler virtual disk to be shared to other hosts.
Since no physical volumes exist at this point, let's create one now by clicking "create new physical volumes" and then on "/dev/sda" in the Edit Disk column.

This view shows the partition layout that was created in Part 1 – Installation: 4.1GB / 7% used space for the OpenFiler installation and around 56GB / 93% free.

Towards the bottom of this page, we are given the option to create new partitions. Usually in a Production environment there would be RAID protection and hardware-based NAS or SAN equipment, but for the purposes of learning we have no need for expensive equipment. OpenFiler can present a single Physical (or Virtual) disk that will suit our needs.
For unknown reasons, OpenFiler does not like partitions directly adjacent to the OS, so create a new partition with the following settings:

  • Mode: Primary
  • Partition Type: Physical Volume
  • Starting cylinder: 600
  • Ending cylinder: 7750

Once the new partition is created, the new layout is presented showing the additional /dev/sda4 partition.

We now have a new Physical Volume that we can continue configuring for presenting to ESXi. The next step takes us back to the blue "Volumes" tab. Now that we have an accessible physical volume, OpenFiler gives us the option to add this volume to a "Volume Group Name".
Create a Volume Group Name of LABVG with "/dev/sda4" selected.

Now that the LAB Volume Group is created, we can create an actual volume that is allocated directly to presented storage (NFS, iSCSI, etc.).

In the right-hand menu, click the "Add Volume" link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABISCSIVOL
  • Volume Description: LAB.Local iSCSI Volume
  • Required Space (MB): 40960 (40GB)
  • Filesystem / Volume type: block (iSCSI,FC,etc)

The following screen is presented, which indicates the storage volume created is now allocated to Block Storage.

The next step to present the iSCSI Block Storage to ESXi is to set up a Target and LUN.
Still under the blue "Volumes" tab, click "iSCSI Targets" in the right-hand menu and you will be presented with the "Add new iSCSI Target" option. Leave the Target IQN at its default and click the "Add" button.

Once the new iSCSI Target is created, we need to allocate this target an actual LUN. Click the grey "LUN Mapping" tab. All details for this volume can be left as is; it is just a matter of clicking the "Map" button.

The very last iSCSI step is to allow our ESXi hosts network access to our newly presented storage. Click on the grey “Network ACL” tab and change the lab.local ACL to “allow” and click Update.

At this stage, leave the CHAP Authentication settings alone, as we will discuss these in a later lab.
One last thing before closing out this lengthy lab… NFS. Along with iSCSI we also want to experiment with NFS, and we also need a place to locate our ISO repository. In production environments this is usually placed on cheaper storage, typically presented via a NAS as NFS or SMB. Let's create an NFS partition and mount point while we're here.
Within the blue Volumes tab, in the right-hand menu, click the "Add Volume" link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABNFSVOL
  • Volume Description: LAB.Local NFS Volume
  • Required Space (MB): 12512 (Max)
  • Filesystem / Volume type: Ext4

Now head to the blue "Shares" tab and click on "LAB.Local NFS Volume". This prompts you for a Sub-folder name; call it Build and then click "Create Sub-folder".

Now we want to click on this new Sub-folder and click the “Make Share” button.
The next page has three settings we need to change, and then we’re all done.
Add "Build" to the "Override SMB/Rsync share name". Change the "Share Access Control Mode" to "Public guest access", then click the "Update" button.

And then change the "Host access configuration" to be SMB = RW, NFS = RW, HTTP(S) = NO, FTP = NO, Rsync = NO. Tick the "Restart services" checkbox, then click "Edit", change "UID/GID Mapping" to "no_root_squash" and, for the last time, click the "Update" button.

Congratulations, you now have a functioning NAS virtual machine that presents virtual hard disks like any “real” NAS or SAN.
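
As a final check, the new storage can be consumed from one of your ESXi hosts. The following is only a rough sketch from the ESXi shell; it assumes the software iSCSI adapter is vmhba33, the OpenFiler VM answers on 192.168.199.7, and the NFS share ends up exported at /mnt/labvg/labnfsvol/Build (OpenFiler normally derives the export path from the volume group, volume, and share names), so verify all three in your own lab before running anything.

# Enable the software iSCSI initiator and point it at the OpenFiler target
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.199.7:3260
# Rescan the adapter so the new LUN appears
esxcli storage core adapter rescan --adapter=vmhba33
# Mount the NFS share as a datastore named Build
esxcli storage nfs add --host=192.168.199.7 --share=/mnt/labvg/labnfsvol/Build --volume-name=Build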