AutoLab v3.0 release

After a very long delay, there is finally a new release of AutoLab.

This version adds support for vSphere 6.5 and 6.7, Horizon View 7.0 and 7.5, and Windows Server 2016.

The version of FreeNAS has been updated, and pfSense replaces FreeSCO as the router. These changes make AutoLab more stable and reliable, at the cost of much larger downloads. Each AutoLab package is around 2GB in size, so you should only download the version you will use.

As always, read the deployment guide with care and have fun with AutoLab.

There is also a new AutoLab 3.0 blueprint in the Ravello Repo.

AutoLab on Ravello got easier

A couple of updates on deploying AutoLab on Ravello. It is great working with a startup, as they are always making improvements to their platform and making the whole thing easier for end users.
The first one is that the default maximum RAM allocated per VM is now 8GB during the free trial. This means you don’t need to file a support request before you can power on your AutoLab VC VM.
The second is that Ravello have launched an app store for blueprints. It is called the Repo, and it contains blueprints created by Ravello users that you can copy into your library. AutoLab is now in the Repo, so you don’t need to log a support call to get the blueprint.
To find out more about using AutoLab on Ravello take a look back over the last few posts here on Labguides.com.

Home Lab Tidbits #3

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab?

This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

Xeon
  • It looks like the new Intel Xeon lineup is getting a fair amount of attention, with coverage from both AnandTech and ServeTheHome here and here.
  • The new Broadwell-DE based systems really are awesome; 128GB of RAM and 10GbE is great in such a small package.
  • And if you want 128GB of DDR4 RAM, Corsair have a bunch of new kits.

Storage

  • Will we ever see the SandForce SF3500 based SSDs?
  • Not to be confused with Intel’s new SSD now available, the S3510.
  • Getting some more NVMe details from Supermicro.
  • QNAP have a couple of AMD based NASs on the market here and here.
  • Synology have DSM 5.2 out. I’m looking forward to seeing what kind of improvement they have made to their (previously poor) SSD caching. Check out the details on SmallNetBuilder and ServeTheHome.
  • Speaking of Synology, they have some cheap(ish) large NASs that have just been released.

Network

  • One thing that continues to amaze me is the price of PCIe 10GbE NICs. You can buy a whole new motherboard with Intel NICs for a LOT less than this new card from D-Link.
  • Speaking of those Intel 10GbE NICs, the new one is the X552/X557-AT.

Software

  • EMC have some pretty nifty Software Defined Storage solutions that are now open source / available to the community.

Other

  • My friend Frank Denneman and I certainly agree on specs for home lab servers.
  • Converging standards is a good thing, but it’ll be interesting to see how the “thunderbolt networking” gets implemented.
  • If you hadn’t noticed I’m a big fan of the Intel Xeon D-1540. Supermicro have a great little box that consumes minimal power.
  • More traditional small systems (LGA2011-3) still have new options too.
  • While not strictly “home lab”, one can dream, right?

AutoLab 2.6 on Ravello Video

I have just uploaded a video of deploying AutoLab 2.6 on the Ravello platform. The process is similar to deployment using your own hardware, but has a couple of differences. Make sure you have the Deployment guide to hand as the steps are in there too.

Uploading to AutoLab on Ravello

Many AutoLab deployments are inside your firewall on a trusted network, without direct inbound access from the Internet. When AutoLab is deployed on Ravello it is outside your firewall and accessible only over the Internet. Due to this very different security situation, the Ravello build of AutoLab does not publish the NAS VM and its Build share. Usually the various pieces of licensed software are copied onto this share at the start of the build. On Ravello the ESXi and vCenter installer ISOs are uploaded and attached to the NAS VM, and additional build share files are downloaded from inside the DC after it is built.

You may wish to publish the Build share to upload additional files, then unpublish when you’re done uploading. I don’t recommend leaving the share accessible as there is no security on the share. In a later release we will have better mechanisms for uploads.

Here’s the simple process to make the Build share accessible:

1. In the Canvas select the NAS VM and click the Services tab

image

2. In the Supplied Services area click the Add button

3. Enter 445 in the Port field and click Save

image

4. Remember to click Update to apply the configuration change to the application.

image

As you may have noticed, my VMs were all powered off as I made this change; I need to power on the NAS before I can upload to it. You can still make changes to the VMs while they are running, however there may be a brief outage on the VM you change.

5. On the Summary tab of the NAS select the DNS: field and copy the entire text. This is the Internet accessible address of the NAS VM.

image

6. In Explorer use the copied text to make up the UNC path of the Build share: \\<copied text>\Build

image

You can now copy files onto the Build share, or edit files like the Automate.ini file to your requirements. Bear in mind that you are accessing a remote Samba share, so it will be very (very) slow. I plan to build FTP access into the next version for faster uploads.
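If you prefer to script the copy rather than use Explorer, the share path can be built up from the copied DNS text. This is a sketch only: the DNS name below is a made-up placeholder, and the smbclient line assumes a Linux or macOS machine with the Samba client tools installed.

```shell
# Placeholder for the text copied from the NAS VM's DNS: field
NAS_DNS="nas-autolab.example.ravellosystems.com"

# The path to type into Windows Explorer
printf '%s\n' "\\\\${NAS_DNS}\\Build"

# From Linux or macOS you could upload with smbclient instead,
# e.g. (the share has no security, so -N skips credentials):
#   smbclient "//${NAS_DNS}/Build" -N -c 'put Automate.ini'
```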

7. Once you are finished uploading and editing you should disable access to the share. In the Ravello console select the NAS VM again and click the Services tab. Click the trash can next to the service you added to delete the service. Again click Save and Update to apply the change.

image

AutoLab with vSphere 6, now with extra Cloud

I’m delighted to release AutoLab version 2.6, the biggest new feature is support for vSphere 6.0. You can download the new deployment guide and packages from the AutoLab page.

vSphere6

With vSphere 6, VMware have vastly increased the amount of RAM required to install vCenter and the minimum RAM to run both vCenter and ESXi. This means that you can no longer build the core lab with less than 16GB of RAM. If you want to add a third host, VSAN or View then you will need even more RAM, so it is good that 32GB is more achievable in a low cost home lab than it was a few years ago.

RavelloSystems

The other great new feature of AutoLab 2.6 is the ability to use public cloud to host AutoLab, so you may not even need to upgrade your lab to be able to play with AutoLab. I’ve been working with Ravello Systems, a start-up who have built a hypervisor that runs on top of AWS or Google Cloud. This is some very cool magic that I wrote about here. On the Ravello platform you can have a lab that you rent by the hour and only pay for while you’re using it. A three ESXi server AutoLab costs under $3 per hour to run. At that rate you could run the lab every evening for a month at a lower cost than buying a new machine. Another benefit of Ravello is that you can run multiple labs in parallel, something I often want to do as I’m working on different projects.
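As a rough worked example of that claim (the 4-hour evening session length is my assumption; the ~$3/hour figure is from above):

```shell
# Back-of-the-envelope monthly cost for an evening-only lab.
# rate: ~$3/hour for a three ESXi server AutoLab (from the post)
# hours_per_evening: assumed 4-hour session
rate=3
hours_per_evening=4
evenings=30
echo "$(( rate * hours_per_evening * evenings )) USD for the month"
# prints: 360 USD for the month
```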

Home Lab Build – Active Directory

In this part of the Home Lab Build series, we’ll step through the creation of a Windows 2012 R2 Domain Controller. While one of the more basic installs, it carries some fairly important tasks within a lab environment. You can find the Visio file for the diagram here.

AD-Build

If you want a basic setup with some kind of identity source, name resolution and a time sync source all in one, building a Windows AD box is going to be on your short list. Also, if you plan on studying for a Microsoft or VMware certification, having a grasp of Active Directory is a must. Like it or loathe it, Windows, and in turn Active Directory, dominates many corporate networks today. So let’s get to it.

At a high level we want to accomplish a few things:

  1. Install Windows 2012 R2 on a new VM
  2. Set an Administrator password
  3. Install VMware Tools
  4. Set a static IP
  5. Set a nameserver
  6. Set a hostname
  7. Disable the local firewall
  8. Enable Remote Desktop Access
  9. Add the Active Directory and DNS roles
  10. Set a Domain Name for the new Domain
  11. Set a Restore Mode password

First up, using the vSphere Desktop Client, create a VM with a Guest OS of Windows Server 2012 (64-bit). Change the NIC from E1000E to VMXNET3 and leave all other “Create New Virtual Machine” wizard settings at their defaults. Using Thin provisioning is a good idea in a lab environment, especially if you’re disk space constrained. If you have more than 2 physical cores on your ESXi hosts, change the vCPU count of your VM to 2, but don’t do this if your lab only has 2 physical cores. Mount the Windows 2012 R2 ISO to this VM and then power it on.

Once the Windows installer is booted, select the appropriate language and click the “install now” button. Setup will give you a choice of OS version; in this case, we want the standard GUI installation. On the following screen you’ll be asked whether you want “upgrade”, to upgrade an existing installation, or “custom”, which actually means “install Windows only”. Select “custom” and then use the whole disk without creating any partitions by just clicking “next”. The installation of the OS will now commence and will take a few minutes (depending on your hardware).

After the install is complete and the server reboots you will be asked to set an Administrator password. Once logged in to the server, VMware Tools is the first thing that should be installed. This will provide the drivers and utilities needed to get the most out of this VM. Specifically, without VMware Tools, the VMXNET3 network card we chose to use does not have default drivers in Windows. Reboot the server once the VMware Tools installation is complete.

The server can now have its network identity created. We’ll set a static IP, a subnet mask, a gateway and a name (DNS) server. We’re actually going to set the DNS server to the localhost IP because this server will have the DNS services running on it. Finally we’ll set a hostname, turn off the local firewall and then reboot once again.

IP: 192.168.20.20
SNM: 255.255.255.0
GW: 192.168.20.1
DNS: 127.0.0.1

After the server is on the network with the correct details, we will enable the ability to remotely manage it with a Remote Desktop Client and then add the “Active Directory Domain Services” and “DNS Server” roles. As we step through this wizard we will create a new forest with the domain name of “labguides.local” and configure a Directory Services Restore Mode password.

Login

Finally, once the wizard is over and the server has rebooted, you can log in to the domain with the original Administrator password that was created on first boot. If you’d like to set your domain up exactly the same as mine, you can grab the script export from my build here.

If you need more information, watch the video for a detailed guide on how to accomplish these tasks.

Home Lab Tidbits #2

Home labs are really starting to pick up some pace with many vendors tailoring products and solutions directly to this market. It’s becoming hard to keep up with what’s coming out. Of those releases, what is actually useful for your home lab?

This is an experimental aggregation post that may or may not continue, depending on feedback.

Compute

  • It looks like ServeTheHome has their hands on one of the new Xeon D SoC motherboards from Supermicro and loaded up ESXi 6.
  • Intel have released some pretty cheap Celeron SoCs, which might be useful for those with heat / power constraints (4W-6W).
  • Benchmarks for the Xeon D-1540 are starting to emerge, and it’s looking pretty good for the price.

Storage

  • The storage wars are heating up again with Western Digital and Toshiba talking about 10TB SSDs.
  • With huge capacity SSDs, we’ll also need faster small ones, like this new SSD 750 PCIe SSD from Intel.
  • Buffalo have launched a new NAS series based on the Atom, but this NAS from Thecus wins the cool vote with a built in UPS.
  • By far the cheapest NASs I’ve seen are cropping up. Last HLT it was the Synology DS115j; now QNAP have the TS-112P at the ~$100 price point.
  • Samsung have released their popular EVO 850 line in the mSATA form factor.
  • OCZ are back, now under Toshiba’s control, with the Vector 180.

Network

  • No news this time.

Software

  • Can’t afford to build your own lab? You could always run up a nested environment on vCloud Air for ~$15 a month. Or there’s always AutoLab on your laptop :)

Other

  • While there’s plenty of racks available, the startech 42u is certainly easier to move than most.

Home Lab Build – ESXi 6 /w VSAN

As part of documenting my home lab (re)build, today I’m going to build an ESXi 6 server and then bootstrap VSAN using a single host’s local disks. If you’re following along with my Home Lab Re-Build series, we’re building the first ESXi host in the diagram.

LabOverview

So why ESXi 6? Well, we want to host some VMs, we want to use just local storage, but we want it to be stable and have the ability to run nested ESXi VMs on top. Using VMware Virtual SAN on a single host provides no data redundancy, so you’ll want to keep that in mind if you decide to go this route. It’s not a supported configuration, but (in my opinion) it is really useful in a home lab environment.

First off we’ll wipe the local disks, then we’ll install ESXi 6, set a root password and set up the management network. Once it’s on the network we’ll install the vSphere Desktop Client and configure NTP and SSH. Finally we’ll configure VSAN to use the local disks of this single host. So, let’s get into it.

We’re going to mount the Gnome Partition Editor ISO to give us the ability to wipe the local disks of any existing partition information. This is required when configuring VSAN as it expects blank disks.

Once GParted is loaded we can select each disk and ensure no existing partitions exist. In the video below I at first forgot that we need to create a blank partition table prior to rebooting the host. Create a new partition table by selecting Device -> Create Partition Table, leave the table type as “msdos” and click Apply. You’ll need to repeat this task for each disk to be used by VSAN.

Once the disks have a blank partition table you can install ESXi 6 as normal; I won’t document that here as it’s a fairly basic process and is included in the video below. Once ESXi is installed, set the management network and install the new version of the vSphere Desktop Client (browse to the IP of your ESXi host for details). We need SSH / CLI access to be able to bootstrap VSAN, so enable SSH in the vSphere Desktop Client by going to Configuration -> Security Profile -> Services Properties -> SSH -> Options -> Start.

I first heard about enabling VSAN on a single host from William Lam’s post. He’s using it to get vCenter up and running without the need for shared storage, so we’re using it slightly differently but the concept is the same. He’s also got a post on using USB disks in VSAN.

Once logged into the CLI via SSH or the DCUI, run the following commands to set a default VSAN policy to work with only 1 host:

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

Now that the default policy will work with a single host, build a new VSAN “cluster”:

esxcli vsan cluster new

Finally add your SSD and magnetic/SSD capacity disks to the new cluster. You can get the SSD-DISK-ID and HDD-DISK-ID from either the UI (Configuration -> Storage -> Devices -> Right Click -> Copy Identifying Information to Clipboard) or the CLI (esxcli storage core device list):

esxcli vsan storage add -s SSD-DISK-ID -d HDD-DISK-ID
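The CLI route can be scripted. Here’s a sketch that pulls the device IDs out of sample output in the style of `esxcli storage core device list` using awk; the device names and the “Is SSD” field layout below are assumptions for illustration, so check the listing on your own host before relying on it:

```shell
# Sample fragment in the style of `esxcli storage core device list`
# (the device IDs here are made up for illustration)
sample='naa.500a07510f86d6b3
   Is SSD: true
naa.5000c500311d2447
   Is SSD: false'

# Remember the most recent device ID, print it when the SSD flag matches
ssd=$(printf '%s\n' "$sample" | awk '/^naa/{id=$1} /Is SSD: true/{print id}')
hdd=$(printf '%s\n' "$sample" | awk '/^naa/{id=$1} /Is SSD: false/{print id}')

# The command we would then run on the ESXi host
echo "esxcli vsan storage add -s $ssd -d $hdd"
```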

You’ll now have a new VSAN datastore mounted on the local ESXi host. Remember, this datastore is not redundant so use caution.

Next in the series we’ll go through building an AD controller to be used for the lab’s DNS, NTP and Directory Services.

Home Lab Build

Today I want to introduce a series that I’ve been wanting to do for a while: a step-by-step, video based home lab build. This will be the first in a series where I’ll take you through this new home lab build out so you can follow along if you like. Let’s start out with the gear.

I have 2 primary systems in my home lab that are identical and based on the E5-1620 Xeon chip from Intel. While they have plenty of power for what I need (they are quad core, 3.6 GHz CPUs), they do use a considerable amount of power, being rated at 130 watts. The CPUs are coupled with 64GB RAM per system, which is probably the biggest limit in my lab today. The RAM is a little older and non-ECC. While it was OK when I first got these systems a couple of years ago, it needs replacing. I use the Supermicro X9SRH-7TF motherboard, which supports up to 512GB if you get the right type. For me, this board provided 2 great things. First, lots of memory support. Second, onboard 10GbE ports. I hook both of these systems together with the cheapest 10GbE switch I could find, the Netgear XS708E. It’s not fancy, but it pushes packets over copper fast. The systems are housed in the super quiet and minimalist Fractal R4 case. Let’s move on to the layout of the lab I’m going to (re)build.

LabOverview

I’ve quickly drawn up how my home network is set up today and how I’m going to connect it through to my home lab, probably using an NSX or vShield Edge. You can see I have 4 ESXi hosts: along with the 2 Supermicro based systems, I also have 2 small HP N36L MicroServers. I don’t have a use for them at this stage, but I’m sure I can find something along the way. Storage is both local, in the form of VSAN on the ESXi systems, and network based, on a Synology NAS. In the lower portion of the diagram you can see the 4 VMs that I’m going to build first: an AD box, a database server and then vCenter with an external PSC. As we go along I’ll add to this diagram anything I decide to include.

And if you have any comments or questions please reach out.