Category Archives: Virtual Appliances

Synology DSM Virtual Machine


Synology have made a name for themselves over the past few years as one of the preferred home lab NAS solutions. In particular, they were one of the first consumer vendors to support VAAI in VMware environments and also one of the first to support SSD caching. If you ever wanted to check out their DSM 5.0 software without purchasing any hardware, the following outlines how you can spin up their DSM software in a VM for testing purposes. Obviously this is not going to give you an exact comparison to hardware, but it’s a great way to test it in your lab. If you want to skip the “build” section and just spin one of these up right away, simply download and import this OVF file to your virtual environment and move on to “Installing the DSM software”.

How to Build the DSM Virtual Machine

There are a number of quick steps that you’ll need to perform to be able to spin up your own DSM VM. First of all, you will want a couple of bits of software:

  • WinImage (to open and edit the nanoboot boot image)
  • StarWind V2V Converter (to convert the IMG file to a VMDK)
  • The nanoboot boot image itself

Once you have all the tools, you need to modify the nanoboot image boot loader.

  1. Using WinImage, open the nanoboot file and find syslinux.cfg
  2. Extract and edit the syslinux.cfg file, find the lines that start with “kernel /ZImage” and add the following to the end of each line: rmmod ata_piix
  3. Save the cfg file and inject it back into the nanoboot image, overwriting the existing file.
  4. Next use StarWind to convert the nanoboot IMG file to an IDE pre-allocated VMDK.
  5. Create a new VM and attach this VMDK as an “existing hard drive” on IDE 0:0.
  6. Set the disk to “independent non-persistent”, then continue with “Installing the DSM software” below.
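
If you’d rather script step 2 than hand-edit the file, the change is easy to express in a few lines of Python. This is only a sketch: the “kernel /zImage” prefix and the file name are taken from the steps above, so adjust them to match your particular nanoboot image.

```python
def patch_syslinux(text: str, extra: str = "rmmod ata_piix") -> str:
    """Append `extra` to every boot-loader kernel line in a syslinux.cfg."""
    out = []
    for line in text.splitlines():
        # Match the kernel entries case-insensitively; skip lines already patched.
        if line.lstrip().lower().startswith("kernel /zimage") and extra not in line:
            line = line + " " + extra
        out.append(line)
    return "\n".join(out) + "\n"

# Usage (commented so the sketch is safe to paste as-is):
# cfg = open("syslinux.cfg").read()
# open("syslinux.cfg", "w").write(patch_syslinux(cfg))
```

Running it a second time is harmless, since already-patched lines are left alone.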

Installing the DSM software

After you have a working bootable VM that emulates Synology hardware, it’s time to install the DSM software itself. You can add additional hardware to the VM at any time after this point (SCSI disks, NICs, vCPUs or memory).

    1. Add SCSI-based virtual hard disks to the VM, sized for how much space you would like available on the virtual NAS
    2. Attach the network card in the VM to the correct network. DHCP must be enabled on the network.
    3. Power on the VM
    4. Select the 3rd option in the boot menu labeled “Upgrade / Downgrade”
    5. Once the IP is shown, browse to the IP address listed on the console
    6. Follow the onscreen instructions to complete the installation wizard with the following options:
      • Install the DSM version from disk (DS214 DSM 5.0 4482)
      • Do not create a Synology Hybrid volume
    7. After some time the VM will reboot, and then power off.
    8. Power the VM back on and you will have a working Synology DSM Virtual Machine.
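
If you miss the IP address on the console, you can sweep your lab subnet for the DSM web UI instead. A rough Python sketch follows; it assumes DSM’s default web port of 5000, and the subnet in the commented example is a placeholder for your own.

```python
import socket

def port_open(host: str, port: int = 5000, timeout: float = 0.3) -> bool:
    """Return True if a TCP connect to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example sweep (assumed subnet; change to match your lab):
# hits = [f"192.168.1.{i}" for i in range(1, 255) if port_open(f"192.168.1.{i}")]
# print(hits)
```

Any host answering on port 5000 is a candidate; point your browser at it to confirm it is the DSM setup page.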

The guys at have a whole site dedicated to this stuff.

OpenFiler Installation

VCP5 Blueprint Section

Objective 3.1 – Plan and Configure vSphere Storage


Boot and Install from the OpenFiler ISO
Configure and present a volume using iSCSI and NFS

Estimated Length

45 mins

Things to Watch

Make sure the partition table for OpenFiler is set up correctly
Pay close attention to the OpenFiler configuration in Part 2

Extra Credit

Identify the components of the Target IQN
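
As a head start on the extra credit: an IQN has the form iqn.&lt;yyyy-mm&gt;.&lt;reversed domain&gt;:&lt;target name&gt;. Here is a small Python sketch that splits one into its components; the OpenFiler-style sample IQN in the comment is illustrative only, not your actual target.

```python
def parse_iqn(iqn: str) -> dict:
    """Split an iSCSI Qualified Name into its standard components."""
    naming, rest = iqn.split(":", 1) if ":" in iqn else (iqn, "")
    parts = naming.split(".")
    assert parts[0] == "iqn", "not an iqn-type name"
    return {
        "type": parts[0],                   # literal "iqn"
        "date": parts[1],                   # yyyy-mm the authority registered its domain
        "authority": ".".join(parts[2:]),   # reversed domain name
        "target": rest,                     # authority-assigned target string
    }

# Example with an illustrative OpenFiler-style default:
# parse_iqn("iqn.2006-01.com.openfiler:tsn.abcdef123456")
```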

Additional Reading

Part 1 – Installation

Create a new VM within VMware Workstation with the following settings:

  • VM Version 8.0
  • Disc Image: OpenFiler ISO
  • Guest OS: Other Linux 64-bit
  • VM Name: OpenFiler
  • 1x vCPU
  • 256MB RAM
  • Bridged Networking – VMnet3 (or appropriate for your lab)
  • I/O Controller: LSI Logic
  • Disk: SCSI 60GB

Edit the VM settings once it’s created and remove the following components:

  • Floppy
  • USB Controller
  • Sound Card
  • Printer

Power on the VM and hit enter to start loading the OpenFiler installer.

Hit next, select the appropriate language and accept that “ALL DATA” will be removed from this drive.

Create a custom layout partition table with the following attributes:

  • /boot – 100MB primary partition
  • Swap – 2048MB primary partition
  • / – 2048MB primary partition
  • Remainder: leave as free space
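
As a quick sanity check, that layout consumes only about 4.1GB of the 60GB virtual disk, which matches the free space OpenFiler will report later in Part 2:

```python
# Partition layout from the installer, in MB.
DISK_MB = 60 * 1024                              # 60GB virtual disk
LAYOUT_MB = {"/boot": 100, "swap": 2048, "/": 2048}

used_mb = sum(LAYOUT_MB.values())                # 4196 MB, roughly 4.1GB
free_mb = DISK_MB - used_mb                      # roughly 56GB left for data
print(f"used: {used_mb} MB, free: {free_mb} MB")
```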

The installer will complain about not having enough RAM and ask if it can turn on swap space. Click Yes in this dialogue box.

Accept the default EXTLINUX boot loader and configure the networking as per the Lab Setup Convention.

Select the appropriate time zone for your location and on the next screen set the root password.

Once the password is entered all configuration items are complete and installation will commence. Go make a coffee :)

After installation is complete the installer will ask to reboot. Allow this to happen and eventually a logon prompt will appear.

Part 2 of this Lab Guide will configure this newly built VM ready to be used as shared storage by your ESXi host VMs.

Part 2 – Configuration

Point your browser of choice at the OpenFiler VM IP address on port 446. You will be prompted to accept an unsigned certificate.

Once at the prompt login with the following details:

  • Username: openfiler
  • Password: password

You will be presented with the Status / System Information page.

Click on the blue “System” tab at the top and scroll down to the Network Access Configuration section. To allow the OpenFiler NAS to talk to the rest of our lab we need to configure an ACL. Add the following and then click the Update button below:

  • Name: lab.local
  • Network/Host: (or appropriate for your lab)
  • Netmask: (or appropriate for your lab)
  • Type: Share

Since the first type of storage we will be using within our lab environment is iSCSI, we need to enable it within OpenFiler.
Click on the blue “Services” tab, then enable and start the CIFS, NFS and iSCSI related services.

Click the next blue tab “Volumes”. In here we will be configuring the OpenFiler virtual disk to be shared to other hosts.
Since no physical volumes exist at this point, let’s create one now by clicking “create new physical volumes” and then clicking “/dev/sda” in the Edit Disk column.

This view shows the partition layout that was created in Part 1 – Installation: about 4.1GB / 7% used by the OpenFiler installation and around 56GB / 93% free.

Towards the bottom of this page, we are given the option to create new partitions. In a production environment there would usually be RAID protection and hardware-based NAS or SAN equipment, but for the purposes of learning we have no need for expensive kit; OpenFiler can present a single physical (or virtual) disk that will suit our needs.
For unknown reasons OpenFiler does not like partitions directly adjacent to the OS partitions, so create a new partition with the following settings:

  • Mode: Primary
  • Partition Type: Physical Volume
  • Starting cylinder: 600
  • Ending cylinder: 7750
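
Cylinder numbers are fairly opaque, but you can translate them into an approximate size. Assuming the classic 255-heads by 63-sectors CHS geometry (an assumption; check what OpenFiler actually reports for the disk), one cylinder is about 7.8MB, so cylinders 600 to 7750 work out to roughly 55GB, the same ballpark as the free space noted earlier:

```python
# Classic CHS geometry assumed: 255 heads x 63 sectors x 512-byte sectors.
BYTES_PER_CYL = 255 * 63 * 512        # 8,225,280 bytes per cylinder

start, end = 600, 7750
size_bytes = (end - start) * BYTES_PER_CYL
size_gb = size_bytes / 1024**3
print(f"partition spans ~{size_gb:.1f} GiB")
```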

Once the new partition is created, the new layout is presented showing the additional /dev/sda4 partition.

We now have a new physical volume that we can continue configuring for presentation to ESXi. The next step takes us back to the blue “Volumes” tab. Now that we have an accessible physical volume, OpenFiler gives us the option to add it to a Volume Group.
Create a Volume Group named LABVG with “/dev/sda4” selected.

Now that the LABVG Volume Group is created, we can create an actual volume that is allocated directly to presented storage (NFS, iSCSI etc).

In the right hand menu click the “Add Volume” link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABISCSIVOL
  • Volume Description: LAB.Local iSCSI Volume
  • Required Space (MB): 40960 (40GB)
  • Filesystem / Volume type: block (iSCSI,FC,etc)

The following screen is presented, which indicates the storage volume created is now allocated to Block Storage.

The next step to present the iSCSI Block Storage to ESXi is to setup a Target and LUN.
Still under the blue “Volumes” tab, click “iSCSI Targets” in the right hand menu and you will be presented with the “Add new iSCSI Target” option. Leave the Target IQN at its default and click the “Add” button.

Once the new iSCSI Target is created, we need to allocate an actual LUN to this target. Click the grey “LUN Mapping” tab. All details for this volume can be left as-is; it is just a matter of clicking the “Map” button.

The very last iSCSI step is to allow our ESXi hosts network access to our newly presented storage. Click on the grey “Network ACL” tab and change the lab.local ACL to “allow” and click Update.

At this stage leave the CHAP Authentication settings alone, as we will discuss these in a later lab.
One last thing before closing out this lengthy lab… NFS. Along with iSCSI we also want to experiment with NFS, plus we need a place to locate our ISO repository. In production environments this is usually placed on cheaper storage, typically presented via a NAS as NFS or SMB. Let’s create an NFS partition and mount point while we’re here.
Within the blue Volumes tab, in the right hand menu click the “Add Volume” link.

Scroll to the bottom of this page and create a new volume with the following details:

  • Volume Name: LABNFSVOL
  • Volume Description: LAB.Local NFS Volume
  • Required Space (MB): 12512 (Max)
  • Filesystem / Volume type: Ext4

Now head to the blue “Shares” tab and click on “LAB.Local NFS Volume”. This prompts you for a Sub-folder name; call it Build and then click “Create Sub-folder”.

Now we want to click on this new Sub-folder and click the “Make Share” button.
The next page has three settings we need to change, and then we’re all done.
Add “Build” to the “Override SMB/Rsync share name” field. Change the “Share Access Control Mode” to “Public guest access”, then click the “Update” button.

Then change the “Host access configuration” to SMB = RW, NFS = RW, HTTP(S) = NO, FTP = NO, Rsync = NO, tick the “Restart services” checkbox, and click the “Update” button. Finally, click “Edit”, change “UID/GID Mapping” to “no_root_squash”, and click “Update” one last time.

Congratulations, you now have a functioning NAS virtual machine that presents virtual hard disks like any “real” NAS or SAN.