A new KB was released yesterday (http://kb.vmware.com/kb/1033696), in which I noticed something interesting.
ESXi Installable creates a 4 GB FAT16 partition on the target device during installation, if there is sufficient space and the device is considered local.
This made me prick up my ears, as only a couple of weeks ago I was having problems using a kickstart script to deploy ESXi to some HP DL580 G7 servers. This issue arose because the ESXi installer considered the local disk controller as non-local.
<aside>
To get around this kickstart issue, I had to add “remote” to the firstdisk option on the autopart line, so it ended up looking like this:
autopart --firstdisk=local,remote --overwritevmfs
Basically, this tells the installer to try the first local disk and, if it can’t find one, to go for the first remote disk. Clearly this increases the chances of accidentally wiping a SAN LUN, but as the site had migrated to NFS only, I wasn’t too concerned.
</aside>
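For context, a minimal kickstart fragment using that workaround might look something like the sketch below. Everything other than the autopart line is an assumed placeholder (install source, password, network settings) – adapt it to your own build.

```shell
# Sketch of an ESXi scripted-install (kickstart) fragment – placeholder values
accepteula
rootpw MyPassword
# Try the first local disk; fall back to the first remote disk if none is found
autopart --firstdisk=local,remote --overwritevmfs
install url http://buildserver.example.com/esxi/
network --bootproto=dhcp --device=vmnic0
```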
So I had a quick check of a few ESXi hosts that I had rolled out recently, and sure enough no scratch partition had been created. This was unexpected behaviour, as the hosts did indeed have local spinning disks with enough space (4 GB free) for the scratch partition to be created during the install. This means there will be no persistent scratch area – the scratch will instead be created on a volatile ramdisk, which eats a bit of your host’s memory and means the scratch contents don’t survive a reboot. After further investigation I found this was also true on some DL380 G6 servers, but not on some DL380 G5 servers. It seems this is something you want to go and check yourself on a case-by-case (RAID controller-by-RAID controller) basis.
To check whether a host has a scratch partition, log in via the TSM and run:
EDIT – see here for an update.

cat /etc/vmware/locker.config
If the file is blank, then no scratch is configured.
Here it is without a scratch partition:
And here it is with a scratch partition created by the installer:
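Alternatively, you can query the configured scratch location directly with vim-cmd from the TSM – if my reading of the advanced-options interface is right, an empty value here means no persistent scratch is configured:

```shell
# Show the configured scratch location (blank/empty if none is set)
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation
```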
To create a scratch partition for these servers on their local “non-local” disks, follow the steps in the KB. You can do this after deployment via the vSphere client, the vCLI, PowerCLI or the TSM.
Here is an outline of doing it at the TSM:
1. Create a directory on the local VMFS volume
2. Run vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/DatastoreName/DirectoryName
3. Reboot the host
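Put together at the TSM, the steps above look something like this – DatastoreName and DirectoryName are placeholders for your own local datastore and chosen directory:

```shell
# 1. Create a directory on the local VMFS volume to hold the scratch data
mkdir /vmfs/volumes/DatastoreName/DirectoryName

# 2. Point the host's configured scratch location at that directory
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/DatastoreName/DirectoryName

# 3. Reboot the host for the change to take effect
reboot
```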
The KB also details how to add this configuration to your kickstart files for future deployments or rebuilds.