
vSphere Bugs & Minor Irritations

I’ve recently reported a couple of annoying bugs to VMware. Both have been around for a long time, almost certainly since the days of Virtual Center 2.0, maybe even earlier.

  • If a VM has nothing in the “Notes” annotation, vCenter displays the “Notes” from the previously selected VM instead.
    So if you have a machine with a note saying “Delete after 1st Jan 2011”, and you then view the summary of a machine with no note set, it’ll display the “Delete after 1st Jan 2011”. That could be bad…
    The problem only occurs if the VM has never had any Notes annotation. If you set one and then remove it, it shows the blank note correctly.
    **UPDATE** This appears to be fixed in 4.1 U1 – it only seems to affect VMs which were deployed without notes under VirtualCenter 2.x.
  • When deploying a VM from a template, the Tasks & Events history doesn’t correctly name the template from which the VM was deployed.
    As you can see in the example image, vCenter lists the deployed VM name instead of the template. 


**UPDATE** VMware have acknowledged this as a bug, but it will not be fixed until vSphere 5, later this year.

And there’s a cosmetic thing which winds me up. I haven’t reported it as a bug but if anyone from VMware reads this, maybe they can have a word. It’s really trivial….

The vSphere Client menu item reads “Send Ctrl-Alt-del”. What has “del” done to deprive it of a capital “D”?

VMware Fusion also has a “Send Ctrl-Alt-Del” menu option but it gets the capitalisation right. I can only offer this as proof that Macs are better than PCs… or something.

vSphere HA Slot Sizes

vSphere HA slot sizes are used to calculate the number of VMs that can be powered on in an HA cluster with “Host failures cluster tolerates” selected. The slot size is calculated based on the size of the reservations on the VMs in the cluster. HA Admission Control then prevents new VMs from being powered on if doing so would not leave enough slots available should a host fail.

The slot size for a cluster can be seen by going to the Summary Page for the cluster and clicking the “Advanced Runtime Info” link in the HA box.

If none of the VMs have CPU or RAM reservations, a default of 256MHz is used for CPU, and 0MB (plus the VM memory overhead) for RAM.

The number of slots per host is derived by taking the total available CPU/RAM for the host and dividing by the slot size. Some CPU is reserved for the system, so the available amount will usually be a little lower than the full capacity. So a host with 2x quad-core 2.4GHz CPUs (19.2GHz total) and no VM CPU or RAM reservations has 73 slots, and a two-host cluster set to protect against a single host failure will only allow 73 VMs to be powered on.

Obviously this allows a very minimal amount of resource for each VM, so either reservations should be set for each VM, or the slot size can be manually adjusted (see the VMware vSphere Availability Guide (pdf) for full details).

Note that the slot size is used for admission control calculations only. It has no direct effect on the resources available to VMs should an HA event occur.
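As a rough illustration of the arithmetic, here is a small sketch using the figures above. The ~500MHz system reservation is an assumed figure purely for illustration; the real amount varies by host and ESX version.

```python
def slots_per_host(capacity_mhz, reserved_mhz, slot_mhz):
    """HA slots a host provides: usable CPU divided by the CPU slot size."""
    return (capacity_mhz - reserved_mhz) // slot_mhz

# 2x quad-core 2.4GHz = 19,200MHz total; 256MHz default CPU slot size.
# With no system reservation the host would offer 75 slots...
print(slots_per_host(19200, 0, 256))    # -> 75
# ...but with a hypothetical ~500MHz reserved for the system, 73 remain.
print(slots_per_host(19200, 500, 256))  # -> 73
```

The same division applies to RAM; whichever resource yields fewer slots wins.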

There is a VMware Knowledgebase article (1010594) which has some details of the differences between VI3 and vSphere 4.x.

vSphere: Attempting to add NFS datastore – “Error performing operation: Unable to create object, volume Name not valid”

We’ve had this error on ESX 3.5 and 4.0 hosts, both ESX and ESXi.
When trying to add a new NFS datastore we get the above error message, both in the vSphere Client and when using the command-line tools.

Logging on to the Service Console on an affected host, “esxcfg-nas -l” also results in the same error.

The cause is invalid entries in the /etc/vmware/esx.conf file. Fortunately, it seems to be possible to remove the bad entries and the host then starts working properly without a reboot.

In our case, the bad entries looked like:
/nas/./enabled = "false"
/nas/./host = "1"
/nas/./share = "0"

whereas valid entries are recognisable:
/nas/vmdesktop/enabled = "true"
/nas/vmdesktop/host = "10.0.0.100"
/nas/vmdesktop/readOnly = "false"
/nas/vmdesktop/share = "/vol/vmdesktop"

As I mention above, fixing it on ESX isn’t too hard, just edit /etc/vmware/esx.conf using nano or vi as root, being careful not to affect any other lines.

On ESXi, it’s a little more tricky as you can’t readily log on and edit files without enabling Technical Support mode.

It is, however, possible to edit esx.conf via the Remote CLI (Linux or Windows) or using the vMA.

Below are some example commands which grab the esx.conf file, edit it using ‘sed’, and then put the altered file back on the host.

vifs.pl --get /host/esx.conf work.conf
sed -e '/\/nas\/\./d' work.conf > fixed.conf
vifs.pl --put fixed.conf /host/esx.conf

The ‘sed’ command will probably need changing for you, depending on what the invalid lines look like. On Linux or the vMA you can just use nano or vi to do the edit, but on Windows you may find that Notepad and WordPad either don’t display the file clearly or convert the line endings from Unix format to DOS. Using the free Vim for Windows (http://www.vim.org/download.php) will let you keep the file in the same format.
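If you’d rather sidestep the Windows line-ending problem entirely, a few lines of Python do the same filtering as the sed command. This is just a sketch; the "/nas/./" prefix below matches the invalid entries from our case and may need adjusting for yours.

```python
def strip_bad_nas(conf_text):
    """Drop invalid '/nas/./...' lines from esx.conf text, keeping all others."""
    kept = [line for line in conf_text.split("\n")
            if not line.startswith("/nas/./")]
    return "\n".join(kept)

sample = '/nas/./host = "1"\n/nas/vmdesktop/host = "10.0.0.100"'
print(strip_bad_nas(sample))  # only the valid vmdesktop line survives
```

When reading and writing the real file, open it in binary mode (`"rb"`/`"wb"`) so the Unix line endings are preserved.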

After making those changes, it was possible to add NFS datastores as normal.

Add multiple datastores to multiple vSphere hosts

In large vSphere environments it can be very tedious to add multiple NFS datastores to lots of hosts.
PowerCLI comes to the rescue as usual.

I needed to add some datastores to all the clustered hosts in a single datacenter, but to skip our non-clustered standalone hosts which are used for backups.

A bit of PowerCLI which should be fairly self-explanatory:

#
#  Add Datastores to all hosts in all clusters in a specified datacenter
#  If a host isn't in a cluster it won't get the datastore
#  Easy enough to change to do all hosts in a datacenter, all in vCenter etc
#
#  Change the value below to point it to your own vCenter
$myvCenter = "vcenter.example.com"

# Array of arrays below holds NFS-hostname, NFS-path and Datastore name, should be easy to add to
$nfsArray = @(
              @("nfsserver1","/vol/nfspath1","VMstore1"),
              @("nfsserver2","/vol/nfspath2","VMstore2"),
              @("nfsserver3","/vol/nfspath3","VMstore3")
             )

$datacenter = Read-Host -Prompt "Datacenter name"

Write-Host "Datacenter is $datacenter"

Connect-VIServer -Server $myvCenter

#
# Get all hosts in all clusters in the named datacenter
$objAllHosts = Get-Datacenter -Name $datacenter | Get-Cluster | Get-VMHost

ForEach ($objHost in $objAllHosts) {
    ForEach ($nfsItem in $nfsArray) {
        Write-Host "Adding datastore" $nfsItem[2] "with path" $nfsItem[0]":"$nfsItem[1] "to" $objHost
        New-Datastore -Nfs -NfsHost $nfsItem[0] -Path $nfsItem[1] -Name $nfsItem[2] -VMHost $objHost
    }
}
Disconnect-VIServer -Server $myvCenter -Confirm:$false

vSphere 4.1 ESXi Installable on USB?

Looking at the ESXi Installable Setup guides for 4.1 and 4.0 reveals a change. In 4.0 under System Requirements, a USB drive was listed as a possible boot device, as well as being usable for installation. So you could stick in your installation media and select USB as the destination.

Under 4.1, USB is no longer listed in the documentation as a bootable device, though boot from SAN devices via HBAs is now supported.

I’ve not checked whether you can actually install ESXi Installable 4.1 to a USB drive. It may well be possible, but I suspect it’s dropped off the “Supported” list of options.

The only reason I can see to drop support for putting Installable onto USB is to encourage people to purchase and use ESXi Embedded from their hardware supplier instead.

**UPDATE 2nd March 2011**

VMware have posted a clarification Knowledgebase article about the support for USB and SD for booting ESXi.

You can install ESXi 4.x to a USB or SD flash storage device directly attached to the server. This option is intended to allow you to gain experience with deploying a virtualized server without relying on traditional hard disks. However, VMware supports this option only under these conditions:
  • The server on which you want to install ESXi 4.x is on the ESXi 4.x Hardware Compatibility Guide.

    AND

  • You have purchased a server with ESXi 4.x Embedded on the server from a certified vendor.

    OR

  • You have used a USB or SD flash device that is approved by the server vendor for the particular server model on which you want to install ESXi 4.x on a USB or SD flash storage device.

If you intend to install ESXi 4.x on a USB or SD flash storage device while ensuring VMware support for it and you have not purchased a server with embedded ESXi 4.x, consult your server vendor for the appropriate choice of a USB or SD flash storage device.

So as I suspected, it’s only supported if you install either using “Embedded” or on a device approved by the server vendor.

It does work absolutely fine on a normal USB stick; in fact, the host this web server runs from boots from one. It’s just not supported by VMware.

Worth noting that an approved 2GB USB stick from HP will cost you approx £75, about 15 times the going rate…

After vSphere 4.1 – what will be going

vSphere 4.1 has been out for a couple of days now.

As well as the new features which have been covered extensively (see What’s New), the release notes list some future changes for the product range. They’re not really hidden but haven’t been given much publicity.

  • ESX will be dropped in future releases, with ESXi being the hypervisor product for vSphere.
  • Future versions of vCenter Update Manager will not scan or remediate guest OSes. I presume the cross-licensing costs of using Shavlik outweighed any benefit. UM will continue to scan and update ESXi hosts, and presumably aid in the conversion of ESX hosts to ESXi.
  • VMware vCenter Converter plugin and VMware vCenter Guided Consolidation are also going away in future versions. Converter will continue in a standalone format rather than a vCenter plugin.
  • Web Access isn’t available on ESXi, so that will be going away when ESX is dropped too.

There are a few other items being dropped such as support for some versions of Linux in guests, VMI paravirtualization support, and MSCS in Windows 2000 but they aren’t as widely used.

Passing info from PowerCLI into your VM using guestinfo variables

For a project at work we’ve been trying to pass information into a VM without connecting the VM to the network.
This is in order to set up some config within both Windows and Linux VMs. We decided to explore the use of GuestInfo variables which are held in memory in VMware Tools within the Guest VM, but which can be set from the host.

From the ESX Service Console of the host you can use
vmware-cmd <cfgfile> setguestinfo <variable> <value>
to set a value and vmware-cmd <cfgfile> getguestinfo <variable> to read the value. Note that you don’t need to use the “guestinfo.” prefix when using these commands.

Within the VM guest OS the values can be set/read using:
(Windows)
vmtoolsd.exe --cmd "info-set guestinfo.<variable> <value>"
vmtoolsd.exe --cmd "info-get guestinfo.<variable>"
(vmtoolsd.exe is usually in C:\Program Files\VMware\VMware Tools)

(Linux)
vmware-guestd --cmd "info-set guestinfo.<variable> <value>"
vmware-guestd --cmd "info-get guestinfo.<variable>"
(vmware-guestd is usually in /usr/sbin)

It wasn’t immediately obvious where the guestinfo variables live with regard to the PowerCLI vmConfig properties. Googling didn’t reveal much useful info, so I thought I’d try the wonders of new technology and use Twitter. I’ve been following Carter Shanklin of VMware on Twitter since I attended a London UK VMware User Group meeting a couple of months ago, and as he’s the Product Manager for PowerCLI and the SDK, I thought I’d ask.

He quickly pointed me at the VMware SDK documentation for the ConfigInfo object and the extraConfig object in particular.

A bit of further reading and some experiments led me to the following code:
$vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
$extra = New-Object VMware.Vim.optionvalue
$extra.Key="guestinfo.test"
$extra.Value="TestFromPCLI"
$vmConfigSpec.extraconfig += $extra
$vm = Get-View -ViewType VirtualMachine | where { $_.name -eq "MyVMName" }
$vm.ReconfigVM($vmConfigSpec)

That will set the guestinfo.test property to “TestFromPCLI”. Once that’s been set it can be read by the VM.

The guestinfo property can have multiple Key/Value pairs, so you can pass quite a few variables through to a VM. These can only be set when a VM is powered on and running VMware Tools, as the value is stored in the VM’s memory, and as far as I can tell the contents are lost when the VM reboots.

However, there is another extraConfig object which can also be set: machine.id. Again this can be read from within the VM (replace guestinfo.<variable> with machine.id in the above code snippets), but this one gets written to the VM’s VMX config file and will thus survive reboots.

You could squish several bits of info into that one object/variable, for example a unique identifier and the VM name so that the VM can self-configure.
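For example, you could join the fields with a delimiter when setting machine.id and split them apart again inside the guest. The “|”-separated convention below is purely illustrative, not anything VMware-defined:

```python
# Illustrative only: pack several values into the single machine.id string
# and unpack them again inside the guest's self-configuration script.
def pack_machine_id(fields):
    return "|".join(fields)

def unpack_machine_id(value):
    return value.split("|")

machine_id = pack_machine_id(["a1b2c3d4", "webserver01", "10.0.0.50"])
print(machine_id)  # -> a1b2c3d4|webserver01|10.0.0.50
uid, vm_name, ip = unpack_machine_id(machine_id)
```

Just pick a delimiter that can never appear in the values themselves.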

Unable to upgrade from Virtual Center evaluation license

I encountered a problem with Virtual Center (VC, vCenter) 2.5 Update 4 where trying to change from the evaluation license to a proper license server resulted in a message that there were “Not enough licenses for this operation”.

There were various suggestions on the web, mostly involving stopping and starting the Virtual Center and License Server services. None of them worked. The only other suggestion was to remove all the hosts from the Virtual Center, add the license, then re-add the hosts back in. This would lose any nice filing of VMs in the “Virtual Machines & Templates” view, and would generally be a bit of a pain.

As a last resort, I thought I’d have a Google round for possible registry settings which may affect licensing.

I found a blog posting about changing the license type that VC looks for within the registry, which also mentioned a “LicensePath” key.

Looking in the registry of my VC server there was no “LicensePath” key, so I created one, stopped and restarted the Virtual Center service, and found that VC now found the license server correctly and didn’t complain about “Not enough licenses”.

So to recap, open regedit, navigate to
HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter

Create a new String value called LicensePath and enter the info for your license server, e.g. [email protected]
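If you have several VC servers to fix, the same change can be captured as a .reg file and imported with regedit. The license server address below is a placeholder; substitute your own:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter]
"LicensePath"="[email protected]"
```

Remember to restart the Virtual Center service after importing it.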

Hostname/IP bug in ESXi 3.5 Update 3

Last week I was working on an HA cluster of VMware ESXi hosts.

Whatever I tried I couldn’t get the hosts to play nicely when HA was turned on. Further investigation showed that the hosts were resolving their own names to a previous IP address.

Checking the console on each host showed the correct information, but the error messages when turning HA on indicated that they were still resolving to their old incorrect addresses.

The usual problem with HA is that DNS records don’t match what is actually set up, but in this case DNS checked out fine.

It would appear that:

  1. ESXi uses a local ‘/etc/hosts’ file for resolving names in preference to DNS. You can view this file by pointing a browser at http://youresxservername/host/hosts
  2. The local hosts file doesn’t get updated when you change the settings using the console interface. It keeps whatever was in there the first time you set up the hostname and IP address for the host

Most of the time this doesn’t matter much as the host doesn’t often need to look itself up, but when VMware HA is turned on, name resolution becomes much more important.

There are two ways to edit the /etc/hosts file. Because ESXi doesn’t have the normal Console Operating System (COS) that full ESX enjoys, both methods are relatively awkward.

  1. Use ‘unsupported’ mode to get a local root shell and edit using ‘vi /etc/hosts’
  2. Use the Remote CLI to change the file remotely.
    1. vifs.pl --sessionfile <yoursessionfile> --get /host/hosts C:\temp
    2. Edit the file with notepad
    3. vifs.pl --sessionfile <yoursessionfile> --put C:\temp\hosts /host/hosts

With that done HA started working again. Probably worth doing a reboot just to make sure it’s ‘taken’.

iPhoto mangles GPS EXIF data, even from an iPhone 3G

A while ago I started geo-tagging some of my photos (basically, adding GPS location data to the EXIF data in the image files).

I did the geo-tagging using the excellent HoudahGeo. Because of a limitation in iPhoto, you have to geo-tag your pictures before importing them into the iPhoto library, as it won’t re-read or change the location info it stores in the library database.

After importing to iPhoto, I would edit the photos and then upload them using Connected Flow‘s superb FlickrExport (the beta versions support uploading of location co-ordinates).

I found that, for some photos, the resulting location on the maps in Flickr was incorrect. More specifically, they were the wrong side of the Greenwich Meridian.

Further investigation showed that HoudahGeo and FlickrExport (and Flickr) were all blameless.
If I tagged a photo and examined it in OS X’s Preview, the location showed correctly. If I imported the photo into iPhoto, the co-ordinates still showed up fine. But when I exported the photo, using the File Export option, and examined the result in Preview, the location had changed.

This image demonstrates the issue, giving the correct co-ordinates before import into iPhoto, the correct co-ordinates inside iPhoto, but the wrong co-ordinates when subsequently exported from iPhoto using FlickrExport or File Export.

There’s a discussion about the problem in a Flickr group from a year or so ago, so I certainly wasn’t the first person to notice it.

I reported the problem to Apple, along with a link to the above image on 30th May.

On Friday, I was fortunate enough to receive a shiny new iPhone 3G. The phone has a built-in GPS (strictly speaking, a-GPS) and will geo-tag photos taken with the camera. The same switching of co-ordinates persists if you put the images through iPhoto, and quite a lot of other people have started to take notice, including a MacWorld article so I’m hoping Apple will finally fix this issue.

A work around if using FlickrExport is to re-tag all your pictures in FlickrExport before exporting them to Flickr, or if using HoudahGeo, to re-tag them after they’ve been edited and exported from iPhoto.

I don’t know whether Aperture has the same problem.

UPDATE: iPhoto 7.1.4 released 23rd July 2008 seems to fix the problem