
Cisco WebEx stuck at 98% when joining meeting using Mac client

I recently hit a problem on my MacBook Air when trying to use Cisco WebEx. The Mac is running OS X 10.11 El Capitan.

The WebEx client installed without an issue, but when I tried to join a meeting the connection would hang at 98%. I had to use Force Quit to kill the client.

The solution to the problem was to change the proxy settings in Network Preferences. I’m not using a proxy but WebEx seems to have difficulties with the default setting.

Open “System Preferences” then select the “Network” icon.

Now select your current network connection from the left-hand panel (in my case this was Wi-Fi). Click the “Advanced…” button on the lower right of the panel.

Click the “Proxies” tab; the proxy settings are listed on the left. I’m not using a proxy, but the default setting is “Auto Proxy Discovery”. Untick that, click “OK” and then “Apply”, and WebEx connected without any further problem.
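If you prefer the command line, the same change can be made with macOS’s built-in networksetup tool. This is a sketch assuming your network service is named “Wi-Fi” — check the output of the first command and substitute your own service name:

```shell
# List network services to find the exact name of yours (e.g. "Wi-Fi")
networksetup -listallnetworkservices

# Turn off Auto Proxy Discovery for the Wi-Fi service
sudo networksetup -setproxyautodiscovery "Wi-Fi" off

# Confirm the change: should report "Auto Proxy Discovery: Off"
networksetup -getproxyautodiscovery "Wi-Fi"
```

The GUI and the CLI change the same underlying setting, so use whichever you find easier.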

[Screenshot: the Proxies tab of the Network pane in System Preferences]

VMware Distributed Virtual Switches with single-NIC hosts

Home Lab

I have a small Home Lab for vSphere, based on two Intel NUC hosts running ESXi and an HP Microserver running FreeNAS, presenting datastores as iSCSI LUNs to the hosts.

The vCenter is a vCenter Server Appliance (VCSA) running on one of the ESXi hosts.

The initial setup was fairly straightforward and you can find plenty of other people who have done similar things.

The Problem

One of the difficulties/limitations of the Intel NUC is that each machine has a single NIC. That makes creating a Distributed Virtual Switch (DvS) quite difficult: when migrating the host running vCenter to the DvS, vCenter loses contact with the host and rolls back the migration.

I figured I could live with standard switches, and if I really needed to use DvS I could use nested ESXi with multiple virtual NICs.

A Workaround

Recently I discovered an excellent blog article by Larus Hjartarson titled “vCenter & Distributed vSwitch on two ESXi hosts with a single NIC”. This suggests a workaround for the migration issue mentioned above.

The outline of the workaround is:

  1. Create the DvS on the host that doesn’t run vCenter.
  2. Move that ESXi host to the Distributed vSwitch and create VM traffic portgroups.
  3. Clone the vCenter VM and place the clone on the ESXi host that doesn’t run the vCenter VM.
  4. Connect the newly cloned vCenter VM to a Distributed Portgroup on the ESXi host that was connected to the DvS previously.
  5. Turn off the original vCenter.
  6. Turn on the cloned vCenter and configure its network settings.
  7. Move the other host to the Distributed vSwitch.

Some things I encountered which might help others attempting this:

VCSA changes MAC address when cloned

If you’re using the VCSA for your vCenter, the MAC address changes when you clone it, so the clone won’t come up properly. There’s a VMware KB article on how to work around that: basically you edit the /etc/udev/rules.d/70-persistent-net.rules file so that the new MAC address is associated with eth0.
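For reference, the relevant line in /etc/udev/rules.d/70-persistent-net.rules looks something like the following (the MAC address here is just an example). The fix is to change the ATTR{address} value to the clone’s new MAC address so that it stays bound to eth0:

```
# Example entry; replace 00:50:56:aa:bb:cc with the cloned VM's actual MAC
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:aa:bb:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

You can read the new MAC address from the VM’s settings in the vSphere client (or from `ip link` on the VCSA console), then reboot the appliance after editing the file.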

iSCSI vmknics cannot be migrated

If you’re using iSCSI-based storage, you cannot migrate a host to the DvS until you remove the vmknic from the software iSCSI adapter, which means having no iSCSI-stored VMs running on the host at the time.
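Removing the vmknic binding from the software iSCSI adapter can be done from the host’s shell with esxcli. The adapter and vmknic names below are examples — list the current bindings first and substitute your own:

```shell
# List current iSCSI port bindings to confirm the adapter and vmknic names
esxcli iscsi networkportal list

# Unbind the vmknic from the software iSCSI adapter
# (example names: adapter vmhba33, vmknic vmk1 -- yours may differ)
esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk1
```

Once the host has been migrated to the DvS and the vmknic recreated on a distributed portgroup, you can re-add the binding with the corresponding `esxcli iscsi networkportal add` command.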

You can’t vMotion the VMs to the other host, as they’re on a standard vSwitch and the other host is on a DvS. Even if you leave the old standard vSwitch in place on the other host, vMotion will still abort because it sees the old portgroup as a “virtual intranet” (i.e. one with no uplinks). So you have to shut down the VMs to move them. That’s when it becomes important to have two DNS servers, or DNS served from outside your VMware setup, or your host names in the vCenter hosts file.
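If you go the hosts-file route, adding the ESXi host names to /etc/hosts on the VCSA keeps name resolution working while your DNS VMs are shut down. The names and addresses below are purely illustrative — use your own lab’s values:

```
# Example /etc/hosts entries on the VCSA (substitute your own names/IPs)
192.168.1.11  esxi01.lab.local  esxi01
192.168.1.12  esxi02.lab.local  esxi02
```

This matters because vCenter typically addresses hosts by the name they were added with; if that name stops resolving mid-migration, the hosts show as disconnected.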

VMware do have a Knowledge Base article on the “virtual intranet” problem, which has a workaround based on editing the vCenter config file, but I have not tried that.
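For completeness, my understanding is that the KB workaround is a vpxd.cfg setting along the lines of the fragment below, followed by a restart of the vCenter service. I haven’t tested this, so treat it as a sketch and check the KB article for the exact element names before using it:

```xml
<!-- Fragment merged into vpxd.cfg; untested here, per the KB article -->
<config>
  <migrate>
    <test>
      <CompatibleNetworks>
        <VMOnVirtualIntranet>false</VMOnVirtualIntranet>
      </CompatibleNetworks>
    </test>
  </migrate>
</config>
```

Be aware that this disables a safety check globally, so a vMotion onto a portgroup with no uplinks would no longer be blocked.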

I hope that information proves useful to anyone else using single-NIC hosts for a Home Lab. Obviously I wouldn’t recommend doing any of this in a production environment, but then I’d hope a production environment wouldn’t be using single-NIC hosts anyway!