The ramdisk ‘var’ is full on an ESXi host, and the fix (without maintenance mode or reboot)

I recently encountered an issue where vMotion operations on a host would fail, the host would disconnect from vCenter, and several other strange errors appeared.

This was an HP host installed with the HP-customized ISO image, although I am not sure whether that is related to this issue.

When investigating the logs on the host, I noticed that /var on the ramdisk was full.

When issuing vdf -h, the available space for /var on the ramdisk was 0%.
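For reference, this is how ramdisk usage can be checked from the ESXi shell (illustrative commands, not output captured from the affected host):

    # Show disk usage; the Ramdisk section at the bottom of the
    # output shows each ramdisk with its use percentage
    vdf -h

    # On ESXi 5.x, esxcli can report the same information
    esxcli system visorfs ramdisk list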

Looking in /var/log, I noticed all log files were symlinks to /scratch, except for the EMU directory, where an Emulex process seemed to be filling up a log file…

After removing the log file /var/log/EMU/mili/mili2d.log and restarting hostd, space was freed up on /var in the ramdisk, but the log file returned and started filling up again.
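For completeness, that temporary workaround boils down to two commands in the ESXi shell:

    # Remove the runaway Emulex log file to free space on the var ramdisk
    rm /var/log/EMU/mili/mili2d.log

    # Restart the hostd management agent so the freed space is actually released
    /etc/init.d/hostd restart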

Googling around, I found a suggestion to remove the Emulex VIBs when no Emulex HBA is in use, but these hosts did have Emulex HBAs.
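For hosts that really do not use an Emulex HBA, that suggestion would look roughly like this (a sketch only; VIB names differ per image, so take them from the list output first):

    # Find the Emulex-related VIBs on the host
    esxcli software vib list | grep -i -E 'emulex|elx|lpfc'

    # Remove a VIB by name (placeholder name, use one from the list output)
    esxcli software vib remove -n <vib-name>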

After some more research I found a fix that did not require a reboot or maintenance mode (which is great, since vMotion had stopped working on these hosts):

Continue reading

Issue with jumbo frames after upgrading nested ESXi servers in the lab to 5.5, and the fix

IMPORTANT UPDATE AT THE END OF THE ARTICLE

In my lab, which I use to test and play with numerous VMware solutions, I have several nested ESXi servers running. Nested ESXi servers are ESXi servers running as VMs. This is not a supported configuration, but it helps me test and play around with software without having to rebuild my physical lab environment all the time.

So first, a little on the setup of my nested ESXi servers.

The VMs for my nested ESXi servers have four NICs.

The first NIC connects to “vESXi Trunk”. This is a port group on my physical ESXi hosts, configured on a vDS with VLAN type “VLAN Trunking”, so all VLANs reach my nested ESXi hosts:

[Screenshot: the “vESXi Trunk” port group with VLAN type set to “VLAN Trunking”]

I use this VLAN trunk to present my management network and my VM networks to my nested ESXi servers.

I also have a NIC that connects to my vMotion network, and two NICs that connect to my iSCSI networks. I use two subnets and two VLANs for my iSCSI connections.

[Screenshot: the vMotion and iSCSI port groups of the nested ESXi VM]
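As a side note, the iSCSI VMkernel interfaces inside a nested host can also be created from the shell; a minimal sketch, assuming standard vSwitches in the nested host with port groups named iSCSI-1 and iSCSI-2 (names and addresses are made up):

    # Create a VMkernel interface on each iSCSI port group
    esxcli network ip interface add -i vmk2 -p iSCSI-1
    esxcli network ip interface add -i vmk3 -p iSCSI-2

    # Give each interface a static address in its own subnet
    esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.0.1.11 -N 255.255.255.0
    esxcli network ip interface ipv4 set -i vmk3 -t static -I 10.0.2.11 -N 255.255.255.0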

In my physical setup I use jumbo frames in these networks, and I did the same in my nested ESXi hosts. It worked perfectly… until I upgraded my nested ESXi hosts to vSphere 5.5.
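A quick way to verify whether jumbo frames really work end to end on any ESXi host, nested or physical, is vmkping with the don’t-fragment flag (the target IP below is a placeholder):

    # Check the configured MTU on the vSwitches and VMkernel interfaces
    esxcli network vswitch standard list
    esxcli network ip interface list

    # Send an 8972-byte ping (9000 bytes minus 28 bytes of IP/ICMP headers)
    # that may not be fragmented; if this fails, jumbo frames are broken somewhere
    vmkping -d -s 8972 10.0.1.12

Continue reading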

Connectivity issue when upgrading Dell R620 to ESXi 5.1 build 914609

When building a couple of new ESXi hosts based on Dell R620 systems, I used the Dell-customized ISO VMware-VMvisor-Installer-5.1.0-799733.x86_64-Dell_Customized_RecoveryCD_A01.iso to install ESXi.

These Dell systems had four Broadcom NICs (2 x 1Gb + 2 x 10Gb) and two Intel 10Gb NICs.

The install went fine, and I decided to upgrade to the latest patches using esxcli, since the hosts had no access to vCenter yet. All went well until after the reboot, when I noticed all Broadcom NICs were missing from the hosts, most likely due to a driver issue, so it was time to investigate.
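For reference, patching with esxcli and the follow-up NIC check look roughly like this (the bundle path is a placeholder; driver names depend on the image):

    # Apply a downloaded offline patch bundle from a datastore
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi510-patch-bundle.zip

    # After the reboot, list the NICs the host actually sees
    esxcli network nic list

    # Check which Broadcom driver VIBs are installed (tg3 = 1Gb, bnx2x = 10Gb)
    esxcli software vib list | grep -i -E 'tg3|bnx2'

Continue reading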