Updated: Problem with downloading VIBs from https://vmwaredepot.dell.com/index.xml

Update as of January 31st

It seems Dell fixed their repository, and vLCM can now download the VIBs again as it should when a VIB is added to an image.

Here is the original article I wrote:

We have been using the Dell depot in vLCM for a while, but for the past couple of days we have been seeing issues when we try to include some VIBs in an image. Adding the VIB to the image works, but as soon as we try to remediate a host, vCenter tries to download the VIB, the download fails, and so the remediation of the host fails.

We added the Dell repository to our vLCM config:

Next, we add a VIB from this repository to the image we will apply to our cluster:

But when we try to remediate the cluster, it tries to fetch the VIB from the Dell repo but fails …

It seems like the VIB that needs to be downloaded is not available at the URL vLCM gets from the Dell repository ….

I am able to access https://vmwaredepot.dell.com from my vCenter Appliance, but trying to fetch the VIB gives me a 404 error. (In this screenshot a proxy is used, but it fails with a direct connection as well.)
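This is easy to reproduce from any shell: pull the paths listed in the depot index and probe them with an HTTP HEAD request. The XML snippet below is a simplified stand-in for the real Dell metadata (the actual index format and VIB names differ), just to illustrate the check:

```shell
# Simplified example index; the real Dell metadata format and VIB names differ.
cat > /tmp/sample-index.xml <<'EOF'
<vibs>
  <vib>
    <relativePath>vib20/dcism/dcism-5.3.0.0.vib</relativePath>
  </vib>
</vibs>
EOF

BASE_URL="https://vmwaredepot.dell.com"

# Extract each relativePath and probe it; a 404 here is exactly the
# failure vLCM runs into when it tries to download the VIB.
grep -o '<relativePath>[^<]*</relativePath>' /tmp/sample-index.xml \
  | sed -e 's|<relativePath>||' -e 's|</relativePath>||' \
  | while read -r path; do
      echo "checking: $BASE_URL/$path"
      # Uncomment to actually probe the depot:
      # curl -s -o /dev/null -w '%{http_code}\n' -I "$BASE_URL/$path"
    done
```

Any path that returns 404 while the base URL itself is reachable points at a mismatch between the index file and the actual directory layout.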

It seems to me like Dell changed the directory structure on https://vmwaredepot.dell.com but forgot to update the XML index file ….. I am getting in touch with Dell to get this fixed and will update this article when I have more info.

Dell PowerStore warning when using two storage networks (can be ignored ….)

While configuring a Dell PowerStore 1000T for NVMe over TCP, I opted to use two separate storage networks for availability purposes, similar to the two storage fabrics used in a Fibre Channel SAN. So each host has two NVMe over TCP NICs, and each one connects to a separate switch. Each switch only carries the VLAN of one of the two storage networks.

For this to work (because you can only configure the same subnet on the same front-end port of each node), the cabling in my scenario has to be as follows: port 2 of both Node A and Node B connects to the first switch, which only carries storage network 1, and port 3 of both Node A and Node B connects to the second switch, which only carries storage network 2.

So far so good, but as soon as I connected both ports 2 to switch 1 and both ports 3 to switch 2, the PowerStore showed me an error: “Appliance port pairs are connected to the same data switch”. This is by design in my case, and using two storage networks for NVMe is in line with Dell’s best practices ….

But luckily there is a Dell knowledge base article (it talks about iSCSI, but the same applies to NVMe) that says:

So in this scenario, “the warning can be safely ignored” which is what I did 🙂

Possible front-end port oversubscription on Dell PowerStores

While architecting an environment with a Dell PowerStore for a customer, I noticed some interesting details in the “Dell PowerStore: Best Practices Guide”.

First a part about the on-board mezzanine card:

This means the ports on the mezzanine card share about 63.04 Gbit/sec of bandwidth. When using four ports at 25 Gbit/sec, we cannot use the full bandwidth these ports offer. So what about the I/O module slots?

This part is about the I/O module slots. Slot 0 is a 16-lane PCIe Gen3 slot, which gives you about 126 Gbit/sec and should be sufficient for the 4-port 25 Gbit/sec card …

But ….. Regarding the I/O module:

So the 4 x 25 Gbit/sec I/O module itself is 8-lane PCIe Gen3 …. So even in slot 0 you cannot use the full bandwidth of this card ….
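The numbers above follow directly from the PCIe Gen3 signalling rate: 8 GT/s per lane with 128b/130b encoding leaves roughly 7.88 Gbit/sec of usable bandwidth per lane. A quick back-of-the-envelope check:

```shell
# PCIe Gen3: 8 GT/s per lane, 128b/130b encoding -> ~7.88 Gbit/s usable per lane
awk 'BEGIN {
  per_lane = 8 * 128 / 130
  printf " 8-lane Gen3 : %6.2f Gbit/s\n", 8  * per_lane   # mezzanine card / 8-lane I/O module
  printf "16-lane Gen3 : %6.2f Gbit/s\n", 16 * per_lane   # I/O module slot 0
  printf "4 x 25 GbE   : %6.2f Gbit/s line rate\n", 4 * 25
}'
```

So an 8-lane link tops out around 63 Gbit/sec against the 100 Gbit/sec line rate of four 25 GbE ports, while the 16-lane slot 0 comfortably covers it.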

Since a later PowerStore OS release, there is a new 2-port Ethernet card that supports speeds of up to 100 Gb/s. This 100 GbE card is supported on PowerStore 1000-9200 models in the I/O module 0 slot, the 16-lane PCIe Gen3 slot, which is still limited to about 126 Gbit/sec ….

If you want to combine Ethernet connectivity and Fibre Channel, it gets even more interesting, since the preferred slot for the 4-port 32 Gbit/sec FC I/O module is slot 0 …. In slot 0 it can run at full capacity, but when adding more FC ports in slot 1, the combined bandwidth of those ports is again limited to 63.04 Gbit/sec.

So when designing a high bandwidth PowerStore environment, keep this in the back of your mind ….

Connectivity issue when upgrading Dell R620 to ESXi 5.1 build 914609

When building a couple of new ESXi hosts based on Dell R620 systems, I used the Dell customized ISO VMware-VMvisor-Installer-5.1.0-799733.x86_64-Dell_Customized_RecoveryCD_A01.iso to install ESXi.

Those Dell systems had four Broadcom NICs (2 x 1 Gb + 2 x 10 Gb) and two Intel 10 Gb NICs.

The install went fine, and I decided to upgrade to the latest patches using esxcli, since the hosts had no access to vCenter. All went well until after the reboot: I noticed all Broadcom NICs were missing from my hosts, most likely due to a driver issue, so it was time to investigate.
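For reference, the patching and the first checks looked roughly like this. The depot zip path and profile name below are placeholders, not the exact ones I used:

```shell
# Sketch of the esxcli-based patching; datastore path and profile name
# are placeholders, not the exact ones used on these hosts.
esxcli software profile update \
  -d /vmfs/volumes/datastore1/ESXi510-patch-bundle.zip \
  -p ESXi-5.1.0-standard

# After the reboot: which NICs does the host still see?
esxcli network nic list

# And are the Broadcom driver VIBs (bnx2 / bnx2x) still installed?
esxcli software vib list | grep -i bnx
```

Comparing the NIC list and the installed driver VIBs before and after the patch is the quickest way to tell whether the upgrade replaced or dropped the Broadcom driver.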