
Category: vSphere 6.5

vSAN: PBM error occurred during PreCloneCheckCallback

Lately, I encountered some issues related to VMware vSAN in my Lab environment. The error message that was popping up all the time was “PBM error occurred during PreCloneCheckCallback”.

So how did the problem occur? First, some background information. My Lab environment is powered on when needed and powered off when not needed. This is, of course, a little different from a production 24×7 environment such as you have in your datacenters worldwide.

At first glance, the environment booted successfully. We are talking about Domain Controllers, vCenter Server, VMware NSX-V, nested ESXi Hosts and vRealize Automation. But when I started deploying virtual machines with vRealize Automation (vRA), based on blueprints that use vSphere Templates, issues started to occur.

vRealize Automation was failing on the provisioning task and was cleaning up the deployment because of the failed state (default behavior). So it was time to dig into the underlying infrastructure.



Environment

When the issue occurred the following software versions were used in my lab environment:

  • VMware vCenter 6.5 Update 2B
  • VMware vRealize Automation 7.3.1
  • VMware ESXi 6.5 Update 2
  • VMware vSAN 6.6

Error message(s)

Here is all the information that can be found in various locations surrounding the issue.

Error message: Screenshots

Here are the screenshots: the first one is from VMware vCenter and the second one is from vRealize Automation. As you can see, there is clearly a problem.

Error message: vRealize Automation

Here is the vRealize Automation log entry related to the VMware vSAN issue:

Error in Execute DynamicOps.Common.Client.HtmlResponseException: Service Unavailable (503)

Error message: vCenter Server

Here is the VMware vCenter log entry related to the VMware vSAN issue:

A general system error occurred - PBM error occurred during PreCloneCheckCallback (2118557)

Solution

The solution is quick, but it is more of a workaround than a fix, because the issue comes back every time I start up my lab environment.

Procedure:

  • Open a web browser.
  • Navigate to your vCenter Server URL (https://%vc%/vsphere-client).
  • Login with a user that has administrator credentials (administrator@vsphere.local).
  • Navigate to Hosts & Clusters > Select the vCenter Object.
  • Click on the Configure tab.
  • Click on the Storage Providers.
  • Click on the following two buttons:
    • Synchronizes all Storage Providers with the current state of the environment.
    • Rescan the storage provider for new storage systems and storage capabilities.
  • After pressing the buttons, you will not see any tasks running on the vCenter Server (expected behavior). After about five seconds everything should be working and provisioning should be possible.
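
If the two buttons do not resolve the error, another commonly used remedy is restarting the vSphere Profile-Driven Storage Service (SPS) on the vCenter Server Appliance, since PBM requests are handled by this service. A minimal sketch, assuming SSH/shell access to the VCSA:

# Restart the Profile-Driven Storage Service on the vCenter Server Appliance
service-control --stop vmware-sps
service-control --start vmware-sps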


HPE Storage Controller Management (ssacli)

This time I decided to do a blog post about the HPE Smart Array RAID controllers and their wonderful ssacli tool. The HPE tooling is very powerful because you can manage a VMware ESXi host online: for example, migrate from a RAID 1 volume to a RAID 10 without downtime, or change the read and write cache ratio.

As far as I know, I haven’t seen an identical tool yet from the other server hardware vendors like Cisco, Dell EMC, IBM, and Supermicro. The main difference has always been that the HPE tool can perform the operation live, without downtime.

As far as I can remember, it has been around for ages. It was already available for VMware ESX 4.0 and is still available in VMware ESXi 6.7. So thumbs-up for HPE :).

Let’s talk about controller support. The tool supports most HPE Smart Array controllers from the last 10 to 15 years; for example, the Smart Array P400 was released in 2005 and is still working fine today.

Here is an overview of supported controllers:

  • HPE Smart Array P2XX
  • HPE Smart Array P4XX
  • HPE Smart Array P7XX
  • HPE Smart Array P8XX

HPE SSACLI – Location

If you are using the HPE custom images for VMware ESXi, the tool is already pre-installed. It is packaged as a VIB (vSphere Installation Bundle), which means it can also be updated with vSphere Update Manager.

Over the years the name of the HPE Storage Controller Tool has been changed and so has the location. Here is a list of locations that have been used for the last ten years for VMware ESXi:

# Location VMware ESXi 4.0/4.1/5.0
/opt/hp/hpacucli/bin/hpacucli

# Location VMware ESXi 5.1/5.5/6.0
/opt/hp/hpssacli/bin/hpssacli

# Location VMware ESXi 6.5/6.7
/opt/smartstorageadmin/ssacli/bin/ssacli
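
To quickly verify that the tool is present on a host, you can list the installed VIBs (assuming ESXi shell or SSH access):

# List installed VIBs and filter for the ssacli package
esxcli software vib list | grep -i ssacli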


HPE SSACLI – Examples

I have collected some screenshots over the years, taken while doing maintenance on VMware ESXi servers. They give you an idea of what valuable information can be shown.


HPE SSACLI – Abbreviations

All commands have a short name to reduce the length of the total input provided to the ssacli tool:

### Shortnames:
- chassisname = ch
- controller = ctrl 
- logicaldrive = ld
- physicaldrive = pd 
- drivewritecache = dwc
- licensekey = lk

### Specify drives:
- A range of drives (one to three): 1E:1:1-1E:1:3
- Drives that are unassigned: allunassigned
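
The long and short names are interchangeable. For example, the following two commands are equivalent:

# Long form
./ssacli controller slot=1 logicaldrive 1 show status

# Short form
./ssacli ctrl slot=1 ld 1 show status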

HPE SSACLI – Status

To view the status of the controller, disks or volumes you can run all sorts of commands to get information about what is going on in your VMware ESXi server. The extensive detail is very useful for troubleshooting and gathering information about the system.

# Show - Controller Slot 1 Basic configuration
./ssacli ctrl slot=1 show config

# Show - Controller Slot 1 Detailed configuration
./ssacli ctrl slot=1 show config detail

# Show - Controller Slot 1 Status
./ssacli ctrl slot=1 show status

# Show - All Controllers Configuration
./ssacli ctrl all show config

# Show - Controller slot 1 logical drive 1 status
./ssacli ctrl slot=1 ld 1 show status

# Show - Basic Physical Disks status
./ssacli ctrl slot=1 pd all show status

# Show - Detailed Physical Disk status
./ssacli ctrl slot=1 pd all show detail

HPE SSACLI – Creating

Creating a new logical drive can be done online with the HPE Smart Array controllers. I have displayed some basic examples.

# Create - New single disk volume
./ssacli ctrl slot=1 create type=ld drives=2I:0:8 raid=0 forced

# Create - New spare disk (two defined)
./ssacli ctrl slot=1 array all add spares=2I:1:6,2I:1:7

# Create - New RAID 1 volume
./ssacli ctrl slot=1 create type=ld drives=1I:0:1,1I:0:2 raid=1 forced

# Create - New RAID 5 volume
./ssacli ctrl slot=1 create type=ld drives=1I:0:1,1I:0:2,1I:0:3 raid=5 forced

HPE SSACLI – Adding drives to logical drive

Adding drives to an already created logical drive is possible with the following commands. You need to perform two actions: adding the drive(s) and expanding the logical drive. Keep in mind: make a backup before performing the procedure.

# Add - All unassigned drives to logical drive 1
./ssacli ctrl slot=1 ld 1 add drives=allunassigned

# Modify - Extend logical drive 2 size to maximum (must be run with the "forced" flag)
./ssacli ctrl slot=1 ld 2 modify size=max forced

HPE SSACLI – Rescan controller

To issue a controller rescan, you can run the following command. This is useful when you add new drives to hot-swap bays.

### Rescan all controllers
./ssacli rescan

HPE SSACLI – Drive Led Status

The LED status of the drives can also be controlled by the ssacli utility. Examples of how to enable and disable an LED are displayed below.

# Led - Activate LEDs on logical drive 2 disks
./ssacli ctrl slot=1 ld 2 modify led=on

# Led - Deactivate LEDs on logical drive 2 disks
./ssacli ctrl slot=1 ld 2 modify led=off

# Led - Activate LED on physical drive
./ssacli ctrl slot=1 pd 1I:0:1 modify led=on

# Led - Deactivate LED on physical drive
./ssacli ctrl slot=1 pd 1I:0:1 modify led=off


HPE SSACLI – Modify Cache Ratio

Modifying the cache ratio on a running system can be useful for troubleshooting and performance benchmarking.

# Show - Cache Ratio Status
./ssacli ctrl slot=1 modify cacheratio=?

# Modify - Cache Ratio read: 50% / write: 50%
./ssacli ctrl slot=1 modify cacheratio=50/50

# Modify - Cache Ratio read: 0% / Write: 100%
./ssacli ctrl slot=1 modify cacheratio=0/100


HPE SSACLI – Modify Write Cache

Changing the write cache settings on the storage controller can be done with the following commands:

# Show - Write Cache Status
./ssacli ctrl slot=1 modify dwc=?

# Modify - Enable Write Cache on controller
./ssacli ctrl slot=1 modify dwc=enable forced

# Modify - Disable Write Cache on controller
./ssacli ctrl slot=1 modify dwc=disable forced

# Show - Write Cache Logicaldrive Status
./ssacli ctrl slot=1 logicaldrive 1 modify aa=?

# Modify - Enable Write Cache on Logicaldrive 1
./ssacli ctrl slot=1 logicaldrive 1 modify aa=enable

# Modify - Disable Write Cache on Logicaldrive 1
./ssacli ctrl slot=1 logicaldrive 1 modify aa=disable

HPE SSACLI – Modify Rebuild Priority

Viewing or changing the rebuild priority can be done on the fly, even when a rebuild is already active. I have used it myself a couple of times to lower the impact on production.

# Show - Rebuild Priority Status
./ssacli ctrl slot=1 modify rp=?

# Modify - Set rebuildpriority to Low
./ssacli ctrl slot=1 modify rebuildpriority=low

# Modify - Set rebuildpriority to Medium
./ssacli ctrl slot=1 modify rebuildpriority=medium

# Modify - Set rebuildpriority to High
./ssacli ctrl slot=1 modify rebuildpriority=high

HPE SSACLI – Modify SSD Smart Path

You can modify the HPE SSD Smart Path feature by enabling or disabling it. To make clear what HPE SSD Smart Path includes, here is an official statement by HPE:

“HP SmartCache feature is a controller-based read and write caching solution that caches the most frequently accessed data (“hot” data) onto lower latency SSDs to dynamically accelerate application workloads. This can be implemented on direct-attached storage and SAN storage.”

For example, when running VMware vSAN, SSD Smart Path must be disabled for better performance. In worse cases, the entire vSAN disk group can fail.

# Modify - Enable SSD Smart Path
./ssacli ctrl slot=1 array a modify ssdsmartpath=enable

# Modify - Disable SSD Smart Path
./ssacli ctrl slot=1 array a modify ssdsmartpath=disable

HPE SSACLI – Delete Logical Drive

Deleting a logical drive on the HPE Smart Array controller can be done with the following commands.

# Delete - Logical Drive 1
./ssacli ctrl slot=1 ld 1 delete

# Delete - Logical Drive 2
./ssacli ctrl slot=1 ld 2 delete

HPE SSACLI – Erasing Physical Drives

In some cases, you need to erase a physical drive. This can be performed with multiple erasing options, and you can also stop the erase process.

Erase patterns available:

  • Default
  • Zero
  • Random_zero
  • Random_random_zero

# Erase physical drive with default erasepattern
./ssacli ctrl slot=1 pd 2I:1:1 modify erase

# Erase physical drive with zero erasepattern
./ssacli ctrl slot=1 pd 2I:1:1 modify erase erasepattern=zero

# Erase physical drive with random zero erasepattern
./ssacli ctrl slot=1 pd 1E:1:1-1E:1:3 modify erase erasepattern=random_zero

# Erase physical drive with random random zero erasepattern
./ssacli ctrl slot=1 pd 1E:1:1-1E:1:3 modify erase erasepattern=random_random_zero

# Stop the erasing process on physical drive 1E:1:1
./ssacli ctrl slot=1 pd 1E:1:1 modify stoperase

HPE SSACLI – License key

In some cases a license key needs to be installed on the Smart Array storage controller to enable the advanced features. This can be done with the following commands:

# License key installation
./ssacli ctrl slot=1 licensekey XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

# License key removal
./ssacli ctrl slot=5 lk XXXXXXXXXXXXXXXXXXXXXXXXX delete 



Changing VMware Storage Controller to Paravirtual for CentOS 7

In this post, we are going to change the virtual storage controller from LSI Logic Parallel to VMware Paravirtual for a CentOS 7 based virtual machine running on VMware vSphere. This blog post contains step-by-step guidance for performing the operation.

In my case, the virtual machine was built in VMware Workstation and after some time migrated to VMware ESXi. The VMware Paravirtual storage controller is not supported in VMware Workstation. That is why the virtual machine came over with the “wrong” storage controller.

My 24×7 Lab environment is running shared iSCSI based storage and all virtual machines are thin provisioned. The Virtual Machine that came over from VMware Workstation is installed with CentOS 7.

Why VMware Paravirtual?

Why should you want to migrate from an LSI Logic Parallel to a VMware Paravirtual SCSI controller? Two simple reasons, and they are both good ones:

  • Lower CPU utilization
  • Higher Throughput

Personally, I have a third reason to add… compliance. All my virtual machines should be compliant with VMware best practices and my personal Home Lab standard. In my Lab environment, this means using the VMware Paravirtual controller wherever possible/supported.

VMware Official Statement 1:

PVSCSI adapters are high-performance storage adapters that can result in greater throughput and lower CPU utilization. PVSCSI adapters are best for environments, especially SAN environments, where hardware or applications drive a very high amount of I/O throughput. The VMware PVSCSI adapter driver is also compatible with the Windows Storport storage driver. PVSCSI adapters are not suitable for DAS environments.

VMware Official Statement 2:

The PVSCSI adapter offers a significant reduction in CPU utilization as well as potentially increased throughput compared to the default virtual storage adapters, and is thus the best choice for environments with very I/O-intensive guest applications.



Procedure

The most important step in the process is to make sure you have a valid backup! After that, it is just following the steps described below:

  • Create a virtual machine snapshot or backup before you begin.
  • Power-off the virtual machine.
  • Add the VMware Paravirtual Controller to the Virtual Machine. Do not change the disk controller assignment yet, only add the storage controller to the VM (screenshot 01).
  • Power-on the virtual machine.
  • Login with an account on the virtual machine (account must be able to obtain root access).
  • Start rebuilding the initial ramdisk image (screenshot 02; a dracut equivalent is shown after this list):
    mkinitrd -f -v /boot/initramfs-$(uname -r).img $(uname -r)
  • Power-off the virtual machine.
  • Assign disks to the new storage controller and remove the old storage controller (screenshot 03).
  • Power-on the virtual machine.
  • Validate that everything is working and disks are mounted (screenshot 04).
  • Remove the virtual machine snapshot or backup after you are done.
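
A note on the ramdisk step: on CentOS 7, mkinitrd is a compatibility wrapper around dracut, so the rebuild can also be done with dracut directly (equivalent to the mkinitrd command above):

# Rebuild the initramfs for the running kernel using dracut
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)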

Screenshots

Conclusion

At this point, I have migrated three virtual machines from the LSI controller to the VMware Paravirtual SCSI controller. The machines have been running for about two weeks now without any problems. So everything is compliant again ;).

If you encounter any problems or have any questions about this subject, please feel free to contact me on Twitter or via the Reply option below.





Cannot Remove Content Library in VCSA 6.5 Update 1


Opening vSphere Web Client (Flash) on Windows Server 2016


The VM Remote Console changed to VMware Workstation instead of VMRC

Lately, I discovered an annoying feature in the combination of VMware vCenter and VMware Workstation: when you install VMware Workstation on your management computer, it becomes the default Remote Console viewer. To be honest, I like the VMware Remote Console (VMRC) very much. The application has all the features and is quick and light, compared to starting VMware Workstation to open a Remote Console.

What is VMware Remote Console: “The VMware Remote Console (VMRC) is a standalone console application for Windows. VMware Remote Console provides console access and client device connection to VMs on a remote host. You will need to download this installer before you can launch the external VMRC application directly from a VMware vSphere or vRealize Automation web client.”

In October 2017, I already fixed my problem on my management computer… but after a recent VMware Workstation update, it changed the Remote Console back to VMware Workstation. Currently, there is no option in the GUI to change the default Remote Console. Ok, but how do we get VMRC back?

When I was comparing the Windows Registry, I found out that the following registry keys were different between machines. To speed up the process I created some PowerShell one-liners to fix the problem.

### View settings in registry
Get-Item HKLM:\SOFTWARE\Classes\vmrc\DefaultIcon
Get-Item HKLM:\SOFTWARE\Classes\vmrc\shell\open\command

### Change settings to VMRC
Set-Item HKLM:\SOFTWARE\Classes\vmrc\DefaultIcon -Value '"C:\Program Files (x86)\VMware\VMware Remote Console\vmrc.exe",0'
Set-Item HKLM:\SOFTWARE\Classes\vmrc\shell\open\command -Value '"C:\Program Files (x86)\VMware\VMware Remote Console\vmrc.exe" "%1"'

When you change the registry keys, the settings take effect immediately; no Operating System reboot or browser restart is required. I hope the blog post helps some vSphere Administrators who also prefer VMRC over VMware Workstation for viewing Remote Consoles.

@VMware: I would like to have an option to control the behaviour without changing registry keys by hand… 🙂 Thanks!



Environment

The issues occurred with the following combination of software:

  • VMware vCenter Server 6.5 (Update 1e)
  • VMware VMRC (10.0.2-7096020)
  • VMware Workstation (12.5.9 build-7535481)
  • Management Workstation: Windows 10 X64

VMRC Screenshots

Some screenshots that display the changes when opening the Remote Console of a Virtual Machine in VMware vCenter.


VMware VCSA 6.5 Content Library Issue

Today I was facing a VMware Content Library issue. I was removing a newly created Content Library in the vSphere Web Client, but that resulted in a Java Runtime error. The conclusion was that there was no way to remove the Content Library item.

The environment that I was troubleshooting has an external PSC and a vCenter server. The PSC and VC were running on the VMware VCSA 6.5 U1b release.
After some searching in the logs and googling, I came across the two following articles:
Link: Notes from MWhite – Can’t create a Content Library?
Link: VMware KB – OVF deployment fails after upgrading to vCenter Server Appliance 6.5 U1 (2151085)

Based on both information sources, a couple of commands would fix the problem, but there was also a note about installing patch VCSA 6.5 U1d.

I can confirm that after upgrading the VCSA (PSC and VC) to version 6.5 U1e the problem was resolved.
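
If you want to double-check which version the appliance is running, you can query the vpxd binary from the VCSA shell (assuming SSH access to the vCenter Server node):

# Print the vCenter Server version and build number
vpxd -v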

Content Library Screenshots


vCenter Server 6.5 U1 does not support deployment of OVF files

Today I was planning an NSX Manager deployment in my Home Lab… but that turned out to be a problem, because I could not upload an OVF file in the vSphere Client or the HTML5 Web Client. Looking at my Home Lab notes, I realized the last time I deployed an OVF was when the VCSA was running 6.5 without Update 1. I think something went wrong when updating to VCSA 6.5 Update 1.

Problem:

Both webpages display the problem in a different way.

vSphere Client:

With the vSphere Client the following pop-up appears when trying to deploy an OVF file:  “This version of vCenter Server does not support Deploy OVF Template using this version of vSphere Web Client. To Deploy OVF Template, login with version 6.5.0.0 of vSphere Web Client”

vSphere Client – OVF Deployment


HTML5 Web Client:

The HTML5 Web Client does not display any error at all. It just disables the option to deploy an OVF file.

HTML5 Web Client – Deployment not possible

Fix:

After some googling I found the following VMware KB article: 2151085 (link). This turned out to be the solution.
1. Connect to the vCenter Server Appliance with an SSH session and root credentials.
2. Run this command to enable access to the Bash shell:
shell.set --enabled true
3. Type shell and press Enter.
4. Navigate to /etc/vmware-content-library/config/ with this command:
cd /etc/vmware-content-library/config/
5. Create a backup of the ts-config.properties and ts-config.properties.rpmnew files with these commands:
cp ts-config.properties ts-config.properties.orig
cp ts-config.properties.rpmnew ts-config.properties.rpmnew.orig
6. Rename ts-config.properties.rpmnew to ts-config.properties:
mv ts-config.properties.rpmnew ts-config.properties
7. Restart the Content Library service:
service-control --stop vmware-content-library
service-control --start vmware-content-library
8. Refresh or close your browser and connect with one of the web interfaces.
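
Before reconnecting, you can optionally verify that the Content Library service is running again (the same service-control utility supports a status query):

# Check the status of the Content Library service
service-control --status vmware-content-library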


Automated installation with VMware ESXi 5.5/6.0/6.5

In this blog post, we are going to automate the installation of VMware ESXi 5.5, 6.0 and 6.5. This can be done with a so-called “kickstart” configuration file which is officially supported by VMware. The file contains the configuration for a VMware ESXi Host to configure settings like IP address, subnet mask, hostname, license key, datastore, etc.

The kickstart configuration file can be made available in the following locations:

  • FTP
  • HTTP/HTTPS
  • NFS Share
  • USB flash drive
  • CD/DVD device

Personally, I prefer to use the HTTP protocol.



Use Case

You might ask yourself, why should I install an ESXi Host with a kickstart file? Some of the use cases I identified over the years are:

  • The very first ESXi Hosts for your SDDC environment (before VMware vCenter is deployed or vSphere Auto Deploy is configured).
  • A standalone ESXi Host for a small environment.
  • A Home Lab environment to install nested VMware ESXi Hosts.

Setup a web server

To make the kickstart configuration file available for the ESXi host we need a web server. Basically, every web server available on the market can serve this file. Here is a list of web server products that I have used: Apache, Microsoft IIS and NGINX.

In this environment/example I used a Microsoft IIS server on a Windows 10 client. Do not forget to add the .cfg extension to the MIME types.
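
If you only need to serve the file for a few minutes in a lab, a throwaway alternative to a full web server is Python’s built-in HTTP server, started from the directory that contains ks.cfg (a minimal sketch, assuming Python is installed on the machine serving the file):

# Serve the current directory over HTTP on port 80 (Python 2)
python -m SimpleHTTPServer 80

# Or with Python 3
python3 -m http.server 80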

Configuration file

Now it’s time to create a text file with your favourite text editor. The text file in this example is called ks.cfg. I have added two configuration files as samples: one with the minimum settings and one I normally use for my Lab environment.

Configuration file – Simple (ks.cfg)

This is a default ks.cfg configuration file with just the minimum of settings required.

#
# Sample scripted installation file
#
 
# Accept the VMware End User License Agreement
vmaccepteula
 
# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword
 
# Install on the first local disk available on the machine
install --firstdisk --overwritevmfs
 
# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
 
# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )

Configuration file – Advanced (ks.cfg)

This is the more advanced version of the configuration file that also configures a lot of other settings like NTP servers, search domain, CEIP and a static IP address for the management interface.

### ESXi Installation Script
### Hostname: LAB-ESXi01A
### Author: M. Buijs
### Date: 2017-08-11
### Tested with: ESXi 6.0 and ESXi 6.5
 
##### Stage 01 - Pre installation:
 
    ### Accept the VMware End User License Agreement
    vmaccepteula
 
    ### Set the root password for the DCUI and Tech Support Mode
    rootpw VMware1!
 
    ### The install media (priority: local / remote / USB)
    install --firstdisk=local --overwritevmfs --novmfsondisk
 
    ### Configure a static IP address on the first network adapter
    network --bootproto=static --device=vmnic0 --ip=192.168.151.101 --netmask=255.255.255.0 --gateway=192.168.151.254 --nameserver=192.168.126.21,192.168.151.254 --hostname=LAB-ESXi01A.lab.local --addvmportgroup=0
 
    ### Reboot ESXi Host
    reboot --noeject
 
##### Stage 02 - Post installation:
 
    ### Open busybox and launch commands
    %firstboot --interpreter=busybox
 
    ### Set Search Domain
    esxcli network ip dns search add --domain=lab.local
 
    ### Add second NIC to vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
 
    ###  Disable IPv6 support (reboot is required)
    esxcli network ip set --ipv6-enabled=false
 
    ### Add NTP Server addresses
    echo "server 192.168.126.21" >> /etc/ntp.conf;
    echo "server 192.168.151.254" >> /etc/ntp.conf;
 
    ### Allow NTP through the ESXi firewall
    esxcli network firewall ruleset set --ruleset-id=ntpClient --enabled=true
 
    ### Enable NTP autostartup
    /sbin/chkconfig ntpd on;
 
    ### Rename local datastore (currently disabled because of --novmfsondisk)
    #vim-cmd hostsvc/datastore/rename datastore1 "DAS - $(hostname -s)"
 
    ### Disable CEIP
    esxcli system settings advanced set -o /UserVars/HostClientCEIPOptIn -i 2
 
    ### Enable maintenance mode
    esxcli system maintenanceMode set -e true
 
    ### Reboot
    esxcli system shutdown reboot -d 15 -r "rebooting after ESXi host configuration"


Installing an ESXi Host with Kickstart file

The following procedure needs to be performed to boot from a kickstart file:

  1. Boot the ESXi host with a VMware ESXi ISO (ISO file can be obtained from the VMware download page).
  2. Press the key combination “Shift + O” at boot.
  3. Enter one of the following lines after runweasel (a complete example is shown below this list):
    • For an HTTP share: ks=http://%IP_or_FQDN%/ks.cfg
    • For an HTTPS share: ks=https://%IP_or_FQDN%/ks.cfg
    • For an NFS share: ks=nfs://%IP_or_FQDN%/ks.cfg
  4. The installation will start and use the kickstart configuration file (ks.cfg).
  5. After the installation is complete the ESXi Host will reboot.
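
For example, with the kickstart file hosted on a web server at 192.168.151.10 (an example address), the complete boot option line entered at the prompt would look like this:

# Boot options entered after pressing Shift + O (example IP address)
runweasel ks=http://192.168.151.10/ks.cfg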

Screenshots

HTTP Path to ks.cfg file on webserver
ESXi Host is downloading/reading file from HTTP mirror

Article updates:

  • 2018-10-04 – This article has been updated.
  • 2018-11-16 – Code blocks were not displaying correctly.


vSphere Integrated Containers (VIC) v1.1

This week VMware released vSphere Integrated Containers (VIC) version 1.1. Below are the product highlights and a small introduction to the new product.

Information about VMware Integrated Containers

vSphere Integrated Containers comprises three components:

  • VMware vSphere Integrated Containers Engine, a container runtime for vSphere that allows developers who are familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. vSphere administrators can manage these workloads by using vSphere in a way that is familiar.
  • VMware vSphere Integrated Containers Registry, an enterprise-class container registry server that stores and distributes container images. vSphere Integrated Containers Registry extends the Docker Distribution open source project by adding the functionalities that an enterprise requires, such as security, identity and management.
  • VMware vSphere Integrated Containers Management Portal, a container management portal that provides a UI for DevOps teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. When integrated with vRealize Automation, more advanced capabilities become available, such as deployment blueprints and enterprise-grade Containers-as-a-Service.

With these capabilities, vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and DevOps teams. By leveraging their existing SDDC, customers can run container-based applications alongside existing virtual machine based workloads in production without having to build out a separate, specialized container infrastructure stack. As an added benefit for customers and partners, vSphere Integrated Containers is modular. So, for example, if your organization already has a container registry in production, you can use that registry with vSphere Integrated Containers Engine and vSphere Integrated Containers Management Portal.

New features:

  • A unified OVA installer for all three components
  • Upgrade from version 1.0
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25
  • Support for using Notary with vSphere Integrated Containers Registry
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers.

