Tag: ITQ

Automated installation with VMware ESXi 5.5/6.0/6.5

In this blog post, we are going to automate the installation of VMware ESXi 5.5, 6.0 and 6.5. This can be done with a so-called “kickstart” configuration file, which is officially supported by VMware. The file contains the configuration for a VMware ESXi Host and covers settings like the IP address, subnet mask, hostname, license key, datastore, etc.

The kickstart configuration file can be made available in the following locations:

  • FTP
  • HTTP/HTTPS
  • NFS Share
  • USB flash drive
  • CD/DVD device

Personally, I prefer to use the HTTP protocol.



Use Case

You might ask yourself, why should I install an ESXi Host with a kickstart file? Some of the use cases I identified over the years are:

  • The very first ESXi Hosts for your SDDC environment (before VMware vCenter is deployed or vSphere Auto Deploy is configured).
  • A standalone ESXi Host for a small environment.
  • A Home Lab environment to install nested VMware ESXi Hosts.

Setup a web server

To make the kickstart configuration file available for the ESXi host we need a web server. Basically, every web server available on the market can serve this file. Here is a list of web server products that I have used: Apache, Microsoft IIS and NGINX.

In this environment/example I used a Microsoft IIS server on a Windows 10 client. Do not forget to add the .cfg extension to the MIME types.
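On IIS this can be done in the MIME Types feature of IIS Manager or with a single line of PowerShell. Below is a sketch using the WebAdministration module that ships with IIS; serving .cfg as text/plain is my own choice, any non-executable MIME type will do:

### Register the .cfg extension server-wide so IIS will serve the kickstart file
Import-Module WebAdministration
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.webServer/staticContent' -Name '.' -Value @{fileExtension='.cfg'; mimeType='text/plain'}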

Configuration file

Now it’s time to create a text file with your favourite text editor. The text file in this example is called ks.cfg. I have added two configuration files as samples: one with the minimum settings and one I normally use for my Lab environment.

Configuration file – Simple (ks.cfg)

This is a default ks.cfg configuration file with just the minimum of settings required.

#
# Sample scripted installation file
#
 
# Accept the VMware End User License Agreement
vmaccepteula
 
# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword
 
# The install media is in the CD-ROM drive
install --firstdisk --overwritevmfs
 
# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
 
# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )

Configuration file – Advanced (ks.cfg)

This is the more advanced version of the configuration file that also configures a lot of other settings like NTP servers, search domain, CEIP and a static IP address for the management interface.

### ESXi Installation Script
### Hostname: LAB-ESXi01A
### Author: M. Buijs
### Date: 2017-08-11
### Tested with: ESXi 6.0 and ESXi 6.5
 
##### Stage 01 - Pre installation:
 
    ### Accept the VMware End User License Agreement
    vmaccepteula
 
    ### Set the root password for the DCUI and Tech Support Mode
    rootpw VMware1!
 
    ### The install disk (--firstdisk accepts a priority list: local / remote / USB)
    install --firstdisk=local --overwritevmfs --novmfsondisk
 
    ### Configure a static IP address on the first network adapter
    network --bootproto=static --device=vmnic0 --ip=192.168.151.101 --netmask=255.255.255.0 --gateway=192.168.151.254 --nameserver=192.168.126.21,192.168.151.254 --hostname=LAB-ESXi01A.lab.local --addvmportgroup=0
 
    ### Reboot ESXi Host
    reboot --noeject
 
##### Stage 02 - Post installation:
 
    ### Open busybox and launch commands
    %firstboot --interpreter=busybox
 
    ### Set Search Domain
    esxcli network ip dns search add --domain=lab.local
 
    ### Add second NIC to vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
 
    ###  Disable IPv6 support (reboot is required)
    esxcli network ip set --ipv6-enabled=false
 
    ### Add NTP Server addresses
    echo "server 192.168.126.21" >> /etc/ntp.conf;
    echo "server 192.168.151.254" >> /etc/ntp.conf;
 
    ### Allow NTP through the firewall
    esxcli network firewall ruleset set --enabled=true --ruleset-id=ntpClient
 
    ### Enable NTP autostartup
    /sbin/chkconfig ntpd on;
 
    ### Rename local datastore (currently disabled because of --novmfsondisk)
    #vim-cmd hostsvc/datastore/rename datastore1 "DAS - $(hostname -s)"
 
    ### Disable CEIP
    esxcli system settings advanced set -o /UserVars/HostClientCEIPOptIn -i 2
 
    ### Enter maintenance mode
    esxcli system maintenanceMode set -e true
 
    ### Reboot
    esxcli system shutdown reboot -d 15 -r "rebooting after ESXi host configuration"


Installing an ESXi Host with Kickstart file

The following procedure needs to be performed to boot from a kickstart file:

  1. Boot the ESXi host with a VMware ESXi ISO (ISO file can be obtained from the VMware download page).
  2. Press the key combination Shift + O at boot to edit the boot options.
  3. Enter one of the following lines after runweasel (a complete example boot line follows below):
    • For an HTTP share: ks=http://%IP_or_FQDN%/ks.cfg
    • For an HTTPS share: ks=https://%IP_or_FQDN%/ks.cfg
    • For an NFS share: ks=nfs://%IP_or_FQDN%/ks.cfg
  4. The installation will start and use the kickstart configuration file (ks.cfg).
  5. After the installation is complete the ESXi Host will reboot.
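A complete boot line for the HTTP scenario could look like the example below. The web server address (192.168.151.10) is a made-up example, and the ip=, netmask= and gateway= boot options are only needed when no DHCP server is available during the installation:

runweasel ks=http://192.168.151.10/ks.cfg ip=192.168.151.101 netmask=255.255.255.0 gateway=192.168.151.254 netdevice=vmnic0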

Screenshots

Here are some screenshots of the procedure:


Article updates:

  • 2018-10-04 – This article has been updated.
  • 2018-11-16 – Code blocks were not displaying correctly.
  • 2019-11-25 – Fixed images

PowerCLI 6.3 – Start-VM Exception has been thrown by the target invocation.

Today I was running one of my favourite home lab scripts, which starts up and shuts down my lab environment. Sadly, the run ended with a PowerCLI error. So it was time to investigate what was going wrong, because the code had not changed, yet the script had stopped functioning at some point.

Error message:

The first thing to look at is the error message. The following message was displayed inside my PowerCLI console:
“Start-VM Exception has been thrown by the target invocation”

So it appears the “Start-VM” command is causing the issue. The Start-VM PowerCLI cmdlet sends a command to vCenter Server to start a particular virtual machine.
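For context, this is roughly how the cmdlet is used in such a lab script. A minimal sketch, assuming a vCenter Server at vcenter.lab.local and virtual machine names starting with LAB- (both made up for this example):

### Connect to vCenter Server (address is a made-up example)
Connect-VIServer -Server vcenter.lab.local

### Start every powered-off lab virtual machine
Get-VM -Name "LAB-*" | Where-Object { $_.PowerState -eq "PoweredOff" } | Start-VM -Confirm:$false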

Fix:

The first thing I noticed was that the system I was using was not running the most recent version of PowerCLI, so I upgraded PowerCLI from version 6.3 to version 6.5. I rebooted the system and started the script again, and it appeared that all the problems were gone :). So something surrounding the “Start-VM” cmdlet is not working correctly in PowerCLI 6.3. I could not find any information or a changelog entry related to the issue, but the upgrade fixed “something” :).

Screenshots:

The first screenshot shows the script encountering the issue with PowerCLI version 6.3. In the second screenshot the same script is run with PowerCLI 6.5 installed; as you can see, the issue is resolved.

PowerCLI 6.3 – Start-VM Exception has been thrown by the target invocation

PowerCLI 6.5 – Start-VM problem solved

vSphere Integrated Containers (VIC) v1.1

This week VMware released vSphere Integrated Containers (VIC) version 1.1. Below are the product highlights and a small introduction to the new product.

Information about VMware Integrated Containers

vSphere Integrated Containers comprises three components:

  • VMware vSphere Integrated Containers Engine, a container runtime for vSphere that allows developers who are familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. vSphere administrators can manage these workloads by using vSphere in a way that is familiar.
  • VMware vSphere Integrated Containers Registry, an enterprise-class container registry server that stores and distributes container images. vSphere Integrated Containers Registry extends the Docker Distribution open source project by adding the functionalities that an enterprise requires, such as security, identity and management.
  • VMware vSphere Integrated Containers Management Portal, a container management portal that provides a UI for DevOps teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. When integrated with vRealize Automation, more advanced capabilities become available, such as deployment blueprints and enterprise-grade Containers-as-a-Service.

With these capabilities, vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and DevOps teams. By leveraging their existing SDDC, customers can run container-based applications alongside existing virtual machine based workloads in production without having to build out a separate, specialized container infrastructure stack. As an added benefit for customers and partners, vSphere Integrated Containers is modular. So, for example, if your organization already has a container registry in production, you can use that registry with vSphere Integrated Containers Engine and vSphere Integrated Containers Management Portal.
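To give an idea of the developer experience: the Engine exposes a Docker-compatible API endpoint for each Virtual Container Host (VCH), so a standard Docker client can be pointed straight at it. A rough sketch, where the VCH address is a made-up example and the plain-HTTP port assumes the VCH was deployed without TLS:

# Query the VCH endpoint with a standard Docker client
docker -H vch01.lab.local:2375 info

# Run a container as a container VM on the vSphere cluster
docker -H vch01.lab.local:2375 run -d nginx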

New features:

  • A unified OVA installer for all three components
  • Upgrade from version 1.0
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25
  • Support for using Notary with vSphere Integrated Containers Registry
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers.

For more information read the links below.

Links:

Photon Platform 1.2 is available

Photon Platform version 1.2 was released this week. Keep in mind that support for VMware ESXi 6.0 has been dropped in Photon Platform version 1.2.

What’s new:

  • Support for Kubernetes 1.6
  • Simpler Cluster Management
  • Static and Dynamic Persistent Volumes
  • Master and Worker Node High Availability
  • Pod Networking and Enhancements
  • AD/LDAP and Security
  • Quota Based Dynamic Resource Allocation
  • SDK and API

For more information read the links below.

Links:

NLVMUG 2017

On March 16, 2017, I attended the NLVMUG 2017 in the Netherlands. Frank Denneman from VMware presented the opening keynote about “VMware Cloud on AWS”. The NLVMUG is a one-day event and offered 65 sessions, a remarkably high number; the NLVMUG is one of the largest VMUGs in the world.

This week the NLVMUG organization published the presentations and recorded sessions online. The material is available to everyone, free of charge.

Links:
NLVMUG website
NLVMUG YouTube channel

VMware Product Vulnerability (CVE-2017-5638)

A security vulnerability has been discovered in several VMware products (CVE-2017-5638). It is a critical vulnerability in Apache Struts 2 that allows remote code execution (RCE).

The vulnerability affects the following VMware products:

  • DaaS 6.x / 7.x
  • Hyperic 5.x
  • vCenter 5.5 / 6.0 / 6.5
  • vROps 6.x


Changing Guest Time Synchronization Setting From Within-Guest OS

I recently got a question about enabling and disabling guest time synchronization for virtual machines. The customer asked for a way to change the setting from within the guest operating system instead of via the vSphere Client or vSphere Web Client. Normally you would change the virtual machine time synchronization settings by hand with the vSphere Client/Web Client/HTML5 client or with a PowerCLI script, but after some searching it appears there is a solution provided by VMware.
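That solution is the command-line utility that ships with VMware Tools, which can report and toggle periodic time synchronization from inside the guest. A quick sketch of the typical commands (the Windows installation path may vary):

Windows guest:
  "C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync status
  "C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync disable

Linux guest:
  vmware-toolbox-cmd timesync status
  vmware-toolbox-cmd timesync disable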

PowerCLI Datastore Selection without Storage DRS (SDRS)

When deploying some virtual machines in a test environment, I ran into the following problem. In most cases I make use of a VMware vCenter Storage DRS cluster; when deploying a virtual machine, the best-suited datastore is then selected automatically. The problem is that not all customers are entitled to use Storage DRS, because it requires a vSphere Enterprise Plus license.

So I needed a workaround to select a datastore with enough free space. The default PowerCLI behavior is to select the first datastore detected, in alphabetical order.

So when you deploy, let's say, twenty virtual machines, they will all be placed on that first datastore, which isn't going to work well in most cases.



PowerCLI Code

To solve the problem I created the following PowerCLI code. The code selects a cluster and lists all the datastores available; the datastore with the most free space is selected for the virtual machine being deployed.

In the PowerCLI code I just create a very simple virtual machine, but you probably get the point. The magic is in the $DS line, which selects the datastore.

Requirements:

The PowerShell code is tested with the following VMware software components on Microsoft Windows:

  • PowerCLI 6.5 Update 1
  • VMware vCenter Server 6.0

### Variables
$CLUSTER = "Production"       # A Cluster Name
$FOLDER = "Deployed VMs"      # A Virtual Machine folder name located in the vCenter inventory

### Select datastores available and sort them on free space (select the one with most space free)
$DS = Get-Cluster -Name $CLUSTER | Get-Datastore | Select Name, FreeSpaceGB | Sort-Object FreeSpaceGB -Descending | Select -first 1

### Create a virtual machine called VM01
New-VM -Name VM01 -ResourcePool $CLUSTER -Datastore $DS.Name -Location $FOLDER -MemoryGB 1 -CD -DiskGB 5
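If you also want to guard against picking a datastore that is itself almost full, the selection can be extended with a free-space threshold. A small variation on the $DS line above; the 100 GB minimum is an arbitrary example value:

### Only consider datastores with more than 100 GB free (threshold is an arbitrary example)
$DS = Get-Cluster -Name $CLUSTER | Get-Datastore | Where-Object { $_.FreeSpaceGB -gt 100 } | Sort-Object FreeSpaceGB -Descending | Select-Object -First 1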

Article update:

  • 2018-07-30 – Added feature image.
  • 2018-11-17 – Updated article to support the new standards of the website.

VMware VCAP6-DCV Deployment Certification

On 1 February 2017, I passed the VMware VCAP6-DCV Deployment exam (3V0-623). This was the first VMware VCAP exam I ever took. I prepared for about two months in my Home Lab environment, and a couple of times I used the VMware Hands-on Labs. The main goal was to exercise all the objectives listed in the exam blueprint.

So what exactly is the VCAP6-DCV Deployment exam? VMware describes it as follows:

This exam tests your skills and abilities in implementation of a vSphere 6.x solution, including deployment, administration, optimization and troubleshooting.

Lab environment:

In my home lab environment I deployed the following components to complete all the exam blueprint objectives:

  • 2x – VMware vCenter 6.0 (Windows and VCSA)
  • 1x – Windows Machine with Update Manager (VUM)
  • 6x – VMware ESXi 6.0 (for vSAN and traditional storage testing)
  • 2x – Site Recovery Manager (SRM)
  • 2x – vSphere Replication
  • 1x – VMware vSphere Data Protection (VDP)
  • 1x – Dell EMC Unity VSA for iSCSI, NFS and Virtual Volumes

The hardware I used can be found on the Home Lab page. The environment used nested ESXi hosts to accommodate the required number of ESXi hosts.

Personal experience:

The exam is a lab-based exam, so it is completely different from a VMware VCP exam. The exam itself is not the most difficult one out there, at least for someone who works with VMware vSphere on a day-to-day basis. The most difficult part is time management: you get twenty-seven objectives and 205 minutes to complete them, and you need a score of 300 points to pass. That can be a bit tricky, because if you get stuck you need to move on to the next objective.

There are two unofficial study guides available on the internet, based on the VMware blueprint, and they helped me a lot. Both guides are detailed and full of information.

Links:

Veeam Backup & Replication 9.5 Update 1 Released

With the release of Veeam Backup & Replication 9.5 in November, Veeam made an announcement about vSphere 6.5 support:

“Yesterday, with impeccable timing, VMware announced the general availability of vSphere 6.5. As always, we started work on the full integration of vSphere 6.5 since its beta, however, we now need adequate time to integrate the final VDDK code and then perform full regressive testing against the final vSphere 6.5 code to ensure the reliability of our advanced vSphere integrations. Therefore, full support for vSphere 6.5 in Veeam Availability Suite 9.5 will be delivered as a part of Veeam Availability Suite 9.5 Update 1. And while an exact support timeline will depend on the results of our testing, historically we deliver support for new vSphere releases approximately two months after the final code availability.” Source: https://www.veeam.com/blog/new-veeam-availability-suite-9-5-is-available-today.html

On 20 January, Veeam finally released Update 1 for the Backup & Replication software (download link below).

Platform support:

  • Dell EMC Data Domain OS 6.0 support, including synthetic full backup performance optimizations, backup retention and health check reliability improvements
  • HPE 3PAR 3.2.2 MU3 support, including multiple API interaction improvements for added reliability and performance
  • HPE StoreOnce 3.15.1 support, bringing Instant VM Recovery to Catalyst-based backup repositories
  • Veeam Agent for Linux 1.0 support
  • Veeam Agent for Microsoft Windows 2.0 Public Beta (build 2.0.0.594) support
  • VMware vSAN 6.5 support
  • VMware vSphere 6.5 support
    • Encrypted VMs support
    • VMFS6 support
    • Virtual hardware version 13 support
    • NBD compression
    • New guest interaction API support
    • New VM tag API support

Download link: https://www.veeam.com/kb2222