In this blog post, we are going to talk about VMware ESXi and the ability to start & stop a virtual machine from the ESXi shell.
Recently, I ran into a situation where the vCenter Server was powered off manually and the VMware Host Client on the responsible ESXi host would not open, so I could not start VMware vCenter with a single mouse click. It was time to figure out how to boot a virtual machine from the VMware ESXi shell.
I was lucky because SSH was enabled on the ESXi Host so I was able to connect and log in with the root account, but then I ran into the issue… Which command do I need to power on a virtual machine? I knew for sure that it was possible but it took me some time to find the right commands.
So based on that experience, it was time for a quick write-up to show you how to boot a virtual machine from the shell with VMware ESXi. To complete the article I also added the commands for powering off.
Note: The environment was running on vSphere 6.7 Update 2. So all commands are valid for vSphere 6.7 and probably older versions of VMware vSphere.
Start a Virtual Machine from Shell
Here is a step-by-step procedure for booting a virtual machine from the VMware ESXi shell.
# Step 01: Connect with SSH (for example with PuTTY).
# Step 02: Log in as a user with root privileges
# Step 03a: View ESXi host virtual machine inventory
vim-cmd vmsvc/getallvms
# Step 03b: View ESXi host virtual machine inventory with filter
vim-cmd vmsvc/getallvms | grep %VMname%
# Step 04: Write down the VMid, in my case:
183
# Step 05: Verify the current power status
vim-cmd vmsvc/power.getstate %VMid%
# Step 06: Power-on virtual machine
vim-cmd vmsvc/power.on %VMid%
# Step 07: After the command has been executed, the virtual machine will be powered on. To verify, you can use:
vim-cmd vmsvc/power.getstate %VMid%
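If you already know (part of) the virtual machine name, steps 3 to 6 can be combined. This is just a convenience sketch; it assumes the VM is named "vCenter" and that the grep matches exactly one inventory entry:
# Look up the VMid by name and power on the virtual machine (example name: vCenter)
VMID=$(vim-cmd vmsvc/getallvms | grep "vCenter" | awk '{print $1}')
vim-cmd vmsvc/power.getstate $VMID
vim-cmd vmsvc/power.on $VMID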
Screenshots
Here are the screenshots of performing a VMware vCenter virtual machine startup from the VMware ESXi shell.
Stop a Virtual Machine from Shell
Here is the procedure for stopping a virtual machine from the VMware ESXi Shell.
# Step 01: Connect with SSH (for example with PuTTY).
# Step 02: Log in as a user with root privileges
# Step 03a: View ESXi host virtual machine inventory
vim-cmd vmsvc/getallvms
# Step 03b: View ESXi host virtual machine inventory with filter
vim-cmd vmsvc/getallvms | grep %VMname%
# Step 04: Write down the VMid, in my case:
183
# Step 05: Verify the current power status
vim-cmd vmsvc/power.getstate %VMid%
# Step 06: Power-off virtual machine
vim-cmd vmsvc/power.off %VMid%
# Step 07: After the command has been executed, the virtual machine will be powered off. To verify, you can use:
vim-cmd vmsvc/power.getstate %VMid%
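Keep in mind that power.off performs a hard power-off. When VMware Tools is running inside the guest, a graceful guest shutdown is usually the better choice:
# Graceful guest shutdown via VMware Tools (instead of a hard power-off)
vim-cmd vmsvc/power.shutdown %VMid%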
Conclusion
You can easily perform this procedure if you know the right commands. There is not a lot of new information available about the vim-cmd command but I added the source(s) below.
This time I decided to do a blog post about the HPE Smart Array RAID controllers and their wonderful ssacli tool. The HPE tooling is very powerful because you can manage a VMware ESXi host online: for example, migrate from a RAID 1 volume to a RAID 10 without downtime, or change the read and write cache ratio.
As far as I know, the other server hardware vendors like Cisco, Dell EMC, IBM, and Supermicro do not offer an identical tool yet. The main difference has always been that the HPE tool can perform these operations live, without downtime.
As far as I can remember, it has been around for ages. It was already available for VMware ESX 4.0 and is still available in VMware ESXi 6.7. So thumbs-up for HPE :).
Let's talk about controller support. The tool supports most HPE Smart Array controllers released over the last 10 to 15 years; for example, the Smart Array P400 was released in 2005 and still works fine today.
Here is an overview of supported controllers:
HPE Smart Array P2XX
HPE Smart Array P4XX
HPE Smart Array P7XX
HPE Smart Array P8XX
HPE SSACLI – Location
If you are using the HPE custom images for VMware ESXi, the tool is already pre-installed. It is delivered as a VIB (vSphere Installation Bundle), which means it can also be updated with vSphere Update Manager.
Over the years the name of the HPE storage controller tool has changed, and so has its location.
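From memory, these are typical names and paths, but the exact location depends on the tool generation and the HPE image version, so treat them as examples and verify on your own host:
# Locate the tool on an ESXi host (name and path vary with the image version)
find /opt -name "ssacli" -o -name "hpssacli" -o -name "hpacucli" 2>/dev/null
# Typical example locations:
# /opt/smartstorageadmin/ssacli/bin/ssacli (recent ssacli releases)
# /opt/hp/hpssacli/bin/hpssacli (older hpssacli releases)
# /opt/hp/hpacucli/bin/hpacucli (legacy hpacucli releases)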
I have collected some screenshots over the years, taken while doing maintenance on VMware ESXi servers. They give you an idea of what valuable information can be shown.
HPE SSACLI – Abbreviations
All commands have a short name to reduce the length of the total input provided to the ssacli tool:
### Shortnames:
- chassisname = ch
- controller = ctrl
- logicaldrive = ld
- physicaldrive = pd
- drivewritecache = dwc
- licensekey = lk
### Specify drives:
- A range of drives (one to three): 1E:1:1-1E:1:3
- Drives that are unassigned: allunassigned
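For example, the following two commands are equivalent (the slot number is just an example):
# Long form
./ssacli controller slot=1 logicaldrive all show status
# Short form
./ssacli ctrl slot=1 ld all show status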
HPE SSACLI – Status
To view the status of the controller, disks or volumes you can run all sorts of commands to get information about what is going on in your VMware ESXi server. The extensive detail is very useful for troubleshooting and gathering information about the system.
# Show - Controller Slot 1 Controller configuration basic
./ssacli ctrl slot=1 show config
# Show - Controller Slot 1 Controller configuration detailed
./ssacli ctrl slot=1 show detail
# Show - Controller Slot 1 full configuration
./ssacli ctrl slot=1 show config detail
# Show - Controller Slot 1 Status
./ssacli ctrl slot=1 show status
# Show - All Controllers Configuration
./ssacli ctrl all show config
# Show - Controller slot 1 logical drive 1 status
./ssacli ctrl slot=1 ld 1 show status
# Show - Physical Disks status basic
./ssacli ctrl slot=1 pd all show status
# Show - Physical Disks status detailed
./ssacli ctrl slot=1 pd all show detail
# Show - Logical Disk status basic
./ssacli ctrl slot=1 ld all show status
# Show - Logical Disk status detailed
./ssacli ctrl slot=1 ld all show detail
HPE SSACLI – Creating
Creating a new logical drive can be done online with the HPE Smart Array controllers. I have displayed some basic examples.
# Create - New single disk volume
./ssacli ctrl slot=1 create type=ld drives=2I:0:8 raid=0 forced
# Create - New spare disk (two defined)
./ssacli ctrl slot=1 array all add spares=2I:1:6,2I:1:7
# Create - New RAID 1 volume
./ssacli ctrl slot=1 create type=ld drives=1I:0:1,1I:0:2 raid=1 forced
# Create - New RAID 5 volume
./ssacli ctrl slot=1 create type=ld drives=1I:0:1,1I:0:2,1I:0:3 raid=5 forced
HPE SSACLI – Adding drives to logical drive
Adding drives to an already created logical drive is possible with the following commands. You need to perform two actions: adding the drive(s) and expanding the logical drive. Keep in mind: make a backup before performing the procedure.
# Add - All unassigned drives to logical drive 1
./ssacli ctrl slot=1 ld 1 add drives=allunassigned
# Modify - Extend logical drive 2 size to maximum (must be run with the "forced" flag)
./ssacli ctrl slot=1 ld 2 modify size=max forced
HPE SSACLI – Rescan controller
To issue a controller rescan, you can run the following command. This can be useful when you have added new drives to hot-swap bays.
### Rescan all controllers
./ssacli rescan
HPE SSACLI – Drive Led Status
The LED status of the drives can also be controlled with the ssacli utility. The examples below show how to enable and disable a LED.
# Led - Activate LEDs on logical drive 2 disks
./ssacli ctrl slot=1 ld 2 modify led=on
# Led - Deactivate LEDs on logical drive 2 disks
./ssacli ctrl slot=1 ld 2 modify led=off
# Led - Activate LED on physical drive
./ssacli ctrl slot=0 pd 1I:0:1 modify led=on
# Led - Deactivate LED on physical drive
./ssacli ctrl slot=0 pd 1I:0:1 modify led=off
HPE SSACLI – Modify Cache Ratio
Modifying the cache ratio on a running system can be useful for troubleshooting and performance benchmarking.
# Show - Cache Ratio Status
./ssacli ctrl slot=1 modify cacheratio=?
# Modify - Cache Ratio read: 25% / write: 75%
./ssacli ctrl slot=1 modify cacheratio=25/75
# Modify - Cache Ratio read: 50% / write: 50%
./ssacli ctrl slot=1 modify cacheratio=50/50
# Modify - Cache Ratio read: 0% / Write: 100%
./ssacli ctrl slot=1 modify cacheratio=0/100
HPE SSACLI – Modify Write Cache
Changing the write cache settings on the storage controller can be done with the following commands:
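As a hedged sketch (the exact syntax can differ per controller model and firmware, so check ssacli help on your system), the drive write cache (dwc) setting is typically toggled like this:
# Show the current cache configuration of the controller in slot 1
./ssacli ctrl slot=1 show detail | grep -i cache
# Enable the physical drive write cache on the controller in slot 1
./ssacli ctrl slot=1 modify dwc=enable
# Disable the physical drive write cache again
./ssacli ctrl slot=1 modify dwc=disable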
HPE SSACLI – Modify Rebuild Priority
Viewing or changing the rebuild priority can be done on the fly, even when a rebuild is already active. I have used this myself a couple of times to lower the impact on production.
# Show - Rebuild Priority Status
./ssacli ctrl slot=1 modify rp=?
# Modify - Set rebuildpriority to Low
./ssacli ctrl slot=1 modify rebuildpriority=low
# Modify - Set rebuildpriority to Medium
./ssacli ctrl slot=1 modify rebuildpriority=medium
# Modify - Set rebuildpriority to High
./ssacli ctrl slot=1 modify rebuildpriority=high
HPE SSACLI – Modify SSD Smart Path
You can modify the HPE SSD Smart Path feature by enabling or disabling it per array. To make clear what HPE SSD Smart Path includes, here is an official statement by HPE:
“HP SmartCache feature is a controller-based read and write caching solution that caches the most frequently accessed data (“hot” data) onto lower latency SSDs to dynamically accelerate application workloads. This can be implemented on direct-attached storage and SAN storage.”
For example, when running VMware vSAN, SSD Smart Path must be disabled for better performance. In the worst case, the entire vSAN disk group can even fail.
# Note: This command requires the array naming type like A/B/C/D/E
# Modify - Enable SSD Smart Path
./ssacli ctrl slot=1 array a modify ssdsmartpath=enable
# Modify - Disable SSD Smart Path
./ssacli ctrl slot=1 array a modify ssdsmartpath=disable
HPE SSACLI – Delete Logical Drive
Deleting a logical drive on the HPE Smart Array controller can be done with the following commands.
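As a hedged example (the logical drive number is just an example, and this operation destroys all data on the volume):
# Delete - Remove logical drive 2 on the controller in slot 1 (destructive!)
./ssacli ctrl slot=1 ld 2 delete forced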
HPE SSACLI – Erase Physical Drive
In some cases, you need to erase a physical drive. This can be performed with multiple erase patterns, and you can also stop a running erase process.
Erase patterns available:
Default
Zero
Random_zero
Random_random_zero
# Erase physical drive with default erasepattern
./ssacli ctrl slot=1 pd 2I:1:1 modify erase
# Erase physical drive with zero erasepattern
./ssacli ctrl slot=1 pd 2I:1:1 modify erase erasepattern=zero
# Erase physical drive with random zero erasepattern
./ssacli ctrl slot=1 pd 1E:1:1-1E:1:3 modify erase erasepattern=random_zero
# Erase physical drive with random random zero erasepattern
./ssacli ctrl slot=1 pd 1E:1:1-1E:1:3 modify erase erasepattern=random_random_zero
# Stop the erasing process on physical drive 1E:1:1
./ssacli ctrl slot=1 pd 1E:1:1 modify stoperase
HPE SSACLI – License key
In some cases a license key needs to be installed on the Smart Array storage controller to enable the advanced features. This can be done with the following command:
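As a hedged example (the key below is a placeholder, and the exact syntax may differ per controller generation):
# Show the installed license keys on the controller in slot 1
./ssacli ctrl slot=1 licensekey all show
# Install a license key (replace the placeholder with your own key)
./ssacli ctrl slot=1 add licensekey=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX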
In this post, we are going to change the Virtual Storage Controller from LSI Logic Parallel to VMware Paravirtual for a CentOS 7 based Virtual Machine that is running on VMware vSphere. This blog post will contain step-by-step guidance for performing the operation.
In my case the virtual machine was built in VMware Workstation and after some time migrated to VMware ESXi. The VMware Paravirtual Storage Controller is not supported in VMware Workstation. That is why the virtual machine came over with the “wrong” storage controller.
My 24×7 Lab environment runs on shared iSCSI-based storage and all virtual machines are thin provisioned. The virtual machine that came over from VMware Workstation is installed with CentOS 7.
Why VMware Paravirtual?
Why should you want to migrate from an LSI Logic Parallel to a VMware Paravirtual SCSI Controller? Two simple reasons and they are two good ones:
Lower CPU utilization
Higher Throughput
Personally, I have a third reason to add… compliance. All my virtual machines should be compliant with the VMware best practices and my personal Home Lab standard. In my Lab environment, this means using the VMware Paravirtual controller wherever possible/supported.
The most important step in the process is to make sure you have a valid backup! After that, it is just following the steps described below:
Create a virtual machine snapshot or backup before you begin.
Power-off the virtual machine.
Add the VMware Paravirtual Controller to the Virtual Machine. Do not change the disk controller assignment yet, only add the storage controller to the VM (screenshot 01).
Power-on the virtual machine.
Log in with an account on the virtual machine (the account must be able to obtain root access) and make sure the guest will recognize the new controller; see the sketch after this list.
Power-off the virtual machine.
Assign the disks to the new storage controller and remove the old storage controller (screenshot 03).
Power-on the virtual machine.
Validate that everything is working and disks are mounted (screenshot 04).
Remove the virtual machine snapshot or backup after you are done.
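On CentOS 7 the VMware Paravirtual driver (vmw_pvscsi) normally ships with the kernel, but before reassigning the disks it is worth making sure the driver ends up in the initramfs. A minimal sketch, assuming a default dracut setup:
# Check whether the PVSCSI driver is already present in the current initramfs
lsinitrd /boot/initramfs-$(uname -r).img | grep vmw_pvscsi
# If it is missing, force-include the driver and rebuild the initramfs
echo 'add_drivers+=" vmw_pvscsi "' > /etc/dracut.conf.d/vmw_pvscsi.conf
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)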
Screenshots
Here are some screenshots from the procedure:
Conclusion
At this point, I have migrated three virtual machines from the LSI Logic Parallel controller to the VMware Paravirtual SCSI controller. The machines have been running for about two weeks now without any problems. So everything is compliant again ;).
If you encounter any problems or have any questions about this subject please feel free to contact me on Twitter or the Reply option below.
Source
Here are some interesting related articles that I used for creating this blog post:
In this blog post, we are going to automate the installation of VMware ESXi 5.5, 6.0 and 6.5. This can be done with a so-called “kickstart” configuration file which is officially supported by VMware. The file contains the configuration for a VMware ESXi Host to configure settings like IP address, subnet mask, hostname, license key, datastore, etc.
The kickstart configuration file can be made available in the following locations:
FTP
HTTP/HTTPS
NFS Share
USB flash drive
CD/DVD device
Personally, I prefer to use the HTTP protocol.
Use Case
You might ask yourself, why should I install an ESXi Host with a kickstart file? Some of the use cases I identified over the years are:
The very first ESXi Hosts for your SDDC environment (before VMware vCenter is deployed or vSphere Auto Deploy is configured).
A standalone ESXi Host for a small environment.
A Home Lab environment to install nested VMware ESXi Hosts.
Setup a web server
To make the kickstart configuration file available for the ESXi host we need a web server. Basically, every web server available on the market can serve this file. Here is a list of web server products that I have used: Apache, Microsoft IIS and NGINX.
In this environment/example I used a Microsoft IIS server on a Windows 10 client. Do not forget to add the .cfg extension to the MIME types.
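For Microsoft IIS, one way to do that is a small web.config file next to the kickstart file. A minimal sketch, assuming no MIME type for .cfg is defined yet:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Serve .cfg files as plain text so the ESXi installer can download ks.cfg -->
      <mimeMap fileExtension=".cfg" mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>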
Configuration file
Now it's time to create a text file with your favourite text editor. The text file in this example is called ks.cfg. I have added two configuration files as samples: one with the minimum settings and one I normally use for my Lab environment.
Configuration file – Simple (ks.cfg)
This is a default ks.cfg configuration file with just the minimum of settings required.
#
# Sample scripted installation file
#
# Accept the VMware End User License Agreement
vmaccepteula
# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword
# The install media is in the CD-ROM drive
install --firstdisk --overwritevmfs
# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )
Configuration file – Advanced (ks.cfg)
This is the more advanced version of the configuration file that also configures a lot of other settings like NTP servers, search domain, CEIP and a static IP address for the management interface.
### ESXi Installation Script
### Hostname: LAB-ESXi01A
### Author: M. Buijs
### Date: 2017-08-11
### Tested with: ESXi 6.0 and ESXi 6.5
##### Stage 01 - Pre installation:
### Accept the VMware End User License Agreement
vmaccepteula
### Set the root password for the DCUI and Tech Support Mode
rootpw VMware1!
### The install media (priority: local / remote / USB)
install --firstdisk=local --overwritevmfs --novmfsondisk
### Set the network to DHCP on the first network adapter
network --bootproto=static --device=vmnic0 --ip=192.168.151.101 --netmask=255.255.255.0 --gateway=192.168.151.254 --nameserver=192.168.126.21,192.168.151.254 --hostname=LAB-ESXi01A.lab.local --addvmportgroup=0
### Reboot ESXi Host
reboot --noeject
##### Stage 02 - Post installation:
### Open busybox and launch commands
%firstboot --interpreter=busybox
### Set Search Domain
esxcli network ip dns search add --domain=lab.local
### Add second NIC to vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
### Disable IPv6 support (reboot is required)
esxcli network ip set --ipv6-enabled=false
### Add NTP Server addresses
echo "server 192.168.126.21" >> /etc/ntp.conf;
echo "server 192.168.151.254" >> /etc/ntp.conf;
### Allow NTP through firewall
esxcli network firewall ruleset set --ruleset-id=ntpClient --enabled=true
### Enable NTP autostartup
/sbin/chkconfig ntpd on;
### Rename local datastore (currently disabled because of --novmfsondisk)
#vim-cmd hostsvc/datastore/rename datastore1 "DAS - $(hostname -s)"
### Disable CEIP
esxcli system settings advanced set -o /UserVars/HostClientCEIPOptIn -i 2
### Enable maintenance mode
esxcli system maintenanceMode set -e true
### Reboot
esxcli system shutdown reboot -d 15 -r "rebooting after ESXi host configuration"
Installing an ESXi Host with Kickstart file
The following procedure needs to be performed to boot from a kickstart file:
Boot the ESXi host with a VMware ESXi ISO (ISO file can be obtained from the VMware download page).
Press the key combination “shift + o” at boot.
Enter one of the following lines after runweasel (see the example after this list):
For an HTTP share: ks=http://%IP_or_FQDN%/ks.cfg
For an HTTPS share: ks=https://%IP_or_FQDN%/ks.cfg
For an NFS share: ks=nfs://%IP_or_FQDN%/ks.cfg
The installation will start and use the kickstart configuration file (ks.cfg).
After the installation is complete the ESXi Host will reboot.
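As a concrete example (the web server address below is hypothetical), booting against the ks.cfg from this article would look like this on the boot options line:
runweasel ks=http://192.168.151.5/ks.cfg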
Screenshots
Here are some screenshots of the procedure:
Article updates:
2018-10-04 – This article has been updated.
2018-11-16 – Code blocks were not displaying correctly.
A security vulnerability has been discovered in some VMware products (CVE-2017-5638). It is a critical vulnerability in Apache Struts 2 that allows remote code execution (RCE).
The vulnerability affects the following VMware products:
– DaaS 6.X / 7.X
– Hyperic 5.X
– vCenter 5.5 / 6.0 / 6.5
– vROPS 6.X
I recently got a question about enabling and disabling the guest time synchronization for virtual machines. The customer asked for a solution to change the setting from within the operating system instead of the VMware vSphere Client or vSphere Web Client. Normally you would change the virtual machine time synchronization settings by hand with the vSphere Client/Web Client/HTML5 client or with a PowerCLI script, but after some searching, it appears there is a solution provided by VMware.
When deploying some virtual machines in a test environment I ran into the following problem. In most cases, I make use of a VMware vCenter Storage DRS cluster, in this case when deploying a virtual machine the best-suited datastore is selected for the virtual machines. The only problem is not all customers are entitled to use Storage DRS, because Storage DRS requires a vSphere Enterprise Plus license.
So I needed to create a workaround to select a datastore with enough free space. The default PowerCLI behavior is to select the first datastore detected, in alphabetical order.
So when you are deploying, let's say, twenty virtual machines, they will all be put on the first datastore, and that isn't going to work well in most cases.
PowerCLI Code
To solve the problem I created the following PowerCLI code. The code selects a cluster and lists all the datastores available. The datastore with the most free space is selected for the virtual machine that is being deployed.
In the PowerCLI code, I just create a very simple virtual machine, but you probably get the point. The magic is the $DS line that selects the datastore.
Requirements:
The PowerShell code is tested with the following VMware software components on Microsoft Windows:
PowerCLI 6.5 Update 1
VMware vCenter Server 6.0
### Variables
$CLUSTER = "Production" # A Cluster Name
$FOLDER = "Deployed VMs" # A Virtual Machine folder name located in the vCenter inventory
### Select datastores available and sort them on free space (select the one with most space free)
$DS = Get-Cluster -Name $CLUSTER | Get-Datastore | Select Name, FreeSpaceGB | Sort-Object FreeSpaceGB -Descending | Select -first 1
### Create a virtual machine called VM01
New-VM -Name VM01 -ResourcePool $CLUSTER -Datastore $DS.Name -Location $FOLDER -MemoryGB 1 -CD -DiskGB 5
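If you want to roll out the twenty virtual machines mentioned earlier, the same logic can be wrapped in a loop so the free space is re-evaluated for every deployment. A sketch under the same assumptions (the cluster, folder and VM names are examples):
### Deploy 20 example virtual machines, re-selecting the emptiest datastore each time
1..20 | ForEach-Object {
    $DS = Get-Cluster -Name $CLUSTER | Get-Datastore | Sort-Object -Property FreeSpaceGB -Descending | Select-Object -First 1
    New-VM -Name ("VM{0:D2}" -f $_) -ResourcePool $CLUSTER -Datastore $DS -Location $FOLDER -MemoryGB 1 -CD -DiskGB 5
}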
Article update:
2018-07-30 – Added feature image.
2018-11-17 – Updated article to support the new standards of the website.