Recently I removed an SD card from one of my lab servers, but afterwards the server kept complaining about it. The HPE ProLiant is equipped with HPE Integrated Lights-Out (iLO), an out-of-band management system used to manage and configure the server. It is also responsible for monitoring the components inside the server.
This means it also monitors the health state of the SD card in the slot on the motherboard. So when I removed the SD card, the iLO just kept checking the health of the component, causing health alerts.
In this blog post, I am going to explain what I did to reset the HPE iLO so it stops monitoring the SD card after its permanent removal.
Environment
Here is a short list of information about the HPE ProLiant system that I used for this blog post:
Server: HPE ProLiant DL360e Gen8
HPE SD card: HP 32GB SD card / Part nr: 700135-001
Firmware: HPE iLO 2.78
Software: VMware ESXi 7.0.3
Location – SD Card
To make the blog post complete, I added the motherboard drawing from the HPE manual. The SD card slot on the HPE ProLiant DL360e Gen8 motherboard is located at number 29 in the drawing below.
Problem – Removing SD card causes degraded state
The issue occurred when the SD card failed. After the failure, I removed the SD card from the system and moved to SSD-based boot media for VMware ESXi.
I performed some basic troubleshooting, like removing the power from the server and restarting the HPE iLO, but the health status remained degraded and the iLO was still looking for the SD card.
Here are the error messages in the interface:
Error message on the login page: iLO Self-Test report a problem with: Embedded Flash/SD-CARD. View details on Diagnostics page.
Here are some screenshots related to the error messages:
Resolving – Resetting the SD card slot
Resolving the issue isn’t particularly hard… if you know which buttons to push and in what order ;). Before starting, make sure the SD card is removed from the system and that the iLO has been rebooted.
To make sure everything works right away, open a clean browser session, log in to the iLO, and follow the procedure described below.
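For the iLO reboot mentioned above, you can also use SSH instead of the web interface. A minimal sketch, assuming SSH access to the iLO is enabled and %iLO-IP% is the address of your iLO:
# Connect to the iLO over SSH (the account name may differ on your system)
ssh Administrator@%iLO-IP%
# Reset the iLO management processor (this does not reboot the server itself)
reset /map1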
Closing words
In the end, it cost me about three hours to get it fixed. The reason I wanted it fixed so badly was that it kept triggering my monitoring system, and that drove me crazy. This particular server powers on and off regularly, and during every power cycle the health state resets and triggers monitoring alerts.
That wraps up this blog article; hopefully it is useful for somebody. Please respond below if you have any comments or additional information! See you next time! 🙂
Today we are going to work on an HPE ProLiant DL20 Gen9 server. After the initial installation, I was using an SD card as boot media, but I still had some Delock SATADOMs lying around from my older, since replaced, lab servers. So it was time to improve the performance of the boot media in the servers. In this blog post, I explain the SATADOM installation in an HPE ProLiant DL20 Gen9 in detail.
So what are the advantages compared to an SD card?
VMware ESXi boot time about 50% faster
VMware ESXi upgrade time about 70% faster
Inventory performance (very noticeable when clicking through the VMware vCenter or VMware ESXi web GUI)
Improved overall stability of the host, because of the “high” failure rate of SD cards.
This summary of advantages is based on my own comparison between SD cards and SATADOMs in the ESXi hosts in my home lab.
Delock SATADOM Specifications
Here are the specifications of the Delock SATADOM devices I am using in both HPE ProLiant DL20 Gen9 servers, plus some tips based on what I have learned so far… I bought them in 2018, so they are not brand new anymore:
Buy them a little bigger to be future-proof; I would suggest a minimum of 32 GB.
Verify before buying whether you need the vertical or horizontal model (for a rack server, go for horizontal; for a tower server, it does not really matter).
So now it is time to install the device in the server. Of course, it is a little more complicated in a small half-size rack server. For example, there are no Molex power connections available by default, so in the end the cable kit is almost more expensive than the device itself. The preferred option would be an official HPE cable kit, but I am not sure which one you would need. So after some thinking and looking into the server, I came up with the following solution to just plug in the SATADOM.
First, I needed to find a SATA port on the motherboard. Both ports were available in my case, but I used the one normally used for the DVD-ROM drive, number 14 in the image from the HPE manual.
The storage device itself can be placed in the space reserved for the storage controller battery pack. Neither of my machines has the expensive storage controller option, only the onboard default controller, so the space is completely empty and an easily accessible location for the SATADOM.
The power is the most difficult part. I ended up with converters to tap into the power connection of the storage backplane (keep in mind my server has no internal storage except the boot device, the SATADOM in this post). If your drive bays are filled with SSDs or HDDs, you will need to figure out another place to get the power from. I have read something about a power cable kit for the DVD-ROM, for example; I have never seen it in a picture or in a server, so I do not know which connectors that cable kit contains, but it might be an option.
Because pictures explain more than words, here is a gallery with some pictures of the SATADOM installation:
DL20 Gen 9 BIOS Settings
After the physical installation, it was time to set up the BIOS. To be honest, it was quite easy compared to the HPE Gen8, where I had a lot of problems with the ports and BIOS settings.
Here are two screenshots. The first one shows the activation of the internal storage controller. Note: make sure you power cycle the machine, otherwise the SATADOM will not be detected. After the power cycle, the VMware ESXi installer should detect the SATADOM when trying to install VMware ESXi.
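If you want to double-check that VMware ESXi actually sees the device, you can verify it from the ESXi shell as well. A quick sketch, assuming SSH or the local shell is enabled (the exact display name of the SATADOM differs per model):
# List the storage adapters; the onboard SATA controller should be present
esxcli storage core adapter list
# List the storage devices; the Delock SATADOM should show up as a local ATA disk
esxcli storage core device list | grep -i "Display Name"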
After this point, the SATADOM installation is completed. Just continue your normal procedures and put your host into production when you are done.
Wrap-up
So that is it for today…! I hope it was useful for other people and interesting to read. Keep in mind this blog post focused on the HPE ProLiant DL20 Gen9, but I think the procedure will be quite similar on other HPE Gen9 servers. The most difficult part will always be the cabling, and after that the BIOS settings to get the device detected correctly.
So far my hosts have been running for 40+ days without any issues and are working perfectly fine. If you have additional questions or remarks, please respond in the comment section below. Thanks for reading my blog post and see you next time.
This blog post is about replacing my 24×7 lab with a new set of two HPE ProLiant DL20 Gen9 servers. I am going to tell you about the configuration of the machines and how they run on VMware ESXi. I am also going to compare them to my other lab hardware and my past home lab equipment.
Hardware
So let’s kick off with the hardware! The HPE DL20 Gen9 servers I bought were both new in the box from eBay, and I changed the hardware components to my own liking.
A couple of interesting points I have learned so far: nearly all servers you will find for sale come with an Intel Xeon E3-12XX v5 processor. One thing to take into account: yes, you can swap the CPU from a v5 to a v6 like I did, but you then need to replace the memory modules as well! The memory modules are compatible with either a v5 or a v6 processor, not both. The Intel Xeon E3-12XX v5 CPUs use 2133 MHz memory and the Intel Xeon E3-12XX v6 CPUs use 2400 MHz memory. So keep that in mind when swapping the processor and/or buying memory.
In the end, after some swapping of components, I ended up with the following configuration. Both ProLiant servers have an equal configuration (like it should be in a vSphere cluster):
So far I have measured the power usage of each machine individually with the configuration listed in the hardware section. During the measurement, the machine was running VMware ESXi with about seven virtual machines on top, using about 30% of the total capacity. I was quite amazed by the low power consumption of 31.7 watts per host, but keep in mind that this covers only the compute part! The hosts are not responsible for storage. Here is a photo of my power meter during the test:
Screenshot(s)
Here are some screenshot(s) of the servers running in my home lab environment and running some virtual machine workload:
Screenshot 01: One of the hosts running VMware ESXi 6.7 (screenshot from HPE iLO).
Screenshot 02: One of the hosts connected to VMware vCenter and running virtual machines.
Screenshot 03: The HPE iLO web page of one of the hosts.
Positives & Negatives
To sum up my experience, I have created a list of positives and negatives to give you some insight into the HPE ProLiant DL20 Gen9 as a home lab server.
Positives:
Rack-mounted servers (half-size deep with sliding rails).
Out of band management by default (HPE iLO).
Power usage is good for the amount of compute power delivered.
No additional drivers are required for VMware ESXi to run.
The HPE DL20 Gen9 is on the VMware HCL, link.
Negatives:
Noisy compared to my previous setup (HPE ProLiant ML10 Gen8). For comparison, the HPE ProLiant DL360 Gen8 is in most cases “quiet” compared to the HPE ProLiant DL20 Gen9.
It would be nice if there was support for more memory, because you can never have enough of that in a virtualization environment ;).
Photos
Here are some photos of the physical hardware and the internals. I did not take any pictures with all the components installed, I am sorry :(.
Photo 01 – Both machines running and installed in the 19″ server rack.
Photo 02 – The internals of the DL20 Gen9. Keep in mind this one is empty. As you can see in the picture, the chassis is just half-size!
Wrap-up
So that concludes my blog post. If you got additional questions or remarks please respond in the comment section below. Thanks for reading my blog post and see you next time.
This time I decided to write a blog post about the HPE Smart Array RAID controllers and their wonderful ssacli tool. The HPE tooling is very powerful because you can manage a VMware ESXi host online and, for example, migrate from a RAID 1 volume to a RAID 10 without downtime, or change the read/write cache ratio.
As far as I know, I have not yet seen an identical tool from the other server hardware vendors like Cisco, Dell EMC, IBM, and Supermicro. The main difference has always been that the HPE tool can perform these operations live, without downtime.
As far as I can remember, it has been around for ages. It was already available for VMware ESX 4.0 and is still available in VMware ESXi 6.7. So thumbs-up for HPE :).
Let’s talk about controller support. The tool supports most HPE Smart Array controllers from the last 10 to 15 years; for example, the Smart Array P400 was released in 2005 and still works fine today.
Here is an overview of supported controllers:
HPE Smart Array P2XX
HPE Smart Array P4XX
HPE Smart Array P7XX
HPE Smart Array P8XX
HPE SSACLI – Location
If you are using the HPE VMware ESXi custom images, the tool comes pre-installed with ESXi. It is installed as a VIB (vSphere Installation Bundle), which means it can also be updated with vSphere Update Manager.
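A quick way to confirm that the VIB is present is sketched below; keep in mind the exact package name can differ per image version:
# Check whether the ssacli/hpssacli VIB is installed on the ESXi host
esxcli software vib list | grep -i ssacli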
Over the years the name of the HPE storage controller tool has changed, and so has its location. Here is a list of locations that have been used for VMware ESXi over the last ten years:
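From memory, these are the usual paths on the HPE custom images; treat the version mapping as an approximation:
### Tool locations:
- hpacucli (older ESX/ESXi releases): /opt/hp/hpacucli/bin/hpacucli
- hpssacli (ESXi 5.x/6.0 era): /opt/hp/hpssacli/bin/hpssacli
- ssacli (ESXi 6.5/6.7 era): /opt/smartstorageadmin/ssacli/bin/ssacli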
I have collected some screenshots over the years, taken while doing maintenance on VMware ESXi servers. They give you an idea of what valuable information can be shown.
All commands have a short name to reduce the length of the total input provided to the ssacli tool:
### Shortnames:
- chassisname = ch
- controller = ctrl
- logicaldrive = ld
- physicaldrive = pd
- drivewritecache = dwc
- licensekey = lk
### Specify drives:
- A range of drives (one to three): 1E:1:1-1E:1:3
- Drives that are unassigned: allunassigned
HPE SSACLI – Status
To view the status of the controller, disks, or volumes, you can run all sorts of commands to get information about what is going on in your VMware ESXi server. The extensive detail is very useful for troubleshooting and for gathering information about the system.
# Show - Controller Slot 1 Controller configuration basic
./ssacli ctrl slot=1 show config
# Show - Controller Slot 1 Controller configuration detailed
./ssacli ctrl slot=1 show detail
# Show - Controller Slot 1 full configuration
./ssacli ctrl slot=1 show config detail
# Show - Controller Slot 1 Status
./ssacli ctrl slot=1 show status
# Show - All Controllers Configuration
./ssacli ctrl all show config
# Show - Controller slot 1 logical drive 1 status
./ssacli ctrl slot=1 ld 1 show status
# Show - Physical Disks status basic
./ssacli ctrl slot=1 pd all show status
# Show - Physical Disk status detailed
./ssacli ctrl slot=1 pd all show detail
# Show - Logical Disk status basic
./ssacli ctrl slot=1 ld all show status
# Show - Logical Disk status detailed
./ssacli ctrl slot=1 ld all show detail
HPE SSACLI – Creating
Creating a new logical drive can be done online with the HPE Smart Array controllers. I have displayed some basic examples.
# Create - New single disk volume
./ssacli ctrl slot=1 create type=ld drives=2I:0:8 raid=0 forced
# Create - New spare disk (two defined)
./ssacli ctrl slot=1 array all add spares=2I:1:6,2I:1:7
# Create - New RAID 1 volume
./ssacli ctrl slot=1 create type=ld drives=1I:0:1,1I:0:2 raid=1 forced
# Create - New RAID 5 volume
./ssacli ctrl slot=1 create type=ld drives=1I:0:1,1I:0:2,1I:0:3 raid=5 forced
HPE SSACLI – Adding drives to logical drive
Adding drives to an already created logical drive is possible with the following commands. You need to perform two actions: adding the drive(s) and expanding the logical drive. Keep in mind: make a backup before performing the procedure.
# Add - All unassigned drives to logical drive 1
./ssacli ctrl slot=1 ld 1 add drives=allunassigned
# Modify - Extend logical drive 2 size to maximum (must be run with the "forced" flag)
./ssacli ctrl slot=1 ld 2 modify size=max forced
HPE SSACLI – Rescan controller
To issue a controller rescan, you can run the following command. This can be useful when you add new drives to hot-swap bays.
### Rescan all controllers
./ssacli rescan
HPE SSACLI – Drive LED Status
The LED status of the drives can also be controlled with the ssacli utility. Examples of how to enable and disable the LEDs are displayed below.
# Led - Activate LEDs on logical drive 2 disks
./ssacli ctrl slot=1 ld 2 modify led=on
# Led - Deactivate LEDs on logical drive 2 disks
./ssacli ctrl slot=1 ld 2 modify led=off
# Led - Activate LED on physical drive
./ssacli ctrl slot=0 pd 1I:0:1 modify led=on
# Led - Deactivate LED on physical drive
./ssacli ctrl slot=0 pd 1I:0:1 modify led=off
HPE SSACLI – Modify Cache Ratio
Modifying the cache ratio on a running system can be interesting for troubleshooting and performance benchmarking.
# Show - Cache Ratio Status
./ssacli ctrl slot=1 modify cacheratio=?
# Modify - Cache Ratio read: 25% / write: 75%
./ssacli ctrl slot=1 modify cacheratio=25/75
# Modify - Cache Ratio read: 50% / write: 50%
./ssacli ctrl slot=1 modify cacheratio=50/50
# Modify - Cache Ratio read: 0% / Write: 100%
./ssacli ctrl slot=1 modify cacheratio=0/100
HPE SSACLI – Modify Write Cache
Changing the write cache settings on the storage controller can be done with the following commands:
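These examples use the dwc shortname for drivewritecache listed earlier. Keep in mind that enabling the drive write cache risks data loss on power failure, so the controller may require the forced flag:
# Show - Controller configuration detailed (includes the current cache settings)
./ssacli ctrl slot=1 show detail
# Modify - Enable the drive write cache
./ssacli ctrl slot=1 modify dwc=enable
# Modify - Disable the drive write cache
./ssacli ctrl slot=1 modify dwc=disable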
HPE SSACLI – Modify Rebuild Priority
Viewing or changing the rebuild priority can be done on the fly, even when a rebuild is already active. I have used it myself a couple of times to lower the impact on production.
# Show - Rebuild Priority Status
./ssacli ctrl slot=1 modify rp=?
# Modify - Set rebuildpriority to Low
./ssacli ctrl slot=1 modify rebuildpriority=low
# Modify - Set rebuildpriority to Medium
./ssacli ctrl slot=1 modify rebuildpriority=medium
# Modify - Set rebuildpriority to High
./ssacli ctrl slot=1 modify rebuildpriority=high
HPE SSACLI – Modify SSD Smart Path
You can enable or disable the HPE SSD Smart Path feature. To make clear what HPE SSD Smart Path includes, here is an official statement by HPE:
“HP SmartCache feature is a controller-based read and write caching solution that caches the most frequently accessed data (“hot” data) onto lower latency SSDs to dynamically accelerate application workloads. This can be implemented on direct-attached storage and SAN storage.”
For example, when running VMware vSAN, SSD Smart Path must be disabled for better performance. In worse cases, the entire vSAN disk group can fail.
# Note: This command requires the array naming type like A/B/C/D/E
# Modify - Enable SSD Smart Path
./ssacli ctrl slot=1 array a modify ssdsmartpath=enable
# Modify - Disable SSD Smart Path
./ssacli ctrl slot=1 array a modify ssdsmartpath=disable
HPE SSACLI – Delete Logical Drive
Deleting a logical drive on the HPE Smart Array controller can be done with the following commands.
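A basic example, assuming logical drive 2 on the controller in slot 1. The forced flag suppresses the confirmation prompt, and the data on the volume is lost:
# Delete - Logical drive 2 (this destroys the data on the volume!)
./ssacli ctrl slot=1 ld 2 delete forced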
HPE SSACLI – Erasing Physical Drives
In some cases, you need to erase a physical drive. This can be performed with multiple erase pattern options, and you can also stop the erasing process.
Erase patterns available:
Default
Zero
Random_zero
Random_random_zero
# Erase physical drive with default erasepattern
./ssacli ctrl slot=1 pd 2I:1:1 modify erase
# Erase physical drive with zero erasepattern
./ssacli ctrl slot=1 pd 2I:1:1 modify erase erasepattern=zero
# Erase physical drive with random zero erasepattern
./ssacli ctrl slot=1 pd 1E:1:1-1E:1:3 modify erase erasepattern=random_zero
# Erase physical drive with random random zero erasepattern
./ssacli ctrl slot=1 pd 1E:1:1-1E:1:3 modify erase erasepattern=random_random_zero
# Stop the erasing process on physical drive 1E:1:1
./ssacli ctrl slot=1 pd 1E:1:1 modify stoperase
HPE SSACLI – License key
In some cases, a license key needs to be installed on the Smart Array storage controller to enable the advanced features. This can be done with the following command:
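A minimal example; the key below is just a placeholder for your real license key:
# Add - License key to the controller in slot 1 (lk is the shortname for licensekey)
./ssacli ctrl slot=1 add lk=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX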
Today, a blog about my home lab. At the end of 2017, it was time to replace the old Dell PowerEdge R710 servers with something new. At the time, I was running two R710 servers for my lab environment.
These servers are powered on for a couple of hours a week to test new products and to study for certifications. My other environment, described on this page, runs 24/7 and provides a full set of infrastructure services.
Because of the price, and because I am very familiar with the DL360 Gen8, it was a no-brainer. Over the last couple of years, about 75% of my virtualization projects were based on the DL360 Gen8… so we have a lot of history together ;).
Technical specifications – HP DL360e Gen8:
Chassis: Small Form Factor (8-bays)
CPU1: Intel® Xeon® Processor E5-2430 v2
CPU2: Intel® Xeon® Processor E5-2430 v2
Memory: 128 GB (8x 16 GB DDR3 1600 MHz)
Disks HDD: 2x Seagate Constellation SAS 1TB
Disks SSD: 4x Samsung EVO 850 250 GB
Storage controller: HP SmartArray P420 with 1 GB FBWC
NIC: 4 port 1 Gbit
Rack mounting kit, cable arm, and security front bezel
The spinning drives provide “safe” storage because they are configured as a mirrored volume. The SSDs are configured as JBOD drives for performance without data protection (if I want protection, I just create a virtual machine backup to one of my storage arrays).
Update 2018:
In 2018, I did a full write-up of this server on this page: link
In this article, we are going to flash an HPE MSA 1040 via FTP. At work, I faced a problem with a couple of HPE MSA 1040 storage arrays: three out of ten were no longer displaying their web interface after about 200 days of uptime. This is not really a known problem for the HPE MSA 1040 :(. So it was time to figure out a way to work around it.
After some time, I noticed there is a built-in FTP flash option. About two hours later, I finally had the latest firmware installed.
I have made a write-up so you can do it yourself. It is not really a difficult procedure, but you need a couple of items ready and the correct commands to get it all working.
Flashing the HPE MSA 1040
Note: I have tested this procedure on a Windows 10 client. The FTP tool used for uploading the firmware is the built-in Windows FTP client.
Prerequisites:
Download the firmware from the HPE website.
Store the firmware files in the following folder (C:\Temp).
Extract the bin file from the downloaded bundle.
Make sure no workloads are running on the HPE MSA.
Procedure:
Start an SSH session with an available controller (there is no preferred choice between controller A and B).
Enable the FTP service on the controller with the command: “set protocols ftp enabled”.
Open a CMD shell (with administrator rights) on your workstation or management server.
Run the following commands:
# Navigate to C:\Temp
cd C:\Temp
# Start FTP session
ftp %IP-Address%
# Login
Username: ftp
Password: !ftp
# Navigate to directory
cd /
# Upload firmware and start flash
put TS252P001.bin flash
# Close FTP session
bye
# Go back to the SSH Session and disable the FTP service on the MSA
set protocols ftp disabled
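Once the flash is finished and the controller has restarted, you can verify the result from the same SSH session. A small check, assuming the standard MSA CLI:
# Verify the running firmware versions after the flash
show versions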
Article update:
2018-03-26 – Added feature image to page.
2018-11-17 – Updated article to support the new standards of the website.