While configuring the NSX Advanced Load Balancer for some testing in my home lab, I noticed something odd related to the Service Engine management network and data network settings. After thinking I was going crazy… I spotted an issue in the interface surrounding the data network configuration that caused the problem. In this short blog post, I will explain what was happening and how to resolve it.
My Home Lab environment was running the following products:
Note: Licenses are provided by the vExpert program (this also includes the NSX Advanced Load Balancer licenses for lab usage).
Data Network Issue
First, let's go to the location in the interface where the issue occurs:
Log in to the web interface.
Navigate to “Infrastructure > Cloud Resources > Service Engine Group“.
Click, for example, on “Default-Group“ (depending on your configuration).
Go to the section “Placement“.
Check the setting “Override Data Network“.
Select a network that you want…
Sounds all good so far… but look at the description popup in the last screenshot. Are we configuring the management network or the data network for the service engines? The description and the field name tell two different stories.
Management Network or Data Network?
After verifying what happened to the service engines in the group, it turned out that their management network had been changed. This was noticeable to me because the service engines were no longer reachable by the controller on the management network.
My conclusion after some testing was that the description field is correct. This setting changes the management network!
How can you verify the changes to the service engine group?
Open a command prompt.
Run the following command “ping %management-ip-address service engine%“.
They are probably not reachable anymore because they are on the wrong network.
Navigate to the vCenter Server.
Log in with your account.
Select the Service Engine virtual machine belonging to the group where you configured this setting.
Check the virtual network cards.
You will see that the management network card is assigned to the “Override Data Network“ network.
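The reachability check above can be scripted; here is a minimal sketch in shell. The Service Engine management IP addresses used below are placeholders, so substitute your own:

```shell
# Ping each Service Engine management IP once and report the result.
# The addresses are hypothetical examples - replace them with your own.
check_host() {
  if ping -c 1 -W 2 "$1" > /dev/null 2>&1; then
    echo "$1 is reachable"
  else
    echo "$1 is NOT reachable - check its management NIC in vCenter"
  fi
}

check_host 192.168.10.11
check_host 192.168.10.12
```

If a service engine shows up as not reachable, that is your cue to check its virtual network card assignment in vCenter.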
So that was my blog post about the Service Engine group data network issue. I hope it was useful for somebody, because it took me some hours to figure out. Please respond below if you have any comments or additional information! See you next time! 🙂
In the latest release of VyOS, a new feature has been added to the product called VRF. VRF or Virtual Routing and Forwarding is a technology that makes it possible to create multiple routing tables on a single router. In this blog post, we are going to set up a VyOS management VRF for out-of-band management traffic.
VRF is a well-known technology in the networking world and is leveraged by companies all over the globe. The only limitation was that VyOS was not capable of running a VRF before. So after the release of the VRF feature, it was time to figure out whether it works as I would expect.
So what is a VRF?
I already talked a little bit about Virtual Routing and Forwarding, but here is the description from Wikipedia:
“Virtual routing and forwarding (VRF) is a technology that allows multiple instances of a routing table to co-exist within the same router at the same time. One or more logical or physical interfaces may have a VRF and these VRFs do not share routes therefore the packets are only forwarded between interfaces on the same VRF. VRFs are the TCP/IP layer 3 equivalent of a VLAN. Because the routing instances are independent, the same or overlapping IP addresses can be used without conflicting with each other. Network functionality is improved because network paths can be segmented without requiring multiple routers.”
The goal for me was to create an out-of-band management interface on my virtual VyOS router that is running on VMware vSphere. This can only be achieved with the new VRF feature, because you get an extra routing table that is used by the VRF only. The main reason for me was to split the SSH and SNMP traffic from the rest of the traffic. One of the perks of having a dedicated interface is improved security, and it makes creating firewall rules easier because all of the out-of-band interfaces are in one dedicated network.
Here is an overview of the vSphere VM running VyOS with two virtual network cards connected. As you can see, one NIC is connected to a portgroup that allows multiple VLANs and the other is connected to a dedicated network for out-of-band management.
Now it is time to start configuring VyOS to leverage the VRF. Below you will find the IP addresses that I have used as an example in this blog post.
The first step is setting up an interface that will be leveraged by the VRF in the next part of the configuration.
### Create a new interface
set interfaces ethernet eth1 address 192.168.200.1/24
### Set interface description (optional)
set interfaces ethernet eth1 description 'Dedicated Out-of-Band Management Interface'
Now it is time to set up the VRF configuration and link it to the newly created interface. After that point, the VyOS Management VRF should be reachable in the network.
### Create a VRF called OOB-Management with a new routing table
set vrf name OOB-Management table 100
### Add a description
set vrf name OOB-Management description Out-Of-Band_Management
### Assign the physical interface to the VRF
set interfaces ethernet eth1 vrf OOB-Management
### Add a static route for the VRF to get access to a gateway
set protocols vrf OOB-Management static route 0.0.0.0/0 next-hop 192.168.200.254
Here are some troubleshooting commands that I used when configuring the VRF on VyOS.
### Show the VRF routing table
show ip route vrf OOB-Management
### Ping the gateway from the VRF
ping 192.168.200.254 vrf OOB-Management
Now that it is up and running, it is time to set up the out-of-band management services. In my case, these will be SSH & SNMP. SSH is used for access to the command line of the VyOS router, and SNMP is used for monitoring.
### SSH - Activate the service on the VRF
set service ssh vrf OOB-Management
### SSH - Activate the listening address for SSH on the Out-of-Band network
set service ssh listen-address 192.168.200.1
### SNMP - Activate the service on the VRF
set service snmp vrf OOB-Management
### SNMP - Add permissions
set service snmp community routers authorization ro
set service snmp community routers client 192.168.200.20
### SNMP - Set the location and contact
set service snmp location "Be-Virtual.net - Datacenter"
set service snmp contact "firstname.lastname@example.org"
### SNMP - Activate the listening address
set service snmp listen-address 192.168.200.1 port 161
Here is an overview of the IP addresses used in this blog post:
VyOS IP Address for Out-of-Band Management = 192.168.200.1
Gateway of the Out-of-Band Management network = 192.168.200.254
Monitoring server that monitors with SNMP = 192.168.200.20
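Assuming the SNMP configuration above and the net-snmp tools installed on the permitted SNMP client, a quick verification could look like this (the object names shown are standard MIB-II names and require the MIB files to be loaded):

```shell
### Query the configured system location via the "routers" community
snmpget -v2c -c routers 192.168.200.1 sysLocation.0

### Walk the interface descriptions to check the agent is responding
snmpwalk -v2c -c routers 192.168.200.1 ifDescr
```

Keep in mind that these queries only work from a client that is permitted by the community configuration, and that the traffic must arrive on the out-of-band network where the SNMP service is listening.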
The VRF feature that has been added to VyOS is really great! It is a great addition to an already great product. There are a lot of use cases: think about multiple routers with different routing protocols running on a single VyOS box, each with their own routing table.
For me, the out-of-band management setup was an easy first test of the VRF feature. The next step will be to connect it to my lab environment and leverage BGP. Currently, I am running multiple boxes just to test VMware NSX-T multi-site in my lab environment. This can be simplified with VRFs!
Thanks for reading this blog post and see you next time. If you have any comments, please respond below! 🙂
Here are some sources I used for setting up the management VRF:
Recently, I have been involved in a Dell EMC VxRail design & deployment with VMware Cloud Foundation on Dell EMC VxRail. There are some noticeable items that you need to consider when using Dell EMC VxRail as your hardware layer in combination with VMware NSX-T as a network overlay. So it was time to write down the VxRail NSX-T considerations I have learned so far.
This blog post is focused on the NSX design considerations that are related to the physical level when using the Dell EMC VxRail hardware.
First, I am going to talk about VMware NSX-V, because a lot of customers are already running Dell EMC VxRail in combination with NSX-V and will need to move to NSX-T at some point.
In case you are already using Dell EMC VxRail with VMware NSX-V. Your physical NIC configuration would in most cases look like one of the following:
Scenario 01: Dual port physical NIC – 10 Gbit
Scenario 02: Dual port physical NIC – 25 Gbit
The default configuration that I see in the field at this moment is based on a single dual-port card with either 10 Gbit or 25 Gbit. This is fine for VMware NSX-V, but not for its replacement…
When using Dell EMC VxRail with VMware NSX-T, you are required to use four physical NICs! This is because of a limitation in the Dell EMC VxRail software that turns a “PowerEdge server” into a “VxRail server”.
So this leaves us with three scenarios provided by Dell EMC for the VxRail nodes:
Scenario 01: Quad-port physical NIC
Scenario 02: Quad-port physical NIC (two ports used) with dual-port physical NIC
Scenario 03: Dual-port physical NIC with dual-port physical NIC.
Dell EMC VxRail is the only hardware platform currently on the market that requires four physical NICs to operate with NSX-T. This means you have to make sure your hardware and datacenter are capable of supporting this requirement. You need to make some choices surrounding the physical network cards, network capacity and datacenter rack space.
So let’s start with my list of VxRail NSX-T considerations!
Physical Network Card
When you are at the point of buying the Dell EMC VxRail solution, buy at least a quad-port NIC configuration. Personally, I prefer the double dual-port NIC setup, as shown below:
I prefer this hardware setup because of the hardware redundancy created by two cards with their own chips and PCIe slots. This reduces the chance of losing all your network connections when a physical NIC dies.
Another recommendation is to buy physical NICs that support 25 Gbit. The price difference is minimal and it makes the setup more future-proof.
Top of Rack (TOR)
As discussed in the last paragraph: when you move to VMware NSX-T, you are forced to use four physical NICs in each VxRail node. After installing the cards, you need to make sure you have enough physical ports in your Top of Rack/leaf switches.
At the customer where I am currently working, they were forced to increase their Top of Rack switch capacity from two ports per server with NSX-V to four ports per server with NSX-T. This meant a full redesign of their datacenter rack topology and network topology. The spine switches were also not able to connect that many leaf switches.
Keep in mind: this is of course only required when you are running a decent number of servers per rack. In this customer's case, they are running 32 VxRail nodes per rack. This means they require at least 128 physical switch ports per rack, not counting uplink ports.
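The switch-port math above can be sketched as a quick calculation, using the numbers from the example (32 nodes per rack, two NICs per node with NSX-V versus four with NSX-T):

```shell
# Switch ports needed per rack, excluding uplink ports.
NODES_PER_RACK=32
PORTS_PER_NODE_NSXV=2
PORTS_PER_NODE_NSXT=4

echo "NSX-V: $((NODES_PER_RACK * PORTS_PER_NODE_NSXV)) ports per rack"   # 64
echo "NSX-T: $((NODES_PER_RACK * PORTS_PER_NODE_NSXT)) ports per rack"   # 128
```

Doubling the ports per node doubles the leaf-switch port count per rack, which is exactly why the rack and network topology had to be redesigned.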
Here is an overview of the scenarios as just described, the first is the NSX-V scenario and the second the NSX-T scenario.
I know that VMware & Dell EMC are currently working on a solution for the VxRail hardware, but time will tell. At this point, keep your eyes open when moving from NSX-V to NSX-T with Dell EMC VxRail. Customers who are deploying greenfield also need to be aware that they need additional network capacity.
So that wraps up my VxRail NSX-T Considerations blog post. Thanks for reading my blog post and see you next time!
In my home lab environment, I wanted to rebuild my VyOS virtual router/firewall. So I exported the configuration from the old appliance and I tried to perform a restore on the new virtual appliance. The question that arose was: how do you perform a VyOS configuration restore?
Somehow, I could not find any tutorial or manual on the internet that explained how this action could be performed. There are enough write-ups and articles about the TFTP, FTP and SCP restore procedures, but a freshly deployed VyOS appliance is empty… with only a default configuration. I just wanted to restore the configuration without setting up all kinds of services and configuring my interfaces by hand on the VyOS appliance.
So it was time to examine the VyOS appliance and figure out what was going on under the covers.
Why do you need VyOS?
Before diving any further, let's talk about VyOS! I use VyOS for my lab environment because it is easily configurable and has an entire feature set of enterprise-grade network technology on board by default, like the routing protocols BGP and OSPF and the high-availability option VRRP.
So why do you need OSPF and BGP at home? I'm a VMware consultant responsible for SDDC / SDN / NSX designs and implementations, and I regularly need to perform tests in my lab environment. VMware NSX likes to have a dynamic routing protocol to connect the virtual overlay network to the physical world, and both routing protocols can be used for this. A detailed configuration article can be found on the blog of Jeffrey Kusters (my ITQ colleague). I am not going into further detail on VMware NSX; this blog post is focused on VyOS.
VyOS Virtual Hardware
My VyOS appliance is deployed on a VMware vSphere 6.5 infrastructure. I used the OVA file that is available on the VyOS website (vyos-1.1.8-amd64.ova). The virtual machine is called “LAB-FW01”; this hostname will appear in the video recording. The YouTube video is listed below.
The virtual machine hardware is left at its defaults. I only assigned the virtual network cards to the right networks. An overview is listed here:
Public – Network adapter 1 – Connected to a WAN interface
Private – Network adapter 2 – Connected to a VLAN trunk
VyOS Configuration Restore
Now it is time for restoring the VyOS configuration file on a newly deployed VyOS appliance.
Deploy a new VyOS appliance and make sure that the virtual networks are connected to the correct adapter.
Verify and/or change the MAC addresses where needed. The MAC addresses should align between the configuration file and the new virtual appliance:
Option 01: Change the virtual network card MAC address to the ones that were used on the old appliance.
Option 02: Change the MAC addresses in the configuration file that is used for the restore. The MAC address should align with the newly deployed VyOS appliance.
Create an ISO file with your latest configuration on it. I used the open-source tool IsoCreator, as displayed below. Link to IsoCreator.
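As an alternative to the Windows tool, the ISO can also be created from a Linux command line; a sketch assuming genisoimage is installed (the output filename is my own choice):

```shell
### Create an ISO containing the configuration backup
### -J adds Joliet names, -r adds Rock Ridge extensions
genisoimage -o config-restore.iso -J -r 2018-06-05-vyos.config.boot
```

Either way, the result is an ISO image containing the configuration file, ready to be attached to the virtual machine.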
Open your vSphere Infrastructure and navigate to the Virtual Machine. This would be in my case “LAB-FW01“.
Assign the newly created ISO file to the virtual machine by connecting it to the CD-ROM drive.
Power-on the Virtual Machine.
In the GRUB bootloader, make sure you select the following mode to start up from: “VyOS 1.1.8 linux (USB console)”.
When VyOS is booted, log in with the default credentials (username “vyos”, password “vyos”).
You are now logged in to the Linux shell.
Now it’s time to mount the connected CD-ROM media:
sudo mount /dev/cdrom /mnt
To make sure my configuration is available, I list the directory contents with the following command:
ls -l /mnt
Now it is time to copy my old configuration to the startup configuration location of VyOS. Use the following command to perform this action (keep in mind: my configuration is called “2018-06-05-vyos.config.boot”):
cp /mnt/2018-06-05-vyos.config.boot /config/config.boot
To verify the copy action, I run the following command to display my hostname that is listed in the configuration file:
cat /config/config.boot | grep LAB-FW01
Now it is time to reboot the VyOS appliance. At the next boot, the old configuration will be loaded and everything should be restored. The following command reboots VyOS from the Linux shell:
sudo reboot
After the reboot is completed, you can log in with the old credentials that belong to the restored configuration.
To verify that the configuration is loaded correctly, I run the following command to display all my interfaces and sub-interfaces:
show interfaces
From this point, everything should be working.
Here are the VyOS configuration locations that are important to this article:
Active (startup) configuration = /config/config.boot
Default out-of-the-box configuration = /opt/vyatta/etc/config.boot.default
In case you messed up your VyOS configuration you can always restore the default out-of-the-box configuration with the procedure described above. You only need to change the copy action in step eight to the following: (cp /opt/vyatta/etc/config.boot.default /config/config.boot).
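Putting the steps together, the full restore sequence from the VyOS Linux shell looks like this (using my configuration file name as an example):

```shell
### Mount the CD-ROM that contains the configuration backup
sudo mount /dev/cdrom /mnt

### Copy the old configuration over the startup configuration
cp /mnt/2018-06-05-vyos.config.boot /config/config.boot

### Verify the copy by checking for the hostname
cat /config/config.boot | grep LAB-FW01

### Reboot to load the restored configuration
sudo reboot
```

After the reboot, the appliance comes up with the restored configuration and the old credentials.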
Because there are a couple of steps involved, I decided to record a video of me performing the procedure. Keep in mind: my VMware Remote Console is in Dutch. 🙂
About six months back I switched from pfSense to VyOS. The main reason was the BGP support and stability of the BGP routing process. I am happy I did. The VyOS appliance is just amazing and extremely reliable and robust.
If you are familiar with the Cisco CLI, then you will be flying through the VyOS CLI in no time.
I’m very happy to announce that I passed the VCAP6-NV Deploy exam and unlocked the VCIX-NV accreditation!
About the VMware VCIX-NV:
The VCIX-NV exam consists of approximately 23 live lab activities and the passing score for this exam is 300 (scale is from 100 to 500). The total time for this exam is 210 minutes, but candidates who take the VCIX-NV Exam and have a home address in a country where English is not a primary language will have an additional 30 minutes added to the exam time.
For my study I used the following list of websites, HOL labs and blogs. These helped me pass the exam:
My best advice is: build a home lab and deploy VMware NSX-V. After the deployment, start using all the features that NSX-V has to offer (yeah, I know that is a lot). Get familiar with it, and deploy and design as you would in a production environment. This will give you the best possible understanding for the exam.
This week (12-06/15-06), I attended a VMware training (thanks to my employer ITQ). The training is only available to VMware partners and is called “NSX LiveFire”. It was held at the VMware office in Sofia City, Bulgaria. It is a technical training given by VMware employees, this time by the following three instructors: Bal Birdy, Luca Camarda and Nikodim Nikodimov.