
Thursday, April 18, 2019

NSX Host Preparation Failure Reasons



 In the preparation phase of a VMware NSX deployment, after deploying the NSX Manager and the controller nodes (at least three controllers, or another odd number), you need to run the "Host Preparation" procedure on the ESXi clusters. Sometimes, after executing "Install" on a cluster, you may encounter problems related to communication channel health, and the NSX deployment will stop at this step. Today I want to review the causes of these problems:
1. Before making any changes, check the versions of your vSphere suite (the ESXi hosts in the cluster and the vCenter Server) as well as the NSX Manager appliance version, and review the compatibility of these products against the VMware Product Interoperability Matrix.
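You can collect the vSphere side of this centrally instead of logging in host by host. Below is a minimal pyVmomi sketch (pip install pyvmomi); the vCenter FQDN and credentials are placeholders, and the NSX Manager version itself is easiest to read from its appliance summary page:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut: skip certificate checks
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        print("vCenter:", si.content.about.fullName)
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            if host.config:  # skip disconnected hosts
                print(host.name, "->", host.config.product.fullName)
    finally:
        Disconnect(si)
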
2. Check the connection between the PSC Lookup Service and NSX Manager. The problem can be caused by unsynchronized time settings, i.e. the two servers not agreeing on the same time value.
Also check port 443; it may be blocked by a firewall. And review the certificates used by vCenter and NSX Manager: self-signed certificates can cause trouble with SSL connectivity. Each of these can be the cause of vCenter-to-NSX communication problems.
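
Two of those checks, port 443 reachability and certificate trust, are easy to script from any machine on the management network. Here is a minimal stdlib sketch (Python 3.7+) with placeholder hostnames; time synchronization itself is best verified on the appliances, e.g. the NTP settings in the NSX Manager and vCenter appliance management pages:

    import socket, ssl

    # Placeholder FQDNs; substitute your own vCenter/PSC and NSX Manager names.
    HOSTS = ["vcenter.lab.local", "nsxmanager.lab.local"]

    def probe(host, port=443, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                ctx = ssl.create_default_context()  # verifies against the system CA store
                try:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        return "port open, certificate trusted"
                except ssl.SSLCertVerificationError as e:
                    return f"port open, but cert not trusted ({e.verify_message}); likely self-signed"
                except ssl.SSLError as e:
                    return f"port open, but TLS handshake failed ({e})"
        except OSError as e:
            return f"port {port} unreachable ({e}); check firewall rules"

    for h in HOSTS:
        print(f"{h}: {probe(h)}")
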
3. It's recommended to design and prepare the vSphere Distributed Switches before deploying NSX in the SDDC, especially for preparing the VXLAN TCP/IP stack (and the VTEP interfaces) in the SDN environment. So you need to attach all hosts of the cluster to the VDS and only then deploy NSX on that cluster. An important point to remember: VDS and cluster are not forced into a 1:1 relationship (a VDS can be deployed to more than one cluster, and a single cluster can communicate with multiple VDSes), but when we talk about NSX, it's better to have exactly a 1:1 relation between VDS and cluster.
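A quick way to sanity-check that 1:1 mapping before preparation is to list which distributed switches the hosts of each cluster actually participate in. A pyVmomi sketch in the same spirit as the earlier one (credentials again placeholders):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            dvs_names = set()
            for host in cluster.host:
                if not host.config:  # skip disconnected hosts
                    continue
                # proxySwitch lists the distributed switches this host participates in
                for proxy in host.config.network.proxySwitch:
                    dvs_names.add(proxy.dvsName)
            # For NSX, ideally exactly one VDS shows up per cluster
            print(f"{cluster.name}: {sorted(dvs_names) or 'no VDS attached'}")
    finally:
        Disconnect(si)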

4. Check the DNS settings of the ESXi management network stack (VMkernel), because a host that cannot do name resolution on the network fabric will fail to download the VIB packages from the NSX Manager.
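
A quick resolution test can be run from any machine on the management network; the names below are placeholders for your own NSX Manager and hosts:

    import socket

    names = ["nsxmanager.lab.local", "esxi01.lab.local", "esxi02.lab.local"]
    for name in names:
        try:
            print(f"{name} -> {socket.gethostbyname(name)}")
        except socket.gaierror as e:
            print(f"{name}: resolution FAILED ({e})")

On the host itself, the configured resolvers can be listed from the ESXi shell with esxcli network ip dns server list, and the installed VIBs checked with esxcli software vib list.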



5. Check the VXLAN configuration: the IP pools and the VTEP vmkNIC status. Try the "Resolve" option on the Host Preparation tab and let the NSX Manager's agents (firewall and control plane) come up.
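Outside the UI, the NSX-V REST API can also report the per-host communication channel status. A rough sketch, assuming the connection-status endpoint introduced with NSX 6.2 (verify the path against your version's API guide); the FQDN, credentials, and host MOID are placeholders:

    import requests  # pip install requests

    NSX = "https://nsxmanager.lab.local"   # placeholder FQDN
    AUTH = ("admin", "***")                # NSX Manager credentials
    HOST_ID = "host-42"                    # vCenter MOID of the ESXi host

    # Endpoint per the NSX-V 6.2+ API guide (verify against your version's docs).
    # verify=False because NSX Manager often runs a self-signed certificate.
    r = requests.get(f"{NSX}/api/2.0/vdn/inventory/host/{HOST_ID}/connection/status",
                     auth=AUTH, verify=False)
    print(r.status_code)
    print(r.text)  # XML describing message bus / control-plane channel health

VTEP-to-VTEP reachability and MTU can also be tested from the ESXi shell with ping ++netstack=vxlan -d -s 1572 <remote-vtep-ip>, which fails if the transport network does not carry the larger VXLAN frames.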



Finally, if you change any of these settings, review every operation that has been logged in the NSX audit logs and the vCenter Events tab.

Monday, August 13, 2018

VMware SDDC Design Considerations - PART Two: SDDC Fabric



 

  In the second part of the SDDC design series (based on the VMware Validated Design Reference Architecture Guide), we will review what this book says about data center fabric and networking design:
One of the main mandatory considerations in data center design is networking, covering the communication infrastructure, routing, and switching. The evolution of the modern data center in the era of virtualization leads us to a two-tier DC networking architecture: leaf and spine.
  1. Leaf switches, also called ToR (Top of Rack) switches, sit inside the racks; they provide network access for the servers and storage, and each leaf node carries the same VLANs as the other racks, each VLAN with a unique /24 subnet.
  2. Spine switches, as the aggregation layer, interconnect the leaf switches across the racks and provide redundancy on the leaf-to-spine (L2S) links. No redundant link is required between two spine switches.
This topology offers two flavors of fabric transport service, L2 and L3:
  1. An L2 switched fabric consisting of leaf and spine switches acts as one large switch for massive virtual environments (high-performance computing and private clouds). One of the popular switching-fabric products is Cisco FabricPath, which provides highly scalable L2 multipath networks without STP. It gives you freedom in the design by permitting different VLANs for management, vMotion, or FT logging to be spread across the fabric, which is a big advantage; as disadvantages, the size of the fabric is limited and it supports only single-vendor fabric switching products.
  2. An L3 switched fabric can mix L3-capable switching products from different vendors; the leaf-to-spine links act as point-to-point routed uplinks running dynamic routing protocols such as OSPF, IS-IS, and iBGP.
     As a critical aspect of network virtualization, it's very important to consider the physical-to-virtual (P2V) networking requirements: connectivity and uplinks. An IP-based physical fabric must have the following characteristics:
  1. Simplicity of network design: identical configuration (NTP, SNMP, AAA, ...) and a central management scheme for configuring all switches, physical or virtual.
  2. Scalability of the infrastructure: highly dependent on the server and storage equipment and the traffic they generate, the total uplink ports, link speeds, and network bandwidth.
  3. High bandwidth (HBW): racks usually host different workloads, so the total of server-facing connections and ports may cause oversubscription (the ratio of total downlink bandwidth to aggregate uplink bandwidth; see the worked example after this list) on the leaf (ToR) switches. On the other hand, every rack's leaf switch must have the same number of uplinks to each spine switch, to avoid the hotspot phenomenon.
  4. Fault-tolerant transport: using more spine switches reduces the impact of a failure on the fabric. Because of the multipath structure of leaf-to-spine connectivity, each added spine switch carries a smaller share of the available capacity, so a single switch failure affects less of the network capacity. During maintenance operations on a switch device, changing the routing metrics ensures traffic drains away onto the uplinks that remain available.
  5. Differentiated quality of service (QoS): based on the SLA, each type of traffic (management, storage, and tenant) has different characteristics, such as throughput volume, data sensitivity, and storage location. QoS values are set by the hypervisors, and the physical fabric switching infrastructure must accept them as reliable values; network congestion is handled by queueing and prioritization, and there is no need for re-classification at the server-to-leaf switch ports. VDS switches support both L2 QoS (Class of Service) and L3 QoS (DSCP marking), and for VXLAN networking the QoS values are copied from the inner packet header to the VXLAN-encapsulated outer header.
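
To make the oversubscription ratio from item 3 concrete, here is a small back-of-the-envelope sketch; the port counts and link speeds are invented for illustration:

    # Oversubscription ratio = total server-facing bandwidth / aggregate uplink bandwidth.
    # Hypothetical figures: a ToR leaf with 48 x 10 GbE server ports
    # and 4 x 40 GbE uplinks (one to each of 4 spines).
    server_ports, server_speed_gbps = 48, 10
    uplinks, uplink_speed_gbps = 4, 40

    downlink_bw = server_ports * server_speed_gbps   # 480 Gbps toward the servers
    uplink_bw = uplinks * uplink_speed_gbps          # 160 Gbps toward the spines

    ratio = downlink_bw / uplink_bw
    print(f"Oversubscription: {downlink_bw}:{uplink_bw} = {ratio:.0f}:1")  # 3:1

The closer this ratio gets to 1:1, the less a burst of east-west traffic in one rack can starve the uplinks.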





I will start a new journey soon ...