Monday, August 13, 2018

VMware SDDC Design Considerations - Part Two: SDDC Fabric

  In this second part of the SDDC design series (based on the VMware Validated Design Reference Architecture Guide), we review what the guide says about data center fabric and networking design.
Networking is one of the mandatory considerations of data center design; it covers the communication infrastructure, routing, and switching. The evolution of the modern data center in the era of virtualization leads us to a two-tier architecture for data center networking: leaf and spine.
  1. Leaf switches, also called ToR (Top of Rack) switches, sit inside the racks and provide network access for servers and storage. Each leaf node carries the same set of VLAN IDs, but each VLAN gets a unique /24 subnet per rack.
  2. Spine switches form the aggregation layer that connects the leaf switches across racks and provides redundancy over the leaf-to-spine (L2S) links. No redundant links are required between the spine switches themselves. A small sketch of this topology follows.
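As a rough illustration, the following minimal Python sketch (with assumed switch counts, VLAN IDs, and address ranges that are not taken from the guide) builds the full leaf-to-spine mesh and carves a unique /24 out of one larger block for each rack and VLAN:

```python
# Minimal sketch of a two-tier leaf-spine fabric. Every leaf (ToR) switch
# uplinks to every spine, there are no spine-to-spine links, and each rack
# reuses the same VLAN IDs with a unique /24 subnet per rack.
from ipaddress import IPv4Network
from itertools import product

SPINES = [f"spine-{i}" for i in range(1, 5)]             # 4 spines (assumed)
LEAVES = [f"leaf-{i}" for i in range(1, 9)]              # 8 racks, one ToR each
VLANS = {"mgmt": 1611, "vmotion": 1612, "vxlan": 1614}   # identical IDs per rack

# Full mesh of leaf-to-spine (L2S) links.
l2s_links = list(product(LEAVES, SPINES))

# A unique /24 per (rack, VLAN) pair, carved out of one large block.
pool = IPv4Network("172.16.0.0/16").subnets(new_prefix=24)
subnets = {(leaf, vlan): next(pool) for leaf in LEAVES for vlan in VLANS}

print(f"{len(l2s_links)} L2S links, {len(SPINES)} equal-cost paths per leaf")
print("leaf-1 mgmt subnet:", subnets[("leaf-1", "mgmt")])
```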
This topology can deliver the fabric's transport services at either Layer 2 or Layer 3:
  1. An L2 switched fabric of leaf and spine switches acts as one large switch for massive virtual environments (high-performance computing and private clouds). A popular switching-fabric product is Cisco FabricPath, which provides highly scalable L2 multipath networks without STP. It gives you design freedom by letting you spread the VLANs for Management, vMotion, or FT logging anywhere in the fabric; its disadvantages are that the size of the fabric is limited and that only single-vendor fabric switching products are supported.
  2. An L3 switched fabric can mix L3-capable switching products from different vendors. The point-to-point L2S links act as routed uplinks running a dynamic routing protocol such as OSPF, IS-IS, or iBGP. An addressing sketch for this case follows.
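For the L3 option, a common pattern (sketched below with hypothetical switch names and the 10.0.0.0/24 range chosen purely for illustration) is to number every point-to-point L2S uplink out of its own /31, and the routing protocol then peers across each link:

```python
# Addressing sketch for an L3 leaf-spine fabric: each point-to-point L2S
# uplink gets its own /31, over which OSPF, IS-IS, or iBGP would peer.
from ipaddress import IPv4Network

LEAVES = [f"leaf-{i}" for i in range(1, 9)]
SPINES = [f"spine-{i}" for i in range(1, 5)]

p2p_pool = IPv4Network("10.0.0.0/24").subnets(new_prefix=31)
for leaf in LEAVES:
    for spine in SPINES:
        link = next(p2p_pool)
        leaf_ip, spine_ip = link          # a /31 holds exactly two addresses
        print(f"{leaf} {leaf_ip} <-> {spine_ip} {spine}  ({link})")
```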
     Because network virtualization is a critical part of the approach, it is important to consider the physical-to-virtual (P2V) networking requirements: connectivity and uplinks. An IP-based physical fabric must therefore have the following characteristics:
  1. Simplicity of network design: an identical baseline configuration (NTP, SNMP, AAA, ...) and a central management scheme for configuring all switches, physical or virtual (see the first sketch after this list).
  2. Scalability of infrastructure: depends heavily on the server and storage equipment and the traffic they generate, the total number of uplink ports, the link speeds, and the network bandwidth.
  3. High bandwidth (HBW): racks usually host different workloads, so the total server-facing connections can cause oversubscription on the leaf (ToR) switches; the oversubscription ratio equals total downlink bandwidth divided by aggregate uplink bandwidth. In addition, each rack's leaf switch must have the same number of uplinks to every spine switch, to avoid hotspots (see the capacity sketch below).
  4. Fault-tolerant transport: using more spine switches reduces the impact of a failure on the fabric. Because of the multipath structure of the L2S connectivity, each additional spine switch carries a smaller share of the total capacity, so a single switch failure takes away less network capacity. For maintenance operations on a switch, changing the routing metrics drains traffic from the device and ensures it flows only over the remaining uplinks.
  5. Differentiated Quality of Service (QoS): based on the SLA, each traffic type (management, storage, and tenant) has different characteristics, such as throughput volume, sensitivity of the data, and where it is stored. QoS values are set by the hypervisors, and the physical fabric switching infrastructure must trust them as reliable values; network congestion is handled by queuing and prioritization, and no re-classification is required at the server-to-leaf switch ports. vSphere Distributed Switches (VDS) support both L2 QoS (Class of Service) and L3 QoS (DSCP marking), and for VXLAN networking the QoS values are copied from the internal packet header to the VXLAN-encapsulated outer header (see the marking sketch below).
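To make point 1 concrete, here is a toy sketch that renders one identical baseline (NTP, SNMP, AAA) for every switch from a central definition; the key/value syntax is generic pseudo-configuration, not any specific vendor's CLI, and the addresses are made up:

```python
# Central source of truth for the baseline every switch must share.
BASELINE = {
    "ntp server": "172.16.11.251",          # illustrative addresses
    "snmp-server community": "sddc-ro",
    "aaa server": "172.16.11.252",
}

switches = [f"leaf-{i}" for i in range(1, 9)] + [f"spine-{i}" for i in range(1, 5)]

def render(name: str) -> str:
    # Only the hostname differs; the rest of the baseline is identical.
    return "\n".join([f"hostname {name}"] + [f"{k} {v}" for k, v in BASELINE.items()])

for sw in switches[:2]:
    print(render(sw), end="\n\n")
```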
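Points 3 and 4 reduce to simple arithmetic. The sketch below uses assumed numbers (19 hosts per rack with dual 10 GbE NICs, four 40 GbE uplinks per leaf, one to each spine) rather than figures from the guide:

```python
# Oversubscription ratio on a leaf = total downlink BW / aggregate uplink BW.
servers_per_rack = 19
server_nic_gbps = 2 * 10                      # dual 10 GbE per host (assumed)
uplinks_per_leaf = 4                          # one uplink per spine
uplink_gbps = 40

downlink_bw = servers_per_rack * server_nic_gbps   # 380 Gbit/s
uplink_bw = uplinks_per_leaf * uplink_gbps         # 160 Gbit/s
print(f"oversubscription: {downlink_bw / uplink_bw:.2f}:1")   # 2.38:1

# With N spines and equal uplinks to each, one spine failure removes 1/N
# of the leaf-to-spine capacity, so more spines mean smaller failure domains.
for n_spines in (2, 4, 8):
    print(f"{n_spines} spines -> one failure costs {100 / n_spines:.0f}% of capacity")
```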
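Finally, the VXLAN behavior in point 5 can be modeled roughly as below. This is a simplified sketch of the header-field copy only, not the hypervisor's actual encapsulation code; the addresses and VNI are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class IPHeader:
    src: str
    dst: str
    dscp: int              # 6-bit DSCP value, e.g. 46 = EF (Expedited Forwarding)

@dataclass
class VXLANPacket:
    outer: IPHeader        # VTEP-to-VTEP transport header
    vni: int               # VXLAN Network Identifier
    inner: IPHeader        # original tenant packet

def encapsulate(inner: IPHeader, src_vtep: str, dst_vtep: str, vni: int) -> VXLANPacket:
    # Copy the inner DSCP marking to the outer header, so the physical
    # fabric keeps honoring the QoS value set by the hypervisor.
    return VXLANPacket(outer=IPHeader(src_vtep, dst_vtep, dscp=inner.dscp),
                       vni=vni, inner=inner)

pkt = encapsulate(IPHeader("10.1.1.5", "10.1.1.9", dscp=46),
                  src_vtep="192.168.50.11", dst_vtep="192.168.50.12", vni=5001)
print(f"inner DSCP={pkt.inner.dscp} -> outer DSCP={pkt.outer.dscp}")
```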
