Continuing with the fourth part of the SDDC Design series (based on the VMware Validated Design Reference Architecture Guide), we will take a closer look at the Leaf-Spine model and its topology characteristics. Before explaining it, an important question must be answered: why is the two-tier Leaf-Spine model recommended for datacenter network design instead of the three-tier Core-Aggregation-Access model (or, as Cisco names it, Core-Distribution-Access)? The first and main reason is simplicity of design, especially for communication inside the datacenter. Networking terminology defines two traffic patterns, North-South and East-West. Let's see what they are.
From the perspective of datacenter design, North-South traffic means traffic entering and leaving the datacenter, exchanged with the outside world: servers in branch offices, secondary sites, requests from client subnets and regions, and other parts of the network infrastructure. In contrast, East-West traffic stays inside the datacenter and consists of server-to-server communication; East-West traffic never leaves the datacenter, while North-South traffic always does. For North-South traffic, the traditional three-tier model is still a good choice, because you may need to establish and maintain redundancy at each network layer to withstand failures. Fully redundant connections, however, are not essential for the links inside the datacenter, and that is the true role of Leaf-Spine uplinks: they need high bandwidth and throughput more than they need failure resilience or full-mesh switch connectivity (there is no need for spine-to-spine or leaf-to-leaf uplinks). We must also consider the role of STP, which blocks one of two redundant uplinks to prevent network loops; so within each tier of the datacenter, simplicity and bandwidth growth matter more than full redundancy. Redundancy is important only in the Spine-Leaf connectivity.
In the Leaf-Spine architecture, think of the spine switches as a merger of the Core and Aggregation layers (a "distributed core") forming the heart of the structure, and the leaf switches as the Access layer for servers, especially hypervisors in virtualized environments. Instead of handling the total network load with one or two massive core switches, all traffic generated on the leaf uplinks is distributed across all spine switches. Scalability is therefore the second leading factor in datacenter network design.
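To make the scalability point concrete, here is a minimal sketch of the capacity math for a leaf switch. All port counts and speeds are hypothetical, chosen only to show how the oversubscription ratio falls out of the design:

```python
# Minimal sketch: leaf-spine capacity math (hypothetical port counts/speeds).

def leaf_oversubscription(server_ports, server_speed_gbps,
                          uplinks, uplink_speed_gbps):
    """Ratio of southbound (server-facing) to northbound (spine-facing) bandwidth."""
    downlink_bw = server_ports * server_speed_gbps
    uplink_bw = uplinks * uplink_speed_gbps
    return downlink_bw / uplink_bw

# Example: 48 x 10 GbE server ports, 4 x 40 GbE uplinks (one per spine).
ratio = leaf_oversubscription(48, 10, 4, 40)
print(f"Oversubscription: {ratio:.1f}:1")   # 3.0:1

# With one uplink to each of 4 spines, every leaf spreads its load across
# the whole spine layer, so no single "core" device carries all the traffic.
```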
Many characteristics affect the size of a datacenter and its rate of growth: for example, the total number of racks inside the fabric, the bandwidth provisioned between any two racks, and the type and speed of the connections between the leaf (acting as the ToR switch) and the spine. To design a better and more reliable datacenter network, it is important to implement the uplink connections based on ECMP. Equal-Cost Multi-Pathing lets traffic be transmitted across multiple equal paths, so all of them carry packets equally, giving the infrastructure better load balancing and more bandwidth. This structure also prevents load from concentrating on one or a few uplinks.
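In practice, ECMP spreads flows by hashing packet header fields so that each flow sticks to one path while different flows spread across all of them. The sketch below is only illustrative (real switches use vendor-specific hardware hash functions), but it shows the idea:

```python
# Minimal sketch of ECMP flow-based path selection (illustrative only;
# real switches use vendor-specific hardware hash functions).
import hashlib

def ecmp_pick_uplink(src_ip, dst_ip, src_port, dst_port, proto, uplinks):
    """Hash the flow's 5-tuple so every packet of a flow takes the same
    uplink, while different flows spread across all equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

uplinks = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(ecmp_pick_uplink("10.0.1.15", "10.0.2.40", 49152, 8000, "tcp", uplinks))
```

Because the hash is deterministic per flow, packets within a flow never arrive out of order, yet the set of flows as a whole balances across every equal-cost uplink.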
The type of services each rack provides should also make us think about how to keep the datacenter design scalable. For example, if you dedicate one rack to storage devices, increasing the number of hypervisors in the other racks can create a serious bottleneck, because you will need to provide new storage connectivity (iSCSI or FC HBAs, SAN switches, and so on), and there may be no room left for new equipment such as a storage enclosure. Jumbo frames are a great help if you can configure them end to end, so the MTU value on the virtual switches (VSS/VDS) and the physical devices must match (MTU 9000 is the usual choice).
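The benefit of jumbo frames is easy to quantify: a larger MTU means fewer frames, and therefore fewer per-frame headers, for the same payload. A small sketch, assuming the standard 18 bytes of Ethernet header and FCS per untagged frame:

```python
# Minimal sketch: frame-count overhead with standard vs. jumbo MTU.
# Assumes 18 bytes of Ethernet header/FCS per frame (no VLAN tag).
ETH_OVERHEAD = 18

def frames_needed(payload_bytes, mtu):
    """Number of frames and total bytes on the wire for a given payload."""
    frames = -(-payload_bytes // mtu)          # ceiling division
    return frames, payload_bytes + frames * ETH_OVERHEAD

payload = 1_000_000_000                         # 1 GB transfer
for mtu in (1500, 9000):
    frames, wire = frames_needed(payload, mtu)
    print(f"MTU {mtu}: {frames} frames, {wire} bytes on the wire")

# MTU 9000 needs ~6x fewer frames, cutting header overhead and per-packet
# processing, but only if every device end to end agrees on the MTU.
```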
Or, if you need to expand your compute pools with new CPU cores, what will you do when you need more physical servers? Before allocating racks to host specific workloads, services, or storage, you should estimate the growth rate of your datacenter, especially its hardware resources and physical equipment, to satisfy future service-delivery needs. A rough projection like the one sketched below can help.
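The starting numbers and the 30% yearly growth below are made up purely to show the arithmetic of such a projection:

```python
# Minimal sketch: projecting rack needs from an assumed growth rate.
# All starting values and the 30% yearly growth are hypothetical.
import math

hosts_now = 64            # current hypervisor count
hosts_per_rack = 16       # servers that fit in one rack
yearly_growth = 0.30      # expected growth rate of the compute pool

for year in range(1, 4):
    hosts = math.ceil(hosts_now * (1 + yearly_growth) ** year)
    racks = math.ceil(hosts / hosts_per_rack)
    print(f"Year {year}: ~{hosts} hosts -> {racks} racks")
```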
Finally, always remember to consider the maximum rate and the average bandwidth of the different traffic types inside a virtual environment, such as vMotion, vSAN, NFS, VXLAN, and vSphere Replication (VR), to calculate the datacenter infrastructure requirements accurately.
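A simple way to start that calculation is to add up the expected average figures per traffic type and note the largest burst. The Gbps numbers below are hypothetical placeholders, not VMware guidance; only the arithmetic matters:

```python
# Minimal sketch: sizing host uplinks from per-traffic-type estimates.
# All Gbps figures below are hypothetical placeholders, not VMware guidance.
traffic_gbps = {
    # type: (average, peak)
    "vMotion": (1.0, 8.0),
    "vSAN":    (2.0, 6.0),
    "NFS":     (1.0, 4.0),
    "VXLAN":   (1.5, 5.0),
    "VR":      (0.5, 2.0),
}

avg_total = sum(avg for avg, _ in traffic_gbps.values())
peak_burst = max(peak for _, peak in traffic_gbps.values())

print(f"Sustained load per host: ~{avg_total} Gbps")
print(f"Largest single burst:    ~{peak_burst} Gbps")

# Uplinks must carry the sustained total while leaving headroom for
# the largest bursty consumer (often vMotion).
```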