VMware SDDC Design Considerations - PART Five: Physical Design
Sunday, February 24, 2019

- Distribute all computing resources (workload ESXi hosts) across the racks dedicated to the compute pods. Remember to measure the overhead of burst workloads and their resource requirements. Although performance metrics are usually based on average usage of computing resources, to improve the efficiency of your SDDC you should plan for peak-load situations when calculating the exact hardware required for each compute pod rack (a CPU sizing sketch follows this list).
- It's better to apply the same approach to the storage pods if you don't use the VMware vSAN solution and instead provide a set of SAN storage arrays for your SDDC.
- Management and Edge pods can be deployed inside the same racks. Also reserve some space in these racks for spine switches and other connectivity devices, especially the external connections of the Edge services. Note that if you place these two pod types in the same racks, the VLANs for DMZ and WAN connections span the resources of both pods, so some security limitations must be respected; for example, never present the Edge VLANs to the management pod hosts if you do not need them there. Otherwise, VM power users or virtual admins who can change a VM's network connectivity may, intentionally or not, connect a critical management service of the SDDC to an outside network and break the security policies (see the VLAN audit sketch after this list).
- Provide at least dual power supplies for each critical device, such as storage arrays and servers, and give each rack two separate power feeds to increase the overall availability of the SDDC. It's also recommended to use dual UPS units and power generators in the datacenter electricity infrastructure. (Check the best-practices documentation for each type of rack and UPS you use; a quick availability calculation follows this list.)
- Try to provision identical hardware configurations (or at least a similar CPU family) for the ESXi hosts, to balance compute and to give every VM access to the same resources across the SDDC. This also enables better virtualization performance and limits the issues caused by a single server problem. It's generally better to use more blade servers instead of a few tower/standalone or other very powerful server types when provisioning the SDDC's physical servers (see the CPU uniformity check after this list).
- VMware vSAN can provide better distribution of storage resources and load-balance disk usage across all the hosts inside a vSAN deployment. Consult the vSAN design documentation if you want to deploy it in your SDDC.
- Calculate the total physical memory required before provisioning the physical servers/blades for the SDDC. Testing and analyzing the memory usage of a sample of VMs running end-user applications and network services is a good way to estimate the average memory usage and the required RAM modules (see the memory sizing sketch after this list).
- We can use SD cards or USB devices (4 GB or more) to install VMware ESXi on, instead of local disks. They can be replaced quickly and easily, and there is no dependency on local disk arrays.
- The physical networking design follows the leaf-spine architecture and should be fully compliant with this design method (we reviewed it in Part 4 of this series), so you just need to adhere to the simplicity and scalability factors in your SDDC physical networking design (see the oversubscription calculation after this list).
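To make the peak-load sizing idea from the first item concrete, here is a minimal sketch in Python. All of the numbers (per-VM peak demand, host capacity, target utilization, hosts per rack) are hypothetical placeholders; substitute the figures you measure in your own environment.

```python
import math

# --- Hypothetical inputs: replace with values measured in your environment ---
vm_count = 200                 # workload VMs planned for the compute pods
peak_ghz_per_vm = 1.2          # measured PEAK (not average) CPU demand per VM, in GHz
host_sockets = 2
cores_per_socket = 16
ghz_per_core = 2.4             # base clock of the chosen CPU family
target_utilization = 0.70      # size for peaks, keep headroom for bursts/failover
hosts_per_rack = 12            # hosts that fit in one compute pod rack

host_capacity_ghz = host_sockets * cores_per_socket * ghz_per_core
usable_ghz_per_host = host_capacity_ghz * target_utilization

total_peak_demand_ghz = vm_count * peak_ghz_per_vm
hosts_needed = math.ceil(total_peak_demand_ghz / usable_ghz_per_host) + 1  # +1 for N+1 redundancy
racks_needed = math.ceil(hosts_needed / hosts_per_rack)

print(f"Total peak CPU demand : {total_peak_demand_ghz:.0f} GHz")
print(f"Usable capacity/host  : {usable_ghz_per_host:.0f} GHz")
print(f"ESXi hosts needed     : {hosts_needed} (incl. N+1)")
print(f"Compute pod racks     : {racks_needed}")
```

The point of sizing against peak rather than average demand is exactly the efficiency argument above: a pod that looks half-idle on averages can still be starved during bursts.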
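To help enforce the "no Edge VLANs on management hosts" rule from the third item, a sketch like the following could audit the standard vSwitch port groups on the management pod hosts. The vCenter address, credentials, naming convention, and Edge VLAN IDs are all placeholder assumptions, and distributed switches would need a different query; treat this as a starting point, not a complete tool.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical values: replace with your vCenter and your Edge/DMZ/WAN VLAN IDs.
VCENTER, USER, PWD = "vcenter.example.local", "audit@vsphere.local", "***"
EDGE_VLANS = {100, 110, 120}   # VLAN IDs that must never appear on management hosts

ctx = ssl._create_unverified_context()   # lab only; use valid certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Only check management pod hosts (here matched by a naming convention).
        if not host.name.startswith("mgmt-"):
            continue
        for pg in host.config.network.portgroup:   # standard vSwitch port groups
            if pg.spec.vlanId in EDGE_VLANS:
                print(f"VIOLATION: {host.name} carries Edge VLAN "
                      f"{pg.spec.vlanId} on port group '{pg.spec.name}'")
finally:
    Disconnect(si)
```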
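A rough feel for why the dual power feeds in the fourth item matter: with independent feeds, the combined unavailability is the product of the individual ones. The feed availability figure below is an illustrative assumption, not a measured or vendor value.

```python
# Illustrative availability figure (an assumption, not vendor data).
feed_availability = 0.999                  # one power feed: ~8.8 hours downtime/year

single = feed_availability
dual = 1 - (1 - feed_availability) ** 2    # both independent feeds must fail together

hours_per_year = 24 * 365
print(f"Single feed : {single:.6f} -> {(1 - single) * hours_per_year:.2f} h downtime/yr")
print(f"Dual feeds  : {dual:.6f} -> {(1 - dual) * hours_per_year:.4f} h downtime/yr")
```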
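For the fifth item, a quick way to confirm that the hosts really share the same CPU family is to group them by their reported CPU model. This sketch reuses the same hypothetical pyVmomi connection as the VLAN audit above.

```python
import ssl
from collections import defaultdict
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PWD = "vcenter.example.local", "audit@vsphere.local", "***"  # placeholders

ctx = ssl._create_unverified_context()   # lab only
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    models = defaultdict(list)
    for host in view.view:
        hw = host.summary.hardware
        models[(hw.cpuModel, hw.numCpuCores)].append(host.name)
    for (model, cores), hosts in models.items():
        print(f"{model} ({cores} cores): {len(hosts)} host(s)")
    if len(models) > 1:
        print("WARNING: mixed hardware detected; consider EVC for vMotion compatibility.")
finally:
    Disconnect(si)
```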
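The memory estimate from the seventh item can follow the same pattern as the CPU sizing above. Again, every number here is a placeholder to be replaced with your own measurements from the sample VMs.

```python
import math

# Hypothetical measured inputs.
vm_count = 200
avg_gb_per_vm = 8               # average active memory measured on sample VMs
virtualization_overhead = 0.10  # headroom for per-VM/hypervisor memory overhead
host_ram_gb = 512               # RAM per planned host/blade
target_utilization = 0.80       # keep room for HA failover and usage spikes

total_ram_gb = vm_count * avg_gb_per_vm * (1 + virtualization_overhead)
usable_per_host = host_ram_gb * target_utilization
hosts_needed = math.ceil(total_ram_gb / usable_per_host) + 1   # +1 for N+1

print(f"Total RAM demand : {total_ram_gb:.0f} GB")
print(f"Hosts needed     : {hosts_needed} (incl. N+1)")
```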
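Finally, for the leaf-spine design in the last item, the main scalability figure to watch is the leaf oversubscription ratio (total downlink bandwidth versus total uplink bandwidth). The port counts and speeds below are a hypothetical example of a common leaf configuration.

```python
# Hypothetical leaf switch: 48 x 25G server-facing ports, 6 x 100G spine uplinks.
downlink_gbps = 48 * 25
uplink_gbps = 6 * 100

ratio = downlink_gbps / uplink_gbps
print(f"Downlink: {downlink_gbps} Gbps, Uplink: {uplink_gbps} Gbps")
print(f"Oversubscription ratio: {ratio:.1f}:1")   # 3:1 or lower is a common target
```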