In a traditional rack mount vSphere design, a host would have several NICs. A pair of them would often be dedicated to vMotion in one way or another – either paired with management traffic in a flip flop manner, or isolated completely. It was assumed that the northbound switching infrastructure would handle transporting the traffic to another host. Often, a TOR switch (top of rack) would be the only hop necessary for transporting the vMotion Ethernet frames because the cabinet contained all of the vSphere hosts inside. In larger designs, where servers spanned multiple cabinets, the data may have to ride up into the aggregation (formerly known as distribution) layer – or sometimes all the way up to the core.

Now, enter converged infrastructure and the use of either chassis switches (as with an HP c7000 BladeSystem) or domain switches (as with Cisco UCS Fabric Interconnects). In these situations, there exists the ability to keep the vMotion traffic from ever egressing the chassis (for intra-chassis blade communication) or domain (for intra-domain blade communication).

It can be difficult to picture all the pieces and parts, so I have created a visualization of the common design used for Cisco UCS. The design varies slightly for the two use cases, so I will focus on the simple fact of pinning traffic for now. The vMotion port group contains two uplinks – vmnic0 and vmnic1. It doesn’t matter if these are the only two uplinks presented, such as with a 2 vNIC design, or if there are many vNICs.
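As a rough sketch of the flip-flop pairing described above, the following esxcli commands show one way to pin a vMotion port group to vmnic1 (with vmnic0 as standby) while the management port group uses the opposite order. The vSwitch, port group, and vmkernel interface names here (vSwitch0, vMotion, Management Network, vmk1) are assumptions for illustration – substitute your own.

```shell
# Assumed names: vSwitch0 carries both port groups; vmk1 is the vMotion vmkernel port.
# Management Network: vmnic0 active, vmnic1 standby.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# vMotion: flip-flopped – vmnic1 active, vmnic0 standby.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="vMotion" \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0

# Tag the vmkernel interface for vMotion traffic.
esxcli network ip interface tag add -i vmk1 -t VMotion
```

With this ordering, each traffic type rides a different uplink during normal operation, but either one can fail over to the other NIC.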