Required bandwidth rising toward 100 gigabits per second / R&M provides information on new network architectures
Wetzikon, August 19, 2014. Today’s dynamic applications and traffic patterns require a rethink of the three-tier data center architecture. Until now, structured cabling tended to be designed for static applications. The need to rethink the current architecture has been triggered on the one hand by the increase in server virtualization and on the other by the demand for shorter latency times for real-time applications. The latter include Voice over IP (VoIP), Unified Communications (UC), Video Conferencing, Video on Demand (VoD), cloud-based CAD and the high-frequency trading used in electronic financial markets. Swiss cabling specialist R&M now provides information about the technical interconnections and the organizational consequences in its Data Center Manual.
Data center networks are traditionally structured hierarchically. They consist of an access/storage layer, whose switches connect desktops, servers and storage resources, plus two further layers: an aggregation/distribution layer, where switches aggregate the data flows of the access/storage layer, and a core layer that serves as the network backbone.
Thomas Wellinger, Market Manager Data Center at R&M: “In the late 1990s, these three-layer networks overcame the capacity bottlenecks of two-layer networks, which lacked the aggregation layer. In the meantime, they have become an obstacle to virtualization.”
For example, ten virtual servers can be operated on one physical computer and shifted from hardware to hardware as needed. In other words, where a network used to have to manage data traffic for 1,000 servers, it now has to do so for 10,000 virtual machines. In addition, virtual machines can be moved automatically to other physical computers without interrupting the applications running on them, which makes a favorable load distribution possible.
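The scaling effect described above can be sketched in a few lines (the server and VM counts are the illustrative figures from the text, not measurements):

```python
# Illustrative arithmetic: how server virtualization multiplies the number
# of endpoints the network must carry traffic for.
PHYSICAL_SERVERS = 1_000
VMS_PER_SERVER = 10  # ten virtual servers per physical computer

virtual_machines = PHYSICAL_SERVERS * VMS_PER_SERVER
print(f"Endpoints before virtualization: {PHYSICAL_SERVERS}")
print(f"Endpoints after virtualization:  {virtual_machines}")  # 10000
```

The tenfold jump in endpoints, combined with VM migration, is what puts pressure on the hierarchical design.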
In a traditional network architecture, it is disadvantageous if two virtual servers that used to communicate directly via the backplane or a top-of-rack switch can only reach each other via multiple network nodes after a migration. That is why the number of hierarchical layers should be reduced, and a mesh structure is preferable to a star or tree structure for the network. “Network fabrics” is the catchword now. It describes a structure of this kind with equal-ranking peer nodes which, by means of a management logic, appear as a single big switch.
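The hop-count penalty of the hierarchy can be illustrated with a toy comparison (idealized topologies assumed for illustration, not taken from the text):

```python
# Toy hop comparison: traffic between servers in different racks crosses
# more switches in a three-tier tree than in a flat, mesh-like fabric.
three_tier_path = ["access", "aggregation", "core", "aggregation", "access"]
fabric_path = ["leaf", "spine", "leaf"]  # leaf-spine fabric: one hop via any spine

print(f"Switches traversed in three-tier tree: {len(three_tier_path)}")  # 5
print(f"Switches traversed in flat fabric:     {len(fabric_path)}")      # 3
```

Each extra switch in the path adds latency, which is exactly what the real-time applications listed earlier cannot tolerate.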
The introduction of single root I/O virtualization (SR-IOV) must be seen in this context. With SR-IOV, for instance, a network card can present its I/O channels to the system as if there were multiple cards. SR-IOV is therefore a way to eliminate I/O bottlenecks in virtualized server environments.
The use of SR-IOV has far-reaching consequences. Until now, the I/O performance of a virtualized system has been limited to about 3 to 4 gigabits per second because all transmission processes ran through the hypervisor. In continuous operation, performance was usually less than 1 gigabit per second, so a dual 1 GbE connection was sufficient. With SR-IOV, the possible I/O performance of a virtualized system is around 20 to 30 gigabits per second because the hypervisor is largely relieved of transmission tasks. There are server blades that have to be connected with two 10 GbE links each to reach their potential.
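The link-sizing argument above can be sketched as a small calculation (the throughput figures are those quoted in the text; the helper function is hypothetical):

```python
# Sketch of the NIC sizing argument: without SR-IOV, sustained throughput
# stayed near 1 Gbit/s, so dual 1 GbE sufficed; with SR-IOV a blade can
# drive 20-30 Gbit/s and needs dual 10 GbE links.
def links_needed(required_gbps: float, link_speed_gbps: float) -> int:
    """Smallest number of links whose combined speed covers the requirement."""
    links = 1
    while links * link_speed_gbps < required_gbps:
        links += 1
    return links

print(links_needed(1, 1))    # sustained pre-SR-IOV load on 1 GbE -> 1 link
print(links_needed(20, 10))  # SR-IOV-enabled blade on 10 GbE     -> 2 links
```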
In addition, all switches have to be high-performance devices capable of carrying out all functions in the network. R&M still sees a justification for conventional hierarchical solutions. As long as servers are equipped with 1 GbE LAN interfaces, i.e. connected to top-of-rack switches at 1 gigabit per second, the performance required at the next higher hierarchical level can be achieved with 10 GbE or, depending on the number of servers, by aggregating multiple connections. However, the uplink ports are overloaded if server bandwidth is increased to 10 gigabits per second and aggregation takes place in blade systems that want to transmit toward the core at 40 or even 100 gigabits per second. In these cases, a change from a three-tier to a two-tier switching architecture is necessary to achieve the requisite uplink performance in the core.
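The uplink argument can be made concrete with a rough oversubscription check (the rack sizes below are hypothetical; the link speeds are those discussed in the text):

```python
# Rough oversubscription check: aggregate server-facing bandwidth at a
# switch versus its uplink capacity toward the next layer.
def oversubscription(servers: int, server_gbps: float, uplink_gbps: float) -> float:
    """Ratio of total downlink bandwidth to uplink bandwidth."""
    return servers * server_gbps / uplink_gbps

# 40 servers at 1 GbE feeding a 10 GbE uplink: a manageable 4:1 ratio.
print(oversubscription(40, 1, 10))   # 4.0
# The same rack at 10 GbE per server against a 40 GbE uplink: 10:1,
# which overloads the uplink and motivates the flatter two-tier design.
print(oversubscription(40, 10, 40))  # 10.0
```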
R&M provides thorough information in its Data Center Manual about three-tier and two-tier network concepts, their advantages and disadvantages, as well as migration possibilities. Anyone interested can download the 190-page manual as an e-book free of charge from the company’s Internet site (www.rdm.com).