Server Placement

Within a campus network, servers may be placed locally in the Building Access or Building Distribution layer, or attached directly to the Campus Core. Centralized servers are typically grouped into a server farm located in the Enterprise Campus or in a separate data center.

Servers Directly Attached to Building Access or Building Distribution Layer Switches

If a server is local to a certain workgroup that corresponds to one VLAN, and all workgroup members and the server are attached to a Building Access layer switch, most of the traffic to the server stays local to the workgroup. If required, an access list at the Building Distribution layer switch can hide these servers from the rest of the enterprise. In some midsize networks, building-level servers that communicate with clients in different VLANs, but that are still within the same physical building, can be connected to Building Distribution layer switches.
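
As a rough illustration, the following Cisco IOS sketch hides a workgroup server subnet at the Building Distribution switch; the addressing (building clients in 10.1.0.0/16, servers in 10.1.10.0/24 on VLAN 10) and the access-list name are hypothetical.

    ! Hypothetical addressing: building clients in 10.1.0.0/16,
    ! workgroup servers in 10.1.10.0/24 (VLAN 10).
    ip access-list extended HIDE-LOCAL-SERVERS
     ! Clients inside the building may reach the workgroup servers
     permit ip 10.1.0.0 0.0.255.255 10.1.10.0 0.0.0.255
     ! The rest of the enterprise cannot reach the servers
     deny   ip any 10.1.10.0 0.0.0.255
     ! All other traffic is unaffected
     permit ip any any
    !
    ! Applied outbound on the server VLAN interface, so only
    ! permitted traffic is routed toward the servers.
    interface Vlan10
     ip access-group HIDE-LOCAL-SERVERS out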

Servers Directly Attached to the Campus Core

The Campus Core is designed to transport traffic quickly, without imposing policy restrictions. Servers in a medium-sized campus can be connected directly to Campus Core switches, which places the servers closer to the users than if they were in a Server Farm, as illustrated in the figure below. However, port density on Campus Core switches is typically limited. Policy-based control (QoS and access control lists [ACL]) for accessing the servers is implemented in the Building Distribution layer, rather than in the Campus Core.

Figure: Servers Attached Directly to the Campus Core

Servers in a Server Farm Module

Larger enterprises may have moderate or large server deployments. For enterprises with moderate server requirements, common servers are located in a separate Server Farm module connected to the Campus Core layer through multilayer server distribution switches, as illustrated in the figure below. Because of the high traffic load, the servers are usually Gigabit Ethernet–attached to the Server Farm switches. Access lists on the Server Farm module's multilayer distribution switches implement controlled access to these servers. Redundant distribution switches in the Server Farm module, together with solutions such as the Hot Standby Router Protocol (HSRP) and the Gateway Load Balancing Protocol (GLBP), provide fast failover. The Server Farm module distribution switches also keep all server-to-server traffic off the Campus Core.
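
For instance, a minimal HSRP sketch for one of two redundant Server Farm distribution switches might look like the following; the VLAN number, addresses, and timers are hypothetical, and the peer switch would be configured the same way with a lower standby priority.

    interface Vlan100
     description Server Farm VLAN (hypothetical addressing)
     ip address 10.2.100.2 255.255.255.0
     ! 10.2.100.1 is the virtual gateway address shared by both switches
     standby 100 ip 10.2.100.1
     ! The higher priority makes this switch the active gateway
     standby 100 priority 110
     standby 100 preempt
     ! Sub-second hello and hold timers speed up failover
     standby 100 timers msec 250 msec 750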

Figure: Server Farm Module Connected to the Campus Core

Rather than being installed on only one server, modern applications are distributed among several servers. This approach improves application availability and responsiveness. Therefore, placing servers in a common group (in the Server Farm module) and using intelligent multilayer switches provide the applications and servers with the required scalability, availability, responsiveness, throughput, and security. For a large enterprise with a significant number of servers, a separate data center, possibly in a remote location, is often implemented. Design considerations for an Enterprise Data Center are discussed in the later “Enterprise Data Center Design Considerations” section.

Server Farm Design Guidelines

As shown in the figure below, the Server Farm can be implemented as a high-capacity building block attached to the Campus Core using a modular design approach. One of the main concerns with the Server Farm module is that it receives the majority of the traffic from the entire campus. Because the uplink ports on the switches are frequently oversubscribed, random frame drops can result. To ensure that business-critical applications do not suffer such drops, the network designer should apply QoS mechanisms to the server links.
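
The exact QoS configuration is platform dependent, but a minimal Modular QoS CLI (MQC) sketch along these lines could reserve bandwidth for critical traffic on a server or uplink port; the class names, DSCP marking, percentages, and interface are hypothetical.

    ! Hypothetical: business-critical traffic arrives already marked DSCP AF31.
    class-map match-all CRITICAL-APPS
     match dscp af31
    !
    policy-map SERVER-UPLINK
     class CRITICAL-APPS
      ! Guarantee this class a share of the link during congestion
      bandwidth percent 40
     class class-default
      ! Best-effort traffic shares the remainder
      bandwidth percent 30
    !
    interface TenGigabitEthernet1/0/1
     service-policy output SERVER-UPLINK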

Figure: Sample Server Farm Design

The Server Farm design should ensure that the Server Farm uplink ports are not as oversubscribed as the uplink ports on the switches in the Building Access or Building Distribution layers. For example, if the campus consists of a few Building Distribution layers connected to the Campus Core layer with Gigabit Ethernet, attach the Server Farm module to the Campus Core layer with either 10-Gigabit Ethernet or multiple Gigabit Ethernet links. The switch performance and the bandwidth of the links from the Server Farm to the Campus Core are not the only considerations; you must also evaluate the servers' capabilities. Although server manufacturers support a variety of NIC connection rates (such as Gigabit Ethernet), the underlying network operating system might not be able to transmit at the maximum line rate. In that case, oversubscription ratios can be raised, reducing the Server Farm's overall cost.
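
As a rough worked example with hypothetical numbers: a Server Farm switch with 48 Gigabit-attached servers and two 10-Gigabit uplinks to the core has a nominal oversubscription ratio of 48:20, or 2.4:1. If each server's operating system can realistically sustain only about 400 Mbps, the effective offered load is about 19 Gbps rather than 48 Gbps, so those uplinks are barely oversubscribed in practice, and a designer might even accept a single 10-Gigabit uplink (4.8:1 nominal, roughly 2:1 effective) to reduce cost.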

Server Connectivity Options

Servers can be connected in several different ways. For example, a server can attach via one or two Fast Ethernet connections. If the server is dual-attached (dual-NIC redundancy), one interface can be active while the other is in hot standby. Installing multiple single-port NICs or multiport NICs in the servers extends dual homing past the Server Farm module switches to the servers themselves. Servers needing redundancy can be connected with dual-NIC homing in the access layer or with a NIC that supports EtherChannel. With dual-homed NICs, a VLAN or trunk is needed between the two access switches so that the single IP address on the server's two links to the two separate switches remains reachable.
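
As an illustration, here is a minimal sketch of the switch side of an EtherChannel-attached server; the interface numbers, VLAN, and channel-group number are hypothetical, and the server NIC is assumed to support 802.3ad (LACP) link aggregation.

    ! Two switch ports bundled toward a server NIC that supports LACP.
    interface range GigabitEthernet1/0/1 - 2
     switchport mode access
     switchport access vlan 100
     channel-group 10 mode active
    !
    ! The logical bundle carries the same access-port settings.
    interface Port-channel10
     switchport mode access
     switchport access vlan 100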

Within the Server Farm module, multiple VLANs can be used to create multiple policy domains as required. If one particular server has a unique access policy, a unique VLAN and subnet can be created for that server. If a group of servers has a common access policy, the entire group can be placed in a common VLAN and subnet. ACLs can be applied on the interfaces of the multilayer switches.
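
As a brief sketch of the per-group pattern, the following creates a dedicated VLAN and subnet for a server group on the multilayer switch and attaches a policy ACL; the VLAN number, addressing, and ACL name are hypothetical, and the ACL itself would be defined along the lines of the earlier example.

    ! Hypothetical: servers with a common access policy are isolated
    ! in VLAN 210 / 10.2.210.0/24.
    vlan 210
     name RESTRICTED-SERVERS
    !
    interface Vlan210
     ip address 10.2.210.1 255.255.255.0
     ! The group's common policy is enforced at the VLAN interface
     ip access-group RESTRICTED-SERVER-POLICY out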

Several other solutions are available to improve server responsiveness and distribute the load evenly among the servers. For example, the Server Farm design in the figure above includes content switches that provide a robust front end for the Server Farm by performing functions such as load balancing of user requests across the servers to achieve optimal performance, scalability, and content availability.
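
Content switches are typically dedicated appliances, but the concept can be illustrated with Cisco IOS Server Load Balancing (IOS SLB); in this hypothetical sketch, client requests to one virtual address are distributed across two real servers. All names and addresses here are assumptions.

    ! Two real web servers behind a single virtual address.
    ip slb serverfarm WEBFARM
     real 10.2.100.11
      inservice
     real 10.2.100.12
      inservice
    !
    ip slb vserver WWW
     ! Clients connect to the virtual address; SLB picks a real server
     virtual 10.2.1.100 tcp www
     serverfarm WEBFARM
     inservice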

The Effect of Applications on Switch Performance

Server Farm design requires that you consider both the average rate at which applications generate packets and the average packet size. These parameters are based on the traffic patterns of the enterprise applications and the number of users of those applications. Interactive applications, such as conferencing, tend to generate high packet rates with small packet sizes. For such traffic, the packets-per-second forwarding limit of the multilayer switches might be more critical than their throughput (in Mbps). In contrast, applications that involve large movements of data, such as file repositories, transmit a high percentage of full-length (large) packets. For these applications, uplink bandwidth and oversubscription ratios become key factors in the overall design. Actual switching capacities and bandwidths vary based on the mix of applications.
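
As a rough worked example: at Gigabit Ethernet line rate, minimum-size 64-byte frames (each occupying a further 20 bytes of preamble and interframe gap on the wire) arrive at roughly 1.49 million packets per second, whereas maximum-size 1518-byte frames arrive at only about 81,000 packets per second. A switch that comfortably forwards a bulk file-transfer workload may therefore still be undersized, in packets-per-second terms, for small-packet interactive traffic at the same bit rate.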
