
Network Design

A reliable, maintenance-free wired or wireless network needs a solid foundation, so the original network design is of critical importance. Because a network is a long-term investment, the applications, protocols and systems it must support play an important role in deciding which type of network to install. Furthermore, the technical complexity involved in such a design, and the ever-changing nature of the computer industry, force companies to make use of specialists when designing their networks. With extensive experience in data, voice, video and process control networks, Core Networks is more than capable and willing to join forces with customers in designing, redesigning or upgrading their networks.

Guidelines

Gigabit Ethernet Layer 3 switches give you the flexibility to solve many problems that plague a wide range of network environments – desktops, LAN segments, server groups and the core infrastructure. In each of these areas, Extreme Networks switches provide smooth and cost-effective solutions for reducing congestion, increasing capacity, eliminating poor routing performance, and easing application overload. Ultimately, you will be able to focus on the applications that run on the network.

Enterprise centrally managed wireless controllers and access points form an integral part of any modern network. Meru Wireless Solutions offer a unique single-channel deployment that extends wireless as we know it and has revolutionised the market.

Over-subscribed links and blocking hardware architectures usually cause congestion and capacity problems. Poor routing performance refers to the limitations of software-based routing. And application overload is typically caused by a lack of quality of service on the network. An optimal network design cannot be achieved without fully understanding the impact of these problems.

Leading the Third Wave of LAN Switching

When designing a network, it is important to first identify the interconnection speed and capacity of devices. Speed and capacity are characterised by "over-subscription" and "blocking/non-blocking."

Over-subscription is one of the most important tools used to design networks. Over-subscription ratios deal specifically with points in a network where bottlenecks occur. The impact of improper over-subscription ratios is congestion, which causes packet loss. However, over-subscription is acceptable in some areas of the network. But what is considered "acceptable" has been fiercely debated in the desktop, segment, server-farm and core application spaces. In general, different aggregation points in the network have unique over-subscription requirements. So the acceptable ratio for desktop connectivity is different from the ratio for server connectivity.

Over-subscription is vital at the edge or in feeder branches of a network. In intermediate distribution frame (IDF) or wiring closet layouts, over-subscription defines the traffic model. By understanding over-subscription, you can identify and correct the adverse effects that congestion has on network performance. Over-subscription ratios are calculated by adding the potential bandwidth requirements of a particular path and dividing the total by the actual bandwidth of the path.    
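The calculation described above – total potential demand on a path divided by the path's actual bandwidth – can be sketched in a few lines. This is an illustrative example, not vendor tooling; the port counts and speeds are hypothetical.

```python
def oversubscription_ratio(port_speeds_mbps, uplink_mbps):
    """Ratio of potential bandwidth demand on a path to its actual bandwidth."""
    potential_demand = sum(port_speeds_mbps)
    return potential_demand / uplink_mbps

# Example: a wiring-closet switch with 24 desktop ports at 1 Gb/s
# feeding a single 10 Gb/s uplink
ratio = oversubscription_ratio([1000] * 24, 10_000)
print(f"{ratio:.1f}:1")  # 2.4:1
```

A 2.4:1 ratio may be perfectly acceptable for desktop connectivity, while the same ratio on a server-farm uplink could cause congestion – which is exactly why acceptable ratios differ by aggregation point.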

Over-Subscription

Device capacity, which can be limited by hardware architectures, is also crucial to network design. Hardware architectures can be blocking or non-blocking. Can a device handle the traffic load that all its interfaces can accept or generate? In terms of capacity, a blocking architecture cannot meet the full-duplex bandwidth requirements of its ports, which results in packet loss. Non-blocking means that a device’s internal capacity matches or exceeds the full-duplex bandwidth requirements of its ports, and will not drop packets due to architecture.
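The blocking/non-blocking test above is a simple comparison: can the switch fabric carry every port sending and receiving at line rate at the same time? A minimal sketch, with hypothetical port counts and fabric figures:

```python
def is_non_blocking(port_speeds_gbps, fabric_capacity_gbps):
    """A switch is non-blocking if its internal fabric matches or exceeds
    the full-duplex bandwidth of all its ports (each port can send AND
    receive at line rate, so demand is twice the sum of port speeds)."""
    full_duplex_demand = 2 * sum(port_speeds_gbps)
    return fabric_capacity_gbps >= full_duplex_demand

# 48 Gigabit ports need 96 Gb/s of fabric capacity to be non-blocking
print(is_non_blocking([1] * 48, 96))  # True
print(is_non_blocking([1] * 48, 64))  # False - a blocking architecture
```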

Despite industry confusion about the terms "blocking," "non-blocking" and "over-subscription," the true difference is where you apply the terms. Blocking describes the hardware architecture of a network device, which can limit its total capacity. Over-subscription describes characteristics of link capacity. Every switch in today's network should have a non-blocking architecture.

Although they both route packets, the integration of forwarding in hardware differentiates Layer 3 switches from traditional software-based routers. Several issues should be addressed when designing a routing infrastructure, including where to route and performance. Determining where to route is perhaps most challenging because variations in network design have a direct effect on routing performance. When considering a network design, it is important to remember that the constraints of traditional router performance have driven the model. This is not the case with today’s faster Layer 3 switching.

The advent of Layer 3 switching dramatically changed the way we design networks. Low-cost Layer 3 switches eliminate our dependence on expensive core routers because they can be deployed throughout the network – from desktops at the edge, to data centres at the core. And they bring with them wire-speed performance, whether switching at Layer 2 (hardware-based bridging) or Layer 3 (hardware-based routing). Wire-speed performance at Layer 3 means that network architects are no longer bound by the performance penalty usually associated with software-based routers. The increased latency once introduced at Layer 3 subnet boundaries does not exist with wire-speed Layer 3 switching. Layer 3 switches let you design networks that are driven by the physical flow of packets, instead of designing around the limitations of software-based routers and other slow network equipment. Ultimately, this allows you to focus on the logical organisation of your network and optimise the performance of applications that run on it.

Layer 3 technology lets you deploy routing throughout a network. Packets are routed at wire speed. Application performance improves significantly. But you still need control. And only quality of service can deliver it. QoS gives you the power to control network behaviour by prioritising applications, subnets and end stations, and guaranteeing them a specific amount of bandwidth. So, while a good network design can reduce the negative effects of over-subscription and blocking, QoS can deliver end-to-end control over traffic flows.

QoS

QoS is divided into two types – implicit and explicit.

Implicit QoS includes the characteristics that are inherent in your underlying network technology, such as speed or bandwidth. Its benefits are derived from the addition of bandwidth and reductions in over-subscription. Although you can add speed and bandwidth to improve the health of a network, you cannot control the behaviour of the network without explicit QoS. Controlling the behaviour of the network means that network managers can allocate limited resources (i.e. bandwidth) based on policies rather than congestion.

Explicit QoS is the act of regulating the flow of information on the network. It lets you guarantee bandwidth to mission-critical applications, set aside bandwidth for videoconferencing sessions, limit bandwidth to control file server access, prioritise backups and control congestion on over-subscribed links.

Today's QoS capabilities support explicit QoS with eight configurable queues per port on each switch. Without these queues, explicit QoS is impossible. Each queue is defined by multiple QoS profiles that control minimum and maximum bandwidth and relative priority. These profiles can be applied to ports, VLANs, subnets and applications.
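The profile described above – a name, a minimum and maximum bandwidth, and a relative priority – can be modelled as a simple data structure. The profile names and percentages here are hypothetical, not a vendor configuration:

```python
from dataclasses import dataclass

@dataclass
class QosProfile:
    """One QoS profile: guaranteed/ceiling bandwidth plus relative priority."""
    name: str
    min_bw_pct: int   # guaranteed share of link bandwidth
    max_bw_pct: int   # bandwidth ceiling
    priority: int     # relative priority, 0 (lowest) to 7 (highest)

# Hypothetical profiles mapped onto two of the eight per-port queues
profiles = [
    QosProfile("voice",   min_bw_pct=10, max_bw_pct=20,  priority=7),
    QosProfile("default", min_bw_pct=0,  max_bw_pct=100, priority=0),
]
for p in profiles:
    print(f"{p.name}: {p.min_bw_pct}-{p.max_bw_pct}%, priority {p.priority}")
```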

Ports. By applying priorities to individual input ports, 802.1D-1998 priority flags can be used to propagate QoS throughout the entire enterprise network – from the desktop to the core.

VLANs. This is QoS in a simple form. By applying a QoS profile to port-based VLANs, you can quickly control congestion where those VLANs are trunked through an uplink.

Subnets. Protocol-based VLANs can tie QoS to specific IP, IPX and AppleTalk subnets. This allows you to easily single out a specific group of users and provide them with explicit QoS.

Applications – TCP/UDP Port Numbers. The key to application-specific QoS is having visibility into the transport layer (Layer 4). This gives you control over the applications using individual port numbers. Applications that use well-known port numbers can have QoS profiles applied to them.
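The application-specific approach above amounts to a lookup from well-known transport-layer port numbers to QoS profiles. A minimal sketch, assuming a hypothetical profile naming scheme (the port-to-application mappings themselves are the standard well-known assignments):

```python
# Map well-known TCP/UDP destination ports to hypothetical QoS profile names
WELL_KNOWN_PORTS = {
    80:   "web",      # HTTP
    443:  "web",      # HTTPS
    25:   "mail",     # SMTP
    5060: "voice",    # SIP signalling
}

def classify(dst_port):
    """Pick a QoS profile from a packet's Layer 4 destination port."""
    return WELL_KNOWN_PORTS.get(dst_port, "default")

print(classify(443))   # web
print(classify(2049))  # default - not a configured well-known port
```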

The Core

Core switching is the most critical point in network design. The core supports the aggregate traffic load produced by the network. This is where many networks fail. Wire-speed routing and switching, massive capacity and scalability characterise requirements throughout the core. As a result, core devices should be non-blocking and have no over-subscription. There’s no point in surrounding the core with high-capacity feeders if the core cannot handle the traffic load.

Performance – The biggest challenge in the network core is routing performance. The core is where the most intelligent devices reside and is where most traffic converges. However, software-based core routers impede performance due to their blocking architectures and limited packet-per-second forwarding rates.

The network core design must meet some key requirements – increase routing performance to wire speed, have a non-blocking architecture to support it, have zero over-subscription and preserve proper over-subscription ratios from the edge into the core.

Layer 3 Switching – Layer 3 switching in the core should be properly balanced with Layer 3 switching at the edge. In the core, Layer 3 switching lets you optimise your network’s logical structure. At the edge, Layer 3 switching lets you efficiently organise end user communities. In most networks, the core is still the best place to route. Routing in the core with Layer 3 switches preserves legacy IP address structures and increases performance by an order of magnitude, thanks to hardware-based routing.

Quality of Service – The core is the easiest place to implement quality of service because the widest range of traffic groups is flowing through it. Application prioritisation is the primary reason you need quality of service in the core. For example, a delay-sensitive manufacturing application that runs through the core can be given a high priority and 10 percent guaranteed bandwidth at all times. This type of traffic segregation and prioritisation ensures your mission-critical applications perform optimally and with minimum delay.
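The arithmetic behind a percentage guarantee like the one above is straightforward: the guaranteed rate is simply that share of the link's capacity. A small illustration with hypothetical link speeds:

```python
def guaranteed_bandwidth_mbps(link_mbps, guarantee_pct):
    """Absolute bandwidth reserved by a percentage guarantee on a link."""
    return link_mbps * guarantee_pct / 100

# A 10% guarantee on a 10 Gb/s core link reserves 1 Gb/s
# for the delay-sensitive application at all times
print(guaranteed_bandwidth_mbps(10_000, 10))  # 1000.0
```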