What are the characteristic traits of a network switch? (Select all that apply)

Layer 2 switches are similar to bridges. They interconnect networks at layer 2, most commonly at the MAC sublayer, and operate as bridges, building tables for the transfer of frames among networks.

Historically, layer 2 switches emerged to alleviate the contention problem of shared-media LANs. As structured cabling and star-based connectivity to wiring centers were adopted, the ability to reuse existing cabling and network adapters allowed typical LANs, such as Ethernet and Token Ring, to remain in use while enabling the development of layer 2 switches. The original goal of these switches was to enable the use, where feasible, of a single LAN segment per attached end system, minimizing the contention delays that existed in the older shared segments. For example, with an Ethernet switch and a dedicated Ethernet segment per attached system, collisions are avoided and delay is minimized.

Considering the need for autonomous operation and high performance, layer 2 switches perform all operations that typical bridges do. However, due to their focus on performance for dedicated segments, they employ specialized hardware for frame forwarding, and some of them even employ cut-through switching techniques instead of the typical store-and-forward technique used in common bridges. Thus, their main difference from bridges is typically the technology used to implement frame forwarding, which is mostly hardware-based, in contrast to typical bridges, which generally are more programmable and accommodate a wider range of heterogeneous LANs.
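The latency advantage of cut-through over store-and-forward can be illustrated with back-of-the-envelope serialization-delay arithmetic. The sketch below is illustrative only; the frame size and link speed are chosen for the example, and real switches add processing delay on top of serialization delay.

```python
# Store-and-forward must receive the whole frame before forwarding, while
# cut-through can begin forwarding as soon as the destination MAC (preceded
# by the preamble/SFD) has been read.

def serialization_delay_us(num_bytes, link_mbps):
    """Time to clock `num_bytes` onto a link of the given speed, in microseconds."""
    return num_bytes * 8 / link_mbps  # bits / (bits per microsecond)

FRAME_BYTES = 1518          # maximum untagged Ethernet frame
HEADER_BYTES = 8 + 6        # preamble/SFD + destination MAC

store_and_forward = serialization_delay_us(FRAME_BYTES, 100.0)   # 100 Mb/s link
cut_through = serialization_delay_us(HEADER_BYTES, 100.0)

print(f"store-and-forward: {store_and_forward:.2f} us")  # 121.44 us
print(f"cut-through:       {cut_through:.2f} us")        # 1.12 us
```

The two-orders-of-magnitude gap in when forwarding can begin is why early switch vendors promoted cut-through designs for dedicated segments.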


URL: https://www.sciencedirect.com/science/article/pii/B9780123744944000062

MCSE 70-293: Planning, Implementing, and Maintaining a Routing Strategy

Martin Grasdal, ... Dr. Thomas W. Shinder (Technical Editor), in MCSE (Exam 70-293) Study Guide, 2003

Switches

Switches are like bridges, except that they have multiple ports with the same type of connection (bridges generally have only two ports), and they have been described as nothing more than fast bridges. Switches are used on heavily loaded networks to isolate data flow and improve network performance. In most cases, users on a lightly loaded network get little, if any, advantage from using a switch rather than a hub.

That’s not to say that a switch doesn’t have many benefits. Switches can be used to connect both hubs and individual devices. These approaches are known as segment switching and port switching, respectively.

Segment switching implies that each port on the switch functions as its own segment. This process tends to increase the available bandwidth, while decreasing the number of devices sharing each segment’s bandwidth, but at the same time maintaining the Layer 2 connectivity. Each shared hub and the devices that are connected to it make up their own media access domain, while all devices in both domains remain part of the same MAC broadcast domain. Figure 4.21 illustrates how a segment-switched LAN can be divided to improve performance.

Figure 4.21. Segment Switching

Port switching implies that each port on the switching hub is directly connected to an individual device. This makes the port and the device their own self-contained media access domain. All of the devices in the network still remain part of the same MAC broadcast domain. Figure 4.22 illustrates how the media access and MAC broadcast domains are configured in a port-switched LAN.

Figure 4.22. A Port-switched LAN

Layer 2 Switches

Layer 2 switches, operating at the Data Link layer, can be programmed to respond automatically to a wide range of circuit conditions. By monitoring control and data events, these switches automatically reroute circuits or switch to backup equipment as the need requires. They operate using physical network, or MAC, addresses. These switches are fast but not terribly smart: they look only at the frame header to find out where it’s headed.

Layer 3 Switches

Layer 3 switches, operating at the Network layer, are designed for disaster recovery service (or, more importantly, for disaster avoidance). These network backup units are usually designed specifically to provide high levels of automation, intelligence, and security. Layer 3 switches use routing protocols such as RIP or OSPF to calculate routes and build their own routing tables.

Layer 3 switches use network or IP addresses to identify locations on the network, identifying the network location as well as the physical device. These switches are smarter than Layer 2 switches. They incorporate routing functions to actively calculate the best way to get a packet to its destination. Unless their algorithms and processor support high speeds, though, these switches are slower.
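The route calculation described above comes down to a longest-prefix-match lookup over the routing table. Here is a minimal sketch of that lookup; the routes and egress port names are invented for the example, not taken from the text.

```python
# Minimal longest-prefix-match lookup, the core of a Layer 3 forwarding
# decision. Routes and port names are illustrative.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "ge-0/0/1",
    ipaddress.ip_network("10.1.0.0/16"): "ge-0/0/2",
    ipaddress.ip_network("0.0.0.0/0"): "ge-0/0/0",   # default route
}

def lookup(dst):
    """Return the egress port of the most specific matching route."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(lookup("10.1.2.3"))   # ge-0/0/2  (10.1.0.0/16 beats 10.0.0.0/8)
print(lookup("192.0.2.1"))  # ge-0/0/0  (default route)
```

In real hardware this lookup runs in TCAM or specialized tries, which is what lets a well-built Layer 3 switch keep up with Layer 2 forwarding speeds.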

Layer 4 Switches

Layer 4 switches, operating at the Transport layer, allow network managers to choose the best method of communicating for each switching application. Because Layer 4 coordinates communication between systems, these switches are able to identify which application protocols (HTTP, SMTP, FTP, and so forth) are included in the packets, and they use this information to hand off the packet to the appropriate higher layer software. This means that Layer 4 switches make their packet-forwarding decisions based not just on the MAC and IP addresses, but also on the application to which the packet belongs.

Because these devices allow you to set up priorities for your network traffic based on applications, you can assign a high priority for your vital in-house applications and use different forwarding rules for low-priority packets, such as generic HTTP-based traffic. Layer 4 switches can also provide security, because company protocols can be confined to only authorized switched ports or users. This feature can be reinforced using traffic filtering and forwarding features.
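A Layer 4 forwarding decision of the kind described above can be sketched as a port-to-application classification followed by a priority assignment. The port numbers for HTTP, SMTP, and FTP are the well-known ones; the in-house application, its port, and the priority values are hypothetical.

```python
# Sketch of a Layer 4 decision: infer the application from the TCP
# destination port, then apply a per-application forwarding priority.
# The in-house app name/port and priority table are illustrative.

APP_BY_PORT = {80: "HTTP", 25: "SMTP", 21: "FTP", 1521: "in-house-ERP"}
PRIORITY = {"in-house-ERP": "high", "HTTP": "low"}  # vital app vs generic web

def classify(dst_port):
    """Map a TCP destination port to (application, forwarding priority)."""
    app = APP_BY_PORT.get(dst_port, "unknown")
    return app, PRIORITY.get(app, "normal")

print(classify(1521))  # ('in-house-ERP', 'high')
print(classify(80))    # ('HTTP', 'low')
```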

All these devices can be used to segment your network, but segmentation does not create separate LANs. LANs exist at only the first two layers of the OSI reference model. There’s another way to segment your network into separate LANs: use a router.


URL: https://www.sciencedirect.com/science/article/pii/B9781931836937500087

Overview of IP Multicast

Vinod Joseph, Srinivas Mulugu, in Deploying Next Generation Multicast-enabled Applications, 2011

1.1.2.3 Layer 2 Multicast Addressing

The IEEE 802.3 specification makes provisions for the transmission of broadcast and multicast packets. As shown in Figure 1.8, Bit 0 of Octet 0 in an IEEE MAC address indicates whether the destination address is a broadcast/multicast address or a unicast address. If this bit is set, the MAC frame is destined either for an arbitrary group of hosts or for all hosts on the network (if the MAC destination address is the broadcast address, 0xFFFF.FFFF.FFFF). IP Multicasting at Layer 2 makes use of this ability to transmit IP Multicast packets to a group of hosts on a LAN segment.

IP Multicast frames all use MAC-layer addresses beginning with the 24-bit prefix 0x0100.5Exx.xxxx. Unfortunately, only half of these MAC addresses are available for use by IP Multicast, which leaves 23 bits of MAC address space for mapping Layer 3 IP Multicast addresses into Layer 2 MAC addresses. Since all Layer 3 IP Multicast addresses have the first 4 of their 32 bits set to 0x1110, 28 bits of meaningful IP Multicast address information remain. These 28 bits must map into only 23 bits of available MAC address space, as shown graphically in Figure 1.9.

Because all 28 bits of the Layer 3 IP Multicast address cannot be mapped into the available 23 bits of MAC address space, 5 bits of address information are lost in the mapping process. This results in a 32:1 address ambiguity when a Layer 3 IP Multicast address is mapped to a Layer 2 IEEE MAC address: each IEEE IP Multicast MAC address can represent 32 IP Multicast addresses, as shown in Figure 1.10.

This 32:1 address ambiguity has the potential to cause problems. For example, a host that wishes to receive multicast group 224.1.1.1 will program the hardware registers in its network interface card (NIC) to interrupt the CPU when a frame with a destination multicast MAC address of 0x0100.5E00.0101 is received. Unfortunately, this same multicast MAC address is also used by 31 other IP Multicast groups. If any of these 31 other groups are also active on the LAN, the host's CPU will receive an interrupt any time a frame is received for any of them. The CPU must then examine the IP portion of each received frame to determine whether it belongs to the desired group, such as 224.1.1.1. This can affect the host's available CPU power if the amount of "spurious" group traffic is high enough.
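The mapping rule can be expressed in a few lines: copy the low-order 23 bits of the group address into the fixed 0x0100.5E prefix. A small sketch follows; it demonstrates both the 224.1.1.1 example from the text and the 32:1 ambiguity, since groups differing only in the 5 discarded bits collide on the same MAC address.

```python
# Layer 3 -> Layer 2 multicast mapping: keep the low 23 bits of the group
# address, discard the high 5 bits of the 28-bit group field. 32 groups
# therefore share each multicast MAC address.
import ipaddress

def multicast_mac(group):
    """Map an IPv4 multicast group to its IEEE multicast MAC address."""
    low23 = int(ipaddress.ip_address(group)) & 0x7FFFFF  # keep low 23 bits
    octets = [0x01, 0x00, 0x5E,
              (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

print(multicast_mac("224.1.1.1"))    # 01:00:5e:01:01:01
# These groups differ only in the 5 discarded bits, so they collide:
print(multicast_mac("225.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("238.129.1.1"))  # 01:00:5e:01:01:01
```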

Figure 1.8.

Figure 1.9.

Figure 1.10.

IGMP Snooping is normally used by Layer 2 switches to constrain multicast traffic to only those ports that have attached hosts that have signaled their desire to join the multicast group by sending IGMP Membership Reports. However, it is important to note that most Layer 2 switches flood all multicast traffic that falls within the MAC address range 0x0100.5E00.00xx (which corresponds to Layer 3 addresses in the Link-Local block) to all ports on the switch, even if IGMP Snooping is enabled. The reason this Link-Local multicast traffic is always flooded is that IGMP Membership Reports are normally never sent for multicast traffic in the Link-Local block. For example, routers do not send IGMP Membership Reports for the ALL-OSPF-ROUTERS group (224.0.0.5) when OSPF is enabled. Therefore, if Layer 2 switches were to constrain (i.e., not flood) Link-Local packets in the 224.0.0.0/24 (0x0100.5E00.00xx) range to only those ports where IGMP Membership Reports were received, Link-Local protocols such as OSPF would break.

The impact of this Link-Local flooding, in combination with the 32:1 ambiguity that arises when Layer 3 multicast addresses are mapped to Layer 2 MAC addresses, is that several multicast group ranges besides 224.0.0.0/24 also map to the 0x0100.5E00.00xx MAC address range and hence will also be flooded by most Layer 2 switches. It is therefore recommended that multicast addresses that map to the 0x0100.5E00.00xx MAC address range not be used. The following list gives all multicast address ranges that should not be used if Layer 2 flooding is to be avoided; all of these ranges map into 0x0100.5E00.00xx:

224.0.0.0/24 and 224.128.0.0/24

225.0.0.0/24 and 225.128.0.0/24

226.0.0.0/24 and 226.128.0.0/24

227.0.0.0/24 and 227.128.0.0/24

228.0.0.0/24 and 228.128.0.0/24

229.0.0.0/24 and 229.128.0.0/24

230.0.0.0/24 and 230.128.0.0/24

231.0.0.0/24 and 231.128.0.0/24

232.0.0.0/24 and 232.128.0.0/24

233.0.0.0/24 and 233.128.0.0/24

234.0.0.0/24 and 234.128.0.0/24

235.0.0.0/24 and 235.128.0.0/24

236.0.0.0/24 and 236.128.0.0/24

237.0.0.0/24 and 237.128.0.0/24

238.0.0.0/24 and 238.128.0.0/24

239.0.0.0/24 and 239.128.0.0/24
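The list above can be derived programmatically from the mapping rule: a group maps into 0x0100.5E00.00xx exactly when bits 8 through 22 of its address are all zero, and because the top bit of the second octet is one of the 5 discarded bits, both X.0.0.0/24 and X.128.0.0/24 qualify for each first octet 224 through 239. A small sketch:

```python
# Derive the "do not use" ranges: a group floods if the upper 15 bits of
# its low-23-bit field are zero (destination MAC 0x0100.5E00.00xx).

def maps_to_link_local_mac(first, second, third):
    """True if the X.Y.Z.0/24 block maps into the 0x0100.5E00.00xx MAC range."""
    low23_upper = ((second & 0x7F) << 8) | third  # bits 8..22 of the group
    return low23_upper == 0

ranges = [f"{a}.{b}.0.0/24" for a in range(224, 240) for b in (0, 128)
          if maps_to_link_local_mac(a, b, 0)]
print(len(ranges))   # 32
print(ranges[:2])    # ['224.0.0.0/24', '224.128.0.0/24']
```

This reproduces exactly the 16 pairs listed above (32 /24 blocks in total).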


URL: https://www.sciencedirect.com/science/article/pii/B9780123849236000016

Data Center Networks

Caesar Wu, Rajkumar Buyya, in Cloud Data Centers and Cost Modeling, 2015

13.1.1.3 Bridge

In the above section, we mentioned that a layer 2 switch is a bridge. The function of a network bridge is to join two network segments, or to divide one network into two separate network segments or LANs. Usually the two networks use the same protocol, such as Ethernet, but a bridge is not limited to joining networks of the same protocol. Historically, a bridge had only two Ethernet ports, one connected to each network segment. However, because a layer 2 switch performs the same function as a bridge, many switch vendors provide bridges that have more than two ports. One of the most popular types of bridge is the Ethernet bridge. It can overcome some inherent obstacles of the Ethernet protocol by controlling data flow among different network segments. When a bridge is connected to an Ethernet network, it operates transparently to the attached network devices. Transparent bridging is the bridging method used to forward and manage data traffic efficiently. A transparent bridge has four operation modes:

Frame or packet filtering

Forwarding packets

Broadcasting and learning addresses

Loop resolution

The concept of a transparent bridge is to examine the destination Media Access Control (MAC) address of each incoming frame and compare it against the MAC address-forwarding table. If the destination address exists in the forwarding table, the bridge sends the packets to the destination device. If not, it broadcasts the data packet to all devices and listens for the destination device's response; in other words, the bridge begins to search for the destination device on the network. Once the destination device responds to the broadcast, the bridge adds the destination device's MAC address to its forwarding table and transmits packets to the destination device. The broadcast time is set for a certain interval; if it expires and no destination device has responded, the bridge's entry in the filtering database becomes invalid (see Figure 13.7).

Figure 13.7. Transparent bridging on Ethernet network.

The reason for broadcasting and listening for a MAC address is not only to learn about the new devices that are plugged into the network but also to upgrade the forwarding table due to existing devices being moved around from time to time (see Figure 13.8).

Figure 13.8. Transparent bridge updating forwarding table.

All devices that are plugged into the network have a unique MAC address. It is the fingerprint of any physical hardware device, which is why it is sometimes also called the hardware or physical address. Whether it is a network interface card (NIC), hosting server, printer, switch, hub, storage disk, or even the bridge itself, every device has this address. It is this address that a bridge traces among different LANs.

One thing to remember is that when the sending and receiving devices are within the same network segment, this forwarding activity does not happen, which reduces overall network congestion. In short, bridging occurs only between two network segments. However, if the destination device doesn't receive a quality signal over a longer distance, the bridge will retransmit the packets.

For a bridge to operate correctly and effectively, the network shouldn't have any loops. Therefore, a transparent bridge includes a loop resolution process. The principle of the loop resolution process is quite simple to understand: the process learns the network topology of all the network devices that have been bridged and calculates a spanning tree of the network. A spanning tree is a subset of the topology that links all network devices without any loops. The Spanning Tree Protocol (STP) is defined by the IEEE 802.1D standard. This protocol solves the problem of network loops adaptively and dynamically. We will discuss STP and routing in detail in the following section.
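As a toy illustration of what the loop resolution process computes, the sketch below derives a loop-free spanning tree from a bridged topology that contains a loop. The topology, bridge IDs, and tie-breaking are invented for the example; real STP elects the root by bridge priority plus MAC address and exchanges BPDUs, none of which is modeled here.

```python
# Toy spanning-tree computation over a bridged topology with a loop:
# pick the lowest-ID bridge as root, then keep only links that reach
# a not-yet-seen bridge (breadth-first), so no loop survives.
from collections import deque

links = {1: [2, 3], 2: [1, 3], 3: [1, 2]}  # triangle: contains a loop

def spanning_tree(links):
    root = min(links)                 # lowest bridge ID becomes the root
    tree, seen, queue = [], {root}, deque([root])
    while queue:
        bridge = queue.popleft()
        for neighbor in links[bridge]:
            if neighbor not in seen:  # this link closes no loop: keep it
                seen.add(neighbor)
                tree.append((bridge, neighbor))
                queue.append(neighbor)
    return tree

print(spanning_tree(links))  # [(1, 2), (1, 3)] -- the 2-3 link is blocked
```

The blocked link is not removed physically; STP simply keeps it out of the active topology, ready to be reactivated if an active link fails.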

A bridge can also talk to different types of physical networks or media, such as wireline and wireless networks such as 100Base-T and WiFi (see Figure 13.9).

Figure 13.9. Bridge between different types of network segments.

A bridge can not only join a number of network segments, but also split a large network into a few smaller networks, which can reduce the number of network devices competing for transmission privileges (see Figure 13.10).

Figure 13.10. Split a large company network into two network segments.

We often utilize bridges to separate internal groups in a large enterprise or government agency to improve network performance. For example, if salespeople are using iOS on their wireless iPads and R&D people adopt Linux systems for web hosting, a bridge can provide a partition to separate the traffic between these two internal groups.

In summary, a network bridge can't handle any network protocol higher than the Logical Link Control (LLC) sublayer. It is like an unmanaged switch that doesn't accept any IP address or network commands; you can't "ping" it. For the TCP/IP network model (refer to Section 13.2), a network bridge can only interact with the Address Resolution Protocol (ARP), Neighbor Discovery Protocol (NDP), and Open Shortest Path First (OSPF). Regardless of how many ports are available, a bridge can only provide one port for packet forwarding and another for packet distribution. From a network perspective, a bridge only has one network interface.

A bridge doesn’t support routing paths but transmits packets based on their destination MAC addresses. You can add as many network bridges as you like in a network, but you can't extend a network segment around a bridge. For example, in Figure 13.8, if bridges 1 and 2 are connected through their port 2 interfaces, port 1 of bridge 1 can't also be connected to port 1 of bridge 2; if it were, the forwarding tables of both bridges would not work properly. Because a port is kept by the MAC address-forwarding table, it can be considered a logical part of a bridge rather than a physical one. If extra ports are detected by a bridge, they will be self-configured in the forwarding table; they are not managed by anyone. If the operating system is Windows, the network bridge will be software-based, a virtual network interface that extends two or more different networks. Normally, the bridge stores incoming frames in a buffer and takes subsequent actions based on the MAC address-forwarding table. Therefore, the throughput of a bridge will be less than that of a repeater.

In general, a bridge will be more expensive than a repeater or a hub, but will be cheaper than a switch or router. Normally, the price of a wireless enterprise grade bridge will vary from approximately $500 to a few thousand dollars.


URL: https://www.sciencedirect.com/science/article/pii/B9780128014134000131

Understanding Advanced MPLS Layer 3 VPN Services

Vinod Joseph, Srinivas Mulugu, in Network Convergence, 2014

Network Components

In the context of RFC 2547bis, a VPN is a collection of policies, and these policies control connectivity among a set of sites. A customer site is connected to the service provider network by one or more ports, where the service provider associates each port with a VPN routing table. In RFC 2547bis terms, the VPN routing table is called a VPN routing and forwarding (VRF) table.

Figure 2.1 illustrates the fundamental building blocks of a BGP/MPLS VPN.

Figure 2.1.

Customer Edge (CE) Routers

A customer edge (CE) device provides customer access to the service provider network over a data link to one or more provider edge (PE) routers. While the CE device can be a host or a Layer 2 switch, typically the CE device is an IP router that establishes an adjacency with its directly connected PE routers. After the adjacency is established, the CE router advertises the site’s local VPN routes to the PE router and learns remote VPN routes from the PE router.

Provider Edge (PE) Routers

PE routers exchange routing information with CE routers using static routing, RIPv2, OSPF, or EBGP. While a PE router maintains VPN routing information, it is only required to maintain VPN routes for those VPNs to which it is directly attached. This design enhances the scalability of the RFC 2547bis model because it eliminates the need for PE routers to maintain all of the service provider's VPN routes.

Each PE router maintains a VRF for each of its directly connected sites. Each customer connection (such as Frame Relay PVC, ATM PVC, and VLAN) is mapped to a specific VRF. Thus, it is a port on the PE router and not a site that is associated with a VRF. Note that multiple ports on a PE router can be associated with a single VRF. It is the ability of PE routers to maintain multiple forwarding tables that supports the per-VPN segregation of routing information.
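The per-VRF segregation described above can be sketched as two lookups: ingress port to VRF, then prefix resolution within that VRF only. The port names, VRF names, and routes below are illustrative; the point is that the same prefix can exist in two VRFs without conflict.

```python
# Sketch of per-VRF route segregation on a PE router: each customer-facing
# port maps to a VRF, and lookups for that port consult only that VRF's
# table. All names and routes are invented for the example.

port_to_vrf = {"ge-0/0/1": "VPN-A", "ge-0/0/2": "VPN-B", "ge-0/0/3": "VPN-A"}

vrf_tables = {
    "VPN-A": {"10.1.0.0/16": "PE-2"},
    "VPN-B": {"10.1.0.0/16": "PE-3"},  # same prefix, different VPN: no clash
}

def next_hop(ingress_port, prefix):
    """Resolve a prefix in the VRF bound to the ingress port."""
    vrf = port_to_vrf[ingress_port]
    return vrf_tables[vrf].get(prefix)

print(next_hop("ge-0/0/1", "10.1.0.0/16"))  # PE-2
print(next_hop("ge-0/0/2", "10.1.0.0/16"))  # PE-3
```

This is why RFC 2547bis VPNs allow customers to use overlapping (e.g., RFC 1918) address space: routes are kept apart per VRF rather than merged into one global table.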

After learning local VPN routes from CE routers, a PE router exchanges VPN routing information with other PE routers using IBGP. PE routers can maintain IBGP sessions to route reflectors as an alternative to a full mesh of IBGP sessions. Deploying multiple route reflectors enhances the scalability of the RFC 2547bis model because it eliminates the need for any single network component to maintain all VPN routes.

Finally, when using MPLS to forward VPN data traffic across the provider’s backbone, the ingress PE router functions as the ingress LSR and the egress PE router functions as the egress LSR.

Provider (P) Routers

A provider (P) router is any router in the provider's network that does not attach to CE devices. P routers function as MPLS transit LSRs when forwarding VPN data traffic between PE routers. Since traffic is forwarded across the MPLS backbone using a two-layer label stack, P routers are only required to maintain routes to the provider’s PE routers; they are not required to maintain specific VPN routing information for each customer site.


URL: https://www.sciencedirect.com/science/article/pii/B9780123978776000023

Network Virtualization

Gary Lee, in Cloud Networking, 2014

MAC address learning

Because most tunneling endpoints in the network appear as layer 2 ports to the host virtual machines, media access control (MAC) address learning must also be supported through these tunnels. Before we get into the details of these tunneling protocols, let’s discuss how traditional layer 2 MAC address learning is accomplished.

When a frame comes into a layer 2 switch, it contains both a destination MAC (DMAC) address and a source MAC (SMAC) address and, in most cases, also includes a VLAN tag. The switch first compares the SMAC/VLAN pair against information in its local MAC address table. If no match is found, it adds this information to the MAC table along with the switch port on which the frame arrived. In other words, the switch has just learned the direction (port number) of a given MAC address based on the received SMAC. Now any frames received on other ports with this MAC address and VLAN will be forwarded to this switch port. This process is called MAC address learning.

But what happens if a frame arrives and has a DMAC/VLAN pair that is not currently in the MAC address table (called an unknown unicast address)? In this case, the frame is flooded (broadcast) to all egress ports except the port it arrived on. In this way, the frame should eventually arrive at its destination through the network. Although flooding ties up network bandwidth, it is considered a rare enough event that it will not impact overall network performance. The assumption is that the device with the unknown destination address will eventually receive the frame and then send a packet back through the network from which its source address can be learned and added to the various switch address tables throughout the network. This can happen fairly rapidly as many network transactions require some sort of response and protocols such as transmission control protocol (TCP) generate acknowledgment packets. These responses and acknowledgments can be sent back to the originator by using the SMAC in the received flooded frames.
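The learn-then-forward-or-flood behavior described in the last two paragraphs can be condensed into a minimal model. This is a sketch of the classic algorithm only; the class, port numbers, and MAC strings are invented, and real switches also age out table entries.

```python
# Minimal model of layer 2 MAC learning: the table is keyed on (MAC, VLAN);
# unknown unicast destinations are flooded to every port but the ingress one.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # (mac, vlan) -> port

    def receive(self, in_port, smac, dmac, vlan):
        """Learn the source, then forward or flood based on the destination."""
        self.mac_table[(smac, vlan)] = in_port  # learn direction of the SMAC
        out = self.mac_table.get((dmac, vlan))
        if out is None:                         # unknown unicast: flood
            return [p for p in self.ports if p != in_port]
        return [out]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa", "bb", vlan=10))  # [2, 3, 4] -- 'bb' unknown, flood
print(sw.receive(2, "bb", "aa", vlan=10))  # [1]       -- 'aa' was learned
```

The second call shows the mechanism the text describes: the reply frame's SMAC is learned, so subsequent traffic to it is forwarded out one port instead of flooded.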


URL: https://www.sciencedirect.com/science/article/pii/B9780128007280000072

Configuring Cisco Switches

Dale Liu, ... Luigi DiGrande, in Cisco CCNA/CCENT Exam 640-802, 640-822, 640-816 Preparation Kit, 2009

Summary of Exam Objectives

This chapter began with an overview of the differences between hubs and switches. As we discussed, a hub works at Layer 1 and a switch works at Layer 2. A switch makes its decisions based on the MAC addresses available in the MAC address table. Three switch modes are available: cut-through, fragment-free, and store-and-forward.

We also discussed the MAC address table and how to clear it, and we differentiated between a Layer 2 switch and a Layer 3 switch. In addition, we discussed the LED indicators on the Cisco Catalyst 2950 switch, and you learned that if the System LED is green, the switch is functioning normally, but if the System LED is amber, there is a possible failure.

This chapter also covered the steps for configuring a switch. We discussed the options of configuring a switch with the Windows-based HyperTerminal, with the Minicom Linux-based terminal emulation program, and with Cisco Network Assistant, a freeware tool that you can use to configure your switch via a GUI.

In addition to learning the differences between User Exec mode and Privileged mode, you also learned about port-based security, which is a Layer 2 feature designed to protect your switch against malicious users by assigning one MAC address per switchport. If the switch detects a violation, the switch can shut down, protect, or restrict the switchport.
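The port-security behavior summarized above, one allowed MAC per switchport with a shutdown/protect/restrict violation action, could be sketched as follows. This is a toy model, not Cisco IOS's implementation; only the action names come from the text.

```python
# Toy model of switchport security: the first source MAC seen is learned
# as the single allowed address; later mismatches trigger the configured
# violation action (shutdown, protect, or restrict).

class SecurePort:
    def __init__(self, action="shutdown"):
        self.allowed_mac = None   # first MAC seen is learned
        self.action = action
        self.shut_down = False

    def frame_in(self, smac):
        if self.shut_down:
            return "dropped (port err-disabled)"
        if self.allowed_mac is None:
            self.allowed_mac = smac
            return "learned"
        if smac == self.allowed_mac:
            return "forwarded"
        if self.action == "shutdown":     # disable the whole port
            self.shut_down = True
            return "violation: port shut down"
        # "protect" silently drops; "restrict" drops and counts/logs
        return f"violation: frame dropped ({self.action})"

port = SecurePort(action="shutdown")
print(port.frame_in("aa:aa"))  # learned
print(port.frame_in("bb:bb"))  # violation: port shut down
print(port.frame_in("aa:aa"))  # dropped (port err-disabled)
```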

A bit later in the chapter you learned that by running the HTTP services on your switch, you can configure and manage your switch via your favorite Web browser. We also discussed how to upgrade the firmware of your switch. You now know that you can use the command copy tftp flash to copy the new IOS from your TFTP server to your switch's Flash memory.

We rounded out the chapter with a discussion of how to back up and restore the configuration of your switch, when to use the show and clear commands, how to solve boot problems, and how to configure your switch for a manual or automatic boot.


URL: https://www.sciencedirect.com/science/article/pii/B9781597493062000154

Industrial Network Design and Architecture

Eric D. Knapp, Joel Thomas Langill, in Industrial Network Security (Second Edition), 2015

Latency and jitter

Latency is the amount of time it takes for a packet to traverse a network from its source to destination host. This number is typically represented as a “round-trip” time that includes the initial packet transfer plus the associated acknowledgment or confirmation from the destination once the packet has been received.

Networks consist of a hierarchy of switches, routers, and firewalls interconnected both “horizontally” and “vertically,” making it necessary for a packet to “hop” between appliances as it traverses from source to destination (see Figures 5.1 and 5.2). Each network hop adds latency, and the deeper into a packet a device reads to make its decision, the more latency is accrued at each hop. A Layer 2 switch will add less latency than a Layer 3 router, which will add less latency than an application layer firewall. This is a good rule of thumb, but is not always accurate. The adage “you get what you pay for” is true in many cases, and network device performance is one of them. A very complex and sophisticated application layer device, if built with enough CPU and NPU horsepower or custom-designed high-performance ASICs, can outperform a poorly designed software-based network switch running on underpowered hardware.

Jitter, on the other hand, is the variability in latency over time as large amounts of data are transmitted across the network. A network introduces zero jitter if the time required to transfer data remains consistent from packet to packet or session to session. Jitter can often be more disruptive to real-time communications than latency alone. This is because, if there is a tolerable but consistent delay, the traffic may be buffered in device memory and delivered accurately and with accurate timing, albeit somewhat delayed. This translates into deterministic performance, meaning that the output is consistent for a given input, a desirable feature in real-time ICS architectures. Latency variation means that each packet suffers a different degree of delay. If this variation is severe enough, timing will be lost, an unacceptable condition when transporting data from precision sensors to controllers within a precisely tuned automation system.
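The latency/jitter distinction can be made concrete with a quick computation over per-packet latency samples. The sample values below are invented for illustration, and "jitter" is taken here as mean absolute packet-to-packet delay variation, one of several common definitions.

```python
# Two paths with the same average latency but very different jitter
# (latency samples in milliseconds are invented for the example).

def mean(xs):
    return sum(xs) / len(xs)

def jitter(latencies_ms):
    """Mean absolute difference between consecutive latency samples."""
    deltas = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return mean(deltas)

steady = [10.0, 10.0, 10.0, 10.0]   # consistent delay: buffering can hide it
bursty = [2.0, 18.0, 2.0, 18.0]     # same mean latency, severe jitter

print(mean(steady), jitter(steady))  # 10.0 0.0
print(mean(bursty), jitter(bursty))  # 10.0 16.0
```

Both paths average 10 ms, but only the steady one is deterministic; a real-time control loop would tolerate the first and break on the second.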


URL: https://www.sciencedirect.com/science/article/pii/B9780124201149000058

SDN Futures

Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

15.2 SD-WAN

SDN for the WAN, or SD-WAN as it is commonly known, is one of the most promising future areas for the application of the SDN principles presented throughout this book. In Section 14.9.5 we discussed several of the startups that target this potent strategy of using an overlay of virtual connections, often via some tunneling mechanism across a set of WAN connections, including MPLS, Internet, and other WAN transport links [5]. The overlay is achieved by placing specialized edge devices at branch locations. These Customer Premises Equipment (CPE) devices are managed by a centralized controller. The key function of these edge devices is the intelligent mapping of different user traffic flows over the most appropriate available tunnel. To a large degree, this is focused on cost savings achieved by placing only the traffic that is highly sensitive to loss, delay, or jitter on the more expensive links offering those guarantees. Normal data traffic such as email and web browsing may be mapped to best-effort, less expensive broadband Internet connections. Some classes of traffic may be routed over less expensive wireless backup links if that traffic can sustain periodic downtime or lower bandwidth.

It is easy to envision such an SD-WAN as a natural extension of the SDN-via-Overlays introduced in Section 4.6.3 and discussed at length in Chapters 6 and 8. SD-WAN is indeed related to our earlier discussions on SDN-via-Overlays in that in the data center the virtual switches in the overlay solution decide over which tunnel to inject traffic over the data center network. The CPE gateways in SD-WAN solutions perform a similar function of intelligently mapping application traffic to a particular tunnel. The criteria by which these mapping decisions are made are much more complex in SD-WAN than in the data center, however. For instance, in the data center overlay solution, there is typically just one tunnel between two hosts, whereas a significant feature of the SD-WAN solution is intelligent selection between tunnel alternatives to optimize cost and performance.
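The CPE's intelligent mapping of flows to tunnels can be sketched as a constraint-then-cost selection: traffic classes that need loss/delay guarantees are restricted to guaranteed links, and the cheapest acceptable tunnel wins. The tunnel characteristics, class names, and costs below are invented for illustration; real SD-WAN products use proprietary, often dynamically measured, selection criteria.

```python
# Sketch of an SD-WAN CPE mapping decision: steer each application class to
# the cheapest tunnel that satisfies its requirements. All values invented.

tunnels = [
    {"name": "mpls",      "cost": 10, "guaranteed": True},
    {"name": "broadband", "cost": 2,  "guaranteed": False},
    {"name": "lte",       "cost": 5,  "guaranteed": False},
]

def pick_tunnel(traffic_class):
    """QoS-sensitive traffic requires guarantees; bulk takes the cheapest link."""
    needs_guarantee = traffic_class in ("voip", "video-conf")
    candidates = [t for t in tunnels if t["guaranteed"] or not needs_guarantee]
    return min(candidates, key=lambda t: t["cost"])["name"]

print(pick_tunnel("voip"))  # mpls      -- needs guarantees, pays the premium
print(pick_tunnel("web"))   # broadband -- best-effort is good enough
```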

We saw earlier that this overlay approach can be successfully used to address data center problems of large layer 2 domains where the size of the MAC address tables can exceed the capacity of the switches. This technology was also used to limit the propagation of broadcast traffic that is an inherent part of the layer 2 networking so prevalent in the data center. While such issues might not be relevant in binding together a geographically dispersed enterprise over the WAN, the application of overlays in SD-WAN is more motivated by cost savings. While the variations in cost and QoS characteristics of different transport links within the data center are negligible, this is not the case in the WAN. The cost of an MPLS link offering QoS guarantees is significantly higher than a best-effort Internet connection. The provisioning time for such an MPLS connection is far greater than for the Internet connection. The difference is even greater with other wireless WAN link alternatives such as LTE or satellite [6]. Another major difference between the data center overlay paradigm and SD-WAN lies in the fact that SD-WAN CPE devices exercise little control over the actual paths used by the virtual link. Whereas in theory an OpenFlow controller could determine the exact hop-by-hop routing of a virtual link in a data center, within SD-WAN the edge devices view the tunnels they have at their disposal as coarse entities with certain cost, loss, speed, and QoS characteristics. They may choose to map a particular application flow over a given link according to those characteristics, but they are unlikely to be able to dynamically change the characteristics of any one of those links on the fly, including the actual path over which that tunnel, for example, is routed.

The interest in applying SDN to the WAN in some ways is an extension of earlier efforts at WAN optimization, such as data compression, web caching, and Dynamic Multipoint Virtual Private Networks (DMVPN) [7]. Indeed, some of the companies offering SD-WAN products today have their roots in WAN optimization. In addition to generating local acknowledgments and caching data, WAN optimization is also related to shaping and steering traffic to where it is best suited. For example, QoS-sensitive traffic like VoIP could be directed to MPLS paths that could offer latency and jitter guarantees while other traffic could be shunted to less expensive Internet links. There were certainly pre-SDN attempts at moving control away from the data plane, such as the use of Route Reflectors for computing the best paths for MPLS links [8]. This is fully generalized within SD-WAN via the use of a centralized controller in the SDN paradigm. Of the five fundamental traits of SDN that have been a recurring theme throughout this work, SD-WAN solutions exhibit centralized control, plane separation, and network automation and virtualization. To date, there is little evidence in SD-WAN solutions of a simplified device or openness.

Some facets of the SD-WAN solutions converge on standards-based solutions [9]. In particular, the encryption tunnels and key distribution converge on SSL VPNs and IPSec. Conversely, compression and optimization methods, along with the path selection algorithms, tend to be proprietary; this is where the unique value-add of the vendors is concentrated.

In general, SD-WAN refers to any WAN managed by software control. The Open Networking User Group (ONUG) defines two broad types of SD-WANs [10]:

Intradomain SD-WANs, where a single administrative domain uses SDN-controlled switches to accomplish various network management tasks, such as provisioning of secure tunnels between multiple geographically distributed portions of a network that are under the control of a single administrative domain.

Interdomain SD-WANs, where multiple independently operated domains connect to one another via a shared layer-2 switch to accomplish various network management tasks, including inbound traffic engineering and denial-of-service (DoS) attack prevention.

As with any purported technological panacea, there are many challenges confronted in the actual implementation of the SD-WAN paradigm. We discuss these in the next section.

Discussion Question

Which of the five basic SDN traits defined in Section 4.1 do SD-WAN solutions generally exhibit? Give an example of each.

The devil is in the details

Due to the vibrant demand for commercial SD-WAN solutions, the marketplace abounds with contending claims and technologies. A very useful guide for evaluating and comparing SD-WAN solutions is found in [11]. Another is [12], where the author poses 13 questions to ask about how a given SD-WAN solution will address a specific challenge. These questions arose during the Networking Field Day Event 9 [13]. Two vendors published their responses in [14, 15], respectively. We leave it to the interested reader to review all 13 questions and the various responses via the provided references. Here we look at three of the questions, one related to the concept of host routing, another dealing with asymmetric routing, and a third about the issue of double encryption.

Host routing question:

How does the SD-WAN solution get traffic into the system? As in, routers attract traffic by being default gateways or being in the best path for a remote destination. SD-WAN tunnel endpoints need to attract traffic somehow, just like a WAN optimizer would. How is it done? WCCP? PBR? Static routing? (All 3 of those are mostly awful if you think about them for about 2.5 seconds.) Or do the SD-WAN endpoints interact with the underlay routing system with BGP or OSPF and advertise low cost routes across tunnels? Or are they placed inline? Or is some other method used? [12]

The answer depends on whether the CPE equipment is attempting to be standards-based or proprietary. If proprietary, some of the proposed solutions described application traffic sniffing whereby the CPE box analyzes the application traffic to determine the nature of the application and thus the appropriate transport link over which it should be mapped. Such a proprietary approach can be stymied if the application traffic is already encrypted by the application. Since most SD-WAN solutions suggest that the tunnels forming the virtual links should themselves be encrypted, this highlights the issue cited in question 8 in [12], which is how to deal with unnecessary double encryption. An alternative to application traffic sniffing in the CPE gateway is to use host routing. The key concept here is that the host itself understands that one gateway exiting the local site is preferred over another for that traffic type. A layer 2 alternative to this is for hosts to belong to different VLANs depending on the traffic type. A simple example of this is a data VLAN and a voice VLAN. The voice VLAN’s traffic is mapped over the WAN virtual link that offers QoS guarantees and the data VLAN is mapped over the lower cost link that does not offer such guarantees.
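The VLAN-based alternative described above amounts to a simple policy table mapping VLAN membership to a WAN virtual link. A minimal sketch follows; the VLAN IDs and tunnel names are invented for illustration:

```python
# Hypothetical VLAN-to-virtual-link policy table; IDs and names are invented.
VLAN_POLICY = {
    100: "mpls-tunnel",   # voice VLAN -> QoS-guaranteed link
    200: "inet-tunnel",   # data VLAN  -> lower-cost best-effort link
}

def egress_tunnel(vlan_id, default="inet-tunnel"):
    """Map an ingress frame's VLAN tag to a WAN virtual link."""
    return VLAN_POLICY.get(vlan_id, default)

print(egress_tunnel(100))  # voice VLAN -> "mpls-tunnel"
print(egress_tunnel(300))  # unknown VLAN falls back to the default link
```

The attraction of this approach is that it sidesteps payload inspection entirely: the classification is done at layer 2, so it works even when the application traffic is already encrypted.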

Asymmetric routing question:

Is path symmetry important when traversing an SD-WAN infrastructure? Why or why not? Depending on how the controller handles flow state and reflects it to various endpoints in the tunnel overlay fabric, this could be an interesting answer. [12]

When routing decisions between two endpoints were based on simple criteria such as the least-cost path, traffic flowing between those two endpoints would generally follow the same path regardless of direction. With more complex routing criteria, the possibility of asymmetric routing becomes a reality. Asymmetric routing means that packets flowing from application A to application B may take a different path than packets flowing from B to A. This may or may not be a problem; it certainly is a documented problem for certain older classes of firewalls. Thus, knowing whether a given SD-WAN solution may engender asymmetric routing is an important consideration.
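A toy example shows how asymmetry arises when each end independently selects a path from locally measured conditions. The loss figures below are invented purely to illustrate the mechanism:

```python
# Two sites, A and B, each independently choose among the same two tunnels
# using loss measured at their own end (illustrative numbers).
def pick_path(tunnels, local_loss):
    """Choose the tunnel with the lowest loss as seen from this end."""
    return min(tunnels, key=lambda t: local_loss[t])

tunnels = ["mpls", "inet"]
loss_seen_by_A = {"mpls": 0.02, "inet": 0.01}  # A -> B direction
loss_seen_by_B = {"mpls": 0.01, "inet": 0.03}  # B -> A direction

forward = pick_path(tunnels, loss_seen_by_A)   # "inet"
reverse = pick_path(tunnels, loss_seen_by_B)   # "mpls"
print(forward, reverse, forward != reverse)    # paths differ: asymmetric
```

Because each endpoint optimizes on its own measurements, nothing forces the two directions of a flow onto the same tunnel, which is exactly the situation that trips up stateful middleboxes expecting to see both directions.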

Double encryption question:

Double-encryption is often a bad thing for application performance. Can certain traffic flows be exempted from encryption? As in, encrypted application traffic is tunneled across the overlay fabric, but not encrypted a second time by the tunnel? [12]

The possibility of double encryption is both a design consideration and a performance consideration. As a design matter, if the SD-WAN device purports to determine application flow routing based on the aforementioned sniffing, this will not work when the application has already encrypted the traffic. Possible workarounds here include using unencrypted DNS lookups to guess the nature of the application and thus the appropriate mapping of its traffic to a given WAN virtual link [16]. The performance consideration is simply the inefficiency of encrypting and decrypting the traffic a second time. Some SD-WAN solutions could offer unencrypted tunnels for WAN virtual links specifically for already-encrypted traffic. Another possible alternative would be to terminate the application encryption at the originating CPE gateway, perform the sniffing, and then reencrypt.
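The exemption logic can be sketched minimally as follows, assuming a crude port-based guess at whether the payload is already encrypted. Real solutions would use deeper classification; the port list here is illustrative only:

```python
# Illustrative sketch: exempt flows presumed to be TLS-encrypted (guessed
# by well-known destination port) from a second layer of tunnel encryption.
TLS_PORTS = {443, 853, 993, 995}  # HTTPS, DNS-over-TLS, IMAPS, POP3S

def tunnel_encryption_needed(dst_port):
    """Return False when the payload is presumed already encrypted."""
    return dst_port not in TLS_PORTS

print(tunnel_encryption_needed(443))  # HTTPS -> False (skip second encryption)
print(tunnel_encryption_needed(80))   # cleartext HTTP -> True
```

A port-based guess is obviously fallible (encrypted traffic can run on any port), which is why some solutions instead fall back on DNS-based inference or decrypt-and-reencrypt at the gateway, as noted above.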

Cisco’s Intelligent WAN (IWAN) is a hybrid WAN offering and another example of SD-WAN. Cisco’s venerable Integrated Services Router (ISR) plays the role of the CPE gateway in this solution. You can automate IWAN feature configuration using the IWAN application that runs in Cisco’s Application Policy Infrastructure Controller—Enterprise Module (APIC-EM). IWAN is built upon the DMVPN technology mentioned previously. DMVPN provides dynamic secure overlay networks using the established Multipoint GRE (mGRE) and Next-Hop Resolution Protocol (NHRP) technologies, all of which predate SD-WAN. DMVPN builds a network of tunnels across the Internet to form the SD-WAN virtual links.

Cisco IWAN builds an SD-WAN solution using largely existing Cisco technology as well as technology obtained via its acquisition of Meraki. Many of these are Cisco-proprietary protocols, but they are generally less opaque than the closed systems of the SD-WAN startups. The Cisco IWAN design guide [17] gives detailed instructions for configuring the Hot Standby Router Protocol (HSRP) and Enhanced Interior Gateway Routing Protocol (EIGRP) in order to get the right traffic routed over the right WAN link. The ability to form VPN tunnels over two interfaces and send traffic across them provides administrators with a way to load-balance traffic across multiple links. Policy-based Routing (PBR) allows IT staff to configure paths for different application flows based on their source and destination IP addresses and ports [18]. IWAN also allows configuration of performance criteria for different classes of traffic. Path decisions are then made on a per-flow basis as a function of which of the available VPN tunnels meet these criteria, which is in turn determined by automatically gathered QoS metrics.

Cisco’s IWAN differs from some of the startup alternatives in a few fundamental ways. The mapping of traffic is not based on an application sniffing approach. Instead, APIC-EM tells the border router which egress path to take based on the conditions of the path as well as routing policies, and then intelligently load-balances the application traffic across the available WAN virtual links. According to [17], the variety of the WAN links is limited to MPLS and the Internet, whereas some of the alternatives offer wireless WAN connections as well. The number of WAN links for a given site is limited to a primary and secondary link, whereas the choice between more than two WAN links is feasible with some of the other solutions. It is important to understand that the number of tunnel endpoints may be far greater than the number of offered WAN links, depending on the particular SD-WAN solution. Each tunnel endpoint may map to a different customer premise if the design is based on a mesh topology, or each tunnel could terminate in a centralized hub if the design is based on a hub and spoke topology. In addition, tunnels may offer different levels of encryption and potentially different QoS guarantees.

Many of the SD-WAN solutions purport to offer dynamic QoS assessment of links and, based on those assessments, to dynamically shift QoS traffic from one path to another. This assessment can be accomplished by periodically injecting performance probes into the tunnels and using them to empirically measure QoS levels. The assessments can then be used to adaptively direct traffic via the most appropriate tunnel. This is certainly feasible, but the point is obscured when related marketing statements imply that this allows provision of QoS guarantees over the Internet. While it may be possible to detect current QoS characteristics over a given Internet connection and potentially react to them, this falls far short of the marketing hype that implies the solution can actually enforce QoS levels.
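The probe-based assessment can be sketched as follows. The RTT samples and thresholds below are invented, and actual products use proprietary metrics and probe formats; this only illustrates the measure-then-select loop:

```python
import statistics

def qos_from_probes(rtt_samples_ms):
    """Summarize probe round-trip times into latency and jitter estimates."""
    latency = statistics.mean(rtt_samples_ms)
    jitter = statistics.pstdev(rtt_samples_ms)  # spread of RTTs as a jitter proxy
    return latency, jitter

def best_tunnel(probe_results, max_latency_ms, max_jitter_ms):
    """Pick a tunnel whose measured QoS meets the traffic class requirements."""
    candidates = []
    for name, samples in probe_results.items():
        latency, jitter = qos_from_probes(samples)
        if latency <= max_latency_ms and jitter <= max_jitter_ms:
            candidates.append((latency, name))
    return min(candidates)[1] if candidates else None

# Illustrative probe history per tunnel (milliseconds).
probes = {"mpls": [20, 21, 19, 20], "inet": [35, 80, 30, 120]}
print(best_tunnel(probes, max_latency_ms=50, max_jitter_ms=5))  # -> mpls
```

Note what the sketch can and cannot do: it detects which tunnel currently meets the thresholds and reacts, but nothing in it (or in any such scheme over the best-effort Internet) can enforce that the chosen tunnel keeps meeting them.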

While this general concept, called dynamic path selection, is touted by SD-WAN vendors as novel in SD-WAN, we remind the reader of the example in Section 9.6.1 of dynamically shunting traffic over Optical Transport Networks (OTN), which is based on similar principles. Hence, SDN has contemplated this before the advent of SD-WAN.

An interesting adjunct to SD-WAN is the possible application of the Locator/ID Separation Protocol (LISP) [19]. LISP uses Endpoint Identifiers (EIDs), which correspond to hosts, and Routing Locators (RLOCs), which correspond to routers. An EID indicates who the host is and denotes a static mapping between a device and its owner. The RLOC, on the other hand, indicates where that device is at a given time. If one considers the mobility of users coming and going from branch offices of an enterprise, it is clear that integration of this kind of technology is very relevant to the problems purported to be addressed by SD-WAN solutions. Indeed, Cisco’s APIC-EM controller integrates LISP with IWAN.
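The EID/RLOC split can be illustrated with a toy mapping table in the spirit of LISP: the identifier stays fixed while only the locator is updated as the host moves between sites. All addresses below are invented examples from documentation ranges:

```python
# Toy EID-to-RLOC map: the EID identifies the host; the RLOC identifies the
# router behind which it currently sits. Addresses are illustrative only.
eid_to_rloc = {"10.1.0.5": "203.0.113.1"}   # host currently at branch A

def handoff(eid, new_rloc):
    """Host moved to another site: only its locator changes, not its identity."""
    eid_to_rloc[eid] = new_rloc

print(eid_to_rloc["10.1.0.5"])        # reachable via branch A's router
handoff("10.1.0.5", "198.51.100.7")   # host roams to branch B
print(eid_to_rloc["10.1.0.5"])        # same EID, new locator
```

This separation is why LISP is attractive for the branch-office mobility scenario: traffic steering policies can be written against stable EIDs while the mapping system absorbs the churn of users moving between sites.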

Discussion Question

Cisco’s IWAN SD-WAN solution offers the alternatives of MPLS VPN and Internet-based WAN links. One possible configuration shown in [17] uses no MPLS links, only two Internet links, each from a different provider. Beyond additional bandwidth, what is one advantage proffered by such a configuration?


Understanding Networks and Networked Video

Anthony C. Caputo, in Digital Video Surveillance and Security (Second Edition), 2014

Video Networking Design

Good design practices include avoiding unmanaged Layer 2 switching in any video surveillance networking environment, especially “stupid switches.” These “wannabe” hubs have no intelligence, no functionality beyond passing data (no Spanning Tree Protocol or IGMP support), and no means of management. Layer 3 switches are generally more costly, especially if Layer 2 switching domains are all that is needed initially, but once the requirements change and more functionality is needed (e.g., digital video streaming), expandability and scalability become far more painful.

What are characteristic traits of a network switch?

Features of switches: a switch uses a packet switching technique to receive and forward data packets from the source to the destination device. It supports unicast (one-to-one), multicast (one-to-many), and broadcast (one-to-all) communications.

Which characteristic of a switch distinguishes from a hub quizlet?

Which characteristics of a switch distinguish it from a hub? Switches forward traffic based on the MAC address and operate at layer 2 of the OSI model, whereas hubs operate at layer 1.
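The MAC-based forwarding that distinguishes a switch from a hub can be sketched as a learning table: the switch records which port each source MAC was seen on, forwards to a known port when it can, and floods (hub-like) when it cannot. Port numbers and MAC labels below are illustrative:

```python
# Minimal sketch of layer 2 learning and forwarding by MAC address.
mac_table = {}  # MAC address -> port it was last seen on

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port              # learn the sender's location
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]           # forward out the known port only
    return [p for p in all_ports if p != in_port]  # unknown: flood like a hub

ports = [1, 2, 3, 4]
print(handle_frame("aa", "bb", 1, ports))  # "bb" unknown -> flood [2, 3, 4]
print(handle_frame("bb", "aa", 3, ports))  # "aa" learned -> [1] only
```

A hub, by contrast, repeats every frame out every other port unconditionally, which is why it operates at layer 1 and a switch at layer 2.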

What are the characteristic features of the 100basetx Ethernet standard?

The main characteristics of 100Base-TX Fast Ethernet are listed below. The operating speed of Fast Ethernet is 100 Mbps. Like other Ethernet standards, 100Base-TX uses baseband signals to transfer data. Fast Ethernet supports a maximum distance of 100 meters between the network switch and the client computer.

Which of the following protocols provide protection against switching loops select 2 answers?

The answer is A: Spanning Tree Protocol (STP) prevents switching loop problems and should be enabled.