One neat feature I've been able to experiment with lately is port-based 802.1X authentication. Essentially, this allows devices directly attached to your switch ports to authenticate against a RADIUS server. In a high-security environment, or one where devices are frequently swapped, you may decide it is best to allow network access only to users who are already configured in RADIUS.
If a switch port has been configured to require 802.1X authentication, the user will be forced to use domain credentials in order to gain network access. There is one caveat surrounding the use of 802.1X: the PC connecting to the switch port must support EAPOL (Extensible Authentication Protocol over LAN) in order to communicate with the switch. A switch port can be set to one of three distinct modes. First is "force-unauthorized", which will not let any traffic pass, even if the device could successfully authenticate with the RADIUS server. Second is "force-authorized", which passes traffic even if the device never authenticates with RADIUS; this is the default state when 802.1X is enabled globally, so every port starts out with full network access. Third is the only truly viable mode, "auto", which passes traffic only after the PC/device successfully authenticates against the RADIUS server. The first thing you should do is grab all your access ports with an interface range command and configure them for auto mode.
In order for a Windows PC to use EAPOL, the "Wireless Zero Configuration" (wireless) and "Wired AutoConfig" (wired) services must be started, and preferably set to start automatically. These services are absolutely required to perform any authentication from the PC through the switch to the RADIUS server.
The configuration is rather simple - however the overall administration and maintenance may prove too much for the majority of network teams.
1. Enable AAA on the switch
(Config)#aaa new-model
2. Define the RADIUS server
(Config)#radius-server host 10.0.0.1 key MYSECRETKEY
3. Define authentication method as 802.1X
(Config)#aaa authentication dot1x default group radius
4. Enable dot1x on the switch globally
(Config)# dot1x system-auth-control
5. Configure each switch port for 802.1X
(Config)#interface range GigabitEthernet0/1 - 48
(Config-if)#dot1x port-control [force-authorized | force-unauthorized | auto]
Optionally, you can configure the 802.1X port to allow multiple hosts on a single switch port. This is done on a per-interface basis:
(Config-if)#dot1x host-mode multi-host
This is essentially all you need to configure a switch to support 802.1X authentication. You will now have much stronger security on access-layer switch ports in any network environment. It is, however, vital to point out the reliance this places on your RADIUS servers. Every time a device plugs into the access layer, an authentication request is made. This can place a drastic load on the RADIUS servers, so it is absolutely vital to provide 3-4 RADIUS servers for redundancy and load balancing.
Kyle
Tuesday, August 3, 2010
Link Aggregation: Load Balancing Algorithm
The first question to ask yourself when forming an EtherChannel, or an aggregated link of any kind, is... how is data distributed amongst these links?
First of all.. I think any new engineer (myself included) will automatically assume that frames/packets are distributed equally, round-robin style, amongst every link in the bundle.. Unfortunately this is NOT the case at all! There is actually a very simple algorithm taking place in the background of that EtherChannel bundle, making use of our subnetting mathematical prowess.
The process taking place depends on the type of load-balancing used on your device.
Setting the load-balancing method:
(Config)#port-channel load-balance <method>
There are several load-balancing techniques available on the EtherChannel.... these options include:
src-mac - Source MAC Address
dst-mac - Destination MAC Address
src-dst-mac - Source AND Destination MAC Address
src-ip - Source IP Address
dst-ip - Destination IP Address
src-dst-ip - Source AND Destination IP Address
src-port - Source TCP/UDP Port
dst-port - Destination TCP/UDP Port
src-dst-port - Source AND Destination Port number
You MUST be wondering at this point, how is this converting into a load-balancing algorithm? Well, I will share that with you now.
If you are using only ONE address or port number to load-balance, the algorithm is incredibly simple. Let's say for example you are using EIGHT ports in your EtherChannel (the protocol isn't important at this point) and using a load-balancing method of "src-ip" with a source IP address of 172.16.0.30.
1. The first step is finding out how many bits are important to the hashing algorithm used for load-balancing. With EIGHT links in the bundle, the links are numbered 0 through 7, so we convert the highest link number (8 - 1 = 7) into binary: 00000111 (don't forget about ZERO being a number). The only thing that WE care about are those LAST three bits.. You will never care about more than 3 bits in ANY EtherChannel. In a 4-link EtherChannel you will care about 2 bits.. in a 2-link EtherChannel you will only care about 1 bit. (Remember this!)
2. Now we take the last three bits of the source IP address. The last octet, 30, is 00011110 in binary, so the final three bits are 110. Add up the place values of the bits that are set: 4 + 2 + 0 = 6 .... The load-balancing algorithm will use link 6 to send this packet!!! It is that simple! (Valid links in this case are: 0,1,2,3,4,5,6,7)
3. Now verify the load-balancing method via the "show etherchannel load-balance" command. This will (SORTA) help you confirm how the switch is choosing links in each bundle.
So a question for you: if all of the EtherChannel traffic was sourced from this one IP address, how would the load-balancing look? Your gut reaction is absolutely correct, link 6 will be used over.. and over.. and over... and the only benefit of your EtherChannel will be redundancy in case one link were to fail.. The distribution of traffic is practically non-existent.
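The single-field arithmetic above is easy to sketch in code. Here is a minimal Python model of it (the function name is my own invention; real Catalyst switches compute this hash in hardware and the exact implementation varies by platform):

```python
import ipaddress

def link_for_src_ip(src_ip: str, num_links: int) -> int:
    """Pick an EtherChannel member link from the low-order bits of the
    source IP. num_links must be a power of two (2, 4, or 8)."""
    mask = num_links - 1  # 8 links -> 0b111, 4 links -> 0b11, 2 links -> 0b1
    return int(ipaddress.IPv4Address(src_ip)) & mask

# Every packet from 172.16.0.30 lands on the same link:
print(link_for_src_ip("172.16.0.30", 8))   # -> 6 (last three bits are 110)
```

Run it with different source addresses and you can see exactly which flows pile onto which member link.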
Quick factoid, relevant to this topic.. If you were to use ANY other Layer 3 protocol while the load balancing method was set to "src-ip" , what would happen? Well there would be no IP address to source the packet from, and the switch would automatically fall back to the next-best load-balancing method - which would be "src-mac" in this case.
Well that is it for load-balancing via Etherchannel.. have a good....... wait... what if we were to use 2 addresses as our load-balancing technique (such as src-dst-ip).. this would throw off our entire way of thinking!
In this situation we use a slightly modified algorithm - although it uses the same basic methodology.
Let's pretend our source IP address = 172.16.0.30 again, and that our destination IP address = 192.168.0.200.
1. The first step will again be to convert these to binary... we are using an 8-link bundle again, therefore we only really care about those final 3 bits.
172.16.0.30 = 10101100.00010000.00000000.00011110 (Final three bits = 110)
192.168.0.200 = 11000000.10101000.00000000.11001000 (Final three bits = 000)
2. Step two is to perform an exclusive-or (XOR) on these last three bits!
    110
XOR 000
= 110
XOR Rule = If bits are same = 0 - If bits are different = 1
Following our XOR operation, we are left with 110. This is the link we will use for this packet! (110 = 6) The packet will be placed on link 6 and be on its merry way. As you can see, this has a better chance of evenly distributing traffic - as there will most likely be a different source IP address, if not a different destination IP address.
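The two-field version is the same sketch with an XOR thrown in (again, hypothetical helper names; the real hash is platform-specific):

```python
import ipaddress

def link_for_src_dst_ip(src: str, dst: str, num_links: int) -> int:
    """XOR the low-order bits of the source and destination IPs
    to pick an EtherChannel member link."""
    mask = num_links - 1
    s = int(ipaddress.IPv4Address(src)) & mask
    d = int(ipaddress.IPv4Address(dst)) & mask
    return s ^ d

print(link_for_src_dst_ip("172.16.0.30", "192.168.0.200", 8))  # -> 6 (110 XOR 000)
```

Because two fields feed the hash, two hosts behind the same source IP talking to different destinations will usually land on different links.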
It is important to get your time and money's worth when deploying an aggregated link, you do not want 6 idle links and 1 link completely overloaded with traffic.. it is a failed design.
My final word of caution is about using a MAC address as the source or destination field for an EtherChannel hash. A router will use its BIA (burned-in address) as the source MAC of every Ethernet frame it forwards, so all routed traffic arrives with the same source MAC. This will likely throw off the load-balancing distribution of any EtherChannel sitting behind a router...
Hope this will help you discover the most efficient way of load-balancing your EtherChannel links in order to make full use of your bandwidth!
Kyle
Link Aggregation - LACP

There are three flavors of EtherChannel: LACP, PAgP, and "ON" - each of these "modes" offers distinct advantages (and disadvantages) over the others. Most commonly I think you will see "LACP", or Link Aggregation Control Protocol, used in the majority of situations. LACP is a standards-based solution to link aggregation; it can negotiate an EtherChannel with any vendor's device (HP, Dell, Juniper, whatever you want) without any hassle. The caveats with LACP are some of the more detailed features, such as the system-priority and port-priority settings. Technically LACP can bundle 16 ports together into one logical EtherChannel port.. however the catch here is that 8 of these links will be in standby mode while 8 are active. Which links are active or standby is determined by the port-priority setting, configured on a port-by-port basis. If you don't care which ports are active or standby, you just leave the default priority of 32768.. in which case the lowest-numbered ports will be active. (Lower is higher priority in LACP - remember this!) In addition to this port setting, there is a system-priority setting which is configured globally on a switch. The lower the system-priority on a switch, the more likely it is to be the "master" in an EtherChannel negotiation with another switch. One switch needs to be designated the leader or master, and this is how that decision is made in the LACP protocol. The option to use 16 redundant links in a single EtherChannel is slick.. but do remember you are capped at 8 active links at any one time.
The last important piece of information is the LACP "modes": active and passive. Active mode ACTIVELY tries to negotiate with the neighbor switch.. while passive sits back and waits to be negotiated with. Needless to say (I hope), passive ----> passive will NOT work in forming an EtherChannel link. Passive ----> active will work, however.. Two passive ends will send no LACP negotiation packets whatsoever, and will permanently remain unchanneled.
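The pairing rule boils down to one line of logic, which can be sketched like this (a toy model, not real LACP state machinery):

```python
def lacp_channel_forms(mode_a: str, mode_b: str) -> bool:
    """An LACP channel forms only if at least one end actively negotiates."""
    assert mode_a in ("active", "passive") and mode_b in ("active", "passive")
    return "active" in (mode_a, mode_b)

print(lacp_channel_forms("active", "passive"))   # True
print(lacp_channel_forms("passive", "passive"))  # False - nobody speaks first
```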
Lastly, an interesting factoid I discovered through experimentation was the difference between LACP slow and fast PDUs. If you do a "show etherchannel port" you may see Slow PDU or Fast PDU beside your link status. I was worried at first when I saw the slow PDU status on my links - however this is completely normal. Slow PDUs are used once the EtherChannel is established and stable (one every 30 seconds), while fast PDUs are used when the EtherChannel is first forming (one every second!)
Bye for now...
Kyle
High Availability..
HSRP for the WAN:

My first post will be my recent findings and experimentation with Layer 3 High Availability.. Specifically using HSRP.
I must say - these techniques have mostly been transparent to me so far in my career/studies, and finally having a grasp on their architectures is very rewarding.
HSRP and VRRP are incredibly similar.. the main difference being that HSRP is a Cisco-proprietary protocol, whereas VRRP is standards-based. Both are designed to provide the exact same redundancy - namely gateway (Layer 3) redundancy for clients. The opportunity to use these protocols is abundant, and any enterprise network is bound to use one of them, or GLBP, somewhere along the line, whether internal or WAN-facing.
Recently I was tasked with providing a redundant solution using two WAN-facing routers connecting to various remote offices. The two redundant routers on the head-office side run GRE tunnels over a service provider MPLS network in order to reach their intended destinations. The goal was to have two routers: one running as a hot standby, the other as the primary forwarder. The design goal was simple.. run HSRP on the internal (LAN) interfaces and tweak OSPF metrics in order to favor the primary router whenever it is up and running. The task also involved adding static routes on the area-office and head-office sides in order to prevent recursive routing. Recursive routing is when the routing engine believes the best way to reach the tunnel destination is through the tunnel itself. The router sends data to the tunnel interface.. the tunnel interface looks up the tunnel destination for a real physical path, and since the routing table points back to the tunnel interface as the path to that destination, you would have a continuous loop. Thankfully the software is smart enough to detect this and shut the tunnel down with a recursive routing error, rather than letting a continuous loop crash the router.
So now that this solution is working.. it is important to remember that the active router (the forwarder) will continue to be the forwarder even if its primary link to the WAN is down completely. This would create a complete black hole, where traffic is sent to the active router and has absolutely no path outbound. The solution is to use HSRP tracking!! We can track the WAN interface on both WAN routers, decrementing the HSRP priority if the WAN interface loses either its line protocol or its IP routing capability. In our case, we tracked line-protocol, as this is more deterministic. The HSRP priority is the only measure of which router should be the forwarder of the HSRP group... and one major caveat to remember is to NOT forget the preempt command. Preempt allows your standby router to immediately take over the active role when its own priority exceeds that of the active router. This is ESSENTIAL in this design.. the active router's WAN connection could go down and its priority could drop by 100.. but if the standby router does not PREEMPT (the default is to NOT preempt), there will be no failover to the standby. You see, by default the HSRP standby will only take over the active role if the active router completely disappears and stops sending hello messages to the multicast address 224.0.0.2 (UDP 1985). This is not ideal in our design.. so we simply add the "standby # preempt" command to remedy the situation.
Sample Configuration:
Router 1 (Active)
interface Gig0/0
ip address 192.168.0.2 255.255.255.0
standby 1 ip 192.168.0.1
standby 1 authentication ISPexpert
standby 1 preempt
standby 1 priority 200
standby 1 track 1 decrement 100
!
track 1 interface GigabitEthernet0/1 line-protocol
//Gig0/1 is the WAN interface; the "track 1" object is defined in global config
Router 2 (Standby)
interface Gig0/0
ip address 192.168.0.3 255.255.255.0
standby 1 ip 192.168.0.1
standby 1 authentication ISPexpert
standby 1 preempt
standby 1 priority 150
standby 1 track 1 decrement 100
!
track 1 interface GigabitEthernet0/1 line-protocol
In this situation, this is really all we need to do from an HSRP perspective.
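To see how tracking and preemption interact, here is a toy model of the election logic (my own hypothetical structure; real HSRP also breaks priority ties using the higher interface IP address):

```python
from dataclasses import dataclass

@dataclass
class HsrpRouter:
    name: str
    priority: int
    wan_up: bool
    preempt: bool
    decrement: int = 100  # matches "standby 1 track 1 decrement 100"

    def effective_priority(self) -> int:
        # Tracking lowers the priority when the WAN interface is down.
        return self.priority - (0 if self.wan_up else self.decrement)

def active_router(current: HsrpRouter, standby: HsrpRouter) -> HsrpRouter:
    """Standby takes over only if it preempts with a higher effective priority."""
    if standby.preempt and standby.effective_priority() > current.effective_priority():
        return standby
    return current

r1 = HsrpRouter("R1", priority=200, wan_up=True, preempt=True)
r2 = HsrpRouter("R2", priority=150, wan_up=True, preempt=True)
print(active_router(r1, r2).name)   # R1 stays active (200 > 150)

r1.wan_up = False                   # WAN fails: 200 - 100 = 100 < 150
print(active_router(r1, r2).name)   # R2 preempts and becomes active
```

Flip `preempt` to False on R2 and the second call still returns R1 - which is exactly the black-hole scenario the preempt command prevents.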
I will be posting more information on GRE tunnels and topics such as VRRP and GLBP... as they have proven to have certain complexities and esoteric tendencies which are not fully documented by Cisco or anywhere else I have tried to research...
Bye for now..
Kyle