Wednesday, June 29, 2016

Why is Everyone Upset with RadioShack?

The following is a position paper that I wrote in April of 2015. To set the timeline, this was merely weeks, if not days, after RadioShack announced that it was selling its customer information database, which came shortly after its bankruptcy. You know, that database that was assembled from the information demanded of you at the register every time you stopped in to grab a pack of batteries. This is in spite of their policy that they would never sell that information without your consent (emphasis mine).

Saturday, June 25, 2016

Better Questions Get Better Answers

When it comes to online support, whether it's a forum, a Facebook group, or whatever else, there is a constant stream of poor questions being posted. To begin, I'll use a specific example of a poor question that was recently posted in the CCNA group on Facebook, and then break down what was wrong with the question, where the original poster kept going wrong as the thread progressed, and what he could have done from the beginning to avoid all the back and forth and get a good answer relatively quickly.

The original post is as follows:

hi guy

people help me this is nothing to fault, cisco 2851 router console not break password, displays a message


entire text of login banner omitted for brevity

thank all

I replied to the post asking for a better description of what exactly the poster was trying to do. The poster replied that they cannot get into ROMMON mode and gave a link to the document from the Cisco website that they are using as a reference. We're off to a good start here: the poster at least put some effort into their problem before consulting the group. However, they didn't feel it necessary to share all the relevant information. As we'll see soon enough, this leaves anyone attempting to help the poster able only to make guesses at this point. And I know I speak for a lot of members of the group when I say that life is too short for that.

Let’s look at where the poster went wrong. I'm going to ignore the horrific grammar, as I understand that English is not the first language for a lot of these guys, and the poster is at least trying. The first thing that jumps out at me is that the poster does not tell us anything about the computer they are sitting at. We don't know what operating system they are using, and we don't know what program they are using to access the router. This is hugely important: as anyone knowledgeable with Cisco routers and switches knows, the break key differs by program. And there are numerous other cases where a specific program or even the computer’s operating system behaves a little differently than the majority of other cases, so knowing whether any of those cases applies to the current question is important. If you’re still using Windows 98 for whatever reason, and it has a little idiosyncrasy, I’m probably not going to think to ask if you’re using Windows 98. Help us to help you.

Now where could the poster have helped out a bit more with the question in the original post? Here’s a working list of things that should be included with your question, in no particular order.
  • What specifically you are trying to accomplish. Be detailed and specific. Please don’t just assume I know what you’re up to. 
  • What specifically is or isn’t working as expected. 
  • What you have tried so far, and why you have tried that (as in, what how-to or guide are you working from?) 
  • Any error messages displayed along the way. This also includes anything showing up in the system log file, or in the application’s local log. 
  • A description of your environment. What program(s) are you using to accomplish this task, what operating system is your computer running, what router are you doing this on, how is your computer connected to said router, etc.

And conversely, here are some things that you can leave out, also in no particular order.
  • Attitude - Remember who is trying to help who here. If you don’t like a reply, there is nothing saying you have to respond. 
  • Self-deprecating comments – it’s old, tired, and just obnoxious. “He he, I’m so dumb!” Don’t do it. 
  • Irrelevant information – while we want as much information as possible, don’t let the important information get lost in the noise of stuff that doesn’t matter. In the example provided, the multiple lines of the company login banner can be considered irrelevant here.

Now to contrast, I’ll give a couple of examples of a much better post, from other members of the same group. This second user posts:

So I saw this post today. If I understand the question correctly, 172.16.1.X /24 can divided into 64 subnets (/30 subnet mask with 2 usable host addresses). However, people are saying that the answer is 256 because they subnetted 172.16.X.X /16 (Class B) using /24.

I don't see why they need to refer back to the default since the question says "could be created". What am I missing?

The third user posts:

Hi people,

I am confused in the depth understanding of the four way handshake of the dhcp. The todd lammle book states the following,

DHCP discover - Broadcast
DHCP offer - unicast
DHCP request - Broadcast
DHCP acknowledge - unicast

I labbed up a simple topology in GNS3 and was running Wireshark.

There I noted, only the discover message was with L2 broadcast and the rest with the specific physical address.

The DHCP request was from to which I already knew. I was expecting L2 broadcast address in the destination mac but I saw the mac of the server. This means that the request is unicast right ? or Am i mis-understanding anything ?

I was searching web but couldn't find satisfying information as there were multiple information from site to site and video to video.Any reliable source for the information will also be thankful. Thanks in advance.

Notice the difference? These posts give the question, what the poster thinks is the answer, and some of their thinking in arriving at that answer. The group is much more likely to give time and thought to these posts than to the first example I mentioned.
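As an aside, the subnet math in the second example checks out. Here's a quick sketch using Python's ipaddress module, with the addresses taken straight from the quoted question:

```python
import ipaddress

# The /24 from the quoted post, carved into /30s (2 usable hosts each)
net = ipaddress.ip_network("172.16.1.0/24")
subnets = list(net.subnets(new_prefix=30))
print(len(subnets))  # 64

# The "256" answer comes from subnetting the classful /16 into /24s instead
class_b = ipaddress.ip_network("172.16.0.0/16")
print(len(list(class_b.subnets(new_prefix=24))))  # 256
```

Both answers are arithmetically valid; the disagreement in the thread is really about whether the question starts from the /24 or from the classful /16.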

A few other things, in no particular order:
  • Don’t type your entire post in capital letters. On the Internet, this is considered shouting, and I for one am not going to help someone who is yelling at me from the start. Being polite will get you a lot in life. 
  • Let us know what finally fixed the problem, even if this thread didn’t lead you to that answer. Help out someone else who may be having the same problem. 
  • Give thanks to who helped you out, and also to those who gave a good effort. 
  • Don’t just take. Contribute back to the group. Even if you’re a beginner and can’t answer questions yourself, try out the answers for yourself and reply back if it worked for you. Ask good follow up questions. On Facebook, give a like to a correct answer. 
  • Do your own homework. People will no doubt do your homework for you if you ask nicely enough, but what’s the point? If you don’t want to do the work, don’t take the class. In the CCNA group, you’ll be banned soon enough for rapid fire posting your homework questions and expecting the group to do it for you. 
  • Don't thread-jack. If your question has nothing to do with the thread you are about to post it on, start a new thread. 
  • Don’t take offense when I tell you this might not be the best place to ask your question. Nobody is an expert in everything. While the CCNA group allows technical discussion in just about any area, we’re networkers and not web developers. So if you’re asking a complex CSS-related question, we may refer you to a more relevant forum. This isn’t an insult or an attempt to get rid of you; we are trying to help you out.

And finally, and most importantly, take a minute to read through the rules of the group or forum before posting anything. I can’t even fathom the number of people who have been banned from the CCNA group on their very first post for this very reason. Some rule violations will get you a warning, others aren’t so forgiving. I don’t care if you’re new; if you violate a zero tolerance rule you will be removed from the group immediately. Save yourself the trouble of typing up a post nobody is ever going to see, and save an admin the trouble of having to ban you. Know the rules and stay within them. If I don’t know you’re a cheater, we’re still good. It’s only when you publicly admit to being a cheater, and even ask for assistance in cheating, that we have problems.

Wednesday, June 22, 2016

Symmetric Traffic and IPS

A well-known problem for network and security professionals in the enterprise is asymmetric routing. At its simplest, this is where traffic flows outbound through Router A while the return traffic comes back through Router B, or through both Routers A and B. If you're using a reflexive ACL, for example, this will lead to some, if not all, of the return traffic being blocked as it attempts to return through Router B. This is due to Router A having a record of the outbound traffic while Router B does not. Riverbed breaks this down into several sub-categories such as complete asymmetry, server-side asymmetry, client-side asymmetry, and multi-SYN retransmit. But for our purposes here, it's all asymmetric, and it's all a bad thing. While some firewalls are able to share state to avoid this situation, not all do. And Cisco routers running IOS do not.
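To make that failure mode concrete, here's a toy Python model of two stateful edge devices that don't share state. This is purely my own illustration of the concept, not actual router behavior or code:

```python
# Toy model of reflexive-ACL-style state: each router only knows about
# the outbound flows it has itself forwarded.
class StatefulRouter:
    def __init__(self, name):
        self.name = name
        self.flows = set()  # flows this router has seen going outbound

    def forward_outbound(self, flow):
        self.flows.add(flow)

    def permit_return(self, flow):
        # Return traffic is only permitted if it matches the reverse
        # of a flow this particular router recorded on the way out.
        src, dst = flow
        return (dst, src) in self.flows

router_a = StatefulRouter("A")
router_b = StatefulRouter("B")

flow = ("10.0.0.5:51000", "203.0.113.9:443")
router_a.forward_outbound(flow)  # outbound traffic leaves via Router A

return_flow = ("203.0.113.9:443", "10.0.0.5:51000")
print(router_a.permit_return(return_flow))  # True  - A has the state
print(router_b.permit_return(return_flow))  # False - B never saw the flow
```

If the return path lands on Router B, the traffic is dropped even though it's a perfectly legitimate reply, which is exactly the asymmetry problem described above.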

While asymmetric routing is known to be a problem at the network edge, it can be a problem for security professionals internally as well.  And the larger the network is, the more likely asymmetric traffic is to occur at some level.  When you deploy an IPS sensor in the network, it must be able to see all traffic in both directions for maximum effectiveness.  When an IPS sensor is able to see all the traffic involved in a particular session, you get better threat detection, reduced susceptibility to IPS evasion techniques, and less susceptibility to false-positives and false-negatives. 

While it cannot be completely avoided at the enterprise edge, the good news is that internally, steps can be taken to reduce if not eliminate the effects of asymmetric routing.  So good network design is a must to get the maximum effectiveness of an IPS deployment, particularly if there are going to be multiple sensors along a given traffic flow.

There are a few options to ensure symmetric traffic flows, or to mitigate the effects of asymmetric traffic flows, including:

  • Duplicate traffic across multiple IPS sensors to ensure each sensor can see all applicable data.  In addition to the challenges presented in getting all the relevant data to each IPS, we also have a greater likelihood of overloading IPS sensors with traffic, which will result in packets being dropped.
  • Integration of an IPS switch. This funnels all traffic down through a single switch. While this is better from an IPS standpoint, it introduces a single point of failure into the network.
  • Correctly configuring spanning tree parameters to ensure symmetrical paths across Layer 2 areas.
  • Routing manipulation with techniques such as PBR. This is a cost effective solution as it involves only configuration changes rather than additional hardware.  But it adds complexity to the network in addition to requiring cooperation between security and networking. 
  • Sticky load-balancing utilizing technology such as Cisco's ACE module or Riverbed's Asymmetric Routing Detection to reduce the chances of asymmetric routing.
  • In cases of HSRP induced asymmetry, utilize EEM and EOT in order to change the paths of HSRP related routes dynamically.
  • Configuring firewalls as active/standby pairs rather than active/active pairs.

But as you can see, many of these techniques involve taking redundant data paths out of the equation, thereby reducing the amount of overall usable bandwidth across the network. Others involve sending more data to or through each IPS unit, increasing the burden on each unit and increasing the likelihood of dropped packets. So there is obviously a balancing act between performance and visibility.

Saturday, June 18, 2016

One Library of Congress

One of my favorite units of measurement is "One Library of Congress."  Particularly on Slashdot, armchair storage engineers and generalists alike throw around this measurement when talking about astronomical amounts of data.  Oftentimes, posters will talk in terms of data volume being "the equivalent of three Libraries of Congress" or data transfers "at the speed of one Library of Congress traveling by station wagon."  So let's just get the real fact of the matter out of the way now: it will probably never be known just how much information is stored in the Library of Congress.  There are just too many variables that are still unknown.  Many estimates exist, with some being better than others, but at the end of the day they're still just that, estimates.  I'm also not bothering to see just how many CDs I can fit into a station wagon, one of my favorite methods of fitting one Library of Congress into the mythical station wagon.  A few cents each still adds up when we're talking about that many discs.

In 2000, UC Berkeley professors Peter Lyman and Hal Varian weighed in with what is believed to be one of the earliest authoritative estimates of how much information was produced in that year.  Aside from the stated goal of the research, they estimated that the Library of Congress print collection contains 10 TB of data, a figure that is often still cited today.  This number is based on the average book containing 300 pages, which, if scanned at 600 DPI in the TIFF format and then compressed, would average 8 MB per book.  With the print collection consisting of 26 million books at the time, their math should have come out closer to around 200 TB, clearly indicating that 10 TB was just a guess.  This is also only accounting for textual data, as images would change that number.  Audio, video, photographs, and other forms of nontextual data would also vastly increase that number.
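That arithmetic is easy to reproduce. A quick sanity check of the figures above:

```python
# Reproducing the Lyman/Varian-era numbers cited above
mb_per_book = 8                      # 300 pages at 600 DPI TIFF, compressed
books = 26_000_000                   # print collection circa 2000

total_tb = books * mb_per_book / 1_000_000  # MB -> TB (decimal units)
print(total_tb)  # 208.0, i.e. roughly 200 TB rather than 10 TB
```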

An area that can be better estimated is the Web Archiving program, which by itself had collected 525 TB of data as of July 2014. A Library of Congress storage engineer by the name of Carl Watts (now there's a position that'll really let you demonstrate your skills, or more likely your lack of skills, in storage and backup) gave an estimate of 27 petabytes of data in September 2012.  In comparison, it was estimated that global data would grow to 2.7 zettabytes during 2012, up 48% from 2011.  And in 2008, Americans consumed 3.6 zettabytes of information.  So back on the topic at hand, we'll probably never know just how much data is stored in the Library, especially when looking at it in terms of bits and bytes when there are so many dead trees still containing the data.

Since it's a nice large number that comes from an authoritative source, let's just go with 27 petabytes for now.  The largest HDD that I can purchase at NewEgg today is 8TB. I know there are larger (and no doubt costlier drives based on GB per dollar), but I'm just talking what is commonly available.  By my math, that'll be 3456 of those 8TB models, before taking into account additional drives for parity in RAID sets, lost space due to overhead (filesystem use, files not taking up the entire sector), etc. And for now I'm going to ignore the whole 1000 bytes vs. 1024 bytes in a kilobyte argument that the HDD manufacturers have put upon us. That will only lead to more drives needed.

Since the general consensus is that you shouldn't be building RAID5 sets with drives that big anyway (rebuilding an array will take days, putting unnecessary stress on the other drives, which may in turn cause others to fail as well), we'll take extra drives for parity out of the equation.  That eliminates the need to worry about just how many RAID5 or RAID6 arrays we should be building out of that many disks.  So let's go ahead and bump that up to a nice even 3500 drives to account for space lost to overhead.  At an average of $300 per drive (I never buy the cheapest, nor the most expensive), we're at $1,050,000 to house one copy.  And of course that's just the drives; I haven't even begun to factor in the servers required to house them, the electricity (regular, generator, and battery backup) required to keep them spinning, or the air conditioning required to keep them from melting down.  Hopefully you can get that down some by purchasing in bulk, but even then it's going to be a pretty big number.  And certainly with that much data, you're going to want a good backup strategy, as in more than one copy.  And no, we're not going to call the physical books the backup.
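The drive math works out as follows (using binary terabytes per petabyte, which is what the 3456 figure implies):

```python
TB_PER_PB = 1024          # binary units, matching the 3456 figure
library_tb = 27 * TB_PER_PB  # the 27 PB estimate, in TB
drive_tb = 8                 # largest commonly available drive

drives = library_tb / drive_tb
print(drives)  # 3456.0 drives for raw capacity alone

drives_padded = 3500         # rounded up for filesystem overhead, etc.
cost = drives_padded * 300   # ~$300 average per 8 TB drive
print(f"${cost:,}")          # $1,050,000 per copy, drives only
```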

And in case you were curious, you'd have to fit over 21 million CDs into the back of that station wagon in order to calculate a transfer rate.  We can get that down to around 5 million if we move up to DVDs, and down to 416,000 if we move to Blu-ray discs.  We'll be better off, at least in terms of sanity, if we load the station wagon up with 3500 external 8 TB HDD enclosures.  I still recall backing up data to countless CD-R discs back in the day, and it was not fun sitting there switching discs in and out of the drive every few minutes.  Another idea is that we could do the transfer with tape, the largest of which I can find holds 185 TB per cartridge, though the fact that the articles all date from 2014 and there is still no actual product that I can find may indicate that this technology is simply vaporware.  We should be able to get a Library of Congress onto 150 of these tape cartridges.

And based on the fastest cross-country trip on record, this station wagon going from New York to Los Angeles would be moving data at approximately 2.2 Tbps according to my math, which I backed up with this handy bandwidth calculator.  This is of course assuming that you can get all 27 petabytes into a single station wagon, which may still be a bit of a challenge since those 185 TB tape cartridges from Sony don't appear to be a purchasable product just yet.  But even if you have to send out 4 or 5 station wagons, that's still a lot better than what I'm getting on Comcast.
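For the curious, the back-of-the-envelope bandwidth works out like this. Note that the trip time of roughly 28 hours 50 minutes is my assumption for "the fastest cross-country trip on record," not a figure from the bandwidth calculator:

```python
# Station wagon bandwidth: 27 PB delivered over one record-setting
# cross-country drive (trip time is an assumption, see above)
bits = 27 * 1024**5 * 8          # 27 PB (binary) expressed in bits
seconds = 28 * 3600 + 50 * 60    # ~28h50m, New York to Los Angeles

tbps = bits / seconds / 1e12
print(round(tbps, 1))            # roughly 2.3 Tbps
```

Binary versus decimal petabytes moves the answer a tenth or two either way, which is why this lands near, rather than exactly on, the 2.2 Tbps figure.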

Leslie Johnston of the Library of Congress has her own take (as well as a lively discussion in the comments) on just how much data is housed by the Library, including a collection of comparisons from around the web.  She also posted a follow up (along with more great comments) here.  I found the discussion in the comments about just how many books exist (all time) and what exactly constitutes a book to be pretty interesting.  Another take on this book debate from Google.  Matt Raymond, also of the Library of Congress, gives his take on it here.

Contel Bradford, an apparent fellow Detroiter, posted another interesting take on the Library of Congress, as well as the state of libraries in general today, at the StorageCraft Recovery Zone blog.  Contel breaks down the contents of the library, including some analysis of the audio and video assets.  Recovery Zone is StorageCraft's blog that is dedicated to "exploring BDR solutions and technologies relevant to MSPs, VARs and IT professionals."

So how much data is actually contained in the Library of Congress?  We're going to have to settle for "a lot" as the final answer.  We'll probably never really know for sure.

Wednesday, June 15, 2016

The Accuracy of Sampled Netflow

To alleviate the fear of overburdening the CPU with the collection of NetFlow statistics, Cisco gives us the option of using Sampled NetFlow. Sampled NetFlow allows you to sample 1 out of every 10 packets, 1 out of every 100 packets, or whatever other fraction of the total number of packets you choose. The theory is that with a good sample, the traffic will still be indicative of what is flowing through the router. If 10% of the total number of packets flowing through the router is DNS queries, for example, then approximately 10% of the packets in the sample will also be DNS queries, and so on.
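That intuition is easy to demonstrate with a quick simulation. This is just an illustration of the sampling theory, not actual NetFlow code:

```python
import random

random.seed(42)  # deterministic run

# Simulate a packet stream where 10% of packets are DNS queries,
# then take a 1-in-100 systematic sample and compare proportions.
TOTAL = 1_000_000
stream = ["dns" if random.random() < 0.10 else "other" for _ in range(TOTAL)]

sample = stream[::100]  # every 100th packet

true_pct = stream.count("dns") / TOTAL
sample_pct = sample.count("dns") / len(sample)
print(f"actual: {true_pct:.3%}  sampled: {sample_pct:.3%}")
```

With a sample of 10,000 packets, the sampled proportion lands within a fraction of a percent of the true proportion, which is the whole premise behind sampling.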

This is necessary because of the way a router handles traffic when collecting NetFlow statistics: to collect NetFlow statistics on a packet, that packet has to be processed by the CPU. When sampling is enabled, the packets that are not part of the sample are switched faster because they do not require that additional processing.

NetFlow sampling is enabled on supported IOS platforms with just a few commands.

ip route-cache flow sampled
ip flow-sampling-mode packet-interval 100

NetFlow sampling can be monitored with the show ip flow sampling command.

So as you can see, NetFlow sampling is simple to configure and monitor. It only takes a couple commands. But the question now needs to be asked, how indicative of the total network traffic is the sample? In other words, if I’m seeing 10% of all traffic being DNS queries in my sample, is 10% of the total traffic flowing through this router really DNS queries? Or is there some significant level of error in the sampling? In Cisco documentation and Certification Exam Guides, it is admitted that the sample will never be 100% accurate, but that it should usually be pretty close. They’ll also mention that you should obviously check the accuracy periodically.

Recently, I came across an academic article on the accuracy of NetFlow sampling. In the article, the researchers collected data over time with a 1-in-250-packet NetFlow sample and compared it to a raw traffic capture taken with tcpdump. Shown below is Figure 8 from the article, which summarizes their findings. The red dotted line shows real-time data of traffic flowing through the router, while the solid blue line shows real-time data of their 1-in-250 NetFlow sample.

The article states that "In Figure 8 the cumulative empirical probability is plotted with its relative error. It indicates that the performance of systematic and static random sampling is not distinguishable in practice. We believe it is true in most of backbone links where the degree of multiplexing of flows is high."  In other words, the sample is practically indistinguishable from the full data set.  Equally important, they found the processing overhead of NetFlow sampling to be insignificant.  The accuracy of their collection methodology is further demonstrated by SNMP byte count data strongly correlating with NetFlow byte count data.  There are a lot of statistics and graphs in the article if you're into that sort of thing.

Conversely, in another academic article, the researchers found their sampling to be significantly less accurate.  They stated that "Our experimental results allow us to come to the conclusion that: (i) our traffic classification method can achieve similar accuracy than previous packet-based techniques, but using only the limited set of features provided by NetFlow, and (ii) the impact of packet sampling on the classification accuracy of supervised learning methods is severe."  They describe a training process that gets their accuracy to 85% for 1/100 sampling.  Good enough for most use cases, but still too manual, and still a far cry from the results of the first study.

So where do we stand with Sampled NetFlow accuracy?  One study says it's pretty accurate, and the other says not so much.  So the jury is still out, and we're back to Cisco's recommendation that you should be testing the accuracy to determine if it is good enough for your use case.  Like the team in the first article, you can easily use a network tap or SPAN port to compare what is actually coming out of a router interface with the NetFlow sample estimating what is coming out of that router interface.  Don't just assume.

Saturday, June 11, 2016

This is Why We Can't Have Nice Things

There's any number of reasons why users will not conform to good password policy.  It's difficult to remember so many without writing them down somewhere, we're not supposed to write them down, it's difficult to come up with a new one every 90 or 180 days that isn't one of the x number of previously used passwords, etc.  Honestly, I would rather see a user write them all down in a notebook kept in a safe location than use the same username/password combination for everything, but I'll get flamed for that view by some.


Wednesday, June 8, 2016

IOS Zone Based Firewall

One of the most commonly covered security features when it comes to Cisco security is the ZBF.  It wouldn't be much of a network security blog without at least one post on this topic, so here's my take.

With IOS version 12.4(6)T, Cisco introduced the Zone-Based Firewall (ZBF), sometimes referred to as the Zone-Policy Based Firewall.  With this, the Classic IOS Firewall, or Context-Based Access Control (CBAC), available since IOS version 11.2, is now deprecated. Nearly all of the features of the Classic IOS Firewall are implemented in ZBF, along with a wide range of new features. In addition to the new features available in ZBF, it is also said to improve firewall performance over CBAC for most inspection activities.  I've seen it stated in some places that if you attempt to intermingle CBAC configuration commands with your ZBF, it MIGHT work; however, most documentation states that it won't.  So I wouldn't risk it.  Choose one or the other.

Saturday, June 4, 2016

Transfer to Lenny

When I was at my last job, we all worked slightly different hours to cover all the hours that our clients were in the office.  My coworker, who came in at 6, told me right before he left that when telemarketers called, he had started giving them my name specifically for whatever they were selling or whatever position they would ask for.  Apparently they call early, because I started getting tons of calls every day asking for me specifically. It still goes on today; they just forward the calls to my extension. I no longer check the voicemail, but I do see the emails notifying me that they still come in.

So I recently came across this great "service" while browsing Reddit one day. There's a phone number that you can transfer calls to, and "Lenny" will take the call.  Lenny is a bot that plays prerecorded responses back to the caller, giving the appearance that they're talking to a real person.  So the next time you start getting calls from a telemarketer who just won't take no for an answer, transfer the call to Lenny.  I'm thinking about programming the number into my cell phone for the next time Cisco calls asking for "whomever recently browsed the Cisco site from this number."

If you don't have anyone to transfer to Lenny, you can still visit Lenny's YouTube channel to enjoy his hi-jinks.  While it's not Tom Mabe, it'll definitely do.

Friday, June 3, 2016

Padding the Statistics

I read something on the the topic of Search Engine Optimization (SEO) recently, and one of the the things it mentioned is that Google pays attention to the the length of a page and/or article.  More specifically, things that extend out to 1,200 words or more get ranked higher than short articles.  I've seen this supported elsewhere, though some say it's 1,500 words rather than 1,200.  Either way, if you publish something that is a quick blurb, conventional wisdom says that it will be ranked lower than the the long article on the the same topic that I published.  I'll argue the the validity of this after reading hundreds of articles over the the years that are nothing more than attempts at fitting in every key word they can to turn up in more searches.  But it's not my call.

So now that bloggers who care about these things are on a minimum word count, let's revert back to high school and college and look for the the easy tactics to increase it.  There's a lot of guides online that give great suggestions such as add examples, address different viewpoints, clarify statements, and use quotations.  But we don't have time for that right now, and besides, we're only a hundred or so words short.  I just need a quick fix here! 

One of the things that I've always wanted to try, but never had the guts to, is a simple little trick where you take every instance of the word the, and type it twice.  The idea is that unless you're specifically looking for it, your eye won't catch it.  In fact, have you noticed that I've been doing it before I brought it up?  I have, 8 times before this paragraph.  It hasn't helped though, my word count is only at 306 for this post.

Wednesday, June 1, 2016

Server 2003 IAS RADIUS Server

Since I'm sure many home labbers are still rocking Server 2003, I'll put it up in hopes that someone will still find it useful. This post was originally done a number of years ago when Server 2008R2 was still new and memory was still at a premium on my virtual machine host. I was hoping to save a few MB by sticking with 2003. I'm sure 2000 Server is pretty similar (and even smaller), though I have never set up IAS on that platform.

The first step is to install Internet Authentication Service (referred to as IAS from here on out). Ensure that you have your Server 2003 installation CD handy. Go to Start, Control Panel, and launch the Add or Remove Programs applet. Along the side of the applet, there will be a button called Add/Remove Windows Components. Launch that. In the Components box, highlight Networking Services and then click on Details. Scroll down until you find Internet Authentication Service and select it. Choose OK, then click Next. That’s it, IAS is now installed and ready to be configured.

Now let’s launch the IAS control panel. Depending on the configuration of your server and your preferences, you can go to Start > Administrative Tools > IAS. Once it’s started, you’ll see the main IAS window. This is where you'll be doing all of your RADIUS server configuration.

Next we want to add the clients that will be allowed to authenticate. Right-click on RADIUS Clients and then select New RADIUS Client. You will get a dialog box that allows you to enter the information for the client. For Friendly Name, enter a string to identify the device. It's probably a good idea to enter the hostname of the device, especially if you are going to enter dozens of routers and switches. In IP Address, enter the IP address of the device. You want to enter the IP address that will be seen as the source address of the packets received by Windows Server. In the Client-Vendor drop-down list, select Cisco. In Shared Secret, enter the RADIUS password to be used with this device. Enter the same password again in Confirm Shared Secret, and you're done. Click OK to complete the configuration. Repeat these steps for each additional device you wish to authenticate to this server.

Next, you’ll want to choose users who will be allowed to authenticate via RADIUS. You can go with existing users, or you can create new users here. It doesn’t matter if you want to use local users or Active Directory users, the process really isn’t that different. You just need to add the users to a group which you'll be using later.

Right-click on Remote Access Policies and select New Remote Access Policy. Click Next through the welcome screen. You'll now be at the Policy Configuration Method screen. Select Set up a custom policy, give it an appropriate name, and click Next. You're now at the Policy Conditions window. Click Add. In the Select Attribute window, scroll down to "Windows-Groups" and select Add. You'll now get a window called Select Groups. "From this location" indicates where you'll be selecting the group from: the local machine or a domain. If you want to use a group on the local machine, this should be the computer name; otherwise it should be the name of the domain. In the large white box below that, enter the name of the group and hit Check Names. If all is well, you will see the group listed in the form "Computer\GroupName." Hit OK. You'll be back at the Policy Conditions box and your policy conditions will say something to the effect of Windows-Group matches "Computer\Group." Hit Next, select Grant remote access permission, hit Next again, and you'll be at the profile window.

Hit Edit Profile. You'll be at the Edit Dial-in Profile window. Uncheck all authentication methods except for unencrypted authentication and click Apply. Now select the Advanced tab. In the box, select Service-Type, and change the value to Login. Click OK, and now remove the Framed-Protocol option. Click Add to add a new option. Scroll down, find Vendor-Specific, and click Add. Click Add and select Cisco. Select "Yes, it conforms." Complete the window as follows: Vendor-assigned attribute number: 1. Attribute format: String. Attribute value: shell:priv-lvl=15. This string will be used by IOS to determine a privilege level for the user once authenticated to the device. OK your way back out to the Edit Dial-in Profile box.

Click OK and then a couple Next's to finish up.

Now, back at the IAS window, select Remote Access Policies, right-click on your policy, and select Move Up until it is the first policy in the list. You have now completed setting up IAS to serve as a RADIUS server for all of your devices.