Saturday, December 3, 2011

Building an ACL

The different types of ACLs are first identified by the number assigned to them. Standard IP ACLs use numbers in the ranges 1-99 and 1300-1999. Extended ACLs use numbers in the ranges 100-199 and 2000-2699. Other types of ACLs, which filter traffic for other protocols such as AppleTalk, DECnet, IPX, and XNS, use other number ranges; however, those are rarely used today. Named ACLs, of course, do not use numbers, but instead use text names as identifiers. Other than ensuring that an ACL number falls into the correct range, the numbers have no special meaning and can be used as you see fit.

There are two steps in defining an ACL. First, you enter the series of access control entries (ACEs) that make up the ACL. Then, you apply the ACL to an interface. For a standard ACL, the syntax is as follows:

access-list 10 permit 192.168.1.0 0.0.0.255
access-list 10 permit 192.168.2.0 0.0.0.255
access-list 10 deny any

This simple ACL (using example networks) allows all traffic from hosts with IP addresses in the 192.168.1.0/24 or 192.168.2.0/24 networks. The "access-list 10" portion signifies that each of these statements belongs to the ACL designated as 10. An extended ACL looks as such:

access-list 100 permit tcp 192.168.1.0 0.0.0.255 10.0.0.0 0.255.255.255 eq www

This extended ACL (again using example networks) permits TCP traffic originating from the 192.168.1.0/24 network with a destination in the 10.0.0.0/8 network on port 80 ("eq www" means "equals www", or port 80). In addition to "eq" for equals, we can also use "lt" for less than, "gt" for greater than, or "range" to specify a range of ports. To apply an ACL, simply enter the configuration of that interface and specify which ACL as such:

interface Serial0/1
 ip access-group 10 out
line con 0
 access-class 15 in

This applies ACL 10 to the Serial0/1 interface, inspecting traffic moving in the outbound direction through that interface, and ACL 15 to inbound traffic on the console line. Note that on an interface the command is "ip access-group", while on a line (console, aux, or vty) the command is "access-class". An important thing to note here is that ACLs use wildcard masks rather than the more traditional subnet masks used elsewhere when configuring a router. An ACL can be applied to any interface, or to any line (console, aux, or vty).
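Wildcard masks trip up a lot of people, but the rule is simple: a wildcard mask is just the bitwise inverse of the corresponding subnet mask, where a 0 bit means "must match" and a 1 bit means "don't care". A quick sketch in Python (my own illustration, not from any Cisco tool):

```python
import ipaddress

def wildcard_from_subnet(subnet_mask):
    """Return the wildcard mask for a subnet mask by inverting its bits:
    0 bits mean 'must match', 1 bits mean 'don't care'."""
    inverted = int(ipaddress.IPv4Address(subnet_mask)) ^ 0xFFFFFFFF
    return str(ipaddress.IPv4Address(inverted))

print(wildcard_from_subnet("255.255.255.0"))    # 0.0.0.255
print(wildcard_from_subnet("255.255.255.252"))  # 0.0.0.3
```

So the /24 networks in the standard ACL example get a wildcard mask of 0.0.0.255.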

There are a number of mnemonics used by Cisco IOS to specify ports. You can use the actual port number when configuring the router; however, the mnemonic will still be shown in the running configuration and startup configuration. Some mnemonics that you will often see include:
  • bootpc
  • ftp
  • isakmp
  • lpd
  • ntp
  • rip
  • ssh
  • telnet
  • www
The keyword "any", meanwhile, can be used in place of an address and wildcard mask to match any host.

A router can have one ACL per interface, per direction, and per protocol. What this means is that each interface may have one ACL in each direction for each protocol that the router supports. For example, in a router that supports IP, IPX, and AppleTalk, each interface may have an ACL for inbound IP, outbound IP, inbound IPX, outbound IPX, inbound AppleTalk, and outbound AppleTalk. For a router that supports those three protocols and has three interfaces, that means up to 18 active and applied ACLs. An administrator can have as many ACLs defined as memory permits; however, only the previously specified 18 may be applied and active.
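One more behavior worth internalizing: an ACL is evaluated top-down, the first matching entry wins, and every ACL ends with an implicit deny. That matching logic can be sketched in Python (a toy model of the rules, not Cisco's implementation; the ACL entries and addresses below are made-up examples):

```python
import ipaddress

def matches(addr, base, wildcard):
    """An address matches an entry when every bit NOT set in the
    wildcard mask is identical between the address and the base."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    return (a & ~w) & 0xFFFFFFFF == (b & ~w) & 0xFFFFFFFF

def evaluate(acl, addr):
    """First matching entry wins; unmatched traffic hits the implicit deny."""
    for action, base, wildcard in acl:
        if matches(addr, base, wildcard):
            return action
    return "deny"  # implicit deny at the end of every ACL

# A standard ACL with two permit entries (example networks)
acl_10 = [
    ("permit", "192.168.1.0", "0.0.0.255"),
    ("permit", "192.168.2.0", "0.0.0.255"),
]

print(evaluate(acl_10, "192.168.1.25"))  # permit
print(evaluate(acl_10, "10.9.8.7"))      # deny (implicit)
```

The implicit deny is why an ACL with only permit statements still blocks everything else, and why entry order matters when entries overlap.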

Saturday, October 15, 2011

Seizing Internet Domains

Homework Assignment from the past.

The question of who, if anyone, has the authority to seize the domain name of a questionable website first came to the forefront two years ago, when the Commonwealth of Kentucky attempted to take control of 141 domain names belonging to websites associated with online gambling. While most forms of online gambling are currently illegal in the United States, it was quite controversial when a county circuit judge gave the state the green light to seize control of these sites. The major question here was motive. In Kentucky as of 2005, 96,000 jobs were in some way related to the horse racing industry. It is fair to ask whether this was simply an attempt to shut down illegal websites, or a state looking after its own bottom line.

This issue came to the forefront again recently with what has been dubbed in the media the "Internet Kill Switch." This past June, a Senate committee approved the Protecting Cyberspace as a National Asset Act of 2010 (S. 3480). This bill would create a White House office of cyber security, and it contains a vaguely worded section that many interpret as giving the president the authority to effectively shut down the Internet in an emergency. The committee, however, denies that the president would be able to shut down the Internet. A version of the bill, H.R. 5548, has also been introduced in the House.

Saturday, September 24, 2011

Pirate DNS

Some of the main characters in the peer-to-peer file sharing world, led by former Pirate Bay spokesperson Peter Sunde, have announced their intention to launch a competitor to the ICANN-managed DNS system. ICANN is an independent non-profit organization; however, it often complies with the wishes of the U.S. government. The alternative system will feature its own root server, followed by a full naming system. The Pirate Bay is an infamous website known for coordinating illegal file sharing on the Internet, whose servers are constantly on the move across the world while its operators thumb their noses at law enforcement. The ultimate purpose of the so-called "P2P DNS" project, according to Sunde, is to maintain an Internet free from censorship. The alternative root server can accomplish this by providing an alternative system to map familiar domain names to the IP addresses that the Internet uses to route traffic.

The announcement of this alternative DNS system comes immediately on the heels of the Department of Homeland Security seizing a number of domain names linked to websites involved in illegal file sharing. Sites such as DVDcollects were taken over by the DHS, and visitors were greeted by an image explaining the seizure. In all, more than 70 domain names were seized. Surprisingly, The Pirate Bay escaped this round of seizures despite being a high-profile target in the past. Another popular torrent tracking site, Demonoid.com, recently announced that it will be changing its domain name to a .me address. The U.S. does not have jurisdiction over .me as it does over .com.

Saturday, September 3, 2011

Excessive Kaseya Database Size

For years, we had set Kaseya to maintain 30 days of log files for the workstations and servers that we manage.  However, a matter arose that made it really attractive to have access to a larger amount of historical data, as what we wanted to double-check was well beyond 30 days out.  There was nothing to be done in that case, but could this headache be eliminated moving forward?  Kaseya support said that there would be no ill effects from upping this to 365 days or more; it would just cause the ksubscribers database to grow.  And since our database server was barely breaking a sweat, we made the change.

Fast forward about a year, and our previously 35GB ksubscribers database had ballooned to well over 400GB, and everything involving Kaseya was dog slow.  The breaking point came when I could no longer get a reapply schema operation to complete.  It would fail at a different point each time, but never too far off from the previous failure.  Since we were a couple of revisions of Kaseya behind, and a new version was due out soon, I figured it was time to address this.  Dropping the log retention back to 30 days did nothing, so I opened a ticket.

The support tech found that the reapply schema failure occurred because the operation was timing out before completion due to the size of the database.  He was able to increase the timeout so that the operation would finish, and then he reset it back to its original value.  He did not tell me what the value was or where I could find it, and he noted that leaving it too high would ultimately mask serious issues later on.

Once we had a reapply schema completed, I was left with these steps to get the database size back under control.  We were told to only do this during a maintenance window, but since steps 2 and 3 took several days each, and everything ran fine while they were running, we couldn't do so.  Nobody can have a maintenance window on their main business app for a week or more.
  1. Stop IIS and the Kaseya services.
  2. In SQL Management Studio, run the following:
  3. Reindex / rebuild the indexes.  We were provided with a stored procedure to do this; I'll have to dig it up, but a Kaseya support tech should have it.
    • exec [dbo].[sp_rebuildindexes]
  4. Update the DB statistics.  Again, we were provided with a stored procedure to do this.
    • EXEC [dbo].[sp_update_stats] @dbs = N'ksubscribers'
  5. Start IIS and Kaseya services.
  6. Access Kaseya VSA web interface and double check the database size reported under System > Statistics. 
  7. Run Reapply Schema.  


Tuesday, July 26, 2011

Mapping the Internet

One of my computer hobbies is distributed computing.  Distributed computing is a technique that allows a project to give volunteers a piece of software to run on their computers so that they can participate in the project.  This piece of software downloads data commonly referred to as a work unit, uses the volunteer's computer to process the work unit, and then uploads the results to the project.  The volunteer can choose how many computers to run this software on, and they can decide how much time to allocate to it.  Most projects award points for completed work and allow the formation of teams.  Both of these add a level of fun for the volunteer, and they lead some to dedicate great amounts of computing power that they probably wouldn't have purchased, and continued to power, without that carrot.

There are a large number of distributed computing projects active on the Internet.  Folding@Home uses a custom client to conduct research into various biological problems such as Alzheimer's disease.  SETI@Home uses the more common BOINC client to analyze radio signals captured by a large radio telescope for signs of extraterrestrial intelligence.  A user over at [H]ardForum maintains a comprehensive list of active distributed computing projects covering a wide range of research topics.

A project that I feel would be of interest to network and security engineers is The DIMES Project, which is run by Tel Aviv University.  This is an ambitious project looking to "study the structure and topology of the Internet, with the help of a volunteer community."  In a nutshell, the project runs a script on the volunteer's computer that uses ping and traceroute between known hosts on the Internet to discover previously unidentified hosts.  Like other projects, volunteers are able to create a user account to track their contributions, can install the client on as many computers as they please, and can join a team for friendly competition.  What is really interesting about this project is that the client uses little to no CPU; instead, it only consumes Internet bandwidth (stated at about 1KB/s per client).  This allows the client to run simultaneously with clients from other projects without interference.
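The discovery idea behind traceroute-style probing can be modeled without touching the network. The toy model below is my own illustration (the real DIMES client is far more sophisticated, and the router addresses are made up): a probe sent with TTL n expires at the n-th hop, which reveals that hop's address, so probing with increasing TTLs uncovers a path one router at a time.

```python
def traceroute_sim(path, max_ttl=30):
    """Simulate TTL-based path discovery: a probe with TTL n expires at
    the n-th hop and reveals it, until the destination is reached."""
    discovered = []
    for ttl in range(1, max_ttl + 1):
        hop = path[ttl - 1]      # the probe expires here, revealing the hop
        discovered.append(hop)
        if hop == path[-1]:      # destination reached; stop probing
            break
    return discovered

# A made-up path of router addresses between two known hosts
path = ["10.0.0.1", "172.16.4.1", "203.0.113.9", "198.51.100.7"]
print(traceroute_sim(path))  # reveals all four hops, in order
```

Run between many pairs of known hosts by many volunteers, this kind of probing is what lets a project stitch individual paths together into a map of the Internet's topology.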

Tuesday, July 12, 2011