Saturday, December 3, 2011

Building an ACL

The different types of ACLs are first identified by the number used. Standard IP ACLs use numbers in the ranges 1 – 99 and 1300 – 1999. Extended ACLs use numbers in the ranges 100 – 199 and 2000 – 2699. Other types of ACLs, which filter traffic for other protocols such as AppleTalk, DECnet, IPX, and XNS, use other number ranges; however, those are rarely used today. Named ACLs, of course, use text names rather than numbers as identifiers. Other than ensuring that an ACL number falls into the correct range, the numbers have no meaning and can be used as you see fit.

There are two steps in defining an ACL. First, you enter the series of ACEs that make up the ACL. Second, you apply the ACL to an interface. For a standard ACL, the syntax is as follows:

access-list 10 permit 192.168.1.0 0.0.0.255
access-list 10 permit 192.168.2.0 0.0.0.255
access-list 20 deny 10.0.0.0 0.255.255.255

This simple ACL allows all traffic from hosts with IP addresses in the 192.168.1.0/24 or 192.168.2.0/24 networks. The “access-list 10” portion of each statement signifies that it belongs to the ACL designated as 10; the third line defines a separate ACL, 20, which denies traffic from the 10.0.0.0/8 network. An extended ACL looks like this:

access-list 100 permit tcp 192.168.1.0 0.0.0.255 192.168.2.0 0.0.0.255 eq www

This extended ACL permits TCP traffic originating from the 192.168.1.0/24 network with a destination in the 192.168.2.0/24 network and a destination port of 80 (“eq www” means “equals www,” or port 80). In addition to “eq” for equals, we can also use “lt” for less than, “gt” for greater than, or “range” to specify a range of ports. To apply an ACL, enter the configuration of the interface (or line) and reference the ACL like so:

interface Serial0/1
 ip access-group 10 out
line con 0
 access-class 15 in

This applies ACL 10 to the Serial0/1 interface and inspects traffic moving in the outbound direction through that interface, while ACL 15 is applied to inbound connections on the console line (note that interfaces use “ip access-group” while lines use “access-class”). An important thing to note here is that ACLs use wildcard masks rather than the more traditional subnet masks used elsewhere when configuring a router. An ACL can be applied to any interface, or to any line (console, aux, or vty).
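
As a quick sketch of that relationship (the addresses are just examples), the wildcard mask is the bitwise inverse of the corresponding subnet mask:

access-list 30 permit 192.168.1.0 0.0.0.255
access-list 30 permit host 192.168.1.10

The first statement matches 192.168.1.0/24 (subnet mask 255.255.255.0, inverted to the wildcard 0.0.0.255), while the second uses the “host” keyword, which is shorthand for a 0.0.0.0 wildcard and matches a single address.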

There are a number of mnemonics used by Cisco IOS to specify ports. You can use the actual port number when configuring the router, however the mnemonic will still be shown in the running configuration and startup configuration. Some mnemonics that you will often see include:
  • bootpc
  • ftp
  • isakmp
  • lpd
  • ntp
  • rip
  • ssh
  • telnet
  • www
The keyword “any” can also be used in place of a source or destination address and wildcard mask to match any host.
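
As a short sketch that ties these together (the ACL number and addresses are arbitrary examples), ports can be referenced by mnemonic or by number, and “any” stands in for an entire address field:

access-list 110 permit tcp any 192.168.2.0 0.0.0.255 eq telnet
access-list 110 permit tcp any 192.168.2.0 0.0.0.255 eq 22
access-list 110 permit udp any 192.168.2.0 0.0.0.255 range 16384 32767
access-list 110 deny ip any any

Whichever form is entered, IOS shows the mnemonic in the running configuration wherever one exists for that port number.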

A router can have one ACL per interface, per direction, and per protocol. What this means is that each interface may have one ACL in each direction for each protocol that the router supports. For example, in a router that supports IP, IPX, and AppleTalk, each interface may have an ACL for inbound IP, outbound IP, inbound IPX, outbound IPX, inbound AppleTalk, and outbound AppleTalk. For a router that supports those three protocols and has three interfaces, that router can have 18 active and applied ACLs. An administrator can have as many ACLs defined as memory permits; however, only the previously specified 18 may be applied and active.
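
As a minimal sketch of this rule (the interface name and ACL numbers are arbitrary), a single interface can carry one IP ACL inbound and a different one outbound at the same time:

interface FastEthernet0/0
 ip access-group 101 in
 ip access-group 102 out

Attempting to apply a second inbound IP ACL to the same interface simply replaces the first.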

Saturday, November 12, 2011

Free Rainbow Tables

I previously blogged about a distributed computing project called The DIMES Project.  The purpose of that project is to use participants' computers to map the Internet with a piece of client software that uses pings and traceroutes to known hosts to discover new hosts and paths.  As stated in that post, there are dozens, if not hundreds, of such projects, and this post will cover another one, DistRTGen, which is part of the Free Rainbow Tables project.

A rainbow table is a table of precomputed hashes for known inputs using a common cryptographic hash algorithm.  Tables are used in recovering a plaintext password, for good or for evil purposes.  Rainbow tables are built by taking known input values, running the hash algorithm on them, and then storing the plaintext and resulting hash values together in the table.  This allows the user to take a hash, compare it against known hashes already in the table, and then have the plaintext input for that hash, which is commonly a plaintext password.

The downsides to this approach are obvious.  Running the hash algorithm takes CPU time.  While one operation may not be significant, running it for every possible input will be prohibitively expensive for a user with only their own computer(s) available for computation.  And while the input and output of one operation is not a significant amount of data, storing the results of every operation will likely be prohibitively large for a single user.

The DistRTGen project tackles this like any other distributed computing project.  Users download the BOINC client, configure it to connect to the project with their user account, and allow it to run in the background.  The client requests work units, computes them, and then uploads the results.  The project allows the user to run work units simultaneously on as many of their available CPU and GPU cores as they choose.  DistRTGen is building tables for the LM, NTLM, MD5, and MYSQLSHA1 algorithms.  MYSQLSHA1 hashes are double SHA-1 hashes used for MySQL authentication.

Users are able to download the current rainbow tables via BitTorrent, where you can connect to the torrents for any or all of the data.  The project can also sell you a hard drive filled with the current tables.  There are currently 9,741GB of data, so it will take some time to download the torrent(s), and it will take a few days for them to prepare a hard drive for shipment.  They also have programs available on their downloads page to convert the data to other formats.

Saturday, October 15, 2011

Seizing Internet Domains

Homework Assignment from the past.

The question of who, if anyone, had authority to seize the domain name of a questionable website first came to the forefront two years ago when the Commonwealth of Kentucky attempted to take control of 141 domain names belonging to websites associated with online gambling. While most forms of online gambling are currently illegal in the United States, it was quite controversial when a county circuit judge gave the state the green light to seize control of these sites. The major question about this was the motive. In the state of Kentucky as of 2005, 96,000 jobs were in some way related to the horse racing industry. It is fair to ask whether this was simply an attempt to shut down illegal websites, or a state looking after its own bottom line.

This issue came to the forefront again recently with what has been dubbed in the media as the “Internet Kill Switch.” This past June, a Senate committee approved the Protecting Cyberspace as a National Asset Act of 2010 (S. 3480). This bill would create a White House office of cybersecurity and contains a vaguely worded section that many interpret as giving the president the authority to effectively shut down the Internet in an emergency. The committee, however, denies that the president would be able to shut down the Internet. A version of the bill, H.R. 5548, has also been introduced in the House.

Saturday, October 1, 2011

No Good Deed Goes Unpunished

Earlier today I was helping someone troubleshoot a problem with their computer.  In order to pinpoint a display issue, I installed their video card into one of my computers to verify it was the problem.  Since the box I have configured as my file server is the only one of my computers with a PCI-E slot, I installed the video card in question into that box.  The card was indeed bad, and I thought nothing of it again until I noticed that one of the shares hosted on that box was no longer accessible over the network.

On this box I have two 1.5TB hard drives, each divided into a 500GB and a 1000GB partition.  The two 500GB partitions are mirrored and host what I consider to be critical data, such as the digital family photo album (you know, the things the wife would literally kill me over if they disappeared).  The two 1000GB partitions are a spanned volume and host data considered to be non-critical, or in other words, things I could pull out the CDs for and/or hit the web and re-download.  The critical data partition is the one I could still access, and the server said it was a degraded mirror.  Computer Management said one of the drives was unavailable.  And for what it's worth, the computer is running Server 2008 R2.

What had happened was that when I put my video card back into the computer, the power cable for the missing hard drive got caught by the corner of the video card and was pulled out of the drive.  Easy fix, right?  Not this time.  I reconnected power to the drive and verified that it was found by the BIOS before proceeding.  However, when I booted into Windows, the drive showed up as a foreign volume, and the two partitions were not recognized.  I right-clicked on the partitions and selected Reactivate Volume, but no dice.  Windows responded with "The plex is missing."

So I right-clicked on the unrecognized volume, and my only option was to import the foreign volume.  This struck me as a very scary option, since it was probably going to make changes to the disk, but it was my only choice at the moment.  Punching "import foreign volume" and "the plex is missing" into Google, I found a couple of posts on various forums with the exact same problem, stating that importing the foreign volume indeed fixed it.  So I held my breath and proceeded, and Windows immediately brought the spanned volume back online (the non-critical data), along with two identical volumes of the critical data rather than one mirrored volume.  So I deleted one of the identical volumes and re-mirrored it.  The mirror is re-synchronizing as I type this, and everything is now accessible again.

Moral of the story: RAID is not a backup.  RAID 1 is not a backup.  But in this instance, it saved me from having to either pull out a couple dozen DVD-Rs or copy everything back across the network tonight.  Not bad at all, since I had six sugared-up kids running wild through the house during all of this.  All I really wanted was to stream a video from the server to the PlayStation 3.

Saturday, September 24, 2011

Pirate DNS

Some of the main characters in the peer-to-peer file sharing world, led by former Pirate Bay spokesperson Peter Sunde, have announced their intention to launch a competitor to the ICANN-managed DNS system. ICANN is an independent non-profit organization; however, it often complies with the wishes of the U.S. government. The alternative system will feature its own root server followed by a full naming system. The Pirate Bay is an infamous website known for the coordination of illegal file sharing on the Internet, whose servers are constantly on the move across the world while its operators thumb their noses at law enforcement. The ultimate purpose of the so-called “P2P DNS” project is to maintain an Internet free from censorship, according to Sunde. The alternative root server can accomplish this by providing an alternative system to map familiar domain names such as google.com to the IP addresses that the Internet uses to route traffic.

The announcement of this alternative DNS system comes immediately on the heels of the Department of Homeland Security seizing a number of domain names linked to websites associated with illegal file sharing.  Sites such as Torrent-finder.com, DVDcollects, and TorrentFreak.com were taken over by the DHS, and visitors were greeted by an image explaining the seizure.  In all, more than 70 domain names were seized.  Surprisingly, The Pirate Bay escaped this round of seizures despite being a high-profile target in the past.  Another popular torrent tracking site, Demonoid.com, recently announced that it will be changing its domain name to Demonoid.me.  The U.S. does not have jurisdiction over .me as it does over .com.

Saturday, September 3, 2011

Excessive Kaseya Database Size

For years, we had set Kaseya to maintain 30 days of log files for the workstations and servers that we manage.  However, a matter arose that made it very attractive to have a larger amount of historical data available, as what we wanted to double-check was well beyond 30 days out.  There was nothing that could be done in that case, but could this headache be eliminated moving forward?  Kaseya support said that there would be no ill effects from upping this to 365 days or more; it would just cause the ksubscribers database to grow.  And since our database server was barely breaking a sweat, we made the change.

Fast forward about a year, and our previously 35GB ksubscribers database had ballooned to well over 400GB, and everything involving Kaseya was dog slow.  The breaking point came when I could no longer get a reapply schema operation to complete.  It would fail at a different point each time, but never too far from the previous failure.  Since we were a couple of revisions of Kaseya behind, and a new version was due out soon, I figured it was time to address this.  Dropping the log retention back to 30 days did nothing, so I opened a ticket.

The support tech found that the reapply schema failure occurred because the operation was timing out before completion due to the size of the database.  He was able to increase the timer so that it would finish, and then reset it back to its original value.  He did not tell me what the value was or where I could find it, noting that leaving it set too high would ultimately mask serious issues later on.

Once we had a reapply schema completed, I was left with these steps to get the database size back under control.  We were told to only do this during a maintenance window, but since steps 2 and 3 took several days each, and everything ran fine while they were running, we couldn't do so.  Nobody can have a maintenance window on their main business app for a week or more.
  1. Stop IIS and the Kaseya services.
  2. In SQL Management Studio, run the following:
    • DBCC SHRINKDATABASE('KSUBSCRIBERS')
  3. Reindex / rebuild the indexes.  We were provided with a stored procedure to do this; I'll have to dig it up, but a Kaseya support tech should have it.
    • exec [dbo].[sp_rebuildindexes]
  4. Update the DB statistics.  Again, we were provided with a stored procedure to do this. 
    • EXEC [dbo].[sp_update_stats] @dbs = N'ksubscribers'
  5. Start IIS and Kaseya services.
  6. Access the Kaseya VSA web interface and double-check the database size reported under System > Statistics.
  7. Run Reapply Schema.  


Saturday, August 20, 2011

Who Owns Your Identity?

The following is the final paper written for my Internet Law class back in 2010.  Still relevant?

Social networking sites are becoming more and more a part of our lives. We use sites such as Facebook and MySpace to keep in touch with friends and family all over the world. We update our statuses with what is going on in our lives, post our latest vacation pictures, and comment on the statuses and pictures of our friends. When there are no more updates to read or comment on, we can play games such as Mafia Wars and Farmville with our friends and family or against complete strangers all over the world, in real time. Sites such as LinkedIn provide many of the same features, but with a more professional theme. Rather than friends and family, LinkedIn links us with our coworkers and other professionals in our industry. There are other sites such as Gawker and LiveJournal, known commonly as blogs, where we submit longer and more informative posts on just about any topic imaginable. And then there are sites such as Classmates.com which let us look up friends from school that we haven't heard from in years.

Saturday, August 13, 2011

Which WIC

I've seen the question asked a number of times: which WIC modules should I buy for my routers?  If you have a fixed-function router such as those in the 2500 line (except for the 2524 and 2525, but that's a different story), it's simple: you don't.  If you have a modular router, such as the 1700, 2600, 3600, 2800, and other lines, you have a number of choices.  And if your router has an NM slot, then you have another set of options available.  Here I'll present the most obvious options and weigh some of their pros and cons.

WIC-1T

This module provides one serial interface via a DB-60 connector.  If you're utilizing an NM-4A/S or NM-8A/S elsewhere, or you have 1600 or 2500 series routers with built-in serial interfaces, this WIC uses the same connector, and this will allow you to standardize on a single cable type for your lab.  I use WIC-1Ts for this reason; I don't want the added expense of having to buy all the different cables.  These cables (DB-60 to DB-60) can be purchased from sites such as Monoprice for $5 per cable.

This module also has the highest per-interface cost.


WIC-2T or WIC-2A/S

For the purposes of a study lab, these WICs are identical.  The only difference is the top speed at which they operate, and in the lab that doesn't really matter.  These modules provide two serial interfaces via the Smart Serial connector.  A single WIC-2T or WIC-2A/S normally costs less than two WIC-1T modules.

However, if you're utilizing WIC-1T, NM-4A/S, or NM-8A/S modules elsewhere in your lab, you'll probably have to stock DB-60 to Smart Serial cables as well as Smart Serial to Smart Serial and/or DB-60 to DB-60 cables.  Any cable with one or two Smart Serial connectors is going to cost more than a DB-60 to DB-60 cable.  And finally, some older router models, such as the 1600 series, cannot use these.

WIC-1DSU-T1

These are the absolute cheapest modules you're going to come across that you can actually use in your lab.  Many times a router you pick up off of eBay will have one of these with it, and otherwise they can be had for as little as $5.  If you have the capability to make your own cables, you won't find a cheaper cable for your lab.  They use the same cable and connectors as standard Ethernet but utilize a different pin-out.

A lot of people claim to have found the T1 crossover cables necessary to connect these modules dirt cheap, but I've never seen them priced reasonably, so YMMV.  If you use this module, there is no way that I am aware of to connect it to any other type of serial interface.

NM-4A/S or NM-8A/S

These modules provide the highest port density per module, but not every router has an NM slot.  If you use 1700 or 1800 series routers, for example, then you're out of luck.  If you have a router that does have an NM slot, then one of these will allow you to use that router as a pretty cost-effective Frame Relay switch, as sketched below.  These modules use the DB-60 connector.
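
As a minimal sketch of that Frame Relay switch idea (the slot and port numbers, clock rate, and DLCIs here are assumptions for a lab), two serial ports on the NM can be cross-connected like so:

frame-relay switching
interface Serial1/0
 encapsulation frame-relay
 clock rate 64000
 frame-relay intf-type dce
 frame-relay route 102 interface Serial1/1 201
interface Serial1/1
 encapsulation frame-relay
 clock rate 64000
 frame-relay intf-type dce
 frame-relay route 201 interface Serial1/0 102

With that in place, the routers plugged into Serial1/0 and Serial1/1 use DLCIs 102 and 201 respectively and can bring up a Frame Relay PVC between them.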

BRI-S/T, NM-4B-S/T, or other ISDN modules

You cannot connect these directly to each other, or directly to any other module.  If you already have an ISDN simulator, then you can use them.  Otherwise, an ISDN simulator will run you at least $100, which would be better spent on routers or switches.



Saturday, August 6, 2011

TEMPEST and SIGINT

Here's another classic from the vault.  A paper on the relationship between TEMPEST and SIGINT that I wrote for a class.

TEMPEST is a codename used by the United States military which originally referred to a classified program that studied emission security (or EMSEC) and attempted to develop technologies and standards to be used in combating these emissions. This work can be traced back to World War I, when German troops were able to intercept and listen in on enemy voice transmissions from the ground due to the poorly insulated cabling used by Allied phone lines. Like many classified military projects, TEMPEST is based on a random dictionary word rather than being an actual acronym. Despite the origin of the word, many attempts at fitting the word into an acronym have been made, the most commonly used being Transient Electromagnetic Pulse Surveillance Technology.

The first test standards were defined in “NAG1A” and “FS222” in the 1950s. In 1970, a revision titled “National Communications Security Information Memorandum 5100: Compromising Emanations Laboratory Test Standard, Electromagnetics” was created, followed by “NAC-SIM 5100A” in 1981, which set the requirements. National Communications Security Committee Directive 4 currently sets the standards for TEMPEST in the United States. Other nations and organizations have similar documents defining their standards and requirements. For example, the NATO standard is defined by “AMSG 720B.” One thing that these and other documents relating to the TEMPEST program have in common is that they are all classified.

Sensitive information systems require intensive metallic shielding to prevent emissions from escaping. Individual devices, interconnecting cables and even entire rooms or buildings must be properly shielded. Within this shielded environment, there is a red/black separation employed. Red equipment is used to process confidential data, while black equipment is used to process unclassified data. Red equipment must remain isolated from black equipment.

The TEMPEST standards define three categories of approved devices. Type 1 is the most secure, but is only available to the US government and contractors that it approves. Type 2 is less secure, but its use still requires government approval. Type 3 is approved for commercial use by entities outside of the government. There is also a newer standard, known as ZONE, which is less secure than Type 3 equipment, but is still effective and is much more affordable.

SIGINT, or signals intelligence, is claimed by the National Security Agency (more commonly referred to as the NSA) to be its exclusive domain. It is the type of intelligence that deals specifically with transmissions from the voice communications, radars, weapons systems, and the like of enemies of the United States. The NSA states that the mission of SIGINT is limited to the gathering of information about foreign nations, groups, or individuals, as well as terrorists that operate internationally. The NSA lists its customers for this intelligence as “all departments and levels of the United States Executive Branch.” While the NSA claims exclusivity to SIGINT, every part of the government whose role is driven by intelligence, from the FBI to the Navy SEALs, utilizes SIGINT in function if not in title.

SIGINT can also involve preventing communications. For example, Egypt shut off all Internet access within its borders earlier this year. The global routing table, used to direct all traffic across the Internet, had nearly every route to Egypt removed. A month later, it was reported that satellite phone communications handled by Thuraya Satellite Telecommunications Co. were being jammed within Libya. This was in direct response to protest and unrest similar to that in Egypt. Similar are China's continual attempts to censor the Internet and control what comes over the wire into its borders.

SIGINT is related to TEMPEST and EMSEC in that they fall on the opposite sides of a transmission. The organization sending and receiving the transmission utilizes TEMPEST/EMSEC techniques to secure the transmission, while the opposition uses SIGINT technologies in order to overhear the transmission. In Information Assurance, we work to preserve the confidentiality, integrity, and availability of data. TEMPEST/EMSEC is another method of ensuring the confidentiality of data. It is a counter to SIGINT, which attempts to violate the confidentiality of data. While these concepts began as government projects and most of what they’ve learned remains classified, the theory behind them can be applied anywhere that sensitive data is stored, processed or transmitted.

Tuesday, July 26, 2011

Mapping the Internet

One of my computer hobbies is distributed computing.  Distributed computing is a technique that allows a project to give volunteers a piece of software to run on their computers so that they can participate in the project.  This piece of software downloads data commonly referred to as a work unit, uses the volunteer's computer to process the work unit, and then uploads the results to the project.  The volunteer can choose how many computers to run this software on, and they can decide how much time to allocate to it.  Most projects award points for completed work and allow the formation of teams.  Both of these add a level of fun for the volunteer and lead some to dedicate great amounts of computing power that they probably wouldn't have purchased, and continued to power, without that carrot.

There are a large number of distributed computing projects active on the Internet.  Folding@Home uses a custom client to conduct research in various biological areas such as Alzheimer's disease.  Seti@Home uses the more common BOINC client to analyze radio signals captured by a large radio telescope for signs of extraterrestrial intelligence.  A user over at [H]ardForum maintains a comprehensive list of active distributed computing projects covering a wide range of research topics.

A project that I feel would be of interest to network and security engineers is The DIMES Project, which is run by Tel Aviv University.  This is an ambitious project looking to "study the structure and topology of the Internet, with the help of a volunteer community."  In a nutshell, the project runs a small application on the volunteer's computer that uses ping and traceroute to known hosts on the Internet to discover previously unidentified hosts.  Like other projects, volunteers are able to create a user account to track their contributions, can install the client on as many computers as they please, and can join a team for friendly competition.  What is really interesting about this project is that the client uses little to no CPU; instead, it only consumes Internet bandwidth (stated at about 1KB/s per client).  This allows the client to run simultaneously with clients from other projects without interference.

Wednesday, July 20, 2011

Introduction to ACLs

In any network device with the responsibility of moving data, the ability to inspect and filter that data is absolutely critical. In routers running Cisco IOS software, this inspection and filtering is conducted by an Access Control List (hereafter referred to as an ACL). Within an ACL, entries known as Access Control Entries (hereafter referred to as ACEs) describe which traffic to permit through and which traffic to deny. In this paper, I will assume that you already have a basic understanding of IP addressing, VLSM, CIDR, subnet masks, and wildcard masks. These building blocks are elementary topics in IP networking, but are crucial to the understanding of ACLs. I will also assume you have a basic knowledge of how to configure a router or switch running IOS software. The commands used may appear familiar to someone knowledgeable in Cisco PIX or ASA firewalls, but there are differences.

There are many different types of ACLs used in Cisco routers. The most basic are standard ACLs. These simple ACLs can only filter traffic based on the source IP address of the packet. Building on the standard ACL is the extended ACL. These follow a similar format, but allow filtering based on the source and destination IP addresses and, optionally, the source and destination port numbers of the packet. However, with additional functionality comes additional cost in terms of router memory and processor utilization. Named ACLs are simply standard or extended ACLs which use names rather than numbers as identification and allow additional features such as line numbers and editing capability. Reflexive ACLs allow a router to inspect packets based on a basic session table, allowing the router to act as a rudimentary stateful firewall. Time-based ACLs allow permitting or denying traffic based on the time of day. And finally, Cisco routers running IOS version 12.0.5T and higher support Context-Based Access Control (better known as CBAC), which extends traditional ACLs to allow a router to provide full stateful packet inspection. While the more advanced ACL types are quite useful for a network administrator, the focus of this paper will be standard and extended ACLs.
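
As a brief illustration of that named syntax (the name and addresses are arbitrary examples), a named extended ACL is built in its own configuration sub-mode, which is what makes the line numbering and editing features possible:

ip access-list extended BRANCH-WEB
 permit tcp 192.168.1.0 0.0.0.255 192.168.2.0 0.0.0.255 eq www
 deny ip any any log

It is applied just like a numbered ACL, for example with “ip access-group BRANCH-WEB in” under the interface configuration.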

Tuesday, July 12, 2011