Firewalls are commonly known as security devices - but I bet you didn't know that they can do much more than that! Some of the newer firewalls being developed can increase your network's performance through caching or load balancing.
If you just can’t seem to get enough speed out of your network, your firewall can change all of that!
Increase your Network Performance
Networks with a common firewall have one simple problem: there is a bottleneck at the firewall. Multiple computers might be trying to access data at once, but all of that data has to flow through the one firewall to reach the outside world - see the problem? To make things worse, firewalls now come with advanced filtering and logging features that can really put a dent in your system performance. What is a system administrator to do?
Improving Performance with Data Caching
You've likely already had experience with caching. Try loading a webpage that has a lot of images - it takes a good amount of time to render everything, doesn't it? Now close your browser, and go to the same webpage. It magically loads a lot faster! Of course, if it doesn't, you probably don't have caching enabled in your web browser.
Data caching is very similar to this. In the example, the images are stored in a temporary folder. When the webpage is viewed again, your browser remembers where it stored those files, and they are automatically ready for display. With data caching, you aren't necessarily using a web browser to do the caching for you - your very own firewall takes care of it! There are several types of caching for different situations.
Active caching is a technique that automatically re-downloads cached content when it expires. After a certain period of time, the content in your temporary folder expires and is deleted. With active caching, the content is simply downloaded again as soon as this happens. This is the direct opposite of passive caching, which waits until a user requests the content again before re-downloading it.
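The difference between the two boils down to when the re-download happens. Here is a minimal sketch of both behaviors in Python - the class name, structure, and `fetch` callback are purely illustrative, not taken from any real firewall product:

```python
import time

class TTLCache:
    """Content cache whose expiry behaviour is either active or passive.
    This is an illustrative sketch, not a real firewall implementation."""

    def __init__(self, ttl_seconds, fetch, active=False):
        self.ttl = ttl_seconds
        self.fetch = fetch      # callback that downloads fresh content
        self.active = active    # True = active caching, False = passive
        self.store = {}         # url -> (content, fetched_at)

    def _expired(self, fetched_at):
        return time.time() - fetched_at >= self.ttl

    def sweep(self):
        """Run periodically. Active caching re-downloads expired entries
        right away; passive caching simply drops them."""
        for url, (content, fetched_at) in list(self.store.items()):
            if self._expired(fetched_at):
                if self.active:
                    self.store[url] = (self.fetch(url), time.time())
                else:
                    del self.store[url]

    def get(self, url):
        entry = self.store.get(url)
        if entry is None or self._expired(entry[1]):
            # Passive caching pays the download cost here, on the next visit.
            self.store[url] = (self.fetch(url), time.time())
        return self.store[url][0]
```

The trade-off is visible in `sweep()`: active caching spends bandwidth up front so users never wait, while passive caching saves bandwidth on content nobody asks for again.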
Hierarchical caching is for those who are serious about caching. It uses one central firewall and many smaller firewalls that share cache information. This is obviously suited for big networks with many computers. With the central firewall constantly adding cache data, this relieves the network from a world of stress. Distributive caching is very similar, except that it uses several firewalls that work together and draw from each other's caches - instead of relying on a central firewall. Distributive caching is very popular, and thus has two well-known caching mechanisms: Internet Cache Protocol, or ICP, and Cache Array Routing Protocol, or CARP.
While we won't go into detail about these mechanisms, it is good to know about them in case a network is experiencing a high load. CARP is the more advanced of the two, since it uses a hashing function to determine which cache server holds the requested content. ICP, on the other hand, simply broadcasts a request to the other cache servers and asks each one whether it has the content. In both mechanisms, the content is downloaded from the original source if no cache server has it.
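To make the contrast concrete, here is a rough sketch of both selection strategies. The CARP function below uses generic rendezvous-style hashing, not the exact hash defined by the CARP specification, and the server names and `has_cached` callback are hypothetical stand-ins:

```python
import hashlib

def carp_pick(url, servers):
    """CARP-style selection sketch: combine the URL with each server name,
    hash the pair, and pick the server with the highest score. Every
    firewall computes the same answer, so no query traffic is needed.
    (Illustrative hashing only - not the hash from the CARP spec.)"""
    def score(server):
        return hashlib.md5((server + url).encode()).hexdigest()
    return max(servers, key=score)

def icp_pick(url, servers, has_cached):
    """ICP-style selection sketch: query each peer and use the first one
    that reports a hit; None means fetch from the original source.
    `has_cached(server, url)` stands in for the network round-trip."""
    for server in servers:
        if has_cached(server, url):
            return server
    return None
```

Notice that `carp_pick` needs no communication at all - the math decides - while `icp_pick` costs a round of queries on every request, which is exactly why CARP scales better.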
Improving Performance with Load Balancing
Load balancing is a very straightforward term, and it means just what it implies. With load balancing, more than one firewall is used to share the load of requests from users. Since more than one firewall is being used, the processing time is divided between the firewalls, which makes a noticeable difference in high-load situations. This is also great in case of failure - if one firewall becomes unavailable for whatever reason, the remaining firewalls can take over. Compare that to a single firewall that fails and takes the entire network down with it!
Just like the caching methods, there are multiple mechanisms for deploying a load-balanced environment. First we have round robin DNS. As the name implies, this technique balances loads at the DNS server. It works by registering multiple IP addresses for one server name. Each user request is answered with a different IP address than the one before it. It is common to register at least three IP addresses for this technique, so the load can be balanced between three different sources. If one of the IP addresses becomes unavailable, this method will still route users to it - a flaw that software load balancing and hardware load balancing don't have.
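The rotation itself is trivial, which is both round robin DNS's appeal and its weakness. A minimal sketch (the addresses below are from the documentation-only 192.0.2.0/24 range, and the class is purely illustrative):

```python
from itertools import cycle

class RoundRobinDNS:
    """Round-robin DNS sketch: each lookup returns the next address in
    the list. Note the flaw described above: a dead address keeps being
    handed out, because plain DNS rotation does no health checking."""

    def __init__(self, addresses):
        self._rotation = cycle(addresses)

    def resolve(self):
        return next(self._rotation)
```

If `192.0.2.2` went down, this resolver would still hand it to every third user - there is simply no feedback path from the servers to the rotation.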
Software load balancing is the most commonly used technique. It is more configurable than round robin DNS or hardware load balancing. Software is generally slower than its hardware counterpart, but much cheaper. Hardware load balancing is much more expensive, but it routes traffic at the circuit level - which makes the process much faster. Usually, software is the preferred method. Hardware tends to be the favorite of larger networks that can actually afford it.
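What does a software balancer actually do that DNS rotation can't? It can watch the firewalls and react. Here is one possible sketch, assuming a simple least-connections policy with health tracking - the class, names, and policy are illustrative choices, not a description of any particular product:

```python
class SoftwareBalancer:
    """Software load-balancing sketch: track active connections per
    firewall, send each new request to the least-loaded healthy one,
    and skip firewalls marked down - the failover benefit that round
    robin DNS lacks. Purely illustrative."""

    def __init__(self, firewalls):
        self.load = {fw: 0 for fw in firewalls}   # active connections
        self.healthy = set(firewalls)

    def mark_down(self, fw):
        """Called when a health check fails; fw stops receiving traffic."""
        self.healthy.discard(fw)

    def assign(self):
        candidates = [fw for fw in self.load if fw in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy firewalls available")
        choice = min(candidates, key=lambda fw: self.load[fw])
        self.load[choice] += 1
        return choice
```

This flexibility - custom policies, health checks, graceful failover - is exactly the configurability that makes the software approach so popular, at the cost of doing all that bookkeeping on a general-purpose CPU.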
And the Winner is…
What's the best route to go for, given the range of options we have? Most of the options discussed are for medium to large networks, and involve multiple servers and firewalls. The home user will most likely only be able to take advantage of the caching features firewalls have to offer.
As far as caching goes, home users will mainly have to decide between passive and active caching. Most of the other options are more oriented towards larger networks.
Performance, speed, and efficiency - these three things are what system administrators strive for. Most people will not associate a firewall with being a performance accelerator, but many of today's technologies are starting to merge. This holds true for other security technologies as well - such as intrusion detection and encryption.
In the next section, we will take a closer look at more advanced applications of firewalls - with an emphasis on security. You may have seen the prices at which intrusion detection systems, or IDSs, sell. What's stopping you from skipping the thousands of dollars one of those costs, and spending much less on a firewall instead? Get to the next section to find out!