Introduction

As you may or may not know, the University of KwaZulu-Natal has a rather strange server setup. They currently have three proxy servers (that I know of), and at any given time one or more of these may be down. As you may or may not also know, I run a Linux box with a whole bunch of apps and scripts that like to look at the internet. Consequently, every time I have to switch proxy I have to change the proxy setting in several different places, including in every text console I might use for http stuff.

This became very irritating very quickly. So I spent a couple of days fixing it.

The new solution

Since I wrote this, someone moderately clueful has replaced the campus proxies with squid proxies that don't require authentication but do impose a bandwidth limit. I've also figured out how to round-robin the various proxies to get around this limit to some extent. Here's the new squid.conf snippet:

cache_peer dbnproxy1.ukzn.ac.za parent 8080 0 no-query proxy-only round-robin
cache_peer dbnproxy2.ukzn.ac.za parent 8080 0 no-query proxy-only round-robin
cache_peer pmbproxy1.ukzn.ac.za parent 8080 0 no-query proxy-only round-robin
never_direct allow all
never_direct allow CONNECT

acl myclients src 127.0.0.1
http_access allow myclients

acl ukzn src 146.230.0.0/255.255.0.0
no_cache deny ukzn

acl ukznlocal dst 146.230.0.0/255.255.0.0
always_direct allow ukznlocal

In the last couple of lines I also told squid not to hit the campus proxies for anything local. This means I don't have to worry about exclude lists in all my clients, and local traffic (testing new versions of this site, for example) doesn't get counted against my bandwidth limit.
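As a concrete example of what the clients end up looking like, here's roughly what I point my shell tools at — a minimal sketch, assuming squid is listening on its default port of 3128 on localhost (adjust to whatever http_port you've actually set):

# point command-line tools (wget, curl and friends) at the local squid
export http_proxy="http://127.0.0.1:3128"
export ftp_proxy="http://127.0.0.1:3128"
# no no_proxy/exclude list needed here: the local squid already goes
# direct for 146.230.0.0/16 thanks to the always_direct rule above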

The old solution

Since I don't have access to the campus proxy servers (except as a user) I had to do something locally. I eventually decided on setting up squid as a proxy on my own box, pointing everything at that and pointing squid at whichever proxy I felt like using.

Unfortunately ICT (as they now like to call themselves, not sure what it's supposed to stand for) have decided that Novell BorderManager is the way to go, so I can't automate authentication and whatever, but I can tell squid to just get everything through them.

I took a default squid.conf and added the following to the bottom:

cache_peer ::PROXY:: parent 8080 0 no-query proxy-only default
never_direct allow all
never_direct allow CONNECT

acl myclients src 127.0.0.1
http_access allow myclients

acl ukzn src 146.230.0.0/255.255.0.0
no_cache deny ukzn

You may be interested in the ::PROXY:: buried in there. What you see above is actually a squid.conf.template file, which the following script uses to generate squid.conf:

#!/bin/bash

E_NOARGS=65

if [ -z "$1" ]
then
    echo "Usage: `basename $0` <proxy>"
    exit $E_NOARGS
fi

case "$1" in
    "und") proxy="proxy.und.ac.za";;
    "und2") proxy="proxy2.und.ac.za";;
    "unp") proxy="proxy.unp.ac.za";;
    *) proxy="$1";;
esac

conftemplate="/etc/squid/squid.conf.template"
conf="/etc/squid/squid.conf"

echo "switching proxy to $proxy"
sed "s/::PROXY::/$proxy/g" < $conftemplate > $conf
/etc/init.d/squid reload

The result of this is that when the proxy I'm using breaks, all I have to do is run switchproxy <foo> (as root, of course) and everything goes through the new proxy.
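For illustration, a quick run-through as root (assuming the script is saved as switchproxy somewhere in root's PATH — where it lives is up to you). The second line of each pair is the script's own echo output:

switchproxy und2
switching proxy to proxy2.und.ac.za

switchproxy proxy.unp.ac.za
switching proxy to proxy.unp.ac.za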

Shiny? I like to think so. Oh, I could probably have done something with routing tables and things, but this gives me the bonus of a shared local cache, and it's more configurable.

Update: For some reason, authenticating against the campus proxy causes squid to return a 403 Forbidden error. I'm still trying to figure out why, and how to fix it.