Stateless VMware ESXi 3.5 on an HP c7000 Blade Server…

NOTE:  This is only an overview.  Due to the detailed nature of this project, I will break it up over several more-focused articles over time for easier reference.

Well, despite my rather negative impression of this year’s VMworld conference, it still really paid off.  There I learned about stateless ESX deployment.  Using what I learned there, and after a couple months of trial and error, I was able to build a highly robust VMware environment in my lab, fully managed and licensed, using the midwife scripts I modified for this effort.  And configuration is hands-free.

Here are the system components:

  • SERVER – HP c7000 Blade Enclosure with sixteen BL465c blades, two 4 Gb FC modules, and four VC Enet modules
  • Each blade has two dual-core AMD CPUs, 16 GB RAM, two 72 GB SAS drives (hardware RAID-1), two embedded gig NICs, and a mezzanine card with two more gig NICs/iSCSI initiators and two FC HBAs
  • NETWORK – Cisco 6509 with two SUP 720 cards, two 48 port LC Gig-E fiber cards, and four 48 port gig copper cards
  • MANAGEMENT – Dell 1850 with two 146 GB SAS drives (hardware RAID-1) for management and boot services
  • STORAGE – Scavenged proof-of-concept totally ghetto Dell Optiplex desktop with four internal 1.5 TB SATA drives (software RAID-10 formatted with tuned XFS) providing 3 TB of NFS shared storage
  • Scavenged HP IP-KVM box for OOB-management of the two Dells

Here are the steps I took:

  1. First I had to update all the firmware on the blade server.  This includes the two OA cards for the Onboard Administrator, the Virtual Connect software, the iLO2 software for each blade, the BIOS on each blade, and the Power Management Controller firmware.  There is a particular order this is done in, and it is not easy, but it really needs to be done.  The fixes that come with these updates are often vital to success.  Overall, I spent a week researching and updating.  I set all the blades to boot via PXE.
  2. Next, I built the storage server.  I really had no choice – nothing was available but a Dell Optiplex desktop.  It had four internal SATA ports available, and room for four 1 GB RAM modules.  It also had a single dual-core Intel CPU, PCI slots for more NICs, and a PCI-Express mini-slot as well.  I had to order parts, and it took a little while, but once done, it had a total of four gig NICs (one embedded, two PCI, one PCI-Express), four 1.5 TB SATA drives, and 4 GB RAM.  I loaded it with 64-bit Ubuntu-9.04, hand-carved the partitions and RAID-10 setup, formatted the 3 TB volume with XFS, tuned as best I knew how, and then put it on the 2.6.31 kernel (which I later updated).  There were no BIOS or other firmware updates needed.
  3. I then built the management server on the Dell 1850.  It only has one power supply (I cannot find a second one), but it does have 8 GB RAM and two dual-core CPUs.  I loaded 64-bit Ubuntu-9.04 on it after installing two 146 GB SAS drives in a RAID-1 mirror (hardware-based).  I also updated the BIOS and other firmware on it.
  4. Having these components in place, I studied the blade server to see what I could get away with, and ultimately decided to use each NIC on a blade server to support a set of traffic types, balancing the likelihood of traffic demands across them.  For example, Vmotion traffic, while it may be intense, should be relatively infrequent, so it shares a V-Net with another type of traffic that is low-bandwidth (the alternate management network).  Altogether, I ended up with a primary management network on one V-Net, Vmotion and the alternate on another V-Net, storage traffic (NFS and iSCSI) on a third V-Net, and VM traffic on its own V-Net.  Each V-Net maps to its own NIC on a blade, the same NIC on each blade.

The physical network design:

For the V-Nets, the management network went on NIC 1 as an untagged VLAN.  It has to be untagged, because when it boots up, it needs to get a DHCP address and talk to the boot server for its image.  Since it comes up untagged, it will not be able to talk out to the DHCP/PXE server if the V-Net is set to pass through tags.  The other V-Nets support tagged VLANs to further separate traffic.  Each V-Net has four links to the Cisco 6509, except for the storage V-Net, which has eight.  Two links form an LACP bundle from the active side (VC-Enet module in Bay 1), and two make up an LACP bundle (or etherchannel) from the module in Bay 2, which is the offline side.  This is repeated for the other networks across the other modules in Bays 5 and 6.  Bays 3 and 4 house the Fiber Channel modules, which I am not using.  Everything is on its own individual private 10.x.x.x network as well, except for the VM traffic net, which will contain the virtual machine traffic.
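For reference, the Cisco side of each uplink bundle looks roughly like this.  This is a sketch from memory – the interface numbers, VLAN IDs, and channel-group number are placeholders, not my actual config:

```
! One LACP bundle per VC-Enet module, two member links each
interface Port-channel10
 description VC-Enet Bay 1 uplink (storage V-Net)
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 20,30
!
interface range GigabitEthernet3/1 - 2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 20,30
 channel-group 10 mode active      ! "active" = LACP, not PAgP
 spanning-tree portfast trunk
```

The same shape repeats for the bundle from the standby module in Bay 2 and for the other V-Nets on Bays 5 and 6.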

The storage design:

Like I said, a really ghetto NFS server.  It does not have enough drives, so even though it would be overkill for a home PC, it will not cut it in this situation.  I expect it to run out of steam after only a few VMs are added, but it does tie everything together and provides the shared storage component needed for HA, Vmotion, and DRS.  I am working on an affordable and acceptable solution, rack-mounted, with more gig NICs and up to 24 hot-swap drives – more spindles should offer more throughput.  I bonded the NICs together into a single LACP link, untagged back to the Cisco, on the NFS storage VLAN.  Once working, I stripped out all unneeded packages for a very minimal 64-bit Ubuntu server.  It boots in seconds, and has no GUI.  Unfortunately, I did not get into the weeds enough to align the partitions/volumes/etc.  I just forgot to do that.  I will have to figure that out next time I get a storage box in.
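For what it is worth, the NFS export itself is a one-liner.  Here is a scrubbed sketch of the /etc/exports line – the path and subnet are placeholders, not my actual values:

```
# async and no_root_squash are fine for a disposable lab box, not for production
/srv/nfs  10.0.3.0/24(rw,async,no_subtree_check,no_root_squash)
```

After editing, "exportfs -ra" re-reads it without restarting the NFS server.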

The management server:

It is also on a very minimal 64-bit Ubuntu-9.04 install.  It has four NICs, but I only use two (the other two are only 100 Mb).  The two gig NICs are also bonded into one LACP link back to the Cisco, untagged.  The server is running a stripped down 2.6.31 kernel, and has VMware Server 2.0.x installed for the vCenter Server (running on a Windows 2003 server virtual machine).  On the Ubuntu host server, I have installed and configured DHCP, TFTP, and gPXE.  I also extracted the boot guts from the ESXi 3.5.0 Update 4 ISO and set up the tftpboot directory so that each blade will get the image installed.  On the vCenter Server virtual machine, I installed the Microsoft PowerShell tool (which installed ActiveState PERL), and the VMware PowerCLI tool.  I also downloaded the midwife scripts and installed Notepad++ for easy editing.  The vCenter Server VM is on a private 10.x.x.x net for isolated management, but this gets in the way of the Update Manager plugin, so I still have some work to do later to get around this.
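The DHCP side of the PXE boot is the usual next-server/filename pair.  A sketch of the relevant dhcpd.conf stanza – the addresses and boot file name here are placeholders for my setup, not something to copy blindly:

```
subnet netmask {
  option routers;
  next-server;               # the management server (TFTP source)
  filename "gpxelinux.0";             # gPXE bootstrap, which then pulls the ESXi image
}
```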

Really key things I learned from this:

  1. The blade server VC-Enet modules are NOT layer-2 switches.  They may look and feel that way in some aspects, but they, by design, actually present themselves to network devices as server ports (NICs), not as more network devices.  Learn about them – RTFM.  It makes a difference.  For instance, it may be useful to know that the right-side bay modules are placed in standby by default, and the left-side modules are active – they are linked via an internal 10Gig connection.  I know of another lab with the same hardware that could not figure out why they could not connect the blade modules to the network if all the modules were enabled, so they solved it by disabling all but Bay-1, instead of learning about the features and really getting the most out of it.
  2. Beware old 64-bit CPUs.  Just because it lets you load a cool 64-bit OS on it does NOT mean it will let you load a cool 64-bit virtual machine on it.  If it does not have virtualization instruction sets in its CPU(s), you will run into failure.  I found this out the hard way, after trying to get the RCLI appliance (64-bit) from VMware in order to manage the ESXi hosts.  I am glad I failed, because it forced me to try the PowerCLI/PowerShell tools.  Without those tools, I seriously doubt I could have gotten this project working.
  3. Learn PowerShell.  The PowerCLI scripts extend it for VMware management, but there are plenty of cool tricks you can do using the base PowerShell scripts as well.  I am no fan of Microsoft, so it is not often I express satisfaction with one of their products.  Remember where you were on this day, ‘cuz it could be a while before it happens again.
  4. Name resolution is pretty important.  HA wants it in a real bad way.  Point your hosts to a DNS server, or give them identical hosts files (a little ghetto, but a good failsafe for a static environment).  I did both.
  5. Remember those Enet modules?  Remember all that cool LACP stuff I mentioned?  Remember RTFM?  Do it, or you will miss the clue that while the E-net modules like to play with LACP, only one link per V-Net is set active to avoid loops.  So if, on your active V-Net, you have two LACP links, each for a different tagged VLAN, and your NFS devices won’t talk to anyone, you will know that it is because it saw your iSCSI V-Net first, so it set your NFS link offline.  Meaning, the iSCSI link on Bay-1 and its offline twin on Bay-2 both have to fail before your NFS link on Bay-1 will come up.  Play it safe – one LACP link per V-Net per bay.  Tag over multiple VLANs on the link instead.  The E-net modules only see the LACP links, and do not care if they support different VLANs – only one is set active at a time.
  6. Be careful with spanning tree (this can be said for everything related to networking).  Use portfast on your interfaces to the E-net modules, and be careful with spanning tree guards on the Cisco side.  In testing, I found that pulling one of the pairs in a link would isolate the VLAN instead of carrying on as if nothing had happened.  It turns out a guard on the interface was disabling the link to avoid potential loops.  Once I disabled that, the port-channel link functioned as desired.
  7. Doesn’t it suck to get everything working, and then not have a clean way to import in VMs?  I mean, now that you built it, how do you get stuff into it?  I ended up restructuring my NFS server and installing Samba as well.  This is because when importing a VM from the GUI (say, by right-clicking on a resource pool), the “Other Virtual Machine” option is the only one that fits.  However, it then looks for a UNC path (Windows share-style) to the .vmx file.  I could browse the datastore and do it that way, but for VMs not on the NFS datastore already, I needed to provide a means for other labs to drop in their VMs.  Samba worked.  Now they can drop in their VMs on the NFS server via Samba, and the vCenter Server can import the VMs from the same place.
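As for the Samba share in item 7, the smb.conf stanza is nothing fancy.  A scrubbed sketch – the share name, path, and group are placeholders, not my actual values:

```
  path = /srv/nfs/incoming        ; same tree the NFS export serves
  browseable = yes
  writable = yes
  valid users = @lab
```

With that in place, other labs drop VMs in over the UNC path, and vCenter imports the .vmx files from the exact same directory.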
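On the old-CPU point (item 2), there is a quick way to check a Linux box for those instruction sets before you burn an afternoon – a small sketch, where vmx is Intel VT-x and svm is AMD-V:

```shell
#!/bin/sh
# vmx = Intel VT-x, svm = AMD-V; with neither flag, 64-bit guests are a no-go
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    virt="present"
else
    virt="absent"
fi
echo "virtualization extensions: $virt"
```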

Currently, we are restructuring physical paths between labs for better management.  It is part of an overall overhaul of the labs in my building.  Once done, my next step is to start building framework services, such as repository proxy servers, WSUS servers, DHCP/DNS/file/print, RADIUS/S-LDAP/AD, etc., etc.  I also need to wrap in a management service framework as well that extends to all the labs so everyone has an at-a-glance picture of what is happening to the network and the virtual environment.  One last issue I am fighting is that I am unable to complete importing VMs I made on ESX 3.5 U2 earlier this year.  It keeps failing to open the .vmdk files.  I will have to pin that down first.

The end result?

  1. If I run the midwife service on the vCenter server and reboot a blade, it is reloaded and reconfigured within minutes.
  2. If I upgrade to beefier blades, I pop them in and let them build.
  3. If I update to a newer release of ESXi (say, update 5 or 6), I extract from the ISO to the tftpboot directory and reboot the blades.  The old configs get applied on the new updated OS.
  4. All configs are identical – extremely important for cluster harmony.  No typos.
  5. If someone alters a config and “breaks” something, I reboot it and it gets the original config applied back.
  6. If I make a change to the config, I change it in the script once, not on each blade individually.  This also allows for immediate opportunity to DOCUMENT YOUR CHANGES AS YOU GO.  Which is just a little bit important.

As stated before, this is an overview.  I will add more detailed articles later, which will include scripts and pictures as appropriate.  I am at home now and do not have access to my documentation, but once I get them, I will post some goodies that hopefully help someone else out.  To include myself.

Wireless Dilemma and Kubuntu 9.04 Network Manager….

While upgrading my kid’s computer and installing the web proxy and filter (see article titled “SquidGuard Blacklists…“), I ran across a real problem.  Wireless would start only after a user logged into their desktop, so the system had no IP address until then.  However, without an IP, Dansguardian would fail to start.  I tried scripting the problem away, essentially waiting indefinitely until a periodic check showed an IP address in use and then starting the services, but this did not work.  I played around with making an init script under /etc/init.d and using “update-rc.d” to create the proper sym links.  This also did not work.  I tried manually defining the wireless network using /etc/network/interfaces and creating a /etc/wpa_supplicant.conf file.  This did not work.

It was then I remembered a server I had built at work, using Ubuntu-9.04, in which I had stripped off all of the GUI/desktop stuff, leaving a bare-bones server instead.  It worked fine on the network, and did not have Network Manager installed.  Looking in the init script folder under /etc/init.d, I found a NetworkManager service, so I made it non-executable (“sudo chmod -x /etc/init.d/NetworkManager“), and ran “sudo update-rc.d -f NetworkManager remove” to get rid of the startup links.  After that, the wireless network started on boot just fine, with no need for user interaction, and the services for the proxy and filters started flawlessly (I added them into /etc/network/interfaces).

So, Network Manager was stepping all over /etc/network/interfaces.  Not anymore.  I could have removed the package, but other packages will then be removed, and I don’t want that.

For someone having trouble with their manual wireless setup, here are my scrubbed /etc/network/interfaces and /etc/wpa_supplicant.conf files:


auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant.conf

# added 10-18-09 for proxy filter
pre-up iptables-restore < /etc/iptables.rules
post-up /usr/local/squid/sbin/squid
post-up /usr/local/dansguardian/sbin/dansguardian
post-down iptables-save -c > /etc/iptables.rules


network={
        ssid="MyHomeSSID"
        key_mgmt=WPA-PSK
        proto=RSN
        pairwise=CCMP TKIP
        group=CCMP TKIP
        psk="bogus-passphrase"
}

This is for a WPA2 wireless setup (SSID and passphrase are bogus, of course).  Hope this helps someone.

SquidGuard Blacklists…

Here is a listing of some sites that have actively managed blacklists freely available for non-commercial download:

Shalla Secure Services
Blacklists UT1
MESD Blacklists (not sure how current this one is)

Anyway, I updated the script from the HOWTO – Child-Proofing Internet Access on Kubuntu article. It was failing because squidGuard kept failing to find files and going into emergency mode when run with “-C all” to build the databases. By also running it with the -d option, I was able to see where it was failing. The Norway site was not permitting the blacklist download to occur, so I found these other sites and wrote them into the script. By doing that and adjusting my squidguard.conf file (commented out the “not_ok” ACL block), as well as by creating files that it could not find (copied ok/domains.db to ok/domains and adult/very_restrictive_expressions to adult/expressions and porn/expressions), the script now ran without errors to completion.
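Those file fix-ups can be scripted so the next blacklist refresh does not trip over the same thing.  A sketch – the blacklist root is passed in as a parameter, and the directory layout assumed here is the one from the HOWTO article:

```shell
#!/bin/sh
# Recreate the plain-text files squidGuard expects next to the .db files
fix_blacklist_files() {
    db="$1"
    [ -f "$db/ok/domains" ] || cp "$db/ok/domains.db" "$db/ok/domains"
    [ -f "$db/adult/expressions" ] || \
        cp "$db/adult/very_restrictive_expressions" "$db/adult/expressions"
    [ -f "$db/porn/expressions" ] || \
        cp "$db/adult/very_restrictive_expressions" "$db/porn/expressions"
}
```

Then “fix_blacklist_files /var/lib/squidguard/db/blacklists” followed by “squidGuard -d -C all” should run clean.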

The script is updated here and on the linked article. (pdf file)

VMware Server 2.0.1 and Kernel…

I finally decided to get VMware Server running on my new kernel.  Whenever the kernel is updated, there are some things you can count on having to reinstall, such as NVidia video drivers and VMware installations.   I expected problems, so my methodology was to attempt a normal install, expect failure, and search on the resulting errors.  This did not pan out, so I tried the VMware Community Forums, and I found this little gem on how to patch the VMware modules:

This apparently works with 32-bit as well, though I have not confirmed it.

I downloaded the patch and shell script, ran the script, and followed the directions of the output:

  • Move original files that could cause issues with VMware – “mv /usr/lib/vmware/modules/binary /usr/lib/vmware/modules/binary-orig”
  • Run the config again, without the -d option (otherwise, root would be the only user allowed to log into the web interface)

Essentially, there were no problems getting everything running.  Now I have to figure out what my password was to log into my Windows XP VM.  I have to complete some online training that can only be done in Windows (thanks a ton).  I would hate to have to crack my way into my own VM….

Huge thanks out to both michelmase and Krellan for the patches and scripts!

HOWTO – Child-Proofing Internet Access on Kubuntu

[UPDATED 10-18-2009 – Numerous old typos fixed, several new typos added, syntaxes corrected, updates made for newer versions of stuff, better instructions, cooler errors, and even a little more attention to detail paid.]

[CREDIT goes to Step By Step – Thank you for this script, and sorry it took so long to put this credit in.]

This article is a revision of this post. It has been adapted for use on Kubuntu 8.04. I got a lot of info from this link here. Another excellent resource is here (PDF). As always, YMMV. This is a long and involved post – be prepared to take an afternoon, and to work on that degree from Google. But when you are done, you will have a powerful transparent-proxy-content-filter-porn-stomper. No charge.

1. Download the following (there may be newer versions, but definitely need db-2.7.7):

I checked these versions against the repositories, and except for the db-2.7.7, these are still fairly current. The version of iptables I am using is 1.3.8. For this, I prefer installing from tarballs, even though this means they will not get updates. The main advantages I see to this approach are that you can more directly control where they go in the file system (making them easier to troubleshoot and remove), and updates to packages might cause feature/config file breakage, whereas these ensure a static environment. Unfortunately, I cannot upload the actual tarballs for use, so either find these versions in an archive, or brace yourself for an adventure in configuration differences.

2. Unpack the downloaded files:

  • tar xvfz db-2.7.7.tar.gz
  • tar xvfj squid-2.6.STABLE5-20061110.tar.bz2
  • tar xvfz dansguardian-
  • tar xvfz squidGuard-1.2.0.tar.gz

3. Check that you don’t have squid, squidGuard, or dansguardian already installed, and that you have iptables installed. Adept Manager is an easy way to find out. Check that you do not already have a squid group and user. If you do not, then pick a group ID between 1 and 999 to use for the squid group:

  • more /etc/group | grep -i squid <is there a squid group?>
  • more /etc/passwd | grep -i squid <is there a squid user?>
  • more /etc/login.defs | grep -i UID_MIN <what is the lowest user ID? anything below this is a system account, and will not get a home directory by default, which is a good thing – so pick something lower than UID_MIN>
  • more /etc/group | grep <number below UID_MIN> <is the group ID you picked already in use? If so, keep picking one until you find a number not in use.>

4. As root (sudo -s), make user and group. The “groupadd -r squid” command is out – this would have made a system account. The new command syntax is shown below instead.

  • groupadd -g <number you picked> squid
  • useradd -u <number you picked> -g squid -d /var/spool/squid -s /bin/false -r squid

5. When making firewall rules (below), I kept getting the error “iptables: No chain/target/match by that name” until I discovered that I did not have the ipt_owner.ko module available to be loaded (in my current kernel version, it is called “xt_owner”). Issue an “updatedb” command, followed by “locate _owner.ko” to see if you have it for your kernel version. If you have it, see if it is loaded – “lsmod | grep -i _owner“. I ended up compiling a new kernel (to get some other features I wanted, not just for the module), and ensuring the owner module was built (“make oldconfig” and “make menuconfig” steps of this post, under the networking section). Once I had that module, I was good to go with matching packets by owner.

Make menuconfig (need ncurses libraries installed: libncurses5-dev and libncursesw5-dev; helpful to have ncurses-term packages installed):
“Networking Support —>
Networking Options —>
Network Packet Filtering Framework (Netfilter) —>
Core Netfilter Configuration —>”

  • (M) Netfilter connection tracking support (NF_CONNTRACK)
  • (M) Transparent proxying support (EXPERIMENTAL) (NETFILTER_TPROXY)
  • (M) “owner” match support (NETFILTER_XT_MATCH_OWNER)

REMEMBER: If you upgrade your kernel to a new version and use a proprietary video driver (ATI or NVIDIA), set your xorg.conf driver to “vesa” BEFORE you reboot. Reboot on the new kernel, log into the console (so as not to start any window manager or x session), and upgrade your video driver (update xorg.conf to reflect the new driver). Then either reboot, or just start your window manager normally.

6. Make BerkeleyDB – must be the 2.x version, not newer, not older:

  • cd db-2.7.7/dist/
  • ./configure
  • make
  • make install

7. Make squid v.2-6 (NOTE – To have SSL, I needed to install the libcurl4-openssl-dev package. Otherwise, “make” generated this error: “../include/md5.h:14:2: error: #error Cannot find OpenSSL headers” ):

  • cd squid-2.6.STABLE5-20061110/
  • ./configure --enable-icmp --enable-delay-pools --enable-useragent-log --enable-referer-log --enable-kill-parent-hack --enable-cachemgr-hostname=hostname --enable-arp-acl --enable-htcp --enable-ssl --enable-forw-via-db --enable-cache-digests --enable-default-err-language=English --enable-err-languages=English --enable-linux-netfilter --disable-ident-lookups --disable-internal-dns
  • make
  • make install

It is located in /usr/local/squid/.

8. Make squidGuard v.1.2:

  • cd squidGuard-1.2.0/
  • ./configure
  • make
  • make install

Default install is in /usr/local/bin/.

9. Make dansguardian v.2.9.8:

  • cd dansguardian-
  • mkdir /usr/local/dansguardian
  • ./configure --prefix=/usr/local/dansguardian --with-proxyuser=squid --with-proxygroup=squid --enable-email=yes
  • FOR EMBEDDED URL WEIGHTING AND OTHER FEATURES: ./configure --prefix=/usr/local/dansguardian --with-proxyuser=squid --with-proxygroup=squid --enable-email=yes --enable-pcre=yes (this last option is CPU intensive; turn on in dansguardianf1.conf)
  • make
  • make install

It is located in /usr/local/dansguardian/.

If you get an error during the configure part like this: “configure: error: pcre-config not found!“, install the libpcre++-dev package.
When using GCC 4.3, I got errors of “error: ‘strncpy’ was not declared in this scope“. The fix was found on GCC 4.3 Release Series – Porting to the New Tools. Basically, for each such error, go to the file referenced under the src folder and add the line “#include <cstring>”.

10. Make and configure squid directories:

  • mkdir /usr/local/squid/var/cache
  • chown -R squid:squid /usr/local/squid/var
  • chmod 0770 /usr/local/squid/var/cache
  • chmod 0770 /usr/local/squid/var/logs

11. Make and configure squidGuard directories (see for reference):

  • mkdir /usr/local/squidGuard
  • mkdir /usr/local/squidGuard/log
  • chown -R squid:squid /usr/local/squidGuard/log
  • chmod 0770 /usr/local/squidGuard/log
  • mkdir /var/log/squidguard
  • touch /var/log/squidguard/squidGuard.log
  • touch /var/log/squidguard/ads.log
  • touch /var/log/squidguard/stopped.log
  • chown -R squid:squid /var/log/squidguard
  • mkdir /var/lib/squidguard
  • mkdir /var/lib/squidguard/db
  • mkdir /var/lib/squidguard/db/blacklists
  • mkdir /var/lib/squidguard/db/blacklists/ok
  • mkdir /var/lib/squidguard/db/blacklists/porn
  • mkdir /var/lib/squidguard/db/blacklists/adult
  • mkdir /var/lib/squidguard/db/blacklists/ads
  • chown -R squid:squid /var/lib/squidguard

12. Configure dansguardian directories:

  • chown -R squid:squid /usr/local/dansguardian/var/log
  • touch /var/lib/squidguard/db/blacklists/porn/domains_diff.local
  • touch /var/lib/squidguard/db/blacklists/porn/urls_diff.local

13. Edit and copy squid configs from respective source directories:

  • cp squid.conf /usr/local/squid/etc/squid.conf
  • sample squid.conf settings:
    • http_port transparent
    • icp_port 0
    • htcp_port 0
    • redirect_program /usr/local/bin/squidGuard
    • cache_effective_user squid
    • cache_effective_group squid
    • acl all src
    • acl manager proto cache_object
    • acl localhost src
    • acl to_localhost dst
    • acl allowed_hosts src
    • acl SSL_ports port 443
    • acl Safe_ports port 80 21 443 # http ftp https
    • ##acl Safe_ports port 21 # ftp
    • ##acl Safe_ports port 443 # https
    • ##acl Safe_ports port 1025-65535 # unregistered ports
    • acl CONNECT method CONNECT
    • acl NUMCONN maxconn 5
    • acl ACLTIME time SMTWHFA 7:00-21:00
    • #http_access allow manager localhost
    • #http_access deny manager
    • http_access deny manager all
    • http_access deny !Safe_ports
    • http_access deny CONNECT !SSL_ports
    • http_access allow localhost ACLTIME
    • http_access deny NUMCONN localhost
    • #http_access allow allowed_hosts
    • http_access deny to_localhost
    • http_access deny all
    • http_reply_access allow all
    • #icp_access allow allowed_hosts
    • #icp_access allow all
    • icp_access deny all
    • visible_hostname localhost

Edit squid.conf and set up time based access, to prevent late night surfing (add the following lines):

  • acl ACLTIME time SMTWHFA 7:00-21:00 (add to the ACL section)
  • http_access allow localhost ACLTIME (add to the http_access section)

14. Edit and copy squidGuard configs from respective source directories:

  • cp squidGuard.conf /usr/local/squidGuard/squidGuard.conf
    • change ip gateway address in squidGuard.conf

15. Edit and copy dansguardian configs from respective source directories:

  • cp dansguardia*.conf /usr/local/dansguardian/etc/dansguardian/
  • sample dansguardian.conf settings:
  • sample dansguardianf1.conf settings:
    • groupmode = 1
  • copy (it is posted as a PDF – copy the text to a shell script) to /usr/local/bin
  • [UPDATED 10-18-2009 with more current blacklist sites]

16. Make the firewall rules (iptables commands may appear wrapped in two lines):

  • iptables -t nat -A OUTPUT -s -d -p tcp --dport 3128 -j ACCEPT (without this rule, dansguardian may fail with the error: “Error connecting to parent proxy”)
  • iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT
  • iptables -t nat -A OUTPUT -p tcp --dport 3128 -m owner --uid-owner squid -j ACCEPT
  • iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner exemptuser -j ACCEPT (change exemptuser)
  • iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080
  • iptables -t nat -A OUTPUT -p tcp --dport 3128 -j REDIRECT --to-ports 8080

It is a good idea to do this part *after* compiling and installing, as these rules will get in the way if you need to install a package (like libcurl4-openssl-dev). If this happens, Adept Manager will abruptly crash (leaving you to find and remove the lock files), and apt-get install will fail with a connection refused error. Just rerun the rules above, but replace the -A with a -D to delete them. Get your packages, install your software, and reapply the firewall rules.

17. Save and apply the firewall settings permanently (visit Iptables HowTo – Community Ubuntu Documentation for details):

  • sudo sh -c "iptables-save > /etc/iptables.rules"
  • sudo nano /etc/network/interfaces
    • pre-up iptables-restore < /etc/iptables.rules
    • post-down iptables-save -c > /etc/iptables.rules

18. Start or restart services as needed:

  • /usr/local/squid/sbin/squid -z (first-time config)
  • /usr/local/squid/sbin/squid -N -d 1 -D (test squid, kill when working fine)
  • /usr/local/squid/sbin/squid (this also runs squidGuard from “/usr/local/bin/squidGuard”)
  • /usr/local/dansguardian/sbin/dansguardian
  • /usr/local/bin/ (you may have to kill this – it hangs after displaying the line “adult/usage”)
  • /usr/local/squid/sbin/squid -k reconfigure
  • /usr/local/dansguardian/sbin/dansguardian -Q

The squid test revealed that I was missing a custom file: “errorTryLoadText: ‘/usr/local/squid/etc/errors/ERR_ACCESS_DENIED_TIME’: (2) No such file or directory”. So, I copied it from “/usr/local/squid/etc/errors/English/ERR_ACCESS_DENIED”, and “edited” it in vi for a little access-denied humor. Never miss a chance to have a spot of fun! After that, squid worked fine.

Dansguardian kept failing with “Error connecting to parent proxy”, until I edited iptables with “iptables -t nat -I OUTPUT 1 -s -d -p tcp --dport 3128 -j ACCEPT” (to place it as the first output rule on the nat table). Then DG worked fine.

The script hung and had to be killed. I confirmed everything was finished by checking the last file date-time-stamp against the date-time-stamp it displays right after it is run. So if the DTS displayed was “20090214185211”, and the DTS returned with “ls -l /var/lib/squidguard/db/blacklists/porn/stats/20090214185211_stats” was more recent, say “2009-02-14 18:53”, then you can be sure it is finished. Or you can just use “lsof” and look for the process. That is probably smarter.

[UPDATED 10-18-2009]
The script hung because a.) I could not download from the Norway site and b.) “squidguard -C all” from the script was not finding files and went into emergency mode, apparently a place it can hide and whimper silently. Forever. I ran instead “squidguard -d -C all” and discovered it was failing to find certain files, which I just created or copied into existence. This quieted squidguard down and let it finish. Almost – I also commented out the “not_ok” ACL block in the squidguard.conf file, since I am not using it. Details are on this article concerning the updated blacklist script “”: SquidGuard Blacklists…

19. Set up a mailer for notifications (here is a link for assistance):

  • using postfix, point it to your mailserver.isp.domain
  • postfix needs /etc/postfix/transport and /etc/postfix/generic
  • dansguardian.conf calls it with the ‘sendmail -t’ command
  • for non-authenticated use, do not set ‘by user = on’ in dansguardianf1.conf

20. Post-install testing and tweaking:

  • Test with browser as different users – should be transparent proxy surfing now, works with lynx as well (“su - <username>, lynx, G,” should get either Playboy for an approved user or the dansguardian access denied page for a restricted user.)
  • Check if your system emails you violations.
  • Be sure to update your startup files (/etc/init.d/ or your rc.local) to ensure everything starts when the computer is booted.
  • When you are ready, reboot, and check again with lynx as different users.

I have been working on this all day. I have not yet gotten email to work, and am not sure I need to – maybe I’ll just check the logs instead. So, hope this helps, and good luck.

Time for a beer.

Fun Script to Track Your Blog Hit Count…

I got the inspiration for this script from CoolTechie’s blog. He made a script to display cricket match scores from a web site. I decided to have a little fun with it, so made one of my own that periodically checks this blog and displays a popup window if the hit counter has changed, with the current visitor count.

I started with the command, “touch blog-hits”, then edited it in vi. When I was done, I typed “chmod +x blog-hits” to make it executable and ran it by typing “./blog-hits”.

Here is the script…

#!/bin/bash
### Adapted from a soccer score script found on
### Thanks out to cooltechie!

# Set variables - interrupt, url, url title, search phrase, and unchanged counter
USER_INTERRUPT=1 # exit code used by the trap below
url="" # set this to your blog's address
phrase="hits" ### this is the default, change if you use something else
title="Linux Free Trade Zone" ### I wanted to come up with a cool way to extract the title of the URL, but it got late...
same=""

# Catch Control-C events to break out of the loop and remove the dump file
trap 'echo "Quitting..."; rm -f dump; exit $USER_INTERRUPT' TERM INT

while [ 1 ]
do
    lynx -dump "$url" > dump
    hits=`grep "$phrase" dump`
    if [ "$hits" != "$same" ]
    then
        kdialog --title "$title" --passivepopup "$hits" 10
        same="$hits"
    fi
    sleep 60
done
It grabs the text of the url, looks for the word “hits” or whatever phrase you tell it to if you have changed that on your site, and compares it to the old value (same). When the current value is different from the “same” value, such as when the script is first run and when people visit your blog, it displays that new hit count in the popup for 10 seconds, then waits one minute before re-downloading a text dump of the site. You end it with CTRL-C, which also tells it to clean up the dump file it makes. You can run it in the background if you want. I am sure there is plenty of stuff you can add to this as well, and it might even be a little buggy (inaccurate). I just thought it would be fun to have a little popup counter, and it was fun to do.

One thing I found is that it also prints the phrase you searched on along with the visitor count, because grep puts the entire matched line into the “hits” variable. Too tired to troubleshoot, however…
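If that bugs you, one fix is to strip the matched line down to just the number before displaying it – a sketch, assuming the counter line contains a single run of digits (commas allowed):

```shell
#!/bin/sh
# Keep only the first digit run from the matched line
line="1,234 hits"    # stands in for: hits=`grep "$phrase" dump`
count=$(printf '%s\n' "$line" | grep -o '[0-9][0-9,]*' | head -n 1)
echo "$count"        # -> 1,234
```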

Anyway, enjoy!