
Lifehacker has a nice write-up on Firefox web browsing with SOCKS proxies. The tip about network.proxy.socks_remote_dns was new to me and I will have to play with that sometime. Safari, my primary browser, seems to resolve DNS requests at the proxy by default, so that saves me the hassle in the meantime.

One of the take-home messages of the Lifehacker entry is that you can run “ssh -D 1080 server.remotehost.com” on your workstation, then configure Firefox (as well as most other browsers) to use a SOCKS proxy at localhost port 1080. This gives you encrypted communications between your workstation and the server (great when your workstation is on an untrusted wireless network) and lets you masquerade as the server (useful when accessing websites that are behind a firewall or that restrict access by IP address).
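
For concreteness, a minimal sketch of that setup (the username is a placeholder and the Firefox menu path may vary by version):

# On the workstation: open a dynamic (SOCKS) forward on local port 1080
$ ssh -D 1080 user@server.remotehost.com

# Then point the browser at it, e.g. in Firefox:
#   Preferences -> Advanced -> Network -> Connection Settings
#   Manual proxy configuration: SOCKS Host localhost, Port 1080
# and, per the Lifehacker tip, set network.proxy.socks_remote_dns to true in about:config.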

Very simple, extremely handy. But what if you want to use a remote server that is behind a firewall and only accessible via a gateway machine?

----------------               -------------         -------------
| workstation  |               |           |         |  server   |
|              | --------------|  gateway  | ------- |           |
|(web browser) |               |           |         |  (SOCKS)  |
----------------               -------------         -------------

In that case you have to tunnel through the gateway to get to the SOCKS server running on the server. In this post I’m going to walk through building up the ssh command that will achieve such a tunnel, and then present an alternate, more generic method.
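
As a rough preview (the gateway host name below is a placeholder, and this is just one common way to do it), the idea is to chain two ssh commands: first forward a local port through the gateway to the server’s sshd, then run the dynamic SOCKS forward over that tunnel.

# Step 1, on the workstation: forward local port 2222 through the gateway to the server's sshd
$ ssh -L 2222:server.remotehost.com:22 user@gateway.example.com

# Step 2, in another terminal: open the SOCKS proxy over the forwarded port
$ ssh -p 2222 -D 1080 user@localhost

# The browser is then pointed at the SOCKS proxy on localhost:1080, exactly as before.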


Last week SpikeLab.org posted a set of benchmarks for scp, tar over ssh, and tar over netcat. The loser in SpikeLab’s environment was scp, coming in roughly three times slower than tar over ssh (10m10s vs 3m18s) for a directory of two hundred 10MB files.

Interesting. I had never noticed scp performing any worse than ssh, but then again I had never compared them directly. I decided to run my own unscientific tests on my servers to see whether I’m in the same boat.

The Hardware

sender:   RedHat EL4, OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
          16GB memory, 4x Dual Core AMD Opteron
receiver: Apple OS X Server, OpenSSH_4.5p1, OpenSSL 0.9.7l 28 Sep 2006
          1GB memory, 2GHz PowerPC G5

The two machines are on a 1Gb/s network, with four router hops and potentially competing traffic between them, so I repeated the tests a few times during off-peak hours to minimize the effects of traffic interference.

The Tests

First I created a directory of two hundred 10MB files.

$ i=0; until [ $(ls | wc -l) -gt 199 ]; do i=$((i+1)); dd if=/dev/urandom of=$i bs=10k count=1k; done

Then I ran a series of scripted file transfer tests, modeled after SpikeLab’s tests. The script is posted at the end of this post.

$ ./perfTests.sh
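
Roughly speaking, each timed trial boils down to one of the following command shapes (host and directory names here are placeholders, not a verbatim excerpt of the script):

# scp, without and with ssh compression (-C)
$ time scp -r testdir receiver.example.com:/tmp/
$ time scp -r -C testdir receiver.example.com:/tmp/

# tar over ssh, optionally with ssh compression (-C) and/or gzip (z)
$ time tar cf - testdir | ssh receiver.example.com "cd /tmp && tar xf -"
$ time tar czf - testdir | ssh -C receiver.example.com "cd /tmp && tar xzf -"

# tar over netcat; the receiver runs a listener first, e.g. "nc -l -p 9999 | tar xf -"
# (listener flags vary between netcat flavors)
$ time tar cf - testdir | nc receiver.example.com 9999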

The Results

A representative result of one of the benchmarks for transferring urandom-generated files across the network is shown in Table 1.

Table 1.

command   compression                time
scp       no                         251.04s
scp       ssh                        262.37s
tar       no                         264.99s
tar       ssh                        267.34s
tar       gzip                       324.88s
tar       ssh and gzip               331.32s
tar       no (blowfish encryption)   279.94s
nc        no                          69.45s
nc        gzip                       219.53s

In contrast to SpikeLab’s results, I saw no significant difference between scp and tar over ssh. Also, in my environment adding gzip compression to the tar transfers had a detrimental impact on performance. Compare that with SpikeLab’s results, in which gzip compression significantly improved the transfer rates.

I can think of a few reasons for the transfer rate differences between the two sites. Different versions or build options of SSH could affect the results. ssh/scp also have a number of options that can be set in configuration files, so the command lines shown here do not tell the whole story; those behind-the-scenes configurations may be affecting the results.

The effect of gzip compression I see can be explained by the randomness of the files being compressed. The Table 1 results used files generated from /dev/urandom. If I repeat the tests with files composed uniformly of NULL characters from /dev/zero, then gzip gives a marked improvement in the transfer times (Table 2). The more random the contents of a file, the less compression gzip can achieve. In fact, if the contents are fully random, as is the case here, no compression takes place and the compressed file actually ends up slightly larger because of gzip’s accounting overhead stored with the file. So in some cases gzip introduces compute-time overhead with no reduction in the data sent over the wire. The NULL files from /dev/zero compress nicely – the 2GB directory shrinks to a 2MB tarball – so the bandwidth savings are substantial.
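
A quick way to see that difference outside of the benchmark (file names are just for illustration):

$ dd if=/dev/urandom of=random.dat bs=10k count=1k
$ dd if=/dev/zero    of=zeros.dat  bs=10k count=1k
$ gzip -c random.dat > random.dat.gz   # keep the originals, write compressed copies
$ gzip -c zeros.dat  > zeros.dat.gz
$ ls -l *.dat *.dat.gz
# random.dat.gz comes out slightly larger than the 10MB original,
# while zeros.dat.gz shrinks to a few kilobytes.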

Table 2.
Trials using non-random files generated from /dev/zero

command   compression                time
scp       no                         271.80s
scp       ssh                        264.76s
tar       no                         269.62s
tar       ssh                        272.20s
tar       gzip                        78.33s
tar       ssh and gzip                76.25s
tar       no (blowfish encryption)   277.29s
nc        no                          78.51s
nc        gzip                        78.12s

Interestingly, enabling compression in scp/ssh had no real effect on the NULL files, even though it should use the same zlib compression algorithm and the same default compression level (6) as gzip. The CPU on the receiver appears to be the limiting factor for gzip compression over netcat, so no improvement was seen there either. The previous ssh results used SSH protocol 2. If I use protocol 1, with and without compression, I do see a dramatic difference (Table 3).

Table 3.
SSH-1 and /dev/zero data

command   compression     time
scp       no              587.42s
scp       ssh              98.88s
tar       no              687.80s
tar       ssh              93.92s
tar       gzip             78.05s
tar       ssh and gzip     87.90s

As an aside, I tested SSH’s blowfish cipher, which is reportedly faster than the default AES. However, I saw no improvement in the transfer rate from that algorithm (Table 1).
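
For reference, those variations correspond to standard OpenSSH command-line options (shown here in generic form with placeholder names; the script could equally set them in ssh_config):

$ scp -C bigfile host:/tmp/                        # ssh/scp zlib compression
$ scp -1 -C bigfile host:/tmp/                     # force protocol 1 (the Table 3 runs)
$ scp -c blowfish bigfile host:/tmp/               # select the blowfish cipher
$ tar cf - dir | ssh -C host "tar xf - -C /tmp"    # compression on a tar-over-ssh pipe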

In summary, I think all of this highlights the need to benchmark your specific environment and adjust accordingly. Your mileage may vary.

