sshfs is wickedly handy for mounting remote directories on your local filesystem. Recently I needed to mount the /logs directory off a remote server so a program on my workstation could process log files in /logs.
The textbook command to do that would be:
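The original command isn't shown here, but a sketch of a direct mount, using the hostname, username, and paths that appear later in this post, would look something like:

```shell
# Direct mount -- only works if the workstation can reach the server.
sshfs me@server.remotenet.org:/logs /mnt/svrlogs
```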
The tricky part in this particular case is that the server is on a private network, so my workstation cannot reach it directly. I'm required to first ssh to a gateway machine and then ssh from there to the server.
```
----------------      -------------      -------------
|workstation   |      |           |      | server    |
|              |------|  gateway  |------|           |
|/mnt/svrlogs  |      |           |      | /logs     |
----------------      -------------      -------------
```
I found three ways to work with this scenario. I’d love to hear of more ways and get feedback on these.
The first method is to pre-establish a forwarded port over SSH.
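A sketch of the forwarding command, assuming the gateway hostname from the known_hosts note later in this post:

```shell
# Forward local port 10101 to the server's ssh port (22) via the gateway.
# -N: no remote command, just hold the forwarding open.
ssh -N -L 10101:server.remotenet.org:22 me@gateway.remotenet.org
```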
Then I have sshfs connect to the localhost (127.0.0.1) address at port 10101.
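Something along these lines; sshfs talks to the local end of the tunnel, which the gateway relays to the server:

```shell
# Mount through the forwarded port on localhost.
sshfs -p 10101 me@127.0.0.1:/logs /mnt/svrlogs
```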
For a more robust, long-lived mount I initiate the ssh port forwarding in a screen session so it persists after I log out, and I use autossh to re-establish the unattended port forwarding should the ssh process become disconnected.
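A sketch of that setup; the autossh monitoring port (20000 here) is an arbitrary choice of mine, not from the original:

```shell
# Start a named, detachable screen session...
screen -S logtunnel
# ...and inside it, let autossh babysit the tunnel.
autossh -M 20000 -N -L 10101:server.remotenet.org:22 me@gateway.remotenet.org
# Detach with Ctrl-a d; the tunnel keeps running after logout.
```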
To re-establish the mount when the workstation boots, I used an entry in /etc/fstab
I’m connecting to server.remotenet.org (via the localhost port forwarding) with my personal username (me). The ‘uid=0,gid=0,umask=222,default_permissions’ makes the mountpoint owned by root and read-only for everyone (appropriate for my particular task at hand).
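The entry itself isn't reproduced above; a sketch in the older `sshfs#` fstab style, combining the forwarded port with the options just described, might look like:

```
sshfs#me@127.0.0.1:/logs  /mnt/svrlogs  fuse  port=10101,uid=0,gid=0,umask=222,default_permissions  0  0
```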
I created a script ‘jump_server’
#!/bin/sh
ssh me@gateway ssh "$@"
And test mounted with the command
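A sketch of that test mount; the script's location under /home/me/bin is my assumption, only the need for a full path is stated:

```shell
# ssh_command must be given as a full path to the wrapper script.
sshfs -o ssh_command=/home/me/bin/jump_server me@server.remotenet.org:/logs /mnt/svrlogs
```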
(I had to provide a full path for the ssh_command to get this to work.)
The /etc/fstab entry is a small modification from the previous.
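A sketch of that modified entry, swapping the forwarded-port options for the ssh_command wrapper (the script path is assumed):

```
sshfs#me@server.remotenet.org:/logs  /mnt/svrlogs  fuse  ssh_command=/home/me/bin/jump_server,uid=0,gid=0,umask=222,default_permissions  0  0
```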
The third method utilizes a set of SSH keys dedicated to sshfs usage. It’s somewhat cumbersome and perhaps should only be treated as an academic exercise. This method does have potential when needing to use keys with empty passphrases. Restrictions can be placed on how the keys are used on the remote machines to mitigate the security risks of unlocked keys. One of the limits placed on the key will be a remote command that is run whenever the key is used. We will use that to establish the ssh connection on behalf of sshfs.
Here are the steps to set this up.
On the workstation I create a new key, leaving the passphrase empty when prompted.
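A sketch of the key creation, using the `~/.ssh/sshfs` filename referenced in the next step (the key type is an assumption):

```shell
# Press Enter at both passphrase prompts to leave it empty.
ssh-keygen -t rsa -f ~/.ssh/sshfs
```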
I copy the contents of ~/.ssh/sshfs.pub to my ~/.ssh/authorized_keys file on the gateway machine and add options to limit how the key can be used.
The ‘command=’ option specifies the ssh connection from the gateway to the server. Part of that connection string is the instruction to use my ~me/.ssh/sshfs_gw identity file which contains another key I created, also without a passphrase.
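A sketch of what that authorized_keys entry on the gateway might look like; the exact restriction options are my assumptions, and the key material is elided:

```
command="ssh -i ~me/.ssh/sshfs_gw me@server.remotenet.org -s sftp",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...key-material... me@workstation
```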
Now I copy the contents of ~/.ssh/sshfs_gw to my ~/.ssh/authorized_keys file on the server and add options to limit how the key can be used.
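A sketch of the server-side entry; the forced command starts sftp-server directly, as described below. The sftp-server path varies by system (this one is a Debian-style guess), and the restriction options are again my assumptions:

```
command="/usr/lib/openssh/sftp-server",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...key-material... me@gateway
```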
The gateway initiates the ssh connection to the server. As an aside, under normal conditions the ‘-s sftp’ argument would start up the sftp subsystem on the server, but it’s actually not needed or used here. In this case the authorized_keys on the server has a forced command to start the sftp-server, and so it is that explicit command, not the ‘-s sftp’ request, that is acting here.
[Be sure the ~/.ssh/known_hosts file on each machine is properly configured. That is, my workstation has an entry for gateway.remotenet.org, and gateway has an entry in its known_hosts for server.remotenet.org. Typically the first time you ssh to a machine you will be prompted to record that machine in known_hosts. If you do not set this up in advance, sshfs could hang at this prompt. You may not see or have access to this prompt, resulting in much head-scratching over what is going wrong.]
Reviewing, I have an unlocked key on my workstation that gets me into the gateway without a password. As soon as I’m logged in the gateway will automatically execute the specified ssh connection to server.remotenet.org. This ssh connection uses an unlocked key on the gateway machine that gets me into the server without a password. When using that key, the server will automatically start up the sftp-server.
I now have an sftp server connection tunneled through the gateway between my workstation and the remote server. sshfs can now jump the gateway and mount the server directory.
Note that I’m making my connection to the gateway which is tunneling the /logs directory on the server.
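A sketch of that mount command, pointing sshfs at the gateway with the dedicated identity file:

```shell
# The target is the gateway; the forced commands relay the sftp
# session onward to server.remotenet.org, so /logs here is the
# server's directory, not the gateway's.
sshfs -o IdentityFile=~/.ssh/sshfs me@gateway.remotenet.org:/logs /mnt/svrlogs
```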
[Update 2/21/2007: If I’m using an ssh-agent to store my normal default keys (e.g. id_rsa), then one of those keys gets used in preference to the one specified by IdentityFile. I’ve added “-oIdentitiesOnly=yes” to the command to disable the ssh-agent stored keys.]
The /etc/fstab entry looks like:
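The entry isn't reproduced above; a sketch, again in the `sshfs#` style, with the IdentitiesOnly option from the update folded in:

```
sshfs#me@gateway.remotenet.org:/logs  /mnt/svrlogs  fuse  IdentityFile=/home/me/.ssh/sshfs,IdentitiesOnly=yes,uid=0,gid=0,umask=222,default_permissions  0  0
```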
The downside of this technique is that changes to sshfs options have to be made in the authorized_keys file on the gateway machine. That’s fairly straightforward (after all, the options should not need to be adjusted outside any initial testing phase), but this, coupled with the casual appearance that the mount is off the gateway machine rather than the server, really obscures the system design. Personally I view this as a fairly serious drawback that warrants careful consideration before deploying a setup like this on a production system.
The ssh keys used in the third technique are unlocked, which incurs some security risks should the keys fall into the wrong hands. I’ve attempted to mitigate some of the risks by including use restrictions when adding them to authorized_keys on the remote machines. In particular, the forced command restricts the key to that singular usage. In this case, though, that usage is the establishment of an sftp connection with the server. That leaves the server open to reading and writing files (subject to the permissions my account has on the server). Hmmm, can commands be executed through the sftp subsystem? There are a few ways to improve this situation. One is to set up and log into a dedicated user account on the server. This account could have tight restrictions on what it can read/write/execute. Another option is to run the sftp-server in a chroot environment such that only /logs is exposed.
So there you have it. Three flavors of jumping across a gateway with sshfs. sshfs is working pretty well for my current use case, and I am able to have access to the server’s /logs files on my workstation without having to bother the server’s sysadmin. This is suitable for a testing phase. However, it is a little slow and cumbersome to manage (more effort is needed to ensure the ssh connection and mount get automatically re-established after a network outage). I’m now thinking that having the server remote log directly to my workstation may be a more robust solution.