sshfs is wickedly handy for mounting remote directories on your local filesystem. Recently I needed to mount the /logs directory off a remote server so a program on my workstation could process log files in /logs.
The textbook command to do that would be:
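A sketch of what that direct command would look like, assuming the hostnames and mountpoint used later in this post (server.remotenet.org, /mnt/svrlogs):

```shell
# Hypothetical direct mount -- this only works when the workstation
# can reach the server directly:
sshfs me@server.remotenet.org:/logs /mnt/svrlogs
```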
The tricky part in this particular case is that the server is on a private network, so my workstation cannot reach it directly. I’m required to first ssh to a gateway machine and then ssh from there to the server.
----------------      -------------      -------------
| workstation  |      |           |      |  server   |
|              |------|  gateway  |------|           |
| /mnt/svrlogs |      |           |      |  /logs    |
----------------      -------------      -------------
I found three ways to work with this scenario. I’d love to hear of more ways and get feedback on these.
The first method is to pre-establish a forwarded port over SSH.
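Something along these lines, with 10101 as an arbitrary local port and the hostnames used elsewhere in this post:

```shell
# Forward local port 10101 to the server's ssh port (22) by way of
# the gateway.  -f backgrounds ssh after authentication; -N runs no
# remote command (we only want the forwarding).
ssh -f -N -L 10101:server.remotenet.org:22 me@gateway.remotenet.org
```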
Then I have sshfs connect to the localhost (127.0.0.1) address at port 10101.
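The mount command then looks something like:

```shell
# Mount the server's /logs through the locally forwarded port:
sshfs -p 10101 me@127.0.0.1:/logs /mnt/svrlogs
```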
For a more robust and long lived mount I initiated the ssh port forwarding in a screen session so it will persist after I log out. I use autossh to re-establish the unattended port forwarding should the ssh process become disconnected.
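Roughly, inside a screen session (autossh’s -M flag picks a monitoring port; 20000 here is arbitrary):

```shell
# Start a named screen session to hold the tunnel:
screen -S sshfs-tunnel
# Inside the screen session, let autossh maintain the forwarding:
autossh -M 20000 -N -L 10101:server.remotenet.org:22 me@gateway.remotenet.org
# Detach with Ctrl-a d; the forwarding persists after logout.
```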
To re-establish the mount when the workstation boots, I used an entry in /etc/fstab
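Reconstructing the entry from the options discussed below, it would look something like this (using the old sshfs#user@host fstab syntax):

```
sshfs#me@127.0.0.1:/logs /mnt/svrlogs fuse port=10101,uid=0,gid=0,umask=222,default_permissions 0 0
```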
I’m connecting to server.remotenet.org (via the localhost port forwarding) with my personal username (me). The ‘uid=0,gid=0,umask=222,default_permissions’ makes the mountpoint owned by root and read only by everyone (appropriate for my particular task at hand).
The second method I adapted from the fuse-ssh mailing list [1, 2].
I created a script ‘jump_server’ (made executable with chmod +x) containing the following — quoting "$@" rather than bare $* so the arguments sshfs supplies pass through intact:

#!/bin/sh
ssh me@gateway ssh "$@"
And test mounted with the command
(I had to provide a full path for the ssh_command to get this to work.)
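The test mount, with a hypothetical full path to the script:

```shell
# sshfs invokes jump_server instead of ssh; the script relays the
# arguments through the gateway to the server:
sshfs -o ssh_command=/home/me/bin/jump_server me@server.remotenet.org:/logs /mnt/svrlogs
```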
The /etc/fstab entry is a small modification from the previous.
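Something like this (the ssh_command path is a hypothetical example):

```
sshfs#me@server.remotenet.org:/logs /mnt/svrlogs fuse ssh_command=/home/me/bin/jump_server,uid=0,gid=0,umask=222,default_permissions 0 0
```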
The third method utilizes a set of SSH keys dedicated to sshfs usage. It’s somewhat cumbersome and perhaps should only be treated as an academic exercise, but it does have potential when you need to use keys with empty passphrases. Restrictions can be placed on how the keys are used on the remote machines to mitigate the security risks of unlocked keys. One of the limits placed on the key will be a forced command that is run whenever the key is used. We will use that to establish the ssh connection on behalf of sshfs.
Here are the steps to set this up.
On the workstation I create a new key, leaving the passphrase empty when prompted.
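For example (here -N "" supplies the empty passphrase non-interactively; omit it to be prompted instead):

```shell
# Create a dedicated key pair for sshfs, with an empty passphrase:
ssh-keygen -t rsa -N "" -f ~/.ssh/sshfs
```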
I copy the contents of ~/.ssh/sshfs.pub to my ~/.ssh/authorized_keys file on the gateway machine and add options to limit how the key can be used.
The ‘command=’ option specifies the ssh connection from the gateway to the server. Part of that connection string is the instruction to use my ~me/.ssh/sshfs_gw identity file which contains another key I created, also without a passphrase.
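A sketch of the resulting line on the gateway (the key material is elided, and the exact set of restriction options is my assumption):

```
command="ssh -i ~me/.ssh/sshfs_gw me@server.remotenet.org -s sftp",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3... me@workstation
```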
Now I copy the contents of ~/.ssh/sshfs_gw.pub (the public half of that key) to my ~/.ssh/authorized_keys file on the server and add options to limit how the key can be used.
The gateway initiates the ssh connection to the server. As an aside, under normal conditions the ‘-s sftp’ argument would start up the sftp subsystem on the server, but it’s actually not needed or used here. In this case the authorized_keys on the server has a forced command to start the sftp-server, so it is that explicit command, not the ‘-s sftp’ request, that is acting here.
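The corresponding line in the server’s authorized_keys might look like this (the sftp-server path varies by distribution; /usr/lib/openssh/sftp-server is a common Debian location, and again the restriction options are my assumption):

```
command="/usr/lib/openssh/sftp-server",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3... me@gateway
```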
[Be sure the ~/.ssh/known_hosts file on each machine is properly configured. That is, my workstation has an entry for gateway.remotenet.org, and gateway has an entry in its known_hosts for server.remotenet.org. Typically the first time you ssh to a machine you will be prompted to record that machine in known_hosts. If you do not set this up in advance, sshfs could hang at this prompt. You may not see or have access to the prompt, resulting in much head scratching over what is going wrong.]
Reviewing, I have an unlocked key on my workstation that gets me into the gateway without a password. As soon as I’m logged in the gateway will automatically execute the specified ssh connection to server.remotenet.org. This ssh connection uses an unlocked key on the gateway machine that gets me into the server without a password. When using that key, the server will automatically start up the sftp-server.
I now have an sftp server connection tunneled through the gateway between my workstation and the remote server. sshfs can now jump the gateway and mount the server directory.
Note that I’m making my connection to the gateway which is tunneling the /logs directory on the server.
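The mount command, then, is something like this (note that the gateway, not the server, appears as the host):

```shell
# The forced commands on gateway and server relay this through to
# the server's sftp-server, exposing the server's /logs:
sshfs -o IdentityFile=~/.ssh/sshfs me@gateway.remotenet.org:/logs /mnt/svrlogs
```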
[Update 2/21/2007: If I'm using an ssh-agent to store my normal default keys (e.g. id_rsa), then one of those keys gets used in preference to the one specified by IdentityFile. I've added "-oIdentitiesOnly=yes" to the command to disable the ssh-agent stored keys.
The /etc/fstab entry looks like:
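Reconstructing it from the options above, roughly:

```
sshfs#me@gateway.remotenet.org:/logs /mnt/svrlogs fuse IdentityFile=/home/me/.ssh/sshfs,IdentitiesOnly=yes,uid=0,gid=0,umask=222,default_permissions 0 0
```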
]
The downside of this technique is that changes to sshfs options have to be made in the authorized_keys file on the gateway machine. That’s fairly straightforward, after all they should not need to be adjusted outside any initial testing phase, but this, coupled with the casual appearance that the mount is off the gateway machine rather than the server, really obscures the system design. Personally I view this as a fairly serious drawback that warrants careful consideration before deploying a setup like this on a production system.
The ssh keys used in the third technique are unlocked, which incurs some security risks should the keys fall into the wrong hands. I’ve attempted to mitigate some of the risks by including use restrictions when adding them to authorized_keys on the remote machines. In particular, the forced command restricts the key to that singular usage. In this case, though, that usage is the establishment of an sftp connection with the server, which leaves the server open to reading and writing files (subject to the permissions my account has on the server). Hmmm, can commands be executed through the sftp subsystem? There are a few ways to improve this situation. One is to set up and log into a dedicated user account on the server; this account could have tight restrictions on what it can read/write/execute. Another option is to run the sftp-server in a chroot environment such that only /logs is exposed.
So there you have it. Three flavors of jumping across a gateway with sshfs. sshfs is working pretty well for my current use case, and I am able to have access to the server’s /logs files on my workstation without having to bother the server’s sysadmin. This is suitable for a testing phase. However, it is a little slow and cumbersome to manage (more work is needed to ensure the ssh connection and mount are automatically re-established after a network outage). I’m now thinking that having the server remote log directly to my workstation may be a more robust solution.

6 comments
February 11, 2007 at 6:52 am
Hameed Khan
Thanks, that was really very informative.
October 31, 2008 at 2:02 pm
Ken Fallon
Method 1 worked for me just now. Thanks this saves me a lot of time and grief. Subscribing to the blog.
March 13, 2009 at 1:08 am
LonW
Hi,
Excuse me, I’ve a similar problem, but a different situation.
Well, I wrote a little bash script to manage duplicity backups (powered by sshfs :P)
If I run ./backup.sh from the shell, everything works fine but, when I try to execute my backup.sh from crontab, I get a “read: connection reset by peer”.
I’m using keychain to keep the passphrase.
Backup Server Remote Server
192.168.0.248 -> SSHFS -> 192.168.0.111
I’ve done some research and now, things are better with:
sshfs -o ro,no-pty,IdentityFile=/home/duplicity/.ssh/id_dsa.pub,IdentitiesOnly=yes -C root@ip_address:/ $MPOINT
however auth.log still report:
Mar 13 01:59:07 myserver sshd[10138]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
Mar 13 01:59:07 myserver sshd[10084]: debug1: Forked child 10138.
Mar 13 01:59:07 myserver sshd[10138]: debug1: inetd sockets after dupping: 3, 3
Mar 13 01:59:07 myserver sshd[10138]: Connection from 192.168.0.248 port 53326
Mar 13 01:59:07 myserver sshd[10138]: debug1: Client protocol version 2.0; client software version OpenSSH_4.7p1 Debian-8ubuntu1.2
Mar 13 01:59:07 myserver sshd[10138]: debug1: match: OpenSSH_4.7p1 Debian-8ubuntu1.2 pat OpenSSH*
Mar 13 01:59:07 myserver sshd[10138]: debug1: Enabling compatibility mode for protocol 2.0
Mar 13 01:59:07 myserver sshd[10138]: debug1: Local version string SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2
Mar 13 01:59:07 myserver sshd[10138]: debug1: PAM: initializing for “root”
Mar 13 01:59:07 myserver sshd[10138]: debug1: PAM: setting PAM_RHOST to “vm2.lan”
Mar 13 01:59:07 myserver sshd[10138]: debug1: PAM: setting PAM_TTY to “ssh”
Mar 13 01:59:07 myserver sshd[10138]: Failed none for root from 192.168.0.248 port 53326 ssh2
Mar 13 01:59:07 myserver sshd[10138]: debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
Mar 13 01:59:07 myserver sshd[10138]: debug1: temporarily_use_uid: 0/0 (e=0/0)
Mar 13 01:59:07 myserver sshd[10138]: debug1: trying public key file /root/.ssh/authorized_keys
Mar 13 01:59:07 myserver sshd[10138]: debug1: matching key found: file /root/.ssh/authorized_keys, line 1
Mar 13 01:59:07 myserver sshd[10138]: Found matching DSA key: 29:91:b6:db:11:b7:2c:ad:3d:10:6a:5d:bc:ce:ba:69
Mar 13 01:59:07 myserver sshd[10138]: debug1: restore_uid: 0/0
Mar 13 01:59:07 myserver sshd[10138]: Failed password for root from 192.168.0.248 port 53326 ssh2
Mar 13 01:59:07 myserver last message repeated 2 times
Mar 13 01:59:07 myserver sshd[10138]: debug1: do_cleanup
Mar 13 01:59:07 myserver sshd[10138]: debug1: PAM: cleanup
Why? :/
Can you help me?
Greetings and Thanks from Italy ^_^
March 13, 2009 at 4:56 am
crashingdaily
I suspect your cron job doesn’t have access to your ssh-agent. That’s the most common culprit for ssh connections that work from the shell but not from cron. Or the key hasn’t been added to the agent. I haven’t used keychain (I presume you are referring to Gentoo’s keychain) so I can’t help with that, but I’m guessing it’s not doing its job. A quick test is to temporarily remove the passphrase from your id_dsa key and see if the cron job works then. If so, then poke a stick at keychain. Try ‘ssh-add -l’ in the cron job to list the keys the cron daemon has access to, to make sure your expected key is accessible.
March 13, 2009 at 10:04 am
LonW
Doh!
# m h dom mon dow command
21 10 * * * ssh-add -l && /home/duplicity/SCRIPTS/backup.sh
and i got:
“Could not open a connection to your authentication agent.”
damn..! :P
It usually works perfectly when I run the script from the command line with ./backup.sh
“Another problem with the default ssh-agent setup is that it’s not compatible with cron jobs. Since cron jobs are started by the cron process, they won’t inherit the SSH_AUTH_SOCK variable from their environment, and thus won’t know that a ssh-agent process is running or how to contact it. It turns out that this problem is also fixable. ” – From http://www.ibm.com/developerworks/library/l-keyc2/
I’m trying to resolve it :/
Thanks!
July 31, 2009 at 9:49 pm
Persisting ssh connections while changing networks « crashingdaily
[...] Linux, Tips | Tags: autossh, Linux, screen, SOCKS, ssh, sshfs I recently re-discovered autossh. I’ve been using it to persist mounted sshfs volumes but am only just now realizing it’s the scratch for some of [...]