Using AutoSSH for Remote Access Behind a Carrier Grade NAT


I have some equipment connected to the internet through an ISP that uses Carrier Grade NAT (CGNAT). While CGNAT helps stretch the dwindling IPv4 address space, it leaves this equipment unreachable from outside the ISP's network – not ideal for remote management.

To enable remote management, I need to set up an SSH tunnel from a server inside the remote network to an externally reachable host. In my case, that host is an AWS EC2 instance.


Since this equipment is roughly 1,000 miles away at a small family business with no IT support, I cannot just hop in my car and intervene when there are issues.

The tunnel must be resilient

It must be able to recover from network interruptions

Keeping a connection alive for an extended duration is intrinsically hard. Any number of things can happen between the connected equipment and the AWS EC2 instance where I am hosting the tunnel endpoint.

Possible failure modes

  • The AWS cloud could do any number of things to cause the connection to break. This is the nature of the cloud.
  • Routers between my equipment and AWS could drop or reappear. This is the nature of the internet.
  • The ISP could experience intermittent outages.
  • Zombie processes at the target port (on the terminating side) could prevent connections.

A resilient solution

AutoSSH is designed for exactly this use case: it detects dropped or timed-out connections and re-establishes the tunnel.

The remote server and EC2 instance must be resilient

If the server or EC2 instance must be rebooted, it must reconnect automatically.

Possible failure modes

  • Critical software patches may cause a reboot.
  • A power outage could cause a reboot.
  • Remote management may require a reboot.
  • Finally, and perhaps most importantly: if the connection breaks and does not recover on its own, it is very easy to ask a family member to reboot the remote server.

A resilient solution

The remote equipment must initiate a tunneling session after rebooting. systemd is a great fit for this requirement: it controls when the tunneling session starts, and it can restart the service should AutoSSH exit.


To set this up, I am using an EC2 instance as my jump host. Your jump host can be any server reachable from both the remote server and the internet.


  1. On the remote server, create bash script: create-ssh-tunnel

  2. On the remote server, create service definition: autotunnel.service

  3. On the jump host, configure SSH to allow port forwarding on all interfaces.

  4. On the remote server, install the autotunnel.service

Create bash script: create-ssh-tunnel

As root, create this bash script and make it executable (chmod +x create-ssh-tunnel). For this write-up, I will use the path /root/services/create-ssh-tunnel.


#!/bin/bash

# The username used to connect to the EC2 instance.
declare username=foo

# The IP address or DNS name of the EC2 instance.
# (Placeholder value – substitute your own jump host.)
declare jump_host=ec2.example.com

# The absolute path to the ssh key for connecting to the EC2 instance.
declare ssh_key_path=/home/foo/foo-ssh-key.pem

# The port we will bind to on the jump host 
declare port=9122

# Deal with zombies or other processes bound to our target port.
# Connect to the AWS endpoint and kill any process that may already be
# bound to the target port. Note the escaped \$(...) and \$process, which
# ensure the command substitution runs on the jump host, not locally.
ssh -i "$ssh_key_path" "$username@$jump_host" "\
  for process in \$(sudo lsof -i :$port | grep LISTEN | sed -E 's/ +/ /g' | cut -d' ' -f2); do \
    kill -9 \$process; \
  done"

# Re-establish the tunnel. The explicit 0.0.0.0 bind address pairs with
# the "GatewayPorts clientspecified" setting on the jump host (see below),
# exposing the forwarded port on all of the jump host's interfaces.
autossh -M 0 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o StrictHostKeyChecking=no -o ConnectTimeout=5 -o ExitOnForwardFailure=yes \
    -nNTv -i "$ssh_key_path" -R "0.0.0.0:$port:localhost:22" "$username@$jump_host"
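As a side note, the grep/sed/cut pipeline in the kill loop can be opaque at first glance. Here is the same PID extraction applied in isolation to a hypothetical line of lsof output (the sample text and PID 4242 are invented for illustration):

```shell
# Hypothetical sample of `lsof -i :9122` output on the jump host:
sample='COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    4242 root    9u  IPv4  12345      0t0  TCP *:9122 (LISTEN)'

# Keep only LISTEN lines, squeeze runs of spaces to single spaces,
# then take the second space-separated column (the PID).
pid=$(printf '%s\n' "$sample" | grep LISTEN | sed -E 's/ +/ /g' | cut -d' ' -f2)
echo "$pid"
```

The space-squeezing step matters because lsof pads its columns with a variable number of spaces, which would otherwise throw off cut's field numbering.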

Create service definition autotunnel.service

As root, create this service definition. For this write-up, I will use the path /root/services/autotunnel.service.

[Unit]
Description=AutoSSH Tunnel
After=network.target

[Service]
ExecStart=/root/services/create-ssh-tunnel
# Restart the tunnel whenever AutoSSH exits, for any reason.
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Configure SSHD on the Jump Host

This step is optional. If you want the jump host to be transparent, meaning you can connect directly to the remote server without having to create a login session on the jump host, then you must expose the port forwarding on all interfaces of the jump host. Otherwise, the default configuration will only allow the port-forward on the loopback interface.

Update the GatewayPorts setting in /etc/ssh/sshd_config on the jump host, then restart sshd (systemctl restart sshd) so the change takes effect. Note that sshd_config (the server configuration) is the correct file here; /etc/ssh/ssh_config configures the ssh client and has no effect on how forwarded ports are bound.

GatewayPorts clientspecified

The GatewayPorts setting specifies whether remote hosts are allowed to connect to ports forwarded on the jump host. By default, sshd binds remote port forwardings to the loopback address, but we want the port bound to the wildcard address (all interfaces). This value should be either ‘yes’ or ‘clientspecified’. With ‘yes’, every remote forward is bound to the wildcard address automatically; with ‘clientspecified’, the client must name the bind address explicitly in its -R option (e.g. -R 0.0.0.0:9122:localhost:22).

If you choose not to set this, then you can still connect to your remote server, but you must do so from a session within the jump host.
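Once the tunnel is up and GatewayPorts is exposed, a client-side SSH configuration entry makes connecting a one-word affair. A sketch of a ~/.ssh/config entry for your workstation – the alias, host name, and username below are placeholders, substitute your own:

```
# The jump host (placeholder name) and the port forwarded by the tunnel.
Host remote-equipment
    HostName ec2.example.com
    Port 9122
    # A user on the remote server, not on the jump host.
    User foo
```

With this in place, `ssh remote-equipment` drops you straight onto the remote server from anywhere on the internet.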

Install the autotunnel.service

Navigate to the directory where you created autotunnel.service. For this write-up, that path is /root/services. Because the unit file lives outside systemd's search path, enable it by absolute path – systemctl will link it into place.

cd /root/services
systemctl enable /root/services/autotunnel.service   # Link and enable the service definition
systemctl start autotunnel.service                   # Start the service