Remote SSH Sessions

Updated April 9, 2026

Overview

When you run Claude Code on a remote machine via SSH, the permission hooks need a way to reach the Crystl bridge running on your local machine. Crystl solves this with an automatic SSH wrapper that sets up reverse tunnels, installs hooks on the remote, and tracks the session for its entire lifecycle.

Once a gem is SSH’d to a remote, Crystl treats it like a first-class remote workspace: shards inherit the connection, git worktrees run on the remote host, and orphaned branches can be reattached from across the wire.

Enable automatic tunneling in Settings > Claude > Remote SSH:

  1. Open Settings (Cmd+,)
  2. Go to the Claude page
  3. Toggle Auto-tunnel for SSH sessions on

Once enabled, every ssh command you run in a Crystl terminal is transparently wrapped by a shell function that sets up reverse tunnels and installs Claude Code hooks on the remote. Approval panels appear locally just like they do for local Claude sessions.

New terminal sessions pick up the setting immediately.

How the ssh() Wrapper Works

Crystl injects a zsh ssh() function into every terminal via ZDOTDIR. When you type ssh user@host, the wrapper intercepts the call and runs command ssh with extra flags. Here’s what it does on every invocation:

1. Reverse tunnels

Two -R flags are added — one Unix socket, one TCP port — so the remote has multiple ways to reach the local bridge:

ssh \
    -R /tmp/crystl-bridge.sock:localhost:19280 \
    -R 19281:127.0.0.1:19280 \
    -o StreamLocalBindUnlink=yes \
    ...

The Unix socket is the primary path; the TCP forward is a fallback for hosts that don’t allow StreamLocalForward. The remote hook script tries them in order (configured address → Unix socket → SSH_CLIENT IP → TCP tunnel → localhost).
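That fallback order can be sketched as a small helper that builds the candidate list. All function and parameter names here are illustrative (the real hook is a bash script, not Python):

```python
def bridge_candidates(configured=None, ssh_client_ip=None, port=19280,
                      tunnel_port=19281, sock="/tmp/crystl-bridge.sock"):
    """Return bridge addresses in the order the remote hook tries them:
    configured address -> Unix socket -> SSH_CLIENT IP -> TCP tunnel -> localhost.
    Illustrative sketch, not Crystl's actual code."""
    candidates = []
    if configured:
        candidates.append(("tcp", f"{configured}:{port}"))
    candidates.append(("unix", sock))
    if ssh_client_ip:
        candidates.append(("tcp", f"{ssh_client_ip}:{port}"))
    candidates.append(("tcp", f"127.0.0.1:{tunnel_port}"))  # reverse TCP tunnel
    candidates.append(("tcp", f"127.0.0.1:{port}"))         # last-resort localhost
    return candidates
```

Each candidate is tried with a short connect timeout; the first successful POST wins.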

2. ControlMaster multiplexing

Every SSH session opens a ControlMaster socket at /tmp/crystl-ssh-$$-%C, kept alive for 30 seconds after the last client disconnects:

-o ControlMaster=auto
-o ControlPath=/tmp/crystl-ssh-$$-%C
-o ControlPersist=30

This lets Crystl reuse the existing authenticated connection for SCP, file relay, and remote git operations without re-prompting for credentials.
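Combining the tunnel and multiplexing flags, the wrapper's argument assembly might look roughly like this Python sketch of the zsh function (helper name and structure are illustrative):

```python
import os

def wrapped_ssh_args(user_host, pid=None, bridge_port=19280,
                     tunnel_port=19281, sock="/tmp/crystl-bridge.sock"):
    """Assemble the extra flags the ssh() wrapper adds (illustrative sketch)."""
    pid = pid if pid is not None else os.getpid()  # stands in for zsh's $$
    control_path = f"/tmp/crystl-ssh-{pid}-%C"     # %C is expanded by ssh itself
    return [
        "ssh",
        "-R", f"{sock}:localhost:{bridge_port}",        # Unix-socket reverse tunnel
        "-R", f"{tunnel_port}:127.0.0.1:{bridge_port}", # TCP fallback tunnel
        "-o", "StreamLocalBindUnlink=yes",
        "-o", "ControlMaster=auto",
        "-o", f"ControlPath={control_path}",
        "-o", "ControlPersist=30",
        user_host,
    ]
```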

3. Claude Code hook installation

Before the interactive shell starts, a Python one-liner runs on the remote that:

  • Writes ~/.claude/crystl-hook.sh — a bash script that POSTs hook payloads to the bridge (trying the configured address, Unix socket, SSH_CLIENT IP, TCP tunnel, and localhost in sequence, with a 2-second connect timeout)
  • Merges hook entries into ~/.claude/settings.json for PermissionRequest, Stop, PostToolUse, SubagentStop, and Notification
  • Removes any stale invalid hook types left over from older installs
  • Sets CRYSTL_GEM and CRYSTL_SHARD env vars so the remote can identify itself when posting to the bridge

Installation is idempotent — reconnecting to the same host doesn’t add duplicate entries.
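The idempotency boils down to checking for an existing entry before appending. A minimal sketch of that merge, assuming a simplified entry shape (the exact settings.json schema is Claude Code's and is not reproduced here):

```python
import json

HOOK_CMD = "~/.claude/crystl-hook.sh"
EVENTS = ["PermissionRequest", "Stop", "PostToolUse", "SubagentStop", "Notification"]

def merge_hooks(settings: dict) -> dict:
    """Merge Crystl hook entries into a settings dict without duplicates.
    The entry shape below is illustrative, not the exact schema."""
    hooks = settings.setdefault("hooks", {})
    for event in EVENTS:
        entries = hooks.setdefault(event, [])
        # Skip the event if an existing entry already references the hook script
        if not any(HOOK_CMD in json.dumps(entry) for entry in entries):
            entries.append({"hooks": [{"type": "command", "command": HOOK_CMD}]})
    return settings
```

Running the merge twice yields identical settings, which is why reconnecting never stacks duplicates.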

4. OSC 7771 session tracking

The wrapper emits an OSC 7771 escape sequence before running command ssh:

\033]7771;ssh|user@host|/tmp/crystl-ssh-$$-%C\007

TerminalSession registers an OSC 7771 handler that stores the host and ControlMaster path on the session. This is how Crystl knows a shard is remote, which ControlMaster to talk to for SCP operations, and which host to reconnect new shards to.
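Both sides of that handshake can be sketched in a few lines (helper names are illustrative; the emitter is really the zsh wrapper and the parser lives in TerminalSession):

```python
OSC, BEL = "\x1b]", "\x07"  # OSC introducer and BEL terminator

def emit_osc7771(host: str, control_path: str) -> str:
    """Build the sequence the wrapper prints before running the real ssh."""
    return f"{OSC}7771;ssh|{host}|{control_path}{BEL}"

def parse_osc7771(payload: str) -> dict:
    """Parse the payload an OSC 7771 handler receives (text after '7771;')."""
    kind, host, control_path = payload.split("|", 2)
    return {"kind": kind, "host": host, "control_path": control_path}
```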

Configure Hooks Manually (optional)

If you prefer not to use the auto-tunnel, you can run the hook installer yourself. In Settings > Claude, click “Copy Remote Setup Command” — this copies a one-liner to your clipboard. SSH into your remote machine and paste it. It installs hooks to ~/.claude/settings.json with a fast-fail timeout.

You’ll also need to forward the bridge port:

ssh -R 19280:127.0.0.1:19280 user@remote-host

Or add it permanently to your ~/.ssh/config:

Host myserver
    HostName remote-host.example.com
    User myuser
    RemoteForward 19280 127.0.0.1:19280

Shard Directory Inheritance

Clicking “+” on the shard bar inside a remote gem creates a new shard that inherits everything about the current one:

  • SSH session — the new shard runs ssh user@host automatically, reusing the ControlMaster connection (no re-auth)
  • Working directory — after the SSH handshake, it cds into the same remote path the current shard is sitting in

Crystl figures out the remote cwd by reading the prompt from SwiftTerm’s buffer. It scans up to 3 rows above the cursor looking for a line that contains the SSH hostname and matches the pattern user@host:path# (or $ or %). The hostname match is important — it prevents local prompts (from tmux, screen, or nested shells) from being mistaken for remote ones.

// Simplified — see TerminalWindow+Sessions.swift:parseRemoteCwdFromPrompt
for row in stride(from: cursorRow, through: max(0, cursorRow - 3), by: -1) {
    let line = terminal.getLine(row: row).translateToString()
    guard line.contains(hostname) else { continue }
    // match "user@host:path#"
    // ...
}

If the cwd can’t be parsed (unusual prompt format, or the prompt has scrolled off), the new shard lands in the remote home directory.
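In Python terms, the scan-and-match step might look like this (a sketch under the same rules; the actual parser is the Swift shown above):

```python
import re

def parse_remote_cwd(lines, hostname):
    """Scan recent prompt lines for 'user@host:path' followed by #, $, or %.
    Lines are ordered nearest-the-cursor first, mirroring the Swift loop."""
    pat = re.compile(r"\S+@\S+?:(\S+)\s*[#$%]")
    for line in lines:
        if hostname not in line:
            continue  # skip local prompts (tmux/screen/nested shells)
        match = pat.search(line)
        if match:
            return match.group(1)
    return None  # caller falls back to the remote home directory
```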

Remote Git Worktrees

Option+clicking “+” on the shard bar of a remote gem creates a git worktree on the remote host, not locally. The new shard SSHs to the same host and cds into the worktree path.

This is the remote mirror of local isolated shards — multiple agents can work on the same remote project in parallel without stepping on each other’s files.

Remote: ~/myapp (main branch)
├── diamond     — main working directory (shared, at ~/myapp)
├── ⎇ aquamarine — ~/myapp/.crystl/worktrees/aquamarine (branch: crystl/aquamarine)
└── ⎇ sapphire  — ~/myapp/.crystl/worktrees/sapphire   (branch: crystl/sapphire)

Lifecycle

RemoteGitWorktree mirrors the local GitWorktree API but executes every git command over SSH via the shared RemoteSession (same ControlMaster connection as the parent shard). All operations happen remotely:

Operation         Runs on remote
Create            git worktree add -b crystl/{name} .crystl/worktrees/{name}
Symlink configs   CLAUDE.md, AGENTS.md, .mcp.json, .claude/ into the worktree
Merge             git merge crystl/{name} into the parent branch
Rebase            git rebase main inside the worktree
Close             git worktree remove and optionally git branch -D

No local state is created — everything lives on the remote under .crystl/worktrees/. The Isolation panel’s merge/rebase actions run over the same SSH connection.
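Concretely, "over SSH via the shared connection" means each git operation becomes an ssh invocation that reuses the ControlMaster socket. A hypothetical helper (Crystl's actual RemoteSession API is not shown):

```python
import shlex

def remote_git(user_host: str, control_path: str, repo: str, *git_args: str):
    """Build an ssh argv that runs one git command in a remote repo,
    reusing the existing ControlMaster socket (so no re-auth prompt)."""
    remote_cmd = f"cd {repo} && git " + " ".join(shlex.quote(a) for a in git_args)
    return ["ssh", "-o", f"ControlPath={control_path}", user_host, remote_cmd]
```

Creating a worktree, for example, would run git worktree add -b crystl/{name} .crystl/worktrees/{name} inside the remote repo this way.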

Close prompts

Closing a remote isolated shard with commits shows the same prompt as local worktrees: Merge to Main / Keep Branch / Discard. The selected action is executed remotely. Closing a gem with multiple unmerged remote branches shows Keep Branches / Discard All / Cancel.

Reuse and recovery

If you reconnect a shard whose worktree directory still exists and is functional, Crystl reuses it as-is, preserving any uncommitted changes. If only the branch exists — the directory was cleaned up — the worktree is recreated from the branch. Broken worktree dirs are removed and rebuilt.

Remote Orphaned Branches

The Isolation panel’s OPEN section lists orphaned crystl/* branches — worktree branches from previous sessions that still have commits or uncommitted changes. For remote gems, this now includes orphans on the remote host.

Clicking a remote orphan reopens it: a new shard is created, SSHs to the host, and RemoteGitWorktree.create() either reuses the existing worktree directory or rebuilds it from the branch. Uncommitted changes survive.

Remote orphans are queried asynchronously (RemoteGitWorktree.orphanedBranches() runs on a background queue) so the Isolation panel opens instantly even when the SSH round-trip is slow. The list populates in place once the query returns.

Crystl Quest Parties over SSH

Crystl Quest parties work fully over SSH. When you start a quest from a remote shard, all four mode combinations are supported — open and sealed, local and remote. The quest setup panel detects the SSH session, switches the directory picker to the remote filesystem, and creates agent shards that inherit the SSH connection. In sealed mode, worktrees are created on the remote host using RemoteGitWorktree. Approval panels and chat messages relay through the same tunnel described above.

See Starting a Quest > Quest over SSH for the full walkthrough.

File Relay

When a session is connected via SSH with auto-tunnel enabled, Crystl bridges local resources to the remote:

Drag & Drop / Image Paste

Drag a file or paste an image into an SSH terminal — Crystl SCPs it to /tmp on the remote and types the remote path. No manual file transfer needed. Uses the SSH ControlMaster socket opened by the ssh() wrapper, so there’s no extra auth prompt.

Click-to-Open Remote Files

Click a file path in an SSH session and Crystl downloads it via SCP (through the same ControlMaster) to a local temp directory, then opens it in your default editor.

Gem Settings over SSH

When configuring a gem that’s running in an SSH session, CLAUDE.md templates, agent files, and .mcp.json configs are written directly to the remote filesystem.

Relay Endpoints

The bridge server exposes endpoints for remote-to-local communication through the SSH tunnel:

  • POST /relay/image — get the local clipboard image as base64
  • POST /relay/open — open a local file in the default editor
  • GET /relay/clipboard — read the local clipboard text

Bridge Token Isolation

Each bridge run writes an auth token file and listens on its own port/socket. Crystl and Stone (the dev build) use entirely separate identities so they can run side-by-side without colliding:

Identity              Crystl                    Stone
Bridge port           19280                     19380
SSH tunnel TCP port   19281                     19381
Unix socket           /tmp/crystl-bridge.sock   /tmp/stone-bridge.sock
Token file            ~/.crystl-bridge-token    ~/.stone-bridge-token

Hooks on the remote always POST with the parent process’s CRYSTL_BRIDGE_TOKEN, so a Stone-spawned SSH session can’t accidentally route approval requests to a Crystl bridge. This only matters if you’re running both builds at once — most users can ignore it. Guild members with access to Stone nightlies should know the tokens and sockets are fully isolated.

How It Works (end-to-end)

Claude Code hooks POST to the bridge whenever a tool needs permission. On a remote machine, the hook script tries several paths in order until one succeeds:

Remote: Claude Code --> ~/.claude/crystl-hook.sh
                              |
                              v
                   1. configured address
                   2. Unix socket via -R /tmp/crystl-bridge.sock
                   3. SSH_CLIENT IP on port 19280
                   4. TCP forward via -R 19281:127.0.0.1:19280
                   5. localhost:19280 fallback
                              |
                         SSH tunnel
                              |
Local:  Crystl BridgeServer  <-- shows approval panel

Every hop carries the auth token (Authorization: Bearer ...) and gem/shard headers (X-Crystl-Gem, X-Crystl-Shard) so the bridge can route the decision back to the right panel.

Troubleshooting

Approval panels don’t appear over SSH:

  • Make sure the auto-tunnel is enabled in Settings > Claude
  • Check that nothing else on the remote is bound to port 19280 or to /tmp/crystl-bridge.sock
  • If connecting to the same host from multiple Crystl shards, only the first tunnel binds the port — the others fall through to the Unix socket or SSH_CLIENT IP path

“Address already in use” warning:

  • Another SSH session may already be forwarding port 19280 or the Unix socket to this remote host
  • Harmless — the first tunnel is still active, and the remote hook script tries all paths

Claude Code hooks not firing on remote:

  • Normally the auto-tunnel installs hooks automatically on every connect
  • If you connected without the wrapper, use the “Copy Remote Setup Command” button in Settings > Claude and run it manually on the remote
  • Check that ~/.claude/crystl-hook.sh exists and is executable (ls -la ~/.claude/)

New shard on remote gem doesn’t land in the right directory:

  • The prompt parser needs the SSH hostname to appear in the prompt line — unusual prompt formats (e.g. pure path, no user@host) aren’t detected
  • Fix: include the hostname in your remote PS1, or manually cd after the new shard connects

Remote worktree operations hang:

  • ControlMaster connections have a 30-second persistence window; if the original shard exited long ago, the shared socket may be gone
  • The operation will re-authenticate, which may prompt for keys — this is expected