Hacker News

SSH has no Host header

151 points by apitman, yesterday at 5:18 AM | 144 comments

Comments

miyuru, yesterday at 6:08 AM

> We cannot issue an IPv4 address to each machine without blowing out the cost of the subscription. We cannot use IPv6-only as that means some of the internet cannot reach the VM over the web. That means we have to share IPv4 addresses between VMs.

Give users the option to use IPv6 only, and if a user needs legacy IP, add it as an additional cost and move on.

Trying to keep v4 at the same cost level as v6 is not a thing we can solve. If it was we wouldn't need v6.

morpheuskafka, yesterday at 5:52 AM

They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either. With only one routable IP for the host, all traffic on a port shared by VMs has to go to a server on the host first (unless you route based on port or source IP with iptables, but that is not hostname based).

The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM. The client can't ever send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header, only because nginx is on the host.

What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:

ssh [email protected] becomes: ssh -J [email protected] user@vm1

And just give jumpusr no host permissions and a shell set to only allow ssh.
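Server-side, that kind of jump account can be locked down in sshd_config so it can only forward connections, never get a shell. A sketch, assuming a hypothetical jumpusr account:

```
# sshd_config sketch (hypothetical "jumpusr" account): allow the TCP
# forwarding that ProxyJump needs, but refuse shells, commands, and TTYs.
Match User jumpusr
    AllowTcpForwarding yes
    ForceCommand /usr/bin/false
    PermitTTY no
    X11Forwarding no
    PermitTunnel no
```

ProxyJump (-J) only opens a direct-tcpip forwarding channel on the jump host, so the forced /usr/bin/false never interferes with the hop itself.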

dlenski, yesterday at 5:33 AM

SSH is an incredibly versatile and useful tool, but many things about the protocol are poorly designed, including its essentially made-up-as-you-go-along wire formats for authentication negotiation, key exchange, etc.

In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys in large part because they're stored "bare" rather than encapsulated into a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted:

https://cryptographycaffe.sandboxaq.com/posts/survey-public-....

c45y, yesterday at 6:10 AM

I would love it if more systems just understood SRV records, hostname.xyz = 10.1.1.1:2222

So far it feels like only LDAP really makes use of it, at least with the tech I interact with
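For reference, an SRV record's target must be a hostname rather than a bare IP, but the host:port mapping above translates naturally. A hypothetical zone fragment:

```
; Hypothetical zone fragment: "ssh hostname.xyz" could resolve to
; 10.1.1.1:2222 if clients consulted _ssh._tcp SRV records
; (stock OpenSSH does not).
_ssh._tcp.hostname.xyz.  3600 IN SRV 0 0 2222 ssh1.hostname.xyz.
ssh1.hostname.xyz.       3600 IN A   10.1.1.1
```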

hnarn, yesterday at 4:01 PM

There are about 60k ports you can choose from for each IP, so I don’t understand why you can’t just give one user 1.2.3.4:1001 and the other 1.2.3.4:1002 and route that.

Setting it up like this where you just assume:

> The public key tells us the user, and the {user, IP} tuple uniquely identifies the VM they are connecting to.

Seems like begging for future architectural problems.

krautsauer, yesterday at 5:52 AM

SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?

thaumaturgy, yesterday at 6:32 AM

Yeah, I ran into this problem too. I tried a few different hacky solutions and then settled on using port knocking to sort inbound ssh connections into their intended destinations. Works great.

I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as you would for any other environment. There's an option in sshd that allows you to run a script during a connection request, so you can almost juggle connections according to the username -- if I remember right; it's been several years since I tried that -- but it's terribly fragile, tends not to pass TTYs properly, and basically everything hates it.

But, set up knockd, and then generate a random knock sequence for each individual user and automatically update your knockd config with that, and each knock sequence then (temporarily) adds a nat rule that connects the user to their destination container.

When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.

Been using this for a few years and no problems so far.
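A sketch of what one per-user knockd entry might look like; the sequence, ports, and container address are all made up, and %IP% is knockd's placeholder for the knocking client:

```
# /etc/knockd.conf fragment -- illustrative only
[user-alice]
    sequence    = 7123,8456,9789
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to-destination 10.0.3.11:22
```

knockd's start_command/stop_command pair with cmd_timeout can make the NAT rule temporary, matching the "(temporarily) adds a nat rule" behavior described above.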

otterley, yesterday at 5:50 AM

This is a clever trick, but I can’t help but wonder where it breaks. There seems to be an invariant that the number of backends a public key is mapped to cannot exceed the number of proxy IPs available. The scheme probably works fine if most people are only using a small number of instances, though. I assume this is in fact the case.

Another thing that just crossed my mind is that the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
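The invariant can be sketched as a toy routing table; this is pure illustration of the constraint, not the service's actual code:

```python
# Toy model of pubkey-based SSH routing: each {pubkey, proxy IP} pair maps
# to exactly one backend, so a single key can address at most
# len(proxy_ips) distinct backends.
class KeyRouter:
    def __init__(self, proxy_ips):
        self.proxy_ips = list(proxy_ips)
        self.routes = {}  # (pubkey, proxy_ip) -> backend

    def assign(self, pubkey, backend):
        """Pick a free proxy IP for this key; fail once the key is exhausted."""
        used = {ip for (k, ip) in self.routes if k == pubkey}
        free = [ip for ip in self.proxy_ips if ip not in used]
        if not free:
            raise RuntimeError("key already mapped to every proxy IP")
        ip = free[0]
        self.routes[(pubkey, ip)] = backend
        return ip  # the address this user should ssh to for this backend

    def lookup(self, pubkey, proxy_ip):
        """What the proxy does at connection time."""
        return self.routes[(pubkey, proxy_ip)]
```

With two proxy IPs, a third assignment for the same key fails, which is exactly the invariant described above.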

mritzmann, yesterday at 9:31 PM

Why not "ssh [email protected]" (naming based on the example in the blog)? That way, you would have the "Host header" as username.

elric, yesterday at 9:51 AM

Two options I use:

1. Client side: ProxyJump, by far the easiest

2. Server side: use ForceCommand, either from within sshd_config or .ssh/authorized_keys, based on username or group, and forward the connection that way. I wrote a blogpost about this back in 2012 and I assume this still mostly works, but it probably has some escaping issues that need to be addressed: https://blog.melnib.one/2012/06/12/ssh-gateway-shenanigans/
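A minimal sketch of the authorized_keys variant; the key blob, user, and VM name are hypothetical:

```
# Gateway's ~/.ssh/authorized_keys: pin this key to one destination VM.
# "AAAA..." stands in for a real key blob.
command="ssh -q -t user@vm1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... alice@laptop
```

The escaping issues mentioned above bite mainly when the client's original command ($SSH_ORIGINAL_COMMAND) has to be passed through to the inner ssh.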

dspillett, yesterday at 11:22 AM

The workaround I use for my own stuff is to have a single jump host that listens on the public IPv4 address and connect to the others from there. I can still just ssh username@namedhost (which could be [email protected], though I usually give short aliases in .ssh/config) without extra command-line options, thanks to the one-time config of adding a host entry in .ssh/config listing the required jump host and internal IP address. Connecting this way (rather than alternatives like manual multi-hop) means all my private keys stay local rather than needing to be on the jump host, without needing to muck around with a key agent.

I even do this despite having a small range of routable IPv4s pointing at home, so I don't really need to most of the time. And as an obscurity measure the jump/bastion host can only be contacted by certain external hosts too, though this does still leave my laptop as a potential single point of security failure (and of course adds latency), and anyone or any bot trying to get in needs to jump through a few hoops to do so.
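The one-time client config described here looks roughly like this (the alias, bastion name, and internal address are hypothetical):

```
# ~/.ssh/config -- reach an internal host through the bastion with plain
# "ssh shorthost"; private keys stay on the local machine.
Host shorthost
    HostName 10.0.0.5              # internal address, as seen from the bastion
    User username
    ProxyJump bastion.example.com  # OpenSSH >= 7.3; older: ProxyCommand ssh -W %h:%p bastion.example.com
```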

binarin, yesterday at 6:28 AM

In kinda the same situation, I was using the username for host routing. The real user was determined by the principal in the SSH certificate -- so the proxy didn't even need to know the users' concrete certificates; it was even easier than keeping track of user SSH keys.

Certificate signing was done by a separate SSH service: you connected to it with SSH agent forwarding enabled, passed a 2FA challenge, and got a signed cert injected into your agent.

niobe, yesterday at 10:05 AM

I had to reread the first paragraph several times before I understood -- the author was misusing a term.

> unexpected-behaviour.exe.dev

That is not a URL; it's a fully qualified domain name (FQDN), often referred to as just 'hostname'.

GoblinSlayer, yesterday at 4:11 PM

The Host header is a poorly designed built-in SOCKS5. Use the proper SOCKS5 protocol instead. Its intended purpose is proxying access to inner networks, which became ubiquitous with the docker/kube/microservice thing.

3r7j6qzi9jvnve, yesterday at 5:47 AM

I wonder if it's something like https://github.com/cea-hpc/sshproxy that sits in the middle (with decryption and everything) or if they could do this without setting up a session directly with the client.

Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.

EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper

will try to remember to look later.

ulrikrasmussen, yesterday at 6:42 AM

Wouldn't a much simpler approach be to have everyone log in to a common server which sits on a VPN with all the VMs? It introduces an extra hop, but this is a pretty minor inconvenience and can be scripted away.

loktarogar, yesterday at 9:24 AM

I'm building something that has to share a pool of phone numbers for SMS between many businesses with many clients and the architecture I had planned out looks a lot like this - client gets assigned a phone number from the pool for all its interactions with a certain business.

Good write-up of a tricky problem, and I'm glad to get real-world validation of the solution I was considering.

Shorel, yesterday at 2:29 PM

We all should do our part to move to IPv6, the sooner, the better.

thomashabets2, yesterday at 7:53 AM

While not transparent to users, I'd just use SSH ProxyCommand like I did in https://github.com/ThomasHabets/huproxy

Not exactly what I built it for, but it'll do the job here too, and it's able to connect to private addresses on the server side.

gorgoiler, yesterday at 8:59 AM

Hosting DNS on the same machine as your application opens up all sorts of nice hacks. For example, you can add domain names to nf_conntrack by noticing the client resolving example.com to 10.0.0.1, then making a connection to 10.0.0.1 tcp/443. This was how I made my own “little snitch” like tool.

hamandcheese, yesterday at 7:28 AM

This would be a great use case of SSH over HTTP/3[0]. Sadly it doesn't seem to have gained traction.

[0]: https://www.ietf.org/archive/id/draft-michel-ssh3-00.html

dwedge, yesterday at 9:15 AM

This is a problem I've come up against a few times. Enforcing a different key per server would also help solve it in their case, but really I just want a haproxy plugin that allows selecting a backend based on the public key

kazinator, yesterday at 6:27 PM

> SSH, on the other hand, has no equivalent of a Host header.

SSH cannot multiplex to different servers on the same host:port. But you can use multiple ports and forwarding.

You could give each machine a port number instead of a host name:

   ssh-proxy:10001
   ssh-proxy:10002
When you ssh to "ssh-proxy:10002" ("ssh -p 10002 ssh-proxy" with your OpenSSH client that doesn't take host:port, sigh), it forwards that to wherever the 10002 machine currently is.

It would be interesting to know why they rejected the port number solution, but the only hit for "port" in the article is in the middle of the word "important" in the sentence:

But uniform, predictable domain name behavior is important to us, so we took the time to build this for exe.dev.

You can have uniform, predictable domain + port behavior. Then you don't need a smart proxy which routes connections based on identities like public keys. Just manipulation of standard port forwarding (e.g. iptables).
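The port-number scheme amounts to a handful of static DNAT rules on the proxy; the addresses and ports here are invented:

```
# On ssh-proxy: one public port per VM, no protocol-aware proxy needed.
iptables -t nat -A PREROUTING  -p tcp --dport 10001 -j DNAT --to-destination 10.0.3.1:22
iptables -t nat -A PREROUTING  -p tcp --dport 10002 -j DNAT --to-destination 10.0.3.2:22
# Rewrite the source so replies flow back through the proxy.
iptables -t nat -A POSTROUTING -d 10.0.3.0/24 -p tcp --dport 22 -j MASQUERADE
```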

ksk23, yesterday at 8:41 AM

Once hooked into PAM to have a central "ssh box" mount remote boxes' filesystems on user connect. Just need a lookup table: which username belongs to which customer's server(s). Ezpz.

Eikon, yesterday at 5:33 AM

I'm not sure I understand what this achieves compared to just assigning an IP + port per VM?

est, yesterday at 7:27 AM

jump servers, it's a thing and a good security measure.

YooLc, yesterday at 6:47 AM

Why not include the header in the username field :)

Take a look at this repo: https://github.com/mrhaoxx/OpenNG

It allows you to connect to multiple hosts using the same IP, for example:

ssh [email protected] -> hostA

ssh [email protected] -> hostB

TZubiri, yesterday at 6:37 AM

It's hard to think of a clearer example for the concept of Developer Experience.

One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.

spwa4, yesterday at 10:23 AM

True, BUT you can use ProxyCommand in ssh config, along with wildcard matches, to make this sort of thing very practical, at the cost of a single config change.

fcpk, yesterday at 8:29 AM

I mean, it works... but it's really ghetto. You have to handle username collisions (or enforce unique usernames). IPv4 should be non-free, and that'd cover the costs...

snvzz, yesterday at 6:33 AM

The solution is ipv6.

XorNot, yesterday at 6:22 AM

The solution to this is TLS SNI redirecting.

You can front a TLS server on port 443 and then redirect without decrypting the connection based on the SNI name to your final destination host.
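With nginx's stream module, for example, SNI-based passthrough is a few lines; the hostnames and backend addresses here are hypothetical:

```
# nginx stream config sketch: route by SNI without terminating TLS.
stream {
    map $ssl_preread_server_name $backend {
        vm1.example.com 10.0.3.1:443;
        vm2.example.com 10.0.3.2:443;
        default         127.0.0.1:8443;
    }
    server {
        listen 443;
        ssl_preread on;       # peek at the ClientHello, don't decrypt
        proxy_pass $backend;
    }
}
```

This works because the SNI name travels in the clear in the TLS ClientHello; plain SSH has no equivalent field, short of wrapping SSH in TLS.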

jamesvzb, yesterday at 12:09 PM

surprised this isn't talked about more


charcircuit, yesterday at 6:28 AM

You don't need SSH. Installing an SSH server in such a VM is a holdover from how UNIX servers worked. It puts you in the mindset of treating your server as a pet and doing things for a single VM instead of having proper server management in place. I would reconsider whether offering ssh is an actual requirement here, or if it could be better served by offering users a proper control panel to manage and monitor the VMs.
