Hacker News

When internal hostnames are leaked to the clown

420 points | by zdw | today at 5:22 AM | 227 comments

Comments

notsylver | today at 6:14 AM

I think people are misunderstanding. This isn't CT logs; it's a wildcard certificate, so the logs wouldn't leak the "nas" part. It's Sentry catching client-side traces that call home, picking the hostname out of the request that sent them (i.e. "nas.nothing-special.whatever.example.com"), and then trying to poll it for whatever reason. Those polls go to a separate server that catches the wildcard domain and rejects them.

andix | today at 3:05 PM

Hostnames are not private information. There are too many ways for them to get leaked to the outside world.

It can be useful to hide a private service behind a URL that isn't easy to guess (less attack surface, because most attackers can't find the service). But the secret needs to be in the URL path, not the hostname.

  bad: my-hidden-fileservice-007-abc123.example.com/
  good: fileservice.example.com/my-hidden-service-007-abc123/
In the first example the name leaks via DNS queries, TLS certificates, and many other channels. In the second example the secret path is only transmitted inside HTTPS and doesn't leak as easily.
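The hard-to-guess path segment above can be generated with the standard library; a minimal sketch (the naming scheme is just an illustration, not from the article):

```python
import secrets

def make_hidden_path(service: str) -> str:
    # The random token only ever travels inside the encrypted HTTPS request,
    # unlike a hostname, which leaks via DNS queries, TLS SNI, and CT logs.
    token = secrets.token_urlsafe(16)  # ~128 bits of entropy
    return f"/{service}-{token}/"

# e.g. mount the service at fileservice.example.com + make_hidden_path("files")
```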
b1temy | today at 6:16 AM

Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?

Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and that part of what was logged were internal hostnames, which might be named in a way that carries sensitive info, e.g. the corp-and-other-corp-merger example they gave. So it wouldn't matter that the host is inaccessible on a private network; the name itself is sensitive information.

In that case, I would personally replace the operating system of the NAS with one that is free/open source, that I trust, and that does not phone home. I suppose some form of ad blocking à la Pi-hole, or some other DNS configuration that blocks Sentry calls, would work too, but I would just go with an operating system I trust.

yabones | today at 1:56 PM

Stuff like this is why I consider uBlock Origin to be the bare minimum security software for going on the web. The number of third-party scripts running on most pages, constantly leaking data to everybody listening, is just mind-boggling.

It's treating a symptom rather than a disease, but what else can we do?

mike-cardwell | today at 12:47 PM

The only way I can think of to protect against this is to put a reverse proxy in front of it, like Nginx, and inject CSP headers to prevent cross-site requests. That wouldn't block the NAS server side from making external calls, but it would prevent your browser from doing it for them, as is the case here. It would also block stuff like Google Analytics if they have it. If you set up a proxy, you could also give it a local hostname like nas.local or something, with a cert signed by your private CA that Nginx knows about, and then point the real hostname at Nginx, which has the wildcard cert.

It's a bit of a pain to set all this up, though. I run a number of services on my home network, and I always stick Nginx in front with a restrictive CSP policy, then open that policy up as needed. For example, I'm running Home Assistant with the Steam plugin, which I assume is responsible for browser requests like https://avatars.steamstatic.com/HASH_medium.jpg, which are being blocked by my injected CSP policy.

P.S. I might decide to let that Steam request through so I can see avatars in the UI. I also inject "Referrer-Policy: no-referrer", so if I do decide to do that, at least they won't see my HA hostname in their logs by default.
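A rough sketch of the headers such a proxy would inject (values are illustrative; the Steam origin below is the example from this comment, and in Nginx you'd emit these with add_header):

```python
def restrictive_headers(extra_origins=()):
    # Start locked down to 'self': this blocks cross-origin beacons like the
    # sentry.io calls in the article. Origins such as
    # "https://avatars.steamstatic.com" can be opened up selectively later.
    allowed = " ".join(("'self'",) + tuple(extra_origins))
    return {
        "Content-Security-Policy": (
            f"default-src 'self'; connect-src {allowed}; img-src {allowed}"
        ),
        # no-referrer keeps the internal hostname out of third-party logs
        # even for the origins you do allow.
        "Referrer-Policy": "no-referrer",
    }
```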

atmosx | today at 7:43 AM

I bought a Synology NAS and I have regretted it 3-4 times already. Apart from the software made available by the community, there is very little one can do with this thing.

Using LE to apply SSL to services? Complicated. Non-standard paths, a custom distro, everything hidden (you can't figure out where to place the SSL cert or how to restart the service, etc.). Of course you'll figure it out if you spend 50 hours… but why?

Don’t get me started on the old rsync version, or the lack of midnight commander and/or other utils.

I should have gone with something that runs proper Linux or BSD.

alimoeeny | today at 6:39 PM

I personally have been blocking Sentry and all related domains on my machines. I understand this is not generally applicable advice. For me it’s the right choice.

trjordan | today at 3:09 PM

Having recently set up Sentry, I know at least one of the ways they use this: to auto-configure uptime monitoring.

Once they know what hosts you run, it'll ping each hostname periodically. If it stays up and stable for a couple of days, you'll get an in-product alert: "Set up uptime monitoring on <hostname>?"

Whether you think this is valid, useful, acceptable, etc. is left as an exercise to the reader.

ggm | today at 7:39 AM

Reverse address lookup servers routinely see escaped attempts to resolve ULA and RFC 1918 addresses. If you can tie the resolver to other valid data, you know inside state.

Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you can tie that to other corroborated data, the same applies: you see packets from "inside" all the time.

Darknet collection during final /8 run-down captured audio in UDP.

Firewalls? ACLs? Pah. Humbug.

mixedbit | today at 9:08 AM

I have investigated a similar situation on Heroku. Heroku assigns a random subdomain suffix to each new app, so app URLs are hard to guess and look like this: test-app-28a8490db018.herokuapp.com. I noticed that as soon as a new Heroku app is created, without making any requests to the app that could leak the URL via a DNS lookup, the app is hit by requests from automatic vulnerability scanning tools. Heroku confirmed that this is due to the new app URL being published in certificate transparency logs, which are actively monitored by vulnerability scanners.
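A quick back-of-the-envelope check on why the scanners need the CT logs at all: a suffix like "28a8490db018" is far too large a keyspace to guess blindly, so the only practical way to find the name is to read it from the log stream.

```python
def hex_suffix_keyspace(suffix: str) -> int:
    # Each hex character carries 4 bits, so a 12-character suffix has
    # 16**12 (about 2.8e14) equally likely values -- hopeless to enumerate,
    # but a single CT log entry hands the exact name to every watcher.
    assert all(c in "0123456789abcdef" for c in suffix.lower())
    return 16 ** len(suffix)
```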

ashu1461 | today at 8:16 AM

Isn't the article over-emphasising the leakage of internal URLs a little?

Internal hostnames leaking is real, but in practice it’s just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools, etc.

m3047 | today at 6:06 PM

This is exactly why I have a number of "appliances" which never get clown updates: they have addresses in a subnet I block at the segment edge, they use DNS which never answers, and there are a few entries in the "DNS firewall" [0] (RPZ) which mostly serve as canaries.

This is the problem with the notion that "in the name of securitah, IoT devices should phone home for updates": nobody said "...and map my network in the name of security".

[0] Don't confuse this with Rachel's honeypot wildcarding *.nothing-special.whatever.example.com for external use.
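The canary idea above can be sketched as a trivial resolver-log scan; the canary names and the "client qname" log format here are made up for illustration (real RPZ logging, e.g. BIND's, looks different):

```python
# Hypothetical canary names planted in the local "DNS firewall" (RPZ).
# Nothing legitimate should ever look these up, so any hit means some
# device or script is mapping the network.
CANARY_NAMES = {"canary-nas.example.internal", "definitely-not-real.zzz"}

def canary_hits(query_log_lines):
    # Yield (client_ip, qname) for queries touching a canary name,
    # assuming each log line is "client_ip qname." -- adapt for real logs.
    for line in query_log_lines:
        client, _, qname = line.partition(" ")
        name = qname.strip().lower().rstrip(".")
        if name in CANARY_NAMES:
            yield client, name
```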

teekert | today at 6:52 AM

Is this a Chrome/Edge thing? Or do privacy-respecting browsers also do this? If so, it's unexpected.

If Firefox also leaks this, I wonder if this is something mass-surveillance related.

(Judging from the down votes I misunderstood something)

linhns | today at 4:38 PM

Well somehow Rachel's website is not sending back any response now.

zaptheimpaler | today at 7:26 AM

Oh god, this sucks. I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.

superkuh | today at 3:07 PM

I love that this write-up is hosted on both HTTP and HTTPS. I cannot access the HTTPS version, but the HTTP one displays just fine. Now that's reliability.

HocusLocus | today at 7:10 PM

The Clown is my master

I've been chosen!

Eeeeeeeeeah!

stingraycharles | today at 5:57 AM

I don’t understand. How could a GCP server access the private NAS?

I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.

NitpickLawyer | today at 6:08 AM

Not sure why they made the connection to sentry.io and not to CT logs. My first thought was that "*.some-subdomain." got added to the CT logs and someone is scanning *. with well-known hosts, of which "nas" would be one. Curious if they have more insight into sentry.io leaking, and where it leaks to...

rcakebread | today at 3:17 PM

TIL Rachel uses a Mac.

cwillu | today at 1:00 PM

Just getting 404 not found

that_guy_iain | today at 7:13 AM

This is actually a really interesting way to attack a sensitive network: it lets you map the network's internal layout. Getting access is obviously the main challenge, but once you're in, you need to know where to go and what to look for. If you already have that knowledge when planning the attack to gain entry, you've got the upper hand. So while it may seem like "OK, they have a hostname they can't access, why do I care?", if you're doing high-end security at the sysadmin level, this is the sort of small nitpicking it takes to be the best.

TZubiri | today at 6:24 AM

>Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.

So, no one competent is going to do this; hostnames are not encrypted by HTTPS, so any sensitive info goes in the URL path.

I think being controlling of domain names is a sign of a good sysadmin. It's also a bit paranoid, but you've got to be a little paranoid to be the type of sysadmin that never gets hacked.

That said, domains not leaking is one of those "clean sheet" goals you pursue for no practical reason; it feels nice, but if you don't get it, it's not consequential at all. It's like driving at exactly 50 mph, or keeping a green streak on GitHub. You're never going to rely on that secrecy, if only because some ISP might see the name, but it's 100% achievable that no one will start pinging your internal hosts and polluting them (if you do domain name filtering).

So what I'm saying is: I appreciate this type of effort, but it's a bit dramatic. Definitely uninstall whatever junk leaked your domain, though, but it's really nothing.

dcrazy | today at 5:49 AM

Slightly surprised that this blog seems to have succumbed to inbound traffic.

ck2 | today at 3:04 PM

that's actually a great spy-trap idea, no?

create an impossible internal hostname and watch for it to come back to you

you don't even need a real TLD, if I'm not mistaken; use .zzz, etc.

fragmede | today at 6:05 AM

This highlights a huge problem with Let's Encrypt and CT logs: the Internet is a bad place, with bad people looking to take advantage of you. If you use Let's Encrypt for TLS certs (which you should), the hostname gets published to the world, and the server immediately gets pummeled by requests for all sorts of fresh-install pages, like wp-admin or phpmyadmin, from attackers.

ranger_danger | today at 5:46 AM

Pennywise found my hostname? We're cooked.

draw_down | today at 1:32 PM

[dead]

lsofzz | today at 7:54 AM

[flagged]

renewiltord | today at 7:57 AM

Haha, this obtuse way of speaking is such a classic FAANG move. I wonder if it’s because of internal corporate-style comms. Patio11 also talks like this. Maybe because Stripe is pretty much a private FAANG.

rini17 | today at 12:56 PM

Fancy web interfaces are the road to hell. Do the simplest thing that works: plain Apache or nginx with WebDAV and basic auth (proven code, minimal attack surface). Maybe a firewall with hashlimit on new connections. I have it set to 2/minute, and for a browser that's actually fine, while moronic bots make a new connection for every request. When they improve, there's always fail2ban.
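The 2-per-minute new-connection limit described here can be sketched as a per-source-IP sliding window; this is just an illustration of the hashlimit idea in Python, not the actual netfilter implementation:

```python
import time
from collections import defaultdict, deque

class NewConnLimiter:
    # Allow at most `rate` new connections per source IP in any `per`-second
    # window, like a firewall hashlimit match keyed on srcip.
    def __init__(self, rate=2, per=60.0):
        self.rate, self.per = rate, per
        self.seen = defaultdict(deque)  # src_ip -> timestamps of accepted conns

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.seen[src_ip]
        while q and now - q[0] > self.per:  # forget connections outside the window
            q.popleft()
        if len(q) < self.rate:
            q.append(now)
            return True
        return False  # a bot opening a connection per request trips this fast
```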

That the NAS server, including its hostname, is public doesn't bother me then.