Hacker News

Welcome to the Strip Mining Era of OSS Security

108 points | by salsakran | yesterday at 11:37 AM | 77 comments

Comments

ahlCVA | yesterday at 1:29 PM

Whenever one of these vulnerability apocalypse posts comes along I cannot help but think of the Litany of Gendlin:

  What is true is already so.
  Owning up to it doesn't make it worse.
  Not being open about it doesn't make it go away.
  And because it's true, it is what is there to be interacted with.
  Anything untrue isn't there to be lived.
  People can stand what is true,
  for they are already enduring it.
I cannot wrap my mind around why people think finding vulnerabilities is bad. The code was already broken before somebody published the vulnerability. The only difference now is that you know about it.

Imagine somebody finding a flaw in a mathematical proof and everybody being sad because a beautiful proof got invalidated rather than being glad future work won't build on flawed assumptions.

I get that the rate of vulnerability discovery can be a burden, especially for people doing FOSS in their spare time. But that sustainability problem has always existed; the flood of vulnerability reports only exacerbates it, and it isn't the cause you need to make go away.

_alternator_ | yesterday at 12:54 PM

The article focuses on OSS, but closed-source software is at major risk too. Perhaps more.

It's gotten much easier to reverse engineer binaries in general, and security patches in particular. Basically, an LLM can turn binaries into 'readable' code, and then reason about said code.

marginalx | yesterday at 12:12 PM

For commercially oriented open-source software, security through obscurity is clearly one way to keep pace in the short term; it is not an option for proper open-source software. Will people who use easily fingerprinted open-source software also start to shy away from it for fear of zero-days?

One of the benefits of open source has been that more eyeballs on the source lead to more secure code and better quality. I think given enough time the bug reports will plateau and we will be back to a normal cadence; once the tsunami is over, things will hopefully settle at a more manageable pace.

likesHumidity | yesterday at 5:00 PM

I think this is going to play out in interesting ways. There's a saying in lockpicking: even if you can pick the lock, the easiest way in is usually a window. Distribution is core to a technology's market reach, and every distribution point is a window.

The strip-mining piece reads to me as one phase of a longer cycle: tech gets distributed, adopted, experimented with, misused, and then a secondary protection market emerges. The 2017 S3 misconfiguration wave is a useful reference: Verizon and Booz Allen both spilled data through bucket ACLs with inadequate controls, AWS responded with Block Public Access, and the CSPM market matured to sell the discipline back.

Mythos looks like the same shape at the AI layer: leaked via a vendor environment chain (Mercor breach plus contractor credentials plus URL guessing), not a code-level vulnerability. The proliferation of high-quality security research the article points at reads to me as one sign of the secondary market forming as the primary disruptive technology stabilizes; Gartner already has AI governance spending crossing $1B by 2030. It seems to be landing on OSS first because that's where the surface area and public exposure are, which is what the labor pressure is documenting. A cascade to closed source seems likely as the window between vulnerability-introduced and vulnerability-identified keeps compressing, just on a lag and behind NDAs.

The OSS side could go either way under that pressure, and I don't know which dominates. FFmpeg slowed in 2024 from overload and got rescued by Germany's Sovereign Tech Fund: pressure converted into a hardened posture. Ingress-Nginx retired in November 2025 after two maintainers couldn't sustain it on weekends, despite running in ~50% of cloud-native environments: same pressure, no backing. That tracks institutional backing more than anything intrinsic to OSS.

The wrinkle Mythos adds is that the same product is both weapon and salve. Pay to run it against your own systems, or be vulnerable to what it finds in everyone else's; discovery and protection collapsed into one SKU. We'll need to watch the locksmiths to see if they end up selling lockpicks in addition to keys. Going to be an interesting summer.

aetherspawn | yesterday at 12:30 PM

Say I had $1000, how do I get the best value for money to discover vulnerabilities? Are there any worthwhile LLM powered services that are turnkey and ready to go?

mtlynch | yesterday at 1:52 PM

> Most are not serious, and we’ve quietly fixed them, thanked the researcher, and went our merry way... These come from a wide variety of locations and people, and sometimes, but not always, are looking for bug bounties.

I take it that Metabase is both not paying bug bounties and not using these tools internally?

If that's the case, Metabase is not going to get meaningful investment from researchers who want to fix issues, but they'll get increased attention from malicious attackers who have no qualms exploiting the vulnerabilities for profit.

LLMs have made it a lot easier for people to find vulnerabilities in software. Open-source makes it easier, but we already have non-AI tooling (IDA Pro, Ghidra) that's good at binary reverse engineering, and LLMs can use that output to find vulnerabilities as well.

This year, as I select products to use for sensitive data, I've been paying a lot more attention to whether they offer bug bounties and for how much. For example, I like Kagi for search and thought about trying Orion, their web browser. Then, I saw that Kagi's been paying $100 for UXSS vulnerabilities.[0] For comparison, Firefox pays $8-10k,[1] and Chrome pays up to $10k for the same class of bug.[2]

[0] https://help.kagi.com/kagi/privacy/bug-bounty-program.html

[1] https://www.mozilla.org/en-US/security/client-bug-bounty/

[2] https://bughunters.google.com/about/rules/chrome-friends/chr...

cyrusradfar | yesterday at 3:17 PM

I'd buy the core thesis and appreciate the concern.

I do think security is going to require more, not less, human investment, since attackers may be running automated vulnerability scans from the outside that you must counter as well. Without rigorous internal processes to manage and screen all changes and upgrades, companies risk leaving themselves open.

One design change that limits exposure is building more local-first apps and experiences, so there are fewer cloud and server-to-client interactions to secure.

adamtaylor_13 | yesterday at 12:36 PM

> Did you have other plans for the weekend? Or a long term project you’re prioritizing? That’s nice, you have a new plan — fix every vulnerability that comes in NOW.

Umm... no? It's called OPEN source. Expecting people to cancel their plans to make your free software more secure is pretty audacious. Luckily, many WILL, but the expectation is just foolish.

Macha | yesterday at 1:32 PM

> Did you have other plans for the weekend? Or a long term project you’re prioritizing? That’s nice, you have a new plan — fix every vulnerability that comes in NOW.

Or you know, provide the security companies and businesses using your software for free with all the fix timelines and out of hours support they’ve paid for (none).

devinabox | yesterday at 1:43 PM

This is something I struggle with as someone building a tool for debugging and security.

I have dog-fooded it heavily on my own projects, client projects, and friends' projects. It finds things that are really quite clever and not obvious. It really helps me.

But when I try the obvious sales move of running it against an OSS project to get hype and show off, I find it becomes really hard to know that I am actually helping and not just spamming.

To be clear: for an AI tool like mine to give you clever results that surface non-obvious issues and security flaws, it needs to tolerate some level of false positives.

I find myself struggling to justify the approach of firing off defects to an OSS maintainer without verifying them - which takes considerable time if I am going to do a good job. Even with tools to help pull apart the code, the core problem is always you don't know what you don't know.

With the same process on my own projects, I can eat through a ton of defects and find some really great stuff. But that's only possible because I can tell at a glance what is real, what is fake, and what is an oh-** issue.

So I think this is true, but the risk is that people who don't understand the projects just point scanners at OSS blindly and ruin the good work maintainers are doing.

This stuff is more complicated than people give it credit for, and it's so easy to kid yourself into thinking any bug report is helpful.

le-mark | yesterday at 1:05 PM

So what does this mean for the open-source ecosystem? Will unmaintained or "finished" projects be labeled as too unsafe to use?

hrjriritifif | yesterday at 1:05 PM

I do not think the author understands how open source works. You have a problem on your computer, in __your__ software, and somehow some random dude is responsible for fixing it? Sure, if you give me a few thousand USD I will drop everything and come rescue you. But for free? It's a volunteer gig I do once a month...

gmuslera | yesterday at 12:29 PM

The problem on the closed-source side is that if the source code has ever leaked, vulnerabilities and exploits may remain unknown for a long time.

xbar | yesterday at 2:27 PM

A focus on security in 2026 is driving code-quality improvements in long-lived software. There has been a step-function improvement in the identification-and-remediation loop among disciplined engineers.

Defining an "era" as a "summer" is short-sighted. Calling industry-wide efforts to find and fix security vulnerabilities with better tools "strip mining" is backwards thinking, from where I sit.

People who prefer 0days in their code baffle me.

salsakran | yesterday at 12:37 PM

Side conversation -- This is all stuff we're seeing in white/grey hat land. What's going on in blackhat land?

ryanackley | yesterday at 3:27 PM

This needs to be read after the article from Turso on how they're retiring their bug bounty program because of being inundated with useless AI slop reports. It's the top story on HN right now.

https://turso.tech/blog/the-wonders-of-ai

krupan | yesterday at 5:26 PM

AI hype. Don't bother

dynawicki | yesterday at 12:19 PM

Good luck getting anyone who values their time to even triage the results. I would rather lick the bottom of a NYC dumpster that a rat had just died in.

as3qkaH | yesterday at 12:41 PM

Apparently the AI company Metabase has a very poor code base. Like so many others, instead of questioning their own (or AI) output, they help their AI overlords by promoting security scans.

Fact is that Mythos found only one issue in curl and nothing at all in most code bases. It is getting quiet around Mythos, and the AI companies will move on to the next scam.
