My go-to example of a whole mesh of "accountability sinks" is... cybersecurity. In the real world, this field is really not about the tech and math and crypto - almost all of it is about distributing and dispersing liability through contractual means.
That's why you install endpoint security tools. That's why you're forced to fulfill all kinds of requirements, some of them nonsensical or counterproductive, but necessary to check boxes on a compliance checklist. That's why you have external auditors come to check whether you really check those boxes. It's all there so that, when something happens - because something will eventually happen - you can point back to all these measures and say: "we've implemented all best practices, contracted out the hard parts to world-renowned experts, and had third party audits to verify that - there was nothing more we could do, therefore it's not our fault".
With that in mind, look at the world from the perspective of corporations, the B2B companies selling to those corporations, their suppliers, and so on; notice how e.g. smaller companies are forced to adhere to certain standards of practice just to be considered by the larger ones. It all creates a mesh through which liability for anything is dispersed, so that ultimately no one is to blame, everyone provably did their best, and the only thing that happens is that some corporate insurance policies pay out and affected customers get a complimentary credit check or some other nonsense.
I'm not even saying this is bad, per se - there are plenty of situations where discharging all liability through insurance is the best thing to do; see e.g. how maritime shipping handles accidents at sea. It's just that understanding this explains a lot of paradoxes of cybersecurity as a field. It all makes much more sense when you realize it's primarily about liability management, not about hat-wearing hackers fighting other hackers with differently colored hats.
We should really define a new term for such work.
Perhaps "Risk Compliance Security" or "Security Compliance Engineering"
Where "Security Compliance Engineering" is the practice of designing, implementing, and maintaining security controls that satisfy regulatory frameworks, contractual obligations, and insurance requirements. Its primary objective is not to prevent cyberattacks, but to ensure that organizations can demonstrate due diligence, minimize liability, and maintain audit readiness in the event of a security incident.
Key goals:
- Pass external audits and internal reviews
- Align with standards like ISO 27001, SOC 2, or NIST
- Mitigate organizational risk through documentation and attestation
- Enable business continuity via legal defensibility and insurability
In contrast…
Cybersecurity is focused on actively detecting, preventing, and responding to cyber threats. It’s concerned with protecting systems and data, not accountability sinks.
That is also why so much of the security[tm] software is so bad. Usability and fitness for purpose are not boxes on the compliance checklist. The industry term in play is "risk transfer".
Most security software does not do what it advertises, because it doesn't have to. Its primary function is to let those who bought the product blame the vendor. "We paid vendor X a lot of money and transferred the risk to them, so this cannot be our fault." Well, guess what? You may not legally be the one holding the bag, but as the business on the other end of the transaction you are still at fault. Those are your customers. You messed up.
As for vendor X? If the incident was big enough, they got free press coverage. The incentives in the industry truly are corrupt.
Disclosure: I've been in the infosec sphere since the early '90s. And as it happens, I gave a talk about this state of affairs earlier this week.
The most unfortunate thing about much of corporate 'cybersecurity' is that it combines expensive and encumbering theatre around compliance and deniability... with ridiculously insecure practices.
Imagine, for example, if more companies hired software developers and production infrastructure experts who build secure systems.
But most don't much care about security: they want their compliances, they may or may not detect and report the inevitable breaches, and the CISO is paid to be the fall-person, because the CEO totally doesn't care.
Now we're getting cottage industries and consortium theatre around things like why something that should be a static HTML web page is pulling in 200 packages from NPM, and why you now need bold third-party solutions to combat all the bad actors and defective code that invites.
I wonder what the difference is between cybersecurity and civil aviation safety. At a glance they both have a lot of processes and requirements. Somehow on one side those processes are, as you said, a way to deal with liability without necessarily increasing security, while on the other safety is actually significantly increased.
Honestly, it is just like insurance. You understand the value of the things you are protecting (and mere compliance has a value to you, in penalties and liabilities avoided) and make sure it costs more than that to break into your system.
At a corporate level, it is contractually almost identical to insurance: the product being sold is liability for that security, not the security itself.
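To make that insurance framing concrete, here's a minimal sketch of the classic quantitative risk calculus this kind of accounting rests on - annualized loss expectancy (ALE) as single loss expectancy times annualized rate of occurrence - with all the dollar figures invented purely for illustration:

    # Classic quantitative risk calculus (illustrative numbers only).
    # ALE (annualized loss expectancy) =
    #   SLE (single loss expectancy) * ARO (annualized rate of occurrence)

    def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
        """Expected yearly loss from one risk scenario."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # Scenario: a breach costs ~$2M per incident, expected roughly once every 4 years.
    ale_before = ale(2_000_000, 0.25)    # $500k/year expected loss

    # A control (or an insurance premium) costs $150k/year
    # and cuts the expected occurrence rate in half.
    control_cost = 150_000
    ale_after = ale(2_000_000, 0.125)    # $250k/year expected loss

    # The control "pays" only if the expected loss it removes exceeds its cost.
    benefit = ale_before - ale_after     # $250k/year of avoided expected loss
    print(f"Net value of control: ${benefit - control_cost:,.0f}/year")  # $100,000/year

Substitute "penalties and liabilities avoided" for the loss figures and the same arithmetic prices pure compliance, which is exactly the point above.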
Rhyming with this observation - the only time I've ever heard of someone getting fired over a phishing incident anywhere I've worked... was a guy on the cybersecurity team who clicked through and got phished.
Security is closer to product management and marketing than to engineering. It's a narrative, the mirror image of product and marketing: instead of creating something people want based on desire, it's managing the things people explicitly don't want. When organizations don't have product management, they have anti-product management, which is security. We could say, "There is no Anti-Product Division."
Specifically on accountability, I bootstrapped a security product that replaced 6+ weeks of risk-assessment consultant spreadsheets with 20 minutes of product manager/engineer conversation. It shifted the accountability "left", as it were.
When I pitched it to some banks, one of the lead security guys took me aside and said something to the effect of: "You don't get it. We don't want to find risk ourselves; we pay people to tell us what the risks and solutions are precisely because they are someone else. It doesn't matter what they say we should do - the real risk is transferred to their E&O insurance as soon as they tell us anything. By showing us the risks, your product doesn't help us manage risk, it obligates us to build features to mitigate and get rid of it."
I was enlightened. "Manage" means "to get value from". The decade I had spent doing security and privacy risk assessments and advocating for accountability for risk had been spent as a dancing monkey.
+1 Insightful
Thank you for sharing this really illuminating take. I spend an unreasonable amount of time dealing with software security, and you've put things in a light where it makes a bit more sense.
This is the ultimate nihilistic take on security.
Yes, 'cyber' security has devolved to box-checking and cargo-culting in many orgs. But what's your counter for trying to fix the problems that every tech stack or new SaaS product comes with out of the box?
For most people, when their Netflix (or HN) password gets leaked, that means every email they've sent since 2004 is also exposed, because they reuse the same password everywhere. It might also mean their 401k is siphoned off. So welcome the annoying and checkbox-y MFA requirements.
If you're an engineer cutting code for a YC startup -- who owns the dependency you just pulled in? Are you or your team going to track changes (and security bugs) for it in 6 months? What about in 2 or 3 years?
Yes, 'cyber' security brings a lot of annoying checkboxes. But almost all of them are due to externalities that you'd happily blow past otherwise. So -- how do we get rid of the annoying checkboxes and still ensure people do the right thing as a matter of course?
> "we've implemented all best practices, contracted out the hard parts to world-renowned experts, and had third party audits to verify that - there was nothing more we could do, therefore it's not our fault"
The number of (useless) processes/systems at banks I've seen in my career that boil down to this is incredible, e.g. hundreds of millions spent on call-center tech for authentication that might do nothing, but the vendor is "industry-leading" and "best-in-class".
> It's just that understanding this explains a lot of paradoxes of cybersecurity as a field. It all makes much more sense when you realize it's primarily about liability management, not about hat-wearing hackers fighting other hackers with differently colored hats.
Bingo. The same situation holds for most risk departments at banks and for fraud units at healthcare and insurance companies.
I thought risk at a bank was going to be savvy quants, but it's literally lawyers/compliance/box-checking marketing themselves as more sophisticated than they are. For example, the KYC review for new products never actually follows up and checks whether the KYC process in those products works. There's no analytics, tracking, etc. until audit/regulators come in and ask, at which point the answer is "our best-in-class vendor handles this". All the systems are implemented incorrectly, but it doesn't matter, because the system is built by a vendor and implemented by consultants, and they hold the liability (they don't, but it will take ~5 years in court to establish that).
I'm beginning to understand what "bureaucracy" mechanically is.