Interestingly, reading about Yegge's Gas Town got me thinking about this topic. Gas Town aims to create a "dark software factory" where agents organize themselves to build software autonomously with only high-level human direction, so Yegge created a weird Rube Goldberg fever dream of cats, preachers, diggers, mayors and god knows what else. But why? We already have real human organizations, staffed and structured in particular ways, that are able to deliver software, so why not follow that pattern? With something like this, agents could start with a generic software dev shop and iterate on their own organization, instead of Yegge manually dreaming up what roles and relationships should exist.
"Code" is absolutely the wrong word here. There's nothing executable about it.
It's a model. And it will inevitably be incomplete and out of date, because the map is not the territory[1].
Of course, the same is true of the unstructured documents he laments, and whatever is done with those documents could probably be sped up a lot this way, probably enough to justify the cost of building and maintaining it.
But the more advanced use cases he imagines run a big risk of making very costly decisions based on an incomplete or outdated model.
[1]: https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation
The OP doesn’t understand the “gray zone” corporations operate in. Pretty much every interaction, decision and action operates in this domain: ambiguity and intentional compartmentalization on a need-to-know basis.
In the future there will be one giant AI on premise, with many physical bodies made in our own image to micromanage the humans. All conversations are monitored; depending on the complexity of your query and who you are talking to, tokens are deducted from your account. A complex double-entry bookkeeping system divides the tokens and the quality of the response across the things the company should be doing. Things will be neither investor, employee nor customer centric, but 100% AI centric.