It’s interesting then to ask whether this will behave the same as big orgs. E.g. once your org is big and settled, anything but the core product and adjacent services becomes impossible, which is why we often see a 50-person company out-innovating a 5k-person company in tech (only to be bought up and dismantled, of course, but that’s beside the point).
Will agents simply dig the trenches deeper in the direction of the best existing tests, and does it take a human to turn off the agent noise and write code manually for a new, innovative direction?
I totally get your point and agree to an extent, though I have not yet been able to build that trust with the LLM. With human teams, yes; with LLMs, it feels like I still have to verify too much.