> Then, after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any of the code that they've developed? Well, this is what happened here.
I have a feeling that OpenAI and Anthropic both use AI to code a lot more than we think. We definitely know and hear about it from Anthropic; I haven't heard it as much from OpenAI, but it would not surprise me. I think you 100% can "vibe code" correctly. I would argue that, with the hours you save by not coding and debugging by hand, you should 100% read the code the AI generates. It takes little effort to have the model rewrite it to be easier for humans to read. The whole "we will rewrite it later" mentality that never comes to pass is actually achievable with AI; it's just one prompt away.
Boris has been very open about the 100% AI code writing rate, and my own experience matches. If you have a TypeScript or similarly mainstream codebase, once you set your processes up correctly (you have tests/verification, you have a CLAUDE.md or AGENTS.md that you always add learnings to, you add skill files as you find repeatable tasks, you have automated code review), it's not hard to achieve this.
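For concreteness, here's a minimal sketch of the kind of CLAUDE.md/AGENTS.md file described above, assuming a TypeScript project with vitest. The commands, paths, and learnings are hypothetical examples of what you might accumulate, not a prescribed format:

```markdown
# CLAUDE.md (hypothetical example for a TypeScript project)

## Commands
- `npm test` — run the vitest suite; every change must pass before review
- `npm run lint && npm run typecheck` — required before committing

## Conventions
- TypeScript strict mode; no `any` without an explaining comment
- Prefer small, pure functions; keep side effects in `src/io/` (example layout)

## Learnings (append as discovered)
- Reset fake timers in `afterEach`, otherwise the scheduling tests hang
- Never hand-edit generated files under `src/gen/`; regenerate instead
```

The "Learnings" section is the part you keep adding to over time, so the agent stops repeating mistakes it has already made once.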
The human touch points then become deciding what to build, reviewing the AI's engineering plans, and an increasingly light review of the underlying code, focused on overall architectural decisions, with only occasional intervention to clean things up (again with AI).