You should do what you want, and as a break it’s fine. But IMO right now the most leverage for most people is learning how to effectively manage agents. It’s really hard. Not many are truly good with it. It will be relevant for a long time.
> It’s really hard
How? I just open multiple terminal panes, use git worktree, and then it's basically good old software dev practices. What am I missing?
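For anyone unfamiliar with the workflow being described: one worktree per agent session means parallel agents never clobber each other's checkout. A minimal sketch, assuming a throwaway demo repo — the branch and directory names here are illustrative, not anything the commenter specified:

```shell
set -eu

# Throwaway repo to demonstrate against; in real use this is your project repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree + branch per agent/task; each terminal pane gets its own dir
git worktree add -q -b agent/feature-a ../agent-feature-a
git worktree add -q -b agent/bugfix-b  ../agent-bugfix-b

git worktree list   # main checkout plus the two agent worktrees
```

Each pane then `cd`s into its own worktree, so commits land on separate branches and get merged back through the usual review flow.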
The agents are already learning to manage agents, if it’s relevancy you’re looking for you might want to take up plumbing instead.
What has been most valuable for you?
It is hard indeed. I find it really quite exhausting.
Personally, I feel like I have always been a very competent programmer. I'm embracing the new way of working, but it seems like quite a different skillset. I somewhat believe that it will be relevant for a long time, because there is an incredibly large gap in outcomes between members of my team using AI. I've had good results so far, but I'm keen to improve.
For the average and mundane stuff, sure do whatever everyone is doing.
For the good stuff, there’s no alternative but to know and to have taste. LLMs change nothing.
If they're so great, we'll end up somewhere where the skill is easy to pick up anyway.
You will be relevant for 6 months until they manage themselves.
Yeah, it's really difficult to remember to tell it "make no mistakes". Typing a prompt is also really hard, especially when you have to remember the CLI command to open the agent. Sometimes I even forget whether I need "medium", "high", or "xhigh" for a task.
When you say it's hard, what does that mean? Presumably, if the AI is so good, why can't you just ask it to do that? Why are you even needed?
> It will be relevant for a long time.
Citation needed.
I see you got downvoted but I agree. I went through a massive valley of despair and turned back to hand crafting, only to realize that for me coding was always a means to an end and I really didn't care at all about how I got there. Now I'm having a lot of fun building out all kinds of wonky projects.
> It will be relevant for a long time.
Why would you think that? The landscape is fast-moving. Prompting tricks and "AI skills" of yesterday are already dated and sometimes actively counterproductive. The explicit goal of the companies working on the tech is to lower the barriers to entry and make it easier to use, building harnesses and doing the refinement that aligns LLMs with an intuitive mode of interaction.
Do you think they'll fail? Do you think we've plateaued in terms of what using a computer looks like, and that your lessons from wrangling this year's agents will carry over to whatever the new hotness is next year? It's a strong claim that demands a similarly strong argument to support it.