I usually do most of the engineering myself, and the AI works great for writing the code. I'll say:
> There should be a TaskManager that stores Task objects in a sorted set, with the deadline as the sort key. There should be methods to add a task and pop the current top task. The TaskManager owns the memory while the Task is in the sorted set, and the caller of pop should own it after it is popped. To enforce this, the caller of pop must pass in an allocator and will receive a copy of the Task. The Task will be freed from the sorted set after the pop.
> The payload of the Task should be an object carrying a pointer to a context and a pointer to a function that takes this context as an argument.
> Update the tests and make sure they pass before completing. The test scenarios should relate to the use-case domain of this project, which is home automation (see the readme and nearby tests).
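For concreteness, here's a rough sketch of what a prompt like that might produce. This is my own illustration, not the actual output: I'm assuming C, using a sorted array as a stand-in for a real sorted set, and all the names (TaskManager, tm_pop, Allocator, the home-automation strings) are invented for the example:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Payload: a context pointer plus a function that takes that context. */
typedef struct {
    void *ctx;
    void (*fn)(void *ctx);
} Payload;

typedef struct {
    int64_t deadline;   /* sort key */
    Payload payload;
} Task;

/* Allocator interface the caller of pop supplies (illustrative;
 * the spec just says "pass in an allocator"). */
typedef void *(*Allocator)(size_t size);

/* A sorted dynamic array standing in for the sorted set. */
typedef struct {
    Task *items;
    size_t len, cap;
} TaskManager;

static void tm_init(TaskManager *tm) {
    tm->items = NULL;
    tm->len = tm->cap = 0;
}

/* Insert a task, keeping items sorted ascending by deadline. */
static int tm_add(TaskManager *tm, Task t) {
    if (tm->len == tm->cap) {
        size_t cap = tm->cap ? tm->cap * 2 : 8;
        Task *p = realloc(tm->items, cap * sizeof *p);
        if (!p) return -1;
        tm->items = p;
        tm->cap = cap;
    }
    size_t i = tm->len;
    while (i > 0 && tm->items[i - 1].deadline > t.deadline) {
        tm->items[i] = tm->items[i - 1];  /* shift later tasks right */
        i--;
    }
    tm->items[i] = t;
    tm->len++;
    return 0;
}

/* Pop the task with the earliest deadline. The caller passes an
 * allocator and receives a copy it now owns; the manager's copy is
 * removed from the set. Returns NULL if the set is empty. */
static Task *tm_pop(TaskManager *tm, Allocator alloc) {
    if (tm->len == 0) return NULL;
    Task *copy = alloc(sizeof *copy);
    if (!copy) return NULL;
    memcpy(copy, &tm->items[0], sizeof *copy);
    memmove(&tm->items[0], &tm->items[1], (tm->len - 1) * sizeof(Task));
    tm->len--;
    return copy;
}

/* Example payload function from the home-automation domain. */
static void turn_on(void *ctx) {
    printf("turning on: %s\n", (const char *)ctx);
}

int main(void) {
    TaskManager tm;
    tm_init(&tm);
    tm_add(&tm, (Task){ .deadline = 200, .payload = { "porch light", turn_on } });
    tm_add(&tm, (Task){ .deadline = 100, .payload = { "thermostat",  turn_on } });

    Task *t = tm_pop(&tm, malloc);   /* earliest deadline pops first */
    t->payload.fn(t->payload.ctx);   /* prints "turning on: thermostat" */
    free(t);                         /* caller owns the popped copy */

    free(tm.items);
    return 0;
}
```

The point of the pop-with-allocator shape is that the ownership handoff is visible in the signature: whoever calls tm_pop had to supply the memory, so there's no ambiguity about who frees it.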
Yeah, I feel like I get really good results from AI, and this is very much how I prompt as well. It takes care of writing the code and, guided by linters and type-checkers, updates everything that code touches, but it's always executing my architecture and algorithm. I spend time carefully trying to understand the problem before I even begin.
What you're describing makes sense, but that type of prompting is not what people are hyping.
This is a good start. I write prompts as if I were instructing a junior developer to do the things I need. I make them as detailed and clear as I can.
I actually don't like _writing_ code, but I enjoy reading it. So sessions with an LLM are very entertaining, especially when I want to push boundaries ("I'm not liking this, the code seems a little bloated. I'm sure you could simplify X and Y. Also think of any alternative approach you reckon would be more performant that maybe I don't know about"), and so on.
This doesn't save me time, but it makes the work so much more enjoyable.
This is similar to how I prompt, except I start with a text file and design the solution there, then paste it into an LLM after I have read it a few times. Otherwise, if I type directly into the LLM and make a mistake, it tends to come back and haunt me later.
I feel that with such an elaborate description you aren't too far from writing it yourself.
If that's the input needed, then I'd rather write the code myself and rely on smarter autocomplete. That way, while I write the code and think it through, I can judge whether the LLM is doing what I mean, or straying from something reasonable to write and maintain.