prompt plus all the other information the context was seeded with before the output was created (documents, web searches, other sources), in which case it might be more efficient to just consume the final deliverable (yourself or via an LLM).