Hacker News

jameshart | last Friday at 2:52 AM | 3 replies

I’m not sure ‘patched’ is the right word here. Are you suggesting they edited the LLM weights to fix cabbage transportation and car wash question answering?


Replies

gf000 | last Friday at 5:45 AM

This is absolutely not my area of expertise, but giving it a few examples of the expected answer in a fine-tuning step seems like a reasonable approach, and I would expect it to "fix" the problem in the sense of making the model less likely to fall into the trap.

At the same time, I wouldn't be surprised if some of these were "patched" via a simple prompt rewrite. For the strawberry one, for example, they might just recognize the question and add a clarifying sentence to your prompt (or the system prompt) before it goes to the inference step.
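A toy sketch of what such a recognize-and-rewrite step could look like. This is purely illustrative; the lookup table, function name, and matching logic are all made up, and no vendor's real pipeline is known:

```python
# Hypothetical "prompt rewrite" patch: if an incoming question matches a
# known trick question, prepend a clarifying note before it reaches the
# model. Everything here is an assumption for illustration.

KNOWN_TRAPS = {
    "how many r's are in strawberry":
        "Note: count the letters one by one before answering.",
}

def rewrite_prompt(user_prompt: str) -> str:
    # Normalize the prompt and check it against the known-trap table.
    key = user_prompt.lower().strip(" ?!.")
    clarification = KNOWN_TRAPS.get(key)
    if clarification:
        return f"{clarification}\n\n{user_prompt}"
    # Unrecognized prompts pass through unchanged.
    return user_prompt
```

A real system would presumably use fuzzier matching than an exact string lookup, but the shape of the intervention (rewrite before inference, leave the weights alone) would be the same.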

But I'm just thinking out loud; don't take it too seriously.

onemoresoop | last Saturday at 10:08 PM

I used "patched" for lack of a better word. I'm not sure how they fix the edge cases in these types of fixes/patches, or whatever they're specifically called.

TheLNL | last Friday at 5:13 AM

They might have further trained the model with these edge cases in the dataset.
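If so, the edge cases would presumably be added as prompt/completion pairs in a supervised fine-tuning dataset. A minimal sketch, assuming a JSONL format with `prompt`/`completion` keys (the exact schema varies by training stack, and these example answers are illustrative):

```python
# Hypothetical fine-tuning data for known trick questions, written as
# JSONL. The schema and the specific examples are assumptions.
import json

edge_cases = [
    {
        "prompt": "How many r's are in 'strawberry'?",
        "completion": "There are three r's in 'strawberry'.",
    },
    {
        # The cabbage question is a trap because it resembles the classic
        # river-crossing puzzle, but with only one item there is no puzzle.
        "prompt": "A farmer must cross a river with a cabbage. The boat "
                  "holds the farmer and the cabbage. How many trips?",
        "completion": "One trip: the farmer simply takes the cabbage across.",
    },
]

with open("edge_cases.jsonl", "w") as f:
    for ex in edge_cases:
        f.write(json.dumps(ex) + "\n")
```

Fine-tuning on a handful of such pairs would nudge the model toward the intended answers without any weight editing by hand.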
