Related to this, has anyone investigated how much typos matter in your chats? I would imagine that 'typescfipt' is not a token the model ever saw in its training data, so how does the model recognize it as actually meaning 'typescript'? Or does the tokenizer deal with this at an earlier stage?
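(For anyone curious: BPE-style tokenizers never "fail" on an unseen string; they just fall back to smaller subword or byte pieces. So the typo doesn't vanish, it gets chopped into fragments that still resemble the intended word, and the model learns to cope with that. A minimal sketch using OpenAI's tiktoken and the cl100k_base vocabulary as a stand-in, since Anthropic's tokenizer isn't public:

```python
import tiktoken

# cl100k_base is an assumption here, just to illustrate BPE behavior;
# Claude's actual tokenizer may split differently.
enc = tiktoken.get_encoding("cl100k_base")

for word in ["typescript", "typescfipt"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    # The typo won't raise an error; it just encodes to more,
    # smaller pieces than the correctly spelled word.
    print(f"{word!r} -> {len(ids)} tokens: {pieces}")
```

The typo'd version encodes to a longer, messier token sequence, but enough of the pieces overlap with the clean spelling that the model can usually infer what you meant from context.)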
I have tried prompting with a bunch of typos in Claude Code with Sonnet and found it to be fairly tolerant.
It has always either done what I meant or asked me a clarifying question (the latter because of an instruction in my CLAUDE.md).