Weird. No mention of the technical aspects, essentially just blaming average students for not being engaged enough with their simplistic ChatGPT clone. No wonder they haven't yet dared to publish actual usage metrics. If I were given the choice between an inferior product that probably lags significantly behind on every feature and one of the standard offerings from OpenAI, Google, or Anthropic, I'd question why I should use this thing too. According to their website, they position Khanmigo like this:
>Unlike other AI tools such as ChatGPT, Khanmigo doesn’t just give answers. Instead, with limitless patience, it guides learners to find the answer themselves. In addition, Khanmigo is the only AI tool that is incorporated with Khan Academy’s world-class content library that covers math, humanities, coding, social studies, and more.
The first point of differentiation is literally just prompting (if that): nowadays you can tell any chatbot to behave that way. The second may have been an edge before tool use was widely available, but now that every major chatbot has internet access and code execution, it seems this one has become a dud as well. The product was a nice idea on paper, but the fast technical evolution of the field has largely left it in the dust.
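To make the "just prompting" point concrete, here's a minimal sketch of the kind of system prompt that reproduces the "guides learners instead of giving answers" behavior with any modern chat API. The prompt wording, model name, and helper function are all illustrative, not anything Khan Academy has published:

```python
# Illustrative only: a Socratic "guide, don't answer" system prompt of the
# kind any chat-completion-style API accepts. Prompt text and model name
# are hypothetical placeholders, not Khanmigo's actual configuration.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never state the final answer directly. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and offer a hint only when the student is genuinely stuck."
)

def build_request(student_message: str) -> dict:
    """Assemble a request payload in the common chat 'messages' format."""
    return {
        "model": "some-chat-model",  # placeholder; any chat model works
        "messages": [
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    }

if __name__ == "__main__":
    request = build_request("What is 12 * 15?")
    for message in request["messages"]:
        print(message["role"])
```

That's the whole trick: one system message that any of the big providers' APIs will honor, which is why it's hard to see it as a durable moat.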
I mean, isn't the whole Khan Academy approach "we know better how to teach everything"? It's not surprising that they'd think they have more enlightened prompts than anyone else.
They had really cool math videos and were given too much money; that's about the whole story.