As a sizable share of the market is going to want to use this for local LLMs, I do not think this is that misleading.
Most people I know are not using TinyGrad for inference, but CUDA or Vulkan (neither of which are provided here).