Gemma certainly was trained for tool calling, but the implementation in llama.cpp has been troubled because Gemma uses a different chat template format. The processor from the transformers library handles it fine, though.
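To illustrate what "different chat template format" means here: Gemma wraps each turn in `<start_of_turn>`/`<end_of_turn>` markers and renames the `assistant` role to `model`, rather than using ChatML-style `<|im_start|>` tags. A simplified sketch of that rendering (the real template is the Jinja one shipped in the model's tokenizer config, which handles many more cases):

```python
def render_gemma_style(messages):
    # Simplified sketch of Gemma's turn format: the "assistant" role is
    # mapped to "model", and every turn is wrapped in
    # <start_of_turn>/<end_of_turn> markers. The actual template lives in
    # tokenizer_config.json as a Jinja template.
    parts = []
    for m in messages:
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # generation prompt for the reply
    return "".join(parts)

prompt = render_gemma_style(
    [{"role": "user", "content": "What's the weather in Paris?"}]
)
print(prompt)
```

In practice you wouldn't hand-roll this; `tokenizer.apply_chat_template(...)` in transformers renders the real template for you, which is why the transformers path works where a hard-coded template in a serving layer can break.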
Oh I must've missed this.
The AI space moves so fast! I'll check it out again.