The 4-bit quant weighs in at 4.25 GB, and on top of that you need room for the KV cache and the rest of the inference overhead. So yeah, you can definitely run the model on a Pi, but you may be waiting a while for results.
https://huggingface.co/unsloth/gemma-3n-E4B-it-GGUF
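If it helps, here's roughly how you'd pull the quant and run it with llama.cpp. This is a sketch: it assumes you've already built llama.cpp and have `huggingface-cli` installed, and the exact quant filename is a guess, so check the repo's file list first.

```shell
# Filename is an assumption -- verify it against the repo's "Files" tab.
MODEL=gemma-3n-E4B-it-Q4_K_M.gguf

# Download just the one quant file (~4.25 GB) into the current directory.
huggingface-cli download unsloth/gemma-3n-E4B-it-GGUF "$MODEL" --local-dir .

# Run a short generation; bump --threads to match your Pi's core count.
./llama-cli -m "$MODEL" -p "Hello" -n 64 --threads 4
```

On a Pi you'll want to keep the context size small so the KV cache doesn't eat what little RAM is left after the weights.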