Acquire Hardware

Buy a Raspberry Pi 5 (https://www.raspberrypi.com/products/raspberry-pi-5/) and follow the standard Raspberry Pi OS install process.

Local AI Infrastructure

Install Ollama with the following command. Piping an unverified shell script straight into sh is generally not advised (downloading the script and reading it first is safer), but I just threw caution to the wind and ran it.

curl -fsSL https://ollama.com/install.sh | sh

In case this command is updated in the future, it comes from the official Linux download page: https://ollama.com/download/linux

After reading about the various models, phi3 looks like the best fit for a Raspberry Pi (as of this post in July 2025), since it is small enough to run in the Pi's memory. Use the following command to pull and run that model.

ollama run phi3

Note: It will take a while to pull the model the first time. Once the model is pulled, it's much quicker, though it still takes a little time to load the model for each conversation/session.
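You don't have to use the interactive prompt, either: ollama run also accepts a one-shot prompt as its last argument and exits after answering, which makes it easy to script. Here's a minimal Python sketch of that; the helper names are my own, and it assumes the ollama binary is on your PATH and phi3 has already been pulled.

```python
import subprocess

def ollama_cmd(model, prompt):
    # "ollama run MODEL PROMPT" answers once and exits instead of
    # starting an interactive session.
    return ["ollama", "run", model, prompt]

def ask(prompt, model="phi3"):
    # Hypothetical helper: assumes ollama is installed and the model
    # has already been pulled (expect a pause while the model loads).
    result = subprocess.run(
        ollama_cmd(model, prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask("Explain what a Raspberry Pi is in one sentence."))
```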

And there you go! You now have a local AI model you can chat with.
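If you'd rather talk to the model from code than from the terminal, the Ollama install also serves a local HTTP API (by default on port 11434). Here's a sketch using only the Python standard library; the helper names are mine, and it assumes the Ollama service is running locally with phi3 pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-prompt generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="phi3"):
    # stream=False returns the whole answer in one JSON response
    # instead of a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="phi3"):
    # Assumes the Ollama service is running and the model is pulled.
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Why is the sky blue? One sentence, please."))
```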

As a side note, I tried to use the Raspberry Pi AI HAT+, which is based on the Hailo-8 hardware. Unfortunately, it does not work for LLMs; it is focused on image/video analysis. The Hailo-10 will have LLM support, but it has not been released yet (as of July 2025).

So I think my next step will be to purchase an Nvidia Jetson Orin Nano Developer Kit. Stay tuned for those updates!