I built a local LLM server I can access from anywhere, and it uses a Raspberry Pi
…I initially wanted to go with Ollama because of its simple setup process, but it's far less efficient and falls short on raw performance. In the end, I opted for llama.cpp, which…