🧪 Tangnet Local LLM Node Guide

🖥️ System Overview

A Raspberry Pi on the local network at 192.168.1.31 (user brand), running the TinyLlama 1.1B Chat model (Q4_K_M quantized GGUF) through a local llama.cpp build under ~/tangnet/llama.cpp/.

🧠 LLM Runtime (TinyLlama)

Run the model:

~/tangnet/llama.cpp/build/bin/llama-run ~/tangnet/llama.cpp/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf "Your prompt here"
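Running llama-run with no prompt should drop you into an interactive chat session instead of a one-shot completion (behavior of recent llama.cpp builds; worth verifying on yours):

~/tangnet/llama.cpp/build/bin/llama-run ~/tangnet/llama.cpp/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf   # no prompt: interactive chat, Ctrl+C to exit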

Create a bash alias:

echo "alias tangnet='~/tangnet/llama.cpp/build/bin/llama-run ~/tangnet/llama.cpp/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf'" >> ~/.bashrc && source ~/.bashrc

Then just run:

tangnet "What's the mission?"

🔐 Connecting to Pi

From laptop or desktop:

ssh brand@192.168.1.31
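To skip the password prompt on every connection, set up key-based auth once from the laptop (standard OpenSSH tooling):

ssh-keygen -t ed25519            # generate a key pair if you don't have one yet
ssh-copy-id brand@192.168.1.31   # install the public key on the Pi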

Transfer a file from laptop to Pi:

scp path/to/your/model.gguf brand@192.168.1.31:~/tangnet/llama.cpp/
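For large model files, rsync (if installed on both machines) can resume an interrupted transfer, which scp can't:

rsync --partial --progress path/to/your/model.gguf brand@192.168.1.31:~/tangnet/llama.cpp/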

Use VNC Viewer (if enabled):

vncviewer 192.168.1.31:1
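If VNC isn't enabled on the Pi yet, you can switch it on over SSH with raspi-config (standard on Raspberry Pi OS; the non-interactive form below is available on recent releases):

sudo raspi-config                   # Interface Options -> VNC -> Enable
sudo raspi-config nonint do_vnc 0   # non-interactive equivalent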

🔧 Managing the Pi

Start, stop, or reboot properly:

sudo shutdown -h now   # Shutdown safely
sudo reboot            # Reboot the Pi

Check CPU temperature:

vcgencmd measure_temp
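Both vcgencmd subcommands below ship with Raspberry Pi OS: the first polls the temperature while you run inference, the second reports whether the firmware has ever throttled the SoC:

watch -n 5 vcgencmd measure_temp   # refresh the reading every 5 seconds
vcgencmd get_throttled             # throttled=0x0 means no undervoltage or thermal throttling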

Monitor processes:

htop
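If htop isn't already installed, it's in the standard Raspberry Pi OS repos:

sudo apt install -y htop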

Best practice: shut the Pi down when it won't be used for a day or more; leave it running only for automation or other persistent workloads.

🧪 Handling the Beast

This Pi is now your edge node for LLM inference. Treat it with respect: never yank the power, since cutting power mid-write can corrupt the SD card; always shut down cleanly. For now, power it on when needed, SSH/VNC in, and fire up the llama.
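On Debian-based systems like Raspberry Pi OS, ~/.bashrc returns early for non-interactive shells, so the tangnet alias won't exist inside a one-shot SSH command; use the full paths instead:

ssh brand@192.168.1.31 '~/tangnet/llama.cpp/build/bin/llama-run ~/tangnet/llama.cpp/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf "Status report"'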

Future plans: hook it into a smart outlet and run a local API!
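llama.cpp already builds an HTTP server binary alongside llama-run, so the local API half of that plan is close at hand. A minimal sketch, assuming llama-server was built on this Pi and picking port 8080 arbitrarily:

# On the Pi: serve the model over the local network
~/tangnet/llama.cpp/build/bin/llama-server -m ~/tangnet/llama.cpp/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf --host 0.0.0.0 --port 8080

# From the laptop: query the completion endpoint
curl http://192.168.1.31:8080/completion -d '{"prompt": "What is the mission?", "n_predict": 64}'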

🧠 Rick Note:

"Morty, we got a local brain running on a Pi, and it’s not even melting, Morty. That’s like fitting a black hole in a lunchbox. Don’t screw this up."