The easiest & fastest way
to run customized and fine-tuned Large Language Models (LLMs) locally or on the edge
Lightweight, fast, portable, Rust-powered, and OpenAI-compatible