~/just-llms_
The unified LLM gateway.
A production-ready Python library for routing, analytics, and cost optimization across LLM providers.
main.py
from justllms import JustLLM

# Initialize with your API keys
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "google": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"},
        "ollama": {"base_url": "http://localhost:11434"}
    }
})

# Simple completion - uses configured fallback or first available provider
response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)
print(response.content)
60%
COST REDUCTION
7+
PROVIDERS
1.1MB
PACKAGE SIZE
Zero
DEPENDENCIES
System Specifications.
Automatic Fallbacks
Configure fallback models once; if a provider goes down, requests are instantly re-routed to the next available model.
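The fallback behavior described above can be sketched generically. The loop below is an illustrative pattern only, not justllms' internal implementation, and the provider callables are placeholders standing in for real API clients.

```python
# Illustrative fallback pattern (not justllms' internals): try each
# provider in priority order and return the first successful response.
def complete_with_fallback(providers, prompt):
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)  # first provider that succeeds wins
        except Exception as exc:       # a real gateway would catch narrower errors
            errors[name] = exc         # record the failure and try the next provider
    raise RuntimeError(f"All providers failed: {errors}")

# Fake providers standing in for real API clients.
def flaky(prompt):
    raise TimeoutError("provider down")

def healthy(prompt):
    return f"echo: {prompt}"

used, reply = complete_with_fallback([("openai", flaky), ("ollama", healthy)], "hi")
print(used, reply)  # → ollama echo: hi
```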
Provider-Agnostic Tools
Define tools once with @tool and use them seamlessly across OpenAI, Anthropic, and Google.
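Per the description above, a tool is defined once and reused across providers. The decorator below is a minimal stand-in for that idea, capturing a function's name, docstring, and parameters into one provider-agnostic schema; justllms' real `@tool` decorator and import path are not shown here and may differ.

```python
import inspect

# Minimal stand-in for a provider-agnostic @tool decorator: it captures
# the function's name, docstring, and parameter names into one schema
# that a gateway could translate to OpenAI/Anthropic/Google tool formats.
# (Illustrative only; justllms' real @tool may behave differently.)
def tool(fn):
    fn.schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(inspect.signature(fn).parameters),
    }
    return fn

@tool
def get_weather(city: str, unit: str = "celsius") -> str:
    """Return a short weather summary for a city."""
    return f"22 degrees {unit} in {city}"

print(get_weather.schema["name"])        # → get_weather
print(get_weather.schema["parameters"])  # → ['city', 'unit']
```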
SXS Model Comparison
Interactive CLI tool to compare latency, token usage, and output across models side-by-side.
Native Tools
Out-of-the-box support for Server-Side Google Search and sandboxed Python Code Execution.
Zero Config
Installs in seconds. No complex YAML chains or heavy dependency bloat. Native Ollama support.