Random Studios
vLLM Gateway
Use https://ollama.random-studios.net/v1 as your OpenAI-compatible base URL.
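For a quick smoke test outside any agent, the chat-completions endpoint can be called directly. A minimal sketch using only the Python standard library; `build_chat_request` is a hypothetical helper, and the token and prompt are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://ollama.random-studios.net/v1"

def build_chat_request(token: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request (not sent here)."""
    payload = {
        "model": "qwen3-coder",  # alias served by the gateway
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # bearer token from the gateway
        },
        method="POST",
    )

# Actually sending it is left to the caller, e.g.:
# with urllib.request.urlopen(build_chat_request("<your-token>", "hi")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```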
OpenCode
In OpenCode, type /connect and select ollama.random-studios.net.
Example provider config:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Random Studios Ollama",
      "options": {
        "baseURL": "https://ollama.random-studios.net/v1"
      },
      "models": {
        "qwen3-coder": {
          "name": "Qwen3.6 27B AWQ INT4 (vLLM default)"
        }
      }
    }
  }
}
qwen3-coder is the alias for cyankiwi/Qwen3.6-27B-AWQ-INT4, the model currently served by the gateway.
pi-coding-agent
Add this provider to ~/.pi/agent/models.json:
{
  "providers": {
    "remote-vllm": {
      "baseUrl": "https://ollama.random-studios.net/v1",
      "api": "openai-completions",
      "apiKey": "<your-token>",
      "compat": {
        "supportsDeveloperRole": false,
        "supportsReasoningEffort": false
      },
      "models": [
        {
          "id": "qwen3-coder",
          "name": "qwen3-coder",
          "reasoning": true,
          "contextWindow": 262144,
          "maxTokens": 32768
        }
      ]
    }
  }
}
Replace <your-token> with your bearer token from the gateway.
Auth
Requests to the API require a bearer token.
Authorization: Bearer <your-token>
The homepage is public. API endpoints stay protected.
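Concretely, every API call carries the token in a standard HTTP header. A small sketch in Python; `with_bearer` is a hypothetical helper and the token value is a placeholder:

```python
def with_bearer(headers: dict, token: str) -> dict:
    """Return a copy of `headers` carrying the gateway's bearer token."""
    return {**headers, "Authorization": f"Bearer {token}"}

headers = with_bearer({"Content-Type": "application/json"}, "<your-token>")
# headers["Authorization"] is now "Bearer <your-token>"
```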
Serving Notes
- This endpoint is now backed by vLLM, not Ollama.
- The gateway currently serves one shared model at a time.
- Clients should continue using the same bearer-token auth and the OpenAI-compatible /v1 API path.
- Some Ollama-specific features, such as model listing and per-user upstream headers, are no longer part of the backend.
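Because the gateway keeps the OpenAI-compatible response shape, replies can be parsed the same way as responses from any /v1 chat-completions server. A sketch with an illustrative response body (the field values are made up; only the accessed fields follow the OpenAI-compatible schema):

```python
import json

# Illustrative /v1/chat/completions response body (values are invented).
sample = json.loads("""
{
  "model": "qwen3-coder",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7}
}
""")

# The assistant's reply lives at choices[0].message.content.
reply = sample["choices"][0]["message"]["content"]
print(reply)
```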
Available Models
- qwen3-coder: Qwen3.6 27B AWQ-INT4, aliased to cyankiwi/Qwen3.6-27B-AWQ-INT4