mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-17 05:00:16 +00:00
docs(coding-agent): improve models.md with minimal example and defaults
- Add minimal example showing only required fields for local models
- Rename 'Basic Example' to 'Full Example' showing all fields
- Add 'Default' column to model configuration table
- Clarify that apiKey is required but ignored by Ollama
This commit is contained in:
parent
c8b8f043a7
commit
8306b3bc20
1 changed file with 43 additions and 14 deletions
@@ -4,14 +4,17 @@ Add custom providers and models (Ollama, vLLM, LM Studio, proxies) via `~/.pi/ag
 
 ## Table of Contents
 
-- [Basic Example](#basic-example)
+- [Minimal Example](#minimal-example)
+- [Full Example](#full-example)
 - [Supported APIs](#supported-apis)
 - [Provider Configuration](#provider-configuration)
 - [Model Configuration](#model-configuration)
 - [Overriding Built-in Providers](#overriding-built-in-providers)
 - [OpenAI Compatibility](#openai-compatibility)
 
-## Basic Example
+## Minimal Example
+
+For local models (Ollama, LM Studio, vLLM), only `id` is required per model:
 
 ```json
 {
@@ -19,12 +22,38 @@ Add custom providers and models (Ollama, vLLM, LM Studio, proxies) via `~/.pi/ag
     "ollama": {
       "baseUrl": "http://localhost:11434/v1",
       "api": "openai-completions",
+      "apiKey": "ollama",
+      "models": [
+        { "id": "llama3.1:8b" },
+        { "id": "qwen2.5-coder:7b" }
+      ]
+    }
+  }
+}
+```
+
+The `apiKey` is required but Ollama ignores it, so any value works.
+
+## Full Example
+
+Override defaults when you need specific values:
+
+```json
+{
+  "providers": {
+    "ollama": {
+      "baseUrl": "http://localhost:11434/v1",
+      "api": "openai-completions",
+      "apiKey": "ollama",
       "models": [
         {
-          "id": "llama-3.1-8b",
+          "id": "llama3.1:8b",
           "name": "Llama 3.1 8B (Local)",
+          "reasoning": false,
+          "input": ["text"],
           "contextWindow": 128000,
-          "maxTokens": 32000
+          "maxTokens": 32000,
+          "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
         }
       ]
     }
@@ -95,16 +124,16 @@ The `apiKey` and `headers` fields support three formats:
 
 ## Model Configuration
 
-| Field | Required | Description |
-|-------|----------|-------------|
-| `id` | Yes | Model identifier |
-| `name` | No | Display name |
-| `api` | No | Override provider's API for this model |
-| `contextWindow` | No | Context window size in tokens |
-| `maxTokens` | No | Maximum output tokens |
-| `reasoning` | No | Supports extended thinking |
-| `input` | No | Input types: `["text"]` or `["text", "image"]` |
-| `cost` | No | `{"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0}` |
+| Field | Required | Default | Description |
+|-------|----------|---------|-------------|
+| `id` | Yes | — | Model identifier (passed to the API) |
+| `name` | No | `id` | Display name in model selector |
+| `api` | No | provider's `api` | Override provider's API for this model |
+| `reasoning` | No | `false` | Supports extended thinking |
+| `input` | No | `["text"]` | Input types: `["text"]` or `["text", "image"]` |
+| `contextWindow` | No | `128000` | Context window size in tokens |
+| `maxTokens` | No | `16384` | Maximum output tokens |
+| `cost` | No | all zeros | `{"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0}` (per million tokens) |
 
 ## Overriding Built-in Providers
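As a sanity check on the full-example configuration introduced by this commit, the JSON can be parsed and checked against the required/default columns of the model table. This is an illustrative sketch only; the field checks below mirror the documented schema and are not the agent's actual validator:

```python
import json

# Full-example provider configuration from the diff above.
config_text = """
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        {
          "id": "llama3.1:8b",
          "name": "Llama 3.1 8B (Local)",
          "reasoning": false,
          "input": ["text"],
          "contextWindow": 128000,
          "maxTokens": 32000,
          "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
        }
      ]
    }
  }
}
"""

config = json.loads(config_text)
for name, provider in config["providers"].items():
    # Providers carry their own endpoint, API flavor, and key.
    for field in ("baseUrl", "api", "apiKey"):
        assert field in provider, f"provider {name!r} missing {field!r}"
    for model in provider["models"]:
        # Per the model table, only `id` is required; everything else defaults.
        assert "id" in model, f"model in {name!r} missing 'id'"
print("config OK")
```

Running this confirms the example is well-formed JSON and that each model carries the one required field.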