fix(ai): skip errored/aborted assistant messages in transform-messages

Fixes OpenAI Responses 400 error 'reasoning without following item' by
skipping errored/aborted assistant messages entirely rather than filtering
at the provider level. This covers openai-responses, openai-codex-responses,
and future providers.

Removes strictResponsesPairing compat option (no longer needed).

Closes #838
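The approach described above can be sketched as follows. This is a minimal illustration, not the actual diff: the type shapes and the `transformMessages` name are assumptions, but it shows the idea of dropping errored/aborted assistant messages once, before any provider-specific conversion runs.

```typescript
// Hypothetical message shapes; real ones in the codebase will differ.
interface AssistantMessage {
  role: "assistant";
  stopReason: "stop" | "error" | "aborted";
  content: string;
}

interface UserMessage {
  role: "user";
  content: string;
}

type Message = AssistantMessage | UserMessage;

// Skip errored/aborted assistant messages entirely, so no provider
// (openai-responses, openai-codex-responses, ...) ever sees a dangling
// reasoning item without its required following item.
function transformMessages(messages: Message[]): Message[] {
  return messages.filter(
    (m) =>
      m.role !== "assistant" ||
      (m.stopReason !== "error" && m.stopReason !== "aborted")
  );
}
```

Doing this in the shared transform step, rather than per provider, is what lets the `strictResponsesPairing` compat option be removed.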
This commit is contained in:
Mario Zechner 2026-01-19 15:55:18 +01:00
parent abb1775ff7
commit 2d27a2c728
10 changed files with 109 additions and 52 deletions


@@ -751,14 +751,6 @@ To fully replace a built-in provider with custom models, include the `models` ar
 | `supportsUsageInStreaming` | Whether provider supports `stream_options: { include_usage: true }`. Default: `true` |
 | `maxTokensField` | Use `max_completion_tokens` or `max_tokens` |
-**OpenAI Responses (`openai-responses`):**
-| Field | Description |
-|-------|-------------|
-| `strictResponsesPairing` | Enforce strict reasoning/message pairing when replaying OpenAI Responses history on providers like Azure (default: `false`) |
-If you see 400 errors like "item of type 'reasoning' was provided without its required following item" or "message/function_call was provided without its required reasoning item", set `compat.strictResponsesPairing: true` on the affected model in `models.json`.
 **Live reload:** The file reloads each time you open `/model`. Edit during session; no restart needed.
 **Model selection priority:**
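For context, the removed docs told users to set `compat.strictResponsesPairing: true` on the affected model in `models.json`. A sketch of what that pre-change workaround may have looked like — the surrounding structure of `models.json` is assumed, only the `compat.strictResponsesPairing` field comes from the removed documentation:

```json
{
  "providers": {
    "openai-responses": {
      "models": [
        {
          "id": "example-model",
          "compat": { "strictResponsesPairing": true }
        }
      ]
    }
  }
}
```

After this commit, the workaround is unnecessary: errored/aborted assistant messages are skipped for all providers in the shared transform step.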