Send a completion request to a selected model (text-only format)
The model ID to use. If unspecified, the user’s default is used.
The text prompt to complete.
Enable streaming of results.
Maximum number of tokens (range: [1, context_length)).
Sampling temperature (range: [0, 2]).
Seed for deterministic outputs.
Top-p sampling value (range: (0, 1]).
Top-k sampling value (range: [1, Infinity)).
Frequency penalty (range: [-2, 2]).
Presence penalty (range: [-2, 2]).
Repetition penalty (range: (0, 2]).
Mapping of token IDs to bias values.
Number of top log probabilities to return.
Minimum probability threshold (range: [0, 1]).
Alternate top sampling parameter (range: [0, 1]).
List of prompt transforms (OpenRouter-only).
Alternate list of models for routing overrides.
Model routing strategy (OpenRouter-only).
Preferences for provider routing.
Configuration for model reasoning/thinking tokens.
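As a concrete illustration, here is a minimal sketch of a text-only completion request using a few of the parameters above. It assumes an OpenRouter-style endpoint at `https://openrouter.ai/api/v1/completions`, JSON field names matching the parameter names described here (`model`, `prompt`, `max_tokens`, `temperature`, `top_p`, `seed`), and Bearer-token authentication; confirm these details against the actual API surface you are targeting.

```python
# Minimal sketch of a text-only completion request (endpoint path, field names,
# and auth scheme are assumptions, not confirmed by this reference).
import os
import requests

API_URL = "https://openrouter.ai/api/v1/completions"  # assumed endpoint path
API_KEY = os.environ["OPENROUTER_API_KEY"]            # assumed Bearer-token auth

payload = {
    "model": "openai/gpt-4o",      # model ID; omit to use the user's default
    "prompt": "Once upon a time",  # the text prompt to complete
    "max_tokens": 128,             # must stay within [1, context_length)
    "temperature": 0.7,            # sampling temperature in [0, 2]
    "top_p": 0.9,                  # top-p sampling in (0, 1]
    "seed": 42,                    # optional: request deterministic outputs
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())
```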
Successful completion
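When `stream` is enabled, the sketch below shows one way to consume the results incrementally. It assumes the endpoint emits server-sent-event style lines prefixed with `data: `, a `[DONE]` sentinel at end of stream, and completion text under `choices[0].text`, as OpenAI-compatible completion APIs commonly do; verify the actual wire format before relying on it.

```python
# Sketch of consuming a streamed completion, assuming an SSE-style "data: ..." format.
import json
import os
import requests

API_URL = "https://openrouter.ai/api/v1/completions"  # assumed endpoint path
API_KEY = os.environ["OPENROUTER_API_KEY"]

payload = {
    "model": "openai/gpt-4o",
    "prompt": "Write a haiku about the sea:",
    "stream": True,  # enable streaming of results
}

with requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":  # assumed end-of-stream sentinel
            break
        event = json.loads(chunk)
        # Assumed response shape: choices[0].text carries each streamed text fragment.
        print(event["choices"][0].get("text", ""), end="", flush=True)
```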