Interface: ModelParams

Properties
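
Taken together, the entries below imply the following shape for the interface. This is a sketch reconstructed from the documented properties; the actual declaration in langchain/src/chat_models/openai.ts may differ in member ordering or include inherited members.

```typescript
// Sketch of ModelParams, reconstructed from the property entries below.
interface ModelParams {
  /** Sampling temperature to use, between 0 and 2; defaults to 1 */
  temperature: number;
  /** Total probability mass of tokens to consider at each step, between 0 and 1; defaults to 1 */
  topP: number;
  /** Penalizes repeated tokens according to frequency */
  frequencyPenalty: number;
  /** Penalizes tokens that have already appeared, regardless of frequency */
  presencePenalty: number;
  /** Number of chat completions to generate for each prompt */
  n: number;
  /** Dictionary used to adjust the probability of specific tokens being generated */
  logitBias?: Record<string, number>;
  /** Whether to stream the results; enabling streaming disables tokenUsage reporting */
  streaming: boolean;
  /** Maximum number of tokens to generate; defaults to the model's maximum if unset */
  maxTokens?: number;
}
```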

frequencyPenalty

frequencyPenalty: number

Penalizes repeated tokens according to frequency

Defined in

langchain/src/chat_models/openai.ts:74


logitBias

Optional logitBias: Record<string, number>

Dictionary used to adjust the probability of specific tokens being generated

Defined in

langchain/src/chat_models/openai.ts:83
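
In the OpenAI API, the keys of this dictionary are tokenizer token IDs rendered as strings, and the values are bias amounts (between -100 and 100) added to the corresponding logits before sampling. A hypothetical example with made-up token IDs:

```typescript
// Hypothetical token IDs; real IDs depend on the model's tokenizer.
// -100 effectively bans a token; +100 effectively forces it.
const logitBias: Record<string, number> = {
  "50256": -100, // strongly suppress this token
  "11": 5,       // mildly encourage this token
};
```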


maxTokens

Optional maxTokens: number

Maximum number of tokens to generate in the completion. If not specified, defaults to the maximum number of tokens allowed by the model.

Defined in

langchain/src/chat_models/openai.ts:92


n

n: number

Number of chat completions to generate for each prompt

Defined in

langchain/src/chat_models/openai.ts:80


presencePenalty

presencePenalty: number

Penalizes tokens that have already appeared, regardless of frequency

Defined in

langchain/src/chat_models/openai.ts:77


streaming

streaming: boolean

Whether to stream the results. Enabling streaming disables tokenUsage reporting

Defined in

langchain/src/chat_models/openai.ts:86


temperature

temperature: number

Sampling temperature to use, between 0 and 2; defaults to 1

Defined in

langchain/src/chat_models/openai.ts:68


topP

topP: number

Total probability mass of tokens to consider at each step, between 0 and 1; defaults to 1

Defined in

langchain/src/chat_models/openai.ts:71
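
A minimal usage sketch, assuming these params are accepted as constructor fields by the ChatOpenAI model defined in this file (the import path and constructor surface here are assumptions, not confirmed by this page):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

// Each field below corresponds to a ModelParams property documented above.
const chat = new ChatOpenAI({
  temperature: 0.7,   // sampling temperature, 0-2
  topP: 1,            // consider the full probability mass
  frequencyPenalty: 0,
  presencePenalty: 0,
  n: 1,               // one completion per prompt
  streaming: false,   // keep tokenUsage reporting enabled
  maxTokens: 256,     // cap completion length
});
```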