Interface: ModelParams

llms.internal.ModelParams

Properties
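
Taken together, the properties documented below form the following shape. This is a sketch reconstructed from the fields on this page, not the source declaration itself:

```typescript
interface ModelParams {
  /** Sampling temperature to use */
  temperature: number;
  /** Maximum number of tokens to generate; -1 means as many as the context allows */
  maxTokens: number;
  /** Total probability mass of tokens to consider at each step */
  topP: number;
  /** Penalizes repeated tokens according to frequency */
  frequencyPenalty: number;
  /** Penalizes repeated tokens */
  presencePenalty: number;
  /** Number of completions to generate for each prompt */
  n: number;
  /** Generates bestOf completions server-side and returns the "best" */
  bestOf: number;
  /** Adjusts the probability of specific tokens being generated */
  logitBias?: Record<string, number>;
  /** Whether to stream the results */
  streaming: boolean;
}
```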

bestOf

bestOf: number

Generates `bestOf` completions server-side and returns the "best" (the one with the highest log probability per token).

Defined in

langchain/src/llms/openai.ts:41
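
For illustration, a minimal sketch, assuming the `OpenAI` wrapper exported from this module accepts `ModelParams` fields in its constructor (the import path may vary by package version):

```typescript
import { OpenAI } from "langchain/llms/openai";

// Ask the server for 4 candidate completions and return only the best one.
// Note: bestOf must be greater than or equal to n.
const model = new OpenAI({ bestOf: 4, n: 1 });
const res = await model.call("Write a tagline for a coffee shop.");
console.log(res);
```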


frequencyPenalty

frequencyPenalty: number

Penalizes repeated tokens in proportion to how frequently they have already appeared in the text.

Defined in

langchain/src/llms/openai.ts:32


logitBias

Optional logitBias: Record<string, number>

Dictionary mapping token IDs to bias values, used to adjust the probability of specific tokens being generated.

Defined in

langchain/src/llms/openai.ts:44
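
A hedged sketch of how the dictionary is shaped. Token IDs are tokenizer-specific; the ID below assumes the GPT-3-family tokenizer:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Keys are tokenizer token IDs (as strings); values are biases in [-100, 100].
// -100 effectively bans a token; 100 effectively forces it when reachable.
// Token ID 50256 is the end-of-text token in the GPT-3-family tokenizer.
const model = new OpenAI({
  logitBias: { "50256": -100 },
});
```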


maxTokens

maxTokens: number

Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.

Defined in

langchain/src/llms/openai.ts:26
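
A short sketch of both modes, assuming the `OpenAI` wrapper forwards this field:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Cap completions at 256 tokens.
const capped = new OpenAI({ maxTokens: 256 });

// -1 requests as many tokens as the prompt leaves room for in the
// model's context window.
const uncapped = new OpenAI({ maxTokens: -1 });
```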


n

n: number

Number of completions to generate for each prompt.

Defined in

langchain/src/llms/openai.ts:38
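
A sketch of retrieving all `n` candidates via `generate()`, which returns one list of generations per prompt:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Request 3 completions for each prompt.
const model = new OpenAI({ n: 3 });
const result = await model.generate(["Suggest a name for a cat."]);
// generations[0] holds the 3 candidates for the first (and only) prompt.
console.log(result.generations[0].map((g) => g.text));
```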


presencePenalty

presencePenalty: number

Penalizes tokens that have already appeared at all, regardless of how often (contrast with `frequencyPenalty`).

Defined in

langchain/src/llms/openai.ts:35
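
A minimal sketch contrasting the two penalties, assuming the constructor accepts both fields:

```typescript
import { OpenAI } from "langchain/llms/openai";

// frequencyPenalty grows with how often a token has already appeared;
// presencePenalty is a flat penalty once a token has appeared at all.
// Positive values (up to 2.0) discourage repetition.
const model = new OpenAI({
  frequencyPenalty: 0.5,
  presencePenalty: 0.5,
});
```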


streaming

streaming: boolean

Whether to stream the results. Enabling streaming disables `tokenUsage` reporting in the response.

Defined in

langchain/src/llms/openai.ts:47
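
A sketch of consuming the stream, assuming the package's `callbacks` option with a `handleLLMNewToken` handler:

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
  streaming: true,
  callbacks: [
    {
      // Called once per token as the server emits it.
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});

await model.call("Tell me a joke.");
// With streaming on, llmOutput.tokenUsage is left empty in generate() results.
```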


temperature

temperature: number

Sampling temperature to use; higher values produce more random output, lower values more deterministic output.

Defined in

langchain/src/llms/openai.ts:20


topP

topP: number

Total probability mass of tokens to consider at each step (nucleus sampling).

Defined in

langchain/src/llms/openai.ts:29
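
A sketch contrasting the two sampling controls, assuming the constructor accepts both fields:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Higher temperature flattens the token distribution toward randomness;
// 0 is near-deterministic.
const creative = new OpenAI({ temperature: 0.9 });

// topP keeps only the smallest set of tokens whose cumulative probability
// reaches the threshold (nucleus sampling). OpenAI's guidance is to tune
// temperature or topP, not both.
const focused = new OpenAI({ temperature: 1, topP: 0.1 });
```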