Interface: OpenAIInput

chat_models.internal.OpenAIInput

Input to OpenAI class.

Properties

frequencyPenalty

frequencyPenalty: number

Penalizes repeated tokens according to frequency

Inherited from

ModelParams.frequencyPenalty

Defined in

langchain/src/chat_models/openai.ts:74


logitBias

Optional logitBias: Record<string, number>

Dictionary used to adjust the probability of specific tokens being generated

Inherited from

ModelParams.logitBias

Defined in

langchain/src/chat_models/openai.ts:83
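
For illustration, a logitBias value maps token IDs (as strings) to a bias applied to that token's logits; in the OpenAI API the usable range is roughly -100 (effectively ban the token) to 100 (effectively force it). A minimal sketch — the token IDs below are placeholders, not real tokenizer output:

```typescript
// Sketch of a logitBias map matching Record<string, number>.
// Keys are token IDs as strings; values bias the token's likelihood.
// The IDs below are illustrative placeholders, not real tokenizer IDs.
const logitBias: Record<string, number> = {
  "50256": -100, // strongly suppress this token
  "1234": 5,     // mildly increase this token's likelihood
};
```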


maxTokens

Optional maxTokens: number

Maximum number of tokens to generate in the completion. If not specified, defaults to the maximum number of tokens allowed by the model.

Inherited from

ModelParams.maxTokens

Defined in

langchain/src/chat_models/openai.ts:92


modelKwargs

Optional modelKwargs: Kwargs

Holds any additional parameters that are valid to pass to openai.create that are not explicitly specified on this class.

Defined in

langchain/src/chat_models/openai.ts:107


modelName

modelName: string

Model name to use

Defined in

langchain/src/chat_models/openai.ts:101


n

n: number

Number of chat completions to generate for each prompt

Inherited from

ModelParams.n

Defined in

langchain/src/chat_models/openai.ts:80


presencePenalty

presencePenalty: number

Penalizes repeated tokens

Inherited from

ModelParams.presencePenalty

Defined in

langchain/src/chat_models/openai.ts:77


stop

Optional stop: string[]

List of stop sequences to use when generating

Defined in

langchain/src/chat_models/openai.ts:110


streaming

streaming: boolean

Whether to stream the results. Enabling streaming disables tokenUsage reporting

Inherited from

ModelParams.streaming

Defined in

langchain/src/chat_models/openai.ts:86


temperature

temperature: number

Sampling temperature to use, between 0 and 2, defaults to 1

Inherited from

ModelParams.temperature

Defined in

langchain/src/chat_models/openai.ts:68


timeout

Optional timeout: number

Timeout to use when making requests to OpenAI.

Defined in

langchain/src/chat_models/openai.ts:115


topP

topP: number

Total probability mass of tokens to consider at each step, between 0 and 1, defaults to 1

Inherited from

ModelParams.topP

Defined in

langchain/src/chat_models/openai.ts:71
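
Taken together, an OpenAIInput value is a plain object carrying these fields. Below is a hedged sketch: the field names and types come from this page, but the interface is re-declared locally for illustration (the real one lives in langchain/src/chat_models/openai.ts), and the example values, including the model name, are plausible defaults rather than values confirmed by this document:

```typescript
// Local mirror of the documented fields, for illustration only.
interface OpenAIInputSketch {
  temperature: number;      // 0..2, defaults to 1
  topP: number;             // total probability mass, 0..1, defaults to 1
  frequencyPenalty: number; // penalizes repeated tokens by frequency
  presencePenalty: number;  // penalizes repeated tokens
  n: number;                // chat completions to generate per prompt
  streaming: boolean;       // enabling disables tokenUsage reporting
  modelName: string;        // model name to use
  maxTokens?: number;       // defaults to the model's maximum if omitted
  logitBias?: Record<string, number>;
  stop?: string[];          // stop sequences
  timeout?: number;         // request timeout
  modelKwargs?: Record<string, unknown>; // extra params passed to openai.create
}

// Example values; the model name is an assumption, not taken from this page.
const params: OpenAIInputSketch = {
  temperature: 1,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
  n: 1,
  streaming: false,
  modelName: "gpt-3.5-turbo",
  stop: ["\n\n"],
};
```

Only the properties documented as Optional may be omitted; the rest are required by the interface, which is why the example sets all of them.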