Integrations: LLMs
LangChain offers a number of LLM implementations that integrate with various model providers. These are:
OpenAI
import { OpenAI } from "langchain/llms";
const model = new OpenAI({ temperature: 0.9 });
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
HuggingFaceInference
npm install @huggingface/inference
yarn add @huggingface/inference
import { HuggingFaceInference } from "langchain/llms";
const model = new HuggingFaceInference({ model: "gpt2" });
const res = await model.call("1 + 1 =");
console.log({ res });
Cohere
npm install cohere-ai
yarn add cohere-ai
import { Cohere } from "langchain/llms";
const model = new Cohere({ maxTokens: 20 });
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
Replicate
npm install replicate
yarn add replicate
import { Replicate } from "langchain/llms";
const model = new Replicate({
model:
"daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8",
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
Additional LLM Implementations
PromptLayerOpenAI
LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:
- Create a PromptLayer account here: https://promptlayer.com.
- Create an API token and pass it either as the promptLayerApiKey argument in the PromptLayerOpenAI constructor or in the PROMPTLAYER_API_KEY environment variable.
import { PromptLayerOpenAI } from "langchain/llms";
const model = new PromptLayerOpenAI({ temperature: 0.9 });
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
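If you prefer not to rely on the PROMPTLAYER_API_KEY environment variable, the promptLayerApiKey constructor argument described above can be passed directly. A minimal sketch (the key string is a placeholder, not a real credential):

```typescript
import { PromptLayerOpenAI } from "langchain/llms";

// Pass the PromptLayer API key explicitly instead of reading it
// from the PROMPTLAYER_API_KEY environment variable.
const model = new PromptLayerOpenAI({
  temperature: 0.9,
  promptLayerApiKey: "YOUR_PROMPTLAYER_API_KEY", // placeholder value
});
```

Either approach configures the same logging behavior; the environment variable is usually preferable so the key stays out of source code.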
The request and the response will be logged in the PromptLayer dashboard.
Note: In streaming mode PromptLayer will not log the response.