How to use AI Endpoints, LangChain and JavaScript to create a chatbot


Continuation of the series of blog posts on how to use AI Endpoints with LangChain.

Have a look at our previous blog posts:
Enhance your applications with AI Endpoints
How to use AI Endpoints and LangChain4j
LLMs streaming with AI Endpoints and LangChain4j
How to use AI Endpoints and LangChain to create a chatbot

In the world of generative AI and LLMs, LangChain is one of the best-known frameworks for simplifying the use of LLMs through API calls.

LangChain’s tools and APIs simplify the process of building LLM-driven applications like chatbots and virtual agents.

LangChain is also available for JavaScript, through the LangChain.js library.

And, of course, we’ll use our AI Endpoints product to access various LLMs 🤩.

ℹ️ All the source code used in this blog post is available in our GitHub repository: public-cloud-examples/tree/main/ai/ai-endpoints/js-langchain-chatbot ℹ️

Blocking chatbot

Let’s start by creating a simple chatbot with LangChain and AI Endpoints.

The first step is to get the necessary dependencies. To do this, create a package.json file:

{
  "name": "js-langchain-chatbot",
  "version": "1.0.0",
  "type": "module",
  "description": "Chatbot example with LangChain and AI Endpoints",
  "main": "chatbot.js",
  "scripts": {
    "start": "node chatbot.js"
  },
  "author": "OVHcloud",
  "license": "Apache-2.0",
  "dependencies": {
    "@langchain/mistralai": "^0.0.22",
    "@langchain/openai": "^0.0.34",
    "commander": "^12.1.0",
    "langchain": "^0.2.2"
  }
}

Then run the npm install command to download the dependencies.

Let’s develop our chatbot:

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { Command } from 'commander';

/**
 * Function to do a chat completion request to the LLM given a user question.
 * @param question (String) the question to ask to the LLM.
 */
async function chatCompletion(question) {
  const promptTemplate = ChatPromptTemplate.fromMessages([
    ["system", "You are Nestor, a virtual assistant. Answer the question."],
    ["human", "{question}"]
  ]);

  // Use Mixtral-8x22B as LLM
  const model = new ChatMistralAI({
    modelName: "Mixtral-8x22B-Instruct-v0.1",
    model: "Mixtral-8x22B-Instruct-v0.1",
    apiKey: "None",
    endpoint: "https://mixtral-8x22b-instruct-v01.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/",
    maxTokens: 1500,
    streaming: false,
    verbose: false,
  });

  // Chain the model to the prompt to "apply it"
  const chain = promptTemplate.pipe(model);
  const response = await chain.invoke({ question: question });
  
  console.log(response.content);
}


/**
 * Main entry of the CLI.
 * Parameter --question is used to pass the question to the LLM.
 **/
const program = new Command();

program
  .option('--question <value>', 'Question to ask the LLM.', 'What is the meaning of life?')
  .parse(process.argv);

const options = program.opts();
chatCompletion(options.question);
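
Before the chain calls the model, the prompt template simply substitutes the {question} variable into the message list. A minimal sketch of that substitution in plain Node (this is only an illustration of the concept, not LangChain's actual implementation):

```javascript
// Conceptual sketch of prompt-template variable substitution,
// mimicking what ChatPromptTemplate does with {question}.
// Illustration only; LangChain's real templating is richer.
function formatTemplate(template, values) {
  // Replace every {name} placeholder with the matching value,
  // leaving unknown placeholders untouched
  return template.replace(/\{(\w+)\}/g, (_, key) => values[key] ?? `{${key}}`);
}

const messages = [
  ["system", "You are Nestor, a virtual assistant. Answer the question."],
  ["human", formatTemplate("{question}", { question: "What is OVHcloud?" })],
];

console.log(messages[1][1]); // "What is OVHcloud?"
```

The chain then sends these fully formatted messages to the model in a single request.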

And that’s all, you can now use your new chatbot assistant:

$ node chatbot.js --question "What is OVHcloud?"

OVHcloud is a global cloud computing company that offers a wide range of services including web hosting, 
virtual private servers, cloud storage, and dedicated servers. 
It was founded in 1999 and is headquartered in Roubaix, France. 
OVHcloud operates its own data centers and network infrastructure, providing services to customers around the world. 
The company offers a variety of cloud solutions, including public, 
private, and hybrid cloud options, as well as dedicated servers, web hosting, and other managed services. 
OVHcloud is known for its high-performance, scalable, and secure cloud infrastructure.

Streaming chatbot

As usual, you probably want a real chatbot with a conversational feel. To do that, let’s add streaming with the following code:

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { Command } from 'commander';
import { setTimeout } from "timers/promises";

/**
 * Function to do a chat completion request to the LLM given a user question.
 * @param question (String) the question to ask to the LLM.
 */
async function chatCompletion(question) {
  const promptTemplate = ChatPromptTemplate.fromMessages([
    ["system", "You are Nestor, a virtual assistant. Answer the question."],
    ["human", "{question}"]
  ]);

  // Use Mixtral-8x22B as LLM
  const model = new ChatMistralAI({
    modelName: "Mixtral-8x22B-Instruct-v0.1",
    model: "Mixtral-8x22B-Instruct-v0.1",
    apiKey: "None",
    endpoint: "https://mixtral-8x22b-instruct-v01.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/",
    maxTokens: 1500,
    streaming: true,
    verbose: false,
  });

  // Chain the model to the prompt to "apply it"
  const chain = promptTemplate.pipe(model);
  const stream = await chain.stream({ question: question });

  for await (const chunk of stream) {
    // Short delay between chunks to mimic a human typing the answer
    await setTimeout(150);
    process.stdout.write(chunk.content);
  }
}

/**
 * Main entry of the CLI.
 * Parameter --question is used to pass the question to the LLM.
 **/
const program = new Command();

program
  .option('--question <value>', 'Question to ask the LLM.', 'What is the meaning of life?')
  .parse(process.argv);

const options = program.opts();
chatCompletion(options.question);
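
The for await loop above consumes the LangChain stream chunk by chunk, and setTimeout from timers/promises throttles the display. The pattern can be sketched without any LangChain dependency, using a plain async generator as a stand-in for the model stream (the fakeStream generator and its tokens are purely illustrative):

```javascript
import { setTimeout } from "timers/promises";

// Stand-in for the LangChain stream: an async generator yielding
// chunks shaped like { content: "..." }, as the chain's stream does
async function* fakeStream() {
  for (const token of ["OVHcloud ", "is ", "a ", "cloud ", "provider."]) {
    yield { content: token };
  }
}

let answer = "";
for await (const chunk of fakeStream()) {
  // Small delay between chunks, as in the chatbot above
  await setTimeout(10);
  answer += chunk.content;
  process.stdout.write(chunk.content);
}
```

Each chunk arrives as soon as the model produces it, which is what gives the chatbot its progressive, human-like output.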

Now you can try it:

$ node chatbot-streaming.js --question "What is OVHcloud?"

And that’s it!

Don’t hesitate to test our new product, AI Endpoints, and give us your feedback.

You have a dedicated Discord channel (#ai-endpoints) on our Discord server (https://discord.gg/ovhcloud), see you there!


Once a developer, always a developer!
Java developer for many years, I have the joy of knowing JDK 1.1, JEE, Struts, ... and now Spring (core, boot, batch), Quarkus, Angular, Groovy, Golang, ...
For more than ten years I was a Software Architect, a job that allowed me to face many problems inherent to the complex information systems in large groups.
I also had other lives, notably in automation and delivery with the implementation of CI/CD chains based on Jenkins pipelines.
I particularly like sharing and relationships with developers and I became a Developer Relation at OVHcloud.
This new adventure allows me to continue to use technologies that I like such as Kubernetes or AI for example but also to continue to learn and discover a lot of new things.
All the while keeping in mind one of my main motivations as a Developer Relation: making developers happy.
Always sharing, I am the co-creator of the TADx Meetup in Tours, allowing discovery and sharing around different tech topics.