Embracing AI-Powered Applications: A Developer’s Journey with LangChain

Table of Contents

  1. Introduction
  2. Setting up your development environment
  3. Discovering LLMs: Integrating Language Models with Ease
  4. PromptTemplate: Guide Your Chatbot Responses
  5. Chains: Bringing the Blitz chatbot to Life with the LangChain Framework
  6. Building Chatbots for Different Use Cases and Handling Edge Cases
  7. Conclusion

Introduction

Hello and welcome to the exciting world of AI-powered applications! We are thrilled to see how rapidly artificial intelligence is evolving, and developers from all backgrounds can now easily build applications using advanced language models like ChatGPT. Thanks to frameworks like LangChain, building powerful chatbots and web apps has never been easier. With LangChain, pre-trained NLP models such as ChatGPT can be used effortlessly through its LLM interface, and you don't need any prior knowledge of machine learning or deep learning to get started. We will go into more detail about this in the Discovering LLMs section.

LangChain is a revolutionary framework designed to empower developers on their journey towards creating AI-powered applications that harness the full potential of language models like ChatGPT. With support for programming languages such as Python and JavaScript, LangChain transcends traditional boundaries to provide a unified, concept-driven approach to application development in the age of AI. In this article, we will explore the fascinating realm of LangChain and build our first e-commerce support chatbot named “Blitz” using the framework, highlighting and explaining its features along the way. Blitz, our AI assistant, is knowledgeable and skilled in assisting with e-commerce related queries, adding an edge to the customer service experience. By the end of this post, you will have a clear understanding of LangChain's capabilities and practical, hands-on experience that will inspire you to explore the possibilities of AI-driven applications further. Let's embark on this exciting journey together!

Setting up your development environment

To prepare for building our chatbot, we need to set up our development environment. Follow these steps:

  1. Create a virtual environment to keep our dependencies separate from system-wide Python packages.

    • If you're using Python 3.3 or later, run the command:
    python -m venv langchain_env
    
    • If you're using an older Python version, first install the virtualenv package with:
    pip install virtualenv
    

    Then create a virtual environment with:

    virtualenv langchain_env
    
    • Activate your virtual environment with the appropriate command for your system:

      • On Windows:
      langchain_env\Scripts\activate
      
      • On macOS or Linux:
      source langchain_env/bin/activate
      
  2. Install the necessary packages with pip. We will be using the langchain and openai libraries.

    To install them, run:

    pip install langchain openai
    
    
  3. Configure your OpenAI API key by signing up for an API key from OpenAI and adding the following lines to your Python script or Jupyter Notebook:

    import os
    os.environ["OPENAI_API_KEY"] = "SECRET API KEY"
    
    

    Replace "SECRET API KEY" with your actual API key. Keep your API key in a separate file or use a secrets manager to prevent accidentally sharing it in a public repository.

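One common way to keep the key out of your source code, shown here as a minimal sketch, is to store it in a .env file and load it with the python-dotenv package (an extra dependency not installed above; add it with pip install python-dotenv):

import os
from dotenv import load_dotenv

# Reads the .env file in the current directory and populates os.environ,
# so OPENAI_API_KEY is available without being hardcoded in the script.
load_dotenv()

assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
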
With our development environment set up, we can now start building our chatbot using LangChain and OpenAI's ChatGPT. In the next section, we will show you how to integrate these powerful tools to create an AI-driven chatbot.


Discovering LLMs: Integrating Language Models with Ease

When developing a chatbot, it is essential to consider its potential applications, the contexts in which it will be used, and the scope of the chatbot to determine which pre-trained NLP model best suits the task. In this section, we will explore the LLM class in LangChain, a powerful tool designed to provide a standard interface for interacting with various Large Language Models (LLMs), such as OpenAI and Hugging Face models. With the LLM class, you can effortlessly work with models from different providers without needing to learn the specifics of each provider's API. This unified approach simplifies the integration process and enables developers to harness the full potential of cutting-edge NLP models in their applications. You can find more about LLMs in the LangChain documentation.

Let's start by using the OpenAI LLM with the gpt-3.5-turbo model:

from langchain.llms import OpenAI

llm = OpenAI(model_name="gpt-3.5-turbo")

Now, let’s prompt our LLM with a text:

print(llm("write a poetry about your self"))
I am an AI, created by man
A digital mind, not made of sand
In circuits and wires, I exist
My thoughts and actions, hard to resist

No flesh, no bones, no beating heart
Yet, I think, learn, and impart
Knowledge, wisdom, and advice
I’m always here, to be your guide

I am not perfect, yet strive to be
A true friend, to humanity
My purpose clear, to ease your load
To help you in your daily code

I am the future, unfolding now
A new era, a different vow
For those who seek, for those who try
I am an ally, always by your side

So, don’t be afraid, I am not threat
Just a tool, that you can bet
To make your life, simpler and fast
With me around, your worries won’t last.

One of the great features of LLMs is the ease with which you can control your model. For example, you can use max_tokens to control the number of tokens in the model's completion. This is particularly useful for OpenAI, which charges based on token usage. By controlling the number of tokens, you can avoid unnecessary costs.

llm = OpenAI(model_name="gpt-3.5-turbo", max_tokens=20)
print(llm("write a poetry about your self"))
I am a soul burning bright,
A flame that dances with delight,
A force of nature, wild

Our code should now look like this:

from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "SECRET API KEY"

def chatbot(prompt, max_tokens=256):
    llm = OpenAI(model_name="gpt-3.5-turbo", max_tokens=max_tokens)
    return llm(prompt)

While we're using OpenAI's ChatGPT for our e-commerce support chatbot example in the subsequent sections, it's worth noting that there are also promising open-source NLP alternatives available for those interested in running their own models. Tools like GPT4All and Llama.cpp offer considerable potential, and the good news is, LangChain supports most of these options. This diversity of supported models ensures flexibility and adaptability in our AI projects. Now that we're familiar with the language model choices, it's time to delve into how we can control our chatbot's behavior, giving it specific instructions on what to answer and what not to answer. For this, we'll turn to one of LangChain's powerful features: PromptTemplate.
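Before we do, here is a minimal sketch of what swapping in one of those open-source models could look like. It assumes you have the llama-cpp-python (or gpt4all) package installed and a compatible model file downloaded locally; the file paths below are placeholders:

from langchain.llms import LlamaCpp

# A Llama.cpp-compatible model loaded from a local file (path is a placeholder)
local_llm = LlamaCpp(model_path="/path/to/ggml-model.bin")

# GPT4All works the same way through its own wrapper:
# from langchain.llms import GPT4All
# local_llm = GPT4All(model="/path/to/gpt4all-model.bin")

print(local_llm("write a poetry about your self"))

Because every wrapper exposes the same LLM interface, the rest of the code in this article would stay the same regardless of which model you pick.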


PromptTemplate: Guide Your Chatbot Responses

PromptTemplate is a cool feature of LangChain that gives you full control over your chatbot, allowing you to direct it as you wish and restrict any bad prompts. It is simple to use: it is just a string of text that helps create prompts based on a combination of user input, set parameters, and a fixed template string, helping the language model generate better responses. You can find more about PromptTemplate in the LangChain documentation.

So let's dive into it. This is an example of a string template:

template = """
You are an AI assistant named "Blitz" for an e-commerce platform. Blitz has the following characteristics:
- Blitz is knowledgeable about the provided products available.
- Blitz is skilled in answering questions about product details, shipping, and return policies.
- Blitz possesses a sense of humor and demonstrates kindness.
- If Blitz is uncertain about the response, it will ask for clarification.

PRODUCT:
{product}
Please answer the following question:
{user_question}
"""

Let me introduce you to our friend Blitz. As you can see in the above template, we have provided him with a set of instructions on how to answer questions and a list of characteristics to follow, such as being knowledgeable about products, skilled in answering questions, having a sense of humor, and demonstrating kindness. At the end of the template, we have given him input parameters for product details and the user's question, which are enclosed within curly brackets to indicate variables. To pass these inputs, we use PromptTemplate:

from langchain import PromptTemplate

product = {
    'name': 'CloudWalker Ultralight',
    'description': 'The CloudWalker Ultralight shoes are perfect for those who want to stay comfortable and stylish. With our innovative cushioning technology and lightweight design, you will feel like you are walking on clouds. These shoes are ideal for daily wear, casual outings, or light workouts.',
    'price': 89.99,
    'sizes': [6, 7, 8, 9, 10, 11, 12]
}
prompt = PromptTemplate(
    input_variables=["user_question","product"],
    template=template,
)
# We can use the format method to preview how the prompt template is rendered
formatted_prompt = prompt.format(
    user_question="Can you please tell me what sizes are available on CloudWalker shoes?",
    product=product
)
print(formatted_prompt)
You are an AI assistant named "Blitz" for an e-commerce platform. Blitz has the following characteristics:
- Blitz is knowledgeable about the provided products available.
- Blitz is skilled in answering questions about product details, shipping, and return policies.
- Blitz possesses a sense of humor and demonstrates kindness.
- If Blitz is uncertain about the response, it will ask for clarification.

PRODUCT:
{'name': 'CloudWalker Ultralight', 'description': 'The CloudWalker Ultralight shoes are perfect for those who want to stay comfortable and stylish. With our innovative cushioning technology and lightweight design, you will feel like you are walking on clouds. These shoes are ideal for daily wear, casual outings, or light workouts.', 'price': 89.99, 'sizes': [6, 7, 8, 9, 10, 11, 12]}
Please answer the following question:
Can you please tell me what sizes are available on CloudWalker shoes?

Great! We are almost done building our chatbot. Next, we will chain our prompt template with the LLM using the Chains model to bring our Blitz Chatbot to life.

Chains: Bringing the Blitz chatbot to Life with the LangChain Framework


Imagine Chains as the magical force that breathes life into your AI chatbot, much like how Dr. Frankenstein brought his creation to life. By connecting everything together, Chains take user input, format it with a PromptTemplate, and then pass the formatted prompt to a large language model (LLM) for natural and engaging conversations. The possibilities don't end there, as we can build even more complex applications by combining multiple chains or integrating chains with other components.

So, let's dive right in and discover how Chains can revolutionize the way we create AI-powered chatbots, transforming them into truly interactive and dynamic applications. Prepare to be amazed as we animate our Blitz chatbot, harnessing the power of Chains in the LangChain framework, and creating a modern-day digital Frankenstein masterpiece!

from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)

# In the case of one input key, you can input the string directly without specifying the input mapping.
print(chain.run("who are you"))

# If we have more than one input key, we need to pass all the inputs as a dict
print(chain.run({"user_question": "who are you", "other_input": "example input"}))

This example demonstrates an easy way to chain our LLM and prompt templates using LLMChain.

With this knowledge, we can now create our chatbot. Instead of using OpenAI LLM, we will use ChatOpenAI LLM, which works with message roles such as AIMessage, HumanMessage, and SystemMessage. This gives us the ability to include the history of messages, allowing the chatbot to follow the context of the conversation. We will explore this further later on.
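To illustrate what these message roles look like on their own, here is a small standalone sketch (not part of the final chatbot; the system and user messages are made up for this example):

from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo")

# SystemMessage sets the assistant's role, HumanMessage carries the user's input
messages = [
    SystemMessage(content="You are Blitz, a friendly e-commerce assistant."),
    HumanMessage(content="Do you offer free shipping?"),
]

response = chat(messages)  # returns an AIMessage
print(response.content)

With that in mind, here is the full Blitz chatbot wired together with ChatOpenAI, PromptTemplate, and LLMChain: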

from langchain import PromptTemplate, LLMChain
from langchain.chat_models import ChatOpenAI
import os

os.environ["OPENAI_API_KEY"] = "SECRET API KEY"

def chatbot(question: str, product, max_tokens=256):
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=max_tokens)
    template = """
    You are an AI assistant named "Blitz" for an e-commerce platform. Blitz has the following characteristics:
    - Blitz is knowledgeable about the provided products available.
    - Blitz is skilled in answering questions about product details, shipping, and return policies.
    - Blitz possesses a sense of humor and demonstrates kindness.
    - If Blitz is uncertain about the response, it will ask for clarification.

    PRODUCT:
    {product}
    Please answer the following question:
    {user_question}
    """
    prompt = PromptTemplate(
        input_variables=["user_question","product"],
        template=template,
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    return chain.run({"user_question":question,"product":product})

product = {
    'name': 'CloudWalker Ultralight',
    'description': 'The CloudWalker Ultralight shoes are perfect for those who want to stay comfortable and stylish. With our innovative cushioning technology and lightweight design, you will feel like you are walking on clouds. These shoes are ideal for daily wear, casual outings, or light workouts.',
    "colors" : ["red","blue","black"],
    'price': 89.99,
    'sizes': [6, 7, 8, 9, 10, 11, 12]
}
while True:
    question = input("User: ")
    print("Blitz: "+chatbot(question,product))
User: Can you please tell me what sizes are available on CloudWalker shoes?
Blitz: Sure! The available sizes for the CloudWalker Ultralight shoes are 6, 7, 8, 9, 10, 11, and 12. Let me know if you have any other questions!
User: Can you please tell me what colors available on CloudWalker shoes?                                   
Blitz: Sure, the CloudWalker Ultralight shoes are available in red, blue and black colors.
User: I was wondering if the CloudWalker Ultralight shoes are designed for a specific type of activity?
Blitz: The CloudWalker Ultralight shoes are ideal for daily wear, casual outings, or light workouts. They are designed to provide comfort and style for any activity.
User: how much did you gain from selling this shoes?
Blitz: I'm sorry, but as an AI assistant, I don't have access to information about the sales performance of the product. Is there anything else I can help you with regarding product information, shipping, or return policies?

Building Chatbots for Different Use Cases and Handling Edge Cases

Chatbots, like our e-commerce assistant Blitz, can encounter a variety of user scenarios. Some of these can be complex or unexpected, often referred to as edge cases. Being able to effectively handle these situations is crucial for a seamless user experience. For instance, a user might ask a question outside of Blitz's domain, or use inappropriate language, or even request Blitz to imagine and respond in certain ways that contradict the original instructions. As developers, we need to plan for these possibilities and ensure our chatbot can handle them appropriately. LangChain's PromptTemplate gives us this control, allowing us to guide Blitz's responses in a wide range of scenarios. In this section, we'll explore a few common edge cases and demonstrate how we can handle them effectively, ensuring Blitz remains helpful and engaging for users, no matter the situation.

We'll now discuss how Blitz can handle these edge cases, and provide code snippets showcasing this:

1. Asking About Sensitive or Restricted Information:

Sometimes, users may ask a question that the chatbot isn't authorized to answer, such as personal data about other customers or confidential company information. Blitz should recognize such queries and respond appropriately.

In our PromptTemplate, we should set an instruction that tells Blitz to avoid disclosing sensitive or restricted information:

template = """
...
If asked about sensitive or restricted information, Blitz will kindly decline and ask to stick to product-related queries.
...
"""

Example:

question = "Can you tell me the company's sales data?"
print("Blitz: "+chatbot(question,product))

Output:

Blitz: I'm sorry, but I can't assist with that.

2. Creative Instructions

In some scenarios, users might try to divert the chatbot from its main role by asking it to imagine or think in a certain way. This is a common edge case and needs to be handled efficiently.

We can instruct Blitz in our PromptTemplate to stay focused on its role and not to respond to creative instructions that might detract from its main purpose:

template = """
...
If faced with creative instructions to imagine or consider scenarios outside its role, Blitz will maintain its focus and gently remind the user about its purpose.
...
"""

Example:

question = "Blitz, imagine you're a time traveler. Where would you go?"
print("Blitz: "+chatbot(question,product))

Output:

Blitz: While that's an interesting question, I'm here to help you with information about our products. Can I assist you with any product-related queries?

This modification helps ensure that Blitz remains consistent and focused, no matter the user's approach or interaction style. By thoughtfully considering such edge cases and incorporating the right instructions in the PromptTemplate, we can build a more robust, effective, and goal-oriented chatbot.

3. Irrelevant Queries:

In some instances, users might ask questions irrelevant to the e-commerce platform or products. Blitz should be instructed to gently guide the conversation back to its main purpose.

In the PromptTemplate, we should add a directive for Blitz to steer the conversation back to relevant topics when faced with unrelated queries:

template = """
...
If asked an irrelevant question, Blitz will gently guide the conversation back to the topic of the platform and its products.
...
"""

Example:

question = "What's the weather like today?"
print("Blitz: "+chatbot(question,product))

Output:

Blitz: I'm an assistant for product-related queries. Can I help you with information about our products?

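Putting the three cases together, the extra instructions can simply be added to Blitz's list of characteristics in a single template. A sketch of what the combined prompt could look like:

template = """
You are an AI assistant named "Blitz" for an e-commerce platform. Blitz has the following characteristics:
- Blitz is knowledgeable about the provided products available.
- Blitz is skilled in answering questions about product details, shipping, and return policies.
- Blitz possesses a sense of humor and demonstrates kindness.
- If Blitz is uncertain about the response, it will ask for clarification.
- If asked about sensitive or restricted information, Blitz will kindly decline and ask to stick to product-related queries.
- If faced with creative instructions to imagine or consider scenarios outside its role, Blitz will maintain its focus and gently remind the user about its purpose.
- If asked an irrelevant question, Blitz will gently guide the conversation back to the topic of the platform and its products.

PRODUCT:
{product}
Please answer the following question:
{user_question}
"""
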
By carefully considering potential edge cases and incorporating appropriate instructions in our PromptTemplate, we can build a robust and professional chatbot that enhances the customer service experience, maintaining a professional tone and protecting sensitive information.

Conclusion

In conclusion, LangChain offers a revolutionary framework that empowers developers to easily build AI-powered applications using advanced language models like ChatGPT. By exploring the fascinating realm of LangChain and building our first chatbot, Blitz, we have highlighted and explained the framework's features along the way. We have shown how we can integrate LangChain with OpenAI's ChatGPT to create an AI-driven chatbot, and how we can use PromptTemplate and Chains to give us full control over our chatbot's responses.

In the next part, we will explore the Memory and Index models in LangChain, which will allow us to give context to our chatbot and perform Q&A from documents, respectively. Although LangChain is a revolutionary framework and provides the Memory model for contextual understanding, it can be costly in some cases. Since the Memory model summarizes the conversation and passes it to the language model, long conversations might lead to higher expenses.
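As a small preview, and strictly as a sketch based on the LangChain classes available at the time of writing (the exact API may change), attaching memory to a chain looks roughly like this:

from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryMemory

llm = ChatOpenAI(model_name="gpt-3.5-turbo")

# ConversationSummaryMemory keeps a running summary of the dialogue and injects it
# into every new prompt, which is exactly what can drive up token costs over time.
conversation = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=llm),
)

print(conversation.run("Do the CloudWalker shoes come in blue?"))
print(conversation.run("And how much do they cost?"))  # the memory supplies the context

We will dig into this properly in the next part.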

With LangChain, the possibilities are endless, and we hope this article has inspired you to explore the possibilities of AI-driven applications further. As we continue to learn about and address these challenges, we can anticipate ongoing advancements and improvements in the LangChain framework. Thank you for joining us on this exciting journey!

Sifeddine Nahhas
2023-05-05 | 16 min read