# Superpower LLMs with Conversational Agents


Large Language Models (LLMs) are incredibly powerful, yet they lack particular abilities that the “dumbest” computer programs can handle with ease. Logic, calculation, and search are examples of where computers typically excel, but LLMs struggle.

Computers can solve incredibly complex math problems, yet if we ask GPT-4 to tell us the answer to `4.1^2.1`, it fails:

![Asking GPT-4 to perform a simple calculation often results in an incorrect answer. A simple calculator can perform this same calculation without issue.](https://cdn.sanity.io/images/vr8gru94/production/efdf6239cfcdc58f310aa4bf3e71c88b8244b505-1920x480.png)


According to a simple calculator, the answer is `19.357`, rounded to three decimal places. Isn’t it fascinating that a simple calculator program can do this, but an incredibly sophisticated AI engine fails?
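As a quick sanity check, the calculator’s figure can be reproduced in plain Python (the `^` of calculator notation is written `**` in Python):

```python
# reproduce the calculator's result for 4.1^2.1
result = 4.1 ** 2.1
print(round(result, 3))  # 19.357
```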

That’s not all. If I ask GPT-4, “How do I use the LLMChain in LangChain?” it struggles again:

![The LangChain spoken about here isn’t the LangChain we know. It’s an old blockchain project. The response is both outdated and full of false information.](https://cdn.sanity.io/images/vr8gru94/production/3cff47165b2c9cda8535377be1734440b508d83c-1920x1140.png)




It’s true that LangChain _was_ a blockchain project [1] [2]. Yet, there didn’t seem to be any “LLMChain” component nor “LANG tokens” — these are both hallucinations.

The reason GPT-4 is unable to tell us about LangChain is that it has no connection to the outside world. Its only knowledge is what it captured from its training data, which cuts off in late 2021.

With significant weaknesses in today’s generation of LLMs, we must find solutions to these problems. One “suite” of potential solutions comes in the form of “agents”.

These agents don’t just solve the problems we saw above but _many_ others. In fact, the potential upside of adding agents to LLMs is almost unlimited.

In this chapter, we’ll talk about agents. We’ll learn what they are, how they work, and how to use them within the LangChain library to superpower our LLMs.

[Video](https://www.youtube.com/watch?v=jSP-gSEyVeI)


---

## What are Agents?

We can think of agents as enabling “tools” for LLMs. Like how a human would use a calculator for maths or perform a Google search for information — agents allow an LLM to do the same thing.

![Agents are LLMs that can use tools like calculators, search, or executing code.](https://cdn.sanity.io/images/vr8gru94/production/7b05b9b5d2c0a7591266e768bcf0b1176737e90c-1409x1307.png)


Using agents, an LLM can write and execute Python code. It can search for information and even query a SQL database.

Let’s take a look at a straightforward example of this. We will begin with a “zero-shot” agent (more on this later) that allows our LLM to use a calculator.

### Agents and Tools

To use agents, we require three things:

- A base LLM,
- A tool that we will be interacting with,
- An agent to control the interaction.

Let’s start by installing `langchain` and initializing our base LLM.

```python
from langchain import OpenAI

llm = OpenAI(
    openai_api_key="OPENAI_API_KEY",
    temperature=0,
    model_name="text-davinci-003"
)
```

Now to initialize the calculator tool. When initializing tools, we either create a custom tool or load a prebuilt tool. In either case, the “tool” is a [utility chain](https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/02-langchain-chains.ipynb) given a tool `name` and `description`.

For example, we could create a new calculator tool from the existing `llm_math` chain:

```python
from langchain.chains import LLMMathChain
from langchain.agents import Tool

llm_math = LLMMathChain(llm=llm)

# initialize the math tool
math_tool = Tool(
    name='Calculator',
    func=llm_math.run,
    description='Useful for when you need to answer questions about math.'
)
# when giving tools to LLM, we must pass as list of tools
tools = [math_tool]
```

```python
tools[0].name, tools[0].description
```

```
('Calculator', 'Useful for when you need to answer questions about math.')
```

We must follow this process when using custom tools. However, a prebuilt `llm_math` tool does the same thing. So, we could do the same as above like so:

```python
from langchain.agents import load_tools

tools = load_tools(
    ['llm-math'],
    llm=llm
)
```

```python
tools[0].name, tools[0].description
```

```
('Calculator', 'Useful for when you need to answer questions about math.')
```

Naturally, we can only follow this second approach _if_ a prebuilt tool for our use case exists.

We now have the LLM and tools but no _agent_. To initialize a simple agent, we can do the following:

```python
from langchain.agents import initialize_agent

zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3
)
```

The _agent_ used here is a `"zero-shot-react-description"` agent. _Zero-shot_ means the agent functions on the current action only — it has _no_ memory. It uses the _ReAct_ framework to decide which tool to use, based solely on the tool’s `description`.

---

_We won’t discuss the **ReAct framework** in this chapter, but you can think of it as a process in which an LLM cycles through **Re**asoning and **Act**ion steps, enabling a multi-step approach to finding answers._

---
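To make that loop concrete, here is a toy sketch of a ReAct-style loop. This is purely illustrative — the hard-coded “LLM”, the `Action: <tool>|<input>` format, and the function names are all invented here, not LangChain internals:

```python
# Toy sketch of a ReAct-style loop (illustration only, NOT LangChain internals).

def react_loop(llm, tools, question, max_iterations=3):
    scratchpad = ""  # accumulates Action/Observation steps
    for _ in range(max_iterations):
        step = llm(question, scratchpad)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # parse "Action: <tool>|<input>" and run the chosen tool
        tool_name, tool_input = step.removeprefix("Action:").strip().split("|")
        observation = tools[tool_name](tool_input)
        scratchpad += f"{step}\nObservation: {observation}\n"
    return None  # gave up after max_iterations

def toy_llm(question, scratchpad):
    # asks for the calculator once, then reads the observation back
    if "Observation:" not in scratchpad:
        return "Action: Calculator|(4.5*2.1)**2.2"
    return "Final Answer: " + scratchpad.rsplit("Observation:", 1)[1].strip()

toy_tools = {"Calculator": lambda expr: str(eval(expr))}
print(react_loop(toy_llm, toy_tools, "what is (4.5*2.1)^2.2?"))
```

The real agent does the same thing with an actual LLM deciding each step, as we’ll see in the traces below.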

With our agent initialized, we can begin using it. Let’s try a few prompts and see how the agent responds.

```python
zero_shot_agent("what is (4.5*2.1)^2.2?")
```

```
> Entering new AgentExecutor chain...
 I need to calculate this expression
Action: Calculator
Action Input: (4.5*2.1)^2.2
Observation: Answer: 139.94261298333066

Thought: I now know the final answer
Final Answer: 139.94261298333066

> Finished chain.
{'input': 'what is (4.5*2.1)^2.2?', 'output': '139.94261298333066'}
```

We can confirm the result with native Python:

```python
(4.5*2.1)**2.2
```

```
139.94261298333066
```

The answer here is correct. Let’s try another:

```python
zero_shot_agent("if Mary has four apples and Giorgio brings two and a half apple "
                "boxes (apple box contains eight apples), how many apples do we "
                "have?")
```

```
> Entering new AgentExecutor chain...
 I need to figure out how many apples are in the boxes
Action: Calculator
Action Input: 8 * 2.5
Observation: Answer: 20.0

Thought: I need to add the apples Mary has to the apples in the boxes
Action: Calculator
Action Input: 4 + 20.0
Observation: Answer: 24.0

Thought: I now know the final answer
Final Answer: We have 24 apples.

> Finished chain.
{'input': 'if Mary has four apples and Giorgio brings two and a half apple boxes (apple box contains eight apples), how many apples do we have?',
 'output': 'We have 24 apples.'}
```

Looks great! But what if we decide to ask a non-math question? What if we ask an easy common knowledge question?

```python
zero_shot_agent("what is the capital of Norway?")
```

```
> Entering new AgentExecutor chain...
 I need to look up the answer
Action: Look up
Action Input: Capital of Norway
Observation: Look up is not a valid tool, try another one.
Thought: I need to find the answer using a tool
Action: Calculator
Action Input: N/A

ValueError: unknown format from LLM: N/A
```

We run into an error. The problem here is that the agent keeps trying to use a tool. Yet, our agent contains only one tool — the calculator.

Fortunately, we can fix this problem by giving our agent more tools! Let’s add a plain and simple LLM tool:

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.agents import Tool

prompt = PromptTemplate(
    input_variables=["query"],
    template="{query}"
)

llm_chain = LLMChain(llm=llm, prompt=prompt)

# initialize the LLM tool
llm_tool = Tool(
    name='Language Model',
    func=llm_chain.run,
    description='use this tool for general purpose queries and logic'
)
```

With this, we have a new general-purpose LLM tool. All we do is add it to the `tools` list and reinitialize the agent:

```python
tools.append(llm_tool)

# reinitialize the agent
zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3
)
```

Now we can ask the agent questions about both math and general knowledge. Let’s try the following:

```python
zero_shot_agent("what is the capital of Norway?")
```

```
> Entering new AgentExecutor chain...
 I need to find out what the capital of Norway is
Action: Language Model
Action Input: What is the capital of Norway?
Observation: 

The capital of Norway is Oslo.
Thought: I now know the final answer
Final Answer: The capital of Norway is Oslo.

> Finished chain.
{'input': 'what is the capital of Norway?',
 'output': 'The capital of Norway is Oslo.'}
```

Now we get the correct answer! We can still ask our earlier math question:

```python
zero_shot_agent("what is (4.5*2.1)^2.2?")
```

```
> Entering new AgentExecutor chain...
 I need to calculate this expression
Action: Calculator
Action Input: (4.5*2.1)^2.2
Observation: Answer: 139.94261298333066

Thought: I now know the final answer
Final Answer: 139.94261298333066

> Finished chain.
{'input': 'what is (4.5*2.1)^2.2?', 'output': '139.94261298333066'}
```

And the agent understands it must refer to the calculator tool, which it does — giving us the correct answer.

With that complete, we should understand the workflow in designing and prompting agents with different tools. Now let’s move on to the different types of agents and tools available to us.

## Agent Types

LangChain offers several types of agents. In this section, we’ll examine a few of the most common.

### Zero Shot ReAct

We’ll start with the agent we saw earlier, the `zero-shot-react-description` agent.

As described earlier, we use this agent to perform _“zero-shot”_ tasks on some input. That means the agent considers a _single_ interaction — it has no _memory_.

Let’s create a `tools` list to use with the agent. We will include an `llm-math` tool and a SQL DB tool that we [defined here](https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/06-langchain-agents.ipynb).

```python
tools = load_tools(
    ["llm-math"], 
    llm=llm
)

# add our custom SQL db tool
tools.append(sql_tool)
```

We initialize the `zero-shot-react-description` agent like so:

```python
from langchain.agents import initialize_agent

zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description", 
    tools=tools, 
    llm=llm,
    verbose=True,
    max_iterations=3,
)
```

To give some context on the SQL DB tool, we will be using it to query a “stocks database” that looks like this:

| obs_id | stock_ticker | price | date |
| --- | --- | --- | --- |
| 1 | 'ABC' | 200 | 1 Jan 23 |
| 2 | 'ABC' | 208 | 2 Jan 23 |
| 3 | 'ABC' | 232 | 3 Jan 23 |
| 4 | 'ABC' | 225 | 4 Jan 23 |
| 5 | 'ABC' | 226 | 5 Jan 23 |
| 6 | 'XYZ' | 810 | 1 Jan 23 |
| 7 | 'XYZ' | 803 | 2 Jan 23 |
| 8 | 'XYZ' | 798 | 3 Jan 23 |
| 9 | 'XYZ' | 795 | 4 Jan 23 |
| 10 | 'XYZ' | 791 | 5 Jan 23 |
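The linked notebook builds this database with LangChain’s SQL tooling; as a rough stand-in, the same table can be sketched with the standard library’s `sqlite3` (the `stocks` table and column names match the queries the agent generates, but the rest is an assumption):

```python
import sqlite3

# build an in-memory "stocks" database matching the table above
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stocks (obs_id INT, stock_ticker TEXT, price REAL, date TEXT)"
)
rows = [
    (1, "ABC", 200.0, "2023-01-01"), (2, "ABC", 208.0, "2023-01-02"),
    (3, "ABC", 232.0, "2023-01-03"), (4, "ABC", 225.0, "2023-01-04"),
    (5, "ABC", 226.0, "2023-01-05"), (6, "XYZ", 810.0, "2023-01-01"),
    (7, "XYZ", 803.0, "2023-01-02"), (8, "XYZ", 798.0, "2023-01-03"),
    (9, "XYZ", 795.0, "2023-01-04"), (10, "XYZ", 791.0, "2023-01-05"),
]
conn.executemany("INSERT INTO stocks VALUES (?, ?, ?, ?)", rows)

# the kind of query the SQL DB tool would run for January 3rd
result = conn.execute(
    "SELECT stock_ticker, price FROM stocks "
    "WHERE date = '2023-01-03' ORDER BY obs_id"
).fetchall()
print(result)  # [('ABC', 232.0), ('XYZ', 798.0)]
```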

Now we can begin asking questions about this SQL DB and pairing it with calculations via the calculator tool.

```python
result = zero_shot_agent(
    "What is the multiplication of the ratio between stock prices for 'ABC' "
    "and 'XYZ' in January 3rd and the ratio between the same stock prices in "
    "January the 4th?"
)
```

```
> Entering new AgentExecutor chain...
 I need to compare the stock prices of 'ABC' and 'XYZ' on two different days
Action: Stock DB
Action Input: Stock prices of 'ABC' and 'XYZ' on January 3rd and January 4th

> Entering new SQLDatabaseChain chain...
Stock prices of 'ABC' and 'XYZ' on January 3rd and January 4th
SQLQuery: SELECT stock_ticker, price, date FROM stocks WHERE (stock_ticker = 'ABC' OR stock_ticker = 'XYZ') AND (date = '2023-01-03' OR date = '2023-01-04')
SQLResult: [('ABC', 232.0, '2023-01-03'), ('ABC', 225.0, '2023-01-04'), ('XYZ', 798.0, '2023-01-03'), ('XYZ', 795.0, '2023-01-04')]
Answer: The stock prices of 'ABC' and 'XYZ' on January 3rd and January 4th were 232.0 and 798.0 respectively for 'ABC' and 'XYZ' on January 3rd, and 225.0 and 795.0 respectively for 'ABC' and 'XYZ' on January 4th.
> Finished chain.

Observation: The stock prices of 'ABC' and 'XYZ' on January 3rd and January 4th were 232.0 and 798.0 respectively for 'ABC' and 'XYZ' on January 3rd, and 225.0 and 795.0 respectively for 'ABC' and 'XYZ' on January 4th.
Thought: I need to calculate the ratio between the two stock prices on each day
Action: Calculator
Action Input: 232.0/798.0 and 225.0/795.0
Observation: Answer: 0.2907268170426065
0.2830188679245283

Thought: I need to calculate the multiplication of the two ratios
Action: Calculator
Action Input: 0.2907268170426065 * 0.2830188679245283
Observation: Answer: 0.08228117463469994

Thought:

> Finished chain.
```

We can see a lot of output here. At each step, there is a **Thought** that results in a chosen **Action** and **Action Input**. If the **Action** were to use a tool, then an **Observation** (the output from the tool) is passed back to the agent.
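We can also verify the agent’s arithmetic directly:

```python
# reproduce the agent's two-step calculation
ratio_jan3 = 232.0 / 798.0   # ABC/XYZ on January 3rd
ratio_jan4 = 225.0 / 795.0   # ABC/XYZ on January 4th
print(ratio_jan3 * ratio_jan4)  # ≈ 0.0823, matching the agent's answer
```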

If we look at the prompt being used by the agent, we can see how the LLM decides which tool to use.

```python
print(zero_shot_agent.agent.llm_chain.prompt.template)
```

```
Answer the following questions as best you can. You have access to the following tools:

Calculator: Useful for when you need to answer questions about math.
Stock DB: Useful for when you need to answer questions about stocks and their prices.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Calculator, Stock DB]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
```

We first tell the LLM the tools it can use (`Calculator` and `Stock DB`). Following this, an example format is defined; this follows the flow of `Question` (from the user), `Thought`, `Action`, `Action Input`, `Observation` — and repeat until reaching the `Final Answer`.

These tools and the thought process separate _agents_ from _chains_ in LangChain.

Whereas a _chain_ defines an immediate input/output process, the logic of agents allows a step-by-step thought process. The advantage of this step-by-step process is that the LLM can work through multiple reasoning steps or tools to produce a better answer.

There is one more part of the prompt we need to discuss: the final line, `"Thought:{agent_scratchpad}"`.

The `agent_scratchpad` is where we add _every_ thought or action the agent has already performed. All thoughts and actions (within the _current_ agent executor chain) can then be accessed by the _next_ thought-action-observation loop, enabling continuity in agent actions.
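Schematically, the splice looks something like this — a simplified sketch of the template’s final lines with an invented scratchpad, not LangChain internals:

```python
# simplified tail of the agent prompt: {agent_scratchpad} carries prior steps
template = "Question: {input}\nThought:{agent_scratchpad}"

# an illustrative scratchpad from one earlier thought-action-observation loop
scratchpad = (
    " I need to compare the stock prices of 'ABC' and 'XYZ'\n"
    "Action: Stock DB\n"
    "Action Input: Stock prices of 'ABC' and 'XYZ' on January 3rd\n"
    "Observation: ABC closed at 232.0, XYZ at 798.0\n"
    "Thought:"
)

prompt = template.format(
    input="What is the ratio between the stock prices?",
    agent_scratchpad=scratchpad,
)
print(prompt)  # the next LLM call sees all previous steps
```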

### Conversational ReAct

The zero-shot agent works well but lacks [conversational memory](https://www.pinecone.io/learn/series/langchain/langchain-conversational-memory/). This lack of memory can be problematic for chatbot-type use cases that need to _remember_ previous interactions in a conversation.

Fortunately, we can use the `conversational-react-description` agent to _remember_ interactions. We can think of this agent as the same as our previous **Zero Shot ReAct** agent, but with _conversational memory_.
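Conceptually, buffer memory simply accumulates the raw transcript of the conversation and hands it back to the agent on every call. A toy standard-library sketch (not LangChain’s implementation) of the idea:

```python
class BufferMemorySketch:
    """Toy stand-in for ConversationBufferMemory: keeps the raw transcript."""

    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.turns = []

    def save_context(self, human, ai):
        # append one human/AI exchange to the transcript
        self.turns.append(f"Human: {human}\nAI: {ai}")

    def load_memory_variables(self):
        # everything said so far, verbatim
        return {self.memory_key: "\n".join(self.turns)}

sketch = BufferMemorySketch()
sketch.save_context("Please provide me the stock prices for ABC on January the 1st",
                    "The price of ABC on January the 1st was 200.0.")
print(sketch.load_memory_variables()["chat_history"])
```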

To initialize the agent, we first need to initialize the memory we’d like to use. We will use the simple `ConversationBufferMemory`.

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history")
```
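The buffer memory simply concatenates past turns into a single string keyed by `memory_key`. A rough stand-in, not LangChain's actual class, behaves like this:

```python
# Minimal stand-in for what ConversationBufferMemory does (illustrative only):
# it stores each human/AI turn and joins them into one string.
class BufferMemory:
    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.turns = []

    def save_context(self, inputs, outputs):
        self.turns.append(f"Human: {inputs['input']}")
        self.turns.append(f"AI: {outputs['output']}")

    def load_memory_variables(self, _inputs):
        return {self.memory_key: "\n".join(self.turns)}

memory = BufferMemory()
memory.save_context(
    {"input": "Please provide me the stock prices for ABC on January the 1st"},
    {"output": "The price of ABC on January the 1st was 200.0."},
)
print(memory.load_memory_variables({})["chat_history"])
```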

We pass this to the `memory` parameter when initializing our agent:

```python
conversational_agent = initialize_agent(
    agent='conversational-react-description', 
    tools=tools, 
    llm=llm,
    verbose=True,
    max_iterations=3,
    memory=memory,
)
```

If we run this agent with a similar question, we should see it follow a process similar to before:

```python
result = conversational_agent(
    "Please provide me the stock prices for ABC on January the 1st"
)
```

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Stock DB
Action Input: ABC on January the 1st

> Entering new SQLDatabaseChain chain...
ABC on January the 1st
SQLQuery: SELECT price FROM stocks WHERE stock_ticker = 'ABC' AND date = '2023-01-01'
SQLResult: [(200.0,)]
Answer: The price of ABC on January the 1st was 200.0.
> Finished chain.

Observation: The price of ABC on January the 1st was 200.0.
Thought: Do I need to use a tool? No
AI: Is there anything else I can help you with?

> Finished chain.
```

So far, this looks very similar to our last _zero-shot_ agent. However, _unlike_ our zero-shot agent, we can now ask _follow-up_ questions. Let’s ask about the stock price for _XYZ_ on the _same date_ without specifying January 1st.

```python
result = conversational_agent(
    "What are the stock prices for XYZ on the same day?"
)
```

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Stock DB
Action Input: Stock prices for XYZ on January 1st

> Entering new SQLDatabaseChain chain...
Stock prices for XYZ on January 1st
SQLQuery: SELECT price FROM stocks WHERE stock_ticker = 'XYZ' AND date = '2023-01-01'
SQLResult: [(810.0,)]
Answer: The stock price for XYZ on January 1st was 810.0.
> Finished chain.

Observation: The stock price for XYZ on January 1st was 810.0.
Thought: Do I need to use a tool? No
AI: Is there anything else I can help you with?

> Finished chain.
```

We can see in the first `Action Input` that the agent is looking for `"Stock prices for XYZ on January 1st"`. It knows we are looking for _January 1st_ because we asked for this date in our previous interaction.

How can it do this? We can take a look at the prompt template to find out:

```python
print(conversational_agent.agent.llm_chain.prompt.template)
```

````
Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

TOOLS:
------

Assistant has access to the following tools:

> Calculator: Useful for when you need to answer questions about math.
> Stock DB: Useful for when you need to answer questions about stocks and their prices.

To use a tool, please use the following format:

```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [Calculator, Stock DB]
Action Input: the input to the action
Observation: the result of the action
```

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

```
Thought: Do I need to use a tool? No
AI: [your response here]
```

Begin!

Previous conversation history:
{chat_history}

New input: {input}
{agent_scratchpad}
````

We have a much larger instruction setup at the start of the prompt, but most important are the two lines near the end:

`Previous conversation history: {chat_history}`

Here is where we add all previous interactions to the prompt. This history includes our earlier question, `"Please provide me the stock prices for ABC on January the 1st"`, which allows the agent to understand that our follow-up question refers to the same date.
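Concretely, prompt assembly is just string formatting. A sketch with a trimmed-down version of the template above:

```python
# Trimmed version of the conversational agent's prompt template,
# filled in the same way the agent fills it on each turn.
template = (
    "Previous conversation history:\n"
    "{chat_history}\n"
    "\n"
    "New input: {input}\n"
    "{agent_scratchpad}"
)
chat_history = (
    "Human: Please provide me the stock prices for ABC on January the 1st\n"
    "AI: The price of ABC on January the 1st was 200.0."
)
prompt = template.format(
    chat_history=chat_history,
    input="What are the stock prices for XYZ on the same day?",
    agent_scratchpad="",
)
print(prompt)
```

The filled prompt puts the earlier date in front of the LLM, so "the same day" becomes resolvable.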

It’s worth noting that the conversational ReAct agent is designed for conversation and struggles more than the zero-shot agent when combining multiple complex steps. We can see this if we ask the agent to answer our earlier question:

```python
result = conversational_agent(
    "What is the multiplication of the ratio of the prices of stocks 'ABC' "
    "and 'XYZ' in January 3rd and the ratio of the same prices of the same "
    "stocks in January the 4th?"
)
```

```
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Stock DB
Action Input: Get the ratio of the prices of stocks 'ABC' and 'XYZ' in January 3rd and the ratio of the same prices of the same stocks in January the 4th

> Entering new SQLDatabaseChain chain...
Get the ratio of the prices of stocks 'ABC' and 'XYZ' in January 3rd and the ratio of the same prices of the same stocks in January the 4th
SQLQuery: SELECT (SELECT price FROM stocks WHERE stock_ticker = 'ABC' AND date = '2023-01-03') / (SELECT price FROM stocks WHERE stock_ticker = 'XYZ' AND date = '2023-01-03') AS ratio_jan_3, (SELECT price FROM stocks WHERE stock_ticker = 'ABC' AND date = '2023-01-04') / (SELECT price FROM stocks WHERE stock_ticker = 'XYZ' AND date = '2023-01-04') AS ratio_jan_4 FROM stocks LIMIT 5;
SQLResult: [(0.2907268170426065, 0.2830188679245283), (0.2907268170426065, 0.2830188679245283), (0.2907268170426065, 0.2830188679245283), (0.2907268170426065, 0.2830188679245283), (0.2907268170426065, 0.2830188679245283)]
Answer: The ratio of the prices of stocks 'ABC' and 'XYZ' in January 3rd is 0.2907268170426065 and the ratio of the same prices of the same stocks in January the 4th is 0.2830188679245283.
> Finished chain.

Observation: The ratio of the prices of stocks 'ABC' and 'XYZ' in January 3rd is 0.2907268170426065 and the ratio of the same prices of the same stocks in January the 4th is 0.2830188679245283.
Thought: Do I need to use a tool? No
AI: The answer is 0.4444444444444444. Is there anything else I can help you with?

> Finished chain.
Spent a total of 2518 tokens
```

With this, the agent retrieves both ratios through a single, more complex SQL query rather than using simpler SQL alongside the Calculator tool. Note that its final answer of 0.4444 does not match the actual product of the two ratios (roughly 0.0823), which is exactly the kind of arithmetic the Calculator tool exists to handle.
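We can verify the final multiplication ourselves in plain Python, using the two ratios from the output above:

```python
# The two ratios returned by the Stock DB tool above.
ratio_jan_3 = 0.2907268170426065  # ABC / XYZ on January 3rd
ratio_jan_4 = 0.2830188679245283  # ABC / XYZ on January 4th

# Their product, rounded to four decimal places.
print(round(ratio_jan_3 * ratio_jan_4, 4))  # prints 0.0823
```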

### ReAct Docstore

Another common agent is the `react-docstore` agent. As before, it uses the ReAct methodology, but now it is explicitly built for information search and lookup using a LangChain _docstore_.

LangChain docstores allow us to store and retrieve information using traditional retrieval methods. One of these docstores is Wikipedia, which gives us access to the information on the site.

We will implement this agent using two docstore methods — `Search` and `Lookup`. With `Search`, our agent will search for a relevant article, and with `Lookup`, the agent will find the relevant chunk of information within the retrieved article. To initialize these two tools, we do:

```python
from langchain import Wikipedia
from langchain.agents.react.base import DocstoreExplorer

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description='search wikipedia'
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description='lookup a term in wikipedia'
    )
]
```

Now initialize the agent:

```python
docstore_agent = initialize_agent(
    tools, 
    llm, 
    agent="react-docstore", 
    verbose=True,
    max_iterations=3
)
```

Let’s try the following:

```python
docstore_agent("What were Archimedes' last words?")
```

```
> Entering new AgentExecutor chain...
Thought: I need to search Archimedes and find his last words.
Action: Search[Archimedes]
Observation: Archimedes of Syracuse (; c. 287 – c. 212 BC) was a Greek mathematician, physicist, engineer...
Thought: The paragraph does not mention Archimedes' last words. I need to look up "last words".
Action: Lookup[last words]
Observation: (Result 1/1) Plutarch (45–119 AD) wrote in his Parallel Lives that Archimedes was related to King Hiero...
Thought: The last words attributed to Archimedes are "Do not disturb my circles", so the answer is "Do not disturb my circles".
Action: Finish["Do not disturb my circles"]

> Finished chain.

{'input': "What were Archimedes' last words?",
 'output': '"Do not disturb my circles"'}
```

The prompt for this agent is very long, so we’ll show a shortened version. It consists of several examples showing how the agent should use the `Search` and `Lookup` tools:

```markdown
Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?

Thought: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.
Action: Search[Colorado orogeny]
Observation: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.

Thought: It does not mention the eastern sector. So I need to look up eastern sector.
Action: Lookup[eastern sector]
Observation: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.

Thought: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.
Action: Search[High Plains]
Observation: High Plains refers to one of two distinct land regions

Thought: I need to instead search High Plains (United States).
Action: Search[High Plains (United States)]
Observation: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]

Thought: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.
Action: Finish[1,800 to 7,000 ft]
```

The prompt contains several examples in a similar format. At the end of the prompt, we see:

```markdown
Question: {input}
{agent_scratchpad}
```

As before, we have an `{input}` to pass in the most recent user query and the `{agent_scratchpad}` to keep track of previous thoughts and actions.

Unlike our _conversational_ agent, there is _no_ `{chat_history}` input. That means that we are using another _zero-shot_ agent.

### Self-Ask With Search

Let’s look at one final agent — the `self-ask-with-search` agent. This agent is the first you should consider when connecting an LLM with a search engine.

The agent will perform searches and ask follow-up questions as often as required to get a final answer. We initialize the agent like so:

```python
from langchain import SerpAPIWrapper

# initialize the search chain
search = SerpAPIWrapper(serpapi_api_key='serp_api_key')

# create a search tool
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description='google search'
    )
]

# initialize the search enabled agent
self_ask_with_search = initialize_agent(
    tools,
    llm,
    agent="self-ask-with-search",
    verbose=True
)
```

Now let’s ask a question requiring multiple searches and _“self ask”_ steps.

```python
self_ask_with_search(
    "who lived longer; Plato, Socrates, or Aristotle?"
)
```

```
> Entering new AgentExecutor chain...
Yes.
Follow up: How old was Plato when he died?
Intermediate answer: eighty
Follow up: How old was Socrates when he died?
Intermediate answer: approximately 71
Follow up: How old was Aristotle when he died?
Intermediate answer: 62 years
So the final answer is: Plato

> Finished chain.

{'input': 'who lived longer; Plato, Socrates, or Aristotle?',
 'output': 'Plato'}
```

We can see the multi-step process of the agent. It asks multiple follow-up questions to home in on the final answer.
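That control flow can be sketched as a loop that keeps answering "Follow up:" questions until the model emits a final answer. The following is a toy illustration with scripted stand-ins for the model and the search tool, not the real agent:

```python
# Toy self-ask loop (illustrative only). The "model" is scripted and a
# lookup table stands in for the search tool.
AGES = {"Plato": 80, "Socrates": 71, "Aristotle": 62}

def fake_search(query):
    # e.g. "How old was Plato when he died?" -> "80"
    name = query.split("was ")[1].split(" when")[0]
    return str(AGES[name])

def fake_model(scratchpad):
    # Ask about each philosopher once, then conclude.
    for name in AGES:
        if f"How old was {name}" not in scratchpad:
            return f"Follow up: How old was {name} when he died?"
    oldest = max(AGES, key=AGES.get)
    return f"So the final answer is: {oldest}"

def self_ask(question, max_steps=5):
    scratchpad = question
    for _ in range(max_steps):
        step = fake_model(scratchpad)
        if step.startswith("So the final answer is:"):
            return step.split(":")[1].strip()
        followup = step.split("Follow up:")[1].strip()
        answer = fake_search(followup)
        scratchpad += f"\n{step}\nIntermediate answer: {answer}"
    return "max steps reached"

print(self_ask("Who lived longer; Plato, Socrates, or Aristotle?"))  # prints Plato
```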

That’s it for this chapter on LangChain agents. As you have undoubtedly noticed, agents span a vast scope of tooling in LangChain. We have covered the essentials, but there is much more we could discuss.

The transformative potential of agents is a monumental leap forward for Large Language Models (LLMs), and it is only a matter of time before the term “LLM agents” becomes synonymous with LLMs themselves.

By empowering LLMs to utilize tools and navigate complex, multi-step thought processes within these agent frameworks, we are venturing into a mind-bogglingly huge realm of AI-driven opportunities.

## References

[1] [Langchain.io](https://web.archive.org/web/20180806170305/http://langchain.io/) (2019), Wayback Machine

[2] Jun-hang Lee, [Mother of Language Slides](https://www.slideshare.net/JunhangLee/mother-of-languages-langchain-95416686) (2018), SlideShare