
AI agents have been a recent fascination of mine. If you don’t already know, agents are tool-enabled, self-guiding chains of LLM logic that can execute complex tasks from simple instructions.

It doesn’t take an AI expert to see why agents are cool: you get all the wonders of ChatGPT, but the boring stuff (copying and pasting context, wiring up external tools, piping the output of one chat into another) is handled automatically by a single agent.

Novelty aside, agents can really (really) suck.

Really Quick Overview of Agents

Before we dive into why I made that claim, it’s important to understand how agents work.

The important takeaway: agents are like simple chat prompts, except they have the brains to use tools on their own, self-correct, and carry some information forward between steps.
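
To make that concrete, here is a rough sketch of the loop an agent runs internally. This is not any particular library’s API; callLLM() and the tools map are hypothetical placeholders, and the real thing (LangChain’s agent executor, used below) adds prompt formatting, output parsing, and error handling on top:

// A bare-bones agent loop: think, optionally act with a tool, observe, repeat.
// callLLM() and tools are hypothetical stand-ins, not a real library API.
async function runAgent(question, tools, maxSteps = 5) {
  let scratchpad = '';
  for (let step = 0; step < maxSteps; step++) {
    // Think: the LLM picks the next action given the question and everything
    // it has seen so far.
    const decision = await callLLM({ question, scratchpad });

    // If the model believes it is done, return its answer.
    if (decision.finalAnswer) return decision.finalAnswer;

    // Act: run the tool the model asked for, with the input it chose.
    const observation = await tools[decision.tool].run(decision.toolInput);

    // Observe: feed the result back in so the next iteration can use it.
    scratchpad += `\nAction: ${decision.tool}\nObservation: ${observation}`;
  }
  throw new Error('Agent stopped due to max iterations.');
}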

But Agents Actually Really Suck

Enough fanfare, why do agents really suck?

By virtue of their self-guiding design, agents can be pretty hard to control directly. You need to put in a lot of prompt engineering to keep them on track.

For instance, I implemented the Wikipedia agent from the flowchart to try to search for gecko facts.

// Import paths assume a recent LangChain.js setup; they may differ slightly
// depending on the version you have installed.
import { ChatOpenAI } from '@langchain/openai';
import { WikipediaQueryRun } from '@langchain/community/tools/wikipedia_query_run';
import { initializeAgentExecutorWithOptions } from 'langchain/agents';

// Chat model for the agent; the verbose logs later show ChatOpenAI, so that's
// what we assume here.
const llm = new ChatOpenAI();

// A Wikipedia search tool, limited to one result and 100 characters of page content.
const wikipediaTool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 100
});

// Wrap the LLM and the tool in a structured-chat ReAct-style agent executor.
const executor = await initializeAgentExecutorWithOptions([wikipediaTool], llm, {
  agentType: 'structured-chat-zero-shot-react-description',
  verbose: true
});

// The input is a ReAct-style prompt that walks the agent through tool use.
const result = await executor.call({
  input: `
  Answer the following questions as best you can:
  'Tell me some facts about Geckos.'
  You have access to the following tools:
  - Wikipedia: Useful for searching scientific topics and general knowledge.

  Use the following format:
  - Question: the input question you must answer
  - Thought: you should always think about what to do
  - Action: the action to take, should be one of [wikipedia]
  - Action Input: the input to the action
  - Observation: the result of the action
  - Thought: I now know the final answer
  - Final answer: The Final answer to the question

  Begin!

  Question: {input}
  Thought: {agent_scratchpad}`
});
console.log(result);

Overall, this seems like a really straightforward code block. I initialize a tool, pass it to the executor, and execute an input. The input contains a fairly detailed prompt that guides the agent through tool use, so it should work as expected, right?

WRONG!

Despite being syntactically correct, this code fails with the cryptic error:

Agent stopped due to max iterations.

Other times, I just get this:

Your final answer here

So why did this happen? Why does the agent sometimes loop out of control? Why does it sometimes shoot itself in the foot and return nothing? How is this even possible when I’m running the same exact command?

For some GPT power-users, the answer may seem obvious: slight variations in LLM output, accumulated over the run of an agent, can cause it to fail in different ways (infinite looping vs. a non-answer). But that only explains why the failures look different from run to run, not why the agent fails in the first place.
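
As an aside, if the run-to-run variation itself is what bothers you, one small mitigation (assuming you’re using ChatOpenAI, as the verbose logs here show) is pinning the model temperature to 0. That makes sampling less random, though it doesn’t make the agent deterministic, and it doesn’t touch the underlying failure:

// Less sampling randomness means more repeatable runs. This is damage control
// for variance, not a fix for the looping / non-answer failure itself.
const llm = new ChatOpenAI({ temperature: 0 });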

A Deeper Look At Agent Lifecycle and Tool Use

With verbose output enabled, we can dig into what the agent is actually doing.

My first instinct (and the instinct of most sensible developers) was that the Wikipedia tool itself was failing.

However, this is (surprisingly) not the case!

[agent/action] [1:chain:AgentExecutor] Agent selected action: {
  "tool": "wikipedia-api",
  "toolInput": {
    "input": "Gecko species in North America"
  },
  "log": "Question: How many Gecko species are in North America?\\nThought: I should use the Wikipedia tool to search for information about Gecko species in North America.\\nAction: \\n```json\\n{\\n  \\"action\\": \\"wikipedia-api\\",\\n  \\"action_input\\": {\\n    \\"input\\": \\"Gecko species in North America\\"\\n  }\\n}\\n```"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:WikipediaQueryRun] Entering Tool run with input: "Gecko species in North America"
[tool/end] [1:chain:AgentExecutor > 4:tool:WikipediaQueryRun] [393ms] Exiting Tool run with output: "Page: Tropical house gecko
Summary: The tropical house gecko, Afro-American house gecko or cosmopoli"

As you can see, the log shows that only a partial summary is returned:

"Page: Tropical house gecko
Summary: The tropical house gecko, Afro-American house gecko or cosmopoli"

This brevity makes sense, as I am enforcing a maximum tool output length of 100 characters via maxDocContentLength.
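
You can sanity-check this outside the agent by calling the tool directly (assuming a LangChain.js version where tools expose invoke(); older versions use call() instead):

// Run the Wikipedia tool on its own, with no agent involved, to see exactly
// what the agent receives: the first 100 characters of the page content.
const snippet = await wikipediaTool.invoke('Gecko species in North America');
console.log(snippet);
// Page: Tropical house gecko
// Summary: The tropical house gecko, Afro-American house gecko or cosmopoli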

However, the agent cannot use this short answer, since it doesn’t satisfy the question. So it proceeds:

[llm/end] [1:chain:AgentExecutor > 8:chain:LLMChain > 9:llm:ChatOpenAI] [1.74s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "The Wikipedia page retrieved is about the tropical house gecko, which is not specific to North America. I need to search again with a more relevant query.\\nAction:\\n```json\\n{\\n  \\"action\\": \\"wikipedia-api\\",\\n  \\"action_input\\": {\\n    \\"input\\": \\"List of Gecko species in North America\\"\\n  }\\n}\\n```\\n\\n",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain_core",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "The Wikipedia page retrieved is about the tropical house gecko, which is not specific to North America. I need to search again with a more relevant query.\\nAction:\\n```json\\n{\\n  \\"action\\": \\"wikipedia-api\\",\\n  \\"action_input\\": {\\n    \\"input\\": \\"List of Gecko species in North America\\"\\n  }\\n}\\n```\\n\\n",
            "additional_kwargs": {}
          }
        },
        "generationInfo": {
          "finish_reason": "stop"
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 70,
      "promptTokens": 951,
      "totalTokens": 1021
    }
  }
}

The agent decides that the previous output was insufficient, so it must run the tool again with a “different query.” However, the agent is somewhat stupid and stubborn, and ends up executing what is effectively the same Wikipedia query again, yielding the same truncated result and, inevitably, an infinite agent loop.
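
As a guardrail (not a fix), it helps to bound the loop explicitly. Assuming your LangChain.js version accepts maxIterations in the executor options (this is where the “Agent stopped due to max iterations.” message comes from), a runaway agent at least gives up quickly instead of burning tokens:

// Cap the think/act/observe loop so a confused agent fails fast instead of
// re-running the same Wikipedia query forever.
const executor = await initializeAgentExecutorWithOptions([wikipediaTool], llm, {
  agentType: 'structured-chat-zero-shot-react-description',
  maxIterations: 3,
  verbose: true
});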

So how can we fix this? At first, I tried increasing the tool’s document length limit from 100 to something like 1000, hoping that reaching deeper into the Wikipedia page summary would make it more likely to contain an answer.
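
That change is a one-line tweak to the tool configuration from earlier:

// Hand the agent more of each Wikipedia page (1000 characters instead of 100),
// so a single tool call is more likely to contain the answer.
const wikipediaTool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 1000
});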

This time, I got this as a final output:

[chain/end] [1:chain:AgentExecutor] [3.57s] Exiting Chain run with output: {
  "output": "I could not find an exact number of gecko species in North America."
}
{
  output: 'I could not find an exact number of gecko species in North America.'
}

Darn. This was less of a non-answer, but it still didn’t give me what I wanted. Let’s bump the limit to 3000.

[chain/end] [1:chain:AgentExecutor] [5.38s] Exiting Chain run with output: {
  "output": "There are multiple gecko species in North America, but the exact number is unknown."
}
{
  output: 'There are multiple gecko species in North America, but the exact number is unknown.'
}

It seems like this is the best answer we are going to get, since there are still undiscovered species of geckos. However, we don’t want “I don’t know” as the correct answer.

Screw geckos, let’s try something with real precision. I swapped out the gecko question for something that certainly has a concrete answer: I asked the agent what year the Parthenon was built.

const result = await executor.call({
    input: `
  Answer the following questions as best you can:
  'What year was the Parthenon built?'
  You have access to the following tools:
  - Wikipedia: Useful for searching scientific topics and general knowledge.

  Use the following format:
  - Question: the input question you must answer
  - Thought: you should always think about what to do
  - Action: the action to take, should be either wikipedia or using your own information
  - Action Input: the input to the action
  - Observation: the result of the action
  - Thought: I now know the final answer
  - Final answer: The Final answer to the question

  Begin!

  Question: {input}
  Thought: {agent_scratchpad}`
  });

I quickly get this!

[chain/end] [1:chain:AgentExecutor] [6.02s] Exiting Chain run with output: {
  "output": "438 BC"
}
{ output: '438 BC' }

Furthermore, by modifying the action-step format of the prompt, it’s possible to draw out the nuance in this answer. For instance, Google says the Parthenon was built in 447 BC, which conflicts with my answer. So I modified my prompt to surface any historical or scientific nuance:

const result = await executor.call({
    input: `
  Answer the following questions as best you can:
  'What year was the Parthenon built?'
  You have access to the following tools:
  - Wikipedia: Useful for searching scientific topics and general knowledge.

  Use the following format:
  - Question: the input question you must answer
  - Thought: you should always think about what to do
  - Action: the action to take, should be either wikipedia or using your own information
  - Action Input: the input to the action
  - Observation: the result of the action
  - Thought: I now know the final answer, or final answer is unknown, ambiguous, or has different possible answers. I should express that the final answer is uncertain. If different historical accounts exist, make sure to express this in the final answer.
  - Final answer: The Final answer to the question

  Begin!

  Question: {input}
  Thought: {agent_scratchpad}`
  });

And got this:

[chain/end] [1:chain:AgentExecutor] [4.22s] Exiting Chain run with output: {
  "output": "The construction of the Parthenon started in 447 BC and was completed in 438 BC."
}
{
  output: 'The construction of the Parthenon started in 447 BC and was completed in 438 BC.'
}

What I’ve Learned

When it comes to programming agents, there are a few things to be aware of:

  1. The recursive workflow of agents: agents can perform the same actions over and over, so spelling out what to do at each step is important for avoiding repetition, redundancy, and infinite loops (see the sketch after this list for one way to bound them).
  2. Agents need to know when to use (and not use) tools: agents can use tools poorly, so it is important to catch and correct poor tool use through the prompt and the tool configuration.
  3. Agents need to know how to identify hallucination or lack of nuance: in the Parthenon example, the answer turned out to have historical nuance (construction began in 447 BC and finished in 438 BC). Instead of taking the first answer at face value, I instructed the model to look for nuance in its output and got a more complete answer.
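
Pulling these together, here is a minimal sketch of the setup I ended up gravitating toward: a tool output long enough to actually contain an answer, a capped iteration count, and (as in the Parthenon prompt above) explicit instructions about stopping and expressing uncertainty. Option names and import paths may vary with your LangChain.js version.

// Lesson 1: cap the loop so the agent fails fast instead of repeating itself.
// Lesson 2: give the tool enough content to actually contain an answer.
// Lesson 3: keep the prompt explicit about uncertainty (see the Parthenon prompt above).
const wikipediaTool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 3000
});

const executor = await initializeAgentExecutorWithOptions([wikipediaTool], llm, {
  agentType: 'structured-chat-zero-shot-react-description',
  maxIterations: 3,
  verbose: true
});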