
How to Build an AI Agent

This is a beginner-friendly guide to building a code-editing agent in JavaScript: an agent you can chat with in a terminal and ask to create or edit files.

At first, it will be a simple agent with limited capabilities, but in future posts, I’ll show how to improve it.

Instead of using APIs from OpenAI or Anthropic, we’ll run a local model. This means zero API costs, full data privacy, and complete control over your own AI setup.

We’ll use llama3-groq-tool-use:8b, a Llama 3 model fine-tuned for tool use. It runs fast on my laptop with 32 GB of RAM.

You can find the complete source code on GitHub; feel free to clone or fork it, experiment, and build your own version.

Running a model locally

We can run models locally using llama.cpp or Ollama. We’ll use Ollama since it’s beginner-friendly, even though under the hood it’s a wrapper around llama.cpp. You can download it from the official website. Once you have it installed, pull and start the model with ollama run llama3-groq-tool-use:8b.

Agent

Let’s get a new project set up first:

mkdir code-editing-agent
cd code-editing-agent
npm init -y && npm install ollama
touch index.js

We’re installing the ollama npm package for interacting with Ollama. This saves us from writing raw HTTP requests and handling streaming manually.
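For context, here’s roughly what the package saves us from writing. This is just a minimal sketch of a single non-streaming request to Ollama’s /api/chat endpoint, using the built-in fetch from Node 18+; exact response fields may vary between Ollama versions.

// Rough sketch of the raw HTTP request the ollama package handles for us
async function rawChat() {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3-groq-tool-use:8b",
      messages: [{ role: "user", content: "Hello!" }],
      stream: false,
    }),
  })
  const data = await response.json()
  console.log(data.message.content)
}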

In index.js we’ll build a simple chat loop that:

  1. Reads user input from the terminal
  2. Sends it to Ollama along with chat history
  3. Prints the model’s response
  4. Repeats

The key point here is that each call to the model must include all previous messages. Otherwise, the LLM won’t remember what it just did, because LLMs are stateless.

Now, let’s implement the agent functions.

const readline = require("readline")
const { Ollama } = require("ollama")

async function runAgent(getUserMessage, tools = [], model = "llama3-groq-tool-use:8b") {
  const ollama = new Ollama({ host: "http://localhost:11434" })
  const conversation = []

  console.log("Chat with Agent (use 'ctrl-c' to quit)")

  let readUserInput = true
  while (true) {
    if (readUserInput) {
      process.stdout.write("\x1b[94mYou\x1b[0m: ")
      const userInput = await getUserMessage()
      if (!userInput) break

      conversation.push({ role: "user", content: userInput })
    }

    // Send the full conversation (and available tool definitions) to the model
    const response = await ollama.chat({
      model: model,
      messages: conversation,
      tools: tools,
      stream: false,
    })

    const message = response.message
    conversation.push(message)

    if (message.content) {
      console.log(`\x1b[93mAgent\x1b[0m: ${message.content}`)
    }

    // No tool calls: the model is done, hand control back to the user
    if (!message.tool_calls || message.tool_calls.length === 0) {
      readUserInput = true
      continue
    }

    // Run each requested tool and feed the results back into the conversation
    readUserInput = false
    for (const toolCall of message.tool_calls) {
      // Arguments may arrive as a JSON string or an already-parsed object
      const args =
        typeof toolCall.function.arguments === "string"
          ? JSON.parse(toolCall.function.arguments)
          : toolCall.function.arguments
      const result = executeTool(tools, toolCall.id, toolCall.function.name, args)
      conversation.push(result)
    }
    console.log("Tools called: ", message.tool_calls)
  }
}

function executeTool(tools, id, name, input) {
  // TODO: Will implement when we add tools
}

// Create a readline interface for user input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
  terminal: false,
})

// Function to get user input
const getUserMessage = () =>
  new Promise((resolve) => {
    rl.once("line", (input) => {
      resolve(input.trim() || null)
    })
  })

const tools = [] // TODO: Will add tools later

// Run the agent
runAgent(getUserMessage, tools).catch(console.error)

That’s the core of our agent. It prompts the user, captures input, adds it to the conversation history, sends everything to the model, gets a response, adds that to the history, displays it, and repeats.

Let’s test it:

$ node index.js

Chat with Agent (use 'ctrl-c' to quit)

You: hi there, my name is Artur, what's your name?
Agent: Hello Artur! My name is LLaMA. How can I assist you today?
You: what's my name?
Agent: You’re Artur.

It remembers your name because of the conversation history.
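To see why, here’s roughly what the conversation array holds after that exchange (illustrative; the assistant lines are just the responses printed above). Every call to ollama.chat() sends this whole array back to the model.

// Illustrative contents of `conversation` after the exchange above
const conversation = [
  { role: "user", content: "hi there, my name is Artur, what's your name?" },
  { role: "assistant", content: "Hello Artur! My name is LLaMA. How can I assist you today?" },
  { role: "user", content: "what's my name?" },
  { role: "assistant", content: "You're Artur." },
]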

So, we have a basic chat interface. It can’t do anything except talk at the moment, but that’s the foundation we will build on. Now let’s make it an actual agent :).

Adding tools

To turn the chatbot into an agent, we need to give it tools.

When we call the model with a list of available tools, it looks at the user’s request and decides whether to use one. If it does, it responds with the tool name and arguments, and our agent then:

  1. Executes the tool function with those arguments
  2. Takes the result and adds it back to the conversation
  3. Calls the model again with the updated conversation

The model sees the tool result and decides what to do next: it can either call more tools or respond to the user.
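To make that concrete, here is a simplified sketch of the two message shapes involved. The exact fields can vary by model and Ollama version; the tool-result shape matches what the executeTool helper introduced next returns.

// Simplified message shapes (illustrative; exact fields vary by model and Ollama version)

// 1. The assistant message the model returns when it decides to call a tool:
const assistantToolCall = {
  role: "assistant",
  content: "",
  tool_calls: [{ function: { name: "read_file", arguments: { path: "index.js" } } }],
}

// 2. The tool-result message our agent pushes back before calling the model again:
const toolResult = { role: "tool", name: "read_file", content: "...contents of index.js..." }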

Before we add tools, we need a helper function to execute them:

function executeTool(tools, id, name, input) {
  const tool = tools.find((t) => t.function.name === name)
  if (!tool) {
    return { role: "tool", tool_call_id: id, name, content: "Tool not found" }
  }

  console.log(`\x1b[92mtool\x1b[0m: ${name}(${JSON.stringify(input)})`)

  try {
    const response = tool.function.execute(input)
    return { role: "tool", tool_call_id: id, name, content: response }
  } catch (err) {
    return { role: "tool", tool_call_id: id, name, content: err.message }
  }
}

executeTool runs a tool and returns the result in the format the model expects.
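For example, once the read_file tool from the next section is registered in tools, a call like this (with a made-up tool call id and path) produces a message we can push onto the conversation:

// Hypothetical call, assuming the read_file tool defined below is in `tools`
const result = executeTool(tools, "call_0", "read_file", { path: "index.js" })
// -> { role: "tool", tool_call_id: "call_0", name: "read_file", content: "<contents of index.js>" }
conversation.push(result)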

The first tool: read_file

This tool checks whether a file exists, reads it, and returns its contents. If the file doesn’t exist, it throws an error that the model will see.

const fs = require("fs")

const tools = [
  {
    type: "function",
    function: {
      name: "read_file",
      description:
        "Read the contents of a given relative file path. Use this when you want to see what's inside a file. Do not use this with directory names.",
      parameters: {
        type: "object",
        properties: {
          path: { type: "string", description: "The relative path of a file in the working directory." },
        },
        required: ["path"],
      },
      execute: (input) => {
        const filePath = input.path
        if (!fs.existsSync(filePath)) {
          throw new Error(`File does not exist: ${filePath}`)
        }
        return fs.readFileSync(filePath, "utf8")
      },
    },
  },
]

Let’s see it in action.

$ node index.js

You: what's in index.js?be brief  
tool: read_file({"path":"index.js"})
Tools called:  [ { function: { name: 'read_file', arguments: [Object] } } ]
Agent: The `index.js` file contains the main entry point of the application, which sets up the agent functions. The agent has a list of predefined tools, including the `read_file` tool, which reads the contents of a given relative file path.

When you run the application and input a message, it processes the conversation using the agent functions, including sending requests to the Ollama API for inference and executing the `read_file` tool if necessary.

Pretty cool, right?

The list_files tool

This one recursively walks a directory and returns its files and subdirectories.

const path = require("path")

const tools = [
  // ... read_file tool from above
  {
    type: "function",
    function: {
      name: "list_files",
      description:
        "List files and directories at a given path. If no path is provided, lists files in the current directory.",
      parameters: {
        type: "object",
        properties: {
          path: {
            type: "string",
            description: "Optional relative path to list files from. Defaults to current directory if not provided.",
          },
        },
      },
      execute: (input) => {
        const dir = input.path || "."
        const files = []

        // Recursively collect paths relative to the starting directory
        function walk(currentDir) {
          const entries = fs.readdirSync(currentDir, { withFileTypes: true })
          for (const entry of entries) {
            const fullPath = path.join(currentDir, entry.name)
            const relPath = path.relative(dir, fullPath)
            if (relPath) {
              files.push(entry.isDirectory() ? `${relPath}/` : relPath)
            }
            if (entry.isDirectory()) {
              walk(fullPath)
            }
          }
        }

        walk(dir)
        return JSON.stringify(files)
      },
    },
  },
]

Time to try it:

$ node index.js

You: what you see in this directory?
tool: list_files({"path":"."})
Tools called:  [ { function: { name: 'list_files', arguments: [Object] } } ]
Agent: The directory contains the following files:

* `index.js`: a JavaScript file
* `secret-file.txt`: an encrypted secret file

The edit_file tool

We’re going to implement edit_file by telling the LLM it can edit files by replacing existing text with new text.

It reads the file, counts how many times old_str appears, throws an error if it’s not exactly once, replaces old_str with new_str, and writes the result back to the file.

const tools = [
  // ... read_file and list_files tools from above
  {
    type: "function",
    function: {
      name: "edit_file",
      description: `Make edits to a text file.

Replaces 'old_str' with 'new_str' in the given file. 'old_str' and 'new_str' MUST be different from each other.
'old_str' must match exactly and appear exactly once in the file.

If the file specified with path doesn't exist, it will be created.`,
      parameters: {
        type: "object",
        properties: {
          path: { type: "string", description: "The path to the file" },
          old_str: {
            type: "string",
            description: "Text to search for - must match exactly and must only have one match",
          },
          new_str: { type: "string", description: "Text to replace with" },
        },
        required: ["path", "old_str", "new_str"],
      },
      execute: (input) => {
        const { path: filePath, old_str, new_str } = input
        let content = ""
        if (fs.existsSync(filePath)) {
          content = fs.readFileSync(filePath, "utf8")
        }

        // Require exactly one occurrence. An empty old_str on a new (empty) file
        // counts as a single match, which is how the model creates new files.
        const occurrences = (content.match(new RegExp(old_str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"), "g")) || []).length
        if (occurrences !== 1) {
          throw new Error(`old_str must appear exactly once, but found ${occurrences} matches.`)
        }

        if (old_str === new_str) {
          throw new Error("old_str and new_str must be different.")
        }

        content = content.replace(old_str, new_str)
        fs.writeFileSync(filePath, content, "utf8")
        return "File edited successfully."
      },
    },
  },
]

Why the “exactly once” requirement? Because it forces the model to be precise. If the model wants to edit a file, it needs to provide enough context in old_str to uniquely identify the location. This prevents ambiguous edits and makes the model’s intentions clear.
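For example, imagine a hypothetical config.js that sets the same timeout in two places. An edit whose old_str matches both would be rejected, while a bit of surrounding context makes the match unique:

// Hypothetical: config.js contains "timeout: 30" twice
// Rejected, matches twice:
//   edit_file({ path: "config.js", old_str: "timeout: 30", new_str: "timeout: 60" })
// Accepted, the extra context makes it match exactly once:
//   edit_file({ path: "config.js", old_str: "timeout: 30, // db client", new_str: "timeout: 60, // db client" })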

Time to see the magic happen. Let’s ask it to create a file:

You: Create a new file called hello.txt with the content 'Hello, World!'
tool: edit_file({"new_str":"Hello, World!","old_str":"","path":"hello.txt"})
Tools called:  [ { function: { name: 'edit_file', arguments: [Object] } } ]
Agent: A new file called `hello.txt` has been created with the content 'Hello, World!'

Check the file:

$ cat hello.txt
Hello, World!

It works! Now let’s edit it:

You: Edit hello.txt and change 'World' to 'Agent'
tool: edit_file({"new_str":"Agent","old_str":"World","path":"hello.txt"})
Tools called:  [ { function: { name: 'edit_file', arguments: [Object] } } ]

tool: read_file({"path":"hello.txt"})
Tools called:  [ { function: { name: 'read_file', arguments: [Object] } } ]
Agent: `hello.txt` has been updated.

Verify:

$ cat hello.txt
Hello, Agent!

Notice how the model edited the file and then read it back to verify the change before responding. This is the agent planning and checking its own actions.

A more complex example

Now that we have all three tools working together, let’s ask it to write code:

$ node index.js

You: Create a fizzbuzz.js file that prints FizzBuzz from 1 to 20
tool: edit_file({"new_str":"// fizzbuzz.js\nfor (let i = 1; i <= 20; i++) {\n  if (i % 15 === 0) {\n    console.log('FizzBuzz');\n  } else if (i % 3 === 0) {\n    console.log('Fizz');\n  } else if (i % 5 === 0) {\n    console.log('Buzz');\n  } else {\n    console.log(i);\n  }\n}\n","old_str":"","path":"fizzbuzz.js"})
Tools called:  [ { function: { name: 'edit_file', arguments: [Object] } } ]

tool: read_file({"path":"fizzbuzz.js"})
Tools called:  [ { function: { name: 'read_file', arguments: [Object] } } ]
Agent: `fizzbuzz.js` has been created. with the following content:

for (let i = 1; i <= 20; i++) {
  if (i % 15 === 0) {
    console.log('FizzBuzz');
  } else if (i % 3 === 0) {
    console.log('Fizz');
  } else if (i % 5 === 0) {
    console.log('Buzz');
  } else {
    console.log(i);
  }
}

Does it work?

$ node fizzbuzz.js
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz

It does!

Wrapping Up

Tools like Cursor or Claude Code seem magical when you watch them work. But at their core, they are built on the same foundation: a loop that maintains conversation context, a set of tools, and a model that uses them.

Our agent is, of course, a very simple implementation with serious limitations. The good news is that these are solvable problems. In the next post, we’ll use LangChain and add more functionality to make the agent more powerful. Stay tuned!

You can find the source code for this agent on GitHub. Feel free to fork it, experiment, and build your own version.