Jacob Orshalick
LLM Development with JavaScript: Is that a thing?

This tutorial is a fast track to developing JavaScript apps that talk to LLMs. You'll have a REST service up and talking to an LLM in under 10 minutes. Let the coding magic begin!


This is part 1 of my free e-book:

The Busy Developer's Guide to Generative AI

Fig 1: The Busy Developers Guide to Gen AI

All source code is available on GitHub.


ChatGPT vaulted generative AI into mainstream culture. But it's really just a user interface, powered by the true marvel that lies beneath: the large language model (LLM).

More precisely, LLMs are very large deep learning models that are pre-trained on vast amounts of data. The keyword there is pre-trained. 

All we have to do to make use of these same models is send them a prompt telling them what we want. We can do that by calling the OpenAI APIs.

Fig 2: Your REST service can call the same APIs used by ChatGPT
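To make that concrete, here's a rough sketch of the HTTP call that sits behind every LLM request. The endpoint and payload shape follow OpenAI's chat completions API; `buildChatRequest` is a hypothetical helper and the model name is just an example, not something this tutorial requires:

```javascript
// Rough sketch of the HTTP request behind an LLM call.
// buildChatRequest is a hypothetical helper; the model name is an example.
function buildChatRequest(prompt) {
  return {
    model: "gpt-4o-mini", // example model name — substitute your own
    messages: [{ role: "user", content: prompt }],
  };
}

// Requires OPENAI_API_KEY to be set; uses the fetch built into Node 18+.
async function callOpenAI(prompt) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

In the rest of this tutorial we'll let LangChain handle this plumbing for us.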

1. Install Node

Download and install: https://nodejs.org. Verify the install in your terminal:

~ % node -v

If the installation succeeded, the version will print.

2. Initialize your project

Create a new directory for your project. Navigate to it in your terminal and run the following command:

~/ai-for-devs % npm init -y

This creates a new package.json file, initializing the project.

3. Install Node modules

The node modules we'll be using:

  • express: makes server creation quick and easy
  • langchain: provides a framework for building apps with LLMs
  • @langchain/openai: provides OpenAI integrations through their SDK
  • cors: Express middleware that enables CORS

In the same terminal, run the following command:

~/ai-for-devs % npm install express langchain @langchain/openai cors
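After the install finishes, the dependencies section of your package.json will look roughly like this. The version numbers below are assumptions and will vary depending on when you run the command:

```json
{
  "dependencies": {
    "@langchain/openai": "^0.3.0",
    "cors": "^2.8.5",
    "express": "^4.19.0",
    "langchain": "^0.3.0"
  }
}
```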

4. Create the server file

Create a file called server.mjs in the project directory. Open it in a text editor and add the following lines of code:

import express from "express";
import { ChatOpenAI } from "@langchain/openai";
import cors from 'cors';

const app = express();

// Allow cross-origin requests so a browser front-end can call this API
app.use(cors());

// Reads the OPENAI_API_KEY environment variable by default
const chatModel = new ChatOpenAI({});

app.get('/', async (req, res) => {
  // Send a prompt to the model and wait for its reply
  const response =
    await chatModel.invoke(
      "Can you simply say 'test'?");

  res.send(response.content);
});

app.listen(3000, () => {
  console.log(`Server is running on port 3000`);
});
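One thing the route above doesn't handle is a failed LLM call (bad API key, rate limit, network error). As a sketch of how you might guard against that, here's a hypothetical `wrapHandler` helper that catches errors from any async Express handler and returns a 500 instead of leaving the request hanging:

```javascript
// Sketch of defensive error handling for an async Express route.
// wrapHandler is a hypothetical helper, not part of Express or LangChain.
function wrapHandler(handler) {
  return async (req, res) => {
    try {
      await handler(req, res);
    } catch (err) {
      console.error("LLM call failed:", err.message);
      res.status(500).send("Something went wrong talking to the model.");
    }
  };
}

// Usage in server.mjs:
// app.get('/', wrapHandler(async (req, res) => {
//   const response = await chatModel.invoke("Can you simply say 'test'?");
//   res.send(response.content);
// }));
```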

5. Create an OpenAI account

Register here: https://platform.openai.com. Obtain an API key:

  • Simply select 'API keys' in the upper left navigation
  • Select '+ Create new secret key'
  • Copy the key somewhere safe for now

Fig 3: The API keys page in OpenAI's interface

6. Set an environment variable

In the same terminal, run the following command with your key value:

~/ai-for-devs % export OPENAI_API_KEY=<YOUR_KEY_VALUE>

Optionally, add this command to your shell profile (e.g., ~/.zshrc for zsh) so the key persists across terminal sessions.
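If the variable isn't set, the server will start fine but every request will fail with a confusing authentication error from OpenAI. One way to fail fast instead is a small guard near the top of server.mjs; `requireEnv` here is a hypothetical helper, not part of any library:

```javascript
// Hypothetical guard: fail at startup with a clear message when a
// required environment variable is missing, instead of surfacing a
// confusing 401 from OpenAI on the first request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: requireEnv("OPENAI_API_KEY");
```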

7. Launch your server

Back in the terminal, run the following command:

~/ai-for-devs % node server.mjs

Open your web browser and visit: http://localhost:3000

If everything is wired up correctly, you'll see the response from the OpenAI model: "test"

Congratulations!

You've successfully built a functional REST service. Beyond prompting an AI and returning responses, it forms the foundation for the remainder of my free e-book:

The Busy Developer's Guide to Gen AI

In Part 2, we'll explore streaming longer responses so our users don't have to wait. Parts 3 and 4 will guide you through creating a complete RAG (Retrieval-Augmented Generation) implementation.

Download the book to learn more!

Top comments (5)

JoelBonetR 🥇

Thought you were about to explain Brain JS or something like that at first 😂 either way it hasn't been disappointing, straight to the point and clear, thank you for sharing! 😁

Jacob Orshalick

Glad you found it helpful! Maybe I’ll dive into Brain JS for the next one 😉

JoelBonetR 🥇

That'll be amazing! 😍 Mention me if you do so 😁

Matthew Hou

I've switched AI tools three times in the last year and the one thing that saved me was keeping my workflow config in plain markdown files. AGENTS.md, coding conventions, project context — all tool-agnostic. When I moved from Cursor to Claude Code, I just pointed it at the same files. Maybe 10% needed format tweaks. The other 90% just worked. The tool doesn't matter nearly as much as your workflow documentation.

Matthew Hou

We had the exact same problem. Our coding standards were in a Confluence page that nobody read — including AI. So we converted the most important rules into ESLint and Ruff configs. Now when AI generates code, it auto-lints on save. Our code review time dropped significantly because all the 'you forgot to follow our pattern' comments disappeared. If your standards aren't executable, they're suggestions.