By: Amir Tadrisi

Published on: 6/13/2025

Last updated on: 6/20/2025

LLM-Driven Data Analytics: Build AI-Powered Insights & Dynamic Charts with Next.js and OpenAI

Introduction

What are LLM-driven insights?

LLM-driven insights refer to using advanced AI language models, like OpenAI’s GPT, to understand, interpret, and generate meaningful information from complex datasets. Instead of manually writing queries or code, you can ask the AI questions in natural language, like:

  • What were the total sales last quarter?
  • Which product category had the highest revenue in May?
  • Show me the monthly sales trend for the past year.
  • Identify the top 5 customers by purchase volume.
  • Compare sales performance between regions A and B.

The LLM understands your request, analyzes the sales data, and provides clear, actionable answers — sometimes even generating charts or summaries to visualize the findings.

What are we going to build?

We’ll create a powerful yet simple data analysis chatbot using Next.js and OpenAI’s language models. This chatbot will allow users to interact with sales data through natural language queries — no complex coding, SQL knowledge, or BI platform setup required.

Why use Next.js?

Choosing the right framework is essential when building a chatbot that’s both powerful and user-friendly. Next.js stands out as an excellent choice thanks to its robust features tailored for modern web applications. Here’s why Next.js is perfect for our data analysis chatbot:

  • Full-Stack Capabilities
  • AI SDK: Vercel provides an SDK that integrates different LLM providers into our project without us worrying about the integration details. The SDK takes care of a lot of repetitive boilerplate and standardizes the integration across providers like OpenAI, Gemini, or Grok, to name a few.
  • Easy Deployment & Self-Hosting
  • Fast and Scalable

Prerequisites and Setup

Installing Next.js

Open your command-line interface and run the following command: `npx create-next-app@latest`

After installation is done, change into the project directory with `cd llm-insights` and start the development server with `npm run dev`.

Visit http://localhost:3000. You should see something like the following:

Next.js Default Page

Install Libraries and Dependencies

In the command line, stop the development server and install Vercel's AI SDK by running the following command: `npm install ai @ai-sdk/openai`

Now, go to the OpenAI platform and, from the left sidebar, create an API key. Copy the key, create a `.env` file in the root of the project, and add `OPENAI_API_KEY="YourKey"` to it.

Now, let's install the frontend libraries and dependencies. Install Heroicons by running `npm install @heroicons/react`.

Set up Project Structure

In the project root directory, create a folder called `lib`, and inside it, create a file called `openai.ts`.
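
The file's contents aren't shown here; a minimal sketch that exports a reusable OpenAI provider instance via the AI SDK's createOpenAI helper could look like this (the explicit apiKey is optional, since the SDK falls back to the OPENAI_API_KEY environment variable by default):

```ts
// lib/openai.ts
// A minimal sketch: a shared OpenAI provider instance for the rest of the app.
import { createOpenAI } from "@ai-sdk/openai";

export const openai = createOpenAI({
  // Optional: the SDK reads process.env.OPENAI_API_KEY automatically if omitted
  apiKey: process.env.OPENAI_API_KEY,
});
```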

Then, in `app/globals.css`, remove everything and add only the following:
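
The stylesheet contents aren't reproduced here; assuming the Tailwind CSS setup that create-next-app scaffolds (v4 at the time of writing), the trimmed-down file would be just the Tailwind entry point:

```css
/* app/globals.css — keep only the Tailwind import */
@import "tailwindcss";
```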

Adding ChatBot components

In the app folder, add a components directory and create the chatbot component files in it, as sketched below.
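
The article's file list isn't shown here; as a starting point, here is a hypothetical static skeleton for app/components/ChatBot.tsx (the markup and class names are illustrative — we'll wire it up to the API in a later section):

```tsx
// app/components/ChatBot.tsx
// A static skeleton: message area plus input bar, no API wiring yet.
import { PaperAirplaneIcon } from "@heroicons/react/24/solid";

export default function ChatBot() {
  return (
    <div className="mx-auto flex h-screen max-w-2xl flex-col p-4">
      {/* Message list placeholder */}
      <div className="flex-1 overflow-y-auto" />
      {/* Input bar */}
      <form className="mt-4 flex gap-2">
        <input
          placeholder="Ask about your sales data..."
          className="flex-1 rounded border p-2"
        />
        <button type="submit" className="rounded bg-blue-600 p-2 text-white" aria-label="Send">
          <PaperAirplaneIcon className="h-5 w-5" />
        </button>
      </form>
    </div>
  );
}
```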

In the app directory, change page.tsx to the following:
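
A minimal page.tsx that renders the chatbot:

```tsx
// app/page.tsx — render the ChatBot component full-page
import ChatBot from "./components/ChatBot";

export default function Home() {
  return <ChatBot />;
}
```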

Here is what you should see on your screen

Chatbot Skeleton

Sales Data

In our example, we are going to create a file with some sample sales data, but in a real-world application you would probably pull data from an API or database. In the project root, create a new folder called `data` and add a new file called `sales.js` to it.
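
The dataset itself isn't shown here; a small hypothetical sample with the fields the chatbot will reason about (date, region, category, product, units, revenue) might look like this — the repo's actual data may differ:

```js
// data/sales.js — hypothetical sample data for illustration only
export const salesData = [
  { date: "2024-01-15", region: "A", category: "Electronics", product: "Laptop", units: 12, revenue: 14400 },
  { date: "2024-01-22", region: "B", category: "Electronics", product: "Phone", units: 30, revenue: 21000 },
  { date: "2024-02-05", region: "A", category: "Furniture", product: "Desk", units: 8, revenue: 3200 },
  { date: "2024-02-19", region: "B", category: "Furniture", product: "Chair", units: 25, revenue: 3750 },
  { date: "2024-03-03", region: "A", category: "Electronics", product: "Tablet", units: 15, revenue: 7500 },
];
```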

Integrate with Vercel AI SDK

The Vercel AI SDK has two main building blocks for chatbots:

  1. streamText, which streams text from the LLM. We provide the model, the model provider (OpenAI, Gemini, etc.), and the messages (the back-and-forth system and user messages); we can also provide a system prompt to guide how the model responds.
  2. useChat, which wires streamText up to our UI and takes care of tasks like submit handling, input changes, and messages state for us out of the box.

AI API

Let's implement the API endpoint that receives messages and passes them to streamText to generate an LLM response. In the app directory, create `api/ai/chat/route.tsx`:
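
A minimal sketch of the route handler, assuming AI SDK 4.x and the provider instance from lib/openai.ts (the model choice is illustrative):

```tsx
// app/api/ai/chat/route.tsx — a minimal chat endpoint sketch
import { streamText } from "ai";
import { openai } from "@/lib/openai"; // the provider instance we created earlier

export async function POST(req: Request) {
  // useChat posts the full message history in the request body
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"), // illustrative model choice
    messages,
  });

  // Stream the response back to the useChat hook on the client
  return result.toDataStreamResponse();
}
```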

Wiring up to ChatBot

Now we can wire up our ChatBot component to the endpoint to send messages and receive a streamed response.

First, let's install a Markdown parser, because the LLM returns its answers in Markdown by default: run `npm i react-markdown remark-gfm` in the command prompt. Next, replace the ChatBot content with the following:
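
A sketch of the wired-up component, assuming AI SDK 4.x's useChat hook (exported from ai/react); the layout is carried over from the skeleton:

```tsx
// app/components/ChatBot.tsx — wired to the chat endpoint via useChat
"use client";

import { useChat } from "ai/react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { PaperAirplaneIcon } from "@heroicons/react/24/solid";

export default function ChatBot() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/ai/chat", // our custom endpoint; useChat defaults to /api/chat
  });

  return (
    <div className="mx-auto flex h-screen max-w-2xl flex-col p-4">
      <div className="flex-1 space-y-4 overflow-y-auto">
        {messages.map((message) => (
          <div key={message.id} className={message.role === "user" ? "text-right" : "text-left"}>
            {/* Render the LLM's Markdown answer as HTML */}
            <ReactMarkdown remarkPlugins={[remarkGfm]}>{message.content}</ReactMarkdown>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="mt-4 flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about your sales data..."
          className="flex-1 rounded border p-2"
        />
        <button type="submit" className="rounded bg-blue-600 p-2 text-white" aria-label="Send">
          <PaperAirplaneIcon className="h-5 w-5" />
        </button>
      </form>
    </div>
  );
}
```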

By default, useChat posts to the `api/chat` endpoint; since we moved ours to `api/ai/chat`, we have to mention it in the options. Let's give it a try.

Prompt Engineering

Prompt engineering is the steering wheel of Large Language Models (LLMs): it lets us guide the model to produce accurate and relevant results. Since LLMs generate responses based on the prompt context, how you phrase questions or instructions directly impacts the quality of the insights you get.

For our sales data chatbot, prompt engineering helps the AI understand exactly what data to analyze, how to interpret it, and how to present the results, whether as text summaries or chart specifications.

Key Techniques We’ll Use:

  • Clear Context Setting: We’ll provide the model with a concise description of the dataset structure and what kind of data it contains, so it knows what it’s working with.
  • Explicit Instructions: The prompt will specify the type of analysis or insight requested, such as “calculate total sales,” “identify top customers,” or “generate a monthly sales trend chart.”
  • Few-shot Examples: When needed, we’ll include example queries and expected responses in the prompt to guide the model’s behavior and improve accuracy.

If you want to learn more about prompt engineering and its advanced techniques, feel free to read The Definitive Guide to LLM Prompt Engineering and Prompt Debugging Techniques.

System Prompt

streamText can receive a system prompt, which lets us give our model a persona (expert data analyst). This helps the model draw on the relevant training data, defines its tasks, provides constraints (our sales data), and gives it a few examples to make sure it provides accurate answers.

In route.tsx, let's add a system prompt and pass it to streamText:
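
A sketch of the updated route; the prompt wording and few-shot example are illustrative, and the import path assumes sales.js exports salesData with the default `@/` root alias:

```tsx
// app/api/ai/chat/route.tsx — now with a system prompt grounding the model in our data
import { streamText } from "ai";
import { openai } from "@/lib/openai";
import { salesData } from "@/data/sales"; // assumes sales.js exports salesData

// Persona + task + constraints + a few-shot example, per the techniques above
const systemPrompt = `You are an expert data analyst.
Answer questions using ONLY the sales data provided below.
Be concise, show your calculations where relevant, and answer in Markdown.

Example:
Q: What were the total sales last quarter?
A: Sum the "revenue" field for the matching dates and report the total.

Sales data:
${JSON.stringify(salesData)}`;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: systemPrompt,
    messages,
  });

  return result.toDataStreamResponse();
}
```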

Let's see how it works

Tool Calling

Tool calling is a powerful feature that allows a language model to interact with external tools or APIs during a conversation. Instead of just generating text, the model can “call” specialized functions to perform tasks like fetching data, running calculations, or creating visualizations.

For our sales data chatbot, tool calling enables the AI to request the generation of charts based on user queries. When a user asks for a sales trend or comparison, the model can produce a structured chart specification (like Vega-Lite JSON) and then call a chart-rendering tool to display the visualization directly in the app.

Adding a tool to the streamText

Let's create a tool called generateRevenueChart, which generates a pie chart for a given category. In route.tsx:
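
A sketch of the tool registration, assuming AI SDK 4.x's tool helper with a zod schema for the parameters (add zod with `npm install zod` if it isn't already present); the aggregation logic is illustrative:

```tsx
// app/api/ai/chat/route.tsx — registering the generateRevenueChart tool
import { streamText, tool } from "ai";
import { z } from "zod";
import { openai } from "@/lib/openai";
import { salesData } from "@/data/sales";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: systemPrompt, // the system prompt from the previous section
    messages,
    tools: {
      generateRevenueChart: tool({
        description: "Generate revenue data for a pie chart of a given product category",
        parameters: z.object({
          category: z.string().describe("The product category to chart"),
        }),
        execute: async ({ category }) => {
          // Pull the rows for the requested category and shape them for the chart
          const rows = salesData.filter((row) => row.category === category);
          const data = rows.map((row) => ({ name: row.product, value: row.revenue }));
          return { category, data };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```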

Here we imported the tool helper from the Vercel AI SDK and created our tool with a category parameter of type string (the parameters are the arguments the model supplies to the function it wants the tool to execute). The tool's function pulls the sales JSON and, using the passed category, generates the revenue data for our pie chart.

Create PieChart

In the components folder, create a new file, PieChart.tsx, with the following content:
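
A sketch of the chart component using recharts; the prop shape matches what the generateRevenueChart tool returns, and the colors and sizing are illustrative:

```tsx
// app/components/PieChart.tsx — renders the tool's { name, value } slices as a pie chart
"use client";

import { PieChart as RePieChart, Pie, Cell, Tooltip, Legend } from "recharts";

const COLORS = ["#0088FE", "#00C49F", "#FFBB28", "#FF8042", "#A28CF0"];

type Slice = { name: string; value: number };

export default function PieChart({ data }: { data: Slice[] }) {
  return (
    <RePieChart width={360} height={280}>
      <Pie data={data} dataKey="value" nameKey="name" outerRadius={100} label>
        {data.map((_, i) => (
          <Cell key={i} fill={COLORS[i % COLORS.length]} />
        ))}
      </Pie>
      <Tooltip />
      <Legend />
    </RePieChart>
  );
}
```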

Since we are using recharts, let's install it: `npm install recharts`

Render PieChart

When our tool gets invoked, the response type will be tool-invocation instead of text. Let's add a case for that so that when the LLM response is a tool invocation, we take the tool's result for the revenue chart and pass it to our PieChart.

Let's change the ChatBot.tsx content to the following:
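
A sketch of the updated component, assuming AI SDK 4.x's message.parts array, where tool calls surface as tool-invocation parts whose state becomes "result" once execute resolves:

```tsx
// app/components/ChatBot.tsx — now rendering text parts and tool-invocation parts
"use client";

import { useChat } from "ai/react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { PaperAirplaneIcon } from "@heroicons/react/24/solid";
import PieChart from "./PieChart";

export default function ChatBot() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/ai/chat",
  });

  return (
    <div className="mx-auto flex h-screen max-w-2xl flex-col p-4">
      <div className="flex-1 space-y-4 overflow-y-auto">
        {messages.map((message) => (
          <div key={message.id} className={message.role === "user" ? "text-right" : "text-left"}>
            {message.parts?.map((part, i) => {
              // Plain text parts render as Markdown, as before
              if (part.type === "text") {
                return (
                  <ReactMarkdown key={i} remarkPlugins={[remarkGfm]}>
                    {part.text}
                  </ReactMarkdown>
                );
              }
              // Tool parts: render the pie chart once the tool has produced a result
              if (
                part.type === "tool-invocation" &&
                part.toolInvocation.toolName === "generateRevenueChart" &&
                part.toolInvocation.state === "result"
              ) {
                return <PieChart key={i} data={part.toolInvocation.result.data} />;
              }
              return null;
            })}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="mt-4 flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about your sales data..."
          className="flex-1 rounded border p-2"
        />
        <button type="submit" className="rounded bg-blue-600 p-2 text-white" aria-label="Send">
          <PaperAirplaneIcon className="h-5 w-5" />
        </button>
      </form>
    </div>
  );
}
```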

Wrapping Up the Development Journey

Throughout this tutorial, we’ve seen how combining Next.js with OpenAI’s powerful language models enables us to build an intuitive, AI-driven data analysis chatbot. From setting up the project and integrating sample sales data to crafting effective prompts and leveraging tool calling for dynamic chart generation, each step contributes to a seamless user experience. This approach not only simplifies complex data querying but also enhances decision-making by delivering clear, actionable insights in real time.

Conclusion

Building an LLM-driven data analysis chatbot with Next.js and OpenAI unlocks a new level of accessibility and efficiency in working with complex datasets. By harnessing prompt engineering and tool calling, we empower users to interact with data naturally, asking questions in plain language and receiving insightful answers along with visualizations. This method reduces reliance on specialized technical skills and accelerates data-driven decisions.

As AI continues to evolve, integrating such intelligent chatbots into your applications can transform how your teams explore and understand data. Whether you’re analyzing sales, marketing, or operational metrics, the combination of Next.js and LLMs offers a scalable, flexible foundation to build smarter analytics tools.

Ready to take your data analysis capabilities further? Experiment with expanding your chatbot’s dataset, refining prompts, or adding new tools to generate richer insights. The future of AI-powered analytics is at your fingertips.

You can find the project repo here
