LLM-Driven Data Analytics: Build AI-Powered Insights & Dynamic Charts with Next.js and OpenAI
By: Amir Tadrisi
Published on: 6/13/2025
Last updated on: 6/20/2025
LLM-driven insights refer to using advanced AI language models, like OpenAI’s GPT, to understand, interpret, and generate meaningful information from complex datasets. Instead of manually writing queries or code, you can ask the AI questions in natural language, such as "Which category generated the most revenue, and how did sales trend month over month?"
The LLM understands your request, analyzes the sales data, and provides clear, actionable answers — sometimes even generating charts or summaries to visualize the findings.
We’ll create a powerful yet simple data analysis chatbot using Next.js and OpenAI’s language model. This chatbot will let users interact with sales data through natural language queries, with no complex coding, SQL knowledge, or BI platform installation required.
Choosing the right framework is essential when building a chatbot that’s both powerful and user-friendly. Next.js stands out as an excellent choice thanks to its robust features for modern web applications: API routes give us a backend for the chat endpoint, React components power the chat UI, and Vercel's AI SDK integrates with it out of the box, which is exactly what our data analysis chatbot needs.
Open your command-line interface and run `npx create-next-app@latest`, naming the project llm-insights when prompted. After the installation is done, change into the project directory with `cd llm-insights` and start the development server with `npm run dev`.
Visit http://localhost:3000 in your browser. You should see the default Next.js welcome page.
In the command line, stop the development server and install Vercel's AI SDK by running `npm install ai @ai-sdk/openai`.
Now, go to the OpenAI site and, from the left sidebar, get an API key. Copy the key, create a `.env` file in the root of the project, and add `OPENAI_API_KEY="YourKey"` to it.
Now, let's install the frontend libraries and dependencies. Install Heroicons by running `npm install @heroicons/react`.
In the project root directory, create a folder called `lib`, and inside it create a file called `openai.ts`.
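The file's contents are not reproduced here; a minimal sketch, assuming it simply configures and re-exports the OpenAI provider from the AI SDK so API routes can import a single shared instance, could look like this (the `createOpenAI` call and explicit key wiring are assumptions):

```ts
// lib/openai.ts — a minimal sketch (assumed contents, not the tutorial's
// exact code): configure the OpenAI provider once and re-export it.
import { createOpenAI } from "@ai-sdk/openai";

export const openai = createOpenAI({
  // The AI SDK reads OPENAI_API_KEY automatically; passing it explicitly
  // just makes the dependency on .env visible.
  apiKey: process.env.OPENAI_API_KEY,
});
```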
Then, in `app/globals.css`, remove everything and add only the base styles to it.
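The original stylesheet snippet is not shown here; a reasonable minimal baseline, assuming the default create-next-app setup with Tailwind CSS, would be:

```css
/* app/globals.css — assumed minimal baseline (the tutorial's exact styles
   are not shown). A default create-next-app project loads Tailwind here. */
@import "tailwindcss";
```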
In the `app` folder, add a `components` directory and create the chatbot UI files in it. The main one we will keep building on throughout this tutorial is `ChatBot.tsx`.
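As a starting point, a bare-bones placeholder for `ChatBot.tsx` might look like this (hypothetical skeleton; we replace it with the real implementation once the API endpoint exists):

```tsx
// app/components/ChatBot.tsx — hypothetical starting skeleton; the
// tutorial's initial component code is not shown. It only renders a shell
// for now and is fleshed out later when we wire it to the API endpoint.
"use client";

export default function ChatBot() {
  return (
    <div className="flex h-full flex-col rounded-lg border p-4">
      <p className="text-sm text-gray-500">
        Ask a question about your sales data to get started.
      </p>
    </div>
  );
}
```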
In the `app` directory, change `page.tsx` to the following:
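A sketch of what the page could look like, assuming it just renders the ChatBot component (the heading text and layout classes are illustrative):

```tsx
// app/page.tsx — a minimal sketch; the tutorial's exact markup is not shown.
import ChatBot from "./components/ChatBot";

export default function Home() {
  return (
    <main className="mx-auto flex min-h-screen max-w-3xl flex-col p-6">
      <h1 className="mb-4 text-2xl font-semibold">Sales Data Assistant</h1>
      <ChatBot />
    </main>
  );
}
```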
With the dev server running, you should now see the chatbot shell rendered on the page.
In our example, we are going to create a file and add some sample sales data to it; in a real-world application, you would probably pull data from an API or a database. In the project root, create a new folder called `data` and add a new file called `sales.js` in it.
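The tutorial's actual dataset is not reproduced here; the rows below are illustrative, and the field names (month, category, product, units, revenue) are an assumption chosen to support the revenue-by-category chart we build later:

```js
// data/sales.js — illustrative sample data (not the tutorial's real dataset).
export const sales = [
  { month: "2025-01", category: "Electronics", product: "Laptop", units: 120, revenue: 144000 },
  { month: "2025-01", category: "Furniture", product: "Desk", units: 45, revenue: 13500 },
  { month: "2025-02", category: "Electronics", product: "Monitor", units: 200, revenue: 46000 },
  { month: "2025-02", category: "Appliances", product: "Blender", units: 80, revenue: 6400 },
  { month: "2025-03", category: "Furniture", product: "Chair", units: 150, revenue: 22500 },
  { month: "2025-03", category: "Electronics", product: "Headphones", units: 300, revenue: 36000 },
];
```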
Vercel's AI SDK has two main pieces for a chatbot: a server side, where `streamText` runs in an API route and streams the LLM response, and a client side, where the `useChat` hook sends the user's messages and consumes that stream.
Let's implement our API endpoint: it receives the chat messages, passes them to `streamText`, and streams back the LLM response. In the `app` directory, create `api/ai/chat/route.tsx`.
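Here is a minimal sketch of the route, assuming AI SDK v4 conventions (`streamText` plus `toDataStreamResponse`); the model name and the `@/` import alias are assumptions, and the tutorial's exact code may differ:

```tsx
// app/api/ai/chat/route.tsx — minimal chat route (sketch, assuming AI SDK v4).
import { streamText } from "ai";
import { openai } from "@/lib/openai"; // provider configured earlier in lib/openai.ts

export async function POST(req: Request) {
  // useChat posts the full message history as JSON.
  const { messages } = await req.json();

  // Forward the conversation to the model and stream tokens back.
  const result = streamText({
    model: openai("gpt-4o-mini"), // model choice is an assumption
    messages,
  });

  return result.toDataStreamResponse();
}
```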
Now we can wire up our ChatBot component to the endpoint to send messages and receive a streamed response.
First, let's install a Markdown parser, because the LLM returns its answers in Markdown by default. In the command prompt, run `npm i react-markdown remark-gfm`. Next, change the ChatBot content to the following:
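Here is a sketch of the wired-up component, assuming the AI SDK v4 `useChat` hook (imported from `ai/react`; newer SDK versions export it from `@ai-sdk/react`) and a send icon from Heroicons; the styling and exact markup are illustrative:

```tsx
// app/components/ChatBot.tsx — sketch of the chat UI (not the tutorial's
// exact code). Messages are rendered as Markdown; the form posts to our
// custom endpoint via useChat.
"use client";

import { useChat } from "ai/react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { PaperAirplaneIcon } from "@heroicons/react/24/solid";

export default function ChatBot() {
  // Point the hook at our custom endpoint instead of the default /api/chat.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/ai/chat",
  });

  return (
    <div className="flex h-full flex-col gap-4 rounded-lg border p-4">
      <div className="flex-1 space-y-3 overflow-y-auto">
        {messages.map((message) => (
          <div key={message.id} className={message.role === "user" ? "text-right" : "text-left"}>
            <div className="inline-block rounded-lg bg-gray-100 px-3 py-2 text-left">
              <ReactMarkdown remarkPlugins={[remarkGfm]}>{message.content}</ReactMarkdown>
            </div>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about your sales data..."
          className="flex-1 rounded border px-3 py-2"
        />
        <button type="submit" className="rounded bg-black p-2 text-white" aria-label="Send">
          <PaperAirplaneIcon className="h-5 w-5" />
        </button>
      </form>
    </div>
  );
}
```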
`useChat` uses the `/api/chat` endpoint by default, but since we moved our endpoint to `/api/ai/chat`, we have to pass it in the hook's options. Let's give it a try.
Prompt engineering is the steering wheel of Large Language Models (LLMs): it lets us guide the model to produce accurate and relevant results. Since LLMs generate responses based on the prompt context, how you phrase questions or instructions directly impacts the quality of the insights you get.
For our sales data chatbot, prompt engineering helps the AI understand exactly what data to analyze, how to interpret it, and how to present the results, whether as text summaries or chart specifications.
If you want to learn more about prompt engineering and its advanced techniques, feel free to read The Definitive Guide to LLM Prompt Engineering and Prompt Debugging Techniques
`streamText` can receive a system prompt, which lets us give our model a persona (Expert Data Analyzer). This steers the model toward the relevant parts of its training data, defines its tasks, provides constraints (our sales data), and gives it a few examples to make sure it produces an accurate answer.
In `route.tsx`, let's add a system prompt and pass it to `streamText`:
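A sketch of the updated route, assuming the system prompt embeds the sales data as JSON; the exact prompt wording and the few-shot example are assumptions:

```tsx
// app/api/ai/chat/route.tsx — sketch with a system prompt (assumed wording).
import { streamText } from "ai";
import { openai } from "@/lib/openai";
import { sales } from "@/data/sales";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o-mini"), // model choice is an assumption
    // Persona + constraints: the model only reasons over our sales dataset.
    system: `You are an expert data analyst. Answer questions using only the
sales data below. Be concise, include the relevant numbers, and format your
answers in Markdown.

Example: "Which category earned the most in January?" ->
"Electronics, with $144,000 in revenue."

Sales data (JSON):
${JSON.stringify(sales)}`,
    messages,
  });

  return result.toDataStreamResponse();
}
```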
Let's see how it works
Tool calling is a powerful feature that allows a language model to interact with external tools or APIs during a conversation. Instead of just generating text, the model can “call” specialized functions to perform tasks like fetching data, running calculations, or creating visualizations.
For our sales data chatbot, tool calling enables the AI to request the generation of charts based on user queries. When a user asks for a sales trend or comparison, the model can produce a structured chart specification (like Vega-Lite JSON) and then call a chart-rendering tool to display the visualization directly in the app.
Let's create a tool called `generateRevenueChart`, which generates a pie chart for a given category. In `route.tsx`:
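Here is a sketch of the tool-calling version of the route, assuming AI SDK v4's `tool()` helper and a Zod schema for the parameter (`npm install zod` if it is not already present); how the revenue data is aggregated per category is an assumption:

```tsx
// app/api/ai/chat/route.tsx — sketch of the tool-calling route (assumed
// details; not the tutorial's exact code).
import { streamText, tool } from "ai";
import { z } from "zod";
import { openai } from "@/lib/openai";
import { sales } from "@/data/sales";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o-mini"),
    system:
      "You are an expert data analyst for our sales data. " +
      "When the user asks for a revenue breakdown, call the generateRevenueChart tool.",
    messages,
    tools: {
      generateRevenueChart: tool({
        description: "Generate pie-chart data showing revenue for a given sales category",
        parameters: z.object({
          category: z.string().describe("The sales category to chart, e.g. Electronics"),
        }),
        // Pull the sales JSON and turn it into { name, value } slices for the
        // pie chart; grouping by product within the category is an assumption.
        execute: async ({ category }) => {
          return sales
            .filter((row) => row.category === category)
            .map((row) => ({ name: row.product, value: row.revenue }));
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```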
Here we imported the `tool` helper from the Vercel AI SDK and created our tool with a `category` parameter of type string (a parameter is what we pass to the function the tool executes). The tool's function pulls the sales JSON and, using the passed category, generates the revenue data for our pie chart.
In the `components` folder, create a new file, `PieChart.tsx`, with the following content:
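A sketch of the chart component, assuming Recharts; the colors, sizing, and prop shape (`{ name, value }` slices, matching the tool sketch above) are illustrative:

```tsx
// app/components/PieChart.tsx — sketch of a Recharts pie chart (assumed
// styling; not the tutorial's exact code).
"use client";

import {
  Cell,
  Legend,
  Pie,
  PieChart as RechartsPieChart,
  ResponsiveContainer,
  Tooltip,
} from "recharts";

type Slice = { name: string; value: number };

const COLORS = ["#6366f1", "#22c55e", "#f59e0b", "#ef4444", "#06b6d4"];

export default function PieChart({ data }: { data: Slice[] }) {
  return (
    <div className="h-72 w-full">
      <ResponsiveContainer>
        <RechartsPieChart>
          <Pie data={data} dataKey="value" nameKey="name" outerRadius={100} label>
            {data.map((entry, index) => (
              <Cell key={entry.name} fill={COLORS[index % COLORS.length]} />
            ))}
          </Pie>
          <Tooltip />
          <Legend />
        </RechartsPieChart>
      </ResponsiveContainer>
    </div>
  );
}
```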
Since we are using Recharts, let's install it with `npm install recharts`.
When our tool gets invoked, the response part's type will be `tool-invocation` instead of `text`. Let's add a case for that, so that when the LLM response contains a tool invocation, we take the tool result for the revenue chart and pass it to our `PieChart` component.
Let's change the `ChatBot.tsx` content to the following:
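A sketch of the tool-aware component, assuming AI SDK v4 message `parts`; the branch on `state === "result"` is where the finished tool call's output reaches the UI (markup and styling are illustrative):

```tsx
// app/components/ChatBot.tsx — sketch of the tool-aware chat UI (assumed
// details; not the tutorial's exact code).
"use client";

import { useChat } from "ai/react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { PaperAirplaneIcon } from "@heroicons/react/24/solid";
import PieChart from "./PieChart";

export default function ChatBot() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/ai/chat",
  });

  return (
    <div className="flex h-full flex-col gap-4 rounded-lg border p-4">
      <div className="flex-1 space-y-3 overflow-y-auto">
        {messages.map((message) => (
          <div key={message.id}>
            {message.parts?.map((part, index) => {
              switch (part.type) {
                case "text":
                  // Plain answers are rendered as Markdown, as before.
                  return (
                    <ReactMarkdown key={index} remarkPlugins={[remarkGfm]}>
                      {part.text}
                    </ReactMarkdown>
                  );
                case "tool-invocation":
                  // Once generateRevenueChart has finished, its result is the
                  // slice data we hand straight to the PieChart component.
                  if (
                    part.toolInvocation.toolName === "generateRevenueChart" &&
                    part.toolInvocation.state === "result"
                  ) {
                    return <PieChart key={index} data={part.toolInvocation.result} />;
                  }
                  return null;
                default:
                  return null;
              }
            })}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about your sales data..."
          className="flex-1 rounded border px-3 py-2"
        />
        <button type="submit" className="rounded bg-black p-2 text-white" aria-label="Send">
          <PaperAirplaneIcon className="h-5 w-5" />
        </button>
      </form>
    </div>
  );
}
```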
Throughout this tutorial, we’ve seen how combining Next.js with OpenAI’s powerful language models enables us to build an intuitive, AI-driven data analysis chatbot. From setting up the project and integrating sample sales data to crafting effective prompts and leveraging tool calling for dynamic chart generation, each step contributes to a seamless user experience. This approach not only simplifies complex data querying but also enhances decision-making by delivering clear, actionable insights in real time.
Building an LLM-driven data analysis chatbot with Next.js and OpenAI unlocks a new level of accessibility and efficiency in working with complex datasets. By harnessing prompt engineering and tool calling, we empower users to interact with data naturally, asking questions in plain language and receiving insightful answers along with visualizations. This method reduces reliance on specialized technical skills and accelerates data-driven decisions.
As AI continues to evolve, integrating such intelligent chatbots into your applications can transform how your teams explore and understand data. Whether you’re analyzing sales, marketing, or operational metrics, the combination of Next.js and LLMs offers a scalable, flexible foundation to build smarter analytics tools.
Ready to take your data analysis capabilities further? Experiment with expanding your chatbot’s dataset, refining prompts, or adding new tools to generate richer insights. The future of AI-powered analytics is at your fingertips.
You can find the project repo here