Combining AI With React For A Smarter Frontend - The New Stack



Combining AI with React for a Smarter Frontend


Jesse Hall, senior developer advocate with MongoDB, explained the building blocks for
integrating artificial intelligence into React apps.
Nov 21st, 2023 12:30pm by Loraine Lawson

Image via Unsplash


Frontend development will have to incorporate artificial intelligence sooner rather than later. The burning questions, though, are what that even looks like and whether it must be a chatbot.

“Almost every application going forward is going to use AI in some capacity, AI is going
to wait for no one,” said Jesse Hall, a senior developer advocate at MongoDB, during
last week’s second virtual day of React Summit US. “In order to stay competitive, we
need to build intelligence into our applications in order to gain rich insights from our
data.”

A Tech Stack for React AI Apps


First, developers can take custom data — images, blogs, videos, articles, PDFs, whatever — and generate embeddings using an embedding model, then store those embeddings in a vector database. This doesn’t require LangChain, though LangChain can be helpful in facilitating the process, he added. Once the embeddings are created, it’s possible to accept natural language queries to find relevant information from that custom data, he explained.
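That first step, embedding the data and storing the vectors, can be sketched in a few lines of TypeScript. The toy `embed` function below is a stand-in for a real embedding model (such as one from OpenAI or Hugging Face), and the in-memory array stands in for a vector database; all names and the fixed vocabulary are illustrative:

```typescript
// Stand-in embedder: counts occurrences of a tiny fixed vocabulary.
// A real system would call an embedding model here instead.
const VOCAB = ["react", "vector", "database", "chatbot", "inventory", "search"];

function embed(text: string): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return VOCAB.map((term) => words.filter((w) => w === term).length);
}

interface StoredDoc {
  text: string;
  embedding: number[];
}

// In-memory stand-in for a vector database.
const vectorStore: StoredDoc[] = [];

function addDocument(text: string): void {
  vectorStore.push({ text, embedding: embed(text) });
}

addDocument("A vector database stores embeddings for semantic search.");
addDocument("React renders UI components on the frontend.");
```

Swapping the toy embedder for a real model and the array for a managed store is the only conceptual change in a production pipeline.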


MongoDB Senior Developer Advocate Jesse Hall explains the RAG workflow.
“We send the user’s natural language query to an LLM, which vectorizes the query, then
we use vector search to find information that is closely related — semantically related
— to the user’s query, and then we return those results,” Hall said.

For example, the results might provide a text summary or links to specific document
pages, he added.
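In code, that query path reduces to a similarity ranking. A minimal TypeScript sketch, with a toy word-count embedder standing in for a real embedding model and an in-memory array standing in for a vector database (all names and documents are illustrative):

```typescript
// Toy embedder: bag-of-words over a tiny fixed vocabulary.
// A real system would vectorize the query with an embedding model.
const TERMS = ["inventory", "product", "shipping", "refund", "react"];

function embed(text: string): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return TERMS.map((t) => words.filter((w) => w === t).length);
}

// Cosine similarity: how closely two vectors point in the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  const denom = norm(a) * norm(b);
  return denom === 0 ? 0 : dot / denom;
}

const documents = [
  "Current product inventory is updated every hour.",
  "Refunds are processed within five business days.",
  "Shipping takes three to seven days.",
];

// "Vector search": rank stored documents by similarity to the query vector.
function vectorSearch(query: string, topK = 1): string[] {
  const queryVec = embed(query);
  return [...documents]
    .sort(
      (a, b) =>
        cosineSimilarity(embed(b), queryVec) -
        cosineSimilarity(embed(a), queryVec)
    )
    .slice(0, topK);
}

const results = vectorSearch("What products are in inventory?");
```

The inventory document ranks first because its vector is closest to the query's, which is the semantic-relatedness behavior Hall describes, here approximated with word counts.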

“Imagine your React app has an intelligent chatbot with RAG [Retrieval Augmented
Generation] and vector embeddings. This chatbot could pull in real-time data, maybe
the latest product inventory, and offer it during a customer service interaction, [using]
RAG and vector embeddings,” he said. “Your React app isn’t just smart, it’s adaptable,
real-time and incredibly context-aware.”

To put a tech stack around that, he suggested developers could use Next.js version 13.5 with its App Router, then connect to OpenAI’s GPT-3.5 Turbo or GPT-4 models. LangChain could then be a crucial part of the stack because it helps with data pre-processing, routing data to the proper storage, and making the AI part of the app more efficient, he said. He also suggested using Vercel’s AI SDK, an open source library designed to build conversational, streaming user interfaces.
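The pre-processing LangChain handles typically includes splitting documents into overlapping chunks before embedding them, so that no single vector has to represent too much text. A hand-rolled sketch of that idea (this is not LangChain's actual API, and the sizes are illustrative):

```typescript
// Minimal sketch of the kind of text splitting done before embedding:
// fixed-size chunks with overlap so context isn't lost at boundaries.
function chunkText(text: string, chunkSize = 50, overlap = 10): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // step forward, keeping `overlap` chars shared
  }
  return chunks;
}

const doc = "a".repeat(120);       // 120-character placeholder document
const chunks = chunkText(doc);     // -> three chunks: 50, 50, and 40 chars
```

Each chunk would then be embedded and stored individually, so a query can retrieve just the relevant passage rather than the whole document.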


Then, not surprisingly for a MongoDB developer advocate, he suggested leveraging MongoDB to store the vector embeddings and MongoDB Atlas Vector Search to query them.
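A query against Atlas Vector Search runs as a stage in an aggregation pipeline. A sketch of what that stage looks like, assuming an index named `vector_index` and documents with an `embedding` field (both names, and the query vector, are assumptions for illustration); the stage would be passed to `collection.aggregate([...])` via the MongoDB Node.js driver:

```typescript
// Sketch of an Atlas Vector Search aggregation stage.
const queryVector: number[] = [0.12, -0.53, 0.88]; // produced by an embedding model

const vectorSearchStage = {
  $vectorSearch: {
    index: "vector_index", // name of the Atlas vector search index (assumed)
    path: "embedding",     // document field holding the stored embedding (assumed)
    queryVector,           // the vectorized user query
    numCandidates: 100,    // candidates considered before final ranking
    limit: 5,              // number of results returned
  },
};
```

Because the stage lives in the same aggregation framework as the rest of MongoDB's query language, the results can be filtered, projected, or joined in subsequent stages.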

“It’s a game changer for AI applications, enabling us to provide a more contextual and meaningful user experience by storing our vector embeddings directly in our application database, instead of bolting on yet another external service,” he said. “And it’s not just vector search. MongoDB Atlas itself brings a new level of power to our generative AI capabilities.”

When combined, this technology stack would enable smarter, more powerful React
applications, he said.

“Remember, the future is not just about smarter AI, but also about how well it’s
integrated into user-centric platforms like your next React-based project,” Hall said.

How to Approach GPTs


Hall, who also creates the YouTube show codeSTACKr, broke down the terms and technology that developers need in order to incorporate artificial intelligence into their React applications, starting with what to do with generative pre-trained transformers (GPTs).

“It’s not merely about leveraging the power of GPT in React. It’s about taking your
React applications to the next level by making them intelligent and context-aware,”
Hall said. “We’re not just integrating AI into React, we’re optimizing it to be as smart
and context-aware as possible.”

There’s a huge demand for building intelligence into applications and making faster, more personalized experiences for users, he added. Smarter apps will use AI-powered models to take action autonomously for the user. That could look like a chatbot, but it could also look like personalized recommendations or fraud detection.

The results will be two-fold, Hall said.

“First, your apps drive competitive advantage by deepening user engagement and
satisfaction as they interact with your application,” he explained. “Second, your apps
unlock higher efficiency and profitability by making intelligent decisions faster on
fresher, more accurate data.”

AI will be used to power the user-facing aspects of applications, but it will also lead to
“fresh data and insights” from those interactions, which in turn will power a more
efficient business decision model, he said.

GPTs, Meet React


Drilling down on GPTs, aka large language models, he noted that GPTs are not perfect.

“One of their key limitations is their static knowledge base,” he said. “They only know
what they’ve been trained on. There are integrations with some models now that can
search the internet for newer information. But how do we know that the information
that they’re finding on the internet is accurate? They can hallucinate very confidently, I
might add. So how can we minimize this?”

The models can be made to be real-time, adaptable and more aligned with specific
needs by using React, large language models and RAG, he explained.

“We’re not just integrating AI into React, we’re optimizing it to be as smart and
context-aware as possible,” he said.

He explained what’s involved with RAG, starting with vectors. Vectors are the building
blocks that allow developers to represent complex multidimensional data in a format
that’s easy to manipulate and understand. Sometimes, vectors are referred to as vector embeddings, or simply embeddings.

“Now the simplest explanation is a vector is a numerical representation of data: an array of numbers. And these numbers are coordinates in an n-dimensional space, where n is the array length. So however many numbers we have in the array is how many dimensions we have,” he explained.

For example, video games use 2D and 3D coordinates to know where objects are in the game’s world. But what makes vectors important in AI is that they enable semantic search, he said.

“In simpler terms, they let us find information that is contextually relevant, not just a
keyword search,” Hall said. “And the data source is not just limited to text. It can also
be images, video, or audio — these can all be converted to vectors.”

So step one would be creating vectors, and the way to do that is through an encoder.
Encoders define how the information is organized in the virtual space, and there are
different types of encoders that can organize vectors in different ways, Hall explained.
For example, there are encoders for text, audio, images, etc. Most of the popular encoders can be found on Hugging Face or OpenAI, he added.

Finally, RAG comes into play. RAG is “an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information and to give users insight into LLMs’ generative process,” according to IBM.

It does so by bringing together generative models with vector databases and LangChain.

“RAG leverages vectors to pull in real-time, context-relevant data and to augment the
capabilities of an LLM,” Hall explained. “Vector search capabilities can augment the
performance and accuracy of GPT models by providing a memory or a ground truth to
reduce hallucinations, provide up-to-date information, and allow access to private
data.”
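The grounding step Hall describes amounts to injecting the retrieved passages into the model's prompt, so the LLM answers from fresh, private data instead of its static training set. A minimal sketch of that augmentation (the prompt wording and document text are illustrative):

```typescript
// Minimal sketch of the augmentation step in RAG: retrieved passages are
// injected into the prompt so the LLM is grounded in them rather than
// relying solely on its static training data.
function buildGroundedPrompt(question: string, retrieved: string[]): string {
  const context = retrieved.map((p, i) => `[${i + 1}] ${p}`).join("\n");
  return (
    "Answer using only the context below. " +
    "If the answer is not in the context, say you don't know.\n\n" +
    `Context:\n${context}\n\nQuestion: ${question}`
  );
}

const prompt = buildGroundedPrompt("How many widgets are in stock?", [
  "Inventory report: 42 widgets in stock as of today.",
]);
```

The assembled prompt would then be sent to the LLM; the instruction to answer only from the supplied context is what reduces confident hallucination.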

Loraine Lawson is a veteran technology reporter who has covered technology issues from data
integration to security for 25 years. Before joining The New Stack, she served as the editor of the
banking technology site, Bank Automation News. She has...


MongoDB is a sponsor of The New Stack.
