Video description
In this course, you will explore the development of Retrieval-Augmented Generation (RAG) applications using LlamaIndex and JavaScript. You'll start with an introduction to the course structure, prerequisites, and project goals. Initial sections focus on setting up the development environment, including configuring Node.js and obtaining OpenAI API keys to facilitate seamless interaction with LlamaIndex.
Next, you'll delve into LlamaIndex fundamentals, covering data ingestion, indexing, and querying. Through hands-on sessions, you'll build basic and custom RAG systems, query structured data, and interact with LlamaIndex using an Express API. These practical exercises will equip you with the skills to handle complex scenarios, such as querying PDF files and integrating multiple data sources.
The final sections focus on advanced topics, including managing data persistence and deploying production-ready applications. You'll learn to create a full-stack chatbot app with NextJS, utilizing the create-llama CLI for rapid setup and customization. By course end, you'll be able to build, customize, and deploy scalable RAG applications with confidence.
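The create-llama CLI mentioned above scaffolds the full-stack NextJS chatbot from a single command (a sketch; the CLI is interactive and its prompts and options may differ by version):

```shell
# Scaffold a full-stack LlamaIndex chatbot project (interactive prompts follow)
npx create-llama@latest
```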
What you will learn
- Create RAG systems with LlamaIndex and JavaScript.
- Set up and configure a full development environment for RAG apps.
- Implement data ingestion and indexing techniques using LlamaIndex.
- Build complex LlamaIndex queries with custom data loaders and engines.
- Develop a full-stack chatbot with NextJS, LlamaIndex, and OpenAI.
- Deploy scalable RAG systems with persistent data for production use.
Audience
This course is designed for developers with a solid foundation in JavaScript who want to explore RAG systems using LlamaIndex. Prior experience with Node.js and familiarity with APIs are recommended. If you're interested in building scalable, AI-powered applications, this course is for you.
About the Author
Paulo Dichone: A dedicated developer and educator in Android, Java, and Flutter, Paulo has empowered over 80,000 students globally with both soft and technical skills through his platform, Build Apps with Paulo. Holding a Computer Science degree and with extensive experience in mobile and web development, his passion lies in guiding learners to become proficient developers. Beyond his five years of online teaching, he cherishes family time, music, and travel, and aims to help developers from any background make an impact.
Table of contents
- Chapter 1 : Introduction
- Chapter 2 : Development Environment Setup
- Chapter 3 : LlamaIndex Deep Dive – Fundamentals
- Chapter 4 : LlamaIndex Deep Dive - Main Concepts and Data Loaders
  - LlamaIndex Core Concepts - Loaders Index
  - The Querying Stage - Overview
  - Querying Stage - ChatEngine and QueryEngine - Full Overview
  - Hands-on: Create a Custom RAG System with LlamaIndex
  - Hands-on: Structured Data Extraction
  - Hands-on: Querying a PDF File
  - Hands-on: Interacting with a RAG System Through an Express API - Full Hands-on
  - Summary
- Chapter 5 : Agents and Advanced Queries with LlamaIndex
- Chapter 6 : Persist Your Data - Production-Ready Techniques
- Chapter 7 : NextJS Full-Stack Web Application - Chatbot with One-Command Deployment
- Chapter 8 : Wrap up
Product information
- Title: Developing RAG Apps with LlamaIndex and JS
- Author(s): Paulo Dichone
- Release date: September 2024
- Publisher(s): Packt Publishing
- ISBN: 9781836646112