Azure OpenAI

Adding AI to Your Applications

Published by O'Reilly Media, Inc.

Content level: Intermediate

Design, develop, and productionize AI-powered apps

Course outcomes

  • Learn how to develop AI-based applications
  • Understand how to integrate cloud-deployed LLMs into software applications
  • Explore deploying LLMs with APIs for public consumption

Course description

Join expert Madhusudhan Konda to demystify the process of developing AI-powered applications from the ground up. You’ll start from a basic understanding of LLMs and fundamentals of generative AI and advance to topics such as RAG patterns, vector search, and cloud deployments. You’ll see how to design, develop, and productionize AI-powered apps and deploy them to the cloud.

What you’ll learn and how you can apply it

  • Understand the key fundamentals of generative AI and large language models
  • Learn how to design and develop AI-powered applications using frameworks such as LangChain and Semantic Kernel with popular LLMs
  • Explore methods for productionizing LLM-enabled applications

This live event is for you because...

  • You’re a software engineer who’s looking to develop AI-powered applications or integrate LLMs into your traditional software development lifecycle.
  • You’re a solution architect who wants to design LLM-integrated application architectures.
  • You’re an engineering manager who wants to provide guidance to your teams on how to integrate LLMs into their applications.

Prerequisites

  • Basic understanding of generative AI, GPT models, and cloud deployments
  • Some familiarity with software development lifecycle (any knowledge of microservices is helpful)
  • An understanding of LLM landscape and basics of Python or Java programming (helpful but not required)


Schedule

The time frames are only estimates and may vary according to how the class is progressing.

Introducing LLMs and GenAI (60 minutes)

  • Presentation: Setting the scene of LLMs and generative AI; tools and frameworks (LangChain, Semantic Kernel, Ollama, and Python/Jupyter)
  • Group discussion: GPT, Llama 2, Mistral, and other models
  • Demonstration: LangChain
  • Q&A
  • Break
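The LangChain demonstration drives a cloud-hosted LLM through the framework; under the hood this boils down to an OpenAI-style chat-completions request. As a library-free sketch of that underlying call (the endpoint, model name, and `OPENAI_API_KEY` variable are standard OpenAI conventions, but the specific values here are illustrative assumptions, not the course's actual code):

```python
import json
import os
import urllib.request


def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }


def send_chat_request(payload: dict, base_url: str = "https://api.openai.com/v1") -> str:
    """POST the payload to the chat-completions endpoint.

    Requires OPENAI_API_KEY in the environment and network access;
    frameworks like LangChain wrap this call (plus retries, streaming,
    and output parsing) behind a higher-level interface.
    """
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Build the payload locally; the network call is only made when
# send_chat_request is invoked with a valid API key.
payload = build_chat_request("Summarize what an LLM is in one sentence.")
print(payload["model"])  # gpt-4o-mini
```

Seeing the raw request makes it easier to evaluate what LangChain or Semantic Kernel adds on top: prompt templating, model abstraction, and chain composition rather than anything magical at the wire level.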

Designing and developing applications (60 minutes)

  • Presentation: Developing AI-powered applications; running local LLMs
  • Demonstration: Creating “MyGPT” chat application; LLMs on localhost using downloaded models and Ollama
  • Q&A
  • Break
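The “MyGPT” demonstration talks to a model served locally by Ollama, whose REST API listens on localhost:11434 by default. A minimal sketch of a chat client that maintains conversation history across turns (the model name is an illustrative assumption; any model pulled with `ollama pull` works):

```python
import json
import urllib.request

# Ollama's default chat endpoint when `ollama serve` is running locally
OLLAMA_URL = "http://localhost:11434/api/chat"


def add_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Return a new history list with one message appended."""
    return history + [{"role": role, "content": content}]


def chat_once(history: list[dict], model: str = "llama2") -> str:
    """Send the full conversation history to the local Ollama server.

    Requires a running Ollama instance; with stream=False the server
    returns a single JSON object whose "message" field holds the reply.
    """
    payload = {"model": model, "messages": history, "stream": False}
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


# Build a two-turn history without touching the network.
history = add_turn([], "user", "What is a vector database?")
history = add_turn(history, "assistant", "(model reply would go here)")
print(len(history))  # 2
```

Because the whole history is resent on every call, the model sees prior turns as context; trimming or summarizing old turns is the usual way to stay inside the context window.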

Productionizing AI apps (60 minutes)

  • Presentation: Integrating LLMs into software development
  • Demonstration: Developing and deploying dockerized LLMs; integrating apps via APIs; deploying containerized LLM APIs to clouds; productionizing AI-powered applications
  • Q&A
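Containerizing the model-facing API is the heart of the deployment demonstration. A hedged Dockerfile sketch for a Python API service (the file names, port, and `app.py` entry point are illustrative assumptions, not the course's actual files):

```dockerfile
# Lightweight Python base image
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (e.g. a thin API wrapper around the LLM)
COPY app.py .

# Expose the API port and start the server
EXPOSE 8000
CMD ["python", "app.py"]
```

A typical local workflow would then be `docker build -t llm-api .` followed by `docker run -p 8000:8000 llm-api`; the same image can be pushed to a registry and run on a cloud container service.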

Your Instructor

  • Madhusudhan Konda

    Madhusudhan Konda is a passionate technologist and a lifelong tech learner who loves distilling complex problems into simpler solutions, looking at the big picture and providing technical direction, and experimenting with newer programming languages and shiny frameworks. Over his career, Madhusudhan has held roles such as solution architect, development lead, and lead developer, always with a strong inclination toward teaching his fellow techies programming languages, frameworks, and new technologies. He’s been instrumental in delivering high-quality solutions to major clients such as EBRD, Credit Suisse, UBS, Mizuho, Deutsche Bank, Halifax, British Petroleum, British Airways, and Lloyd’s of London, to name a few. His core competencies lie not only in creating simple architectures for complex business problems and designing and developing software projects from the ground up but also in providing strategic road maps, cost-effective architectures, and product designs, leading teams, mentoring, and providing thought leadership. He’s written well-received books and produced video courses on Java, Spring, and the Hibernate ecosystem.
