Answers: Generative AI as Learning Tool

A look at what O’Reilly is building with AI

By Mike Loukides
June 11, 2024


At O’Reilly, we’re not just building training materials about AI. We’re also using it to build new kinds of learning experiences. One of the ways we are putting AI to work is our update to Answers. Answers is a generative AI-powered feature that aims to answer questions in the flow of learning. It’s in every book, on-demand course, and video and will eventually be available across our entire learning platform. To see it, click the “Answers” icon (the last item in the list at the right side of the screen). 


Answers enables active learning: interacting with content by asking questions and getting answers rather than simply ingesting a stream from a book or video. If you’re solving a problem for work, it puts learning in the flow of work. It is natural to have questions while you’re working on something; those of us who remember hardcopy books also remember having a stack of books open upside down on our desks (to save the page) as we got deeper and deeper into researching a problem. Something similar happens online: you open so many tabs while searching for an answer that you can’t remember which is which. Why can’t you just ask a question and get an answer? Now you can.

Here are a few insights into the decisions that we made in the process of building Answers. Of course, everything is subject to change; that’s the first thing you need to realize before starting any AI project. This is unknown territory; everything is an experiment. You won’t know how people will use your application until you build it and deploy it; there are many questions about Answers for which we are still awaiting answers. It is important to be careful when deploying an AI application, but it’s also important to realize that all AI is experimental. 

The core of Answers was built through collaboration with a partner that provided the AI expertise. That's an important principle, especially for small companies: don't build by yourself when you can partner with others. It would have been very difficult for us to develop the expertise to build and train a model; it was much more effective to work with a company that already has that expertise. Your staff will still have plenty of decisions to make and problems to solve. At least for the first few products, leave the heavy AI lifting to someone else. Focus on understanding the problem you are solving. What are your specific use cases? What kinds of answers will your users expect? What kinds of answers do you want to deliver? Think about how the answers to those questions affect your business model.

If you build a chat-like service, you must think seriously about how it will be used: what kinds of prompts to expect and what kinds of answers to return. Answers places few restrictions on the questions you can ask. While most users think of O'Reilly as a resource for software developers and IT departments, our platform contains many other kinds of information. Answers is able to answer questions about topics like chemistry, biology, and climate change: anything that's on our platform. However, it differs from chat applications like ChatGPT in several ways. First, it's limited to questions and answers. Although it suggests follow-up questions, it's not conversational; each new question starts a new context. We believe that many companies experimenting with AI treat conversation as an end in itself rather than as a means to their users' ends, possibly with the goal of monopolizing their users' attention. We want our users to learn; we want our users to get on with solving their technical problems. Conversation for its own sake doesn't fit this use case. We want interactions to be short, direct, and to the point.
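To make "each new question starts a new context" concrete, here is a minimal sketch of a stateless question-and-answer call. The `LanguageModel` interface and `ask` function are our illustration, not O'Reilly's actual implementation:

```python
from typing import Protocol

class LanguageModel(Protocol):
    def generate(self, prompt: str) -> str: ...

def ask(question: str, model: LanguageModel) -> str:
    """Answer a single question; no prior turns are attached."""
    # A chat product would prepend the conversation history here.
    # Answers deliberately doesn't, so every question starts fresh.
    prompt = f"Answer the question concisely.\n\nQuestion: {question}"
    return model.generate(prompt)
```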

Limiting Answers to Q&A also minimizes abuse; it’s harder to lead an AI system “off the rails” when you’re limited to Q&A. (Honeycomb, one of the first companies to integrate ChatGPT into a software product, made a similar decision.) 

Unlike many AI-driven products, Answers will tell you when it genuinely doesn't have an answer. For example, if you ask it "Who won the World Series?" it will reply "I don't have enough information to answer this question." If you ask a question that it can't answer but on which our platform may have relevant information, it will point you to that information. This design decision was simple but surprisingly important. Very few AI systems will tell you that they can't answer a question, and that inability to admit ignorance is an important source of hallucinations, errors, and other kinds of misinformation. Most AI engines can't say "Sorry, I don't know." Ours can and will.
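O'Reilly hasn't published how this refusal is implemented. One common way to build it, sketched below under the assumption of a retrieval step that returns relevance scores (the retrieval pipeline itself is described later in this piece), is to refuse before generation when nothing relevant is found, and to instruct the model to refuse when the retrieved context doesn't contain the answer. The threshold value and function names are illustrative:

```python
REFUSAL = "I don't have enough information to answer this question."

def answer_or_refuse(question, passages, model, min_score=0.5):
    """passages: (text, relevance_score) pairs from a retrieval step."""
    relevant = [text for text, score in passages if score >= min_score]
    if not relevant:
        return REFUSAL  # nothing on the platform supports an answer
    context = "\n\n".join(relevant)
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        f"contain the answer, reply exactly: {REFUSAL}\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return model.generate(prompt)
```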

Answers are always attributed to specific content, which allows us to compensate our talent and our partner publishers. Designing the compensation plan was a significant part of the project. We are committed to treating authors fairly: we won't generate answers from their content without compensating them. When a user asks a question, Answers generates a short response and provides links to the resources from which it pulled the information. This data feeds our compensation model, which is designed to be revenue-neutral: it doesn't penalize our talent when we generate answers from their material.
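The data flow might look something like the sketch below: each answer carries the sources it drew on, and those sources feed the compensation model. The record shape here is our assumption; the actual compensation model is O'Reilly's own:

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    author: str
    url: str

@dataclass
class AttributedAnswer:
    text: str
    sources: list[Source]  # every answer links back to platform content

def record_usage(answer: AttributedAnswer, usage_log: list[dict]) -> None:
    """Append one attribution record per source for the royalty model."""
    for src in answer.sources:
        usage_log.append({"author": src.author, "title": src.title})
```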

The design of Answers is more complex than you might expect, and it's important for organizations starting an AI project to understand that "the simplest thing that might possibly work" probably won't work. From the start, we knew that we couldn't simply use a model like GPT or Gemini. In addition to being error-prone, such models have no mechanism for reporting what data they used to build an answer, data that we need as input to our compensation model. That requirement pushed us immediately toward the retrieval-augmented generation (RAG) pattern. With RAG, a program generates a prompt that includes both the question and the data needed to answer the question. That augmented prompt is sent to the language model, which provides an answer. We can compensate our talent because we know what data was used to build the answer.
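In outline, the pattern looks like the following sketch. The `retriever` and `model` interfaces are assumptions for illustration (O'Reilly's pipeline is more elaborate, as the next section describes):

```python
def rag_answer(question, retriever, model, top_k=5):
    """The RAG pattern: retrieve, augment the prompt, generate."""
    docs = retriever.search(question, top_k=top_k)  # platform content only
    context = "\n\n".join(doc.text for doc in docs)
    prompt = (
        "Use the context below to answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    answer = model.generate(prompt)
    # The same document list drives attribution: we know exactly which
    # sources contributed to the answer.
    return answer, [doc.id for doc in docs]
```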

Using RAG raises the question: Where do the documents come from? Another AI model, with access to a database of our platform's content, generates "candidate" documents. Yet another model ranks the candidates, selecting those that seem most useful, and a third reevaluates each candidate to ensure that it is actually relevant and useful. Finally, the selected documents are trimmed to minimize content that's unrelated to the question. This process serves two purposes: it minimizes hallucination, and it minimizes the context sent to the model that answers the question. The more context that's required, the longer it takes to get an answer, and the more it costs to run the model. Most of the models we use are small open source models. They're fast, effective, and inexpensive.
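The pipeline sketched below mirrors that description: candidate generation, ranking, relevance checking, and trimming. The stage interfaces (`search`, `rank`, `is_relevant`, `trim`) are our assumptions about how such stages might be wired together, not O'Reilly's code:

```python
def select_documents(question, candidate_model, ranker, checker,
                     n_candidates=50, n_final=5, max_chars=2000):
    """Four-stage document selection for a RAG prompt."""
    # Stage 1: pull a broad set of candidates from the content database.
    candidates = candidate_model.search(question, top_k=n_candidates)
    # Stage 2: rank the candidates by estimated usefulness.
    ranked = ranker.rank(question, candidates)
    # Stage 3: reevaluate each candidate, keeping only the genuinely relevant.
    relevant = [doc for doc in ranked if checker.is_relevant(question, doc)]
    # Stage 4: trim each document to the passages that bear on the question,
    # shrinking the context and with it the cost and latency of the answer.
    return [doc.trim(question, max_chars) for doc in relevant[:n_final]]
```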

In addition to minimizing hallucination and making it possible to attribute content to creators (and from there, assign royalties), this design makes it easy to add new content. We are constantly adding new content to the platform: thousands of items per year. With a model like GPT, adding content would require a lengthy and expensive training process. With RAG, adding content is trivial. When anything is added to the platform, it is added to the database from which relevant content is chosen. This process isn’t computationally intensive and can take place almost immediately—in real time, as it were. Answers never lags the rest of the platform. Users will never see “This model has only been trained on data through July 2023.”
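A sketch of why this is cheap: adding content means embedding it and inserting it into the retrieval index, with no retraining. The `embedder` and `index` interfaces here are assumptions, loosely modeled on common vector-database APIs:

```python
def split_into_chunks(text, max_words=400):
    """Naive whitespace chunking, for illustration only."""
    words = text.split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])

def add_to_platform(item, embedder, index):
    """Make a new book, course, or video retrievable immediately."""
    for n, chunk in enumerate(split_into_chunks(item.text)):
        vector = embedder.embed(chunk)
        index.upsert(id=f"{item.id}-{n}", vector=vector,
                     metadata={"text": chunk, "item": item.id})
    # No training run is involved: the item can be retrieved (and hence
    # cited in answers) as soon as it has been indexed.
```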

Answers is one product, but it’s only one piece of an ecosystem of tools that we’re building. All of these tools are designed to serve the learning experience: to help our users and our corporate clients develop the skills they need to stay relevant in a changing world. That’s the goal—and it’s also the key to building successful applications with generative AI. What is the real goal? It’s not to impress your customers with your AI expertise. It’s to solve some problem. In our case, that problem is helping students to acquire new skills more efficiently. Focus on that goal, not on the AI. The AI will be an important tool—maybe the most important tool. But it’s not an end in itself.
