Chapter 9. LLM Workflows
Classic machine learning models were typically competent at only one skill in one domain—sentiment analysis of tweets, fraud detection from credit card transactions, translation from English to French, and the like. With the advent of GPT models, a single model can now perform an enormous variety of tasks from seemingly any domain.
But even though model quality has improved tremendously since GPT-2, we are nowhere near the point of creating artificial general intelligence (AGI), an AI that meets or exceeds human-level cognition. When we do create AGI, it will have the ability to assimilate knowledge, reason about it, solve novel and complex problems, and even generate new knowledge. AGI will use humanlike creativity to address real-world problems in any domain.
In contrast, today’s LLMs show marked deficiencies in reasoning and problem-solving and are especially bad at mathematics, a critical component of scientific discovery. The text they generate demonstrates a vast understanding of existing knowledge, but it rarely introduces anything new. And outside of training, these models are incapable of learning new information. Future AGI, by definition, will possess both strength (the ability to solve complex problems) and generality (the ability to solve problems in any domain). But with current LLMs, there seems to be a trade-off between these two aspects of intelligence (see Figure 9-1).
At one end of the spectrum is a conversational agent, ...