Chapter 10. Data-Centric Scaling
Parts I and II of this book discussed hardware, software, and algorithmic techniques to scale your model development–related workloads. Part III focuses on data, design, processes, and other application-specific considerations needed to scale effectively. As discussed in Chapters 1 and 2, data has fueled the success of deep learning for over two decades, and there is a long-held belief that increasing the size of the training dataset will continue to improve model performance.1 It has been said that data is the oil of the 21st century, and much like oil, data can fuel innovation when it is prepared and used with care. That careful preparation is a real challenge, as confirmed by the 2023 State of AI Infrastructure Survey,2 which ranks data among the top three development challenges organizations face (along with infrastructure and compute). According to this survey, two out of five AI-practicing organizations identify data as the biggest issue in AI development.
The importance of data curation is evident in the success of ChatGPT, which owes much of its influence to the careful application of data curation techniques that ensure the quality of its results. This example is especially instructive because ChatGPT largely reused the neural network innovations already introduced with InstructGPT; its most extensive innovations lay in how the data was curated and applied.
In this chapter, you will learn about ...