The Buffer Cache
The whole idea behind the buffer cache is to relieve processes from having to wait for relatively slow disks to retrieve or store data. It would therefore be counterproductive to flush large amounts of cached data to disk in a single burst; instead, dirty data is written back piecemeal at regular intervals, so that I/O operations have a minimal impact on the speed of user processes and on the response time experienced by human users.
The kernel maintains a good deal of information about each buffer to help it pace the writes, including a “dirty” bit indicating that the buffer has been modified in memory and must eventually be written to disk, and a timestamp indicating how long the buffer may be kept in memory before it must be flushed. This per-buffer information is kept in buffer heads (introduced in the previous chapter), so these descriptors must be maintained along with the buffers of user data themselves.
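The exact layout of these descriptors is covered in the next section. As a rough illustration of the bookkeeping and pacing involved, the sketch below shows a simplified descriptor carrying a dirty flag and a flush deadline, together with a loop that writes back only the buffers whose deadline has expired. This is a minimal user-space sketch, not the kernel's buffer_head definition; the names simple_buffer, write_block(), and flush_expired() are invented for illustration.

    #include <stdio.h>
    #include <time.h>

    /* Simplified stand-in for a buffer descriptor; the real buffer head
     * carries many more fields (device, size, wait queue, and so on). */
    struct simple_buffer {
        unsigned long block;      /* block number on disk             */
        int           dirty;      /* 1 if modified in memory only     */
        time_t        flushtime;  /* earliest time to write it back   */
        char          data[512];  /* cached block contents            */
    };

    /* Invented helper: pretend to write one block to disk. */
    static void write_block(struct simple_buffer *b)
    {
        printf("writing block %lu\n", b->block);
        b->dirty = 0;             /* clean again once it reaches disk */
    }

    /* Pacing sketch: flush only dirty buffers whose deadline has passed,
     * so writes trickle out instead of being issued all at once. */
    static void flush_expired(struct simple_buffer *bufs, int n)
    {
        time_t now = time(NULL);
        for (int i = 0; i < n; i++)
            if (bufs[i].dirty && bufs[i].flushtime <= now)
                write_block(&bufs[i]);
    }

    int main(void)
    {
        struct simple_buffer bufs[2] = {
            { .block = 10, .dirty = 1, .flushtime = time(NULL) - 1  },
            { .block = 11, .dirty = 1, .flushtime = time(NULL) + 30 },
        };
        flush_expired(bufs, 2);   /* only block 10 is due for write-back */
        return 0;
    }

Calling flush_expired() periodically, as the sketch assumes, captures the essence of deferred write: a buffer modified many times in quick succession still costs only one disk write when its deadline arrives.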
The size of the buffer cache may vary. Page frames are allocated on demand when a new buffer is required and one is not available. When free memory becomes scarce, as we shall see in Chapter 16, buffers are released and the corresponding page frames are recycled.
The buffer cache consists of two kinds of data structures:
A set of buffer heads describing the buffers in the cache
A hash table to help the kernel quickly derive the buffer head that describes the buffer associated with a given pair of device and block numbers (a simplified lookup is sketched after this list)
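To make that lookup concrete, the following is a minimal sketch of a hash table keyed on the (device, block number) pair, in the spirit of the kernel's getblk()/get_hash_table() path but not a copy of it; the hash function, the table size, and the names find_buffer() and hash_table are invented for illustration.

    #include <stdio.h>

    #define HASH_SIZE 1024                   /* illustrative table size */

    struct buffer {
        unsigned int   dev;                  /* device identifier       */
        unsigned long  block;                /* block number on device  */
        struct buffer *hash_next;            /* collision chain         */
        /* ... data pointer and state flags omitted ... */
    };

    static struct buffer *hash_table[HASH_SIZE];

    /* Illustrative hash function: mix device and block numbers together. */
    static unsigned int hash(unsigned int dev, unsigned long block)
    {
        return (unsigned int)((dev ^ block) % HASH_SIZE);
    }

    /* Walk the collision chain of one bucket looking for an exact
     * (dev, block) match; NULL means the block is not in the cache. */
    static struct buffer *find_buffer(unsigned int dev, unsigned long block)
    {
        struct buffer *b;

        for (b = hash_table[hash(dev, block)]; b != NULL; b = b->hash_next)
            if (b->dev == dev && b->block == block)
                return b;
        return NULL;
    }

    int main(void)
    {
        static struct buffer b = { .dev = 3, .block = 42 };
        unsigned int h = hash(b.dev, b.block);

        b.hash_next   = hash_table[h];       /* insert at head of its bucket */
        hash_table[h] = &b;

        printf("block 42 cached: %s\n", find_buffer(3, 42) ? "yes" : "no");
        return 0;
    }

Hashing on the (device, block) pair lets the kernel decide in roughly constant time whether a requested block is already cached, rather than scanning every buffer head.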
Buffer Head Data Structures
As mentioned in Section 13.4.4, each buffer head is stored in a data structure of type buffer_head.