Compilers: Principles, Techniques, and Tools, Updated 2nd Edition by Pearson

Book description

Pearson’s flagship title Compilers: Principles, Techniques, and Tools, known to professors, students, and developers worldwide as the "Dragon Book," is available in a new edition that reflects the current state of compiler technology. This book provides the foundation for understanding the theory and practice of compilers.

Revised and updated with new chapters on Programming Language Semantics and Undefined Behaviour Semantics, this edition addresses modern issues in compiler design. The authors, recognizing that few readers will ever go on to construct a compiler, nevertheless retain their focus on the broader set of problems faced in software design and software development.

Table of contents

  1. Cover
  2. About Pearson
  3. Title Page
  4. Contents
  5. Preface
  6. About the Authors
  7. Chapter 1: Introduction
    1. 1.1 Language Processors
    2. 1.2 The Structure of a Compiler
      1. 1.2.1 Lexical Analysis
      2. 1.2.2 Syntax Analysis
      3. 1.2.3 Semantic Analysis
      4. 1.2.4 Intermediate Code Generation
      5. 1.2.5 Code Optimization
      6. 1.2.6 Code Generation
      7. 1.2.7 Symbol-Table Management
      8. 1.2.8 The Grouping of Phases into Passes
      9. 1.2.9 Compiler-Construction Tools
    3. 1.3 The Evolution of Programming Languages
      1. 1.3.1 The Move to Higher-level Languages
      2. 1.3.2 Impacts on Compilers
    4. 1.4 The Science of Building a Compiler
      1. 1.4.1 Modeling in Compiler Design and Implementation
      2. 1.4.2 The Science of Code Optimization
    5. 1.5 Applications of Compiler Technology
      1. 1.5.1 Implementation of High-Level Programming Languages
      2. 1.5.2 Optimizations for Computer Architectures
      3. 1.5.3 Design of New Computer Architectures
      4. 1.5.4 Program Translations
      5. 1.5.5 Software Productivity Tools
    6. 1.6 Programming Language Basics
      1. 1.6.1 The Static/Dynamic Distinction
      2. 1.6.2 Environments and States
      3. 1.6.3 Static Scope and Block Structure
      4. 1.6.4 Explicit Access Control
      5. 1.6.5 Dynamic Scope
      6. 1.6.6 Parameter Passing Mechanisms
      7. 1.6.7 Aliasing
    7. 1.7 Summary of Chapter 1
    8. 1.8 References for Chapter 1
  8. Chapter 2: A Simple Syntax-Directed Translator
    1. 2.1 Introduction
    2. 2.2 Syntax Definition
      1. 2.2.1 Definition of Grammars
      2. 2.2.2 Derivations
      3. 2.2.3 Parse Trees
      4. 2.2.4 Ambiguity
      5. 2.2.5 Associativity of Operators
      6. 2.2.6 Precedence of Operators
    3. 2.3 Syntax-Directed Translation
      1. 2.3.1 Postfix Notation
      2. 2.3.2 Synthesized Attributes
      3. 2.3.3 Simple Syntax-Directed Definitions
      4. 2.3.4 Tree Traversals
      5. 2.3.5 Translation Schemes
    4. 2.4 Parsing
      1. 2.4.1 Top-Down Parsing
      2. 2.4.2 Predictive Parsing
      3. 2.4.3 When to Use ϵ-Productions
      4. 2.4.4 Designing a Predictive Parser
      5. 2.4.5 Left Recursion
    5. 2.5 A Translator for Simple Expressions
      1. 2.5.1 Abstract and Concrete Syntax
      2. 2.5.2 Adapting the Translation Scheme
      3. 2.5.3 Procedures for the Nonterminals
      4. 2.5.4 Simplifying the Translator
      5. 2.5.5 The Complete Program
    6. 2.6 Lexical Analysis
      1. 2.6.1 Removal of White Space and Comments
      2. 2.6.2 Reading Ahead
      3. 2.6.3 Constants
      4. 2.6.4 Recognizing Keywords and Identifiers
      5. 2.6.5 A Lexical Analyzer
    7. 2.7 Symbol Tables
      1. 2.7.1 Symbol Table Per Scope
      2. 2.7.2 The Use of Symbol Tables
    8. 2.8 Intermediate Code Generation
      1. 2.8.1 Two Kinds of Intermediate Representations
      2. 2.8.2 Construction of Syntax Trees
      3. 2.8.3 Static Checking
      4. 2.8.4 Three-Address Code
    9. 2.9 Summary of Chapter 2
  9. Chapter 3: Lexical Analysis
    1. 3.1 The Role of the Lexical Analyzer
      1. 3.1.1 Lexical Analysis Versus Parsing
      2. 3.1.2 Tokens, Patterns, and Lexemes
      3. 3.1.3 Attributes for Tokens
      4. 3.1.4 Lexical Errors
    2. 3.2 Input Buffering
      1. 3.2.1 Buffer Pairs
      2. 3.2.2 Sentinels
    3. 3.3 Specification of Tokens
      1. 3.3.1 Strings and Languages
      2. 3.3.2 Operations on Languages
      3. 3.3.3 Regular Expressions
      4. 3.3.4 Regular Definitions
      5. 3.3.5 Extensions of Regular Expressions
    4. 3.4 Recognition of Tokens
      1. 3.4.1 Transition Diagrams
      2. 3.4.2 Recognition of Reserved Words and Identifiers
      3. 3.4.3 Completion of the Running Example
      4. 3.4.4 Architecture of a Transition-Diagram-Based Lexical Analyzer
    5. 3.5 The Lexical-Analyzer Generator Lex
      1. 3.5.1 Use of Lex
      2. 3.5.2 Structure of Lex Programs
      3. 3.5.3 Conflict Resolution in Lex
      4. 3.5.4 The Lookahead Operator
    6. 3.6 Finite Automata
      1. 3.6.1 Nondeterministic Finite Automata
      2. 3.6.2 Transition Tables
      3. 3.6.3 Acceptance of Input Strings by Automata
      4. 3.6.4 Deterministic Finite Automata
    7. 3.7 From Regular Expressions to Automata
      1. 3.7.1 Conversion of an NFA to a DFA
      2. 3.7.2 Simulation of an NFA
      3. 3.7.3 Efficiency of NFA Simulation
      4. 3.7.4 Construction of an NFA from a Regular Expression
      5. 3.7.5 Efficiency of String-Processing Algorithms
    8. 3.8 Design of a Lexical-Analyzer Generator
      1. 3.8.1 The Structure of the Generated Analyzer
      2. 3.8.2 Pattern Matching Based on NFA’s
      3. 3.8.3 DFA’s for Lexical Analyzers
      4. 3.8.4 Implementing the Lookahead Operator
    9. 3.9 Optimization of DFA-Based Pattern Matchers
      1. 3.9.1 Important States of an NFA
      2. 3.9.2 Functions Computed From the Syntax Tree
      3. 3.9.3 Computing nullable, firstpos, and lastpos
      4. 3.9.4 Computing followpos
      5. 3.9.5 Converting a Regular Expression Directly to a DFA
      6. 3.9.6 Minimizing the Number of States of a DFA
      7. 3.9.7 State Minimization in Lexical Analyzers
      8. 3.9.8 Trading Time for Space in DFA Simulation
    10. 3.10 Summary of Chapter 3
    11. 3.11 References for Chapter 3
  10. Chapter 4: Syntax Analysis
    1. 4.1 Introduction
      1. 4.1.1 The Role of the Parser
      2. 4.1.2 Representative Grammars
      3. 4.1.3 Syntax Error Handling
      4. 4.1.4 Error-Recovery Strategies
    2. 4.2 Context-Free Grammars
      1. 4.2.1 The Formal Definition of a Context-Free Grammar
      2. 4.2.2 Notational Conventions
      3. 4.2.3 Derivations
      4. 4.2.4 Parse Trees and Derivations
      5. 4.2.5 Ambiguity
      6. 4.2.6 Verifying the Language Generated by a Grammar
      7. 4.2.7 Context-Free Grammars Versus Regular Expressions
    3. 4.3 Writing a Grammar
      1. 4.3.1 Lexical Versus Syntactic Analysis
      2. 4.3.2 Eliminating Ambiguity
      3. 4.3.3 Elimination of Left Recursion
      4. 4.3.4 Left Factoring
      5. 4.3.5 Non-Context-Free Language Constructs
    4. 4.4 Top-Down Parsing
      1. 4.4.1 Recursive-Descent Parsing
      2. 4.4.2 FIRST and FOLLOW
      3. 4.4.3 LL(1) Grammars
      4. 4.4.4 Nonrecursive Predictive Parsing
      5. 4.4.5 Error Recovery in Predictive Parsing
    5. 4.5 Bottom-Up Parsing
      1. 4.5.1 Reductions
      2. 4.5.2 Handle Pruning
      3. 4.5.3 Shift-Reduce Parsing
      4. 4.5.4 Conflicts During Shift-Reduce Parsing
    6. 4.6 Introduction to LR Parsing: Simple LR
      1. 4.6.1 Why LR Parsers?
      2. 4.6.2 Items and the LR(0) Automaton
      3. 4.6.3 The LR-Parsing Algorithm
      4. 4.6.4 Constructing SLR-Parsing Tables
      5. 4.6.5 Viable Prefixes
    7. 4.7 More Powerful LR Parsers
      1. 4.7.1 Canonical LR(1) Items
      2. 4.7.2 Constructing LR(1) Sets of Items
      3. 4.7.3 Canonical LR(1) Parsing Tables
      4. 4.7.4 Constructing LALR Parsing Tables
      5. 4.7.5 Efficient Construction of LALR Parsing Tables
      6. 4.7.6 Compaction of LR Parsing Tables
    8. 4.8 Using Ambiguous Grammars
      1. 4.8.1 Precedence and Associativity to Resolve Conflicts
      2. 4.8.2 The “Dangling-Else” Ambiguity
      3. 4.8.3 Error Recovery in LR Parsing
    9. 4.9 Parser Generators
      1. 4.9.1 The Parser Generator Yacc
      2. 4.9.2 Using Yacc with Ambiguous Grammars
      3. 4.9.3 Creating Yacc Lexical Analyzers with Lex
      4. 4.9.4 Error Recovery in Yacc
    10. 4.10 Summary of Chapter 4
    11. 4.11 References for Chapter 4
  11. Chapter 5: Syntax-Directed Translation
    1. 5.1 Syntax-Directed Definitions
      1. 5.1.1 Inherited and Synthesized Attributes
      2. 5.1.2 Evaluating an SDD at the Nodes of a Parse Tree
    2. 5.2 Evaluation Orders for SDD’s
      1. 5.2.1 Dependency Graphs
      2. 5.2.2 Ordering the Evaluation of Attributes
      3. 5.2.3 S-Attributed Definitions
      4. 5.2.4 L-Attributed Definitions
      5. 5.2.5 Semantic Rules with Controlled Side Effects
    3. 5.3 Applications of Syntax-Directed Translation
      1. 5.3.1 Construction of Syntax Trees
      2. 5.3.2 The Structure of a Type
    4. 5.4 Syntax-Directed Translation Schemes
      1. 5.4.1 Postfix Translation Schemes
      2. 5.4.2 Parser-Stack Implementation of Postfix SDT’s
      3. 5.4.3 SDT’s With Actions Inside Productions
      4. 5.4.4 Eliminating Left Recursion From SDT’s
      5. 5.4.5 SDT’s for L-Attributed Definitions
    5. 5.5 Implementing L-Attributed SDD’s
      1. 5.5.1 Translation During Recursive-Descent Parsing
      2. 5.5.2 On-The-Fly Code Generation
      3. 5.5.3 L-Attributed SDD’s and LL Parsing
      4. 5.5.4 Bottom-Up Parsing of L-Attributed SDD’s
    6. 5.6 Summary of Chapter 5
    7. 5.7 References for Chapter 5
  12. Chapter 6: Intermediate-Code Generation
    1. 6.1 Variants of Syntax Trees
      1. 6.1.1 Directed Acyclic Graphs for Expressions
      2. 6.1.2 The Value-Number Method for Constructing DAG’s
    2. 6.2 Three-Address Code
      1. 6.2.1 Addresses and Instructions
      2. 6.2.2 Quadruples
      3. 6.2.3 Triples
      4. 6.2.4 Static Single-Assignment Form
    3. 6.3 Types and Declarations
      1. 6.3.1 Type Expressions
      2. 6.3.2 Type Equivalence
      3. 6.3.3 Declarations
      4. 6.3.4 Storage Layout for Local Names
      5. 6.3.5 Sequences of Declarations
      6. 6.3.6 Fields in Records and Classes
    4. 6.4 Translation of Expressions
      1. 6.4.1 Operations Within Expressions
      2. 6.4.2 Incremental Translation
      3. 6.4.3 Addressing Array Elements
      4. 6.4.4 Translation of Array References
    5. 6.5 Type Checking
      1. 6.5.1 Rules for Type Checking
      2. 6.5.2 Type Conversions
      3. 6.5.3 Overloading of Functions and Operators
      4. 6.5.4 Type Inference and Polymorphic Functions
      5. 6.5.5 An Algorithm for Unification
    6. 6.6 Control Flow
      1. 6.6.1 Boolean Expressions
      2. 6.6.2 Short-Circuit Code
      3. 6.6.3 Flow-of-Control Statements
      4. 6.6.4 Control-Flow Translation of Boolean Expressions
      5. 6.6.5 Avoiding Redundant Gotos
      6. 6.6.6 Boolean Values and Jumping Code
    7. 6.7 Backpatching
      1. 6.7.1 One-Pass Code Generation Using Backpatching
      2. 6.7.2 Backpatching for Boolean Expressions
      3. 6.7.3 Flow-of-Control Statements
      4. 6.7.4 Break-, Continue-, and Goto-Statements
    8. 6.8 Switch-Statements
      1. 6.8.1 Translation of Switch-Statements
      2. 6.8.2 Syntax-Directed Translation of Switch-Statements
    9. 6.9 Intermediate Code for Procedures
    10. 6.10 Summary of Chapter 6
    11. 6.11 References for Chapter 6
  13. Chapter 7: Run-Time Environments
    1. 7.1 Storage Organization
      1. 7.1.1 Static Versus Dynamic Storage Allocation
    2. 7.2 Stack Allocation of Space
      1. 7.2.1 Activation Trees
      2. 7.2.2 Activation Records
      3. 7.2.3 Calling Sequences
      4. 7.2.4 Variable-Length Data on the Stack
    3. 7.3 Access to Nonlocal Data on the Stack
      1. 7.3.1 Data Access Without Nested Procedures
      2. 7.3.2 Issues With Nested Procedures
      3. 7.3.3 A Language With Nested Procedure Declarations
      4. 7.3.4 Nesting Depth
      5. 7.3.5 Access Links
      6. 7.3.6 Manipulating Access Links
      7. 7.3.7 Access Links for Procedure Parameters
      8. 7.3.8 Displays
    4. 7.4 Heap Management
      1. 7.4.1 The Memory Manager
      2. 7.4.2 The Memory Hierarchy of a Computer
      3. 7.4.3 Locality in Programs
      4. 7.4.4 Reducing Fragmentation
      5. 7.4.5 Manual Deallocation Requests
    5. 7.5 Introduction to Garbage Collection
      1. 7.5.1 Design Goals for Garbage Collectors
      2. 7.5.2 Reachability
      3. 7.5.3 Reference Counting Garbage Collectors
    6. 7.6 Introduction to Trace-Based Collection
      1. 7.6.1 A Basic Mark-and-Sweep Collector
      2. 7.6.2 Basic Abstraction
      3. 7.6.3 Optimizing Mark-and-Sweep
      4. 7.6.4 Mark-and-Compact Garbage Collectors
      5. 7.6.5 Copying Collectors
      6. 7.6.6 Comparing Costs
    7. 7.7 Short-Pause Garbage Collection
      1. 7.7.1 Incremental Garbage Collection
      2. 7.7.2 Incremental Reachability Analysis
      3. 7.7.3 Partial-Collection Basics
      4. 7.7.4 Generational Garbage Collection
      5. 7.7.5 The Train Algorithm
    8. 7.8 Advanced Topics in Garbage Collection
      1. 7.8.1 Parallel and Concurrent Garbage Collection
      2. 7.8.2 Partial Object Relocation
      3. 7.8.3 Conservative Collection for Unsafe Languages
      4. 7.8.4 Weak References
    9. 7.9 Summary of Chapter 7
    10. 7.10 References for Chapter 7
  14. Chapter 8: Code Generation
    1. 8.1 Issues in the Design of a Code Generator
      1. 8.1.1 Input to the Code Generator
      2. 8.1.2 The Target Program
      3. 8.1.3 Instruction Selection
      4. 8.1.4 Register Allocation
      5. 8.1.5 Evaluation Order
    2. 8.2 The Target Language
      1. 8.2.1 A Simple Target Machine Model
      2. 8.2.2 Program and Instruction Costs
    3. 8.3 Addresses in the Target Code
      1. 8.3.1 Static Allocation
      2. 8.3.2 Stack Allocation
      3. 8.3.3 Run-Time Addresses for Names
    4. 8.4 Basic Blocks and Flow Graphs
      1. 8.4.1 Basic Blocks
      2. 8.4.2 Next-Use Information
      3. 8.4.3 Flow Graphs
      4. 8.4.4 Representation of Flow Graphs
      5. 8.4.5 Loops
    5. 8.5 Optimization of Basic Blocks
      1. 8.5.1 The DAG Representation of Basic Blocks
      2. 8.5.2 Finding Local Common Subexpressions
      3. 8.5.3 Dead Code Elimination
      4. 8.5.4 The Use of Algebraic Identities
      5. 8.5.5 Representation of Array References
      6. 8.5.6 Pointer Assignments and Procedure Calls
      7. 8.5.7 Reassembling Basic Blocks From DAG’s
    6. 8.6 A Simple Code Generator
      1. 8.6.1 Register and Address Descriptors
      2. 8.6.2 The Code-Generation Algorithm
      3. 8.6.3 Design of the Function getReg
    7. 8.7 Peephole Optimization
      1. 8.7.1 Eliminating Redundant Loads and Stores
      2. 8.7.2 Eliminating Unreachable Code
      3. 8.7.3 Flow-of-Control Optimizations
      4. 8.7.4 Algebraic Simplification and Reduction in Strength
      5. 8.7.5 Use of Machine Idioms
    8. 8.8 Register Allocation and Assignment
      1. 8.8.1 Global Register Allocation
      2. 8.8.2 Usage Counts
      3. 8.8.3 Register Assignment for Outer Loops
      4. 8.8.4 Register Allocation by Graph Coloring
    9. 8.9 Instruction Selection by Tree Rewriting
      1. 8.9.1 Tree-Translation Schemes
      2. 8.9.2 Code Generation by Tiling an Input Tree
      3. 8.9.3 Pattern Matching by Parsing
      4. 8.9.4 Routines for Semantic Checking
      5. 8.9.5 General Tree Matching
    10. 8.10 Optimal Code Generation for Expressions
      1. 8.10.1 Ershov Numbers
      2. 8.10.2 Generating Code From Labeled Expression Trees
      3. 8.10.3 Evaluating Expressions with an Insufficient Supply of Registers
    11. 8.11 Dynamic Programming Code-Generation
      1. 8.11.1 Contiguous Evaluation
      2. 8.11.2 The Dynamic Programming Algorithm
    12. 8.12 Summary of Chapter 8
    13. 8.13 References for Chapter 8
  15. Chapter 9: Machine-Independent Optimizations
    1. 9.1 The Principal Sources of Optimization
      1. 9.1.1 Causes of Redundancy
      2. 9.1.2 A Running Example: Quicksort
      3. 9.1.3 Semantics-Preserving Transformations
      4. 9.1.4 Global Common Subexpressions
      5. 9.1.5 Copy Propagation
      6. 9.1.6 Dead-Code Elimination
      7. 9.1.7 Code Motion
      8. 9.1.8 Induction Variables and Reduction in Strength
    2. 9.2 Introduction to Data-Flow Analysis
      1. 9.2.1 The Data-Flow Abstraction
      2. 9.2.2 The Data-Flow Analysis Schema
      3. 9.2.3 Data-Flow Schemas on Basic Blocks
      4. 9.2.4 Reaching Definitions
      5. 9.2.5 Live-Variable Analysis
      6. 9.2.6 Available Expressions
      7. 9.2.7 Summary
    3. 9.3 Foundations of Data-Flow Analysis
      1. 9.3.1 Semilattices
      2. 9.3.2 Transfer Functions
      3. 9.3.3 The Iterative Algorithm for General Frameworks
      4. 9.3.4 Meaning of a Data-Flow Solution
    4. 9.4 Constant Propagation
      1. 9.4.1 Data-Flow Values for the Constant-Propagation Framework
      2. 9.4.2 The Meet for the Constant-Propagation Framework
      3. 9.4.3 Transfer Functions for the Constant-Propagation Framework
      4. 9.4.4 Monotonicity of the Constant-Propagation Framework
      5. 9.4.5 Nondistributivity of the Constant-Propagation Framework
      6. 9.4.6 Interpretation of the Results
    5. 9.5 Partial-Redundancy Elimination
      1. 9.5.1 The Sources of Redundancy
      2. 9.5.2 Can All Redundancy Be Eliminated?
      3. 9.5.3 The Lazy-Code-Motion Problem
      4. 9.5.4 Anticipation of Expressions
      5. 9.5.5 The Lazy-Code-Motion Algorithm
    6. 9.6 Loops in Flow Graphs
      1. 9.6.1 Dominators
      2. 9.6.2 Depth-First Ordering
      3. 9.6.3 Edges in a Depth-First Spanning Tree
      4. 9.6.4 Back Edges and Reducibility
      5. 9.6.5 Depth of a Flow Graph
      6. 9.6.6 Natural Loops
      7. 9.6.7 Speed of Convergence of Iterative Data-Flow Algorithms
    7. 9.7 Region-Based Analysis
      1. 9.7.1 Regions
      2. 9.7.2 Region Hierarchies for Reducible Flow Graphs
      3. 9.7.3 Overview of a Region-Based Analysis
      4. 9.7.4 Necessary Assumptions About Transfer Functions
      5. 9.7.5 An Algorithm for Region-Based Analysis
      6. 9.7.6 Handling Nonreducible Flow Graphs
    8. 9.8 Symbolic Analysis
      1. 9.8.1 Affine Expressions of Reference Variables
      2. 9.8.2 Data-Flow Problem Formulation
      3. 9.8.3 Region-Based Symbolic Analysis
    9. 9.9 Summary of Chapter 9
    10. 9.10 References for Chapter 9
  16. Chapter 10: Instruction-Level Parallelism
    1. 10.1 Processor Architectures
      1. 10.1.1 Instruction Pipelines and Branch Delays
      2. 10.1.2 Pipelined Execution
      3. 10.1.3 Multiple Instruction Issue
    2. 10.2 Code-Scheduling Constraints
      1. 10.2.1 Data Dependence
      2. 10.2.2 Finding Dependences Among Memory Accesses
      3. 10.2.3 Tradeoff Between Register Usage and Parallelism
      4. 10.2.4 Phase Ordering Between Register Allocation and Code Scheduling
      5. 10.2.5 Control Dependence
      6. 10.2.6 Speculative Execution Support
      7. 10.2.7 A Basic Machine Model
    3. 10.3 Basic-Block Scheduling
      1. 10.3.1 Data-Dependence Graphs
      2. 10.3.2 List Scheduling of Basic Blocks
      3. 10.3.3 Prioritized Topological Orders
    4. 10.4 Global Code Scheduling
      1. 10.4.1 Primitive Code Motion
      2. 10.4.2 Upward Code Motion
      3. 10.4.3 Downward Code Motion
      4. 10.4.4 Updating Data Dependences
      5. 10.4.5 Global Scheduling Algorithms
      6. 10.4.6 Advanced Code Motion Techniques
      7. 10.4.7 Interaction with Dynamic Schedulers
    5. 10.5 Software Pipelining
      1. 10.5.1 Introduction
      2. 10.5.2 Software Pipelining of Loops
      3. 10.5.3 Register Allocation and Code Generation
      4. 10.5.4 Do-Across Loops
      5. 10.5.5 Goals and Constraints of Software Pipelining
      6. 10.5.6 A Software-Pipelining Algorithm
      7. 10.5.7 Scheduling Acyclic Data-Dependence Graphs
      8. 10.5.8 Scheduling Cyclic Dependence Graphs
      9. 10.5.9 Improvements to the Pipelining Algorithms
      10. 10.5.10 Modular Variable Expansion
      11. 10.5.11 Conditional Statements
      12. 10.5.12 Hardware Support for Software Pipelining
    6. 10.6 Summary of Chapter 10
    7. 10.7 References for Chapter 10
  17. Chapter 11: Optimizing for Parallelism and Locality
    1. 11.1 Basic Concepts
      1. 11.1.1 Multiprocessors
      2. 11.1.2 Parallelism in Applications
      3. 11.1.3 Loop-Level Parallelism
      4. 11.1.4 Data Locality
      5. 11.1.5 Introduction to Affine Transform Theory
    2. 11.2 Matrix Multiply: An In-Depth Example
      1. 11.2.1 The Matrix-Multiplication Algorithm
      2. 11.2.2 Optimizations
      3. 11.2.3 Cache Interference
    3. 11.3 Iteration Spaces
      1. 11.3.1 Constructing Iteration Spaces from Loop Nests
      2. 11.3.2 Execution Order for Loop Nests
      3. 11.3.3 Matrix Formulation of Inequalities
      4. 11.3.4 Incorporating Symbolic Constants
      5. 11.3.5 Controlling the Order of Execution
      6. 11.3.6 Changing Axes
    4. 11.4 Affine Array Indexes
      1. 11.4.1 Affine Accesses
      2. 11.4.2 Affine and Nonaffine Accesses in Practice
    5. 11.5 Data Reuse
      1. 11.5.1 Types of Reuse
      2. 11.5.2 Self Reuse
      3. 11.5.3 Self-Spatial Reuse
      4. 11.5.4 Group Reuse
    6. 11.6 Array Data-Dependence Analysis
      1. 11.6.1 Definition of Data Dependence of Array Accesses
      2. 11.6.2 Integer Linear Programming
      3. 11.6.3 The GCD Test
      4. 11.6.4 Heuristics for Solving Integer Linear Programs
      5. 11.6.5 Solving General Integer Linear Programs
      6. 11.6.6 Summary
    7. 11.7 Finding Synchronization-Free Parallelism
      1. 11.7.1 An Introductory Example
      2. 11.7.2 Affine Space Partitions
      3. 11.7.3 Space-Partition Constraints
      4. 11.7.4 Solving Space-Partition Constraints
      5. 11.7.5 A Simple Code-Generation Algorithm
      6. 11.7.6 Eliminating Empty Iterations
      7. 11.7.7 Eliminating Tests from Innermost Loops
      8. 11.7.8 Source-Code Transforms
    8. 11.8 Synchronization Between Parallel Loops
      1. 11.8.1 A Constant Number of Synchronizations
      2. 11.8.2 Program-Dependence Graphs
      3. 11.8.3 Hierarchical Time
      4. 11.8.4 The Parallelization Algorithm
    9. 11.9 Pipelining
      1. 11.9.1 What Is Pipelining?
      2. 11.9.2 Successive Over-Relaxation (SOR): An Example
      3. 11.9.3 Fully Permutable Loops
      4. 11.9.4 Pipelining Fully Permutable Loops
      5. 11.9.5 General Theory
      6. 11.9.6 Time-Partition Constraints
      7. 11.9.7 Solving Time-Partition Constraints by Farkas’ Lemma
      8. 11.9.8 Code Transformations
      9. 11.9.9 Parallelism With Minimum Synchronization
    10. 11.10 Locality Optimizations
      1. 11.10.1 Temporal Locality of Computed Data
      2. 11.10.2 Array Contraction
      3. 11.10.3 Partition Interleaving
      4. 11.10.4 Putting It All Together
    11. 11.11 Other Uses of Affine Transforms
      1. 11.11.1 Distributed Memory Machines
      2. 11.11.2 Multi-Instruction-Issue Processors
      3. 11.11.3 Vector and SIMD Instructions
      4. 11.11.4 Prefetching
    12. 11.12 Summary of Chapter 11
    13. 11.13 References for Chapter 11
  18. Chapter 12: Programming Language Semantics
    1. 12.1 Introduction
    2. 12.2 The Need for Specifying Programming Language Semantics
      1. 12.2.1 A Compiler Transforms the Program Across Multiple Abstraction Levels
      2. 12.2.2 The Need for Semantics at Each Abstraction Level
    3. 12.3 Formally Specifying Evaluation Rules through Inference Rules
    4. 12.4 Specifying Programming Language Semantics
      1. 12.4.1 Big-Step Operational Semantics of a C/C++-like Programming Language
    5. 12.5 Evaluation Judgment Rules for Simple Operations
      1. 12.5.1 Evaluation Judgment Rules for Constants
      2. 12.5.2 Evaluation Judgment Rules for Variables
      3. 12.5.3 Evaluation Judgment Rule for the Assignment Operator
      4. 12.5.4 Evaluation Judgment Rule for Binary Operations
      5. 12.5.5 Evaluation Judgment Rule for the Sequencing Operator
      6. 12.5.6 Evaluation Judgment Rule for the Block Operator
    6. 12.6 Evaluation Judgment Rules for Control Flow
      1. 12.6.1 Evaluation Judgment Rules for If-Then-Else
      2. 12.6.2 Evaluation Judgment Rules for the Ternary Operator
      3. 12.6.3 Evaluation Judgment Rules for the While Loop
    7. 12.7 Evaluation Judgment Rules for Variable Declarations
    8. 12.8 Summary of Chapter 12
    9. 12.9 References for Chapter 12
  19. Chapter 13: Undefined Behaviour Semantics
    1. 13.1 Introduction
      1. 13.1.1 UB Associated with Array Indexing
      2. 13.1.2 UB Associated with Integer Overflow
    2. 13.2 Implications of UB Semantics for Compiler Optimization
      1. 13.2.1 Example for Signed Integer Overflow
      2. 13.2.2 Example for Type-based Strict Aliasing
      3. 13.2.3 Example for Based-On Semantics in C
    3. 13.3 Undefined Behaviour in IR
    4. 13.4 Considerations during the Design of IR Semantics
      1. 13.4.1 Summary
    5. 13.5 Poison Values: A Weaker Semantics for Errors
    6. 13.6 Semantics for the Poison Value
      1. 13.6.1 Summary
    7. 13.7 LLVM Semantics for the Poison Value
      1. 13.7.1 Division by Poison
      2. 13.7.2 Branch on Poison
      3. 13.7.3 Storing Poison to Memory
      4. 13.7.4 Loading-from and Storing-to Poison Pointer
      5. 13.7.5 Hoisting Transformations in LLVM
      6. 13.7.6 Compiler Analyses May Guarantee Poison-Free Values
      7. 13.7.7 Immediate UB in Source vs. Deferred UB in IR
      8. 13.7.8 Freeze Opcode
    8. 13.8 Summary of Chapter 13
    9. 13.9 References for Chapter 13
  20. Index
  21. Chapter 14: Interprocedural Analysis
  22. Appendix A: A Complete Front End
  23. Appendix B: Finding Linearly Independent Solutions
  24. Copyright

Product information

  • Title: Compilers: Principles, Techniques, and Tools, Updated 2nd Edition by Pearson
  • Author(s): Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman, Sorav Bansal
  • Release date: December 2023
  • Publisher(s): Pearson India
  • ISBN: 9789357054881