Chapter 3. Linear and Logistic Regression with TensorFlow
This chapter will show you how to build simple, but nontrivial, examples of learning systems in TensorFlow. The first part of this chapter reviews the mathematical foundations of learning systems, in particular covering functions, continuity, and differentiability. We introduce the idea of loss functions, then discuss how machine learning boils down to finding the minimal points of complicated loss functions. We then cover the notion of gradient descent and explain how it can be used to minimize loss functions. We end the first section by briefly discussing the algorithmic idea of automatic differentiation. The second section focuses on the TensorFlow concepts underpinned by these mathematical ideas. These concepts include placeholders, scopes, optimizers, and TensorBoard, and they enable the practical construction and analysis of learning systems. The final section provides case studies of how to train linear and logistic regression models in TensorFlow.
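To make the idea of gradient descent concrete before the TensorFlow machinery arrives, here is a minimal NumPy sketch of the technique: repeatedly step the parameters in the direction opposite the gradient of a mean-squared-error loss. The synthetic data, learning rate, and iteration count are illustrative assumptions, not from the book.

```python
import numpy as np

# Synthetic data for a one-variable linear model: y = 2x + 1 plus noise.
# (The true parameters 2 and 1 are an illustrative choice.)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0  # parameters to learn
lr = 0.1         # learning rate (step size)

for _ in range(500):
    y_pred = w * x + b
    # Mean-squared-error loss: L = mean((y_pred - y)^2).
    # Its gradients with respect to w and b, computed by hand:
    grad_w = 2.0 * np.mean((y_pred - y) * x)  # dL/dw
    grad_b = 2.0 * np.mean(y_pred - y)        # dL/db
    # Step downhill against each gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # both should approach the true values 2 and 1
```

Here the gradients are derived by hand; the point of automatic differentiation, discussed later in the chapter, is that TensorFlow computes such gradients for you, so an optimizer can apply the same update rule to far more complicated loss functions.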
This chapter is long and introduces many new ideas. It’s OK if you don’t grasp all the subtleties of these ideas in a first reading. We recommend moving forward and coming back to refer to the concepts here as needed later. We will repeatedly use these fundamentals in the remainder of the book in order to let these ideas sink in gradually.
Mathematical Review
This first section reviews the mathematical tools needed to conceptually understand ...