Gradient descent uses the partial derivative of the loss (or error) function with respect to each weight in order to propagate updates back to the neuron weights. In this example, the activation function is the sigmoid, so finding the gradient for the output neuron requires the derivative of the sigmoid function, applied via the chain rule. The following graph shows how the gradient descent method walks down the derivative in order to find the minimum:
![](/api/v2/epubs/9781788830409/files/assets/f3899ca3-835e-4d3e-8e7f-fd1c5a9044fb.png)
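This walk down the gradient can be sketched for a single sigmoid neuron. The inputs, initial weights, target, and learning rate below are illustrative values, not from the original; the update rule is the chain rule applied to a squared-error loss:

```python
import numpy as np

def sigmoid(x):
    # Activation function: squashes any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # Derivative of the sigmoid: sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# Hypothetical example values, chosen purely for illustration
x = np.array([0.5, -0.2, 0.1])   # inputs
w = np.array([0.4, 0.7, -0.3])   # initial weights
target = 1.0                     # desired output
lr = 0.5                         # learning rate (step size)

for _ in range(100):
    z = np.dot(w, x)                           # weighted sum
    y = sigmoid(z)                             # neuron output
    error = y - target                         # d(0.5 * (y - target)^2) / dy
    grad = error * sigmoid_derivative(z) * x   # chain rule: dLoss/dw
    w -= lr * grad                             # step down the gradient
```

Each iteration moves the weights a small step in the direction of steepest descent, so the output drifts toward the target, just as the graph shows the method descending toward the minimum.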