Deep neural networks perform the input-to-target mapping via a deep sequence of simple data transformations (layers). The transformation implemented by a layer is parameterised by its weights, which are also sometimes called the parameters of the layer.
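As a minimal sketch of what such a parameterised transformation looks like (assuming NumPy and a ReLU activation; none of these names come from the post itself), a dense layer is just a function of its weight matrix `W` and bias vector `b`:

```python
import numpy as np

def dense_layer(x, W, b):
    # One simple data transformation: relu(W·x + b).
    # W and b are the layer's weights -- its learnable parameters.
    return np.maximum(0.0, W @ x + b)

x = np.array([1.0, 2.0])
W = np.array([[0.5, -0.1],
              [0.3,  0.8]])
b = np.array([0.1, -0.2])

# Different weight values implement different transformations of the same input.
print(dense_layer(x, W, b))
```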

  • Learning means finding a set of values for the weights of all layers in a network.

The network will correctly map inputs to their associated targets only if its weights are set to reasonable values.

 

  • To control the output of a neural network, we need to be able to measure how far that output is from what we expected. This is the job of the network's loss function, also sometimes called the objective function or cost function.

The loss function takes the predictions of the network and the true targets and computes a distance score, capturing how well the network has done on this specific example.
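For concreteness, here is one common choice of distance score, mean squared error, hand-rolled in NumPy (an illustrative choice, not one named in the post):

```python
import numpy as np

def mse_loss(predictions, targets):
    # Distance score: 0 when predictions match the targets exactly,
    # growing as they drift further apart.
    return np.mean((predictions - targets) ** 2)

targets = np.array([1.0, 0.0])
print(mse_loss(np.array([0.9, 0.1]), targets))  # 0.01 -- network did well
print(mse_loss(np.array([0.1, 0.9]), targets))  # 0.81 -- network did poorly
```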

 

The fundamental trick in deep learning is to use this score as a feedback signal to adjust the values of the weights a little, in a direction that will lower the loss score. This adjustment is the job of the optimiser, which implements what's called the backpropagation algorithm: the central algorithm in deep learning.
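A single such adjustment, sketched on the smallest possible "network" (one weight, one example, squared-error loss; the gradient is written out by hand, standing in for what backpropagation computes):

```python
w = 2.0           # current weight value
x, y = 3.0, 9.0   # input and its target (the true mapping here is y = 3x)

prediction = w * x
loss = (prediction - y) ** 2       # distance score: 9.0
grad = 2 * (prediction - y) * x    # dLoss/dw = -18.0, via the chain rule

learning_rate = 0.01               # how small "a little" is (illustrative value)
w = w - learning_rate * grad       # step against the gradient: w is now 2.18

print((w * x - y) ** 2)            # new loss ≈ 6.05 -- lower than before
```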

With every example the network processes, the weights are adjusted a little in the correct direction, and the loss score decreases. This is the training loop, which, repeated a sufficient number of times, yields weight values that minimise the loss function.
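Putting the pieces together, a minimal sketch of the loop itself: the same adjustment as above, repeated over many examples until the weights of a tiny linear model settle near the true mapping (pure NumPy; frameworks such as Keras wrap exactly this cycle):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + 1.0                 # targets drawn from a known mapping

W, b = 0.0, 0.0                   # the weights to be learned
learning_rate = 0.1

for epoch in range(50):
    for x_i, y_i in zip(X, y):
        error = (W * x_i + b) - y_i            # forward pass and distance
        W -= learning_rate * 2 * error * x_i   # adjust each weight a little
        b -= learning_rate * 2 * error         # in the loss-lowering direction

print(W, b)   # ≈ 3.0 and 1.0 -- the loss score has been driven down
```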
