Soft Computing unit 4
Error-Correction Learning
Error-Correction Learning, used with supervised
learning, is the technique of comparing the system output to the desired output
value and using that error to direct the training. In the most direct approach,
the error values are used to adjust the tap weights directly, using an
algorithm such as backpropagation.
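As a minimal sketch of this idea, the delta rule below adjusts the tap weights of a single linear neuron in proportion to the error between desired and actual output; the input patterns, targets, and learning rate are illustrative assumptions, not values from any particular dataset.

import numpy as np

# Error-correction learning with the delta rule for one linear neuron.
# Inputs, desired outputs, and the learning rate are illustrative.
x = np.array([[0.5, 1.0], [1.0, 0.5], [0.2, 0.8]])  # input patterns
d = np.array([1.0, 0.5, 0.6])                       # desired outputs
w = np.zeros(2)                                     # tap weights
eta = 0.1                                           # learning rate

for epoch in range(100):
    for xi, di in zip(x, d):
        y = w @ xi         # system (actual) output
        e = di - y         # error = desired - actual
        w += eta * e * xi  # adjust weights in the direction that reduces the error

print(w)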
Gradient Descent
Gradient Descent is one of the most
commonly used iterative optimization algorithms in machine learning; it is used
to train machine learning and deep learning models. It helps in finding a local
minimum of a function.
If we move in the direction of the negative gradient, i.e., away from
the gradient of the function at the current point, we will reach a local
minimum of that function.
If we move in the direction of the positive gradient, i.e., towards
the gradient of the function at the current point, we will reach a local
maximum of that function.
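As a minimal sketch, the loop below applies this rule to f(x) = (x - 3)^2, whose gradient is 2(x - 3); the starting point and step size are arbitrary choices for illustration.

# Gradient descent on f(x) = (x - 3)^2; the minimum is at x = 3.
def grad(x):
    return 2 * (x - 3)   # derivative of f at the current point

x = 0.0                  # arbitrary starting point
lr = 0.1                 # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)    # move against the gradient, towards the minimum
print(x)                 # approaches 3.0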
What is a Cost Function?
The cost function is defined as the measurement of the
difference, or error, between the actual values and the expected values at the
current position, expressed as a single real number.
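For instance, a mean-squared-error cost function collapses all the individual errors into one real number; this small sketch assumes the actual and predicted values are NumPy arrays.

import numpy as np

def cost(actual, predicted):
    # One real number summarizing the error at the current position.
    return np.mean((actual - predicted) ** 2)

print(cost(np.array([41.0, 45.0, 49.0]), np.array([43.6, 44.4, 45.2])))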
Learning Rate
The learning rate is defined as the size of the step taken to reach the
minimum, or lowest, point. It is typically a small value that is evaluated and
updated based on the behavior of the cost function. A high learning rate
produces larger steps but risks overshooting the minimum; a low learning rate
produces small steps, which compromises overall efficiency but gives the
advantage of more precision.
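The trade-off can be seen by running the same gradient-descent loop with different step sizes; the function f(x) = x^2 and the three rates below are illustrative.

# Effect of the learning rate on gradient descent for f(x) = x^2 (minimum at 0).
def descend(lr, steps=20):
    x = 5.0                  # arbitrary starting point
    for _ in range(steps):
        x -= lr * 2 * x      # gradient of x^2 is 2x
    return x

print(descend(0.01))  # low rate: precise but slow, still far from 0
print(descend(0.1))   # moderate rate: close to the minimum
print(descend(1.1))   # high rate: overshoots the minimum and diverges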
Mean Squared Error (MSE)
The mean squared error (MSE) tells you how close a
regression line is to a set of points.
It does this by taking the distances from the points
to the regression line (these distances are the “errors”) and squaring them.
The squaring is necessary to remove any negative
signs.
It also gives more weight to larger differences.
It is called the mean squared error because you are finding the
average of a set of squared errors.
Mean Squared Error Example
MSE = (1/n) * Σ(actual – forecast)²
Where:
n = number of items,
Σ = summation notation,
Actual = original or observed y-value,
Forecast = y-value from regression.
Step 1: Find the regression line.
Step 2: Insert your X values into the linear regression equation to find the new Y values (Y’).
Step 3: Subtract the new Y value from the original to get the error.
Step 4: Square the errors.
Step 5: Add up the squared errors (the Σ in the formula is summation notation).
Step 6: Find the mean.
Ex.
Find the MSE for the following set of values: (43,41),
(44,45), (45,49), (46,47), (47,44).
Step 1: Find the regression line. Using a linear
regression calculator, the regression line works out to y = 9.2 + 0.8x.
Step 2: Insert the X values into the regression equation to find the new Y values (Y’):
9.2 + 0.8(43) = 43.6
9.2 + 0.8(44) = 44.4
9.2 + 0.8(45) = 45.2
9.2 + 0.8(46) = 46
9.2 + 0.8(47) = 46.8
Step 3: Find the errors (Y – Y’):
41 – 43.6 = -2.6
45 – 44.4 = 0.6
49 – 45.2 = 3.8
47 – 46 = 1
44 – 46.8 = -2.8
Step 4: Square the errors:
(-2.6)² = 6.76
0.6² = 0.36
3.8² = 14.44
1² = 1
(-2.8)² = 7.84
Step 5: Add up the squared errors: 6.76 + 0.36 + 14.44 + 1 + 7.84 = 30.4
Step 6: Find the mean: MSE = 30.4 / 5 = 6.08
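The whole calculation can be checked in a few lines; this sketch simply replays the steps above with NumPy.

import numpy as np

x = np.array([43, 44, 45, 46, 47])
y = np.array([41, 45, 49, 47, 44])

y_pred = 9.2 + 0.8 * x      # Step 2: Y' from the regression line
errors = y - y_pred         # Step 3: Y - Y'
mse = np.mean(errors ** 2)  # Steps 4-6: square, add up, take the mean
print(mse)                  # 6.08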
Backpropagation
Backpropagation is the essence of neural network
training.
It is the method of fine-tuning the weights of a
neural network based on the error rate obtained in the previous epoch (i.e.,
iteration).
Proper tuning of the weights reduces the error
rate and makes the model more reliable by improving its generalization.
In neural networks, backpropagation is short for
“backward propagation of errors.”
It is a standard method of training artificial neural
networks.
This method helps calculate the gradient of a loss
function with respect to all the weights in the network.
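As a minimal sketch of these ideas, the loop below trains a network with one hidden layer using a sigmoid activation and a squared-error loss; the network size, training data, and learning rate are illustrative assumptions rather than a prescribed setup.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((4, 2))        # 4 training patterns with 2 inputs each (illustrative)
T = rng.random((4, 1))        # desired outputs (illustrative)
W1 = rng.normal(size=(2, 3))  # input-to-hidden weights
W2 = rng.normal(size=(3, 1))  # hidden-to-output weights
eta = 0.5                     # learning rate

for epoch in range(1000):
    # Forward pass: compute the network output for all patterns.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    E = Y - T                       # error from the current epoch
    # Backward pass: propagate the error to get the gradient of the
    # squared-error loss with respect to every weight in the network.
    dY = E * Y * (1 - Y)            # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)  # delta propagated back to the hidden layer
    W2 -= eta * H.T @ dY            # update weights along the negative gradient
    W1 -= eta * X.T @ dH

print(np.mean(E ** 2))  # mean squared error after the final epoch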