Tuesday, November 8, 2022

XOR Problem in Neural Networks

 

XOR problem with neural networks

The XOR gate can be expressed as a combination of AND, OR, and NOT gates.

The linear separability of points

Linear separability is the ability to separate the data points of two classes with a straight line (more generally, a hyperplane) so that the classes do not overlap: every point of one class falls on one side of the separating line and every point of the other class falls on the other side. For logical operations such as AND and OR, the outputs are linearly separable in this sense.

[Figure: outputs of a linearly separable gate plotted in the plane, with pink dots and red triangles separated by a straight line.]
Here the pink dots and the red triangles in the plot do not overlap, and a straight line cleanly separates the two classes: the region above the line can be treated as one class and the region below it as the other.

Need for linear separability in neural networks

Linear separability is required in neural networks because the basic operations of a neural network take place in an N-dimensional space, in which the data points must be separable into classes by a hyperplane.




Linear separability of the data is also a prerequisite that makes the input space easy to interpret: each point can be classified as positive or negative according to which side of the separating hyperplane it falls on.

AND and OR are linearly separable use cases, whereas XOR is a logical operation that is not linearly separable: whichever straight line we draw, points of different classes end up on the same side of it, so no single line can separate the two output classes.


In the XOR plot, a red triangle overlaps with the pink dots on either side of any candidate line, so linear separation of the data points is not possible with XOR logic. This is where multiple neurons come in: a Multi-Layer Perceptron with a hidden layer transforms the inputs into a new representation in which the two classes become linearly separable. So now let us understand how to solve the XOR problem with neural networks.
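Before moving to the solution, the failure of a single neuron can be made concrete with a small illustrative Python sketch (not taken from the article) of the classic perceptron learning rule: trained on the AND truth table it converges to a separating line, while on XOR it never reaches zero errors no matter how long it runs.

def train_perceptron(samples, epochs=20):
    """Classic perceptron rule on 2-input binary data; returns (weights, bias, errors in last epoch)."""
    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            update = lr * (target - output)
            if update != 0:              # wrong prediction: correct the weights
                errors += 1
                w[0] += update * x1
                w[1] += update * x2
                b += update
        if errors == 0:                  # a full pass with no mistakes: converged
            break
    return w, b, errors

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))   # converges: zero errors in the final epoch
print(train_perceptron(XOR))   # never converges: errors stay non-zero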



Solution of the XOR problem

The XOR problem can be solved with neural networks by using a Multi-Layer Perceptron, i.e. a neural network architecture with an input layer, a hidden layer, and an output layer.

 


To solve this problem, we add an extra layer to our vanilla perceptron, i.e., we create a Multi-Layer Perceptron (MLP). We call this extra layer the hidden layer. To build it, we first need to understand that the XOR gate can be written as a combination of AND, OR, and NOT gates in the following way:

a XOR b = (a AND NOT b) OR (b AND NOT a)
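This identity can be checked directly with a few lines of Python (a throwaway sketch, not part of the network itself), by building XOR out of AND, OR, and NOT and printing the truth table:

def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    # a XOR b = (a AND NOT b) OR (b AND NOT a)
    return OR(AND(a, NOT(b)), AND(b, NOT(a)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
# Prints: 0 0 0, 0 1 1, 1 0 1, 1 1 0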

 

 

During training, forward propagation passes the inputs through the network and backpropagation updates the weights of each layer until the network reproduces the XOR logic. The neural network architecture used to solve the XOR problem is shown below.

[Figure: MLP architecture for XOR, with two inputs, one hidden layer, and a single output neuron.]

To summarize, XOR is a problem wherein linear separation of the data points is not possible using a single neuron or perceptron. Solving the XOR problem therefore requires multiple neurons arranged in a neural network architecture, with suitable weights and appropriate activation functions.
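The whole construction can be sketched in code. The following is a minimal NumPy example, assuming a 2-2-1 architecture with sigmoid activations and a squared-error loss; the learning rate, epoch count, and random seed are illustrative choices, and a different initialization may need more epochs to converge.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR truth table: inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 2))   # input -> hidden weights
b1 = np.zeros((1, 2))          # hidden biases
W2 = rng.normal(size=(2, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))          # output bias
lr = 0.5                       # learning rate (illustrative)

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: gradients of the squared-error loss
    d_out = (out - y) * out * (1 - out)     # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)      # delta at the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# After training, the outputs should be close to [[0], [1], [1], [0]]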

 

Saturday, November 5, 2022

C Programming

 C Language 

This C language tutorial takes a programming-oriented approach for beginners and professionals and helps you understand the C language easily. Each topic is explained with programs.

The C language was developed by Dennis Ritchie for creating system applications that directly interact with hardware devices, such as drivers and kernels.

C is considered the base for many other programming languages, which is why it is known as the mother language.

Sunday, October 30, 2022

 Soft Computing unit 4

Error-Correction Learning

Error-Correction Learning, used with supervised learning, is the technique of comparing the system output to the desired output value, and using that error to direct the training. In the most direct route, the error values can be used to directly adjust the tap weights, using an algorithm such as the backpropagation algorithm.
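As a concrete sketch of the idea (with made-up data and a made-up learning rate), the snippet below applies the classic delta rule to a single linear unit: the difference between the desired and actual output directly scales each weight adjustment.

import numpy as np

# Toy supervised data: the targets happen to follow y = 2*x1 + 1*x2
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
t = np.array([2.0, 1.0, 3.0, 5.0])

w = np.zeros(2)   # tap weights
lr = 0.1          # learning rate

for _ in range(200):
    for x, target in zip(X, t):
        y = w @ x                 # system output
        error = target - y        # compare with the desired output
        w += lr * error * x       # error-correction (delta rule) update

print(w)   # should approach [2.0, 1.0]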

Gradient Descent

Gradient descent is one of the most commonly used iterative optimization algorithms in machine learning; it is used to train machine learning and deep learning models. It helps in finding a local minimum of a function.

If we move in the direction of the negative gradient of the function at the current point (away from the gradient), we move towards a local minimum of that function.

If we move in the direction of the positive gradient of the function at the current point (towards the gradient), we move towards a local maximum of that function.

What is Cost-function?

The cost function is defined as the measurement of the difference, or error, between the actual values and the expected values at the current position. It is expressed as a single real number.

Learning Rate

It is defined as the step size taken to reach the minimum (lowest) point. This is typically a small value that is evaluated and updated based on the behavior of the cost function. If the learning rate is high, it results in larger steps but also risks overshooting the minimum. A low learning rate gives small step sizes, which compromises overall efficiency but gives the advantage of more precision.
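Putting the cost function, the gradient, and the learning rate together, here is a minimal sketch that runs gradient descent on the toy cost J(w) = (w - 3)^2, chosen only because its minimum is known to be at w = 3:

def cost(w):
    # Cost function: squared distance from the (known) minimum at w = 3
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the cost: dJ/dw = 2 * (w - 3)
    return 2.0 * (w - 3.0)

w = 0.0     # starting point
lr = 0.1    # learning rate: the step size toward the minimum

for step in range(50):
    w -= lr * grad(w)   # move in the negative gradient direction

print(round(w, 4), round(cost(w), 8))
# w approaches 3.0; with a much larger lr (e.g. 1.1) the updates overshoot and diverge.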

Mean Squared Error (MSE)

The mean squared error (MSE) tells you how close a regression line is to a set of points.

It does this by taking the distances from the points to the regression line (these distances are the “errors”) and squaring them.

The squaring is necessary to remove any negative signs.

It also gives more weight to larger differences.

It’s called the mean squared error because you are finding the average (mean) of a set of squared errors.


Mean Squared Error Example

MSE formula: MSE = (1/n) * Σ(actual – forecast)²

Where:

n = number of items,

Σ = summation notation,

Actual = original or observed y-value,

Forecast = y-value from regression.

1. Find the regression line.

2. Insert your X values into the linear regression equation to find the new Y values (Y’).

3. Subtract the new Y value from the original to get the error.

4. Square the errors.

5. Add up the errors (the Σ in the formula is summation notation).

6. Find the mean.

Ex.

Find the MSE for the following set of values: (43,41), (44,45), (45,49), (46,47), (47,44).

Step 1: Find the regression line. I used this online calculator and got the regression line y = 9.2 + 0.8x.

 Step 2: Find the new Y’ values:

 9.2 + 0.8(43) = 43.6

9.2 + 0.8(44) = 44.4

9.2 + 0.8(45) = 45.2

9.2 + 0.8(46) = 46

9.2 + 0.8(47) = 46.8

Step 3: Find the error (Y – Y’):

 41 – 43.6 = -2.6

45 – 44.4 = 0.6

49 – 45.2 = 3.8

47 – 46 = 1

44 – 46.8 = -2.8

Step 4: Square the Errors:

(-2.6)² = 6.76

0.6² = 0.36

3.8² = 14.44

1² = 1

(-2.8)² = 7.84

Step 5: Find the mean of the squared errors: MSE = (6.76 + 0.36 + 14.44 + 1 + 7.84) / 5 = 30.4 / 5 = 6.08.
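The same arithmetic can be verified with a short Python snippet that plugs the regression line y = 9.2 + 0.8x into the MSE formula:

points = [(43, 41), (44, 45), (45, 49), (46, 47), (47, 44)]

# Squared errors between the observed y and the regression prediction y' = 9.2 + 0.8x
errors_squared = []
for x, y in points:
    y_pred = 9.2 + 0.8 * x
    errors_squared.append((y - y_pred) ** 2)

mse = sum(errors_squared) / len(points)
print(round(mse, 2))   # 6.08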

 

Backpropagation

Backpropagation is the essence of neural network training.

It is the method of fine-tuning the weights of a neural network based on the error rate obtained in the previous epoch (i.e., iteration).

Proper tuning of the weights reduces error rates and makes the model more reliable by improving its generalization.

Backpropagation is short for “backward propagation of errors.”

It is a standard method of training artificial neural networks.

This method helps calculate the gradient of a loss function with respect to all the weights in the network.
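As a minimal illustration of what computing that gradient means, the sketch below takes a single sigmoid neuron with a squared-error loss, applies the chain rule to get the weight gradient, and checks it against a numerical finite-difference estimate (all numbers are arbitrary example values):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training example and one weight (plus bias); values chosen arbitrarily
x, target = 1.5, 1.0
w, b = 0.4, 0.1

def loss(w):
    a = sigmoid(w * x + b)           # forward pass
    return 0.5 * (a - target) ** 2   # squared-error loss

# Backpropagation: chain rule dL/dw = dL/da * da/dz * dz/dw
a = sigmoid(w * x + b)
dL_da = a - target
da_dz = a * (1 - a)
dz_dw = x
analytic = dL_da * da_dz * dz_dw

# Numerical check with a small finite difference
eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)

print(analytic, numeric)   # the two values should agree closely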

Soft Computing - Exercise Questions

1)   Explain Pattern recognition and data classification

2)   What is convex sets, convex hulls and linear separability?

3)   What is Space of Boolean functions?

4)   Explain XOR problem

5)   What is a Multilayer network?

6)   Explain types of Learning Algorithms

7)   Explain Error correction and gradient descent rules

8)   What is Perceptron Learning algorithm?

9)   What is MSE (Error)?

10)  Explain Backpropagation Learning Algorithm

11)  What are applications of Feedforward Neural Networks?

12)  Find the MSE for the following set of values: (43,41), (44,45), (45,49), (46,47), (47,44).

 

Wednesday, October 26, 2022

 Practice_study_Exercises_Question


 

Q1. Attempt all (1 M)

1)   “Vi is a structure editor”. True/False? Justify.

2)   Define the term location counter and instruction pointer.

3)   What is the difference between label and sequencing symbol?

4)   “Static binding leads to more efficient execution of program than dynamic binding”. True/False? Justify.

5)   “Runtime efficiency of program is better in compilation than interpretation”. True/False? Justify.

6)   Which code representations of expression are suitable for optimizing compilers?

7)   What is translated origin?

8)   What is SMACO?

 

Q2. Attempt the Following

1)   Explain various types of assembly language statements with their importance and suitable examples.

2)   “Definition of each macro in a source program is stored as it is in MDT”. True/False? Justify by giving a suitable example.

3)   What is code optimization? Explain various code optimization techniques with suitable examples

4)   List various types of errors detected by compiler in various phases of compilation.

5)   Give any 2 differences between the instructions STOP and END.

6)   For the following assembly language program, show the entries in various data structures used by 2-Pass Assembler.

 

 

 

 

 

 

START 300

READ A

READ B

RAMA MOVER DREG, A

MOVER CREG, = ‘15’

MULT DREG, = ‘21’

MOVEM CREG, C

BC ANY, AGAIN

DIV AREG, C

LTORG

MOVER AREG, = ‘66’

ADD AREG, B

DIV AREG, = ‘15’

JMP1 SUB AREG, C

JMP2 DIV AREG, = ‘51’

ORIGIN RAMA + 5

SUB AREG, C

ORIGIN JMP2 + 1

AGAIN, EQU JMP1

PRINT C

STOP

A DS 1

B DS 1

C DS 1

D DC ‘7’

stop

 

7)   What is the use of statements AIF and AGO?

8)   List the properties of Intermediate code and show the Intermediate code variant – I and variant – II for the following assembly language program.

START 200

READ A

READ B

MOVER AREG, = ‘56’

ADD AREG, B

MOVER BREG, A

SUB BREG, A

MOVEM BREG, ZERO

STOP

A DS 1

B DS 1

END

9)   Explain the Lexical analysis phase of a language processor?

10)  Discuss briefly about pass 2 of a compiler in detail?

11)  Explain the data structures used by Two pass assembler?

12)  Explain in detail the expansion processing of nested macro calls?

13)  List the data structures used by a macro processor?

14)  Differentiate between a compiler and interpreter?

15)  Elaborate Declarative statements in Assembly Language?

16)  Discuss the compilation process with a suitable example

17)  Explain how control sections are handled by an Assembler? Explain with an example.

18)  List out the functions of two pass assembler

19)  With a neat block diagram explain the structure of an editor.

20)  Compare and contrast among line editor and screen editor.

21)  How the Assembler gives Program Relocation Information to the Loader?

22)  State and explain the basic functions of a loader?

Monday, October 24, 2022

System Programming - Linker and Loader

 

Loaders and Linkers

Introduction:

In this chapter we will understand the concepts of linking and loading. As discussed earlier, the source program is converted to an object program by the assembler. The loader is a program which takes this object program, prepares it for execution, and loads the executable code into memory for execution.

Definition of Loader:

A loader is a utility program which takes object code as input, prepares it for execution, and loads the executable code into memory. Thus the loader is actually responsible for initiating the execution process.

Functions of Loader:

The loader is responsible for activities such as allocation, linking, relocation, and loading.

1) It allocates space for the program in memory by calculating the size of the program. This activity is called allocation.

2) It resolves the symbolic references (code/data) between the object modules by assigning all the user subroutine and library subroutine addresses. This activity is called linking.

3) There are some address-dependent locations in the program; such address constants must be adjusted according to the allocated space. This activity done by the loader is called relocation.

4) Finally, it places all the machine instructions and data of the corresponding programs and subroutines into memory. Thus the program becomes ready for execution; this activity is called loading.

What is a Linker?

A linker is an important utility program that takes the object files produced by the assembler and compiler, together with other code, and joins them into a single executable file. There are two types of linkers: linkage editors and dynamic linkers.

 

What is a Loader?

In computer science, a loader is a vital component of an operating system that is responsible for loading programs and libraries. The main types of loaders are absolute, relocating, direct-linking, and bootstrap loaders.

Difference between Linker and Loader

1. Linker: An important utility program that takes the object files produced by the assembler and compiler, together with other code, and joins them into a single executable file. Loader: A vital component of the operating system that is responsible for loading programs and libraries.

2. Linker: Takes as input the object code produced by the assembler and compiler. Loader: Takes as input the executable files produced by the linker.

3. Linker: Its foremost purpose is to produce executable files. Loader: Its foremost purpose is to load executable files into memory.

4. Linker: Used to join all the modules. Loader: Used to allocate the address to executable files.

5. Linker: Responsible for managing objects in the program’s space. Loader: Responsible for setting up references that are utilized in the program.

 

 

 

 

 

 

 

 

 

Relocation: As per its need, the OS may move (i.e. relocate) one or more segments of the program from one area of memory to another. When the program is relocated, instructions referring to code or data in the relocated segments must also be changed. Instructions which must be changed when relocation occurs are called “address-sensitive” instructions.

The job of the OS is to adjust the addresses of all such address-sensitive instructions whenever it relocates one or more program segments.

(Note: The relocation function must be carried out every time the OS relocates the program segments; relocation is often performed by a linker or by a relocating loader.)
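As a toy illustration (a Python sketch with made-up instructions, not how a real loader is implemented), relocation amounts to adding the relocation factor, i.e. the difference between the load origin and the translated origin, to every address-sensitive word listed in a relocation table:

# Toy object program: a list of machine "words"; some operands hold addresses.
translated_origin = 300
code = [("MOVER", 305), ("ADD", 308), ("DATA", 7), ("JUMP", 300)]

# Relocation table: indices of words whose operand is an address
address_sensitive = [0, 1, 3]

def relocate(code, address_sensitive, translated_origin, load_origin):
    """Adjust address constants by the relocation factor."""
    factor = load_origin - translated_origin
    relocated = list(code)
    for i in address_sensitive:
        op, addr = relocated[i]
        relocated[i] = (op, addr + factor)   # address-sensitive word adjusted
    return relocated

print(relocate(code, address_sensitive, translated_origin, load_origin=900))
# [('MOVER', 905), ('ADD', 908), ('DATA', 7), ('JUMP', 900)]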

Functions of a Linker

A linker basically performs the following three functions:

1. Linking Object Files: A linker links multiple relocatable object files used by a program and generates a single .exe file that can be loaded and executed by the loader.

2. Resolving External References: While linking those object files, the linker resolves inter-segment and inter-program references to generate a single continuous executable file.

3. Relocate Symbols: A linker relocates symbols from their relative locations in the input object files to new absolute positions in the executable file.
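These three functions can be illustrated with a small, purely hypothetical sketch: two invented object modules export symbols and refer to symbols defined in the other module; the sketch lays them out one after another, relocates their symbols to absolute addresses, and resolves the external references.

# Hypothetical object modules: size, exported symbols at relative offsets,
# and external references as (patch_offset, symbol_name) pairs.
modules = {
    "main.obj": {"size": 40, "symbols": {"MAIN": 0}, "externs": [(12, "SQRT")]},
    "math.obj": {"size": 20, "symbols": {"SQRT": 4}, "externs": [(8, "MAIN")]},
}

def link(modules, load_origin=1000):
    # 1. Linking object files: lay the modules out contiguously.
    base, bases = load_origin, {}
    for name, mod in modules.items():
        bases[name] = base
        base += mod["size"]

    # 3. Relocating symbols: relative offsets become absolute addresses.
    symbol_table = {}
    for name, mod in modules.items():
        for sym, offset in mod["symbols"].items():
            symbol_table[sym] = bases[name] + offset

    # 2. Resolving external references: patch each EXTERN use with the
    #    absolute address of the symbol defined in the other module.
    patches = []
    for name, mod in modules.items():
        for offset, sym in mod["externs"]:
            patches.append((bases[name] + offset, symbol_table[sym]))
    return symbol_table, patches

print(link(modules))
# ({'MAIN': 1000, 'SQRT': 1044}, [(1012, 1044), (1048, 1000)])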


Types of Programs w.r.t Relocation

Based on relocation, programs can be broadly classified as:

1. Non-relocatable Programs – These are static programs whose memory area is fixed at the time of coding and remains static i.e. cannot be changed thereafter.

(For example, the OS)

2. Relocatable Programs – These programs can be relocated to different memory areas as and when memory storage is needed by the OS. With the help of relocation information in the .exe file, the linker (at compile time) or relocating loader (at run time) will perform the functions needed to relocate the program.

3. Self-relocating Programs – Such programs have a small part of code (or subprograms) embedded in them which handles the operations needed to relocate the program. When the OS relocates some (or all) parts of the code, control is transferred to the “relocating subprogram”, which adjusts the addresses of the address-sensitive portions of the code.

EXTERN Table – includes the name, type, and (relative) usage addresses of symbols used by the current program that have been defined externally in some other program (i.e. symbols specified by the EXTERN keyword).

Program Relocation

The squares program in the previous section contains four labels. The addresses of these labels are shown in the symbol table below:

 

These addresses are used to construct the constants contained in the branch and jump instructions. The address constants stored in the instructions have two possible interpretations:

 

The actual address of a memory location, also called an absolute address;

The offset to a memory location relative to a second known location.

A relocatable program is one which can be processed to relocate it to a selected area of memory; an object module is one example. The difference between a relocatable and a non-relocatable program is the availability of information about its address-sensitive instructions. A self-relocating program is one which can perform the relocation of its own address-sensitive instructions.

 

A self-relocating program can execute in any area of memory. This is very significant in a time-sharing operating system, where the load address of a program is likely to be different for various executions.

A Non-Relocatable program is one which cannot be made to execute in any area of storage other than the one designated for it at the time of its coding or translation.


 


