# Perceptron

## 0. Introduction

*This article explains how to create a Perceptron in C#. You will need Visual Studio C#; I used the 2010 Express version, which is free by the way, but this code should work in any other version too. I assume you know the basics of what a Perceptron is and how it works, and that you are at least somewhat experienced at programming and familiar with OOP principles.*

## 1. Perceptron

The Perceptron is the simplest neural network; it can linearly separate objects into two classes. Our Perceptron has two inputs and one output. Besides that, it has a bias, a threshold, a learning rate and weights. So we start by declaring our Perceptron class and its variables.

```csharp
class Perceptron
{
    private double bias, treshold, learning_rate;
    private double weight0, weight1, weight2;
}
```

Before the Perceptron can start learning, its weights need to be set to random values. We can do this in the constructor of our class and set the rest of the variables to default values there too.

```csharp
public Perceptron()
{
    // Set random weights
    Random r = new Random();
    weight0 = r.NextDouble();
    weight1 = r.NextDouble();
    weight2 = r.NextDouble();

    // Set default values
    bias = -1;
    treshold = 0;
    learning_rate = 0.1;
}
```

So we have set the weights to random values and then set the bias, threshold and learning rate to default values. You can change these defaults according to the separation you are looking for. The **bias** acts as a constant extra input whose weight lets the separating line shift away from the origin. The threshold is the value that the sum of weights and inputs is compared with: if the sum is greater, we send 1 to the output of the perceptron; otherwise, we send 0. In this example, I used 1 to indicate that an object belongs to the **Blue group** and 0 for the **Red group**. The last variable is learning_rate, and it influences how fast the Perceptron changes its weights while learning. The bigger the value, the more the perceptron changes its weights per step and the faster it finds the final values. But if we set the learning rate too big, it may happen that the perceptron never settles on the final values.

Okay, the next step is the Output method. It takes two parameters representing the input values and returns the output for those inputs.

```csharp
// Get the output of the perceptron for inputs x, y
public double Output(double x, double y)
{
    // Calculate the sum of weights and input values
    double sum = weight1 * x + weight2 * y + weight0 * bias;

    // Check whether our sum is larger than the threshold
    if (sum > treshold)
        return 1.0;
    else
        return 0.0;
}
```

As I wrote before, we calculate our sum and then compare it with the threshold. To get the output of a neuron, sum its weights multiplied by the input values:

y = x1*w1 + x2*w2 + ... ; where x represents input values and w weights

Because we are working with a Perceptron, we have only two inputs plus the bias (which we can think of as another input that stays constant while its weight changes), and every input has its own weight. So our equation for the output is:

y = x1*w1 + x2*w2 + bias*w0 ;

The last thing the output function has to do is compare our sum with the threshold: if the sum is greater than the threshold, we return 1, otherwise 0.
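A quick worked example with made-up numbers (these weights are my own illustration, not values from the program):

```
w1 = 0.5, w2 = -0.3, w0 = 0.2, bias = -1, treshold = 0

input (x, y) = (2, 1):
sum = 0.5*2 + (-0.3)*1 + 0.2*(-1) = 1.0 - 0.3 - 0.2 = 0.5
0.5 > 0  =>  output = 1 (Blue group)
```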

Next, we create the method for perceptron learning. Its parameters are the two input values and the desired output for these inputs. The method returns the error, the difference between the desired output and the actual output.

```csharp
// Trains the perceptron towards the desired output and returns the current error
public double Learn(double x, double y, double output_desired)
{
    // Get the actual output
    double output = Output(x, y);

    // Get the error for the actual output
    double error = output_desired - output;

    // If there's an error...
    if (error != 0)
    {
        // ...update the weights
        weight0 += learning_rate * error * bias;
        weight1 += learning_rate * error * x;
        weight2 += learning_rate * error * y;
    }

    return error;
}
```

First, we get the output value and calculate the error for these inputs. If there is an error, we adjust the weights to reduce it, using this formula:

wx = wx + lr * e * x; where wx is weight, lr is learning_rate, e is error and x is input

Because the error isn't calculated as an absolute value, it may be negative; in that case the weight decreases, and if it's positive, the weight increases. The error is also multiplied by the corresponding input, so the larger the input's magnitude (the more influence it has on the output sum), the more its weight changes. At the end we multiply this by the learning rate so we can control how fast the weights change.
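To see the rule in action, here is one update step with made-up numbers (my own illustration, not values from the program):

```
learning_rate = 0.1, inputs (x, y) = (2, 1), bias = -1
desired output = 1, actual output = 0  =>  error = 1 - 0 = 1

w1 = w1 + 0.1 * 1 * 2    = w1 + 0.2
w2 = w2 + 0.1 * 1 * 1    = w2 + 0.1
w0 = w0 + 0.1 * 1 * (-1) = w0 - 0.1
```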

Now, how do we teach the perceptron? You need a loop that feeds the input values and the desired output to the perceptron through the Learn method, which returns the error for every input. Repeat this learning until the sum of the absolute returned errors (the global error) is zero.

```csharp
// Teach the perceptron
double error_global = 0;

// Run the learning for a fixed number of iterations:
for (int i = 0; i < iterations; i++)
// Or you can keep the loop running until the global error reaches zero:
// for (error_global = 1; error_global > 0; )
{
    status_progress_bar.Value = i;
    error_global = 0; // reset the global error for this pass over the data
    for (int j = 0; j < points.Count; j++)
    {
        error_global += Math.Abs(perceptron.Learn(points[j].x, points[j].y, points[j].value));
    }
}
```

Think twice before using the second variant (the commented "for" loop): if the current set of inputs is not solvable (not linearly separable), that loop will never end, so the fixed number of iterations is the safer choice.
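For reference, here is a minimal, self-contained console sketch of the whole training loop using the Perceptron class built above. The Point class and the training data are my own illustration (the article's project uses a progress bar and its own point list, which are omitted here):

```csharp
using System;
using System.Collections.Generic;

// A hypothetical Point class for the training data
class Point
{
    public double x, y, value;
    public Point(double x, double y, double value)
    {
        this.x = x; this.y = y; this.value = value;
    }
}

class Program
{
    static void Main()
    {
        var perceptron = new Perceptron();

        // Points above the line y = x belong to the Blue group (1),
        // points below it to the Red group (0)
        var points = new List<Point>
        {
            new Point(1, 3, 1), new Point(2, 5, 1), new Point(0, 1, 1),
            new Point(3, 1, 0), new Point(5, 2, 0), new Point(4, 0, 0)
        };

        int iterations = 1000;
        for (int i = 0; i < iterations; i++)
        {
            double error_global = 0;
            foreach (Point p in points)
                error_global += Math.Abs(perceptron.Learn(p.x, p.y, p.value));

            // Stop early once every point is classified correctly
            if (error_global == 0)
                break;
        }
    }
}
```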

The last thing I spent time thinking about while programming the perceptron was how to draw the separating line between the red and blue points.

*Program Screenshot*

We know that it is a line, and the equation of a line is

y = ax + b

But how do we get "a" and "b"? I spent a day and a half on this before I found what they are:

a = -w1/w2
b = -(w0*bias)/w2
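These formulas can be derived directly from the output equation. The separating line consists exactly of the points where the weighted sum equals the threshold (0 with our default values), so solving for y gives the slope and the intercept:

```
w1*x + w2*y + w0*bias = 0            (points exactly on the boundary)
                 w2*y = -w1*x - w0*bias
                    y = -(w1/w2)*x - (w0*bias)/w2
                      =      a*x    +      b
```

Hence a = -w1/w2 and b = -(w0*bias)/w2 (this only works when w2 is non-zero, i.e. the line is not vertical).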

I think this is all you need to know about perceptron to successfully create one.